Commit 27944c4

fixed typo (abetlen#178)
1 parent 2d15d6c commit 27944c4

File tree

1 file changed (+1, -1)
README.md

Lines changed: 1 addition & 1 deletion
@@ -199,7 +199,7 @@ https://user-images.githubusercontent.com/271616/225014776-1d567049-ad71-4ef2-b0
 - We don't know yet how much the quantization affects the quality of the generated text
 - Probably the token sampling can be improved
 - The Accelerate framework is actually currently unused since I found that for tensor shapes typical for the Decoder,
-  there is no benefit compared to the ARM_NEON intrinsics implementation. Of course, it's possible that I simlpy don't
+  there is no benefit compared to the ARM_NEON intrinsics implementation. Of course, it's possible that I simply don't
   know how to utilize it properly. But in any case, you can even disable it with `LLAMA_NO_ACCELERATE=1 make` and the
   performance will be the same, since no BLAS calls are invoked by the current implementation
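The changed README passage mentions that the Accelerate framework can be disabled at build time. A minimal sketch of that build invocation, assuming a checkout of the repository with its stock Makefile:

```shell
# Build with the Accelerate framework disabled, per the README note.
# The README states performance is unchanged either way, since no BLAS
# calls are invoked by the current implementation.
LLAMA_NO_ACCELERATE=1 make
```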

0 commit comments