Commit d6fb16e

docs: Update README

1 parent 5b258bf commit d6fb16e
File tree

1 file changed: +4 -1 lines changed

README.md (4 additions, 1 deletion)
@@ -163,7 +163,7 @@ Below is a short example demonstrating how to use the high-level API for basic
 )
 >>> output = llm(
       "Q: Name the planets in the solar system? A: ", # Prompt
-      max_tokens=32, # Generate up to 32 tokens
+      max_tokens=32, # Generate up to 32 tokens, set to None to generate up to the end of the context window
       stop=["Q:", "\n"], # Stop generating just before the model would generate a new question
       echo=True # Echo the prompt back in the output
 ) # Generate a completion, can also call create_completion
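For context, the new comment's `None` option might be exercised as in the minimal sketch below. The model path is a placeholder, not something from this commit; only the `max_tokens=None` behavior is what the changed line documents.

```python
from llama_cpp import Llama

# Hypothetical model path used purely for illustration.
llm = Llama(model_path="./models/7B/model.gguf")

output = llm(
    "Q: Name the planets in the solar system? A: ",  # Prompt
    max_tokens=None,  # None generates until the end of the context window
    stop=["Q:", "\n"],  # Stop just before the model would start a new question
    echo=True,  # Echo the prompt back in the output
)
print(output["choices"][0]["text"])
```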
@@ -425,6 +425,9 @@ pip install -e .[all]
 make clean
 ```
 
+You can also test out specific commits of `llama.cpp` by checking out the desired commit in the `vendor/llama.cpp` submodule and then running `make clean` and `pip install -e .` again. Any changes in the `llama.h` API will require
+changes to the `llama_cpp/llama_cpp.py` file to match the new API (additional changes may be required elsewhere).
+
 ## FAQ
 
 ### Are there pre-built binaries / binary wheels available?
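For illustration, the submodule workflow the added lines describe might look like the following shell sketch; `<commit-sha>` is a placeholder for whichever llama.cpp commit you want to test, not a value from this commit.

```bash
# Pin the vendored llama.cpp to a specific commit, then rebuild the bindings.
cd vendor/llama.cpp
git fetch origin
git checkout <commit-sha>   # placeholder: the llama.cpp commit under test
cd ../..
make clean                  # clear previous build artifacts
pip install -e .            # rebuild against the pinned commit
```

As the added README text notes, if the pinned commit changes the `llama.h` API, `llama_cpp/llama_cpp.py` must be updated to match before the rebuild will work.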
