Commit d410f12

Update docs. Closes abetlen#386

1 parent 9f528f4 commit d410f12
1 file changed

llama_cpp/llama.py: 1 addition & 1 deletion (+1 -1)
@@ -228,7 +228,7 @@ def __init__(
             model_path: Path to the model.
             n_ctx: Maximum context size.
             n_parts: Number of parts to split the model into. If -1, the number of parts is automatically determined.
-            seed: Random seed. 0 for random.
+            seed: Random seed. -1 for random.
             f16_kv: Use half-precision for key/value cache.
             logits_all: Return logits for all tokens, not just the last token.
             vocab_only: Only load the vocabulary no weights.
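For context, a minimal usage sketch of the parameters documented in this docstring, assuming the Llama class exported by this package; the model path below is hypothetical, and per the updated docstring seed=-1 requests a random seed.

from llama_cpp import Llama

# Hypothetical model path, for illustration only.
llm = Llama(
    model_path="./models/ggml-model-q4_0.bin",
    n_ctx=512,          # maximum context size
    n_parts=-1,         # -1: number of parts is determined automatically
    seed=-1,            # -1 for a random seed (the behavior documented in this commit)
    f16_kv=True,        # half-precision key/value cache
    logits_all=False,   # only return logits for the last token
    vocab_only=False,   # load weights, not just the vocabulary
)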
