Commit 0952d53

Merge pull request abetlen#415 from lexin4ever/patch-1
server: pass seed param from command line to llama
2 parents 3e7eae4 + 282698b · commit 0952d53

1 file changed: 4 additions, 0 deletions

llama_cpp/server/app.py

@@ -30,6 +30,9 @@ class Settings(BaseSettings):
         ge=0,
         description="The number of layers to put on the GPU. The rest will be on the CPU.",
     )
+    seed: int = Field(
+        default=1337, description="Random seed. -1 for random."
+    )
     n_batch: int = Field(
         default=512, ge=1, description="The batch size to use per eval."
     )
@@ -109,6 +112,7 @@ def create_app(settings: Optional[Settings] = None):
     llama = llama_cpp.Llama(
         model_path=settings.model,
         n_gpu_layers=settings.n_gpu_layers,
+        seed=settings.seed,
         f16_kv=settings.f16_kv,
         use_mlock=settings.use_mlock,
         use_mmap=settings.use_mmap,
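
For context, Settings extends pydantic's BaseSettings, which is what lets a value like seed arrive "from the command line" (via environment variables or the server's argument parsing) with no extra plumbing in create_app. Below is a minimal, self-contained sketch of that pattern, not the project's actual entry point: the seed field is copied from the diff above, while the environment-variable override and the commented-out Llama call are illustrative assumptions. It also assumes pydantic v1, where BaseSettings is importable from pydantic itself (in v2 it moved to the separate pydantic-settings package).

# Minimal sketch (assumptions noted above) of the BaseSettings pattern
# this commit extends.
import os

from pydantic import BaseSettings, Field  # pydantic v1 layout

class Settings(BaseSettings):
    # Field copied from the diff above; 1337 is the default seed.
    seed: int = Field(default=1337, description="Random seed. -1 for random.")

if __name__ == "__main__":
    # BaseSettings matches environment variables case-insensitively,
    # so this simulates running the server with `SEED=42` set.
    os.environ["SEED"] = "42"
    settings = Settings()
    print(settings.seed)  # -> 42 (would be 1337 without the override)
    # create_app() then forwards the value to the model, as in the diff:
    #     llama = llama_cpp.Llama(..., seed=settings.seed, ...)

Per the field's description, -1 asks llama.cpp for a random seed at load time; any fixed value should make generation reproducible across server restarts, all else being equal.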
