Commit b70b6a8

Add "numa" parameter support
1 parent 1372e4f commit b70b6a8

File tree

1 file changed: +2 -0 lines changed

llama_cpp/llama_cpp.py: 2 additions & 0 deletions

@@ -252,6 +252,7 @@ class llama_token_data_array(Structure):
 # bool use_mmap; // use mmap if possible
 # bool use_mlock; // force system to keep model in RAM
 # bool embedding; // embedding mode only
+# bool numa; // optimizations that help on some systems with non-uniform memory access
 # };
 class llama_context_params(Structure):
     _fields_ = [
@@ -273,6 +274,7 @@ class llama_context_params(Structure):
         ("use_mmap", c_bool),
         ("use_mlock", c_bool),
         ("embedding", c_bool),
+        ("numa", c_bool),
     ]
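With the field in place, the flag is toggled like any other attribute on the ctypes Structure before a context is created. The snippet below is a minimal sketch, not part of the commit: it assumes this version of the bindings exposes llama_context_default_params() and llama_init_from_file(), and the model path is a placeholder.

import llama_cpp

# Start from the library defaults, then enable the new NUMA flag.
params = llama_cpp.llama_context_default_params()
params.numa = True  # the c_bool field added by this commit

# Placeholder model path; llama_init_from_file is assumed to be
# available in this version of the low-level bindings.
ctx = llama_cpp.llama_init_from_file(b"models/7B/ggml-model.bin", params)

Note that ctypes requires _fields_ to mirror the C struct's layout exactly, which is why the new member is appended in the same position it occupies in the llama.h comment block above it; inserting it anywhere else would shift the surrounding fields out of alignment with the C side.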
