Commit a5cfeb7

feat: Update llama.cpp
1 parent 7bb91f0 commit a5cfeb7

2 files changed: +10 -1 lines changed

llama_cpp/llama_cpp.py

9 additions & 0 deletions
@@ -230,6 +230,15 @@ def _load_shared_library(lib_base_name: str):
 LLAMA_ROPE_SCALING_YARN = 2
 LLAMA_ROPE_SCALING_MAX_VALUE = LLAMA_ROPE_SCALING_YARN
 
+# enum llama_pooling_type {
+# LLAMA_POOLING_NONE = 0,
+# LLAMA_POOLING_MEAN = 1,
+# LLAMA_POOLING_CLS = 2,
+# };
+LLAMA_POOLING_NONE = 0
+LLAMA_POOLING_MEAN = 1
+LLAMA_POOLING_CLS = 2
+
 # enum llama_split_mode {
 # LLAMA_SPLIT_NONE = 0, // single GPU
 # LLAMA_SPLIT_LAYER = 1, // split layers and KV across GPUs
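The nine added lines expose llama.cpp's llama_pooling_type enum to Python as plain module-level integers (no pooling / mean pooling / CLS-token pooling). A minimal usage sketch, not part of this commit; the name map and describe_pooling helper are hypothetical, for illustration only:

# Sketch only: the constants are the ones added in this commit;
# the helper and name map below are illustrative, not library API.
import llama_cpp.llama_cpp as llama_cpp

_POOLING_NAMES = {
    llama_cpp.LLAMA_POOLING_NONE: "none",  # per-token embeddings, no pooling
    llama_cpp.LLAMA_POOLING_MEAN: "mean",  # mean-pool token embeddings
    llama_cpp.LLAMA_POOLING_CLS: "cls",    # take the CLS token embedding
}

def describe_pooling(pooling_type: int) -> str:
    """Return a human-readable name for a llama_pooling_type value."""
    return _POOLING_NAMES.get(pooling_type, f"unknown ({pooling_type})")

print(describe_pooling(llama_cpp.LLAMA_POOLING_MEAN))  # -> "mean"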

vendor/llama.cpp


0 commit comments
