Commit 2811014

iamlemec and abetlen authored
feat: Switch embed to llama_get_embeddings_seq (abetlen#1263)
* switch to llama_get_embeddings_seq
* Remove duplicate definition of llama_get_embeddings_seq

Co-authored-by: Andrei <abetlen@gmail.com>
1 parent 40c6b54 · commit 2811014

1 file changed (+1, −1 lines changed)

llama_cpp/llama.py (1 addition, 1 deletion)
@@ -814,7 +814,7 @@ def decode_batch(n_seq: int):
 
             # store embeddings
             for i in range(n_seq):
-                embedding: List[float] = llama_cpp.llama_get_embeddings_ith(
+                embedding: List[float] = llama_cpp.llama_get_embeddings_seq(
                     self._ctx.ctx, i
                 )[:n_embd]
                 if normalize:

0 commit comments
