Commit 9f528f4

Merge branch 'main' of github.com:abetlen/llama_cpp_python into main

2 parents: 60426b2 + ff9faaa
File tree

1 file changed: +1 −1 lines changed

llama_cpp/llama.py (+1 −1: 1 addition & 1 deletion)
@@ -814,7 +814,7 @@ def _create_completion(
         llama_cpp.llama_reset_timings(self.ctx)

         if len(prompt_tokens) > self._n_ctx:
-            raise ValueError(f"Requested tokens exceed context window of {self._n_ctx}")
+            raise ValueError(f"Requested tokens ({len(prompt_tokens)}) exceed context window of {self._n_ctx}")

         # Truncate max_tokens if requested tokens would exceed the context window
         max_tokens = (
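
The entire change is to the error message: the raise now interpolates the actual prompt length next to the limit, so the exception reports both numbers instead of only the context size. A minimal standalone sketch of the check, assuming illustrative names (n_ctx and prompt_tokens stand in for the Llama instance's _n_ctx and its tokenized prompt; the values are made up):

    # Standalone sketch; n_ctx and prompt_tokens are hypothetical stand-ins
    # for Llama._n_ctx and the tokenized prompt in the real method.
    n_ctx = 512                        # assumed context window size
    prompt_tokens = list(range(600))   # assumed prompt longer than the window

    if len(prompt_tokens) > n_ctx:
        # After this commit the message includes the offending count, e.g.:
        #   "Requested tokens (600) exceed context window of 512"
        raise ValueError(
            f"Requested tokens ({len(prompt_tokens)}) exceed context window of {n_ctx}"
        )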

0 commit comments