Commit f74b90e

Fix streaming hang on last token when cache is on.
1 parent 5be8354

1 file changed: +9 -5 lines
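
The hang came from ordering. Before this commit, the cache save (`self.save_state()`, a snapshot of the llama.cpp context that can take noticeable time) ran right after the generation loop but before the streaming branch yielded its final chunk, so clients waited on the snapshot before receiving the last token. A minimal sketch of the generator ordering at issue; the names here are illustrative stand-ins, with only `save_state` taken from the diff:

```python
import time

def snapshot_state():
    # Stand-in for Llama.save_state(): copying the context state
    # can take noticeable wall-clock time on large models.
    time.sleep(2)

def stream_before_fix(chunks):
    for chunk in chunks[:-1]:
        yield chunk
    snapshot_state()  # old placement: blocks before the last chunk goes out
    yield chunks[-1]  # the final token arrives two seconds late

def stream_after_fix(chunks):
    for chunk in chunks[:-1]:
        yield chunk
    yield chunks[-1]  # the final token is delivered first
    snapshot_state()  # new placement: only end-of-stream waits on the save
```

Moving the snapshot below the final yield means the consumer receives the last chunk on its current next() call; only the StopIteration that ends the stream waits on the save.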

llama_cpp/llama.py

@@ -848,11 +848,6 @@ def _create_completion(
                 finish_reason = "length"
                 break
 
-        if self.cache:
-            if self.verbose:
-                print("Llama._create_completion: cache save", file=sys.stderr)
-            self.cache[prompt_tokens + completion_tokens] = self.save_state()
-
         if self.verbose:
             llama_cpp.llama_print_timings(self.ctx)
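
The block removed above is the old save site: `save_state()` ran unconditionally right after the generation loop, before the streaming branch had yielded its final chunk. The hunk below re-adds the save at two sites: just before the streaming path returns, after the final chunk has gone out, and on the non-streaming path before the completed text is assembled.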

@@ -941,8 +936,17 @@ def _create_completion(
                     }
                 ],
             }
+            if self.cache:
+                if self.verbose:
+                    print("Llama._create_completion: cache save", file=sys.stderr)
+                self.cache[prompt_tokens + completion_tokens] = self.save_state()
             return
 
+        if self.cache:
+            if self.verbose:
+                print("Llama._create_completion: cache save", file=sys.stderr)
+            self.cache[prompt_tokens + completion_tokens] = self.save_state()
+
         text_str = text.decode("utf-8", errors="ignore")
 
         if echo:
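
With the save moved after the final chunk, a streaming consumer receives every token before the snapshot runs. A hedged usage sketch against the llama-cpp-python API of this period, assuming `LlamaCache` and `set_cache` as the mechanism that populates the `self.cache` seen in this diff; the model path and prompt are placeholders:

```python
import sys
from llama_cpp import Llama, LlamaCache

# Placeholder model path; verbose=True prints the
# "Llama._create_completion: cache save" lines seen in this diff.
llm = Llama(model_path="./models/7B/ggml-model.bin", verbose=True)
llm.set_cache(LlamaCache())  # populate self.cache so the fixed path runs

# Every chunk, including the one carrying the last token, is now
# yielded before save_state() snapshots the context.
for chunk in llm("Q: Name the planets in order. A:", max_tokens=48, stream=True):
    sys.stdout.write(chunk["choices"][0]["text"])
    sys.stdout.flush()
```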
