Commit d696251 (parent: 6ee413d)

Fix logits_all bug
1 file changed, 2 insertions(+), 2 deletions(-)

llama_cpp/llama.py
```diff
@@ -439,7 +439,7 @@ def eval_tokens(self) -> Deque[int]:
     def eval_logits(self) -> Deque[List[float]]:
         return deque(
             self.scores[: self.n_tokens, :].tolist(),
-            maxlen=self._n_ctx if self.model_params.logits_all else 1,
+            maxlen=self._n_ctx if self.context_params.logits_all else 1,
         )
 
     def tokenize(self, text: bytes, add_bos: bool = True) -> List[int]:
@@ -964,7 +964,7 @@ def _create_completion(
         else:
             stop_sequences = []
 
-        if logprobs is not None and self.model_params.logits_all is False:
+        if logprobs is not None and self.context_params.logits_all is False:
             raise ValueError(
                 "logprobs is not supported for models created with logits_all=False"
             )
```
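For context on the one-line change: llama.cpp splits its configuration between model parameters and context parameters, and `logits_all` belongs to the context side, so the wrapper has to read it from `self.context_params` rather than `self.model_params`. A minimal usage sketch of the code path this fixes, assuming a recent llama-cpp-python; the model path is a placeholder, not part of the commit:

```python
from llama_cpp import Llama

# logits_all is a context-level option: it keeps logits for every token
# in the context window instead of only the last one, which is what the
# `logprobs` argument of create_completion relies on.
# "./model.gguf" is a placeholder path for illustration.
llm = Llama(model_path="./model.gguf", logits_all=True)

# Before this fix, the logprobs guard in _create_completion consulted
# model_params.logits_all; with the fix it reads context_params.logits_all,
# matching where the flag is actually stored.
out = llm.create_completion("Hello", max_tokens=8, logprobs=5)
print(out["choices"][0]["logprobs"])
```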
