Commit 5575fed

fix: llama_grammar_accept_token arg order (abetlen#1649)
Old: llama_grammar_accept_token(ctx, grammar, token)
New: llama_grammar_accept_token(grammar, ctx, token)
1 parent f7b9e6d commit 5575fed
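
The argument order matters because the binding calls into C positionally through ctypes, where both handles are opaque pointers, so a swap is not caught by any type check. A minimal sketch of declaring the prototype with the corrected order (the shared-library name and the pointer types here are illustrative assumptions, not the repo's actual binding code):

import ctypes

# Load the llama.cpp shared library (path/name is an assumption for illustration).
lib = ctypes.CDLL("libllama.so")

# Corrected prototype: the grammar handle now comes before the context handle,
# i.e. llama_grammar_accept_token(grammar, ctx, token).
lib.llama_grammar_accept_token.argtypes = [
    ctypes.c_void_p,  # grammar (first, after the upstream reorder)
    ctypes.c_void_p,  # ctx
    ctypes.c_int32,   # token id (llama_token is a 32-bit int)
]
lib.llama_grammar_accept_token.restype = None

Because both pointer arguments are c_void_p, passing (ctx, grammar) instead of (grammar, ctx) would still run but hand each C parameter the wrong object, which is why this bug is easy to miss.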

File tree

1 file changed: 1 addition, 1 deletion

llama_cpp/_internals.py

@@ -511,7 +511,7 @@ def sample_token(self, candidates: "_LlamaTokenDataArray") -> int:
     def grammar_accept_token(self, grammar: LlamaGrammar, token: int):
         assert self.ctx is not None
         assert grammar.grammar is not None
-        llama_cpp.llama_grammar_accept_token(self.ctx, grammar.grammar, token)
+        llama_cpp.llama_grammar_accept_token(grammar.grammar, self.ctx, token)
 
     def reset_timings(self):
         assert self.ctx is not None
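
The corrected call sits on the grammar-constrained sampling path, so any generation that passes a grammar exercises it. A minimal usage sketch with the package's high-level API (LlamaGrammar.from_string and the grammar keyword are part of llama-cpp-python's public interface; the model path is a placeholder):

from llama_cpp import Llama, LlamaGrammar

# A trivial GBNF grammar that restricts output to "yes" or "no".
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

llm = Llama(model_path="./model.gguf")  # placeholder model path

# Each sampled token is fed back through grammar_accept_token, which now
# forwards (grammar, ctx, token) to the C library in the corrected order.
out = llm("Is water wet? Answer: ", grammar=grammar, max_tokens=4)
print(out["choices"][0]["text"])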
