commit b681674 (1 parent f94faab)
README.md
@@ -423,10 +423,10 @@ Due to discrepancies between llama.cpp and HuggingFace's tokenizers, it is requi
 >>> from llama_cpp import Llama
 >>> from llama_cpp.llama_tokenizer import LlamaHFTokenizer
 >>> llm = Llama.from_pretrained(
-      repo_id="meetkai/functionary-7b-v1-GGUF",
+      repo_id="meetkai/functionary-small-v2.2-GGUF",
       filename="functionary-small-v2.2.q4_0.gguf",
       chat_format="functionary-v2",
-      tokenizer=LlamaHFTokenizer.from_pretrained("meetkai/functionary-7b-v1-GGUF")
+      tokenizer=LlamaHFTokenizer.from_pretrained("meetkai/functionary-small-v2.2-GGUF")
 )
 ```
 </details>
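
For context, here is a self-contained sketch of how the corrected snippet would be used for function calling with the functionary-v2 chat format. It assumes llama-cpp-python is installed together with huggingface-hub and transformers (needed for `Llama.from_pretrained` and `LlamaHFTokenizer`), and that the quantized model fits in local memory; the `get_current_weather` tool is a hypothetical example, not something defined by the repository.

```python
# Minimal usage sketch based on the updated README snippet (assumptions noted above).
from llama_cpp import Llama
from llama_cpp.llama_tokenizer import LlamaHFTokenizer

llm = Llama.from_pretrained(
    repo_id="meetkai/functionary-small-v2.2-GGUF",
    filename="functionary-small-v2.2.q4_0.gguf",
    chat_format="functionary-v2",
    # After this commit, the tokenizer repo matches the GGUF repo.
    tokenizer=LlamaHFTokenizer.from_pretrained("meetkai/functionary-small-v2.2-GGUF"),
)

# OpenAI-style tool definition (hypothetical); functionary-v2 decides whether to call it.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the weather in Berlin right now?"}],
    tools=tools,
)
# The assistant message may contain tool_calls instead of plain content.
print(response["choices"][0]["message"])
```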