1 file changed: +2 −2 lines changed
@@ -327,7 +327,7 @@ You'll need to install the `huggingface-hub` package to use this feature (`pip i
 ```python
 llm = Llama.from_pretrained(
-    repo_id="Qwen/Qwen1.5-0.5B-Chat-GGUF",
+    repo_id="Qwen/Qwen2-0.5B-Instruct-GGUF",
     filename="*q8_0.gguf",
     verbose=False
 )
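For context on the hunk above: the `filename` argument is a glob pattern that `Llama.from_pretrained` matches against the files available in the Hub repository, so the same `*q8_0.gguf` pattern keeps working after the repo id changes. A minimal sketch of that matching, using Python's standard `fnmatch` and a hypothetical file listing (the file names below are illustrative, not taken from the actual repo):

```python
from fnmatch import fnmatch

# Hypothetical file listing for a GGUF repo on the Hub
files = [
    "qwen2-0_5b-instruct-q4_k_m.gguf",
    "qwen2-0_5b-instruct-q8_0.gguf",
    "README.md",
]

# The pattern used in the diff: select the q8_0 quantization
pattern = "*q8_0.gguf"
matches = [f for f in files if fnmatch(f, pattern)]
print(matches)  # ['qwen2-0_5b-instruct-q8_0.gguf']
```

This is only a sketch of the glob semantics; the library itself resolves the matched file via `huggingface-hub` and downloads it before loading.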
@@ -688,7 +688,7 @@ For possible options, see [llama_cpp/llama_chat_format.py](llama_cpp/llama_chat_
 If you have `huggingface-hub` installed, you can also use the `--hf_model_repo_id` flag to load a model from the Hugging Face Hub.
 
 ```bash
-python3 -m llama_cpp.server --hf_model_repo_id Qwen/Qwen1.5-0.5B-Chat-GGUF --model '*q8_0.gguf'
+python3 -m llama_cpp.server --hf_model_repo_id Qwen/Qwen2-0.5B-Instruct-GGUF --model '*q8_0.gguf'
 ```
 
 ### Web Server Features