Commit 27d5358

docs: Update readme examples to use newer Qwen2 model (abetlen#1544)

1 parent 5beec1a
File tree: 1 file changed, +2 −2 lines

README.md (+2 −2)
@@ -327,7 +327,7 @@ You'll need to install the `huggingface-hub` package to use this feature (`pip i
 
 ```python
 llm = Llama.from_pretrained(
-    repo_id="Qwen/Qwen1.5-0.5B-Chat-GGUF",
+    repo_id="Qwen/Qwen2-0.5B-Instruct-GGUF",
     filename="*q8_0.gguf",
     verbose=False
 )
@@ -688,7 +688,7 @@ For possible options, see [llama_cpp/llama_chat_format.py](llama_cpp/llama_chat_
 If you have `huggingface-hub` installed, you can also use the `--hf_model_repo_id` flag to load a model from the Hugging Face Hub.
 
 ```bash
-python3 -m llama_cpp.server --hf_model_repo_id Qwen/Qwen1.5-0.5B-Chat-GGUF --model '*q8_0.gguf'
+python3 -m llama_cpp.server --hf_model_repo_id Qwen/Qwen2-0.5B-Instruct-GGUF --model '*q8_0.gguf'
 ```
 
 ### Web Server Features
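Both updated examples keep the same `filename`/`--model` argument, the glob pattern `*q8_0.gguf`, which selects the q8_0 quantization out of whatever GGUF files the repo contains. A minimal sketch of that selection step, assuming shell-style `fnmatch` matching and a hypothetical file listing (the real listing comes from the Hugging Face Hub API, and the filenames below are illustrative, not confirmed contents of the Qwen repo):

```python
from fnmatch import fnmatch

# Hypothetical GGUF files in a model repo; the actual names depend on
# how the repo's quantizations were uploaded.
repo_files = [
    "qwen2-0_5b-instruct-q4_k_m.gguf",
    "qwen2-0_5b-instruct-q8_0.gguf",
    "README.md",
]

pattern = "*q8_0.gguf"

# Keep only the files whose names match the shell-style glob pattern.
matches = [name for name in repo_files if fnmatch(name, pattern)]
print(matches)
```

With this listing, only the q8_0 file matches, so the pattern unambiguously picks one quantization; a looser pattern such as `*.gguf` would match several files, which is why the examples pin the quantization suffix.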

0 commit comments