Commit c305be6

Merge branch 'main' of github.com:abetlen/llama_cpp_python into main

2 parents: a7d17b8 + b76724c
File tree

1 file changed: docs/install/macos.md (+6 −6 lines)
````diff
@@ -38,19 +38,19 @@ llama-cpp-python         0.1.68
 
 ```
 
-**(5) Download a v3 ggml model**
-  - **ggmlv3**
-  - file name ends with **q4_0.bin** - indicating it is 4bit quantized, with quantisation method 0
+**(5) Download a v3 gguf v2 model**
+  - **ggufv2**
+  - file name ends with **Q4_0.gguf** - indicating it is 4bit quantized, with quantisation method 0
 
-https://huggingface.co/TheBloke/open-llama-7b-open-instruct-GGML
+https://huggingface.co/TheBloke/CodeLlama-7B-GGUF
 
 
 **(6) run the llama-cpp-python API server with MacOS Metal GPU support**
 ```
 # config your ggml model path
-# make sure it is ggml v3
+# make sure it is gguf v2
 # make sure it is q4_0
-export MODEL=[path to your llama.cpp ggml models]]/[ggml-model-name]]q4_0.bin
+export MODEL=[path to your llama.cpp ggml models]]/[ggml-model-name]]Q4_0.gguf
 python3 -m llama_cpp.server --model $MODEL --n_gpu_layers 1
 ```
````
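After this change, steps (5) and (6) fit together as below. This is a minimal end-to-end sketch, not part of the commit: it assumes the quantized file in the TheBloke/CodeLlama-7B-GGUF repo is named codellama-7b.Q4_0.gguf (check the repo's file listing for the exact name) and that the server listens on its default address, localhost:8000.

```bash
# Fetch a gguf v2, Q4_0-quantized model from Hugging Face.
# The file name is an assumption; verify it in the repo's file listing.
# -L follows the redirect from the /resolve/ URL to the CDN.
curl -L -o codellama-7b.Q4_0.gguf \
  https://huggingface.co/TheBloke/CodeLlama-7B-GGUF/resolve/main/codellama-7b.Q4_0.gguf

# Point MODEL at the downloaded gguf file and offload layers to the Metal GPU.
export MODEL=./codellama-7b.Q4_0.gguf
python3 -m llama_cpp.server --model $MODEL --n_gpu_layers 1

# Smoke-test the server's OpenAI-compatible completions endpoint
# (default host/port assumed: localhost:8000).
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def fibonacci(n):", "max_tokens": 32}'
```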

0 commit comments