Commit baeb7b3

Merge branch 'main' of github.com:abetlen/llama_cpp_python into main

2 parents: b62c449 + fb1f956
File tree

1 file changed: +2 −2 lines changed

docs/server.md
````diff
@@ -45,7 +45,7 @@ You'll first need to download one of the available function calling models in GG
 Then when you run the server you'll need to also specify the `functionary-7b-v1` chat_format
 
 ```bash
-python3 -m llama_cpp.server --model <model_path> --chat-format functionary
+python3 -m llama_cpp.server --model <model_path> --chat_format functionary
 ```
 
 ### Multimodal Models
@@ -61,7 +61,7 @@ You'll first need to download one of the available multi-modal models in GGUF fo
 Then when you run the server you'll need to also specify the path to the clip model used for image embedding and the `llava-1-5` chat_format
 
 ```bash
-python3 -m llama_cpp.server --model <model_path> --clip_model_path <clip_model_path> --chat_format llava-1-5
+python3 -m llama_cpp.server --model <model_path> --clip_model_path <clip_model_path> --chat_format llava-1-5
 ```
 
 Then you can just use the OpenAI API as normal
````
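
For reference, the "use the OpenAI API as normal" step the docs point to amounts to aiming an OpenAI client at the local server. A minimal sketch, assuming the server is running on its default localhost:8000 and the `openai` Python package (v1+) is installed; the `api_key` and `model` values here are placeholders, since a locally launched server does not validate them by default:

```python
# Minimal sketch: querying llama_cpp.server through its
# OpenAI-compatible endpoint. Assumes the server was started as
# shown in the diff above and listens on the default localhost:8000.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # point the client at the local server
    api_key="sk-no-key-required",          # placeholder; not checked by default
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; the server serves the model it was started with
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

The same client call works whether the server was launched with the `functionary` or the `llava-1-5` chat_format; only the server's launch flags differ.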
