Commit e6a36b8

docs: edit function calling docs

1 parent 8c3aa78 commit e6a36b8

1 file changed: +2 -1 lines changed

docs/server.md (2 additions, 1 deletion)
@@ -66,12 +66,13 @@ Then just update your settings in `.vscode/settings.json` to point to your code
 ### Function Calling
 
 `llama-cpp-python` supports structured function calling based on a JSON schema.
+Function calling is completely compatible with the OpenAI function calling API and can be used by connecting with the official OpenAI Python client.
 
 You'll first need to download one of the available function calling models in GGUF format:
 
 - [functionary-7b-v1](https://huggingface.co/abetlen/functionary-7b-v1-GGUF)
 
-Then when you run the server you'll need to also specify the `functionary-7b-v1` chat_format
+Then when you run the server you'll need to also specify the `functionary` chat_format
 
 ```bash
 python3 -m llama_cpp.server --model <model_path> --chat_format functionary
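
The diff above says the server's function calling is compatible with the OpenAI function calling API. As a minimal sketch of what a client request to such a server could look like, the snippet below builds an OpenAI-style chat completion body with a JSON-schema tool definition. The `get_current_weather` tool, its schema fields, and the example message are illustrative assumptions, not part of the commit; the field names follow the OpenAI tools-style chat completions API.

```python
import json

# Hypothetical request body a client might POST to the server's
# /v1/chat/completions endpoint. The tool name and schema below are
# made up for illustration; the structure follows the OpenAI API.
request_body = {
    "model": "functionary",
    "messages": [
        {"role": "user", "content": "What is the weather in Berlin?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather for a city",
                # JSON schema describing the function's arguments
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {
                            "type": "string",
                            "description": "City name",
                        }
                    },
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(request_body, indent=2))
```

Because the payload is plain JSON, the same body works whether you send it with the official OpenAI Python client (pointed at the local server's base URL) or with any generic HTTP client.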

0 commit comments