NOTE: All server options are also available as environment variables.
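
For example, as a minimal sketch (assuming the usual convention that an option such as `--model` maps to an uppercase `MODEL` variable, and using a hypothetical model path), the server might be started like this:

```bash
# Hypothetical GGUF path; substitute the model file you actually downloaded.
MODEL=./models/mymodel.gguf python3 -m llama_cpp.server
```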

## Guides

### Function Calling
`llama-cpp-python` supports structured function calling based on a JSON schema.

You'll first need to download one of the available function calling models in GGUF format:

- [functionary-7b-v1](https://huggingface.co/abetlen/functionary-7b-v1-GGUF)

Then when you run the server you'll also need to specify the `functionary` chat_format:

```bash
python3 -m llama_cpp.server --model <model_path> --chat-format functionary
```
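
With the server running, function calling goes through the OpenAI-compatible chat completions endpoint. The request below is only a sketch: the host and port assume the server defaults, the `get_weather` function is hypothetical, and the OpenAI-style `functions` payload may need adjusting to whatever fields your installed version accepts.

```bash
# Sketch of a function-calling request; the JSON schema under "parameters"
# describes the arguments the model is asked to produce.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What is the weather like in Paris?"}
    ],
    "functions": [
      {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {"type": "string"}
          },
          "required": ["city"]
        }
      }
    ]
  }'
```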

### Multimodal Models

`llama-cpp-python` supports the llava1.5 family of multi-modal models, which allow the language model to read information from both text and images.

You'll first need to download one of the available multi-modal models in GGUF format:

- [llava-v1.5-7b](https://huggingface.co/mys/ggml_llava-v1.5-7b)
- [llava-v1.5-13b](https://huggingface.co/mys/ggml_llava-v1.5-13b)

Then when you run the server you'll also need to specify the path to the clip model used for image embedding and the `llava-1-5` chat_format.
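
A launch command might look like the following sketch; the `--clip_model_path` and `--chat_format` flag spellings are assumptions based on the server's option names, so check `python3 -m llama_cpp.server --help` for the exact forms in your version.

```bash
# Sketch: point --model at the llava GGUF weights, --clip_model_path at the
# accompanying clip/mmproj GGUF file, and select the llava-1-5 chat format.
python3 -m llama_cpp.server --model <model_path> --clip_model_path <clip_model_path> --chat_format llava-1-5
```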
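
Once the server is up, an image can be passed alongside a text prompt in OpenAI vision style. This is again a sketch: the data-URI form of the `image_url` content part is an assumption, and `my_image.png` is a hypothetical file.

```bash
# Sketch: encode the image as a base64 data URI (GNU base64 shown) and send it
# together with a text question in a single user message.
IMG=$(base64 -w 0 my_image.png)
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "image_url", "image_url": {"url": "data:image/png;base64,'"$IMG"'"}},
          {"type": "text", "text": "What is shown in this image?"}
        ]
      }
    ]
  }'
```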