NOTE: All server options are also available as environment variables. For exampl
## Guides

### Function Calling

`llama-cpp-python` supports structured function calling based on a JSON schema.

You'll first need to download one of the available function calling models in GGUF format:

- [functionary-7b-v1](https://huggingface.co/abetlen/functionary-7b-v1-GGUF)

Then when you run the server you'll also need to specify the `functionary` chat format:

```bash
python3 -m llama_cpp.server --model <model_path> --chat-format functionary
```
### Multimodal Models

`llama-cpp-python` supports the llava1.5 family of multi-modal models, which allow the language model to read information from both text and images.

You'll first need to download one of the available multi-modal models in GGUF format:

- [llava-v1.5-7b](https://huggingface.co/mys/ggml_llava-v1.5-7b)
- [llava-v1.5-13b](https://huggingface.co/mys/ggml_llava-v1.5-13b)

Then when you run the server you'll also need to specify the path to the CLIP model used for image embedding and the `llava-1-5` chat format: