README.md: 8 additions & 11 deletions
@@ -293,19 +293,16 @@ To constrain the response to a specific JSON Schema, you can use the `schema` pr
 The high-level API also provides a simple interface for function calling.

-Note that the only model that supports full function calling at this time is "functionary".
-The gguf-converted files for this model can be found here: [functionary-7b-v1](https://huggingface.co/abetlen/functionary-7b-v1-GGUF)
+The only set of models that supports full function calling at this time is [functionary](https://github.com/MeetKai/functionary). The various gguf-converted files for this set of models can be found [here](https://huggingface.co/meetkai). Functionary is able to intelligently call functions and also analyze any provided function outputs to generate coherent responses. All v2 models of functionary support **parallel function calling**. You can provide either `functionary-v1` or `functionary-v2` for the `chat_format` when initializing the Llama class.
+
+Note that due to discrepancies between llama.cpp and HuggingFace's tokenizers, it is required to provide an HF Tokenizer for functionary. The `LlamaHFTokenizer` class can be initialized and passed into the Llama class. This will override the default llama.cpp tokenizer used in the Llama class. The tokenizer files are already included in the respective HF repositories hosting the gguf files.
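Putting the two points above together, initialization might look like the following sketch; the `repo_id` and gguf `filename` are placeholders for whichever functionary v2 checkpoint you use:

```python
from llama_cpp import Llama
from llama_cpp.llama_tokenizer import LlamaHFTokenizer

# Load a functionary v2 gguf model from the Hugging Face Hub.
# Passing an HF tokenizer overrides the default llama.cpp tokenizer,
# avoiding the tokenization discrepancies mentioned above.
llm = Llama.from_pretrained(
    repo_id="meetkai/functionary-small-v2.2-GGUF",        # placeholder repo id
    filename="functionary-small-v2.2.q4_0.gguf",          # placeholder gguf filename
    chat_format="functionary-v2",
    tokenizer=LlamaHFTokenizer.from_pretrained("meetkai/functionary-small-v2.2-GGUF"),
)
```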
"content": "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. The assistant calls functions with appropriate input when necessary"
307
-
308
-
},
309
306
{
310
307
"role": "user",
311
308
"content": "Extract Jason is 25 years old"
@@ -332,12 +329,12 @@ The gguf-converted files for this model can be found here: [functionary-7b-v1](h