Commit 945e20f

docs: update link

1 parent: e6a36b8
1 file changed: 1 addition, 1 deletion

docs/server.md (+1 -1)
@@ -78,7 +78,7 @@ Then when you run the server you'll need to also specify the `functionary` chat_format
 python3 -m llama_cpp.server --model <model_path> --chat_format functionary
 ```
 
-Check out the example notebook [here](https://github.com/abetlen/llama-cpp-python/blob/main/examples/notebooks/Functions.ipynb) for a walkthrough of some interesting use cases for function calling.
+Check out this [example notebook](https://github.com/abetlen/llama-cpp-python/blob/main/examples/notebooks/Functions.ipynb) for a walkthrough of some interesting use cases for function calling.
 
 
 ### Multimodal Models
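For context on the paragraph this commit touches: the linked notebook demonstrates function calling against the server started above. A minimal sketch of such a request, assuming the server is running on its default localhost:8000 with its OpenAI-compatible `/v1` API, that the `openai` Python package is installed, and using a purely hypothetical `get_current_weather` tool:

```python
from openai import OpenAI

# Point the OpenAI client at the local llama-cpp-python server
# (assumes it was started as in the diff above, on the default port 8000).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# A hypothetical tool definition, just to illustrate the request shape.
tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="functionary",  # placeholder name for the single loaded model
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, the structured call arrives here.
print(response.choices[0].message.tool_calls)
```

The `--chat_format functionary` flag from the diff is what lets the server return structured `tool_calls` in OpenAI's schema rather than plain text.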
