Commit a0f373e

fix: Changed local API doc references to hosted (abetlen#1317)

1 parent: f165048
1 file changed: +2 −2 lines

README.md (2 additions, 2 deletions)
@@ -321,7 +321,7 @@ For OpenAI API v1 compatibility, you use the [`create_chat_completion_openai_v1`
 
 ### JSON and JSON Schema Mode
 
-To constrain chat responses to only valid JSON or a specific JSON Schema use the `response_format` argument in [`create_chat_completion`](http://localhost:8000/api-reference/#llama_cpp.Llama.create_chat_completion).
+To constrain chat responses to only valid JSON or a specific JSON Schema use the `response_format` argument in [`create_chat_completion`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_chat_completion).
 
 #### JSON Mode
 
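For context, the changed line documents the `response_format` argument of `create_chat_completion`. Below is a minimal sketch of JSON mode usage, assuming a locally downloaded chat-capable GGUF model; the model path and `chat_format` value are placeholders, not part of this commit:

```python
from llama_cpp import Llama

# Placeholder path -- point this at any chat-capable GGUF model on disk.
llm = Llama(model_path="./models/model.gguf", chat_format="chatml")

# response_format constrains the reply to valid JSON.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant that outputs JSON."},
        {"role": "user", "content": "Who won the world series in 2020?"},
    ],
    response_format={"type": "json_object"},
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```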

@@ -529,7 +529,7 @@ llama = Llama(
 
 ### Embeddings
 
-To generate text embeddings use [`create_embedding`](http://localhost:8000/api-reference/#llama_cpp.Llama.create_embedding).
+To generate text embeddings use [`create_embedding`](https://llama-cpp-python.readthedocs.io/en/latest/api-reference/#llama_cpp.Llama.create_embedding).
 
 ```python
 import llama_cpp
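For context, a minimal sketch of the linked `create_embedding` call, assuming a model loaded with `embedding=True`; the model path is a placeholder:

```python
import llama_cpp

# Placeholder path -- point this at any embedding-capable GGUF model on disk.
llm = llama_cpp.Llama(model_path="./models/model.gguf", embedding=True)

# A single string returns one embedding vector.
result = llm.create_embedding("Hello, world!")
vector = result["data"][0]["embedding"]
print(len(vector))

# A list of strings is also accepted and returns one embedding per input.
batch = llm.create_embedding(["Hello, world!", "Goodbye, world!"])
```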
