Commit 213cc5c

Remove async from function signature to avoid blocking the server
1 parent 3727ba4 commit 213cc5c

1 file changed, 1 addition and 1 deletion

llama_cpp/server/__main__.py

@@ -196,7 +196,7 @@ class Config:
     "/v1/chat/completions",
     response_model=CreateChatCompletionResponse,
 )
-async def create_chat_completion(
+def create_chat_completion(
     request: CreateChatCompletionRequest,
 ) -> Union[llama_cpp.ChatCompletion, EventSourceResponse]:
     completion_or_chunks = llama.create_chat_completion(
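
Why this helps (background, not part of the diff): FastAPI and Starlette run plain def path operations in a worker threadpool, whereas async def handlers run directly on the event loop. Since llama.create_chat_completion() is a long, blocking call, keeping the handler async def would stall every other request until inference finished. The minimal sketch below illustrates the difference; the routes and sleep durations are invented for illustration and are not part of this commit.

    # Minimal sketch (assumed FastAPI behavior, not code from this commit):
    # plain `def` endpoints are dispatched to a worker threadpool, while
    # `async def` endpoints run on the event loop, so a blocking call inside
    # an `async def` handler stalls every other request until it returns.
    import time

    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/blocking-async")  # hypothetical route, for illustration only
    async def blocking_async():
        # Runs on the event loop: this blocks ALL concurrent requests for 5 s.
        time.sleep(5)
        return {"done": True}

    @app.get("/blocking-sync")  # hypothetical route, for illustration only
    def blocking_sync():
        # Runs in a threadpool: other requests keep being served meanwhile.
        time.sleep(5)
        return {"done": True}

With create_chat_completion declared as a plain def, as in this commit, the blocking inference runs off the event loop and the server stays responsive to other requests.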

0 commit comments
