Commit b9098b0

llama_cpp server: prompt is a string

Not sure why this union type was here, but looking at llama.py, prompt is only ever processed as a string for completion. This was breaking types when generating an OpenAPI client.

1 parent 7ab08b8 commit b9098b0
File tree

1 file changed (+1 −4 lines)

llama_cpp/server/app.py (+1 −4: 1 addition, 4 deletions)
@@ -126,7 +126,7 @@ def get_llama():
     )


 class CreateCompletionRequest(BaseModel):
-    prompt: Union[str, List[str]] = Field(
+    prompt: Optional[str] = Field(
         default="",
         description="The prompt to generate completions for."
     )
@@ -175,9 +175,6 @@ class Config:
 def create_completion(
     request: CreateCompletionRequest, llama: llama_cpp.Llama = Depends(get_llama)
 ):
-    if isinstance(request.prompt, list):
-        request.prompt = "".join(request.prompt)
-
     completion_or_chunks = llama(
         **request.dict(
             exclude={