Commit d11ccc3

fix(server): minor type fixes
1 parent c1325dc · commit d11ccc3

2 files changed, +4 -4 lines

llama_cpp/server/app.py

3 additions, 3 deletions

@@ -493,7 +493,7 @@ async def tokenize(
 ) -> TokenizeInputResponse:
     tokens = llama_proxy(body.model).tokenize(body.input.encode("utf-8"), special=True)

-    return {"tokens": tokens}
+    return TokenizeInputResponse(tokens=tokens)


 @router.post(
@@ -508,7 +508,7 @@ async def count_query_tokens(
 ) -> TokenizeInputCountResponse:
     tokens = llama_proxy(body.model).tokenize(body.input.encode("utf-8"), special=True)

-    return {"count": len(tokens)}
+    return TokenizeInputCountResponse(count=len(tokens))


 @router.post(
@@ -523,4 +523,4 @@ async def detokenize(
 ) -> DetokenizeInputResponse:
     text = llama_proxy(body.model).detokenize(body.tokens).decode("utf-8")

-    return {"text": text}
+    return DetokenizeInputResponse(text=text)
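
The three handlers above now return instances of the Pydantic response models declared in their return annotations instead of bare dicts. The actual model definitions live in llama_cpp/server/types.py and are not shown in this diff; below is a minimal sketch of what they plausibly look like, with field names inferred from the constructor calls above:

from typing import List
from pydantic import BaseModel, Field

class TokenizeInputResponse(BaseModel):
    # Token ids produced from the request's "input" string.
    tokens: List[int] = Field(description="A list of tokens.")

class TokenizeInputCountResponse(BaseModel):
    # Number of tokens in the tokenized input.
    count: int = Field(description="The number of tokens in the input.")

class DetokenizeInputResponse(BaseModel):
    # Text reconstructed from the request's token ids.
    text: str = Field(description="The detokenized text.")

Returning model instances keeps the handlers consistent with their declared return types, so static type checkers no longer flag the dict returns, while FastAPI serializes the response the same way as before.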

llama_cpp/server/types.py

1 addition, 1 deletion

@@ -268,7 +268,7 @@ class ModelList(TypedDict):

 class TokenizeInputRequest(BaseModel):
     model: Optional[str] = model_field
-    input: Optional[str] = Field(description="The input to tokenize.")
+    input: str = Field(description="The input to tokenize.")

     model_config = {
         "json_schema_extra": {"examples": [{"input": "How many tokens in this query?"}]}
