Commit 6d8db9d

tests: simple test for server module

1 parent 468377b commit 6d8db9d
File tree

4 files changed: +117 −2 lines changed

llama_cpp/server/app.py (2 additions, 0 deletions)

@@ -24,6 +24,7 @@ class Settings(BaseSettings):
     last_n_tokens_size: int = 64
     logits_all: bool = False
     cache: bool = False  # WARNING: This is an experimental feature
+    vocab_only: bool = False


 app = FastAPI(
@@ -49,6 +50,7 @@ class Settings(BaseSettings):
     n_batch=settings.n_batch,
     n_ctx=settings.n_ctx,
     last_n_tokens_size=settings.last_n_tokens_size,
+    vocab_only=settings.vocab_only,
 )
 if settings.cache:
     cache = llama_cpp.LlamaCache()
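`Settings` subclasses pydantic's `BaseSettings`, which is why exporting `VOCAB_ONLY=true` (as the new test does) flips the `vocab_only` field without any code changes. A minimal stdlib-only sketch of that env-to-field mapping; the field names and defaults come from the diff, but the parsing logic is only an approximation of pydantic's actual behavior:

```python
import os

# Defaults mirror the Settings fields shown in the diff above.
DEFAULTS = {
    "last_n_tokens_size": 64,
    "logits_all": False,
    "cache": False,
    "vocab_only": False,
}

def load_settings(environ=os.environ):
    """Approximate BaseSettings: an uppercase env var overrides each field."""
    settings = {}
    for name, default in DEFAULTS.items():
        raw = environ.get(name.upper())
        if raw is None:
            settings[name] = default
        elif isinstance(default, bool):
            # pydantic accepts several truthy spellings; this mimics that loosely.
            settings[name] = raw.lower() in ("1", "true", "yes", "on")
        else:
            settings[name] = type(default)(raw)
    return settings
```

With `vocab_only=True`, llama.cpp loads only the tokenizer vocabulary and skips the model weights, which is what makes a server smoke test like this cheap to run.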

poetry.lock (93 additions, 2 deletions; generated file, diff not rendered)

pyproject.toml (1 addition, 0 deletions)

@@ -24,6 +24,7 @@ mkdocs = "^1.4.2"
 mkdocstrings = {extras = ["python"], version = "^0.20.0"}
 mkdocs-material = "^9.1.4"
 pytest = "^7.2.2"
+httpx = "^0.24.0"

 [build-system]
 requires = [
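The `httpx` dev dependency is added because Starlette's `TestClient` (which FastAPI re-exports) is built on httpx in recent versions. The `^0.24.0` constraint uses Poetry's caret semantics: the leftmost non-zero version component is held fixed, so `^0.24.0` allows `>=0.24.0,<0.25.0` while `^1.4.2` would allow `>=1.4.2,<2.0.0`. A small sketch of that rule (my own helper, not part of Poetry's API):

```python
def caret_bounds(spec: str):
    """Return (inclusive lower, exclusive upper) for a Poetry caret constraint."""
    assert spec.startswith("^")
    parts = [int(p) for p in spec[1:].split(".")]
    lower = tuple(parts)
    # Bump the leftmost non-zero component; zero out everything after it.
    for i, p in enumerate(parts):
        if p != 0:
            upper = tuple(parts[:i] + [p + 1] + [0] * (len(parts) - i - 1))
            break
    else:
        # All components zero: only the exact version matches the lower bound.
        upper = tuple(parts[:-1] + [parts[-1] + 1])
    return lower, upper
```

So this pin accepts httpx patch releases within the 0.24 series but not 0.25, which matters because httpx's API moved noticeably between 0.x minor versions.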

tests/test_llama.py (21 additions, 0 deletions)

@@ -128,3 +128,24 @@ def mock_sample(*args, **kwargs):
     n = 0  # reset
     completion = llama.create_completion("", max_tokens=1)
     assert completion["choices"][0]["text"] == ""
+
+
+def test_llama_server():
+    from fastapi.testclient import TestClient
+    import os
+    os.environ["MODEL"] = MODEL
+    os.environ["VOCAB_ONLY"] = "true"
+    from llama_cpp.server.app import app
+    client = TestClient(app)
+    response = client.get("/v1/models")
+    assert response.json() == {
+        "object": "list",
+        "data": [
+            {
+                "id": MODEL,
+                "object": "model",
+                "owned_by": "me",
+                "permissions": [],
+            }
+        ],
+    }
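Note that the test sets the environment variables *before* importing `llama_cpp.server.app`, because the module constructs its `Settings` (and loads the model) at import time. `TestClient` then drives the FastAPI app in-process over the ASGI interface, so no server process or network socket is involved. A stdlib-only sketch of that pattern, using a hypothetical ASGI app that serves the same `/v1/models` payload shape (the `MODEL` path here is a placeholder, not the repo's real test model):

```python
import asyncio
import json

MODEL = "models/example.bin"  # placeholder, for illustration only

async def app(scope, receive, send):
    # Hypothetical ASGI app mimicking the /v1/models response in the test.
    assert scope["type"] == "http" and scope["path"] == "/v1/models"
    body = json.dumps({
        "object": "list",
        "data": [
            {"id": MODEL, "object": "model", "owned_by": "me", "permissions": []}
        ],
    }).encode()
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"application/json")]})
    await send({"type": "http.response.body", "body": body})

def get_json(path):
    """Call the app directly, the way an in-process test client does."""
    messages = []

    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}

    async def send(message):
        messages.append(message)

    scope = {"type": "http", "method": "GET", "path": path}
    asyncio.run(app(scope, receive, send))
    return json.loads(messages[-1]["body"])
```

Because the request never leaves the process, this kind of test stays fast and deterministic; combined with `VOCAB_ONLY=true` (so llama.cpp loads only the tokenizer vocabulary), the whole server module can be smoke-tested in seconds.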
