Commit f27393a

Add additional verbose logs for cache

1 parent 4cefb70
1 file changed: 4 additions, 0 deletions

llama_cpp/server/app.py (+4, -0)
@@ -119,8 +119,12 @@ def create_app(settings: Optional[Settings] = None):
         )
     if settings.cache:
         if settings.cache_type == "disk":
+            if settings.verbose:
+                print(f"Using disk cache with size {settings.cache_size}")
             cache = llama_cpp.LlamaDiskCache(capacity_bytes=settings.cache_size)
         else:
+            if settings.verbose:
+                print(f"Using ram cache with size {settings.cache_size}")
             cache = llama_cpp.LlamaRAMCache(capacity_bytes=settings.cache_size)
 
         cache = llama_cpp.LlamaCache(capacity_bytes=settings.cache_size)
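For context, a minimal sketch of how these settings could be exercised to trigger the new log lines. The Settings fields used below (cache, cache_type, cache_size, verbose) are the ones referenced in the diff; the model path and the 2 GiB capacity are placeholder values, not taken from this commit.

# Minimal sketch, assuming the Settings fields shown in the diff above.
from llama_cpp.server.app import Settings, create_app

settings = Settings(
    model="./models/model.bin",  # placeholder model path
    cache=True,
    cache_type="disk",           # "disk" -> LlamaDiskCache; anything else -> LlamaRAMCache
    cache_size=2 << 30,          # cache capacity in bytes (2 GiB here)
    verbose=True,                # with this commit, prints "Using disk cache with size ..."
)
app = create_app(settings)

# Serve the app, e.g.:
#   uvicorn.run(app, host="0.0.0.0", port=8000)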

0 commit comments