Commit 598f09c

Merge pull request #3 from ZenHubHQ/llm-monitoring

Removed hard check on ai_service parameter

2 parents: c538a13 + 6fb1013
File tree

1 file changed: +0 −2 lines
llama_cpp/llama.py (0 additions, 2 deletions)
```diff
@@ -975,8 +975,6 @@ def _create_completion(
         _ttft_start = time.time()
         _pid = os.getpid()
         _tpot_metrics = []
-        if not ai_service:
-            raise ValueError("ai_service must be provided")
         _labels = {
             "service": ai_service if ai_service is not None else "not-specified",
             "request_type": "chat/completions",
```
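The effect of this change can be sketched in isolation: before the commit, a missing `ai_service` raised a `ValueError`; after it, the metrics label simply falls back to `"not-specified"`. The `build_labels` helper below is hypothetical (not part of `llama_cpp`), written only to illustrate the fallback that the remaining code path performs.

```python
# Hypothetical standalone reproduction of the post-commit behavior.
# The hard check (`raise ValueError("ai_service must be provided")`)
# is gone; an absent service name just produces a fallback label.

def build_labels(ai_service=None):
    return {
        "service": ai_service if ai_service is not None else "not-specified",
        "request_type": "chat/completions",
    }

print(build_labels())              # service falls back to "not-specified"
print(build_labels("my-service"))  # explicit service name is kept
```

One subtlety worth noting: the removed check used `if not ai_service:`, which also rejected the empty string, while the surviving expression tests `is not None`, so an empty string now passes through as the label value unchanged.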

0 commit comments