Commit 401309d

Revert "Merge pull request abetlen#521 from bretello/main"
This reverts commit 07f0f3a, reversing changes made to d8a3ddb.
1 parent 07f0f3a commit 401309d

1 file changed: 1 addition, 4 deletions
llama_cpp/llama_cpp.py

@@ -423,10 +423,7 @@ def llama_backend_free():
 def llama_load_model_from_file(
     path_model: bytes, params: llama_context_params
 ) -> llama_model_p:
-    result = _lib.llama_load_model_from_file(path_model, params)
-    if result is None:
-        raise Exception(f"Failed to load model from {path_model}")
-    return result
+    return _lib.llama_load_model_from_file(path_model, params)
 
 
 _lib.llama_load_model_from_file.argtypes = [c_char_p, llama_context_params]
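
For context, a minimal caller-side sketch (not part of the commit): after this revert the wrapper simply forwards the ctypes call, so a failed load comes back as None (a NULL llama_model_p) rather than raising, and a caller that wants an exception has to check the result itself. The model path below is a hypothetical placeholder.

# Usage sketch assuming the post-revert wrapper above; the model path is
# hypothetical. A failed load returns a NULL pointer, which ctypes surfaces
# as None, so the caller performs the check that the reverted code used to do.
from llama_cpp import llama_cpp

params = llama_cpp.llama_context_default_params()
model = llama_cpp.llama_load_model_from_file(b"models/7B/ggml-model.bin", params)
if model is None:
    raise RuntimeError("Failed to load model from models/7B/ggml-model.bin")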

0 commit comments
