Commit 1f0b9a2

fix : Missing LoRA adapter after API change (abetlen#1630)
1 parent 8a12c9f

llama_cpp/llama.py

1 file changed: 6 additions & 3 deletions (+6 −3)

@@ -2083,11 +2083,14 @@ def pooling_type(self) -> str:
 
     def close(self) -> None:
         """Explicitly free the model from memory."""
-        self._stack.close()
+        if hasattr(self,'_stack'):
+            if self._stack is not None:
+                self._stack.close()
 
     def __del__(self) -> None:
-        if self._lora_adapter is not None:
-            llama_cpp.llama_lora_adapter_free(self._lora_adapter)
+        if hasattr(self,'_lora_adapter'):
+            if self._lora_adapter is not None:
+                llama_cpp.llama_lora_adapter_free(self._lora_adapter)
         self.close()
 
     @staticmethod
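
Context for the guard (not part of the commit itself): Python still calls __del__ when __init__ exits early with an exception, so attributes such as _lora_adapter or _stack may never have been assigned by the time the destructor runs. Below is a minimal standalone sketch of that failure mode and the hasattr() pattern used in the commit, written against a hypothetical Resource class rather than the real Llama class.

# Minimal sketch (hypothetical Resource class, not llama-cpp-python code):
# shows why __del__ needs hasattr() guards for attributes set in __init__.

class Resource:
    def __init__(self, fail: bool = False):
        if fail:
            # __init__ aborts before _handle is assigned, but Python
            # still invokes __del__ on the partially built object.
            raise ValueError("constructor failed early")
        self._handle = object()  # stands in for self._lora_adapter

    def __del__(self) -> None:
        # Guarded access, mirroring the pattern in the commit: safe even
        # when __init__ raised before the assignment ran.
        if hasattr(self, '_handle'):
            if self._handle is not None:
                self._handle = None  # release the native handle here

try:
    Resource(fail=True)
except ValueError:
    # Without the hasattr() check, collecting the half-constructed object
    # would trigger an AttributeError inside __del__ (reported by Python
    # as an ignored exception during garbage collection).
    pass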
