Commit f1c631d

Bug fixed with n_ctx=0 (abetlen#1015)
If n_ctx is set to 0, the code should use the maximum context length of the selected model, but it did not: the parameter was never initialized from the model, and there was a related problem with n_batch.
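
To illustrate the rule the fix implements, here is a minimal standalone sketch (the function name and the example numbers are hypothetical, not part of the commit):

# Standalone sketch of the defaulting rule: n_ctx=0 means "use the
# model's training context", and n_batch is clamped to fit inside it.
def resolve_ctx_and_batch(n_ctx: int, n_batch: int, n_ctx_train: int) -> tuple[int, int]:
    if n_ctx == 0:
        n_ctx = n_ctx_train
        n_batch = min(n_ctx, n_batch)
    return n_ctx, n_batch

print(resolve_ctx_and_batch(0, 512, 4096))   # (4096, 512)
print(resolve_ctx_and_batch(0, 8192, 4096))  # (4096, 4096) -- batch clamped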
1 parent 5a89446

1 file changed: +6 −0 lines

llama_cpp/llama.py

@@ -923,6 +923,12 @@ def __init__(
         self._model = _LlamaModel(
             path_model=self.model_path, params=self.model_params, verbose=self.verbose
         )
+        # Set the default value for the context and correct the batch
+        if n_ctx == 0:
+            n_ctx = self._model.n_ctx_train()
+            self.n_batch = min(n_ctx, n_batch)
+            self.context_params.n_ctx = self._model.n_ctx_train()
+            self.context_params.n_batch = self.n_batch
 
         self._ctx = _LlamaContext(
             model=self._model,
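
As a usage sketch (the model path is a placeholder, and reading self._model is an internal detail shown only to mirror the diff), passing n_ctx=0 after this fix derives the context size from the model:

from llama_cpp import Llama

# Hypothetical model path; any GGUF model works.
llm = Llama(model_path="./models/model.gguf", n_ctx=0)

# With the fix, n_ctx defaults to the model's training context length,
# and n_batch is clamped so it cannot exceed that context size.
print(llm.n_ctx())  # equals llm._model.n_ctx_train()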

0 commit comments
