Commit a7ba858

Add n_ctx, n_vocab, and n_embd properties
1 parent 01a010b commit a7ba858

1 file changed: 18 additions, 0 deletions

llama_cpp/llama.py

@@ -1291,6 +1291,24 @@ def load_state(self, state: LlamaState) -> None:
         if llama_cpp.llama_set_state_data(self.ctx, state.llama_state) != state_size:
             raise RuntimeError("Failed to set llama state data")
 
+    @property
+    def n_ctx(self) -> int:
+        """Return the context window size."""
+        assert self.ctx is not None
+        return llama_cpp.llama_n_ctx(self.ctx)
+
+    @property
+    def n_embd(self) -> int:
+        """Return the embedding size."""
+        assert self.ctx is not None
+        return llama_cpp.llama_n_embd(self.ctx)
+
+    @property
+    def n_vocab(self) -> int:
+        """Return the vocabulary size."""
+        assert self.ctx is not None
+        return llama_cpp.llama_n_vocab(self.ctx)
+
     @staticmethod
     def token_eos() -> int:
         """Return the end-of-sequence token."""

0 commit comments
