Commit 6b3aa7f

Bump version

1 parent 3fbcded · commit 6b3aa7f

File tree

2 files changed: +18 −1 lines changed

CHANGELOG.md (+17, −0 lines)
```diff
@@ -7,6 +7,23 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.2.12]
+
+- Update llama.cpp to ggerganov/llama.cpp@50337961a678fce4081554b24e56e86b67660163
+- Fix missing `n_seq_id` in `llama_batch` by @NickAlgra in #842
+- Fix exception raised in `__del__` when freeing models by @cebtenzzre in #848
+- Performance improvement for logit bias by @zolastro in #851
+- Fix suffix check arbitrary code execution bug by @mtasic85 in #854
+- Fix typo in `function_call` parameter in `llama_types.py` by @akatora28 in #849
+- Fix streaming not returning `finish_reason` by @gmcgoldr in #798
+- Fix `n_gpu_layers` check to allow values less than 1 for server by @hxy9243 in #826
+- Suppress stdout and stderr when freeing model by @paschembri in #803
+- Fix `llama2` chat format by @delock in #808
+- Add validation for tensor_split size by @eric1932 in #820
+- Print stack trace on server error by @abetlen in d6a130a052db3a50975a719088a9226abfebb266
+- Update docs for gguf by @johnccshen in #783
+- Add `chatml` chat format by @abetlen in 305482bd4156c70802fc054044119054806f4126
+
 ## [0.2.11]
 
 - Fix bug in `llama_model_params` object has no attribute `logits_all` by @abetlen in d696251fbe40015e8616ea7a7d7ad5257fd1b896
```

llama_cpp/__init__.py (+1, −1 line)

```diff
@@ -1,4 +1,4 @@
 from .llama_cpp import *
 from .llama import *
 
-__version__ = "0.2.11"
+__version__ = "0.2.12"
```
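The changelog states the project adheres to Semantic Versioning, so `0.2.11 → 0.2.12` is a patch-level bump. A minimal, stdlib-only sketch of comparing such version strings (the `parse_version` helper is hypothetical, not part of llama-cpp-python):

```python
import re

# Minimal SemVer-style pattern: major.minor.patch, numeric components only.
SEMVER_RE = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

def parse_version(version: str) -> tuple:
    """Parse a 'major.minor.patch' string into a comparable integer tuple."""
    m = SEMVER_RE.match(version)
    if m is None:
        raise ValueError(f"not a valid version string: {version!r}")
    return tuple(int(part) for part in m.groups())

# Tuples compare component-wise, so this confirms a patch-level increase.
old = parse_version("0.2.11")
new = parse_version("0.2.12")
assert new > old and new[:2] == old[:2]
```

Comparing integer tuples rather than raw strings avoids the classic pitfall where `"0.2.9" > "0.2.12"` lexicographically.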
