Commit cbce061

Bump version

1 parent 8b4db73 commit cbce061
File tree: 2 files changed, +12 −1 lines

‎CHANGELOG.md — 11 additions, 0 deletions

```diff
@@ -7,6 +7,17 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ## [Unreleased]
 
+## [0.2.23]
+
+- Update llama.cpp to ggerganov/llama.cpp@948ff137ec37f1ec74c02905917fa0afc9b97514
+- Add qwen chat format by @yhfgyyf in #1005
+- Add support for running the server with SSL by @rgerganov in #994
+- Replace logits_to_logprobs implementation with numpy equivalent to llama.cpp by @player1537 in #991
+- Fix UnsupportedOperation: fileno in suppress_stdout_stderr by @zocainViken in #961
+- Add Pygmalion chat format by @chiensen in #986
+- README.md multimodal params fix by @zocainViken in #967
+- Fix minor typo in README by @aniketmaurya in #958
+
 ## [0.2.22]
 
 - Update llama.cpp to ggerganov/llama.cpp@8a7b2fa528f130631a5f43648481596ab320ed5a
```
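The changelog entry from #991 replaces the `logits_to_logprobs` implementation with a numpy equivalent; the underlying operation is a log-softmax over the raw logits. A minimal sketch of that computation (the function body here is an illustration of the technique, not the library's exact code):

```python
import numpy as np

def logits_to_logprobs(logits):
    # Numerically stable log-softmax: subtracting the row max first
    # keeps np.exp from overflowing for large logit values.
    logits = np.asarray(logits, dtype=np.float64)
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    log_sum = np.log(np.sum(np.exp(shifted), axis=-1, keepdims=True))
    return shifted - log_sum

logprobs = logits_to_logprobs([1.0, 2.0, 3.0])
```

Exponentiating the result recovers a proper probability distribution, which is a quick sanity check for any implementation of this conversion.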

‎llama_cpp/__init__.py — 1 addition, 1 deletion

```diff
@@ -1,4 +1,4 @@
 from .llama_cpp import *
 from .llama import *
 
-__version__ = "0.2.22"
+__version__ = "0.2.23"
```
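The bump from `"0.2.22"` to `"0.2.23"` sorts correctly only when the dotted version string is compared numerically, component by component. A minimal sketch of such a comparison (the helper name is hypothetical, not part of the package):

```python
def parse_version(version):
    # Split "0.2.23" into (0, 2, 23) so versions compare as integer
    # tuples; plain string comparison would put "0.2.9" after "0.2.23".
    return tuple(int(part) for part in version.split("."))

assert parse_version("0.2.23") > parse_version("0.2.22")
assert parse_version("0.2.9") < parse_version("0.2.23")
```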
