Commit d4f343c

Merge branch 'abetlen:main' into main
2 parents: 6ecf40c + c2e31ee
12 files changed: +289 -46 lines

.dockerignore
+166 lines changed: 166 additions & 0 deletions
@@ -0,0 +1,166 @@
+_skbuild/
+
+.envrc
+
+models/
+
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+# For a library or package, you might want to ignore these files since the code is
+# intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# poetry
+# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+# This is especially recommended for binary packages to ensure reproducibility, and is more
+# commonly ignored for libraries.
+# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+#poetry.lock
+
+# pdm
+# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+#pdm.lock
+# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+# in version control.
+# https://pdm.fming.dev/#use-with-ide
+.pdm.toml
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
+
+# PyCharm
+# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+# and can be added to the global gitignore or merged into this file. For a more nuclear
+# option (not recommended) you can uncomment the following to ignore the entire idea folder.
+.idea/

‎.github/workflows/build-docker.yaml

+39 lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
+name: Build Docker
+
+on: workflow_dispatch
+
+permissions:
+  contents: write
+  packages: write
+
+jobs:
+  docker:
+    name: Build and push Docker image
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v3
+        with:
+          submodules: "true"
+
+      - name: Set up QEMU
+        uses: docker/setup-qemu-action@v2
+
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v2
+
+      - name: Login to GitHub Container Registry
+        uses: docker/login-action@v2
+        with:
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+
+      - name: Build and push
+        uses: docker/build-push-action@v4
+        with:
+          context: .
+          push: true # push to registry
+          pull: true # always fetch the latest base images
+          platforms: linux/amd64,linux/arm64 # build for both amd64 and arm64
+          tags: ghcr.io/abetlen/llama-cpp-python:latest

‎.github/workflows/publish.yaml

+1 -1 lines changed: 1 addition & 1 deletion
@@ -28,4 +28,4 @@ jobs:
       # if: startsWith(github.ref, 'refs/tags')
       uses: pypa/gh-action-pypi-publish@release/v1
       with:
-        password: ${{ secrets.PYPI_API_TOKEN }}
+        password: ${{ secrets.PYPI_API_TOKEN }}

‎Dockerfile

+15 lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
+FROM python:3-bullseye
+
+# We need to set the host to 0.0.0.0 to allow outside access
+ENV HOST 0.0.0.0
+
+COPY . .
+
+# Install the package
+RUN apt update && apt install -y libopenblas-dev
+RUN python -m pip install --upgrade pip pytest cmake scikit-build setuptools fastapi uvicorn sse-starlette
+
+RUN LLAMA_OPENBLAS=1 python3 setup.py develop
+
+# Run the server
+CMD python3 -m llama_cpp.server

‎README.md

+9 -2 lines changed: 9 additions & 2 deletions
@@ -72,6 +72,14 @@ python3 -m llama_cpp.server

 Navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to see the OpenAPI documentation.

+## Docker image
+
+A Docker image is available on [GHCR](https://ghcr.io/abetlen/llama-cpp-python). To run the server:
+
+```bash
+docker run --rm -it -p8000:8000 -v /path/to/models:/models -eMODEL=/models/ggml-model-name.bin ghcr.io/abetlen/llama-cpp-python:latest
+```
+
 ## Low-level API

 The low-level API is a direct `ctypes` binding to the C API provided by `llama.cpp`.
@@ -90,8 +98,7 @@ This package is under active development and I welcome any contributions.
 To get started, clone the repository and install the package in development mode:

 ```bash
-git clone git@github.com:abetlen/llama-cpp-python.git
-git submodule update --init --recursive
+git clone --recurse-submodules git@github.com:abetlen/llama-cpp-python.git
 # Will need to be re-run any time vendor/llama.cpp is updated
 python3 setup.py develop
 ```

‎llama_cpp/llama.py

+6 -7 lines changed: 6 additions & 7 deletions
@@ -306,7 +306,7 @@ def _sample_top_p_top_k(
         llama_cpp.llama_sample_typical(
             ctx=self.ctx,
             candidates=llama_cpp.ctypes.pointer(candidates),
-            p=llama_cpp.c_float(1.0)
+            p=llama_cpp.c_float(1.0),
         )
         llama_cpp.llama_sample_top_p(
             ctx=self.ctx,
@@ -637,10 +637,7 @@ def _create_completion(
                 self.detokenize([token]).decode("utf-8", errors="ignore")
                 for token in all_tokens
             ]
-            all_logprobs = [
-                [Llama.logit_to_logprob(logit) for logit in row]
-                for row in self.eval_logits
-            ]
+            all_logprobs = [Llama._logits_to_logprobs(row) for row in self.eval_logits]
             for token, token_str, logprobs_token in zip(
                 all_tokens, all_token_strs, all_logprobs
             ):
@@ -980,5 +977,7 @@ def token_bos() -> llama_cpp.llama_token:
         return llama_cpp.llama_token_bos()

     @staticmethod
-    def logit_to_logprob(x: float) -> float:
-        return math.log(1.0 + math.exp(x))
+    def logits_to_logprobs(logits: List[llama_cpp.c_float]) -> List[llama_cpp.c_float]:
+        exps = [math.exp(float(x)) for x in logits]
+        sum_exps = sum(exps)
+        return [llama_cpp.c_float(math.log(x / sum_exps)) for x in exps]
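The removed helper applied `math.log(1.0 + math.exp(x))` to each logit independently (a softplus), which does not yield normalized log-probabilities; the new `logits_to_logprobs` applies a log-softmax over each row of logits. Below is a minimal standalone sketch of the same normalization in plain Python floats (the method above works on `llama_cpp.c_float` values); the max-subtraction for numerical stability is an extra of this sketch, not part of the diff.

```python
import math
from typing import List


def logits_to_logprobs(logits: List[float]) -> List[float]:
    # Log-softmax: log(exp(x_i) / sum_j exp(x_j)).
    # Subtracting the max logit first avoids overflow in exp() and does not
    # change the result, because log-softmax is shift-invariant.
    max_logit = max(logits)
    exps = [math.exp(x - max_logit) for x in logits]
    total = sum(exps)
    return [math.log(e / total) for e in exps]


# The exponentiated log-probabilities sum to 1, as expected of a distribution.
probs = [math.exp(lp) for lp in logits_to_logprobs([1.0, 2.0, 3.0])]
assert abs(sum(probs) - 1.0) < 1e-9
```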

‎llama_cpp/llama_cpp.py

+3 -3 lines changed: 3 additions & 3 deletions
@@ -136,9 +136,9 @@ class llama_context_params(Structure):
 ) # tok_embeddings.weight and output.weight are F16
 LLAMA_FTYPE_MOSTLY_Q4_2 = ctypes.c_int(5) # except 1d tensors
 # LLAMA_FTYPE_MOSTYL_Q4_3 = ctypes.c_int(6) # except 1d tensors
-LLAMA_FTYPE_MOSTYL_Q8_0 = ctypes.c_int(7) # except 1d tensors
-LLAMA_FTYPE_MOSTYL_Q5_0 = ctypes.c_int(8) # except 1d tensors
-LLAMA_FTYPE_MOSTYL_Q5_1 = ctypes.c_int(9) # except 1d tensors
+LLAMA_FTYPE_MOSTLY_Q8_0 = ctypes.c_int(7) # except 1d tensors
+LLAMA_FTYPE_MOSTLY_Q5_0 = ctypes.c_int(8) # except 1d tensors
+LLAMA_FTYPE_MOSTLY_Q5_1 = ctypes.c_int(9) # except 1d tensors

 # Functions
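These `LLAMA_FTYPE_*` constants are `ctypes.c_int` wrappers rather than plain Python ints, so code that keys or reports on them typically goes through the `.value` attribute. A small illustrative lookup follows; it is hypothetical, not code from the repository, and only reuses the corrected names and values shown in the diff.

```python
import ctypes

# Constant values copied from the diff above; names use the corrected spelling.
LLAMA_FTYPE_MOSTLY_Q8_0 = ctypes.c_int(7)  # except 1d tensors
LLAMA_FTYPE_MOSTLY_Q5_0 = ctypes.c_int(8)  # except 1d tensors
LLAMA_FTYPE_MOSTLY_Q5_1 = ctypes.c_int(9)  # except 1d tensors

# Hypothetical lookup table keyed on the underlying int, e.g. for logging
# which quantization format a model reports.
FTYPE_NAMES = {
    LLAMA_FTYPE_MOSTLY_Q8_0.value: "mostly Q8_0",
    LLAMA_FTYPE_MOSTLY_Q5_0.value: "mostly Q5_0",
    LLAMA_FTYPE_MOSTLY_Q5_1.value: "mostly Q5_1",
}

assert FTYPE_NAMES[8] == "mostly Q5_0"
```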

‎llama_cpp/server/__main__.py

+2 -2 lines changed: 2 additions & 2 deletions
@@ -24,10 +24,10 @@
 import os
 import uvicorn

-from llama_cpp.server.app import app, init_llama
+from llama_cpp.server.app import create_app

 if __name__ == "__main__":
-    init_llama()
+    app = create_app()

     uvicorn.run(
         app, host=os.getenv("HOST", "localhost"), port=int(os.getenv("PORT", 8000))
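The server entry point now builds the application through a `create_app()` factory instead of importing a module-level `app` and calling `init_llama()`. The diff does not include `llama_cpp/server/app.py` itself, so the sketch below only illustrates the general FastAPI application-factory pattern being adopted; the route and settings names are hypothetical, not the project's actual API.

```python
# Illustrative application-factory sketch; hypothetical names, assuming FastAPI
# (which the Dockerfile above installs). The project's real factory lives in
# llama_cpp/server/app.py and is not shown in this diff.
import os
from typing import Optional

import uvicorn
from fastapi import FastAPI


def create_app(model_path: Optional[str] = None) -> FastAPI:
    # Build and configure a fresh app per call instead of relying on
    # import-time side effects the way the old module-level `app` did.
    app = FastAPI(title="llama-cpp-python server (sketch)")
    app.state.model_path = model_path or os.getenv("MODEL", "")

    @app.get("/health")
    def health() -> dict:
        # Hypothetical endpoint, included only so the sketch is runnable.
        return {"status": "ok", "model": app.state.model_path}

    return app


if __name__ == "__main__":
    # Mirrors the entry point above: construct the app, then hand it to uvicorn.
    uvicorn.run(
        create_app(), host=os.getenv("HOST", "localhost"), port=int(os.getenv("PORT", 8000))
    )
```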
