Commit a89e3ca

Merge branch 'main' of github.com:abetlen/llama_cpp_python into main

2 parents 5377f97 + 232880c

File tree: 1 file changed (+21 −5 lines)

.github/ISSUE_TEMPLATE/bug_report.md (21 additions, 5 deletions)
````diff
@@ -57,7 +57,17 @@ Please provide detailed steps for reproducing the issue. We are not sitting in f
 3. step 3
 4. etc.
 
-**Note: Many issues seem to be regarding performance issues / differences with `llama.cpp`. In these cases we need to confirm that you're comparing against the version of `llama.cpp` that was built with your python package, and which parameters you're passing to the context.**
+**Note: Many issues seem to be regarding functional or performance issues / differences with `llama.cpp`. In these cases we need to confirm that you're comparing against the version of `llama.cpp` that was built with your python package, and which parameters you're passing to the context.**
+
+Try the following:
+
+1. `git clone https://github.com/abetlen/llama-cpp-python`
+2. `cd llama-cpp-python`
+3. `rm -rf _skbuild/` # delete any old builds
+4. `python setup.py develop`
+5. `cd ./vendor/llama.cpp`
+6. Follow [llama.cpp's instructions](https://github.com/ggerganov/llama.cpp#build) to `cmake` llama.cpp
+7. Run llama.cpp's `./main` with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, [log an issue with llama.cpp](https://github.com/ggerganov/llama.cpp/issues)
 
 # Failure Logs
 
@@ -73,8 +83,14 @@ commit 47b0aa6e957b93dbe2c29d53af16fbae2dd628f2
 llama-cpp-python$ python3 --version
 Python 3.10.10
 
-llama-cpp-python$ pip list | egrep "uvicorn|fastapi|sse-starlette"
-fastapi           0.95.0
-sse-starlette     1.3.3
-uvicorn           0.21.1
+llama-cpp-python$ pip list | egrep "uvicorn|fastapi|sse-starlette|numpy"
+fastapi           0.95.0
+numpy             1.24.3
+sse-starlette     1.3.3
+uvicorn           0.21.1
+
+llama-cpp-python/vendor/llama.cpp$ git log | head -3
+commit 66874d4fbcc7866377246efbcee938e8cc9c7d76
+Author: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
+Date:   Thu May 25 20:18:01 2023 -0600
 ```
````
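Taken together, the troubleshooting steps this commit adds to the template amount to a short shell session. A minimal sketch follows; the CMake invocation mirrors the generic flow in llama.cpp's README rather than anything specified by this commit, and the model path and prompt passed to `main` are placeholders you would replace with the exact parameters you used with llama-cpp-python:

```bash
# Rebuild llama-cpp-python from source so the vendored llama.cpp
# matches what your Python package was built against.
git clone https://github.com/abetlen/llama-cpp-python
cd llama-cpp-python
rm -rf _skbuild/          # delete any old builds
python setup.py develop

# Build the vendored llama.cpp directly (generic CMake flow from its README;
# see the linked build instructions for platform-specific options).
cd ./vendor/llama.cpp
mkdir build && cd build
cmake ..
cmake --build . --config Release

# Try to reproduce with llama.cpp's own binary, using the same model and
# parameters you passed to llama-cpp-python (both are placeholders here).
./bin/main -m /path/to/model.bin -p "your prompt"

# Collect the environment details the template's Failure Logs section asks for.
python3 --version
pip list | egrep "uvicorn|fastapi|sse-starlette|numpy"
git log | head -3         # run inside vendor/llama.cpp to record its commit
```

If the standalone `main` reproduces the problem, the issue belongs upstream at https://github.com/ggerganov/llama.cpp/issues; if it does not, filing against llama-cpp-python with the logs above is the right place.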
