# Description
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- I carefully followed the README.md.
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new bug or useful enhancement to share.
# Expected Behavior
Streaming output should work correctly.
# Current Behavior

Streaming output is interrupted with the following error message:

```json
{
  "error": true,
  "message": "Unexpected end of JSON input"
}
```
The same API works correctly with the same request body when called from Postman; the only difference is that the header is set to `Accept: application/json` in Postman.
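For reference, the working Postman-style request can be expressed as a short Python sketch. The base URL and request body below are illustrative placeholders, not the exact values from my deployment:

```python
import requests

# Illustrative values only; the real deployment uses a different host/port and body.
BASE_URL = "http://localhost:8000"
payload = {
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,
}

# Working configuration: same body, but Accept set to application/json (as in Postman).
resp = requests.post(
    f"{BASE_URL}/v1/chat/completions",
    json=payload,
    headers={"Accept": "application/json"},
)
print(resp.status_code)
print(resp.text)
```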
# Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.
Running in a Docker image. Image information:

- Base image: `nvidia/cuda:12.1.0-devel-ubuntu22.04`
- Python: 3.10.2
- llama-cpp-python installed in the Dockerfile with:

```shell
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip3 install llama-cpp-python[server]
```
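For completeness, here is a quick way to confirm which llama-cpp-python build is installed inside the container (a minimal sketch; not part of the original setup):

```python
from importlib.metadata import version

# Report which llama-cpp-python build is installed in the container.
print(version("llama-cpp-python"))
```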
# Failure Information (for bugs)

The Web UI shows the error message:

```json
{
  "error": true,
  "message": "Unexpected end of JSON input"
}
```
# Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

Call the `/v1/chat/completions` API with the request header set to `Accept: text/event-stream`.
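Here is the same reproduction as a short Python sketch of the failing configuration (base URL and request body are again illustrative placeholders):

```python
import requests

# Illustrative values only; adjust to the actual deployment.
BASE_URL = "http://localhost:8000"
payload = {
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,
}

# Failing configuration: server-sent events requested via the Accept header.
with requests.post(
    f"{BASE_URL}/v1/chat/completions",
    json=payload,
    headers={"Accept": "text/event-stream"},
    stream=True,
) as resp:
    for line in resp.iter_lines():
        if line:
            print(line.decode("utf-8"))
```

The only difference from the working request shown above is the `Accept` header.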
Note: Many issues seem to be regarding functional or performance issues / differences with `llama.cpp`. In these cases we need to confirm that you're comparing against the version of `llama.cpp` that was built with your python package, and which parameters you're passing to the context.

Try the following:

```shell
git clone https://github.com/abetlen/llama-cpp-python
cd llama-cpp-python
rm -rf _skbuild/  # delete any old builds
python -m pip install .
cd ./vendor/llama.cpp
```

- Follow llama.cpp's instructions to `cmake` llama.cpp
- Run llama.cpp's `./main` with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp
# Failure Logs

There is no error message output by llama-cpp-python; the server log shows only normal output:
```
llama_print_timings: load time = 4354.71 ms
llama_print_timings: sample time = 14.60 ms / 62 runs ( 0.24 ms per token, 4245.70 tokens per second)
llama_print_timings: prompt eval time = 189.30 ms / 7 tokens ( 27.04 ms per token, 36.98 tokens per second)
llama_print_timings: eval time = 2788.22 ms / 61 runs ( 45.71 ms per token, 21.88 tokens per second)
llama_print_timings: total time = 3134.24 ms
Llama.generate: prefix-match hit
llama_print_timings: load time = 4354.71 ms
llama_print_timings: sample time = 1.86 ms / 8 runs ( 0.23 ms per token, 4294.15 tokens per second)
llama_print_timings: prompt eval time = 1138.13 ms / 128 tokens ( 8.89 ms per token, 112.47 tokens per second)
llama_print_timings: eval time = 375.45 ms / 7 runs ( 53.64 ms per token, 18.64 tokens per second)
llama_print_timings: total time = 1774.18 ms
INFO: 10.70.0.104:47466 - "POST /v1/chat/completions?path=v1&path=chat&path=completions HTTP/1.1" 200 OK
Llama.generate: prefix-match hit
INFO: 10.70.0.200:16312 - "POST /v1/chat/completions?path=v1&path=chat&path=completions HTTP/1.1" 200 OK
```
Please include any relevant log snippets or files. If it works under one configuration but not under another, please provide logs for both configurations and their corresponding outputs so it is easy to see where behavior changes.
Also, please try to avoid using screenshots if at all possible. Instead, copy/paste the console output and use GitHub's markdown to cleanly format your logs for easy readability.