Commit 30d32e9

More README.md corrections and cleanup

1 parent d4eef73, commit 30d32e9

1 file changed: +5 −4 lines changed

docker/README.md (+5 −4: 5 additions, 4 deletions)
@@ -4,7 +4,7 @@
 
 [Install Docker Engine](https://docs.docker.com/engine/install)
 
-**Note #2:** NVidia GPU CuBLAS support requires a NVidia GPU with sufficient VRAM (approximately as much as the size above) and Docker NVidia support (see [container-toolkit/install-guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html))
+**Note #2:** NVidia GPU CuBLAS support requires an NVidia GPU with sufficient VRAM (approximately as much as the size in the table below) and Docker NVidia support (see [container-toolkit/install-guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html))
 
 # Simple Dockerfiles for building the llama-cpp-python server with external model bin files
 ## openblas_simple - a simple Dockerfile for non-GPU OpenBLAS, where the model is located outside the Docker image
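
The note above presumes Docker can actually reach the GPU. As a minimal sketch of a GPU-enabled run, assuming the NVIDIA Container Toolkit is installed and a CuBLAS-enabled image has been built locally (the image tag here is a placeholder, not something defined in this commit):

```
# Sketch only: <cuda-image> is a placeholder for a locally built CuBLAS image.
# --gpus all exposes the host GPU(s) to the container (needs nvidia-container-toolkit).
docker run --gpus all \
  -e USE_MLOCK=0 \
  -e MODEL=/var/model/<model-path> \
  -v <model-root-path>:/var/model \
  <cuda-image>
```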
@@ -23,16 +23,17 @@ docker run -e USE_MLOCK=0 -e MODEL=/var/model/<model-path> -v <model-root-path>:
 ```
 where `<model-root-path>/<model-path>` is the full path to the model file on the Docker host system.
 
-# "Open-Llama-in-a-box" - Download a MIT licensed Open Llama model and install into a Docker image that runs an OpenBLAS-enabled llama-cpp-python server
+# "Open-Llama-in-a-box"
+## Download an Apache V2.0 licensed 3B parameter Open Llama model and install it into a Docker image that runs an OpenBLAS-enabled llama-cpp-python server
 ```
 $ cd ./open_llama
 ./build.sh
 ./start.sh
 ```
 
 # Manually choose your own Llama model from Hugging Face
-- `python3 ./hug_model.py -a TheBloke -t llama`
-- You should now have a model in the current directory and `model.bin` symlinked to it for the subsequent Docker build and copy step. e.g.
+`python3 ./hug_model.py -a TheBloke -t llama`
+You should now have a model in the current directory and `model.bin` symlinked to it for the subsequent Docker build and copy step, e.g.
 ```
 docker $ ls -lh *.bin
 -rw-rw-r-- 1 user user 4.8G May 23 18:30 <downloaded-model-file>q5_1.bin
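
The `docker run` command in the hunk header above is cut off after the volume flag. As an illustrative completion, tying it to the `ls -lh` output just shown, with the host directory and the `/var/model` mount target assumed (inferred from the `MODEL` variable, not shown in the commit) and the image name left as a placeholder:

```
# Example substitution (illustrative paths only):
#   host model file:     /home/user/models/<downloaded-model-file>q5_1.bin
#   <model-root-path> =  /home/user/models
#   <model-path>      =  <downloaded-model-file>q5_1.bin
docker run -e USE_MLOCK=0 \
  -e MODEL=/var/model/<downloaded-model-file>q5_1.bin \
  -v /home/user/models:/var/model \
  <image-name>
```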
