Commit 0481a3a

fix(docs): Update LLAMA_ flags to GGML_ flags

1 parent 09a4f78 · commit 0481a3a

2 files changed: +24 -24 lines changed

README.md

+23 -23 lines changed (23 additions & 23 deletions)
@@ -64,13 +64,13 @@ All `llama.cpp` cmake build options can be set via the `CMAKE_ARGS` environment
 
 ```bash
 # Linux and Mac
-CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" \
+CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS" \
 pip install llama-cpp-python
 ```
 
 ```powershell
 # Windows
-$env:CMAKE_ARGS = "-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS"
+$env:CMAKE_ARGS = "-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS"
 pip install llama-cpp-python
 ```
 </details>
@@ -83,13 +83,13 @@ They can also be set via `pip install -C / --config-settings` command and saved
 ```bash
 pip install --upgrade pip # ensure pip is up to date
 pip install llama-cpp-python \
-  -C cmake.args="-DLLAMA_BLAS=ON;-DLLAMA_BLAS_VENDOR=OpenBLAS"
+  -C cmake.args="-DGGML_BLAS=ON;-DGGML_BLAS_VENDOR=OpenBLAS"
 ```
 
 ```txt
 # requirements.txt
 
-llama-cpp-python -C cmake.args="-DLLAMA_BLAS=ON;-DLLAMA_BLAS_VENDOR=OpenBLAS"
+llama-cpp-python -C cmake.args="-DGGML_BLAS=ON;-DGGML_BLAS_VENDOR=OpenBLAS"
 ```
 
 </details>
@@ -101,20 +101,20 @@ Below are some common backends, their build commands and any additional environm
 <details open>
 <summary>OpenBLAS (CPU)</summary>
 
-To install with OpenBLAS, set the `LLAMA_BLAS` and `LLAMA_BLAS_VENDOR` environment variables before installing:
+To install with OpenBLAS, set the `GGML_BLAS` and `GGML_BLAS_VENDOR` environment variables before installing:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
+CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
 ```
 </details>
 
 <details>
 <summary>CUDA</summary>
 
-To install with CUDA support, set the `LLAMA_CUDA=on` environment variable before installing:
+To install with CUDA support, set the `GGML_CUDA=on` environment variable before installing:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python
+CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
 ```
 
 **Pre-built Wheel (New)**
@@ -147,10 +147,10 @@ pip install llama-cpp-python \
 <details>
 <summary>Metal</summary>
 
-To install with Metal (MPS), set the `LLAMA_METAL=on` environment variable before installing:
+To install with Metal (MPS), set the `GGML_METAL=on` environment variable before installing:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
+CMAKE_ARGS="-DGGML_METAL=on" pip install llama-cpp-python
 ```
 
 **Pre-built Wheel (New)**
@@ -170,54 +170,54 @@ pip install llama-cpp-python \
 <details>
 <summary>hipBLAS (ROCm)</summary>
 
-To install with hipBLAS / ROCm support for AMD cards, set the `LLAMA_HIPBLAS=on` environment variable before installing:
+To install with hipBLAS / ROCm support for AMD cards, set the `GGML_HIPBLAS=on` environment variable before installing:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
+CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install llama-cpp-python
 ```
 
 </details>
 
 <details>
 <summary>Vulkan</summary>
 
-To install with Vulkan support, set the `LLAMA_VULKAN=on` environment variable before installing:
+To install with Vulkan support, set the `GGML_VULKAN=on` environment variable before installing:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_VULKAN=on" pip install llama-cpp-python
+CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python
 ```
 
 </details>
 
 <details>
 <summary>Kompute</summary>
 
-To install with Kompute support, set the `LLAMA_KOMPUTE=on` environment variable before installing:
+To install with Kompute support, set the `GGML_KOMPUTE=on` environment variable before installing:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_KOMPUTE=on" pip install llama-cpp-python
+CMAKE_ARGS="-DGGML_KOMPUTE=on" pip install llama-cpp-python
 ```
 </details>
 
 <details>
 <summary>SYCL</summary>
 
-To install with SYCL support, set the `LLAMA_SYCL=on` environment variable before installing:
+To install with SYCL support, set the `GGML_SYCL=on` environment variable before installing:
 
 ```bash
 source /opt/intel/oneapi/setvars.sh
-CMAKE_ARGS="-DLLAMA_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx" pip install llama-cpp-python
+CMAKE_ARGS="-DGGML_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx" pip install llama-cpp-python
 ```
 </details>
 
 <details>
 <summary>RPC</summary>
 
-To install with RPC support, set the `LLAMA_RPC=on` environment variable before installing:
+To install with RPC support, set the `GGML_RPC=on` environment variable before installing:
 
 ```bash
 source /opt/intel/oneapi/setvars.sh
-CMAKE_ARGS="-DLLAMA_RPC=on" pip install llama-cpp-python
+CMAKE_ARGS="-DGGML_RPC=on" pip install llama-cpp-python
 ```
 </details>
 
@@ -231,7 +231,7 @@ If you run into issues where it complains it can't find `'nmake'` `'?'` or CMAKE
 
 ```ps
 $env:CMAKE_GENERATOR = "MinGW Makefiles"
-$env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on -DCMAKE_C_COMPILER=C:/w64devkit/bin/gcc.exe -DCMAKE_CXX_COMPILER=C:/w64devkit/bin/g++.exe"
+$env:CMAKE_ARGS = "-DGGML_OPENBLAS=on -DCMAKE_C_COMPILER=C:/w64devkit/bin/gcc.exe -DCMAKE_CXX_COMPILER=C:/w64devkit/bin/g++.exe"
 ```
 
 See the above instructions and set `CMAKE_ARGS` to the BLAS backend you want to use.
@@ -260,7 +260,7 @@ Otherwise, while installing it will build the llama.cpp x86 version which will b
 Try installing with
 
 ```bash
-CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DCMAKE_APPLE_SILICON_PROCESSOR=arm64 -DLLAMA_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python
+CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DCMAKE_APPLE_SILICON_PROCESSOR=arm64 -DGGML_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python
 ```
 </details>
 
@@ -667,7 +667,7 @@ python3 -m llama_cpp.server --model models/7B/llama-model.gguf
 Similar to Hardware Acceleration section above, you can also install with GPU (cuBLAS) support like this:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_CUDA=on" FORCE_CMAKE=1 pip install 'llama-cpp-python[server]'
+CMAKE_ARGS="-DGGML_CUDA=on" FORCE_CMAKE=1 pip install 'llama-cpp-python[server]'
 python3 -m llama_cpp.server --model models/7B/llama-model.gguf --n_gpu_layers 35
 ```

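Taken together, the change for anyone rebuilding an existing install is a flag rename only. A minimal before/after sketch, assuming a CUDA build (substitute whichever backend flag from the hunks above applies; the reinstall options mirror the Apple-silicon troubleshooting hunk):

```bash
# Old flag name (pre-rename), as removed by this commit:
#   CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python

# New GGML_ flag name, forcing a clean rebuild so a previously cached wheel is not reused:
CMAKE_ARGS="-DGGML_CUDA=on" pip install --upgrade --force-reinstall --no-cache-dir llama-cpp-python
```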
docs/install/macos.md

+1 -1 lines changed (1 addition & 1 deletion)
@@ -30,7 +30,7 @@ conda activate llama
 *(you needed xcode installed in order pip to build/compile the C++ code)*
 ```
 pip uninstall llama-cpp-python -y
-CMAKE_ARGS="-DLLAMA_METAL=on" pip install -U llama-cpp-python --no-cache-dir
+CMAKE_ARGS="-DGGML_METAL=on" pip install -U llama-cpp-python --no-cache-dir
 pip install 'llama-cpp-python[server]'
 
 # you should now have llama-cpp-python v0.1.62 or higher installed
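After reinstalling with the renamed flag, a quick sanity check (a hypothetical verification step, not part of this commit) is to confirm the version the macOS guide expects:

```bash
# Hypothetical check: the macOS guide above expects llama-cpp-python v0.1.62 or higher.
python3 -c "import llama_cpp; print(llama_cpp.__version__)"
```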

0 commit comments
