Commit 89ae347

Remove references to force_cmake
1 parent 1dd3f47 · commit 89ae347

1 file changed: 5 additions, 5 deletions

README.md

@@ -52,31 +52,31 @@ Otherwise, while installing it will build the llama.ccp x86 version which will b
 To install with OpenBLAS, set the `LLAMA_BLAS and LLAMA_BLAS_VENDOR` environment variables before installing:

 ```bash
-CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" FORCE_CMAKE=1 pip install llama-cpp-python
+CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
 ```

 To install with cuBLAS, set the `LLAMA_CUBLAS=1` environment variable before installing:

 ```bash
-CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
+CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
 ```

 To install with CLBlast, set the `LLAMA_CLBLAST=1` environment variable before installing:

 ```bash
-CMAKE_ARGS="-DLLAMA_CLBLAST=on" FORCE_CMAKE=1 pip install llama-cpp-python
+CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
 ```

 To install with Metal (MPS), set the `LLAMA_METAL=on` environment variable before installing:

 ```bash
-CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python
+CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
 ```

 To install with hipBLAS / ROCm support for AMD cards, set the `LLAMA_HIPBLAS=on` environment variable before installing:

 ```bash
-CMAKE_ARGS="-DLLAMA_HIPBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
+CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
 ```

 #### Windows remarks
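If llama-cpp-python was previously installed as a CPU-only build, pip may reuse the cached wheel and ignore new `CMAKE_ARGS`. A minimal sketch of a forced rebuild using standard pip options (the extra pip flags here are an illustrative assumption, not part of this commit's README text):

```bash
# Force pip to rebuild the package so the new CMake flags take effect.
# --upgrade, --force-reinstall and --no-cache-dir are standard pip options,
# assumed here for illustration; they are not part of this commit.
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python --upgrade --force-reinstall --no-cache-dir
```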
