Commit d98a24a

docs: Remove references to deprecated opencl backend. Closes abetlen#1512

1 parent 6c33190
2 files changed: 0 additions, 14 deletions

Makefile (0 additions, 3 deletions)

```diff
@@ -24,9 +24,6 @@ build.debug:
 build.cuda:
 	CMAKE_ARGS="-DLLAMA_CUDA=on" python3 -m pip install --verbose -e .
 
-build.opencl:
-	CMAKE_ARGS="-DLLAMA_CLBLAST=on" python3 -m pip install --verbose -e .
-
 build.openblas:
 	CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" python3 -m pip install --verbose -e .
```

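The Makefile targets above all follow the same pattern: pass backend-selecting CMake flags to the build through the `CMAKE_ARGS` environment variable. As a minimal sketch (flags taken from the Makefile targets shown in this commit; the CUDA backend is used as the example, and a plain `pip install llama-cpp-python` is assumed rather than the repo's editable `-e .` install):

```shell
# Select a llama.cpp backend at install time by forwarding CMake flags
# via CMAKE_ARGS; here the CUDA backend, per the build.cuda target above.
CMAKE_ARGS="-DLLAMA_CUDA=on" python3 -m pip install --verbose llama-cpp-python
```

The deprecated `build.opencl` target removed here worked the same way, just with `-DLLAMA_CLBLAST=on` as the flag.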
README.md (0 additions, 11 deletions)

````diff
@@ -165,17 +165,6 @@ pip install llama-cpp-python \
   --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/metal
 ```
 
-</details>
-<details>
-
-<summary>CLBlast (OpenCL)</summary>
-
-To install with CLBlast, set the `LLAMA_CLBLAST=on` environment variable before installing:
-
-```bash
-CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
-```
-
 </details>
 
 <details>
````