Commit 7f59856

fix: Enable CUDA backend for llava. Closes abetlen#1324
1 parent 7316502

File tree

2 files changed: +3 -2 lines changed

CMakeLists.txt

2 additions & 1 deletion
@@ -51,8 +51,9 @@ if (LLAMA_BUILD)
     )
 
     if (LLAVA_BUILD)
-        if (LLAMA_CUBLAS)
+        if (LLAMA_CUBLAS OR LLAMA_CUDA)
             add_compile_definitions(GGML_USE_CUBLAS)
+            add_compile_definitions(GGML_USE_CUDA)
         endif()
 
         if (LLAMA_METAL)
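
Read back from the hunk above, the llava-specific block of CMakeLists.txt after this commit looks roughly like the sketch below. Only the lines visible in the diff are reconstructed; everything outside the hunk is elided.

# Sketch reconstructed from the hunk above, not the full file.
# Either the older LLAMA_CUBLAS flag or the newer LLAMA_CUDA flag now
# defines both CUDA macros, so the llava shared library is built with
# the CUDA backend as well.
if (LLAVA_BUILD)
    if (LLAMA_CUBLAS OR LLAMA_CUDA)
        add_compile_definitions(GGML_USE_CUBLAS)
        add_compile_definitions(GGML_USE_CUDA)
    endif()
    # The Metal branch and the rest of the llava block are unchanged.
endif()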

Makefile

1 addition & 1 deletion
@@ -16,7 +16,7 @@ build.debug:
 	CMAKE_ARGS="-DCMAKE_BUILD_TYPE=Debug" python3 -m pip install --verbose --config-settings=cmake.verbose=true --config-settings=logging.level=INFO --config-settings=install.strip=false --editable .
 
 build.cuda:
-	CMAKE_ARGS="-DLLAMA_CUBLAS=on" python3 -m pip install --verbose -e .
+	CMAKE_ARGS="-DLLAMA_CUDA=on" python3 -m pip install --verbose -e .
 
 build.opencl:
 	CMAKE_ARGS="-DLLAMA_CLBLAST=on" python3 -m pip install --verbose -e .
