Insights: abetlen/llama-cpp-python
Overview
- 0 Merged pull requests
- 1 Open pull request
- 0 Closed issues
- 3 New issues
There hasn’t been any commit activity on abetlen/llama-cpp-python in the last week.
1 Pull request opened by 1 person
- Remove llama_kv_cache_view and deprecations were deleted on llama.cpp side too (#2030, opened Jun 13, 2025)
3 Issues opened by 3 people
- Gemma 3:4B Multimodal CLIP Error [WinError -529697949] Windows Error 0xe06d7363 (#2031, opened Jun 17, 2025)
- Access Violation issue facing for exe created using pyinstaller (#2029, opened Jun 13, 2025)
- Building and installing llama_cpp from source for RTX 50 Blackwell GPU (#2028, opened Jun 13, 2025)
2 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- How to improve GPU utilization (#1674, commented on Jun 13, 2025 • 0 new comments; see the sketch below)
- llama_cpp/lib/libllama.so: undefined symbol: llama_kv_cache_view_init (#2026, commented on Jun 14, 2025 • 0 new comments)
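For questions like #1674 on GPU utilization, the usual first lever in llama-cpp-python is offloading model layers to the GPU through the Llama constructor. A minimal sketch, assuming a CUDA-enabled build of the package and a hypothetical local GGUF model path:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model.gguf",  # hypothetical path; point at your own GGUF file
    n_gpu_layers=-1,  # offload all layers to the GPU (0 keeps inference on the CPU)
    n_ctx=4096,       # context window size
    n_batch=512,      # larger prompt-processing batches tend to keep the GPU busier
    verbose=True,     # startup log reports how many layers were actually offloaded
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```

Whether this helps depends on the build (the wheel must be compiled with GPU support) and on available VRAM; if utilization stays low, a verbose startup log showing fewer offloaded layers than expected is the first thing to check.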