Version 0.1.60 does not install correctly #352

Closed
@Barafu

Description


Summary

In the Oobabooga installation on Windows 11, the package stops working when upgrading from 0.1.57 to 0.1.60; reverting to 0.1.57 fixes the problem. It fails with Shared library with base name 'llama' not found. There is no llama.dll anywhere in the folder, only llama.lib. I do not see any compilation errors.
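The revert described above can be done with pip; the version pin and flags below are taken from the installation log further down:

```shell
# Roll back to the last known-good release (0.1.57, per the report above)
pip install llama-cpp-python==0.1.57 --force-reinstall --no-cache-dir
```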

Logs

Installation log:

 .\cmd_windows.bat
(F:\oobabooga_windows\installer_files\env) F:\oobabooga_windows>pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir
Collecting llama-cpp-python
  Downloading llama-cpp-python-0.1.60.tar.gz (1.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 516.0 kB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0
  Downloading typing_extensions-4.6.3-py3-none-any.whl (31 kB)
Collecting numpy>=1.20.0
  Downloading numpy-1.24.3-cp310-cp310-win_amd64.whl (14.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.8/14.8 MB 1.2 MB/s eta 0:00:00
Collecting diskcache>=5.6.1
  Downloading diskcache-5.6.1-py3-none-any.whl (45 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.6/45.6 kB 2.2 MB/s eta 0:00:00
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... done
  Created wheel for llama-cpp-python: filename=llama_cpp_python-0.1.60-cp310-cp310-win_amd64.whl size=68815 sha256=3963c2f660e18df2a5e72cf2295cffd4d0c632accea654a06b31ee96cf6c5c52
  Stored in directory: F:\oobabooga_windows\installer_files\pip-ephem-wheel-cache-1a8td5kq\wheels\eb\a7\7e\e2f6aaef90347cd520e86d37bf5c613d1b96eeab4512dc080a
Successfully built llama-cpp-python
Installing collected packages: typing-extensions, numpy, diskcache, llama-cpp-python
  Attempting uninstall: typing-extensions
    Found existing installation: typing_extensions 4.5.0
    Uninstalling typing_extensions-4.5.0:
      Successfully uninstalled typing_extensions-4.5.0
  Attempting uninstall: numpy
    Found existing installation: numpy 1.24.3
    Uninstalling numpy-1.24.3:
      Successfully uninstalled numpy-1.24.3
  Attempting uninstall: diskcache
    Found existing installation: diskcache 5.6.1
    Uninstalling diskcache-5.6.1:
      Successfully uninstalled diskcache-5.6.1
  Attempting uninstall: llama-cpp-python
    Found existing installation: llama-cpp-python 0.1.57
    Uninstalling llama-cpp-python-0.1.57:
      Successfully uninstalled llama-cpp-python-0.1.57
Successfully installed diskcache-5.6.1 llama-cpp-python-0.1.60 numpy-1.24.3 typing-extensions-4.6.3

Launch log:

PS F:\oobabooga_windows> .\start_windows.bat
bin F:\oobabooga_windows\installer_files\env\lib\site-packages\bitsandbytes\libbitsandbytes_cuda117.dll
INFO:Loading settings from settings.yaml...
The following models are available:

1. facebook_opt-350m
2. gpt4-x-alpaca-native-13B-ggml-q5_1.bin
3. guanaco-13B.ggmlv3.q5_1.bin
4. Manticore-13B-Chat-Pyg-Guanaco-GGML-q4_0.bin
5. Manticore-13B.ggmlv3.q5_1.bin
6. Manticore-13B.ggmlv3.q5_K_M.bin
7. pygmalion-13b-ggml-q5_1.bin
8. Wizard-Vicuna-30B-Uncensored.ggmlv3.q4_1.bin
9. WizardLM-13B-1.0.ggmlv3.q5_1.bin
10. WizardLM-Uncensored-SuperCOT-Storytelling.ggmlv3.q4_0.bin

Which one do you want to load? 1-10

3

INFO:Loading guanaco-13B.ggmlv3.q5_1.bin...
Traceback (most recent call last):
  File "F:\oobabooga_windows\text-generation-webui\server.py", line 1079, in <module>
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "F:\oobabooga_windows\text-generation-webui\modules\models.py", line 94, in load_model
    output = load_func(model_name)
  File "F:\oobabooga_windows\text-generation-webui\modules\models.py", line 262, in llamacpp_loader
    from modules.llamacpp_model import LlamaCppModel
  File "F:\oobabooga_windows\text-generation-webui\modules\llamacpp_model.py", line 11, in <module>
    from llama_cpp import Llama, LlamaCache
  File "F:\oobabooga_windows\installer_files\env\lib\site-packages\llama_cpp\__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "F:\oobabooga_windows\installer_files\env\lib\site-packages\llama_cpp\llama_cpp.py", line 77, in <module>
    _lib = _load_shared_library(_lib_base_name)
  File "F:\oobabooga_windows\installer_files\env\lib\site-packages\llama_cpp\llama_cpp.py", line 68, in _load_shared_library
    raise FileNotFoundError(
FileNotFoundError: Shared library with base name 'llama' not found

Done!
Press any key to continue . . .
PS F:\oobabooga_windows>
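For reference, the error in the traceback comes from llama_cpp's loader, which searches the package directory for a platform-specific shared library and raises when only the static import library (llama.lib) is present. A minimal sketch of that search (simplified; the real _load_shared_library also adjusts DLL search paths and then calls ctypes.CDLL on the file it finds):

```python
import pathlib
import sys


def find_shared_library(base_name: str, search_dir: pathlib.Path,
                        suffixes=None) -> pathlib.Path:
    """Return the first shared library matching base_name in search_dir.

    Simplified mirror of the search in llama_cpp/llama_cpp.py: a .lib
    import library does not count, which is why the 0.1.60 wheel above
    fails to load even though the build reported success.
    """
    if suffixes is None:
        # Candidate extensions for the current platform.
        if sys.platform.startswith("win"):
            suffixes = [".dll"]
        elif sys.platform == "darwin":
            suffixes = [".so", ".dylib"]
        else:
            suffixes = [".so"]
    for suffix in suffixes:
        for name in (f"lib{base_name}{suffix}", f"{base_name}{suffix}"):
            candidate = search_dir / name
            if candidate.exists():
                return candidate
    # Same message as in the traceback above.
    raise FileNotFoundError(
        f"Shared library with base name '{base_name}' not found"
    )
```

With only llama.lib on disk this raises the FileNotFoundError seen in the launch log; a build that produces llama.dll (e.g. CMake with BUILD_SHARED_LIBS=ON) would satisfy the search.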

Metadata

Assignees

No one assigned

    Labels

    build, oobabooga (https://github.com/oobabooga/text-generation-webui)

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
