Conversation


@barsuna barsuna commented Apr 2, 2023

  • Most issues are due to the fact that the embedding layer (250880×14336) is too large for its element count to fit in a signed 32-bit integer
  • This affects main, quantize, and also the ggml code
  • A second issue is that main seems to estimate the amount of necessary memory on the low side
  • That is not fixed here; I have just added 5 GB for the weights and doubled the size of the context used for model evaluation. Being very far from proficient in C++, I expect these changes will need to be civilized by someone experienced with ggml and C++

