GPTQModel v1.6.0


@Qubitium Qubitium released this 06 Jan 08:00
· 783 commits to main since this release
c5c2677

What's Changed

⚡ 25% faster quantization and 35% lower VRAM usage vs v1.5. 👀
🎉 AMD ROCm (6.2+) support added and validated on 7900XT+ GPUs.
💫 Auto-tokenizer loading via the load() API. For most models you no longer need to manually initialize a tokenizer for inference or quantization.
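A minimal sketch of what the auto-tokenizer change looks like in practice, assuming the `GPTQModel.load()` entry point and a `model.tokenizer` attribute as described above; the model id is a placeholder, and the sketch degrades gracefully if gptqmodel is not installed:

```python
# Sketch only: the model id below is a placeholder, and model.tokenizer
# is an assumption based on the v1.6.0 release note above.
try:
    from gptqmodel import GPTQModel

    # Pre-v1.6: load the model, then separately construct a tokenizer.
    # From v1.6: load() also attaches the matching tokenizer, so one call
    # covers both inference and quantization setups for most models.
    model = GPTQModel.load("ModelCloud/some-4bit-model")  # placeholder id
    output_ids = model.generate("Test prompt")[0]
    print(model.tokenizer.decode(output_ids))
except ImportError:
    # gptqmodel not installed; this branch keeps the sketch runnable anywhere.
    print("gptqmodel not installed; sketch only")
```

The point of the change is that the separate tokenizer-construction step disappears from user code, not that generation itself changed.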

Full Changelog: v1.5.1...v1.6.0
