
multi-turboquant

Here are 2 public repositories matching this topic...

Native Windows build of vLLM 0.19.1 — no WSL, no Docker. Pre-built wheels + 34-file Windows patch + Multi-TurboQuant KV cache compression (6 methods, 2x cache capacity). PyTorch 2.10 + CUDA 12.6 + Triton + Flash-Attention 2.

  • Updated Apr 26, 2026
  • Python
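To give a rough idea of why 8-bit KV cache storage roughly doubles cache capacity relative to fp16, here is a minimal sketch of per-channel symmetric int8 quantization of a key block. This is an illustrative assumption only: the repository's Multi-TurboQuant component advertises six compression methods, none of which are detailed on this page, and the function names below are hypothetical.

```python
# Minimal sketch (assumption): symmetric per-channel int8 quantization of a
# KV-cache block. Not the actual Multi-TurboQuant implementation; it only
# shows how 1-byte storage vs. 2-byte fp16 yields ~2x cache capacity.
import torch

def quantize_kv(kv: torch.Tensor):
    """Quantize an fp16 KV block (tokens x head_dim) to int8 with a per-channel scale."""
    scale = kv.abs().amax(dim=-2, keepdim=True).clamp(min=1e-6) / 127.0
    q = torch.clamp((kv / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize_kv(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate fp16 KV block before it is used in attention."""
    return q.to(torch.float16) * scale.to(torch.float16)

# Example: one head's keys for 512 cached tokens with a 128-dim head.
keys = torch.randn(512, 128, dtype=torch.float16)
q, scale = quantize_kv(keys)
approx = dequantize_kv(q, scale)
print(q.element_size() / keys.element_size())  # 0.5 -> about 2x cache capacity
print((approx - keys).abs().max())             # quantization error bound check
```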

