Issues
is:issue state:open
Search results
- #4673 [Feature] Add the option in Unsloth Studio AI Chat to load only the language model without the vision mmproj file (e.g. Qwen3.5-27b to fit on 16GB VRAM). Labels: feature request, pending on roadmap. Status: Open. In unslothai/unsloth.
- #4672 [Feature] A way to view and change llama.cpp server args. Labels: feature request, pending on roadmap. Status: Open. In unslothai/unsloth.
- #4671 Status: Open. In unslothai/unsloth.
- #4670 Status: Open. In unslothai/unsloth.
- #4669 Status: Open. In unslothai/unsloth.
- #4668 Status: Open. In unslothai/unsloth.
- #4666 Status: Open. In unslothai/unsloth.
- #4661 Status: Open. In unslothai/unsloth.
- #4660 [Feature] Unsloth Studio: export chat configs to Ollama. Labels: feature request, pending on roadmap. Status: Open. In unslothai/unsloth.
- #4645 Status: Open. In unslothai/unsloth.
- #4644 Status: Open. In unslothai/unsloth.
- #4643 [Feature] Preliminary multi-GPU support not working properly in Unsloth Studio. Labels: feature request, pending on roadmap. Status: Open. In unslothai/unsloth.