
4-bit-quantization

Here are 3 public repositories matching this topic...


This project fine-tunes Deepseek-R1-Distill-Qwen-7B on chain-of-thought (CoT) data from the medical domain, using QLoRA quantization and Unsloth-accelerated training to significantly improve the model's slow-thinking ability on complex medical reasoning tasks. Knowledge distillation transfers the reasoning strengths of a large model to the lightweight one, yielding an efficient, accurate, and interpretable medical question-answering system.

  • Updated Mar 10, 2025
  • Python
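The entry above relies on QLoRA-style 4-bit weight storage. As a rough illustration of the underlying idea, here is a minimal pure-Python sketch of blockwise absmax 4-bit quantization: each block of weights is scaled by its largest magnitude and rounded to a signed 4-bit code. The block size and linear code scheme are illustrative assumptions, not the actual NF4 implementation used by bitsandbytes or Unsloth.

```python
def quantize_4bit(weights, block_size=4):
    """Quantize floats to signed 4-bit codes in [-7, 7], one scale per block."""
    quantized, scales = [], []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        # absmax scaling: the largest magnitude in the block maps to code 7
        scale = max(abs(w) for w in block) / 7 or 1.0
        scales.append(scale)
        quantized.append([round(w / scale) for w in block])
    return quantized, scales

def dequantize_4bit(quantized, scales):
    """Reconstruct approximate floats from 4-bit codes and block scales."""
    return [code * s for block, s in zip(quantized, scales) for code in block]

weights = [0.12, -0.45, 0.03, 0.27, -0.88, 0.51, -0.06, 0.33]
codes, scales = quantize_4bit(weights)
restored = dequantize_4bit(codes, scales)
# Each restored value differs from the original by at most half a
# quantization step (0.5 * its block's scale).
```

In QLoRA, the frozen base weights are stored in this compressed form and dequantized on the fly during the forward pass, which is what makes 7B-class fine-tuning fit on a single consumer GPU.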

Fine-Tuning Mistral-7B with Unsloth is a streamlined implementation for efficiently adapting the Mistral-7B language model. The project combines low-rank adaptation (LoRA), 4-bit quantization, and structured conversational datasets to fine-tune large models with minimal memory overhead and little loss in quality.

  • Updated Apr 9, 2025
  • Jupyter Notebook
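The low-rank adaptation mentioned above keeps the pretrained weight W frozen and learns only a small update scaled by alpha / r. A toy pure-Python sketch of the forward pass (tiny hand-picked shapes, not Mistral-7B's real dimensions or the PEFT library's API) could look like:

```python
def matmul(A, B):
    """Plain nested-list matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_forward(x, W, A, B, alpha=16, r=2):
    """Compute y = x @ (W + (alpha / r) * A @ B) as two separate paths.

    W is the frozen pretrained weight; A (in_dim x r) and B (r x out_dim)
    are the small trainable adapters.
    """
    scale = alpha / r
    base = matmul(x, W)                 # frozen pretrained path
    low_rank = matmul(matmul(x, A), B)  # trainable low-rank path
    return [[b + scale * l for b, l in zip(base_row, lr_row)]
            for base_row, lr_row in zip(base, low_rank)]

x = [[1.0, 2.0, 3.0]]
W = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
A = [[0.1, 0.0], [0.0, 0.1], [0.1, 0.1]]
B = [[0.0, 0.0], [0.0, 0.0]]  # B starts at zero, as in standard LoRA init
# With B zeroed, the adapter contributes nothing and the output equals x @ W.
```

Initializing B to zero means training starts exactly at the pretrained model's behavior; only the r * (in_dim + out_dim) adapter parameters are updated, which is why LoRA pairs so well with 4-bit frozen base weights.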

