Official implementation of "Rethinking Parameter Sharing for LLM Fine-Tuning with Multiple LoRAs".
Previous research works (HydraLoRA, FedSA-LoRA) have shown that the two low-rank matrices of LoRA play asymmetric roles: the A matrices tend to capture general, shared knowledge, while the B matrices capture task- or client-specific knowledge, which motivates architectures that share a common A matrix and keep separate B matrices.
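For readers new to this line of work, below is a minimal sketch of what such A-matrix sharing looks like for a single linear layer, assuming a standard PyTorch LoRA setup; the class name and arguments are illustrative and are not this repository's actual API.

```python
import torch
import torch.nn as nn


class SharedALoRALinear(nn.Module):
    """Frozen base weight, one shared LoRA A matrix, one B matrix per task."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, num_tasks: int = 3):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        # Shared down-projection A (Gaussian init, as in standard LoRA).
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        # One up-projection B per task (zero init, so training starts from the base weight).
        self.lora_B = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_features, rank)) for _ in range(num_tasks)]
        )

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # y = x W0^T + (x A^T) B_task^T
        return self.base(x) + x @ self.lora_A.T @ self.lora_B[task_id].T
```

In this scheme every task updates the same `lora_A` but only its own `lora_B[task_id]`; in the federated setting of FedSA-LoRA, the analogous design uploads only the A matrices for server-side aggregation while keeping the B matrices local to each client.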
Furthermore, we analyze the similarity, magnitude, and direction changes of LoRA modules before and after fine-tuning (a sketch of such metrics is given at the end of this section). First, we observe that the
Motivated by these findings, we propose sharing the
- ALoRA for multi-task fine-tuning
- Fed-ALoRA for federated fine-tuning
Please refer to each folder for more details.
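As a companion to the analysis mentioned above, the sketch below shows one way to quantify how much a LoRA weight changes between its initial and fine-tuned state, assuming PyTorch tensors; the function name and the specific metrics (cosine similarity for similarity/direction, Frobenius norms for magnitude) are illustrative assumptions rather than the paper's exact protocol.

```python
import torch
import torch.nn.functional as F


def lora_change_stats(w_before: torch.Tensor, w_after: torch.Tensor) -> dict:
    """Quantify how much a LoRA weight matrix (A, B, or the product BA) changed."""
    v0, v1 = w_before.flatten().float(), w_after.flatten().float()
    return {
        # Direction / similarity: cosine between the flattened matrices.
        "cosine_similarity": F.cosine_similarity(v0, v1, dim=0).item(),
        # Magnitude: Frobenius norms before and after, plus the norm of the update.
        "norm_before": v0.norm().item(),
        "norm_after": v1.norm().item(),
        "update_norm": (v1 - v0).norm().item(),
    }
```

Note that B matrices are typically zero-initialized, so the before/after comparison is most informative for the A matrices or for the merged update BA.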



