Abstract
Low-rank adaptation is a popular parameter-efficient fine-tuning method for large language models. In this paper, we analyze the impact of low-rank updating, as implemented in LoRA. Our findings suggest that the low-rank updating mechanism may limit the ability of LLMs to effectively learn and memorize new knowledge. Inspired by this observation, we propose a new method called MoRA, which employs a square matrix to achieve high-rank updating while maintaining the same number of trainable parameters. To achieve this, we introduce corresponding non-parameterized operators that reduce the input dimension and increase the output dimension for the square matrix. Furthermore, these operators ensure that the weight can be merged back into LLMs, allowing our method to be deployed like LoRA. We perform a comprehensive evaluation of our method across five tasks: instruction tuning, mathematical reasoning, continual pretraining, memorization, and pretraining. Our method outperforms LoRA on memory-intensive tasks and achieves comparable performance on the other tasks.
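The core idea — a single trainable square matrix sandwiched between fixed compression and decompression operators — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the grouping-by-sum compression and repeat-based decompression used here are one hypothetical choice of non-parameterized operator (the paper explores several, including rotation-based ones), and the class name `MoRALayer` is invented for this example.

```python
import torch

class MoRALayer(torch.nn.Module):
    """Sketch of a high-rank square-matrix update.

    Assumed simplification: compression sums the d-dim input in
    groups of size r_hat; decompression repeats the r_hat-dim
    output back to d dims. Both operators are non-parameterized,
    so the only trainable weights are the r_hat x r_hat matrix M.
    """
    def __init__(self, d: int, r_hat: int):
        super().__init__()
        assert d % r_hat == 0, "for simplicity, require d divisible by r_hat"
        self.d, self.r_hat = d, r_hat
        # Initialized to zero so fine-tuning starts from the frozen model,
        # mirroring how LoRA's update contributes nothing at step 0.
        self.M = torch.nn.Parameter(torch.zeros(r_hat, r_hat))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Compress: (..., d) -> (..., r_hat) by summing groups of size r_hat.
        g = x.view(*x.shape[:-1], self.d // self.r_hat, self.r_hat).sum(dim=-2)
        # High-rank update via the square matrix (rank up to r_hat).
        h = g @ self.M.T
        # Decompress: (..., r_hat) -> (..., d) by repeating each entry.
        return h.repeat_interleave(self.d // self.r_hat, dim=-1)
```

Because both operators are linear, the composite map decompress ∘ M ∘ compress is itself a d×d matrix, which is what allows the update to be merged back into the frozen weight for LoRA-style deployment.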
Community
They seem to have a repo up already
There's a simple rewrite of the paper up here: https://www.aimodels.fyi/papers/arxiv/mora-high-rank-updating-parameter-efficient-fine
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning (2024)
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (2024)
- LoRA Learns Less and Forgets Less (2024)
- ReFT: Representation Finetuning for Language Models (2024)
- HFT: Half Fine-Tuning for Large Language Models (2024)
Sebastian Raschka did a great write-up here:
https://magazine.sebastianraschka.com/p/llm-research-insights-instruction?utm_source=substack&publication_id=1174659&post_id=145134347&utm_medium=email&utm_content=share&utm_campaign=email-share&triggerShare=true&isFreemail=true&r=2evja5&triedRedirect=true
Hey, amazing paper!
We wrote a blog post covering it. Please take a look:
https://datta0.substack.com/p/ai-unplugged-12-mora-dpo-vs-ppo-cope
It also covers:
- CoPE
- S3D
- DPO vs PPO
Feel free to let me know your thoughts, suggestions, or comments.
Revolutionizing Fine-Tuning: Unveiling MoRA's High-Rank Updates