🚨 New Release of 🤗PEFT!
1. New methods for merging LoRA weights. Refer to this HF post for more details: https://huggingface.co/posts/smangrul/850816632583824
2. AWQ and AQLM support for LoRA. You can now:
- Train adapters on top of 2-bit quantized models with AQLM
- Train adapters on top of powerful AWQ quantized models
Note: for inference, you can't merge the LoRA weights into the quantized base model!
3. DoRA support: Enabling DoRA is as easy as adding use_dora=True to your LoraConfig. Find out more about this method here: https://arxiv.org/abs/2402.09353
4. Improved documentation, particularly docs regarding PEFT LoRA+DeepSpeed and PEFT LoRA+FSDP! 📄 Check out the docs at https://huggingface.co/docs/peft/index.
5. Full Release Notes: https://github.com/huggingface/peft/releases/tag/v0.9.0