---
language:
- en
library_name: transformers
extra_gated_prompt: >-
  To gain access, [subscribe to The Kaitchup
  Pro](https://newsletter.kaitchup.com/subscribe). You will receive an access
  token for all the toolboxes in your welcome email. You can also purchase
  access specifically for this repository on
  [Gumroad](https://benjaminmarie.gumroad.com/l/qwen2-5-toolbox). Once you have
  access, you can request help and suggest new notebooks through the
  community tab.
datasets:
- mlabonne/orpo-dpo-mix-40k
- HuggingFaceH4/ultrachat_200k
---
This toolbox already includes 18 Jupyter notebooks specially optimized for Qwen2.5, along with the logs of successful runs. More notebooks will be added regularly.

Once you've subscribed to The Kaitchup Pro or purchased access, you can also request repository access here.

To run the code in the toolbox, CUDA 12.4 and PyTorch 2.4 are recommended. PyTorch 2.5 might already work, but I haven't tested it yet.
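If you want to sanity-check your environment before running the notebooks, the standard PyTorch attributes are enough (nothing toolbox-specific here):

```python
import torch

# The notebooks were tested with PyTorch 2.4 built against CUDA 12.4.
print(torch.__version__)          # e.g. "2.4.1+cu124"
print(torch.version.cuda)         # CUDA version PyTorch was built with
print(torch.cuda.is_available())  # True if a usable GPU is detected
```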
## Toolbox content
### Supervised Fine-Tuning with Chat Templates (6 notebooks)
- Full fine-tuning
- LoRA fine-tuning
- QLoRA fine-tuning with bitsandbytes quantization (see the sketch after this list)
- QLoRA fine-tuning with AutoRound quantization
- LoRA and QLoRA fine-tuning with Unsloth
- Multi-GPU QLoRA/LoRA fine-tuning with FSDP
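For orientation, here is a minimal QLoRA fine-tuning sketch in the spirit of these notebooks, combining Transformers, PEFT, bitsandbytes, and TRL. The model name, dataset, and hyperparameters are illustrative placeholders, not the toolbox's exact settings, and the TRL API shown (`SFTConfig`, the `tokenizer` argument) matches TRL versions from the PyTorch 2.4 era:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

model_name = "Qwen/Qwen2.5-7B"  # any Qwen2.5 size follows the same recipe

# 4-bit NF4 quantization: the frozen base model fits in far less GPU memory
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Only the low-rank adapters are trained; the quantized base stays frozen
peft_config = LoraConfig(
    r=16, lora_alpha=16, target_modules="all-linear", task_type="CAUSAL_LM"
)

# Recent TRL versions apply the chat template to "messages"-style datasets
# automatically; older versions need dataset_text_field or a formatting_func.
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="qwen2.5-qlora",
        per_device_train_batch_size=2,
        max_seq_length=1024,
    ),
)
trainer.train()
```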
### Preference Optimization (4 notebooks)
- Full DPO training (TRL and Transformers)
- DPO training with LoRA (TRL and Transformers; see the sketch after this list)
- ORPO training with LoRA (TRL and Transformers)
- Multi-GPU QLoRA/LoRA DPO training with FSDP
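Likewise, a minimal DPO-with-LoRA sketch, with placeholder hyperparameters and the same TRL-version caveat as above:

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# With a PEFT config, TRL uses the frozen base model as the implicit
# reference model, so no second full copy is kept in memory.
peft_config = LoraConfig(
    r=16, lora_alpha=16, target_modules="all-linear", task_type="CAUSAL_LM"
)

# Preference pairs: each row holds a prompt with a chosen and a rejected answer
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        output_dir="qwen2.5-dpo-lora", beta=0.1, per_device_train_batch_size=2
    ),
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```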
### Quantization (3 notebooks)
- AWQ
- AutoRound (with code to quantize Qwen2.5 72B; see the sketch after this list)
- GGUF for llama.cpp
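As an example of the quantization side, here is an AutoRound sketch following the auto-round project's documented interface; the model, settings, and output directory are placeholders, and the toolbox notebook goes further (up to the 72B model):

```python
from auto_round import AutoRound
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 4-bit weight-only quantization with AutoRound's default calibration data
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=True)
autoround.quantize()
autoround.save_quantized("Qwen2.5-7B-AutoRound")
```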
### Inference with Qwen2.5 Instruct and Your Own Fine-tuned Qwen2.5 (4 notebooks)
- Transformers with and without a LoRA adapter
- vLLM offline and online inference (see the sketch after this list)
- Ollama (not released yet)
- llama.cpp
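For offline inference with vLLM, the basic pattern is short (prompt and sampling parameters are illustrative):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Explain LoRA in one paragraph."], params)
print(outputs[0].outputs[0].text)
```

Recent vLLM releases also expose `llm.chat(...)` for messages-style input, which is more convenient with Instruct models.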
### Merging (3 notebooks)
- Merge a LoRA adapter into the base model (see the sketch after this list)
- Merge a QLoRA adapter into the base model
- Merge several Qwen2.5 models into one with mergekit (not released yet)
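Merging a LoRA adapter back into the base model is a few lines with PEFT; the adapter path below is a placeholder for your own fine-tuned adapter:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "qwen2.5-qlora")  # your adapter directory
merged = model.merge_and_unload()  # folds the adapter weights into the base weights
merged.save_pretrained("Qwen2.5-7B-merged")

# Save the tokenizer alongside so the merged checkpoint is self-contained
AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B").save_pretrained("Qwen2.5-7B-merged")
```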