To gain access, subscribe to The Kaitchup Pro. You will receive an access token for all the toolboxes in your welcome email. You can also purchase access to this repository alone on Gumroad. Once you have access, you can request help and suggest new notebooks through the community tab.
This toolbox currently includes 16 Jupyter notebooks optimized for the Llama 3.1 and Llama 3.2 LLMs, along with the logs of successful runs. More notebooks will be added regularly.
Once you've subscribed to The Kaitchup Pro or purchased access, you can also request repository access here.
To run the code in the toolbox, CUDA 12.4 and PyTorch 2.4 are recommended. PyTorch 2.5 may also work, but I haven't tested it yet.
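To quickly check that your environment matches these recommendations, a minimal sketch using only standard PyTorch calls (nothing toolbox-specific):

```python
import torch

# Quick environment check: the toolbox recommends PyTorch 2.4 built against CUDA 12.4.
print("PyTorch:", torch.__version__)          # recommended: 2.4.x
print("CUDA (build):", torch.version.cuda)    # recommended: 12.4
print("GPU available:", torch.cuda.is_available())
```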
Toolbox content
Supervised Fine-Tuning with Chat Templates (6 notebooks; a minimal LoRA sketch follows this list)
Full fine-tuning
LoRA fine-tuning
LoRA fine-tuning (with Llama 3.1/3.2 Instruct)
QLoRA fine-tuning with Bitsandbytes quantization
QLoRA fine-tuning with AutoRound quantization
LoRA and QLoRA fine-tuning with Unsloth
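To give an idea of what these notebooks cover, here is a minimal LoRA fine-tuning sketch with TRL's SFTTrainer. The checkpoint, dataset, and hyperparameters are illustrative placeholders, not the toolbox's actual settings, and it assumes a TRL version that automatically applies the tokenizer's chat template to a "messages" column (exact argument names vary across TRL releases):

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model_name = "meta-llama/Llama-3.2-1B-Instruct"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Small chat dataset with a "messages" column (placeholder);
# SFTTrainer applies the tokenizer's chat template to it.
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1000]")

# LoRA adapters on the attention projections (illustrative settings).
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    args=SFTConfig(
        output_dir="./llama-lora-sft",
        per_device_train_batch_size=2,
        max_seq_length=1024,
    ),
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```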
Preference Optimization (2 notebooks; a minimal DPO sketch follows this list)
DPO training with LoRA (TRL and Transformers)
ORPO training with LoRA (TRL and Transformers)
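Along the same lines, a minimal DPO-with-LoRA sketch using TRL's DPOTrainer; again, the checkpoint, dataset, and hyperparameters are placeholders rather than the notebooks' settings:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "meta-llama/Llama-3.2-1B-Instruct"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Preference dataset with "chosen" and "rejected" columns (placeholder).
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train[:1000]")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="./llama-lora-dpo", per_device_train_batch_size=1, beta=0.1),
    train_dataset=dataset,
    tokenizer=tokenizer,
    # With a peft_config, TRL trains LoRA adapters and uses the frozen
    # base weights as the implicit reference model (no ref_model needed).
    peft_config=LoraConfig(
        r=16, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"
    ),
)
trainer.train()
```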
Quantization (3 notebooks; an AWQ sketch follows this list)
AWQ
AutoRound
GGUF for llama.cpp
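As an example of the quantization workflow, a minimal AWQ sketch with the autoawq library. The paths are placeholders, and the quantization settings shown are the library's common defaults, not necessarily what the notebook uses:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder checkpoint
quant_path = "./llama-3.1-8b-awq"                # placeholder output directory

# 4-bit weights, group size 128: autoawq's usual defaults.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run calibration and quantize the weights, then save the quantized model.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```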
Inference (4 notebooks; a vLLM sketch follows this list)
Transformers with and without a LoRA adapter
vLLM offline and online inference
Ollama (not released yet)
llama.cpp
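For inference, a minimal vLLM offline-generation sketch. The checkpoint is a placeholder; a quantized or merged model produced with this toolbox would be loaded the same way:

```python
from vllm import LLM, SamplingParams

# Load the model once; vLLM handles batching and KV-cache management.
llm = LLM(model="meta-llama/Llama-3.2-1B-Instruct")  # placeholder checkpoint

sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)
outputs = llm.generate(["What is supervised fine-tuning?"], sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```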
Merging (3 notebooks; an adapter-merging sketch follows this list)
Merge a LoRA adapter into the base model
Merge a QLoRA adapter into the base model
Merge several Llama 3.1/3.2 models into one with mergekit (not released yet)
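Finally, a minimal sketch of merging a LoRA adapter into its base model with PEFT's merge_and_unload; the paths are placeholders (the adapter directory could be one produced by the fine-tuning sketch above):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "meta-llama/Llama-3.1-8B"  # placeholder base checkpoint
adapter_dir = "./llama-lora-sft"       # placeholder adapter directory

base = AutoModelForCausalLM.from_pretrained(base_name, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_dir)

# Fold the LoRA weights into the base model and drop the adapter wrappers,
# leaving a plain Transformers model that can be saved and shared.
merged = model.merge_and_unload()
merged.save_pretrained("./llama-merged")
AutoTokenizer.from_pretrained(base_name).save_pretrained("./llama-merged")
```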