Native bitsandbytes 4-bit pre-quantized models
Unsloth AI (company)

AI & ML interests
Hey! We're focusing on making AI more accessible to everyone!
Unsloth makes fine-tuning LLMs 2.2x faster and uses 80% less VRAM!
| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Llama-3 8b | ▶️ Start on Colab | 2x faster | 60% less |
| Gemma 7b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral 7b | ▶️ Start on Colab | 2.2x faster | 62% less |
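The models listed on this page are pre-quantized with bitsandbytes 4-bit, so they load directly without an extra quantization step. Below is a minimal sketch of the typical bitsandbytes 4-bit settings such checkpoints use; the specific values (NF4, double quantization, bfloat16 compute) are common defaults and an assumption here, not read from the individual repos. The commented `transformers` usage assumes the standard `BitsAndBytesConfig` API.

```python
# Sketch of typical bitsandbytes 4-bit quantization settings (assumed
# defaults, not read from the unsloth repos themselves).
bnb_4bit_config = {
    "load_in_4bit": True,
    "bnb_4bit_quant_type": "nf4",       # NormalFloat4 data type
    "bnb_4bit_use_double_quant": True,  # also quantize the quantization constants
    "bnb_4bit_compute_dtype": "bfloat16",
}

# With Hugging Face transformers this maps onto BitsAndBytesConfig, e.g.:
#   from transformers import AutoModelForCausalLM, BitsAndBytesConfig
#   model = AutoModelForCausalLM.from_pretrained(
#       "unsloth/llama-3-8b-bnb-4bit",
#       quantization_config=BitsAndBytesConfig(**bnb_4bit_config),
#   )
```

Because the repos ship already-quantized weights, downloading one of the `-bnb-4bit` models pulls roughly a quarter of the full-precision checkpoint size.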
Collections: 2
Models: 61
| Model | Task | Downloads | Likes |
|---|---|---|---|
| unsloth/gemma-2b-it-bnb-4bit | Text Classification | 12.7k | 10 |
| unsloth/llama-3-8b | Text Generation | 27k | 29 |
| unsloth/llama-3-8b-bnb-4bit | Text Generation | 268k | 95 |
| unsloth/Phi-3-mini-4k-instruct | Text Generation | 4.53k | 9 |
| unsloth/Phi-3-mini-4k-instruct-bnb-4bit | Text Generation | 15.7k | 6 |
| unsloth/llama-3-70b-Instruct-bnb-4bit | Text Generation | 7.23k | 25 |
| unsloth/llama-3-70b-bnb-4bit | Text Generation | 7.77k | 33 |
| unsloth/llama-3-8b-Instruct-bnb-4bit | Text Generation | 110k | 60 |
| unsloth/llama-3-8b-Instruct | Text Generation | 15.7k | 37 |
| unsloth/codegemma-7b-it | Text Generation | 156 | 1 |