Native bitsandbytes 4-bit pre-quantized models
Unsloth AI
AI & ML interests: Hey! We're focusing on making AI more accessible to everyone!
Unsloth makes fine-tuning of LLMs 2.2x faster and uses 80% less VRAM!
Join our Discord server! 🦥 And our open-source GitHub repo is here!
Unsloth support includes:

| Model | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Llama-3 8b | ▶️ Start on Colab | 2x faster | 60% less |
| Gemma 7b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral 7b | ▶️ Start on Colab | 2.2x faster | 62% less |
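The pre-quantized 4-bit checkpoints below can be loaded directly, with no quantization step at load time. A minimal sketch using Unsloth's `FastLanguageModel` (the model name is one from the list below; the sequence length is an illustrative choice, not a requirement):

```python
from unsloth import FastLanguageModel

# Load a bnb-4bit pre-quantized checkpoint. load_in_4bit=True tells
# Unsloth to keep the weights in 4-bit; since this repo already stores
# bnb-4bit weights, nothing is re-quantized on the fly.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,  # illustrative context length
    load_in_4bit=True,
)
```

The `-bnb-4bit` repos exist so that this load path downloads roughly a quarter of the full-precision weights; the non-suffixed repos hold the original 16-bit checkpoints.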
Collections: 2 · Models: 70
| Model | Task | Downloads | Likes |
|---|---|---|---|
| unsloth/llama-3-8b-bnb-4bit | Text Generation | 387k | 112 |
| unsloth/llama-3-8b | Text Generation | 39.1k | 36 |
| unsloth/Phi-3-medium-4k-instruct-bnb-4bit | Text Generation | 2.97k | 2 |
| unsloth/Phi-3-mini-4k-instruct-bnb-4bit | Text Generation | 68.3k | 12 |
| unsloth/Phi-3-mini-4k-instruct | Text Generation | 15.4k | 23 |
| unsloth/Phi-3-medium-4k-instruct | Text Generation | 2k | 16 |
| unsloth/mistral-7b-v0.3 | Text Generation | 702 | |
| unsloth/mistral-7b-instruct-v0.3-bnb-4bit | Text Generation | 2.69k | 5 |
| unsloth/mistral-7b-v0.3-bnb-4bit | Text Generation | 6.71k | 5 |
| unsloth/mistral-7b-instruct-v0.3 | Text Generation | 922 | 1 |