QLora-ready Coding Models
Updated Oct 19
For fine-tuning. A GPU is needed for both quantization and inference.
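The repos in the list below are bitsandbytes 4-bit quantizations (the `-bnb-4bit` suffix), so a QLoRA run attaches trainable LoRA adapters on top of the frozen 4-bit base. A minimal setup sketch, assuming the checkpoints load directly with transformers, peft, and bitsandbytes, and with purely illustrative hyperparameters and target-module names:

```python
# Minimal QLoRA setup sketch; requires a CUDA GPU plus the transformers,
# peft, bitsandbytes, and accelerate packages. Hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "onekq-ai/starcoder2-3b-bnb-4bit"  # any checkpoint from the list works the same way

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",           # bitsandbytes 4-bit has no CPU path, hence the GPU requirement
    torch_dtype=torch.bfloat16,  # compute dtype used when weights are dequantized on the fly
)

# Cast norms/embeddings to fp32 and enable gradient checkpointing for stable k-bit training.
model = prepare_model_for_kbit_training(model)

# Attach trainable low-rank adapters; the 4-bit base weights stay frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projection names
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

From here the PEFT-wrapped model can be passed to a standard `Trainer` or `SFTTrainer` loop; only the adapter weights are updated.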
onekq-ai/starcoder2-15b-bnb-4bit • Text Generation • Updated Oct 18 • 13 downloads • 1 like
onekq-ai/starcoder2-7b-bnb-4bit • Text Generation • Updated Oct 18 • 10 downloads
onekq-ai/starcoder2-3b-bnb-4bit • Text Generation • Updated Oct 18 • 56 downloads
unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit • Text Generation • Updated Nov 12 • 8.07k downloads • 4 likes
unsloth/Qwen2.5-Coder-7B-bnb-4bit • Text Generation • Updated Nov 12 • 2.02k downloads • 4 likes
unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit • Text Generation • Updated Nov 12 • 1.76k downloads
unsloth/Qwen2.5-Coder-1.5B-bnb-4bit • Text Generation • Updated Nov 12 • 2.67k downloads • 1 like
dan-kwiat/DeepSeek-Coder-V2-Lite-Instruct-bnb-4bit • Text Generation • Updated Jun 25 • 37 downloads
onekq-ai/DeepSeek-Coder-V2-Lite-Base-bnb-4bit • Text Generation • Updated Oct 19 • 15 downloads
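For inference, the same 4-bit checkpoints load directly for generation on a CUDA GPU. A minimal sketch, assuming one of the instruct variants above and an illustrative prompt:

```python
# Minimal generation sketch with a 4-bit instruct checkpoint; requires a CUDA GPU
# plus transformers, bitsandbytes, and accelerate. Prompt and decoding settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt and generate a completion.
messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=128)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```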