datatab/Yugo45A-GPT-Quantized-GGUF

Tags: Transformers · GGUF · Serbian · mistral · text-generation-inference · Inference Endpoints · conversational
License: apache-2.0
Yugo45A-GPT-Quantized-GGUF · 1 contributor · History: 20 commits
Latest commit: 3aafdad (verified) by datatab, "(Trained with Unsloth)", 7 months ago
Files:
- .gitattributes (2.23 kB) · (Trained with Unsloth) · 7 months ago
- README.md (267 Bytes) · Update README.md · 7 months ago
- Yugo45A-GPT-Quantized-GGUF-unsloth.Q8_0.gguf (7.7 GB, LFS) · (Trained with Unsloth) · 7 months ago
- Yugo45A-GPT-Quantized-GGUF.Q3_K_M.gguf (3.52 GB, LFS) · q3_k_m: uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K · 7 months ago
- Yugo45A-GPT-Quantized-GGUF.Q4_K_M.gguf (4.37 GB, LFS) · q4_k_m: recommended; uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K · 7 months ago
- Yugo45A-GPT-Quantized-GGUF.Q5_K_M.gguf (5.13 GB, LFS) · q5_k_m: recommended; uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K · 7 months ago
- Yugo45A-GPT-Quantized-GGUF.Q6_K.gguf (5.94 GB, LFS) · q6_k: uses Q6_K for all tensors · 7 months ago
- config.json (31 Bytes) · Create config.json · 7 months ago
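The file sizes above can be sanity-checked against the quantization levels they claim. A minimal sketch, assuming a Mistral-7B-class base model of roughly 7.24 billion parameters (an assumption based on the "mistral" tag; the card itself does not state the parameter count):

```python
# Rough bits-per-weight estimate for the GGUF files listed above.
# PARAMS is an assumption (Mistral-7B class, ~7.24e9 weights); the
# repository does not state the actual parameter count.
PARAMS = 7.24e9

# File sizes taken from the listing, interpreted as decimal gigabytes.
sizes_gb = {
    "Q3_K_M": 3.52,
    "Q4_K_M": 4.37,
    "Q5_K_M": 5.13,
    "Q6_K":   5.94,
    "Q8_0":   7.70,
}

def bits_per_weight(size_gb: float) -> float:
    """Convert a file size in GB to an approximate bits-per-weight figure."""
    return size_gb * 1e9 * 8 / PARAMS

for name, gb in sizes_gb.items():
    print(f"{name}: ~{bits_per_weight(gb):.2f} bits/weight")
```

Under that assumption, the Q8_0 file works out to roughly 8.5 bits per weight and Q4_K_M to roughly 4.8, which is consistent with how llama.cpp's mixed k-quant schemes (the Q6_K/Q4_K tensor mix described above) typically land slightly above their nominal bit width.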