datatab/Yugo55-GPT-v4-Quantized-GGUF
Tags: Transformers, GGUF, mistral, Inference Endpoints, text-generation-inference
Files and versions
Branch: main
1 contributor, 9 commits
Latest commit: c0aa467 (verified) by datatab, 3 months ago: "q6_k: Uses Q6_K for all tensors"
.gitattributes (1.84 kB), 3 months ago: "q6_k: Uses Q6_K for all tensors"
Yugo55-GPT-v4-Quantized-GGUF.Q4_K_M.gguf (4.37 GB, LFS), 3 months ago: "q4_k_m: Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K"
Yugo55-GPT-v4-Quantized-GGUF.Q6_K.gguf (5.94 GB, LFS), 3 months ago: "q6_k: Uses Q6_K for all tensors"
config.json (31 Bytes), 3 months ago: "Create config.json"