# afrideva/Mixtral-GQA-400m-v2-GGUF

Tags: Text Generation · GGUF · English · ggml · quantized · conversational
Quantization variants: q2_k, q3_k_m, q4_k_m, q5_k_m, q6_k, q8_0
License: apache-2.0
## Files and versions

1 contributor (afrideva) · History: 7 commits · latest commit db42a9c, 10 months ago: "Upload mixtral-gqa-400m-v2.q6_k.gguf with huggingface_hub"

| File | Size | Commit message | Age |
|------|------|----------------|-----|
| .gitattributes | 1.92 kB | Upload mixtral-gqa-400m-v2.q6_k.gguf with huggingface_hub | 10 months ago |
| mixtral-gqa-400m-v2.fp16.gguf | 4.01 GB (LFS) | Upload mixtral-gqa-400m-v2.fp16.gguf with huggingface_hub | 10 months ago |
| mixtral-gqa-400m-v2.q2_k.gguf | 703 MB (LFS) | Upload mixtral-gqa-400m-v2.q2_k.gguf with huggingface_hub | 10 months ago |
| mixtral-gqa-400m-v2.q3_k_m.gguf | 900 MB (LFS) | Upload mixtral-gqa-400m-v2.q3_k_m.gguf with huggingface_hub | 10 months ago |
| mixtral-gqa-400m-v2.q4_k_m.gguf | 1.15 GB (LFS) | Upload mixtral-gqa-400m-v2.q4_k_m.gguf with huggingface_hub | 10 months ago |
| mixtral-gqa-400m-v2.q5_k_m.gguf | 1.39 GB (LFS) | Upload mixtral-gqa-400m-v2.q5_k_m.gguf with huggingface_hub | 10 months ago |
| mixtral-gqa-400m-v2.q6_k.gguf | 1.65 GB (LFS) | Upload mixtral-gqa-400m-v2.q6_k.gguf with huggingface_hub | 10 months ago |
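The listing gives one GGUF file per quantization level, with sizes ranging from 703 MB (q2_k) to 4.01 GB (fp16). As a rough illustration (not part of the repository), a minimal sketch of picking the largest variant that fits a given memory budget; the `pick_quant` helper is hypothetical, and the sizes are copied from the file table above:

```python
# Hypothetical helper: choose the largest GGUF variant of
# afrideva/Mixtral-GQA-400m-v2-GGUF that fits a memory budget.
# Sizes (in GB) are taken from the repository file listing.

QUANT_SIZES_GB = {
    "mixtral-gqa-400m-v2.q2_k.gguf": 0.703,
    "mixtral-gqa-400m-v2.q3_k_m.gguf": 0.900,
    "mixtral-gqa-400m-v2.q4_k_m.gguf": 1.15,
    "mixtral-gqa-400m-v2.q5_k_m.gguf": 1.39,
    "mixtral-gqa-400m-v2.q6_k.gguf": 1.65,
    "mixtral-gqa-400m-v2.fp16.gguf": 4.01,
}

def pick_quant(budget_gb):
    """Return the largest listed file that fits within budget_gb, or None."""
    fitting = {f: s for f, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(pick_quant(2.0))  # largest variant under 2 GB: the q6_k file
```

Note that model weights are only part of the runtime footprint (KV cache and activations add overhead), so in practice one would leave headroom below the hard limit.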