Hugging Face
venketh/Mixtral-8x7B-v0.1-GGUF-imatrix (GGUF)
2 contributors · 11 commits · branch: main
Latest commit: Import Q6_K quantization (fb16293, Venkatesh Srinivas, 4 months ago)
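Individual files in a public Hugging Face repo can be downloaded directly via the platform's standard "resolve" URL pattern. A minimal sketch for this repo (assumes the repo is public and the default branch is main; the chosen filename is just one of the files listed below):

```python
# Build a direct-download URL for a file in this repo using the
# standard Hugging Face "resolve" URL pattern.
REPO = "venketh/Mixtral-8x7B-v0.1-GGUF-imatrix"

def file_url(filename: str, revision: str = "main") -> str:
    """Return the direct-download URL for `filename` at `revision`."""
    return f"https://huggingface.co/{REPO}/resolve/{revision}/{filename}"

print(file_url("mixtral-8x7b-v0.1.Q6_K.gguf"))
```

In practice the `huggingface_hub` library's `hf_hub_download(repo_id=..., filename=...)` does the same thing while also handling caching and LFS redirects.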
File                              Size     LFS   Commit message
.gitattributes                    1.61 kB  -     Import imatrix for Mixtral-8x7B-v0.1
group_10_merged.txt               162 kB   -     Import imatrix for Mixtral-8x7B-v0.1
imatrix.dat                       25.7 MB  LFS   Import imatrix for Mixtral-8x7B-v0.1
mixtral-8x7b-v0.1.IQ1_S.gguf      9.82 GB  LFS   Import IQ1_S quantization (1.56 bpw, 9.2 GB)
mixtral-8x7b-v0.1.IQ2_M.gguf      15.5 GB  LFS   Import IQ2_M quantization (2.7 bpw, 15 GB)
mixtral-8x7b-v0.1.IQ2_XXS.gguf    12.6 GB  LFS   Import IQ2_XXS quantization (2.06 bpw, 12 GB)
mixtral-8x7b-v0.1.IQ3_M.gguf      21.4 GB  LFS   Import IQ3_M quantization (3.66 bpw, 20 GB)
mixtral-8x7b-v0.1.IQ3_XXS.gguf    18.2 GB  LFS   Import IQ3_XXS quantization (3.06 bpw, 17 GB)
mixtral-8x7b-v0.1.IQ4_XS.gguf     25.1 GB  LFS   Import IQ4_XS quantization (4.25 bpw, 24 GB)
mixtral-8x7b-v0.1.Q5_K_S.gguf     32.2 GB  LFS   Import Q5_K_S quantization
mixtral-8x7b-v0.1.Q6_K.gguf       38.4 GB  LFS   Import Q6_K quantization

All files were last committed 4 months ago.
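The bits-per-weight (bpw) figures in the commit messages roughly predict the file sizes: size ≈ bpw × parameter count / 8 bytes. A quick sanity-check sketch, assuming roughly 46.7e9 total parameters for Mixtral-8x7B (an outside figure, not stated in this listing; estimates run a few percent off the listed sizes because bpw is an average and the file also holds embeddings and metadata):

```python
# Rough GGUF file-size estimate from bits-per-weight (bpw).
# Assumption: ~46.7e9 total parameters for Mixtral-8x7B.
PARAMS = 46.7e9

def est_gb(bpw: float) -> float:
    """Approximate file size in GB (decimal) for a given average bpw."""
    return bpw * PARAMS / 8 / 1e9

# (quant name, bpw from commit message, listed file size in GB)
for name, bpw, listed_gb in [
    ("IQ1_S",   1.56,  9.82),
    ("IQ2_XXS", 2.06, 12.6),
    ("IQ2_M",   2.70, 15.5),
    ("IQ3_XXS", 3.06, 18.2),
    ("IQ3_M",   3.66, 21.4),
    ("IQ4_XS",  4.25, 25.1),
]:
    print(f"{name}: estimated {est_gb(bpw):.1f} GB, listed {listed_gb} GB")
```

For example, IQ3_M at 3.66 bpw estimates to about 21.4 GB, matching the listed size almost exactly.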