Mozilla / Mixtral-8x7B-Instruct-v0.1-llamafile
Transformers · llamafile · 5 languages · mixtral · License: apache-2.0
Files and versions
1 contributor · History: 127 commits
Latest commit: Quantize Q3_K_S with llamafile-0.8.8 (jartine, 9d7ebff, verified, 4 months ago)
File                                              Size      LFS  Last commit                                                     Updated
.gitattributes                                    3.31 kB        Quantize Q5_K_S with llamafile-0.8.5                            6 months ago
README.md                                         20.8 kB        Update README.md                                                7 months ago
config.json                                       31 Bytes       Add config.json to repo                                         11 months ago
mixtral-8x7b-instruct-v0.1.BF16.llamafile.cat0    50 GB     LFS  Quantize BF16 with llamafile-0.8.7                              5 months ago
mixtral-8x7b-instruct-v0.1.BF16.llamafile.cat1    43.4 GB   LFS  Quantize BF16 with llamafile-0.8.7                              5 months ago
mixtral-8x7b-instruct-v0.1.F16.llamafile.cat0     50 GB     LFS  Quantize F16 with llamafile-0.8.7                               5 months ago
mixtral-8x7b-instruct-v0.1.F16.llamafile.cat1     43.4 GB   LFS  Quantize F16 with llamafile-0.8.7                               5 months ago
mixtral-8x7b-instruct-v0.1.Q2_K.llamafile         17.3 GB   LFS  Quantize Q2_K with llamafile-0.8.8                              4 months ago
mixtral-8x7b-instruct-v0.1.Q3_K_M.llamafile       22.6 GB   LFS  Quantize Q3_K_M with llamafile-0.8.7                            5 months ago
mixtral-8x7b-instruct-v0.1.Q3_K_S.llamafile       20.5 GB   LFS  Quantize Q3_K_S with llamafile-0.8.8                            4 months ago
mixtral-8x7b-instruct-v0.1.Q4_0.llamafile         26.5 GB   LFS  Quantize Q4_0 with llamafile-0.8.7                              5 months ago
mixtral-8x7b-instruct-v0.1.Q4_1.llamafile         29.4 GB   LFS  Quantize mixtral-8x7b-instruct-v0.1 with llamafile-0.7.3 Q4_1   7 months ago
mixtral-8x7b-instruct-v0.1.Q4_K_M.llamafile       28.5 GB   LFS  Quantize Q4_K_M with llamafile-0.8.7                            5 months ago
mixtral-8x7b-instruct-v0.1.Q5_0.llamafile         32.3 GB   LFS  Quantize mixtral-8x7b-instruct-v0.1 with llamafile-0.7 Q5_0     7 months ago
mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile       33.3 GB   LFS  Quantize Q5_K_M with llamafile-0.8.8                            4 months ago
mixtral-8x7b-instruct-v0.1.Q5_K_S.llamafile       32.3 GB   LFS  Quantize Q5_K_S with llamafile-0.8.8                            4 months ago
mixtral-8x7b-instruct-v0.1.Q6_K.llamafile         38.4 GB   LFS  Quantize Q6_K with llamafile-0.8.7                              5 months ago
mixtral-8x7b-instruct-v0.1.Q8_0.llamafile         49.7 GB   LFS  Quantize Q8_0 with llamafile-0.8.7                              5 months ago
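The single-file quantizations in the listing can be fetched directly from this repo, while the BF16 and F16 builds are stored as .cat0/.cat1 parts, presumably to stay under the per-file size limit, and are reassembled by concatenating the parts in order; the repo README is the authoritative source for those steps. Below is a minimal sketch, not taken from the model card, assuming the standard huggingface_hub download API and that plain byte-wise concatenation of the .cat0/.cat1 parts restores the original llamafile; the repo id and filenames are copied from the listing above.

```python
# Sketch: download one quantized llamafile and reassemble the split BF16 build.
# Assumptions (not confirmed by this page): the repo id below resolves as shown,
# and concatenating .cat0 + .cat1 in order yields the original file.
from pathlib import Path
from huggingface_hub import hf_hub_download

REPO_ID = "Mozilla/Mixtral-8x7B-Instruct-v0.1-llamafile"

# Single-file quant: one download is enough.
q5 = hf_hub_download(
    repo_id=REPO_ID,
    filename="mixtral-8x7b-instruct-v0.1.Q5_K_M.llamafile",
)
print("Downloaded:", q5)

# Split BF16 build: fetch both parts, then concatenate them (.cat0 then .cat1).
parts = [
    hf_hub_download(
        repo_id=REPO_ID,
        filename=f"mixtral-8x7b-instruct-v0.1.BF16.llamafile.cat{i}",
    )
    for i in range(2)
]
out = Path("mixtral-8x7b-instruct-v0.1.BF16.llamafile")
with out.open("wb") as dst:
    for part in parts:
        with open(part, "rb") as src:
            # Copy in 64 MiB chunks to avoid loading a ~93 GB file into memory.
            while chunk := src.read(64 * 1024 * 1024):
                dst.write(chunk)
out.chmod(0o755)  # llamafiles are self-contained executables
print("Assembled:", out)
```

After download (or assembly), a llamafile is meant to be marked executable and run directly, per the llamafile project's documentation.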