venketh/Mistral-7B-Instruct-v0.2-GGUF-imatrix
Hugging Face model repository · GGUF · License: apache-2.0 · 3 likes

Files and versions (branch: main)
2 contributors · History: 15 commits
Latest commit d3f9d00: "Import new IQ4_XS quantization" by Venkatesh Srinivas, 4 months ago
File                                    Size      LFS   Last commit                                                    Age
.gitattributes                          1.68 kB         Track GGUF files via LFS                                       5 months ago
README.md                               28 Bytes        initial commit                                                 5 months ago
group_10_merged.txt                     162 kB          Import imatrix source and data                                 5 months ago
imatrix.dat                             4.99 MB   LFS   Import imatrix.dat                                             5 months ago
mistral-7b-instruct-v0.2.IQ3_M.gguf     3.28 GB   LFS   Replace Q3_K_S, Q3_K_M quants with new IQ3_S, IQ3_M versions   4 months ago
mistral-7b-instruct-v0.2.IQ3_S.gguf     3.18 GB   LFS   Replace Q3_K_S, Q3_K_M quants with new IQ3_S, IQ3_M versions   4 months ago
mistral-7b-instruct-v0.2.IQ3_XXS.gguf   2.9 GB    LFS   Import IQ3_XXS quantization                                    5 months ago
mistral-7b-instruct-v0.2.IQ4_XS.gguf    3.91 GB   LFS   Import new IQ4_XS quantization                                 4 months ago
mistral-7b-instruct-v0.2.Q3_K_L.gguf    3.82 GB   LFS   Import Q3_K_L                                                  5 months ago
mistral-7b-instruct-v0.2.Q4_K_M.gguf    4.37 GB   LFS   Import Q4_K_M quantization                                     5 months ago
mistral-7b-instruct-v0.2.Q4_K_S.gguf    4.14 GB   LFS   Import Q4_K_S quantization                                     5 months ago
mistral-7b-instruct-v0.2.Q5_K_M.gguf    5.13 GB   LFS   Import Q5_K_M quantization                                     4 months ago
mistral-7b-instruct-v0.2.Q8_0.gguf      7.7 GB    LFS   Import Q8_0 quantization                                       4 months ago
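The quantization suffixes (IQ3_XXS through Q8_0) roughly indicate how many bits each weight is stored in, which is why file size tracks the quant level. As a sketch only, the average bits-per-weight implied by each listed file size can be estimated; the parameter count (~7.24 billion for Mistral-7B-Instruct-v0.2) and the use of decimal gigabytes are assumptions not stated on this page:

```python
# Rough bits-per-weight (bpw) for each quantization in this repo, derived
# from the file sizes listed above. Assumptions (not from the page itself):
# Mistral-7B-Instruct-v0.2 has roughly 7.24e9 parameters, and sizes are
# reported in decimal gigabytes (1 GB = 1e9 bytes).
SIZES_GB = {
    "IQ3_XXS": 2.90,
    "IQ3_S": 3.18,
    "IQ3_M": 3.28,
    "Q3_K_L": 3.82,
    "IQ4_XS": 3.91,
    "Q4_K_S": 4.14,
    "Q4_K_M": 4.37,
    "Q5_K_M": 5.13,
    "Q8_0": 7.70,
}
N_PARAMS = 7.24e9  # approximate parameter count (assumption)

def bits_per_weight(size_gb: float) -> float:
    """Convert a file size in GB to average bits stored per model weight."""
    return size_gb * 1e9 * 8 / N_PARAMS

for name, gb in SIZES_GB.items():
    print(f"{name:8s} {gb:5.2f} GB  ~{bits_per_weight(gb):.2f} bpw")
```

The estimate is approximate: a GGUF file also carries metadata and some tensors (e.g. embeddings) kept at higher precision, so the computed bpw sits slightly above the nominal bit width in the quant's name.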