MaziyarPanahi / Mistral-Large-Instruct-2407-GGUF

Tags: Text Generation · GGUF · quantized · imatrix · 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit precision

Files and versions (revision 2f7cd46)

1 contributor · History: 8 commits
Latest commit by MaziyarPanahi: 3b8ee08c400d352156a20d98628ffb9ff05525ad5d59c5330302a5e2517cbeb8 (2f7cd46, verified, 4 months ago)
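The table below lists the repository contents. As a minimal sketch (assuming `huggingface_hub` is installed; the filename is copied from the table), a single-file quant such as the Q2_K build could be fetched like this:

```python
# Minimal sketch: download one single-file quant from this repo.
# Assumes `pip install huggingface_hub`; the filename is taken from the file table below.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="MaziyarPanahi/Mistral-Large-Instruct-2407-GGUF",
    filename="Mistral-Large-Instruct-2407.Q2_K.gguf",  # 45.2 GB per the listing
)
print(gguf_path)  # local cache path of the downloaded GGUF file
```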
| File | Size | LFS | Last commit message | Date |
|---|---|---|---|---|
| .gitattributes | 2.3 kB | | 3b8ee08c400d352156a20d98628ffb9ff05525ad5d59c5330302a5e2517cbeb8 | 4 months ago |
| Mistral-Large-Instruct-2407.IQ1_M.gguf | 28.4 GB | LFS | Upload folder using huggingface_hub (#2) | 4 months ago |
| Mistral-Large-Instruct-2407.IQ1_S.gguf | 26 GB | LFS | Upload folder using huggingface_hub (#2) | 4 months ago |
| Mistral-Large-Instruct-2407.IQ2_XS.gguf | 36.1 GB | LFS | Upload folder using huggingface_hub (#2) | 4 months ago |
| Mistral-Large-Instruct-2407.IQ3_XS.gguf-00001-of-00007.gguf | 8.32 GB | LFS | 2ffd1a5a864adb861afe786ca6ce43fd3d8443ec9f8499d5461734aead9de9c1 | 4 months ago |
| Mistral-Large-Instruct-2407.IQ3_XS.gguf-00002-of-00007.gguf | 8.07 GB | LFS | 3cb5a9fa46dc77422042a0927dff3bf2e4fc2d265c5dd2e21d7e2be41318f11c | 4 months ago |
| Mistral-Large-Instruct-2407.IQ3_XS.gguf-00003-of-00007.gguf | 7.92 GB | LFS | d30cd69fb2e3c8647c8f9b4a472a2f843a429053706a0a158247d3ee8dace5e8 | 4 months ago |
| Mistral-Large-Instruct-2407.IQ3_XS.gguf-00004-of-00007.gguf | 7.85 GB | LFS | 3127a3330bade9a647ef52a16d3e0cd5fd43aca1566e532a619d0d15b7386ec7 | 4 months ago |
| Mistral-Large-Instruct-2407.IQ3_XS.gguf-00005-of-00007.gguf | 7.85 GB | LFS | 3b8ee08c400d352156a20d98628ffb9ff05525ad5d59c5330302a5e2517cbeb8 | 4 months ago |
| Mistral-Large-Instruct-2407.Q2_K.gguf | 45.2 GB | LFS | Upload folder using huggingface_hub (#2) | 4 months ago |
| README.md | 3.06 kB | | Create README.md (#3) | 4 months ago |
| main.log | 22.7 kB | | Upload folder using huggingface_hub (#2) | 4 months ago |
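The IQ3_XS build is split across seven GGUF shards (parts 00001 through 00007; only the first five appear in the listing above). A minimal sketch for pulling every shard in one call, assuming `huggingface_hub` is installed and using a glob over the shard names shown in the table:

```python
# Minimal sketch: download all shards of the split IQ3_XS quant at once.
# Assumes `huggingface_hub` is installed; the pattern matches the shard names
# in the table (Mistral-Large-Instruct-2407.IQ3_XS.gguf-0000N-of-00007.gguf).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="MaziyarPanahi/Mistral-Large-Instruct-2407-GGUF",
    allow_patterns=["Mistral-Large-Instruct-2407.IQ3_XS.gguf-*-of-00007.gguf"],
)
print(local_dir)  # directory containing the downloaded shards
```

How the shards are recombined depends on how they were produced: shards written by llama.cpp's gguf-split tool can be merged with its `--merge` mode (or loaded by pointing the runtime at the first part), whereas raw byte splits need plain concatenation. The tooling used is not stated in this listing, so check the repository's README.md before assuming either approach.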