MaziyarPanahi / Mistral-Large-Instruct-2407-GGUF

Tags: Text Generation · GGUF · quantized · 2-bit · 3-bit · 4-bit precision · 5-bit · 6-bit · 8-bit precision
Mistral-Large-Instruct-2407-GGUF

1 contributor · History: 6 commits

Latest commit: 9e6d8f6 (verified) by MaziyarPanahi, 5 months ago
Commit message: d30cd69fb2e3c8647c8f9b4a472a2f843a429053706a0a158247d3ee8dace5e8
File | Size | LFS | Last commit | Updated
.gitattributes | 2.11 kB | | d30cd69fb2e3c8647c8f9b4a472a2f843a429053706a0a158247d3ee8dace5e8 | 5 months ago
Mistral-Large-Instruct-2407.IQ1_M.gguf | 28.4 GB | LFS | Upload folder using huggingface_hub (#2) | 5 months ago
Mistral-Large-Instruct-2407.IQ1_S.gguf | 26 GB | LFS | Upload folder using huggingface_hub (#2) | 5 months ago
Mistral-Large-Instruct-2407.IQ2_XS.gguf | 36.1 GB | LFS | Upload folder using huggingface_hub (#2) | 5 months ago
Mistral-Large-Instruct-2407.IQ3_XS.gguf-00001-of-00007.gguf | 8.32 GB | LFS | 2ffd1a5a864adb861afe786ca6ce43fd3d8443ec9f8499d5461734aead9de9c1 | 5 months ago
Mistral-Large-Instruct-2407.IQ3_XS.gguf-00002-of-00007.gguf | 8.07 GB | LFS | 3cb5a9fa46dc77422042a0927dff3bf2e4fc2d265c5dd2e21d7e2be41318f11c | 5 months ago
Mistral-Large-Instruct-2407.IQ3_XS.gguf-00003-of-00007.gguf | 7.92 GB | LFS | d30cd69fb2e3c8647c8f9b4a472a2f843a429053706a0a158247d3ee8dace5e8 | 5 months ago
Mistral-Large-Instruct-2407.Q2_K.gguf | 45.2 GB | LFS | Upload folder using huggingface_hub (#2) | 5 months ago
README.md | 3.06 kB | | Create README.md (#3) | 5 months ago
main.log | 22.7 kB | | Upload folder using huggingface_hub (#2) | 5 months ago
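
The single-file quants listed above (for example IQ1_M, IQ2_XS, Q2_K) can be fetched directly with the huggingface_hub client. Below is a minimal sketch, assuming huggingface_hub is installed; the filename is taken from the listing above, while the local directory is just an illustrative path, not something mandated by the repository.

# Sketch: download one quantized GGUF file from this repo.
# Assumes: `pip install huggingface_hub`; ./models is an arbitrary example directory.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/Mistral-Large-Instruct-2407-GGUF",
    filename="Mistral-Large-Instruct-2407.IQ2_XS.gguf",  # filename from the table above
    local_dir="./models",
)
print(model_path)  # local path to the downloaded .gguf file

For the sharded IQ3_XS quant (the *-00001-of-00007.gguf files), one option is snapshot_download with an allow_patterns filter so all parts land in the same directory; the pattern below is only an assumption based on the filenames shown above.

# Sketch: fetch every IQ3_XS shard in one call.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="MaziyarPanahi/Mistral-Large-Instruct-2407-GGUF",
    allow_patterns=["*IQ3_XS*"],  # matches the sharded files listed in the table
    local_dir="./models",
)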