MaziyarPanahi/Mistral-Large-Instruct-2407-GGUF
Tags: Text Generation, GGUF, quantized, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit precision
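The tags above indicate that this repository hosts GGUF quantizations of Mistral-Large-Instruct-2407 at bit widths from 2-bit to 8-bit. A minimal sketch of running one of these files locally with llama-cpp-python follows; the quant filename is an assumption for illustration (large models are often split into multiple .gguf shards), so check the repository's file listing for the actual names.

# Minimal sketch: download one GGUF quant and run a chat completion with llama-cpp-python.
# The filename below is assumed for illustration; verify it against the repo's file listing.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/Mistral-Large-Instruct-2407-GGUF",
    filename="Mistral-Large-Instruct-2407.Q4_K_M.gguf",  # assumed quant name
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # context window; raise it if memory allows
    n_gpu_layers=-1,   # offload all layers to the GPU when VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one paragraph."}]
)
print(out["choices"][0]["message"]["content"])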
Community (14 discussions)
Open discussions (11 closed discussions not listed):
#14 "How much GPU Memory is needed?" (1 reply), opened about 2 months ago by rsoika. A rough sizing sketch follows this list.
#13 "how to serve the model with parallelism", opened 4 months ago by lone17.
#11 "How to convert GGUF" (3 replies), opened 4 months ago by Jasper17.
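As a back-of-envelope response to the sizing question in #14 (an editor's estimate, not an answer from the maintainers): a GGUF model needs roughly its file size in memory, plus headroom for the KV cache and compute buffers, and llama.cpp can split layers between GPU and CPU, so the whole model need not fit in VRAM. Assuming Mistral Large 2 (2407) has roughly 123 billion parameters, a simple per-bit-width estimate looks like this:

# Back-of-envelope memory estimate for a ~123B-parameter model at various quantization levels.
# Assumptions: weights dominate memory usage at `bits` per weight on average, plus ~15%
# headroom for the KV cache, activations, and runtime buffers at a modest context length.
PARAMS = 123e9      # assumed approximate parameter count of Mistral Large 2 (2407)
OVERHEAD = 1.15     # assumed headroom factor for cache and buffers

for bits in (2, 3, 4, 5, 6, 8):
    weights_gb = PARAMS * bits / 8 / 1e9
    print(f"{bits}-bit: ~{weights_gb:.0f} GB weights, ~{weights_gb * OVERHEAD:.0f} GB total")

Actual quant files use mixed precision (for example, Q4_K_M averages somewhat more than 4 bits per weight), so treat these numbers as lower bounds and check the real file sizes in the repository.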