Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Mistral_Pro_8B_v0.1 - GGUF
- Model creator: https://huggingface.co/TencentARC/
- Original model: https://huggingface.co/TencentARC/Mistral_Pro_8B_v0.1/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral_Pro_8B_v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q2_K.gguf) | Q2_K | 3.13GB |
| [Mistral_Pro_8B_v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.IQ3_XS.gguf) | IQ3_XS | 3.48GB |
| [Mistral_Pro_8B_v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.IQ3_S.gguf) | IQ3_S | 3.67GB |
| [Mistral_Pro_8B_v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q3_K_S.gguf) | Q3_K_S | 3.65GB |
| [Mistral_Pro_8B_v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.IQ3_M.gguf) | IQ3_M | 3.79GB |
| [Mistral_Pro_8B_v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q3_K.gguf) | Q3_K | 4.05GB |
| [Mistral_Pro_8B_v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q3_K_M.gguf) | Q3_K_M | 4.05GB |
| [Mistral_Pro_8B_v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q3_K_L.gguf) | Q3_K_L | 4.41GB |
| [Mistral_Pro_8B_v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.IQ4_XS.gguf) | IQ4_XS | 4.55GB |
| [Mistral_Pro_8B_v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q4_0.gguf) | Q4_0 | 4.74GB |
| [Mistral_Pro_8B_v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.IQ4_NL.gguf) | IQ4_NL | 4.79GB |
| [Mistral_Pro_8B_v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q4_K_S.gguf) | Q4_K_S | 4.78GB |
| [Mistral_Pro_8B_v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q4_K.gguf) | Q4_K | 5.04GB |
| [Mistral_Pro_8B_v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q4_K_M.gguf) | Q4_K_M | 5.04GB |
| [Mistral_Pro_8B_v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q4_1.gguf) | Q4_1 | 5.26GB |
| [Mistral_Pro_8B_v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q5_0.gguf) | Q5_0 | 5.77GB |
| [Mistral_Pro_8B_v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q5_K_S.gguf) | Q5_K_S | 5.77GB |
| [Mistral_Pro_8B_v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q5_K.gguf) | Q5_K | 5.93GB |
| [Mistral_Pro_8B_v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q5_K_M.gguf) | Q5_K_M | 5.93GB |
| [Mistral_Pro_8B_v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q5_1.gguf) | Q5_1 | 6.29GB |
| [Mistral_Pro_8B_v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q6_K.gguf) | Q6_K | 6.87GB |
| [Mistral_Pro_8B_v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf/blob/main/Mistral_Pro_8B_v0.1.Q8_0.gguf) | Q8_0 | 8.89GB |
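As a rough illustration of how one of these files can be used, the sketch below downloads a quant with `huggingface_hub` and runs it with the `llama-cpp-python` bindings. The repo id and filename come from the table above; the choice of Q4_K_M, the context size, and the prompt are illustrative assumptions, not recommendations from the original card.

```python
# Minimal sketch (not from the original card): fetch one quant and run a prompt
# with llama-cpp-python. Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo and filename taken from the table above; Q4_K_M is a common quality/size trade-off.
model_path = hf_hub_download(
    repo_id="RichardErkhov/TencentARC_-_Mistral_Pro_8B_v0.1-gguf",
    filename="Mistral_Pro_8B_v0.1.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context length is an illustrative choice
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```

Smaller quants (Q2_K, Q3_K_*) trade answer quality for memory; Q6_K and Q8_0 stay closest to the original weights at roughly the cost of a full 8B model.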
Original model description:
---
license: apache-2.0
datasets:
- HuggingFaceTB/cosmopedia
- EleutherAI/proof-pile-2
- bigcode/the-stack-dedup
- math-ai/AutoMathText
language:
- en
metrics:
- accuracy
- code_eval
---

# Mistral-Pro-8B Model Card

## Model Description

Mistral-Pro is a progressive version of the original [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) model, enhanced by the addition of Transformer blocks. It specializes in integrating both general language understanding and domain-specific knowledge, particularly in programming and mathematics.

## Development and Training

Developed by Tencent's ARC Lab, Mistral-Pro is an 8-billion-parameter model. It is an expansion of Mistral-7B, further trained on code and math corpora.

## Intended Use

This model is designed for a wide range of NLP tasks, with a focus on programming, mathematics, and general language tasks. It suits scenarios that require integrating natural and programming languages.

## Performance

Mistral_Pro_8B_v0.1 shows strong performance across a range of benchmarks. It improves on Mistral's code and math performance and matches that of the recently released [Gemma](https://huggingface.co/google/gemma-7b).

### Overall Performance on Language, Math, and Code Tasks

| Model | ARC | Hellaswag | MMLU | TruthfulQA | Winogrande | GSM8K | HumanEval |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| Gemma-7B | 61.9 | 82.2 | 64.6 | 44.8 | 79.0 | 50.9 | 32.3 |
| Mistral-7B | 60.8 | 83.3 | 62.7 | 42.6 | 78.0 | 39.2 | 28.7 |
| Mistral_Pro_8B_v0.1 | 63.2 | 82.6 | 60.6 | 48.3 | 78.9 | 50.6 | 32.9 |

## Limitations

While Mistral-Pro addresses some limitations of previous models in the series, it may still encounter challenges specific to highly specialized domains or tasks.

## Ethical Considerations

Users should be aware of potential biases in the model and use it responsibly, considering its impact on various applications.
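For comparison against these quants, the original full-precision model linked at the top of this card can be loaded with the 🤗 Transformers library. A minimal sketch, assuming standard `AutoModelForCausalLM` support for the upstream repo; the dtype, device placement, and prompt are illustrative choices, not from the original card:

```python
# Minimal sketch (assumption, not from the original card): load the upstream
# full-precision model with Transformers. Requires `pip install transformers accelerate`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TencentARC/Mistral_Pro_8B_v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # illustrative; fp16 or fp32 also work
    device_map="auto",            # needs the `accelerate` package
)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```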