---
language:
- en
license: mit
base_model:
- mistralai/Mistral-7B-v0.1
datasets:
- argilla/ultrafeedback-binarized-preferences-cleaned
pipeline_tag: text-generation
model-index:
- name: Mistral-ORPO-β
  results:
  # AI2 Reasoning Challenge (25-Shot)
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      name: normalized accuracy
      value: 61.18
    source:
      name: Open LLM Leaderboard
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaist-ai%2Fmistral-orpo-beta
  # HellaSwag (10-shot)
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      name: normalized accuracy
      value: 84.03
    source:
      name: Open LLM Leaderboard
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaist-ai%2Fmistral-orpo-beta
  # TruthfulQA (0-shot)
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 47.69
    source:
      name: Open LLM Leaderboard
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaist-ai%2Fmistral-orpo-beta
  # GSM8k (5-shot)
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      name: accuracy
      value: 39.8
    source:
      name: Open LLM Leaderboard
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaist-ai%2Fmistral-orpo-beta
  # MMLU (5-Shot)
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      name: accuracy
      value: 63.26
    source:
      name: Open LLM Leaderboard
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaist-ai%2Fmistral-orpo-beta
  # Winogrande (5-shot)
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      name: accuracy
      value: 79.24
    source:
      name: Open LLM Leaderboard
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaist-ai%2Fmistral-orpo-beta
  - task:
      type: text-generation
    dataset:
      name: AlpacaEval 1
      type: AlpacaEval
    metrics:
    - type: AlpacaEval 1.0
      value: 91.16%
      name: Win Rate
    source:
      url: https://tatsu-lab.github.io/alpaca_eval/
      name: Leaderboard
  - task:
      type: text-generation
    dataset:
      name: AlpacaEval 2
      type: AlpacaEval
    metrics:
    - type: AlpacaEval 2.0
      value: 12.57%
      name: Win Rate
    source:
      url: https://tatsu-lab.github.io/alpaca_eval/
      name: Leaderboard
  - task:
      type: text-generation
    dataset:
      name: MT-Bench
      type: MT-Bench
    metrics:
    - type: MT-Bench
      value: 7.322
      name: Score
    source:
      url: https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/
      name: self-reported
quantized_by: bartowski
---

## Llamacpp Quantizations of mistral-orpo-beta

Using llama.cpp release b2440 for quantization.
Original model: https://huggingface.co/kaist-ai/mistral-orpo-beta

Download a file (not the whole branch) from the table below, or see the `huggingface_hub` sketch at the end of this card:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mistral-orpo-beta-Q8_0.gguf](https://huggingface.co/bartowski/mistral-orpo-beta-GGUF/blob/main/mistral-orpo-beta-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
| [mistral-orpo-beta-Q6_K.gguf](https://huggingface.co/bartowski/mistral-orpo-beta-GGUF/blob/main/mistral-orpo-beta-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
| [mistral-orpo-beta-Q5_K_M.gguf](https://huggingface.co/bartowski/mistral-orpo-beta-GGUF/blob/main/mistral-orpo-beta-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
| [mistral-orpo-beta-Q5_K_S.gguf](https://huggingface.co/bartowski/mistral-orpo-beta-GGUF/blob/main/mistral-orpo-beta-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
| [mistral-orpo-beta-Q5_0.gguf](https://huggingface.co/bartowski/mistral-orpo-beta-GGUF/blob/main/mistral-orpo-beta-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
| [mistral-orpo-beta-Q4_K_M.gguf](https://huggingface.co/bartowski/mistral-orpo-beta-GGUF/blob/main/mistral-orpo-beta-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. |
| [mistral-orpo-beta-Q4_K_S.gguf](https://huggingface.co/bartowski/mistral-orpo-beta-GGUF/blob/main/mistral-orpo-beta-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
| [mistral-orpo-beta-Q4_0.gguf](https://huggingface.co/bartowski/mistral-orpo-beta-GGUF/blob/main/mistral-orpo-beta-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
| [mistral-orpo-beta-Q3_K_L.gguf](https://huggingface.co/bartowski/mistral-orpo-beta-GGUF/blob/main/mistral-orpo-beta-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
| [mistral-orpo-beta-Q3_K_M.gguf](https://huggingface.co/bartowski/mistral-orpo-beta-GGUF/blob/main/mistral-orpo-beta-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
| [mistral-orpo-beta-Q3_K_S.gguf](https://huggingface.co/bartowski/mistral-orpo-beta-GGUF/blob/main/mistral-orpo-beta-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
| [mistral-orpo-beta-Q2_K.gguf](https://huggingface.co/bartowski/mistral-orpo-beta-GGUF/blob/main/mistral-orpo-beta-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
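
## Downloading with huggingface_hub

If you'd rather script the download than click through the browser, the `huggingface_hub` Python package can fetch a single quant file from this repo. A minimal sketch (pick any filename from the table above; Q6_K shown here since it's the recommended quant):

```python
from huggingface_hub import hf_hub_download

# Downloads one file (not the whole branch) into the local HF cache
# and returns the path to it.
model_path = hf_hub_download(
    repo_id="bartowski/mistral-orpo-beta-GGUF",
    filename="mistral-orpo-beta-Q6_K.gguf",  # any quant from the table works
)
print(model_path)
```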
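
## Running with llama-cpp-python

Any GGUF-compatible runtime can load these files. Below is a rough sketch using the `llama-cpp-python` bindings; the context size and GPU-offload settings are illustrative rather than tuned recommendations, and the prompt format comes from the chat template stored in the GGUF (check the original model card if you need the exact format):

```python
from llama_cpp import Llama

# Path to a quant downloaded from this repo (e.g. via hf_hub_download above).
model_path = "mistral-orpo-beta-Q6_K.gguf"

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window; raise or lower to fit your RAM
    n_gpu_layers=-1,  # offload all layers if built with GPU support; 0 = CPU only
)

# create_chat_completion applies the chat template from the GGUF metadata,
# so the prompt format doesn't need to be hardcoded here.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what ORPO fine-tuning does."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```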