---
language:
- en
license: mit
base_model:
- mistralai/Mistral-7B-v0.1
datasets:
- HuggingFaceH4/ultrafeedback_binarized
pipeline_tag: text-generation
model-index:
- name: Mistral-ORPO-⍺
  results:
  - task:
      type: text-generation
    dataset:
      name: AlpacaEval 1
      type: AlpacaEval
    metrics:
    - type: AlpacaEval 1.0
      value: 87.92%
      name: Win Rate
    source:
      url: https://github.com/tatsu-lab/alpaca_eval
      name: self-reported
  - task:
      type: text-generation
    dataset:
      name: AlpacaEval 2
      type: AlpacaEval
    metrics:
    - type: AlpacaEval 2.0
      value: 11.33%
      name: Win Rate
    source:
      url: https://github.com/tatsu-lab/alpaca_eval
      name: self-reported
  - task:
      type: text-generation
    dataset:
      name: MT-Bench
      type: MT-Bench
    metrics:
    - type: MT-Bench
      value: 7.23
      name: Score
    source:
      url: https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/
      name: self-reported
quantized_by: bartowski
---

## Llamacpp Quantizations of mistral-orpo-alpha

Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2440">b2440</a> for quantization.

Original model: https://huggingface.co/kaist-ai/mistral-orpo-alpha
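For context, this roughly follows the usual llama.cpp workflow: convert the original Hugging Face checkpoint to a full-precision GGUF, then run the `quantize` tool once per target type. The sketch below is only illustrative; the script name, flags, and file paths are assumptions based on llama.cpp around release b2440, not the exact commands used for this repo.

```python
import subprocess

# Hypothetical local paths; adjust to your own llama.cpp checkout (around b2440)
# and a local copy of kaist-ai/mistral-orpo-alpha.
HF_MODEL_DIR = "./mistral-orpo-alpha"
F16_GGUF = "./mistral-orpo-alpha-f16.gguf"

# 1) Convert the Hugging Face weights to an f16 GGUF file.
subprocess.run(
    ["python", "convert.py", HF_MODEL_DIR, "--outtype", "f16", "--outfile", F16_GGUF],
    check=True,
)

# 2) Quantize the f16 GGUF into each target type from the table below.
for quant_type in ["Q8_0", "Q6_K", "Q5_K_M", "Q4_K_M", "Q2_K"]:
    subprocess.run(
        ["./quantize", F16_GGUF, f"./mistral-orpo-alpha-{quant_type}.gguf", quant_type],
        check=True,
    )
```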
Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [mistral-orpo-alpha-Q8_0.gguf](https://huggingface.co/bartowski/mistral-orpo-alpha-GGUF/blob/main/mistral-orpo-alpha-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
| [mistral-orpo-alpha-Q6_K.gguf](https://huggingface.co/bartowski/mistral-orpo-alpha-GGUF/blob/main/mistral-orpo-alpha-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
| [mistral-orpo-alpha-Q5_K_M.gguf](https://huggingface.co/bartowski/mistral-orpo-alpha-GGUF/blob/main/mistral-orpo-alpha-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
| [mistral-orpo-alpha-Q5_K_S.gguf](https://huggingface.co/bartowski/mistral-orpo-alpha-GGUF/blob/main/mistral-orpo-alpha-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
| [mistral-orpo-alpha-Q5_0.gguf](https://huggingface.co/bartowski/mistral-orpo-alpha-GGUF/blob/main/mistral-orpo-alpha-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
| [mistral-orpo-alpha-Q4_K_M.gguf](https://huggingface.co/bartowski/mistral-orpo-alpha-GGUF/blob/main/mistral-orpo-alpha-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. |
| [mistral-orpo-alpha-Q4_K_S.gguf](https://huggingface.co/bartowski/mistral-orpo-alpha-GGUF/blob/main/mistral-orpo-alpha-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
| [mistral-orpo-alpha-Q4_0.gguf](https://huggingface.co/bartowski/mistral-orpo-alpha-GGUF/blob/main/mistral-orpo-alpha-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
| [mistral-orpo-alpha-Q3_K_L.gguf](https://huggingface.co/bartowski/mistral-orpo-alpha-GGUF/blob/main/mistral-orpo-alpha-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
| [mistral-orpo-alpha-Q3_K_M.gguf](https://huggingface.co/bartowski/mistral-orpo-alpha-GGUF/blob/main/mistral-orpo-alpha-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
| [mistral-orpo-alpha-Q3_K_S.gguf](https://huggingface.co/bartowski/mistral-orpo-alpha-GGUF/blob/main/mistral-orpo-alpha-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
| [mistral-orpo-alpha-Q2_K.gguf](https://huggingface.co/bartowski/mistral-orpo-alpha-GGUF/blob/main/mistral-orpo-alpha-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |
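If you prefer to script the download of a single file rather than cloning the whole repo, the `huggingface_hub` Python library can be used. This is a minimal sketch, assuming `huggingface_hub` is installed and using the Q4_K_M file from the table above as an example; the optional last step additionally assumes the `llama-cpp-python` package is available.

```python
from huggingface_hub import hf_hub_download

# Fetch just one GGUF file from this repository (not the whole branch).
model_path = hf_hub_download(
    repo_id="bartowski/mistral-orpo-alpha-GGUF",
    filename="mistral-orpo-alpha-Q4_K_M.gguf",
)
print(f"Downloaded to: {model_path}")

# Optional: load and run the file with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path=model_path, n_ctx=4096)
result = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])
```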
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski