---
base_model: mlabonne/OrpoLlama-3-8B
language:
- en
license: other
library_name: transformers
datasets:
- mlabonne/orpo-dpo-mix-40k
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- orpo
- llama 3
- rlhf
- sft
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# mlabonne/OrpoLlama-3-8B AWQ
- Model creator: [mlabonne](https://huggingface.co/mlabonne)
- Original model: [OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B)
![](https://i.imgur.com/ZHwzQvI.png)
## Model Summary
This is an ORPO fine-tune of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on 1k samples of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k) created for [this article](https://huggingface.co/blog/mlabonne/orpo-llama-3).
It's a successful fine-tune that follows the ChatML template!
**Try the demo**: https://huggingface.co/spaces/mlabonne/OrpoLlama-3-8B
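## 💻 Usage

As a minimal sketch (not part of the original card), the 4-bit AWQ weights can be loaded with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) and the `transformers` tokenizer. The repo id below is a placeholder, since the exact repository hosting this quantized checkpoint is not stated in this section:

```python
# Minimal sketch: load the 4-bit AWQ weights with AutoAWQ.
# Requires: pip install autoawq transformers
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Placeholder repo id -- substitute the actual AWQ repository name.
quant_path = "<awq-repo-id>"

model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path)
```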
## 🔎 Application
This model uses a context window of 8k tokens and was trained with the ChatML template.
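A short sketch of building a ChatML prompt, assuming the tokenizer ships with the ChatML chat template as the card states; the example question is illustrative only:

```python
# Minimal sketch: format a conversation with the tokenizer's ChatML chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlabonne/OrpoLlama-3-8B")

messages = [
    {"role": "user", "content": "What is a large language model?"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # <|im_start|>user ... <|im_end|>\n<|im_start|>assistant
```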
## 🏆 Evaluation
### Nous
OrpoLlama-3-8B outperforms Llama-3-8B-Instruct on the GPT4All and TruthfulQA datasets.