
mlabonne/OrpoLlama-3-8B AWQ

Model Summary

This is an ORPO fine-tune of meta-llama/Meta-Llama-3-8B, trained on 1,000 samples from mlabonne/orpo-dpo-mix-40k for the accompanying article.
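For context, ORPO (Odds Ratio Preference Optimization) aligns a model on preference pairs without a separate reference model, by adding an odds-ratio penalty to the standard fine-tuning loss. A sketch of the general form of the objective (this is background on the method, not something stated in this card):

$$
\mathcal{L}_{\mathrm{ORPO}} = \mathcal{L}_{\mathrm{SFT}} + \lambda \cdot \mathcal{L}_{\mathrm{OR}},
\qquad
\mathcal{L}_{\mathrm{OR}} = -\log \sigma\!\left(\log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}\right)
$$

where $\mathrm{odds}_\theta(y \mid x) = \dfrac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}$, $y_w$ is the chosen completion, and $y_l$ the rejected one.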

It's a successful fine-tune that follows the ChatML template!

Try the demo: https://huggingface.co/spaces/mlabonne/OrpoLlama-3-8B

🔎 Application

This model uses a context window of 8k. It was trained with the ChatML template.
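Since the model follows the ChatML template, prompts should wrap each turn in `<|im_start|>`/`<|im_end|>` tokens. A minimal sketch of that formatting (the helper name and example messages are illustrative, not part of the card; in practice, `tokenizer.apply_chat_template` handles this for you):

```python
# Minimal sketch of ChatML prompt formatting, the template this model was
# trained with. Role names and special tokens follow the ChatML convention.

def format_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    prompt = ""
    for msg in messages:
        prompt += f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n"
    # Leave the assistant turn open so the model generates the reply.
    prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is ORPO?"},
]
print(format_chatml(messages))
```

With `transformers`, loading the tokenizer for this repo and calling `tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)` should produce an equivalent string.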

πŸ† Evaluation

Nous

OrpoLlama-3-8B outperforms Llama-3-8B-Instruct on the GPT4All and TruthfulQA datasets.

Model size: 1.98B params (Safetensors; tensor types: I32 · FP16)
