---
language:
  - en
license: other
library_name: transformers
datasets:
  - mlabonne/orpo-dpo-mix-40k
tags:
  - 4-bit
  - AWQ
  - text-generation
  - autotrain_compatible
  - endpoints_compatible
  - orpo
  - llama 3
  - rlhf
  - sft
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---

# mlabonne/OrpoLlama-3-8B AWQ

## Model Summary

This is an ORPO fine-tune of meta-llama/Meta-Llama-3-8B on 1k samples of mlabonne/orpo-dpo-mix-40k, created for this article.

It's a successful fine-tune that follows the ChatML template!

Try the demo: https://huggingface.co/spaces/mlabonne/OrpoLlama-3-8B
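Since this is an AWQ checkpoint, it can be loaded through the standard transformers AWQ path. Below is a minimal loading sketch, not a verified recipe: the repo id `solidrust/OrpoLlama-3-8B-AWQ` is an assumption (adjust it to wherever this quant is actually hosted), and it assumes `autoawq` is installed alongside a recent `transformers`.

```python
# Minimal AWQ loading sketch (assumes: pip install autoawq transformers)
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- replace with the actual location of this quant.
model_id = "solidrust/OrpoLlama-3-8B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers detects the AWQ quantization config in the checkpoint
# and dispatches to the AWQ kernels automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```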

πŸ”Ž Application

This model uses a context window of 8k tokens. It was trained with the ChatML template.
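Because the tokenizer ships with the ChatML chat template, prompts can be built with `apply_chat_template` rather than by hand. A short sketch, assuming `model` and `tokenizer` are the objects loaded above and using placeholder message contents:

```python
# Prompting sketch via the built-in ChatML chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain ORPO in one sentence."},
]

# add_generation_prompt appends the assistant header so the model
# continues as the assistant turn.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```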

πŸ† Evaluation

### Nous

OrpoLlama-3-8B outperforms Llama-3-8B-Instruct on the GPT4All and TruthfulQA datasets.