
Quantization made by Richard Erkhov.

- GitHub
- Discord
- Request more models

Truthful_DPO_MOE_19B - GGUF

Original model description:

license: other
tags:
- moe
- DPO
- RL-TUNED

DPO Trainer

TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
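As a rough illustration of that workflow, the sketch below shows minimal DPO fine-tuning with TRL. The base model name and the preference dataset are placeholders, not the setup used for this model, and keyword arguments such as processing_class vary between TRL releases (older versions call it tokenizer).

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Placeholder base checkpoint; the card does not document the exact model used.
base = "your-base-model"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Any preference dataset with chosen/rejected pairs works; this one is only an example.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(
    output_dir="dpo-output",
    beta=0.1,                      # strength of the KL penalty toward the reference model
    per_device_train_batch_size=1,
)

trainer = DPOTrainer(
    model=model,                   # ref_model defaults to a frozen copy of `model`
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```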


Model details:
- Format: GGUF
- Model size: 19.2B params
- Architecture: llama
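To run one of these GGUF files locally, a common option is llama.cpp or its Python bindings (llama-cpp-python). The sketch below assumes a quantized file has already been downloaded from this repo; the filename is a placeholder for whichever quantization level you pick.

```python
from llama_cpp import Llama

# Placeholder filename: substitute the GGUF file you downloaded from this repo.
llm = Llama(model_path="Truthful_DPO_MOE_19B.Q4_K_M.gguf", n_ctx=4096)

output = llm(
    "Q: What does DPO stand for?\nA:",
    max_tokens=64,
    stop=["\n"],
)
print(output["choices"][0]["text"])
```

Alternatively, a single GGUF file can be fetched programmatically with huggingface_hub's hf_hub_download by passing the repo id and filename.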