---
tags:
- yi
- moe
license: apache-2.0
---

This is another DPO fine-tuned MoE model, with all linear parameters trained, based on [TomGrc/FusionNet_34Bx2_MoE_v0.1](https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE_v0.1).

It was trained on an H100 for one hour.

```
DPO Trainer

TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
```
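
For reference, here is a minimal sketch of what such a run might look like with TRL's `DPOTrainer` (trl 0.7-style API; newer TRL versions move several of these arguments into `DPOConfig`). The dataset placeholder, hyperparameters, and output path are assumptions for illustration, not the actual training configuration of this model:

```python
# Illustrative DPO fine-tuning sketch; not the exact recipe used here.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "TomGrc/FusionNet_34Bx2_MoE_v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)      # policy being optimized
ref_model = AutoModelForCausalLM.from_pretrained(base_id)  # frozen reference copy

# Placeholder: any preference dataset with "prompt", "chosen", "rejected" columns.
train_dataset = load_dataset("your/preference-dataset", split="train")

training_args = TrainingArguments(
    output_dir="./dpo-output",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    bf16=True,
    logging_steps=10,
)

trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    beta=0.1,  # strength of the KL penalty toward the reference model
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```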

Metrics have not been tested yet.
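
For inference, a standard `transformers` snippet should work; `REPO_ID` below is a placeholder for this repository's model id:

```python
# Minimal inference sketch; replace REPO_ID with this model's repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "REPO_ID"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain mixture-of-experts models in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```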