cloudyu committed on
Commit
6ba7b5a
1 Parent(s): 06854ed

Create README.md

Files changed (1)
  1. README.md +14 -0
README.md ADDED
@@ -0,0 +1,14 @@
+ ---
+ license: mit
+ tags:
+ - moe
+ - DPO
+ - RL-TUNED
+ ---
+
+ * Trained with the [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer) on the jondurbin/truthy-dpo-v0.1 dataset to improve [cloudyu/Mixtral_34Bx2_MoE_60B](https://huggingface.co/cloudyu/Mixtral_34Bx2_MoE_60B)
+ ```
+ DPO Trainer
+ TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
+ ```
+ * Metrics NOT tested yet!
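
For reference, a minimal sketch of how a DPO run like the one described above could be set up with TRL's `DPOTrainer`. This is not the author's actual training script: the model and dataset IDs come from the card, while the hyperparameters, the output path, and the assumption of a TRL version that accepts `beta` and `tokenizer` directly are illustrative.

```python
# Illustrative DPO fine-tuning sketch with TRL; hyperparameters are guesses,
# not the settings used for this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_model = "cloudyu/Mixtral_34Bx2_MoE_60B"  # base MoE model named in the card

# Preference dataset with prompt/chosen/rejected pairs; extra columns
# (e.g. a system prompt) may need to be merged into `prompt` or dropped.
dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")

model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

training_args = TrainingArguments(
    output_dir="mixtral-60b-dpo",        # hypothetical output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                      # TRL clones the policy as the frozen reference model
    args=training_args,
    beta=0.1,                            # DPO temperature (illustrative value)
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

In practice a model of this size would also need multi-GPU sharding (e.g. DeepSpeed or FSDP) or parameter-efficient fine-tuning on top of this sketch.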