---
license: apache-2.0
language:
- en
---

# zephyr_0.1

A DPO-trained model initialized from `alignment-handbook/zephyr-7b-sft-full` and trained on 10% of the `HuggingFaceH4/ultrafeedback_binarized` dataset, following the paper "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)".
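
## Usage

A minimal sketch of loading the model with 🤗 Transformers, assuming it is published under a repository id like `<org>/zephyr_0.1` (placeholder; substitute the actual path) and that it keeps the Zephyr chat template from the SFT base model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<org>/zephyr_0.1"  # hypothetical repo id; replace with the real one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # requires `accelerate`
)

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```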