Files changed (1)
  1. README.md +9 -0
README.md ADDED
@@ -0,0 +1,9 @@
+ ---
+ license: llama3
+ ---
+
+ # Smaug-Llama-3-70B-Instruct-ExPO
+
+ The extrapolated (ExPO) model based on [`abacusai/Smaug-Llama-3-70B-Instruct`](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct) and [`meta-llama/Meta-Llama-3-70B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct), as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper.
+
+ Specifically, we obtain this model by extrapolating from the weights of the SFT and DPO/RLHF checkpoints, achieving superior alignment with human preferences.
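For reference, a minimal sketch of the extrapolation step described in the README above, following the update rule from the ExPO paper (theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft)). The function name, the `alpha` value, and operating on plain `state_dict`s are illustrative assumptions, not part of this repository:

```python
import torch


def expo_extrapolate(sft_state_dict, dpo_state_dict, alpha=0.3):
    """Weight extrapolation (ExPO): move beyond the DPO/RLHF checkpoint,
    away from the SFT checkpoint:

        theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft)

    `alpha` is the extrapolation coefficient; 0.3 is only an illustrative value.
    """
    expo_state_dict = {}
    for name, dpo_weight in dpo_state_dict.items():
        sft_weight = sft_state_dict[name]
        # Extrapolate each parameter tensor along the SFT -> DPO direction.
        expo_state_dict[name] = dpo_weight + alpha * (dpo_weight - sft_weight)
    return expo_state_dict
```

In this sketch, the two input `state_dict`s would hold the parameters of the weaker (SFT-style) and the aligned (DPO/RLHF) checkpoints with identical keys and shapes; for 70B-scale models the tensors would typically be processed shard by shard rather than held in memory at once.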