weqweasdas committed · Commit 4c7b94c · verified · 1 Parent(s): b29ef86

Update README.md

Files changed (1): README.md (+4, -2)
README.md CHANGED
````diff
@@ -9,7 +9,7 @@ This is the SFT checkpoint used for the project [RLHFlow/Online-RLHF](https://gi
 * **Authors**: Hanze Dong*, Wei Xiong*, Bo Pang*, Haoxiang Wang*, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, Tong Zhang
 * **Code**: https://github.com/RLHFlow/Online-RLHF
 
-The model is trained from [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on a mixture of diverse open-source high-quality data for 1 epoch with detailed parameters in the report. It has not been trained by RLHF and can serve as a good starting point for the RLHF research.
+The model is trained from [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on [RLHFlow/RLHFlow-SFT-Dataset-ver2](https://huggingface.co/datasets/RLHFlow/RLHFlow-SFT-Dataset-ver2) for 2 epochs. We use a global batch size of 128 and a learning rate of 2e-6, where we pack the samples and split them into chunks of 8192 token. See more training details at https://github.com/RLHFlow/Online-RLHF/blob/main/sft/llama3-8b-it.yaml .
 
 
 ## Academic Benchmarks
@@ -36,4 +36,6 @@ Please cite our techical report if you find our model is useful for your researc
 primaryClass={cs.LG}
 }
 
-```
+```
+
+
````
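The updated line describes how the SFT data is prepared: samples are packed together and split into chunks of 8192 tokens. Below is a minimal sketch of that packing step, assuming a standard Hugging Face tokenizer; the `pack_samples` helper and the example texts are hypothetical and not taken from the Online-RLHF repository, whose actual training configuration lives in `sft/llama3-8b-it.yaml`.

```python
# Minimal sketch of packing tokenized samples into fixed-length chunks
# (hypothetical helper, not the Online-RLHF training code).
from transformers import AutoTokenizer

CHUNK_LEN = 8192  # chunk length described in the updated README line


def pack_samples(texts, tokenizer, chunk_len=CHUNK_LEN):
    """Tokenize samples, concatenate them into one stream, and cut equal-length chunks."""
    stream = []
    for text in texts:
        ids = tokenizer(text, add_special_tokens=False)["input_ids"]
        stream.extend(ids + [tokenizer.eos_token_id])  # EOS marks sample boundaries
    n_chunks = len(stream) // chunk_len  # drop the incomplete trailing remainder
    return [stream[i * chunk_len:(i + 1) * chunk_len] for i in range(n_chunks)]


# Illustrative usage with the base model's tokenizer; a tiny chunk_len is used
# here only so the toy inputs actually yield chunks.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
chunks = pack_samples(["first example text", "second example text"], tokenizer, chunk_len=8)
```

The other hyperparameters mentioned in the changed line (2 epochs, global batch size 128, learning rate 2e-6) are set in that YAML file rather than in the packing step itself.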