---
license: llama3.2
datasets:
- OctoThinker/MegaMath-Web-Pro-Max
- LLM360/MegaMath
language:
- en
base_model:
- meta-llama/Llama-3.2-3B
pipeline_tag: text-generation
---

# [OctoThinker: Mid-training Incentivizes Reinforcement Learning Scaling](https://arxiv.org/abs/2506.20512)

## OctoThinker-3B-Hybrid-Zero

The OctoThinker family builds on carefully studied mid-training insights, starting from the Llama-3 family, to create reinforcement-learning-friendly base language models.

OctoThinker-3B-Hybrid-Zero is trained with R1-Zero-style reinforcement learning, starting directly from OctoThinker-3B-Hybrid-Base without any supervised fine-tuning (SFT).
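
Below is a minimal inference sketch using the Hugging Face `transformers` library. The repo id `OctoThinker/OctoThinker-3B-Hybrid-Zero`, the prompt format, and the generation settings are illustrative assumptions rather than an official recipe; adjust them to this model page.

```python
# Minimal sketch (not an official example): load the model with transformers
# and generate a completion. The repo id and prompt format are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OctoThinker/OctoThinker-3B-Hybrid-Zero"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps a 3B model on a single GPU
    device_map="auto",
)

prompt = "Question: What is 15% of 240?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```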

### Training Recipe for OctoThinker-3B-Hybrid-Base

<div style="display: flex; justify-content: left; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/62cbeb2d72dfd24b86bdf977/olNNY0cy0wVxAQh2VwewO.png" alt="Training recipe for OctoThinker-3B-Hybrid-Base" style="width:90%;">
</div>

### Evaluation Results of the OctoThinker-3B-Base Series

Note that we use few-shot prompting to evaluate these base language models.

<div style="display: flex; justify-content: left; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/62cbeb2d72dfd24b86bdf977/UCZ9MahRYqLY0iKjiWMqS.png" alt="Evaluation results of the OctoThinker-3B-Base series" style="width:80%;">
</div>
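
For readers unfamiliar with few-shot prompting evaluation, the toy sketch below shows how such a prompt is typically assembled: a handful of worked examples are prepended to the test question and the base model is asked to continue the pattern. The exemplars and format here are made up for illustration and are not the actual evaluation harness or benchmark prompts.

```python
# Toy illustration of few-shot prompt construction (not the paper's harness).
few_shot_examples = [
    ("What is 3 + 5?", "3 + 5 = 8. The answer is 8."),
    ("What is 12 * 4?", "12 * 4 = 48. The answer is 48."),
]

def build_prompt(question: str) -> str:
    # Prepend the worked examples, then leave the final answer for the model.
    shots = "\n\n".join(
        f"Question: {q}\nAnswer: {a}" for q, a in few_shot_examples
    )
    return f"{shots}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("What is 7 * 9?"))
```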

### RL Training Dynamics of the OctoThinker-3B-Zero Series

<div style="display: flex; justify-content: left; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/62cbeb2d72dfd24b86bdf977/e21Eg8jj_ITxC4YcIJUmx.png" alt="RL training dynamics of the OctoThinker-3B-Zero series" style="width:80%;">
</div>

### More about OctoThinker

<div style="display: flex; justify-content: left; gap: 20px;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/62cbeb2d72dfd24b86bdf977/bn85CEB_DW6azJ7KJp11Q.png" alt="Overview of the OctoThinker family" style="width:100%;">
</div>

## Citation

Check out our [paper](https://arxiv.org/abs/2506.20512) for more details. If you use our models or datasets, or find our work useful, please cite:

```
@article{wang2025octothinker,
  title={OctoThinker: Mid-training Incentivizes Reinforcement Learning Scaling},
  author={Wang, Zengzhi and Zhou, Fan and Li, Xuefeng and Liu, Pengfei},
  year={2025},
  journal={arXiv preprint arXiv:2506.20512},
  note={Preprint}
}
```