sedrickkeh committed on
Commit 5877def · verified · 1 Parent(s): fbbf2e5

Update README.md

Files changed (1)
  1. README.md +69 -16
README.md CHANGED
@@ -1,37 +1,69 @@
  ---
  library_name: transformers
  license: apache-2.0
- base_model: Qwen/Qwen2.5-32B-Instruct
  tags:
  - llama-factory
  - full
  - generated_from_trainer
  model-index:
- - name: DCFT-Stratos-Verified-114k-32B-4gpus
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # DCFT-Stratos-Verified-114k-32B-4gpus

- This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the mlfoundations-dev/stratos_verified_mix dataset.

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

  ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
@@ -49,13 +81,34 @@ The following hyperparameters were used during training:
  - lr_scheduler_warmup_ratio: 0.1
  - num_epochs: 3.0

- ### Training results
-
-
-
  ### Framework versions

  - Transformers 4.46.1
  - Pytorch 2.3.0
  - Datasets 3.1.0
  - Tokenizers 0.20.3

  ---
  library_name: transformers
  license: apache-2.0
+ base_model: Qwen/Qwen2.5-32B-Instruct
  tags:
  - llama-factory
  - full
  - generated_from_trainer
  model-index:
+ - name: OpenThinker-32B
  results: []
+ datasets:
+ - open-thoughts/open-thoughts-114k
  ---

+ <p align="center">
+ <img src="https://huggingface.co/datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%">
+ </p>

+ # OpenThinker-32B

+ This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) on the
+ [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset.
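+
+ A minimal inference sketch with Hugging Face Transformers is shown below (illustrative, not the evaluated configuration; `device_map="auto"` assumes `accelerate` is installed):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "open-thoughts/OpenThinker-32B"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, torch_dtype="auto", device_map="auto"  # shard across available GPUs
+ )
+
+ messages = [{"role": "user", "content": "What is the derivative of x^2 * sin(x)?"}]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ # This model produces long reasoning traces, so allow a generous token budget.
+ outputs = model.generate(inputs, max_new_tokens=4096)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```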

+ The dataset is derived by distilling DeepSeek-R1 using the [data pipeline available on GitHub](https://github.com/open-thoughts/open-thoughts).
+ More information about the dataset can be found on the [OpenThoughts-114k dataset card](https://huggingface.co/datasets/open-thoughts/open-thoughts-114k).
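+
+ The mixture can be inspected directly with the `datasets` library; a minimal sketch (assuming the default config and `train` split):
+
+ ```python
+ from datasets import load_dataset
+
+ # Download the ~114k-row SFT mixture from the Hugging Face Hub
+ ds = load_dataset("open-thoughts/OpenThoughts-114k", split="train")
+
+ print(ds)     # column names and row count
+ print(ds[0])  # one distilled DeepSeek-R1 example
+ ```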

+ The numbers reported in the table below are evaluated with our open-source tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).

+ |Model Name|Dataset Size|AIME24 I/II|AIME25 I|MATH500|GPQA Diamond|LCBv2|
+ |---|---|---|---|---|---|---|
+ |LIMO-32B|0.8k|56.7|49.3|86.6|58.1|-|
+ |s1-32B|1k|36.0|25.3|84.8|50.5|40.9|
+ |s1.1-32B|1k|64.7|49.3|89.0|60.1|65.5|
+ |DeepSeek-R1-Distill-Qwen-32B|closed|**76.7**|**55.9**|89.4|57.6|**71.2**|
+ |**OpenThinker-32B**|114k|66.0|53.3|**90.6**|**61.6**|68.9|

+ We are fully open-source. Our [model weights](https://huggingface.co/open-thoughts), [datasets](https://huggingface.co/open-thoughts), [data generation code](https://github.com/open-thoughts/open-thoughts), [evaluation code](https://github.com/mlfoundations/Evalchemy), and [training code](https://github.com/hiyouga/LLaMA-Factory) are all publicly available.
+
+ | | Open Weights | Open Data | Open Code |
+ |--|--------------|-----------|-----------|
+ |OpenThinker-32B|✅|[✅](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)|[✅](https://github.com/open-thoughts/open-thoughts)|
+ |DeepSeek-R1-Distill-Qwen-32B|✅|❌|❌|
+ |OpenAI/Gemini|❌|❌|❌|

+ ## Intended uses & limitations
+
+ Apache 2.0 License

  ## Training procedure

 
57
+ We finetune [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
58
+ on [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) for
59
+ 3 epochs with a 16k context length using [LlamaFactory](https://github.com/hiyouga/LLaMA-Factory).
60
+ Our [full training configuration](https://github.com/open-thoughts/open-thoughts/blob/main/train/OpenThinker-32B.yaml)
61
+ is provided in [our repository](https://github.com/open-thoughts/open-thoughts/tree/main).
62
+ Training the 32B model on [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
63
+ was done on AWS SageMaker with 8xH100 P5 nodes. On 4 nodes, this took around 90 hours.
64
+ Meanwhile, for training on [OpenThoughts-Unverified-173k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Unverfied-173k),
65
+ we used 96 nodes of 4xA100 (64 GB per GPU), training took 30 hours, spending 11,520 A100 hours on the Leonardo Supercomputer.
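+
+ A quick arithmetic check on the Leonardo total (a trivial sketch):
+
+ ```python
+ # 96 nodes x 4 A100s per node x 30 hours of training
+ nodes, gpus_per_node, hours = 96, 4, 30
+ print(nodes * gpus_per_node * hours)  # -> 11520 A100-hours, matching the figure above
+ ```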

  ### Training hyperparameters

  The following hyperparameters were used during training:

  - lr_scheduler_warmup_ratio: 0.1
  - num_epochs: 3.0

  ### Framework versions

  - Transformers 4.46.1
  - Pytorch 2.3.0
  - Datasets 3.1.0
  - Tokenizers 0.20.3
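+
+ To confirm a local environment matches these pins, a quick check (a minimal sketch):
+
+ ```python
+ import datasets, tokenizers, torch, transformers
+
+ # Expected per the list above: 4.46.1, 2.3.0, 3.1.0, 0.20.3
+ for mod in (transformers, torch, datasets, tokenizers):
+     print(mod.__name__, mod.__version__)
+ ```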
+
+ More info can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts).
+
+ # Citation
+ ```
+ @misc{openthoughts,
+   author = {Team, OpenThoughts},
+   month = jan,
+   title = {{Open Thoughts}},
+   howpublished = {https://open-thoughts.ai},
+   year = {2025}
+ }
+ ```
+
+ # Links
+ - 📊 [Open Thoughts Launch Blog Post](https://www.open-thoughts.ai/blog/launch)
+ - 📊 [Open Thoughts Measuring Reasoning with Evalchemy Blog Post](https://www.open-thoughts.ai/blog/measure)
+ - 📊 [Open Thoughts OpenThinker-32B Post](https://www.open-thoughts.ai/blog/openthinker-32b)
+ - 💻 [Open Thoughts GitHub Repository](https://github.com/open-thoughts/open-thoughts)
+ - 🧠 [OpenThoughts-114k dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
+ - 🧠 [OpenThoughts-Unverified-173k dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Unverified-173k)
+ - 🤖 [OpenThinker-7B model](https://huggingface.co/open-thoughts/OpenThinker-7B)
+ - 🤖 [OpenThinker-7B-Unverified model](https://huggingface.co/open-thoughts/OpenThinker-7B-Unverified)
+ - 🤖 [OpenThinker-32B model](https://huggingface.co/open-thoughts/OpenThinker-32B) - this model
+ - 🤖 [OpenThinker-32B-Unverified model](https://huggingface.co/open-thoughts/OpenThinker-32B-Unverified)