---
license: apache-2.0
---

# TROLL Data

This repo contains the datasets used in

**TROLL: Trust Regions improve Reinforcement Learning for Large Language Models**
(Philipp Becker∗, Niklas Freymuth∗, Serge Thilges, Fabian Otto, Gerhard Neumann; ∗shared first author)

- Paper: https://arxiv.org/abs/2510.03817
- Project Page: https://niklasfreymuth.github.io/troll/
- Code on GitHub: https://github.com/niklasfreymuth/TROLL

## Datasets

**GSM8k**

Taken from https://huggingface.co/datasets/openai/gsm8k

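For reference, the upstream dataset can be loaded with the `datasets` library (a minimal sketch; the config and column names follow the upstream dataset card):

```python
from datasets import load_dataset

# "main" is the standard GSM8k configuration; each example has "question" and "answer" fields.
gsm8k = load_dataset("openai/gsm8k", "main")
print(gsm8k["train"][0]["question"])
```
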
**DAPO**

We build DAPO Train and DAPO Eval from the version of the DAPO-Math dataset provided by Cui et al. (2025) (https://github.com/PRIME-RL/Entropy-Mechanism-of-RL).
From their original training set, we set aside 1024 samples as an in-domain validation set (DAPO Eval), leaving 16,893 samples for DAPO Train.
For broader out-of-distribution evaluation, we again follow Cui et al. (2025) and use a benchmark suite, which we refer to as Math-Eval, consisting of MATH500, AMC, AIME2024, AIME2025, OMNI-MATH, OlympiadBench, and Minerva.
We again build on the data provided by Cui et al. (2025) and follow their protocol by computing the mean over 32 responses for the small but hard AMC, AIME2024, and AIME2025 datasets, while only considering a single response for the other sets.

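A minimal sketch of the split and of the Math-Eval scoring protocol described above, assuming the DAPO-Math training file has been downloaded locally (the file name, seed, and scoring helper are illustrative assumptions, not the exact pipeline):

```python
import numpy as np
from datasets import load_dataset

# Illustrative only: load the DAPO-Math training data released by Cui et al. (2025).
# The file name is an assumption; see https://github.com/PRIME-RL/Entropy-Mechanism-of-RL.
dapo = load_dataset("parquet", data_files="dapo_math_train.parquet", split="train")

# Hold out 1024 samples as the in-domain validation set (DAPO Eval);
# the remaining 16,893 samples form DAPO Train. The seed is an assumption.
split = dapo.train_test_split(test_size=1024, shuffle=True, seed=0)
dapo_train, dapo_eval = split["train"], split["test"]

# Math-Eval scoring sketch: AMC, AIME2024, and AIME2025 average correctness over
# 32 sampled responses per question; the other benchmarks use a single response.
def score(correct_flags_per_question):
    # correct_flags_per_question: one list of per-response 0/1 flags (length 32 or 1) per question
    return float(np.mean([np.mean(flags) for flags in correct_flags_per_question]))
```
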
Finally, we ensure that all three datasets use the same system preprompt and include correct, identical instructions for answer formatting.

*Preprompt*:
```
Your task is to follow a systematic, thorough reasoning process before providing the final solution.
This involves analyzing, summarizing, exploring, reassessing, and refining your thought process through multiple iterations.
Structure your response into two sections: Thought and Solution.
In the Thought section, present your reasoning using the format: "<think> {thoughts} </think>".
```

(Ganqu Cui, Yuchen Zhang, Jiacheng Chen, Lifan Yuan, Zhi Wang, Yuxin Zuo, Haozhan Li, Yuchen Fan, Huayu Chen, Weize Chen, et al. The entropy mechanism of reinforcement learning for reasoning language models. 2025.)

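To illustrate how the shared preprompt is attached, each question can be wrapped into chat-style messages roughly as follows (a sketch; the `question` column name and the message format are assumptions, and the actual TROLL pipeline may differ):

```python
SYSTEM_PREPROMPT = (
    "Your task is to follow a systematic, thorough reasoning process before providing the final solution. "
    "This involves analyzing, summarizing, exploring, reassessing, and refining your thought process through multiple iterations. "
    "Structure your response into two sections: Thought and Solution. "
    'In the Thought section, present your reasoning using the format: "<think> {thoughts} </think>".'
)

def to_chat(example):
    # "question" is a hypothetical column name; check the parquet schema of each split.
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PREPROMPT},
            {"role": "user", "content": example["question"]},
        ]
    }

# Applied identically to GSM8k, DAPO, and EURUS, e.g. dataset.map(to_chat)
```
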
**EURUS**

We use the train and validation sets of Eurus-2-RL-Data (https://huggingface.co/datasets/PRIME-RL/Eurus-2-RL-Data), filtered for math questions, resulting in 455,261 training and 1,024 evaluation questions.

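A rough sketch of the math filter, assuming the upstream parquet files expose the task type through a column such as `ability` (the column name and value are assumptions; check the upstream dataset card for the actual schema):

```python
from datasets import load_dataset

# Load the upstream Eurus-2-RL-Data splits from the Hub.
eurus = load_dataset("PRIME-RL/Eurus-2-RL-Data")

# Keep only math questions; "ability" == "math" is an assumed schema, not verified here.
eurus_math = eurus.filter(lambda ex: ex["ability"] == "math")
print({name: len(ds) for name, ds in eurus_math.items()})
```
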
## Citation

```
@article{becker2025troll,
  title={TROLL: Trust Regions improve Reinforcement Learning for Large Language Models},
  author={Becker, Philipp and Freymuth, Niklas and Thilges, Serge and Otto, Fabian and Neumann, Gerhard},
  journal={arXiv preprint arXiv:2510.03817},
  year={2025}
}
```