anton-l (HF staff) committed
Commit: e5110dc
Parent: a7dd0df

Update README.md

Files changed (1): README.md (+134 -6)
Before this commit (hunk @@ -144,15 +144,143 @@), the README ended after the YAML `configs` front matter with only a brief list of data subsets and a minimal `load_dataset` snippet:

* `finemath-3plus`: FineMath (English) with `int_score >= 3`
* `finemath-4plus`: FineMath (English) with `int_score >= 4`
* `infiwebmath-3plus`: Infi-WebMath (English-only) with `int_score >= 3`
* `infiwebmath-4plus`: Infi-WebMath (English-only) with `int_score >= 4`

The commit replaces that section with the full dataset card below.
# 📐 FineMath

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/0GAdY8wZx6bGtUzqX4Lvi.png)
## What is it?

📐 FineMath consists of **34B tokens** (FineMath-3+) and **54B tokens** (FineMath-3+ with InfiMM-WebMath-3+) of mathematical educational content filtered from CommonCrawl. To curate this dataset, we trained a mathematical content classifier using annotations generated by Llama-3.1-70B-Instruct. We used the classifier to retain only the most educational mathematics content, focusing on clear explanations and step-by-step problem solving rather than advanced academic papers.

The [Dataset Curation](#dataset-curation) section details the process for creating the dataset.

<img src="assets/train_curves.png" width="800"/>
## What is being released?

The dataset is released in two versions:
- **FineMath-3+**: 34B tokens, 21.4M documents containing mathematical reasoning and problem solving, formatted with Markdown and LaTeX.
- **FineMath-4+** (a subset of FineMath-3+): 9.6B tokens, 6.7M documents of higher quality with detailed explanations. Models trained on this dataset perform better on GSM8k and MATH.

<!-- (the image looks kinda meh) <img src="assets/stats.png" width="512"/> -->

We also release a filtered English text-only portion of the **[InfiMM-WebMath-40B](https://huggingface.co/datasets/Infi-MM/InfiMM-WebMath-40B)** dataset, classified using the same approach as FineMath:
- **InfiMM-WebMath-3+**: 20.5B tokens, 13.9M documents.
- **InfiMM-WebMath-4+** (a subset of InfiMM-WebMath-3+): 8.5B tokens, 6.3M documents.
## How to load the dataset

Use one of the available configs: `finemath-3plus`, `finemath-4plus`, `infiwebmath-3plus`, or `infiwebmath-4plus`.

```python
from datasets import load_dataset

# Load the high-quality subset
data = load_dataset("HuggingFaceTB/finemath", "finemath-4plus", split="train", num_proc=8)

# Or load the larger subset
data = load_dataset("HuggingFaceTB/finemath", "finemath-3plus", split="train", num_proc=8)
```
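If you only want to inspect a few documents without downloading a full subset, streaming mode works as well. A minimal sketch (the printed fields follow the schema listed further down):

```python
from datasets import load_dataset

# Stream the config instead of downloading all parquet shards up front
stream = load_dataset("HuggingFaceTB/finemath", "finemath-4plus", split="train", streaming=True)

# Peek at a few documents and their quality scores
for i, sample in enumerate(stream):
    print(sample["url"], sample["int_score"], sample["token_count"])
    if i == 2:
        break
```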
## Dataset curation

Recent language models like DeepSeekMath and MathStral have demonstrated strong mathematical capabilities, but they were trained on specialized datasets that aren't publicly available. We developed a pipeline to identify and extract high-quality mathematical content from CommonCrawl, with several iterations of refinement to improve quality.

### Phase 1: Initial content extraction and classification
We began by re-extracting pages from CommonCrawl WARCs using URLs from the FineWeb dataset, collecting both the latest and the largest version of each page to capture how pages evolved over the years.
Unlike FineWeb, which uses Trafilatura, we employed Resiliparse for text extraction, as it better preserves forum discussions and Q&A answers that often contain crucial reasoning steps and solutions.

For the initial quality assessment, we used [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct) to generate annotations on a 3-point scale:
1. Contains general mathematical content
2. Shows logical reasoning in a mathematical context
3. Contains clear step-by-step solutions at an appropriate level

A `multilingual-e5-small`-based classifier finetuned on these annotations was used to score the initial corpus.
However, this first version performed below the OpenWebMath baseline, leading to several important refinements.
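The scoring step with the finetuned `multilingual-e5-small` classifier can be approximated with a standard `transformers` sequence-classification setup. This is only a sketch: the checkpoint path and the single regression-style output head are assumptions, since this card does not link the Phase 1 classifier.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical checkpoint path; the actual Phase 1 classifier is not linked in this card
MODEL_PATH = "path/to/multilingual-e5-small-math-classifier"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH)
model.eval()

def score_page(text: str) -> float:
    # Truncate long pages to the encoder's context window
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assumes a single regression head predicting the educational-quality rating
    return logits.squeeze().item()

print(score_page("To solve 2x + 3 = 7, subtract 3 from both sides, then divide by 2: x = 2."))
```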
### Phase 2: Recalling more candidate pages
Analysis revealed that FineWeb's C4 filter removes pages containing '{' characters, inadvertently filtering out content with LaTeX notation. To address this and expand coverage, we:

1. Identified promising website domains by selecting those where at least 10% of pages received a classifier score ≥ 2 (see the sketch after this list)
2. Added URLs from the OpenWebMath and InfiMM-WebMath datasets
3. Recovered the URLs of pages filtered out by FineWeb's '{' rule from its rejection logs
4. Re-extracted all content from scratch using the [OpenWebMath pipeline](https://github.com/keirp/OpenWebMath), which properly handles mathematical notation across various HTML markup formats and standardizes it to LaTeX
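A minimal sketch of the domain-selection heuristic from step 1; the dataframe and its column names are illustrative, not the actual pipeline:

```python
from urllib.parse import urlparse
import pandas as pd

# Illustrative input: one row per classifier-scored page
pages = pd.DataFrame({
    "url": [
        "https://mathhelp.example/threads/1",
        "https://mathhelp.example/threads/2",
        "https://news.example/article",
    ],
    "int_score": [3, 1, 0],
})

pages["domain"] = pages["url"].map(lambda u: urlparse(u).netloc)

# Keep domains where at least 10% of pages scored >= 2
good_share = pages.groupby("domain")["int_score"].apply(lambda s: (s >= 2).mean())
promising_domains = good_share[good_share >= 0.10].index.tolist()
print(promising_domains)  # ['mathhelp.example']
```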
### Phase 3: Refined quality assessment
The expanded corpus underwent a more fine-grained quality evaluation.

Once again, we used Llama-3.1-70B-Instruct to score a sample of the newly extracted pages, this time on a 5-point scale (the full prompt is available [here](prompt.txt)).
We finetuned a new classifier [TODO: link] on these annotations and scored the entire corpus.
After keeping only pages with a score of 3 or higher and deduplicating the samples using a simple single-band MinHash-LSH, we obtained FineMath-3+ with 34B tokens.

The same classifier was applied to InfiMM-WebMath's text content, focusing on reasoning rather than advanced mathematics.

Both datasets were additionally filtered using FineWeb's language classification pipeline to remove non-English content.
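To make the single-band MinHash-LSH deduplication mentioned above concrete, here is a toy illustration; shingle size, signature length, and the hash function are arbitrary choices, not the production settings:

```python
import hashlib
from collections import defaultdict

NUM_HASHES = 16  # signature length (one band of 16 rows); illustrative

def shingles(text, n=5):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def minhash_signature(text):
    # One MinHash value per seeded hash function
    return tuple(
        min(int(hashlib.md5(f"{seed}-{s}".encode()).hexdigest(), 16) for s in shingles(text))
        for seed in range(NUM_HASHES)
    )

docs = [
    "Solve 2x + 3 = 7. Subtract 3 from both sides, then divide by 2 to get x = 2.",
    "Solve 2x + 3 = 7. Subtract 3 from both sides, then divide by 2 to get x = 2.",
    "The derivative of x^2 is 2x by the power rule.",
]

# Single band: the whole signature is the bucket key, so highly similar
# pages are likely to collide and only the first page per bucket is kept.
buckets = defaultdict(list)
for idx, doc in enumerate(docs):
    buckets[minhash_signature(doc)].append(idx)

kept = sorted(ids[0] for ids in buckets.values())
print(kept)  # [0, 2]
```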
### Decontamination
Following Qwen2.5-Math's approach, we removed samples with 13-gram overlaps against test sets from GSM8k, MATH, MMLU and ARC. Decontamination logs are available at [HuggingFaceTB/finemath_contamination_report](https://huggingface.co/datasets/HuggingFaceTB/finemath_contamination_report).
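The 13-gram overlap check can be pictured roughly as follows; this is a simplified sketch in which the benchmark sample is a placeholder, and the real pipeline normalizes and tokenizes text at scale:

```python
def ngrams(text, n=13):
    tokens = text.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

# Placeholder benchmark sample standing in for the GSM8k / MATH / MMLU / ARC test sets
test_texts = [
    "Natalia sold clips to 48 of her friends in April and then she sold half as many clips in May",
]
test_ngrams = set().union(*(ngrams(t) for t in test_texts))

def is_contaminated(doc_text):
    # Drop a training document if any of its 13-grams appears in a benchmark test set
    return not ngrams(doc_text).isdisjoint(test_ngrams)

print(is_contaminated("She sold clips to 48 of her friends in April and then she sold half as many clips in May too"))
```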
## Results and Performance

<img src="assets/eval_bar.png" width="900"/>

Our evaluations show several key findings:

1. FineMath-3+ outperforms the base InfiWebMath on GSM8k and MATH benchmarks
2. FineMath-4+ demonstrates superior performance compared to both FineMath-3+ and InfiWebMath-4+ on GSM8k and MATH
3. Combining the datasets (50% FineMath-3+ with 50% InfiWebMath-3+) yields approximately 50B tokens while matching the performance of InfiWebMath-3+
4. Deduplicating the pages repeated between FineMath and InfiWebMath reduces performance compared to a non-deduplicated combination
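One way to approximate the 50/50 mixture from finding 3 at loading time is to interleave the two configs. A sketch only: the actual training mixture was built by token budget, and the sampling probabilities and seed here are illustrative.

```python
from datasets import load_dataset, interleave_datasets

finemath = load_dataset("HuggingFaceTB/finemath", "finemath-3plus", split="train")
infiwebmath = load_dataset("HuggingFaceTB/finemath", "infiwebmath-3plus", split="train")

# Sample both sources with equal probability, roughly matching the 50/50 mix
mixed = interleave_datasets(
    [finemath, infiwebmath],
    probabilities=[0.5, 0.5],
    seed=42,
    stopping_strategy="all_exhausted",
)
print(mixed)
```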
## Dataset Schema

```python
{
    'url': string,                 # Source page URL
    'fetch_time': int64,           # Crawler timestamp
    'content_mime_type': string,   # MIME type
    'warc_filename': string,       # Common Crawl WARC source file
    'warc_record_offset': int32,   # WARC record offset, in bytes
    'warc_record_length': int32,   # WARC record size, in bytes
    'text': string,                # Page content
    'token_count': int32,          # Number of Llama tokens
    'char_count': int32,           # Character count
    'metadata': string,            # Additional OpenWebMath metadata
    'score': float64,              # Raw quality score
    'int_score': int64,            # Integer quality score
    'crawl': string,               # Common Crawl crawl identifier
    'snapshot_type': string,       # Whether the page is the latest or the largest for this URL
    'language': string,            # Document language
    'language_score': float64      # LangID probability
}
```
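These fields can be used directly for further filtering once a config is downloaded. A small illustration; the 0.9 language-score threshold is arbitrary:

```python
from datasets import load_dataset

data = load_dataset("HuggingFaceTB/finemath", "finemath-3plus", split="train", num_proc=8)
print(data.features)  # column names and types, matching the schema above

# Example: keep documents rated 4+ with a confident language ID
subset = data.filter(
    lambda x: x["int_score"] >= 4 and x["language_score"] > 0.9,
    num_proc=8,
)
print(len(subset))
```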
## Considerations for Using the Data

### Social Impact of Dataset
With the release of this dataset, we aim to make high-quality mathematical educational content more accessible to the machine learning community. While multiple language models have demonstrated strong mathematical capabilities, the datasets used to train these capabilities are often not publicly available. By releasing FineMath, we hope to:
- Make the dataset creation process more transparent
- Reduce the barrier to entry for training models with strong mathematical capabilities
- Provide a benchmark for mathematical content quality filtering

### Discussion of Biases
The dataset may have certain inherent biases:
- Focus on English language content
- Emphasis on popular educational approaches to mathematics
- Bias towards certain types of mathematical notation and formatting

### Other Known Limitations
- The dataset is limited to English language content
- The filtering criteria may not capture advanced mathematical content (e.g. advanced research subjects)
- Some mathematical notation (e.g. image-based) may not be preserved
- Long-form content may have varying quality even within high-scoring documents

## Licensing Information
The dataset is released under the **Open Data Commons Attribution License (ODC-By) v1.0** [license](https://opendatacommons.org/licenses/by/1-0/). The use of this dataset is also subject to [CommonCrawl's Terms of Use](https://commoncrawl.org/terms-of-use).

## Future work
There are several avenues for future work:
- Expand language coverage beyond English
- Improve mathematical notation extraction and preservation
- Develop more sophisticated quality metrics
- Create specialized subsets for different educational levels