Keiran Paster committed on
Commit
fde8ef8
1 Parent(s): 4075638

update author url

Files changed (1)
  1. README.md +35 -37
README.md CHANGED
@@ -1,18 +1,18 @@
  ---
  dataset_info:
  features:
- - name: url
- dtype: string
- - name: text
- dtype: string
- - name: date
- dtype: string
- - name: metadata
- dtype: string
+ - name: url
+ dtype: string
+ - name: text
+ dtype: string
+ - name: date
+ dtype: string
+ - name: metadata
+ dtype: string
  splits:
- - name: train
- num_bytes: 56651995057
- num_examples: 6315233
+ - name: train
+ num_bytes: 56651995057
+ num_examples: 6315233
  download_size: 16370689925
  dataset_size: 56651995057
  license: odc-by
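
For reference, the four record fields declared in the `dataset_info` block above map onto the following `datasets` schema. This is a sketch written from the card metadata, not code taken from the dataset repository:

```python
from datasets import Features, Value

# Record schema as declared in the YAML front matter above:
# every field is stored as a plain string.
features = Features({
    "url": Value("string"),       # source page URL
    "text": Value("string"),      # extracted document text, including LaTeX
    "date": Value("string"),      # date string associated with the document
    "metadata": Value("string"),  # additional per-document metadata, serialized as a string
})
```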
@@ -24,14 +24,15 @@ dataset_info:
  size_categories:
  - 10B<n<100B
  ---
+
  <img src="imgs/OpenWebMath-left.png" width="300">

- [Keiran Paster](https://keirp.com)\*, [Marco Dos Santos](#)\*, [Zhangir Azerbayev](https://zhangir-azerbayev.github.io/), [Jimmy Ba](https://jimmylba.github.io/)
+ [Keiran Paster](https://keirp.com)\*, [Marco Dos Santos](https://marco-dossantos.github.io/)\*, [Zhangir Azerbayev](https://zhangir-azerbayev.github.io/), [Jimmy Ba](https://jimmylba.github.io/)

  [GitHub ](https://github.com/keirp/OpenWebMath) | [ArXiv](https://arxiv.org/abs/2310.06786)
  | [PDF](https://arxiv.org/pdf/2310.06786.pdf)

- **OpenWebMath** is a dataset containing the majority of the high-quality, mathematical text from the internet. It is filtered and extracted from over 200B HTML files on Common Crawl down to a set of **6.3 million documents** containing a total of **14.7B tokens**. OpenWebMath is intended for use in *pretraining* and *finetuning* large language models.
+ **OpenWebMath** is a dataset containing the majority of the high-quality, mathematical text from the internet. It is filtered and extracted from over 200B HTML files on Common Crawl down to a set of **6.3 million documents** containing a total of **14.7B tokens**. OpenWebMath is intended for use in _pretraining_ and _finetuning_ large language models.

  You can download the dataset using Hugging Face:

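The hunk above ends just before the README's download snippet, which falls outside this diff. A minimal sketch of pulling the train split with the `datasets` library; the Hub id `open-web-math/open-web-math` is assumed from the repository this card belongs to:

```python
from datasets import load_dataset

# Stream the split so the full ~16 GB download is not required up front.
# The Hub id below is an assumption based on the repository name.
ds = load_dataset("open-web-math/open-web-math", split="train", streaming=True)

for record in ds:
    print(record["url"], record["date"])
    print(record["text"][:300])  # documents can be long; preview only
    break
```

Dropping `streaming=True` downloads and caches the full 6.3M-document split locally.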
@@ -55,18 +56,18 @@ The dataset is structured as follows:

  OpenWebMath contains documents from over 130k different domains, including data from forums, educational pages, and blogs. The dataset contains documents covering mathematics, physics, statistics, computer science, and more. The following table shows the most common domains in OpenWebMath by character count.

- | Domain | # Characters | % Characters |
- |-----------------------|--------------|--------------|
- | stackexchange.com | 4,655,132,784| 9.55% |
- | nature.com | 1,529,935,838| 3.14% |
- | wordpress.com | 1,294,166,938| 2.66% |
- | physicsforums.com | 1,160,137,919| 2.38% |
- | github.io | 725,689,722 | 1.49% |
- | zbmath.org | 620,019,503 | 1.27% |
- | wikipedia.org | 618,024,754 | 1.27% |
- | groundai.com | 545,214,990 | 1.12% |
- | blogspot.com | 520,392,333 | 1.07% |
- | mathoverflow.net | 499,102,560 | 1.02% |
+ | Domain            | # Characters  | % Characters |
+ | ----------------- | ------------- | ------------ |
+ | stackexchange.com | 4,655,132,784 | 9.55%        |
+ | nature.com        | 1,529,935,838 | 3.14%        |
+ | wordpress.com     | 1,294,166,938 | 2.66%        |
+ | physicsforums.com | 1,160,137,919 | 2.38%        |
+ | github.io         | 725,689,722   | 1.49%        |
+ | zbmath.org        | 620,019,503   | 1.27%        |
+ | wikipedia.org     | 618,024,754   | 1.27%        |
+ | groundai.com      | 545,214,990   | 1.12%        |
+ | blogspot.com      | 520,392,333   | 1.07%        |
+ | mathoverflow.net  | 499,102,560   | 1.02%        |

  # OpenWebMath Pipeline

@@ -75,22 +76,19 @@ OpenWebMath contains documents from over 130k different domains, including data
  OpenWebMath builds on the massive [Common Crawl](https://commoncrawl.org/) dataset, which contains over 200B HTML documents. We filtered the data to only include documents that are: (1) in English, (2) contain mathematical content, and (3) are of high quality. We also put a strong emphasis on extracting LaTeX content from the HTML documents as well as reducing boilerplate in comparison to other web datasets.

  The OpenWebMath pipeline consists of five steps:
+
  1. **Prefiltering HTML Documents**:
- - We apply a simple prefilter to all HTML documents in Common Crawl in order to skip documents without mathematical content to unnecessary processing time.
-
+ - We apply a simple prefilter to all HTML documents in Common Crawl to skip documents without mathematical content and avoid unnecessary processing time.
  2. **Text Extraction**:
- - Extract text, including LaTeX content, from the HTML documents while removing boilerplate.
-
+ - Extract text, including LaTeX content, from the HTML documents while removing boilerplate.
  3. **Content Classification and Filtering**:
- - Apply a [FastText language identification model](https://fasttext.cc/docs/en/language-identification.html) to keep only English documents.
- - Filter high perplexity documents using a [KenLM](https://github.com/kpu/kenlm) model trained on [Proof-Pile](https://huggingface.co/datasets/hoskinson-center/proof-pile).
- - Filter non-mathematical documents using our own *MathScore* model.
-
+ - Apply a [FastText language identification model](https://fasttext.cc/docs/en/language-identification.html) to keep only English documents.
+ - Filter high perplexity documents using a [KenLM](https://github.com/kpu/kenlm) model trained on [Proof-Pile](https://huggingface.co/datasets/hoskinson-center/proof-pile).
+ - Filter non-mathematical documents using our own _MathScore_ model.
  4. **Deduplication**:
- - Deduplicate the dataset using SimHash in [text-dedup](https://github.com/ChenghaoMou/text-dedup).
-
+ - Deduplicate the dataset using SimHash in [text-dedup](https://github.com/ChenghaoMou/text-dedup).
  5. **Manual Inspection**:
- - Inspect the documents gathered from previous steps and remove low quality pages.
+ - Inspect the documents gathered from previous steps and remove low quality pages.

  For a detailed discussion on the processing pipeline, please refer to our paper.

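The classification-and-filtering step above names two off-the-shelf components: FastText language identification and a KenLM n-gram model trained on Proof-Pile. The sketch below shows how such a filter could be wired together; the model file names and the perplexity cutoff are placeholders, and the authors' own MathScore model is only marked as a comment, so this is an illustration rather than the pipeline's actual code:

```python
import fasttext  # language identification (e.g. the lid.176.bin model from fasttext.cc)
import kenlm     # n-gram LM scoring; the paper uses a model trained on Proof-Pile

# Placeholder model paths -- swap in real files.
LANG_MODEL = fasttext.load_model("lid.176.bin")
LM = kenlm.Model("proof_pile.arpa")

def keep_document(text: str, max_perplexity: float = 15000.0) -> bool:
    """Rough stand-in for the language and perplexity filters."""
    # 1) Keep English documents only (fastText's predict() expects a single line).
    labels, _ = LANG_MODEL.predict(text.replace("\n", " "), k=1)
    if labels[0] != "__label__en":
        return False

    # 2) Drop documents the Proof-Pile-trained LM finds too surprising.
    #    kenlm returns total log10 probability; convert it to word-level perplexity.
    n_tokens = len(text.split()) + 1  # +1 for the end-of-sentence token
    log10_prob = LM.score(text, bos=True, eos=True)
    perplexity = 10.0 ** (-log10_prob / n_tokens)

    # A MathScore-style classifier would be applied here as a third check.
    return perplexity <= max_perplexity
```

In the README's ordering, this step runs after text extraction and before SimHash deduplication and manual inspection.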
@@ -102,7 +100,7 @@ OpenWebMath is made available under an ODC-By 1.0 license; users should also abi

  ```
  @misc{paster2023openwebmath,
- title={OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text},
+ title={OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text},
  author={Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba},
  year={2023},
  eprint={2310.06786},