---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
- config_name: text-only
  data_files:
  - split: train
    path: text-only/train-*
dataset_info:
- config_name: default
  features:
  - name: url
    dtype: string
  - name: text
    dtype: string
  - name: date
    dtype: string
  - name: metadata
    dtype: string
  splits:
  - name: train
    num_bytes: 4467051029
    num_examples: 1820241
  download_size: 1772035124
  dataset_size: 4467051029
- config_name: text-only
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 2305854627
    num_examples: 1820241
  download_size: 1360869461
  dataset_size: 2305854627
license: odc-by
task_categories:
- text-generation
size_categories:
- 1M<n<10M
source_datasets: open-web-math/open-web-math
---
# Dataset Card for "open-web-math-minhash"

An attempt at a _"high-quality sample"_ of `open-web-math/open-web-math`, produced by aggressively deduplicating with `minhash` from `text-dedup`. The result is 1.82M rows, down from the original ~6.3M:

```
DatasetDict({
    train: Dataset({
        features: ['url', 'text', 'date', 'metadata'],
        num_rows: 1820241
    })
})
```
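MinHash deduplication hashes each document's word n-grams into a fixed-length signature; documents whose signatures agree on enough slots are treated as near-duplicates, and all but one representative per cluster is dropped. Below is a minimal, self-contained sketch of the idea — not the `text_dedup` implementation (which also uses LSH banding to avoid pairwise comparisons); the function names and the salted-hash approximation of permutations are illustrative:

```python
import hashlib

def ngrams(text, n=5):
    # word-level shingles, mirroring --ngram 5 in the dedup command
    tokens = text.split()
    return {" ".join(tokens[i:i + n]) for i in range(max(1, len(tokens) - n + 1))}

def minhash_signature(text, num_perm=64, n=5):
    # one signature slot per "permutation", approximated here by salting the hash
    sig = []
    for seed in range(num_perm):
        slot = min(
            int.from_bytes(
                hashlib.blake2b(f"{seed}:{g}".encode(), digest_size=8).digest(), "big"
            )
            for g in ngrams(text, n)
        )
        sig.append(slot)
    return sig

def estimated_jaccard(sig_a, sig_b):
    # the fraction of matching slots estimates Jaccard similarity of the n-gram sets
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = minhash_signature("the quick brown fox jumps over the lazy dog near the river bank")
b = minhash_signature("the quick brown fox jumps over the lazy dog near the river bend")
c = minhash_signature("completely unrelated text about integral calculus and prime numbers")
# near-duplicates share most slots; unrelated texts share almost none
```

With `--threshold 0.5`, pairs whose estimated similarity exceeds 0.5 end up in the same cluster and are filtered down to a single row.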

## Usage

Unless you need the metadata, load the `text-only` config, which is only ~1.4 GB across 5 shards:

```python
from datasets import load_dataset

dataset_config = "text-only"
dataset = load_dataset("BEE-spoke-data/open-web-math-minhash", dataset_config)
```

## making of

On a high-RAM Colab TPU instance (40 cores):

```python
from pathlib import Path

ds_name = "open-web-math/open-web-math"
ds_short_name = ds_name.split("/")[-1]  # "open-web-math", used in the output path
dataset_config = "default"
data_split = "train"
text_column = "text"

out_dir = Path(f"output/minhash/{ds_short_name}/{data_split}")
out_dir.mkdir(parents=True, exist_ok=True)


!python -m text_dedup.minhash \
  --path $ds_name \
  --name $dataset_config \
  --split $data_split \
  --cache_dir "./cache" \
  --output $out_dir \
  --column $text_column \
  --ngram 5 --threshold 0.5 \
  --hash_func xxh3 --hash_bits 16 --num_perm 64 \
  --batch_size 10000

print(f"output dir is:\n\t{out_dir}")
!ls $out_dir
```

Console:

```sh
Resolving data files: 100% 114/114 [00:11<00:00,  9.79it/s]
Fingerprinting... (num_proc=40): 100% 6315233/6315233 [15:27<00:00, 6806.11 examples/s]
Iterating MinHashes...: 100% 632/632 [05:37<00:00,  1.87it/s]
Clustering...: 100% 14/14 [01:13<00:00,  5.22s/it]
Finding clusters... (num_proc=40): 100% 6315233/6315233 [10:57<00:00, 9602.90 examples/s]
Filtering clusters... (num_proc=40): 100% 6315233/6315233 [03:53<00:00, 27069.61 examples/s]
Saving the dataset (33/33 shards): 100% 1820241/1820241 [07:07<00:00, 4260.38 examples/s]
[10/11/23 23:41:46] INFO     Loading                         :
```
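The log shows how aggressive the pass was; a quick back-of-the-envelope check of the retention rate, using the row counts from the console output above:

```python
# rows before and after dedup, taken from the console output
original_rows = 6_315_233  # rows in open-web-math before dedup
kept_rows = 1_820_241      # rows in this dataset

retained = kept_rows / original_rows
print(f"retained {retained:.1%} of rows, removed {1 - retained:.1%}")
```

Roughly 29% of rows survive, i.e. about 71% of the source was flagged as near-duplicate at this threshold.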




## citation

```bibtex
@misc{paster2023openwebmath,
      title={OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text}, 
      author={Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba},
      year={2023},
      eprint={2310.06786},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```