---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: url
    dtype: string
  - name: text
    dtype: string
  - name: date
    dtype: string
  - name: metadata
    dtype: string
  splits:
  - name: train
    num_bytes: 4467051029
    num_examples: 1820241
  download_size: 1772035124
  dataset_size: 4467051029
---
# Dataset Card for "open-web-math-minhash"
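A MinHash-deduplicated copy of [open-web-math/open-web-math](https://huggingface.co/datasets/open-web-math/open-web-math). A minimal loading sketch with the `datasets` library; the repo id below is a placeholder, substitute this dataset's actual path on the Hub:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with this dataset's actual Hub path.
ds = load_dataset("your-username/open-web-math-minhash", split="train")
print(ds)  # 1,820,241 rows with columns: url, text, date, metadata
```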
## Making of

Run on a high-RAM Colab TPU runtime (40 CPU cores) with the `text_dedup` package:
```python
from pathlib import Path

ds_name = "open-web-math/open-web-math"
ds_short_name = ds_name.split("/")[-1]  # "open-web-math"
dataset_config = "default"  # config name of the source dataset (assumed "default")
data_split = "train"
text_column = "text"

out_dir = Path(f"output/minhash/{ds_short_name}/{data_split}")
!mkdir -p $out_dir

# MinHash dedup over 5-gram shingles; docs above Jaccard ~0.5 are clustered
!python -m text_dedup.minhash \
    --path $ds_name \
    --name $dataset_config \
    --split $data_split \
    --cache_dir "./cache" \
    --output $out_dir \
    --column $text_column \
    --ngram 5 --threshold 0.5 \
    --hash_func xxh3 --hash_bits 16 --num_perm 64 \
    --batch_size 10000

print(f"output dir is:\n\t{out_dir}")
!ls $out_dir
```
Console:
```sh
Resolving data files: 100% 114/114 [00:11<00:00, 9.79it/s]
Fingerprinting... (num_proc=40): 100% 6315233/6315233 [15:27<00:00, 6806.11 examples/s]
Iterating MinHashes...: 100% 632/632 [05:37<00:00, 1.87it/s]
Clustering...: 100% 14/14 [01:13<00:00, 5.22s/it]
Finding clusters... (num_proc=40): 100% 6315233/6315233 [10:57<00:00, 9602.90 examples/s]
Filtering clusters... (num_proc=40): 100% 6315233/6315233 [03:53<00:00, 27069.61 examples/s]
Saving the dataset (33/33 shards): 100% 1820241/1820241 [07:07<00:00, 4260.38 examples/s]
```
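The run kept 1,820,241 of 6,315,233 examples, removing roughly 71% of the corpus as near-duplicates. For intuition about the flags above, here is a toy pure-Python sketch of the MinHash idea, not the `text_dedup` implementation: it uses `blake2b` with brute-force comparison where the real run used xxh3 with LSH clustering. Each document is reduced to a 64-slot signature over its 5-gram shingles, and the fraction of matching slots estimates Jaccard similarity against the 0.5 threshold.

```python
import hashlib

NUM_PERM = 64  # --num_perm 64
NGRAM = 5      # --ngram 5

def shingles(text: str, n: int = NGRAM) -> set:
    # Word-level n-gram shingles of the document.
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def signature(sh: set, num_perm: int = NUM_PERM) -> list:
    # One seeded hash per "permutation"; keep the minimum over all shingles.
    return [
        min(
            int.from_bytes(
                hashlib.blake2b(f"{seed}:{s}".encode(), digest_size=8).digest(),
                "big",
            )
            for s in sh
        )
        for seed in range(num_perm)
    ]

def est_jaccard(a: list, b: list) -> float:
    # Fraction of matching slots estimates true Jaccard similarity.
    return sum(x == y for x, y in zip(a, b)) / len(a)

s1 = signature(shingles("the quick brown fox jumps over the lazy dog today"))
s2 = signature(shingles("the quick brown fox jumps over the lazy dog tonight"))
print(est_jaccard(s1, s2))  # near-duplicates: well above the 0.5 threshold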
```