---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: url
    dtype: string
  - name: text
    dtype: string
  - name: date
    dtype: string
  - name: metadata
    dtype: string
  splits:
  - name: train
    num_bytes: 4467051029
    num_examples: 1820241
  download_size: 1772035124
  dataset_size: 4467051029
---

# Dataset Card for "open-web-math-minhash"

## making of

Deduplicated with [text-dedup](https://github.com/ChenghaoMou/text-dedup) on a high-RAM Colab TPU runtime (40 CPU cores):

```python
from pathlib import Path

ds_name = "open-web-math/open-web-math"
ds_short_name = ds_name.split("/")[-1]  # "open-web-math"
dataset_config = "default"
data_split = "train"
text_column = "text"

out_dir = Path(f"output/minhash/{ds_short_name}/{data_split}")
!mkdir -p $out_dir

!python -m text_dedup.minhash \
    --path $ds_name \
    --name $dataset_config \
    --split $data_split \
    --cache_dir "./cache" \
    --output $out_dir \
    --column $text_column \
    --ngram 5 --threshold 0.5 \
    --hash_func xxh3 --hash_bits 16 --num_perm 64 \
    --batch_size 10000

print(f"output dir is:\n\t{out_dir}")
!ls $out_dir
```

Console:

```sh
Resolving data files: 100% 114/114 [00:11<00:00, 9.79it/s]
Fingerprinting... (num_proc=40): 100% 6315233/6315233 [15:27<00:00, 6806.11 examples/s]
Iterating MinHashes...: 100% 632/632 [05:37<00:00, 1.87it/s]
Clustering...: 100% 14/14 [01:13<00:00, 5.22s/it]
Finding clusters... (num_proc=40): 100% 6315233/6315233 [10:57<00:00, 9602.90 examples/s]
Filtering clusters... (num_proc=40): 100% 6315233/6315233 [03:53<00:00, 27069.61 examples/s]
Saving the dataset (33/33 shards): 100% 1820241/1820241 [07:07<00:00, 4260.38 examples/s]
[10/11/23 23:41:46] INFO Loading :
```

Per the console log, MinHash deduplication reduced 6,315,233 source documents to the 1,820,241 examples in this dataset.
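For intuition on what the `--ngram 5 --num_perm 64 --threshold 0.5` flags control, here is a minimal stdlib-only sketch of the MinHash idea. It is a toy illustration, not the actual `text_dedup` implementation (which uses xxh3 hashing and banded LSH for scalability); the salted-blake2b "permutations" here are an assumption made for self-containedness.

```python
# Toy MinHash: estimate Jaccard similarity between two documents'
# word 5-gram shingle sets using 64 salted hash functions.
from hashlib import blake2b

NGRAM = 5        # --ngram 5: shingle size in words
NUM_PERM = 64    # --num_perm 64: signature length (number of hash functions)
THRESHOLD = 0.5  # --threshold 0.5: similarity above this => near-duplicate

def shingles(text: str, n: int = NGRAM) -> set[str]:
    """Set of overlapping word n-grams."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def minhash(text: str) -> list[int]:
    """One salted hash per 'permutation'; keep the minimum hash per salt."""
    sig = []
    for seed in range(NUM_PERM):
        salt = seed.to_bytes(2, "big")
        sig.append(min(
            int.from_bytes(blake2b(s.encode(), salt=salt, digest_size=8).digest(), "big")
            for s in shingles(text)
        ))
    return sig

def est_jaccard(a: list[int], b: list[int]) -> float:
    """Fraction of matching signature slots approximates Jaccard similarity."""
    return sum(x == y for x, y in zip(a, b)) / NUM_PERM

doc1 = "the quick brown fox jumps over the lazy dog near the river bank"
doc2 = "the quick brown fox jumps over the lazy dog near the river shore"
sim = est_jaccard(minhash(doc1), minhash(doc2))
print(f"estimated Jaccard: {sim:.2f}, near-duplicate: {sim >= THRESHOLD}")
```

At scale, comparing every pair of signatures is infeasible; `text_dedup` instead buckets signature bands with locality-sensitive hashing so only likely duplicates are compared, which is what the "Clustering" and "Finding clusters" stages in the console log above correspond to.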