---
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- web
- common crawl
size_categories:
- 100B<n<1T
---

# 📚 DCLM-pro

<p align="center">
  <img src="prox-teaser.png">
</p>

[ArXiv](http://arxiv.org/abs/2409.17115) | [Models](https://huggingface.co/collections/gair-prox/prox-general-models-65f1674f0607712c4d6eec76) | [Code](https://github.com/GAIR-NLP/ProX)

DCLM-pro is refined from [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0-parquet) using the **ProX** refining framework.
It contains over 500B high-quality tokens, ready for general language model pre-training.

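Since the data is stored as parquet shards, it can be streamed without downloading everything up front. Below is a minimal sketch using the `datasets` library; the repo id `gair-prox/DCLM-pro` and the `text` column name are assumptions from context, not confirmed by this card.

```python
from datasets import load_dataset

# streaming=True iterates over the parquet shards lazily instead of
# materializing the full >500B-token dataset on disk
ds = load_dataset("gair-prox/DCLM-pro", split="train", streaming=True)

for example in ds:
    # "text" is the assumed document field; adjust to the actual schema
    print(example["text"][:200])
    break
```
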
## License
DCLM-pro is based on DCLM, which is made available under a cc-by-4.0 license.

### Citation
```
@article{zhou2024programming,
  title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
  author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
  journal={arXiv preprint arXiv:2409.17115},
  year={2024}
}
```