hunterhector committed
Commit 87f486c • 1 parent: 03411fe
Update README.md
README.md CHANGED

@@ -4,7 +4,7 @@ license: odc-by
 # TxT360: A Top-Quality LLM Pre-training Dataset Requires the Perfect Blend
 <center><img src="llm360_logo(1).png" alt="k2 eval table" /></center>
 
-## We introduce TxT360 (Trillion eXtracted Text), the first dataset to globally deduplicate 99 CommonCrawl snapshots and 14 commonly used non-web data sources (e.g. FreeLaw, PG-19, etc.), providing pretraining teams with a recipe to easily adjust data weighting and train the most performant models.
+## We introduce TxT360 (Trillion eXtracted Text), the first dataset to globally deduplicate 99 CommonCrawl snapshots and 14 commonly used non-web data sources (e.g. FreeLaw, PG-19, etc.), providing pretraining teams with a recipe to easily adjust data weighting, obtain the largest high-quality open source dataset, and train the most performant models.
 
 # TxT360 Compared to Common Pretraining Datasets
 | Data Source | TxT360 | FineWeb | RefinedWeb | RedPajamaV2 | C4 | Dolma | RedPajamaV1 | The Pile |
@@ -23,10 +23,10 @@ license: odc-by
 | Code | * | - | - | - | - | Included | Included | Included |
 
 * TxT360 does not include code. This decision was made due to the perceived low duplication of code with other sources.
-**StackExchange and PubMed Central datasets will be
-Complete details on the dataset can be found in our blog post [here](https://huggingface.co/spaces/LLM360/TxT360
+**StackExchange and PubMed Central datasets will be uploaded shortly. All other datasets are present and complete.
+Complete details on the dataset can be found in our blog post [here](https://huggingface.co/spaces/LLM360/TxT360).
 
-##
+## TxT360 Performance
 To evaluate the training efficiency of our dataset, we sampled 1.5T tokens from both FineWeb and TxT360 (using the aforementioned weighting) and conducted a training ablation on an 8x8B Mixture-of-Experts architecture, similar to Mixtral. We compared the learning curves by tracking training loss, validation scores, and performance across a wide array of diverse evaluation benchmarks. The validation set was sampled independently from SlimPajama. Note that this experiment was done on a slightly earlier version of the dataset.
 <center><img src="txttofineweb.png" alt="comparison" /></center>
 
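
For readers who want to inspect the data described in the README above, a minimal sketch of streaming a few records from the Hub is shown below. It assumes the dataset is published as LLM360/TxT360 and exposes a plain train split; the actual config or subset names may differ, so check the dataset card.

```python
# Minimal sketch: stream a few TxT360 records from the Hugging Face Hub.
# Assumes the dataset id "LLM360/TxT360" and a default "train" split;
# the real repo may expose per-source configs/subsets instead.
from itertools import islice

from datasets import load_dataset

ds = load_dataset("LLM360/TxT360", split="train", streaming=True)

for example in islice(ds, 3):
    # Each record is a dict; the field names (e.g. "text", source metadata)
    # are whatever the dataset card defines.
    print(list(example.keys()))
```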
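The "recipe to easily adjust data weighting" mentioned in the updated introduction comes down to sampling the individual sources at chosen rates when building a pretraining mix. The sketch below illustrates that idea with `datasets.interleave_datasets`; the `data_dir` values and the 0.9/0.1 probabilities are hypothetical placeholders, not the weighting actually used for the 1.5T-token ablation, which is described in the blog post linked above.

```python
# Illustrative sketch of source-weighted sampling for a pretraining mix.
# The data_dir values and probabilities are placeholders for illustration;
# the weighting used for the TxT360 ablation is documented in the blog post.
from datasets import interleave_datasets, load_dataset

web = load_dataset("LLM360/TxT360", data_dir="common-crawl",
                   split="train", streaming=True)
books = load_dataset("LLM360/TxT360", data_dir="pg19",
                     split="train", streaming=True)

mix = interleave_datasets(
    [web, books],
    probabilities=[0.9, 0.1],           # placeholder sampling weights
    seed=42,
    stopping_strategy="all_exhausted",  # keep drawing until every source is exhausted
)

for example in islice(iter(mix), 5):
    pass  # feed into tokenization / sequence packing for pretraining
```

Raising or lowering a source's probability is all it takes to re-weight the blend before tokenization.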