Update README.md
---
license: mit
task_categories:
- text-generation
---

We collect a 2.5B-token training dataset from various domains for long-context continual pre-training. The composition of this dataset is as follows (partially inspired by [Long-Data-Collections](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections)):

| Domain        | Proportion | Source |
| ------------- | ---------- | ------ |
| Book          | 40%        | [RedPajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) |
| Arxiv         | 20%        | [RedPajama-Arxiv](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) |
| General       | 20%        | [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) |
| Code          | 10%        | [LCC-Python](https://huggingface.co/datasets/microsoft/LCC_python) |
| QA            | 5%         | [Natural Questions](https://ai.google.com/research/NaturalQuestions/) |
| Summarization | 5%         | [BookSum](https://github.com/salesforce/booksum) |

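To make the mixing concrete, here is a minimal sketch of how a weighted mixture like this could be assembled. It is illustrative only: the weights in `SOURCES` come from the table above, while `sample_mixture`, the per-document `tokens` field, and the streaming interface are assumptions, not part of this repository.

```python
# Minimal sketch of weighted domain mixing (illustrative; the schema and
# helper names below are assumptions, only the weights come from the table).
import random

# Target mixture weights from the table above.
SOURCES = {
    "book":          0.40,
    "arxiv":         0.20,
    "general":       0.20,
    "code":          0.10,
    "qa":            0.05,
    "summarization": 0.05,
}

TOTAL_TOKENS = 2_500_000_000  # 2.5B-token budget


def sample_mixture(streams, budget=TOTAL_TOKENS, seed=0):
    """Draw documents domain-by-domain until each domain hits its share.

    `streams` maps a domain name to an iterator over documents, where each
    document is a dict with a `tokens` list (an assumed schema).
    """
    rng = random.Random(seed)
    quotas = {domain: int(w * budget) for domain, w in SOURCES.items()}
    mixture = []
    for domain, quota in quotas.items():
        taken = 0
        for doc in streams[domain]:
            mixture.append(doc)
            taken += len(doc["tokens"])
            if taken >= quota:
                break
    rng.shuffle(mixture)  # interleave domains for training
    return mixture
```

Filling per-domain token quotas and then shuffling keeps the token-level proportions close to the table while interleaving domains during training.
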
We have also curated a test dataset of 250 million tokens, initially drawn with the same composition. Documents were selected so that their average n-gram similarity (for n = 2, 3, 4) with the training set is below 10%. This threshold effectively excludes all QA and Summarization data, so the resulting test corpus distributes tokens across the Book, Arxiv, General, and Code domains in a 4:2:2:1 ratio.
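
The decontamination check can be sketched as follows. This is a simplified reading of the procedure above: the 10% threshold and the n = 2, 3, 4 settings come from the text, while whitespace tokenization and set-based overlap are simplifying assumptions.

```python
# Sketch of the n-gram similarity filter described above (simplified:
# whitespace tokenization and exact set overlap are assumptions).
from itertools import islice


def ngrams(tokens, n):
    """Yield consecutive n-grams of a token list as tuples."""
    return zip(*(islice(tokens, i, None) for i in range(n)))


def build_train_index(train_docs, ns=(2, 3, 4)):
    """Collect all training n-grams for each n into hash sets."""
    index = {n: set() for n in ns}
    for doc in train_docs:
        tokens = doc.split()
        for n in ns:
            index[n].update(ngrams(tokens, n))
    return index


def avg_similarity(doc, index, ns=(2, 3, 4)):
    """Mean fraction of the document's n-grams seen in the training set."""
    tokens = doc.split()
    scores = []
    for n in ns:
        grams = list(ngrams(tokens, n))
        if not grams:
            continue
        hits = sum(g in index[n] for g in grams)
        scores.append(hits / len(grams))
    return sum(scores) / len(scores) if scores else 0.0


def keep_for_test(doc, index, threshold=0.10):
    """Admit a document into the test set only if overlap is below 10%."""
    return avg_similarity(doc, index) < threshold
```

At the scale of a 2.5B-token training set, one would typically hash the n-grams (e.g., into a Bloom filter) rather than store them verbatim, but the set-based version above shows the logic.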