---
license: mit
task_categories:
- text-generation
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train_*
  - split: test
    path: data/test_*
---

We collect a 2.5B-token training dataset from various domains for long-context continual pre-training. The composition of this dataset is as follows (partially inspired by [Long-Data-Collections](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections)):

| Domain        | Proportion | Source |
| ------------- | ---------- | ------ |
| Book          | 40%        | [RedPajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) |
| Arxiv         | 20%        | [RedPajama-Arxiv](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) |
| General       | 20%        | [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) |
| Code          | 10%        | [LCC-Python](https://huggingface.co/datasets/microsoft/LCC_python) |
| QA            | 5%         | [Natural Questions](https://ai.google.com/research/NaturalQuestions/) |
| Summarization | 5%         | [BookSum](https://github.com/salesforce/booksum) |

We have also curated a test dataset of 250 million tokens with the same domain composition. Test documents were selected so that their average n-gram similarity (for n = 2, 3, 4) with the training set is below 10%. This threshold excludes all QA and Summarization data, so the resulting test corpus contains only Book, Arxiv, General, and Code tokens, in a 4:2:2:1 ratio.
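The card does not spell out how the n-gram similarity is computed, so the sketch below is one plausible interpretation: for each n in {2, 3, 4}, take the fraction of the candidate document's n-grams that also occur in the training corpus, then average the three fractions and keep the document only if the result is below 0.10. The function names and the toy token sequences are illustrative, not part of the released pipeline.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_similarity(candidate, reference, ns=(2, 3, 4)):
    """Average fraction of the candidate's n-grams (n = 2, 3, 4)
    that also occur in the reference token sequence."""
    scores = []
    for n in ns:
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        total = sum(cand.values())
        if total == 0:
            continue
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        scores.append(overlap / total)
    return sum(scores) / len(scores) if scores else 0.0

# A candidate test document is kept only if its similarity is below 0.10.
train_tokens = "the quick brown fox jumps over the lazy dog".split()
novel_doc = "entirely unrelated words make up this candidate document".split()
print(ngram_similarity(novel_doc, train_tokens))  # 0.0 -> kept
```

In practice the reference side would be a hashed set of n-grams over the full 2.5B-token training set rather than an in-memory `Counter`, but the acceptance rule is the same.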