
Description of the Dataset

This release integrates the complete data sequence used to train CrystalCoder. It covers all three pre-training stages and combines data from two prior works, the SlimPajama dataset and StarCoder, totaling approximately 1,300 billion tokens. These tokens are distributed across the three stages, each with distinct mixture weights.

Stage 1

During this initial stage, half of the SlimPajama data is used, amounting to approximately 345 billion tokens.

Stage 2

In the second stage, the remaining half of the SlimPajama data is used, along with two epochs of the StarCoder data. To the StarCoder data we apply FIM (fill-in-the-middle) augmentation with an FIM rate of 0.9 and an SPM rate of 0.5. The total token count for this stage is 0.5 × 690 + 2 × 291 = 927 billion tokens.
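To make the FIM and SPM rates concrete, here is a minimal sketch of document-level fill-in-the-middle augmentation. It assumes a simple string-level split and placeholder sentinel tokens (`<fim_prefix>`, `<fim_suffix>`, `<fim_middle>`); the actual implementation and sentinel vocabulary depend on the tokenizer and training code, so treat this only as an illustration of what the two rates control.

```python
import random

# Hypothetical sentinel tokens; the real ones depend on the tokenizer used.
PRE, SUF, MID = "<fim_prefix>", "<fim_suffix>", "<fim_middle>"

def fim_transform(doc, fim_rate=0.9, spm_rate=0.5, rng=random):
    """Apply fill-in-the-middle augmentation to a document string.

    With probability `fim_rate`, split the document into (prefix, middle,
    suffix) at two random cut points and rearrange it so the model learns
    to infill. With probability `spm_rate`, emit suffix-first (SPM) order;
    otherwise emit prefix-first (PSM) order.
    """
    if rng.random() >= fim_rate:
        return doc  # leave the document unchanged
    # Choose two cut points, sorted so lo <= hi.
    lo, hi = sorted(rng.randrange(len(doc) + 1) for _ in range(2))
    prefix, middle, suffix = doc[:lo], doc[lo:hi], doc[hi:]
    if rng.random() < spm_rate:
        # SPM order: suffix, then prefix, then the middle to predict.
        return PRE + SUF + suffix + MID + prefix + middle
    # PSM order: prefix, then suffix, then the middle to predict.
    return PRE + prefix + SUF + suffix + MID + middle
```

So with a FIM rate of 0.9, roughly 90% of StarCoder documents are rearranged this way, and of those, about half (SPM rate 0.5) use the suffix-first ordering.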

Stage 3

The third stage reuses the Python and web-related portions of the StarCoder data, including HTML, CSS, and JavaScript, training on them for three epochs with FIM applied at a rate of 0.3 and an SPM rate of 0.5, for a total of 100 billion tokens. Additionally, a small portion of the SlimPajama dataset, excluding its GitHub part, is reused, contributing around 10 billion tokens.
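The per-stage token counts quoted above can be tallied as a quick back-of-the-envelope check (figures in billions of tokens, taking SlimPajama at ≈690B total and one StarCoder epoch at ≈291B, as stated above):

```python
# Token budget per stage, in billions, from the figures quoted in this card.
slimpajama_total = 690   # full SlimPajama dataset
starcoder_epoch = 291    # one epoch of StarCoder data

stage1 = 0.5 * slimpajama_total                        # first half of SlimPajama
stage2 = 0.5 * slimpajama_total + 2 * starcoder_epoch  # second half + 2 StarCoder epochs
stage3 = 100 + 10                                      # Python/web reuse + SlimPajama reuse

total = stage1 + stage2 + stage3
print(stage1, stage2, total)  # 345.0 927.0 1382.0
```

The grand total lands slightly above the headline ~1,300 billion figure because stage 3 consists of reused tokens; the headline number is approximate.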

Instruction tuning

To enhance the model's proficiency in real chat scenarios, we use a diverse set of instruction tuning datasets totaling approximately 1 billion tokens. Specifically, the data includes OASST1-guanaco, SlimOrca, ShareGPT_V4.3, Evol-ShareGPT, CodeAlpaca, Rosetta Code, Evol-CodeAlpaca 1, Evol-CodeAlpaca 2, and a self-generated dataset focused on website creation, produced via the Alpaca pipeline. We will release the full dataset soon.

Primary Usage

This dataset serves as the foundation for training CrystalCoder and supports reproduction of its training runs.

License: Apache 2.0