
Description of the Dataset

This release contains the complete data sequence used to train CrystalCoder. It covers all three pre-training stages and combines data from two prior works, the SlimPajama dataset and StarCoder data, totaling approximately 1300 billion tokens. These tokens are distributed across the three stages with distinct mixture weights.

Stage 1

In this initial stage, half of the SlimPajama data is used, amounting to approximately 345 billion tokens.

Stage 2

In the second stage, the remaining half of the SlimPajama data is used, along with two epochs of the StarCoder data. The StarCoder data is augmented with fill-in-the-middle (FIM) at an FIM rate of 0.9. The total token count for this stage is 0.5 * 690 + 2 * 291 = 927 billion tokens.
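
For readers unfamiliar with FIM, the sketch below shows what a fill-in-the-middle transform at a given rate might look like. It is a minimal illustration in prefix-suffix-middle (PSM) form, after Bavarian et al. (2022); the sentinel strings follow StarCoder's special-token convention, and the exact preprocessing used for CrystalCoder may differ (for example, in split granularity and tokenization).

```python
import random

# Sentinel strings mirroring StarCoder's FIM special tokens; the actual
# CrystalCoder pipeline may use different tokens or operate on token IDs.
FIM_PREFIX = "<fim_prefix>"
FIM_SUFFIX = "<fim_suffix>"
FIM_MIDDLE = "<fim_middle>"


def apply_fim(text: str, fim_rate: float, rng: random.Random) -> str:
    """With probability `fim_rate`, split `text` at two random positions
    into prefix/middle/suffix and rearrange the pieces so the model learns
    to infill the middle span; otherwise keep plain left-to-right order."""
    if len(text) < 2 or rng.random() >= fim_rate:
        return text
    i, j = sorted(rng.sample(range(len(text) + 1), 2))
    prefix, middle, suffix = text[:i], text[i:j], text[j:]
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}{middle}"


rng = random.Random(0)
print(apply_fim("def add(a, b):\n    return a + b\n", fim_rate=0.9, rng=rng))
```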

Stage 3

The third stage reuses the Python and web-related data (HTML, CSS, and JavaScript) from the StarCoder data, training on it for three epochs with FIM applied at a rate of 0.3, for a total of 100 billion tokens. Additionally, a small portion of the SlimPajama dataset, excluding the GitHub part, is also reused, contributing around 10 billion tokens.
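
As a sanity check, the per-stage budgets above can be tallied with simple arithmetic. The sketch below uses only the figures quoted in this README (SlimPajama at 690 billion tokens, one StarCoder epoch at 291 billion):

```python
# Back-of-the-envelope tally of the per-stage token budgets, in billions,
# using the figures quoted in this README.
SLIMPAJAMA_B = 690        # full SlimPajama dataset
STARCODER_EPOCH_B = 291   # one epoch of StarCoder data

stage1 = 0.5 * SLIMPAJAMA_B                            # half of SlimPajama
stage2 = 0.5 * SLIMPAJAMA_B + 2 * STARCODER_EPOCH_B    # other half + 2 StarCoder epochs
stage3 = 100 + 10                                      # reused StarCoder subset + SlimPajama slice

print(stage1, stage2, stage3, stage1 + stage2 + stage3)
# -> 345.0 927.0 110 1382.0
```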

Primary Usage

This dataset serves as the foundation for training CrystalCoder and enables reproduction of its training runs.

License