---
language:
  - en
tags:
  - pretrained
license: odc-by
---

# Description of the Dataset

This release contains the complete data sequence used in CrystalCoder training. It covers the three pre-training stages, combining data from two prior works, the SlimPajama dataset and StarCoder, for a total of approximately 1300 billion tokens. These tokens are distributed across the three stages with distinct mixture weights.

## Stage 1

During this initial stage, half of the SlimPajama data is utilized, equivalent to approximately 345 billion tokens.

## Stage 2

In the second stage, the remaining half of the SlimPajama data is employed, along with two epochs of StarCoder data. For the StarCoder data, we apply FIM augmentation with an FIM rate of 0.9 and an SPM rate of 0.5. The total token count for this stage is calculated as 0.5 * 690 + 2 * 291, resulting in 927 billion tokens.
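The FIM (fill-in-the-middle) transformation can be sketched as follows. This is a minimal character-level illustration, not the actual CrystalCoder preprocessing: the real pipeline operates on token IDs, and the sentinel strings `<fim_prefix>`, `<fim_suffix>`, `<fim_middle>` below stand in for the tokenizer's special FIM tokens. The SPM ordering shown is one common variant.

```python
import random

# Placeholder sentinel strings; in practice these are special tokenizer tokens.
FIM_PREFIX, FIM_SUFFIX, FIM_MIDDLE = "<fim_prefix>", "<fim_suffix>", "<fim_middle>"

def fim_augment(doc, fim_rate=0.9, spm_rate=0.5, rng=None):
    """With probability `fim_rate`, split `doc` at two random cut points into
    (prefix, middle, suffix) and emit it in fill-in-the-middle order; with
    probability `spm_rate` the suffix comes first (SPM), otherwise the prefix
    does (PSM). Otherwise the document is left in plain left-to-right order."""
    rng = rng or random.Random()
    if rng.random() >= fim_rate:
        return doc
    lo, hi = sorted(rng.sample(range(len(doc) + 1), 2))
    prefix, middle, suffix = doc[:lo], doc[lo:hi], doc[hi:]
    if rng.random() < spm_rate:
        # SPM order: suffix before prefix, middle last (the training target).
        return f"{FIM_SUFFIX}{suffix}{FIM_PREFIX}{prefix}{FIM_MIDDLE}{middle}"
    # PSM order: prefix, then suffix, middle last.
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}{middle}"
```

With `fim_rate=0.9` and `spm_rate=0.5` as in this stage, roughly 90% of StarCoder documents are rearranged this way, about half of them in SPM order; stage 3 applies the same transform with `fim_rate=0.3`.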

## Stage 3

The third stage reuses the Python and web-related data (HTML, CSS, and JavaScript) from the StarCoder data, training on it for three epochs with FIM applied at a rate of 0.3 and an SPM rate of 0.5, for a total of 100 billion tokens. Additionally, a small portion of the SlimPajama dataset, excluding the GitHub part, is also reused, contributing around 10 billion tokens.

## Instruction tuning (Stage 3a)

To enhance the model's proficiency in real chat scenarios, we utilize a diverse set of instruction tuning datasets, totaling approximately 1 billion tokens. Specifically, our data include OASST1-guanaco, SlimOrca, ShareGPT_V4.3, Evol-ShareGPT, CodeAlpaca, Rosetta Code, Evol-CodeAlpaca 1, Evol-CodeAlpaca 2, and a self-generated dataset centered on website creation through the Alpaca pipeline. We will release the full dataset soon.

The detailed breakdown of the tokens is as follows:

*(Figure: detailed token breakdown across the data splits)*

## Primary Usage

This dataset serves as the foundation for training CrystalCoder and supports reproduction of our results. For training from scratch, please refer to our training code. For resuming from intermediate checkpoints, load the dataloader states stored in the checkpoints and follow this tutorial.

## License

Pretraining data for language models mostly comes from a collection of data sources with various licenses. Any use of all or part of the data here must abide by the terms of those original licenses, including attribution clauses where relevant. We refer users to the SlimPajama dataset and StarCoder for detailed license attribution.

We release our work under the ODC-BY license, which grants rights over the dataset as a whole, but not over the individual contents of the dataset.