# Dataset description
This release contains the complete data used to train CrystalCoder, covering all three pre-training stages. The data combines two prior datasets: the [SlimPajama dataset](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata), divided across the three stages with different mixing weights.
## Stage 1
In the first stage, we use half of the [SlimPajama data](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
## Stage 2
In the second stage, we use the remaining half of the [SlimPajama data](https://huggingface.co/datasets/cerebras/SlimPajama-627B), along with two epochs of [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata). For the StarCoder data, we apply [FIM augmentation](https://arxiv.org/abs/2207.14255) with an FIM rate of 0.9.
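Fill-in-the-middle (FIM) augmentation rearranges a fraction of documents so the model learns to infill code rather than only continue it left to right. Below is a minimal sketch, assuming StarCoder-style sentinel tokens, character-level split points, and PSM (prefix-suffix-middle) ordering; the actual CrystalCoder preprocessing pipeline may differ (e.g. token-level splits or SPM ordering).

```python
import random

# Sentinel tokens follow the StarCoder convention (an assumption here).
FIM_PREFIX = "<fim_prefix>"
FIM_MIDDLE = "<fim_middle>"
FIM_SUFFIX = "<fim_suffix>"

def apply_fim(text: str, fim_rate: float, rng: random.Random) -> str:
    """With probability `fim_rate`, split `text` at two random points and
    emit it in PSM order; otherwise return the plain autoregressive sample."""
    if rng.random() >= fim_rate:
        return text
    # Two random cut points; sorting guarantees a <= b.
    a, b = sorted(rng.randrange(len(text) + 1) for _ in range(2))
    prefix, middle, suffix = text[:a], text[a:b], text[b:]
    # PSM ordering: the model sees prefix and suffix, then predicts the middle.
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}{middle}"

rng = random.Random(0)
original = "def add(x, y):\n    return x + y\n"
augmented = apply_fim(original, fim_rate=0.9, rng=rng)
```

Because the three pieces are only reordered, concatenating prefix, middle, and suffix back together always recovers the original document.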
## Stage 3
In the third stage, we reuse the Python, HTML, CSS, and JavaScript subsets of the [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata) for three epochs, applying FIM with a rate of 0.3. A small portion of the SlimPajama dataset, excluding its GitHub subset, is also reused.
# Primary usage
This dataset was used to train CrystalCoder and is released to support reproduction of our results.