ljb121002 committed on
Commit fdad082
1 Parent(s): 7e29626

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -1,18 +1,18 @@
- # Dataset description
+ # Description of the Dataset

- This release incorporates the complete data sequence used in our CrystalCoder training, covering data sequences from the three pre-training stages. The data is a combination of two previous works: [SlimPajama dataset](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata). We divide them across three stages with different weights.
+ This release integrates the entire data sequence utilized in the CrystalCoder training. It encompasses data sequences from the three pre-training stages, combining information from two prior works: the [SlimPajama dataset](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata), totaling approximately 1300 billion tokens. These tokens are distributed across three stages, each with distinct weights.

  ## Stage 1
- During this stage, we utilize half of the [SlimPajama data](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
+ During this initial stage, half of the [SlimPajama data](https://huggingface.co/datasets/cerebras/SlimPajama-627B) is utilized, equivalent to approximately 345 billion tokens.

  ## Stage 2
- In the second stage, the remaining half of the [SlimPajama data](https://huggingface.co/datasets/cerebras/SlimPajama-627B) is employed, along with two epochs of [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata). For StarCoder data, we implement [FIM augmentation](https://arxiv.org/abs/2207.14255) with an FIM rate of 0.9.
+ In the second stage, the remaining half of the [SlimPajama data](https://huggingface.co/datasets/cerebras/SlimPajama-627B) is employed, along with two epochs of [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata). For the StarCoder data, we apply [FIM augmentation](https://arxiv.org/abs/2207.14255) with an FIM rate of 0.9. The total token count for this stage is calculated as 0.5 * 690 + 2 * 291, resulting in 927 billion tokens.

  ## Stage 3
- The third stage involves the reuse of Python, HTML, CSS, and JavaScript data from the [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata) for training over three epochs. In this stage, FIM with a rate of 0.3 is applied. Additionally, a small portion of the SlimPajama dataset except the Github part is also reused.
+ The third stage involves reusing Python and web-related data from the [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata), including HTML, CSS, and JavaScript. This data is utilized for training over three epochs, with the application of FIM at a rate of 0.3. The total token count for this stage is 100 billion. Additionally, a small portion of the SlimPajama dataset, excluding the Github part, is also reused, contributing around 10 billion tokens.

- # Primary usage
+ # Primary Usage

- This dataset is used for our training of our CrystalCoder, and also for further reproduction.
+ This dataset serves as the foundation for training CrystalCoder and supports further reproduction.

- # License:
+ # License:
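
For readers unfamiliar with the FIM rates referenced in Stage 2 (0.9) and Stage 3 (0.3), the sketch below illustrates how such a rate is typically applied during data preprocessing, following the general recipe of the linked FIM paper. This is a minimal character-level sketch under assumptions, not the exact CrystalCoder pipeline: the sentinel strings, the function name `maybe_apply_fim`, and character-level splitting are illustrative placeholders.

```python
import random

# Placeholder sentinel strings; the actual special tokens depend on the
# tokenizer used for training and are an assumption here, not taken from
# this dataset card.
FIM_PREFIX, FIM_SUFFIX, FIM_MIDDLE = "<fim_prefix>", "<fim_suffix>", "<fim_middle>"

def maybe_apply_fim(document: str, fim_rate: float, rng: random.Random) -> str:
    """With probability `fim_rate`, split a document into (prefix, middle, suffix)
    at two random positions and reorder it for fill-in-the-middle training;
    otherwise return the document unchanged as a plain left-to-right example."""
    if rng.random() >= fim_rate:
        return document
    # Choose two cut points; sorting keeps the prefix/middle/suffix boundaries valid.
    i, j = sorted(rng.randrange(len(document) + 1) for _ in range(2))
    prefix, middle, suffix = document[:i], document[i:j], document[j:]
    # PSM ordering: the model sees prefix and suffix, then learns to generate the middle.
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}{middle}"

rng = random.Random(0)
# With fim_rate=0.9 (Stage 2), roughly 90% of documents are transformed;
# with fim_rate=0.3 (Stage 3), roughly 30% are.
print(maybe_apply_fim("def add(a, b):\n    return a + b\n", fim_rate=0.9, rng=rng))
```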