ljb121002 committed
Commit 3827fa3
1 Parent(s): 13f1a7d

Update README.md

Files changed (1): README.md +5 -0
README.md CHANGED
@@ -11,6 +11,11 @@ In the second stage, the remaining half of the [SlimPajama data](https://hugging
  ## Stage 3

  The third stage reuses Python and web-related data from the [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata), including HTML, CSS, and JavaScript. This data is used for training over three epochs, with fill-in-the-middle (FIM) applied at a rate of 0.3 (see the FIM sketch after the diff); the total token count for this stage is 100 billion. Additionally, a small portion of the SlimPajama dataset, excluding the GitHub part, is also reused, contributing around 10 billion tokens.

+ ### Instruction tuning
+
+ To enhance the model's proficiency in real chat scenarios, we use a diverse set of instruction tuning datasets totaling approximately 1 billion tokens. Specifically, the mixture includes [OASST1-guanaco](https://huggingface.co/datasets/openaccess-ai-collective/oasst1-guanaco-extended-sharegpt), [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca), [ShareGPT_V4.3](https://huggingface.co/datasets/Aeala/ShareGPT_Vicuna_unfiltered), [Evol-ShareGPT](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k), [CodeAlpaca](https://huggingface.co/datasets/lucasmccabe-lmi/CodeAlpaca-20k), [Rosetta Code](https://github.com/sahil280114/codealpaca/blob/master/data/rosetta_alpaca.json), [Evol-CodeAlpaca 1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1), [Evol Alpaca 2](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1), and a self-generated dataset centered on website creation, built with the [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) pipeline (a loading sketch follows the diff).
+
+
  # Primary Usage

  This dataset serves as the foundation for training CrystalCoder and supports further reproduction.
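
Since Stage 3 leans on FIM preprocessing, here is a minimal Python sketch of what applying FIM at a rate of 0.3 can look like. It is an illustration only: the PSM (prefix-suffix-middle) layout, the `apply_fim` helper, and the sentinel strings are assumptions, not the exact preprocessing used to build this dataset.

```python
# Minimal sketch of rate-based fill-in-the-middle (FIM) preprocessing.
import random

FIM_RATE = 0.3  # fraction of documents transformed, matching the rate above

def apply_fim(text: str, rng: random.Random) -> str:
    """With probability FIM_RATE, rearrange a document into
    prefix/suffix/middle order with sentinel markers."""
    if rng.random() >= FIM_RATE:
        return text  # the other ~70% of documents stay in left-to-right order
    # Choose two cut points that define the middle span.
    lo, hi = sorted(rng.randrange(len(text) + 1) for _ in range(2))
    prefix, middle, suffix = text[:lo], text[lo:hi], text[hi:]
    # Sentinel strings here are illustrative; a real pipeline would use
    # dedicated special-token IDs from the tokenizer instead.
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>{middle}"

rng = random.Random(0)
print(apply_fim("def add(a, b):\n    return a + b\n", rng))
```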
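
For inspecting the instruction tuning components, the sketch below pulls each Hub-hosted dataset with the Hugging Face `datasets` library. The repository IDs come from the links in the diff; the `train` split name is an assumption, and Rosetta Code is omitted because it is hosted as a JSON file on GitHub rather than on the Hub.

```python
# Hypothetical sketch: inspecting the instruction tuning components.
from datasets import load_dataset

REPO_IDS = [
    "openaccess-ai-collective/oasst1-guanaco-extended-sharegpt",
    "Open-Orca/SlimOrca",
    "Aeala/ShareGPT_Vicuna_unfiltered",
    "WizardLM/WizardLM_evol_instruct_V2_196k",
    "lucasmccabe-lmi/CodeAlpaca-20k",
    "theblackcat102/evol-codealpaca-v1",
    "nickrosh/Evol-Instruct-Code-80k-v1",
]

for repo_id in REPO_IDS:
    ds = load_dataset(repo_id, split="train")  # assumes a "train" split exists
    print(f"{repo_id}: {len(ds)} examples, columns {ds.column_names}")
```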