hunterhector committed
Commit 80df173
1 parent: 7dbf7bc

Update README.md

Files changed (1): README.md (+4 −0)
README.md CHANGED
@@ -22,6 +22,10 @@ The third stage involves reusing Python and web-related data from the [StarCoder
 
 To enhance the model's proficiency in real chat scenarios, we utilize a diverse set of instruction tuning datasets, totaling approximately 1 billion tokens. Specifically, our data include [OASST1-guanaco](https://huggingface.co/datasets/openaccess-ai-collective/oasst1-guanaco-extended-sharegpt), [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca), [ShareGPT_V4.3](https://huggingface.co/datasets/Aeala/ShareGPT_Vicuna_unfiltered), [Evol-ShareGPT](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k), [CodeAlpaca](https://huggingface.co/datasets/lucasmccabe-lmi/CodeAlpaca-20k), [Rosetta Code](https://github.com/sahil280114/codealpaca/blob/master/data/rosetta_alpaca.json), [Evol-CodeAlpaca 1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1), [Evol-CodeAlpaca 2](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1), and a self-generated dataset centered on website creation through the [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) pipeline. We will release the full dataset soon.
 
+The detailed breakdown of the tokens is as follows:
+
+![data split](./data_split.png)
+
 
 # Primary Usage