ljb121002 committed
Commit aff4fec
1 Parent(s): 784f282

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -13,7 +13,7 @@ The third stage involves reusing Python and web-related data from the [StarCoder
 
 ### Instruction tuning
 
-To enhance the model's proficiency in real chat scenarios, we utilize a diverse set of instruction tuning datasets, totaling approximately 1 billion tokens. Specifically, our data include [OASST1-guanaco](https://huggingface.co/datasets/openaccess-ai-collective/oasst1-guanaco-extended-sharegpt), [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca), [ShareGPT_V4.3](https://huggingface.co/datasets/Aeala/ShareGPT_Vicuna_unfiltered), [Evol-ShareGPT](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k), [CodeAlpaca](https://huggingface.co/datasets/lucasmccabe-lmi/CodeAlpaca-20k), [Rosetta Code](https://github.com/sahil280114/codealpaca/blob/master/data/rosetta_alpaca.json), [Evol-CodeAlpaca 1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1), [Evol Alpaca 2](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1), and a self-generated dataset centered on website creation through the [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) pipeline. We will release the full dataset soon.
+To enhance the model's proficiency in real chat scenarios, we utilize a diverse set of instruction tuning datasets, totaling approximately 1 billion tokens. Specifically, our data include [OASST1-guanaco](https://huggingface.co/datasets/openaccess-ai-collective/oasst1-guanaco-extended-sharegpt), [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca), [ShareGPT_V4.3](https://huggingface.co/datasets/Aeala/ShareGPT_Vicuna_unfiltered), [Evol-ShareGPT](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k), [CodeAlpaca](https://huggingface.co/datasets/lucasmccabe-lmi/CodeAlpaca-20k), [Rosetta Code](https://github.com/sahil280114/codealpaca/blob/master/data/rosetta_alpaca.json), [Evol-CodeAlpaca 1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1), [Evol-CodeAlpaca 2](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1), and a self-generated dataset centered on website creation through the [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) pipeline. We will release the full dataset soon.
 
 
 # Primary Usage