losyer8 committed
Commit 2fe0651
1 Parent(s): eaa2ba2

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -118,7 +118,7 @@ The models have been pre-trained using a blend of the following datasets.
 |Codes|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|10B
 
 The pre-training was continuously conducted using a total of 10 folds of non-overlapping data, each consisting of approximately 27-28B tokens.
-We finalized the pre-training with additional (potentially) high-quality 27B tokens data obtained from the identical source data sets listed above used for the 10-fold data.
+We finalized the pre-training with additional (potentially) high-quality 27B tokens data obtained from the identical source datasets listed above used for the 10-fold data.
 
 ### Instruction tuning
 