Any plan to open source the 200 B token dataset?
#2 · opened by YaoLiu61
Hi, thanks for your great work!
Do you have any plan to open-source the 200B-token dataset used to continually pretrain the base model (Qwen1.5-7B)?
@YaoLiu61 Thanks for your interest! Unfortunately, we cannot release the dataset itself for several reasons (we tried our best, but some challenges remain). However, we will provide the full data cleaning code, which you can use to reproduce the dataset.
@SivilTaram Thanks for your reply. And when will the full data cleaning code be released?
@YaoLiu61 tonight
@YaoLiu61 Hi, we just released the code for data cleaning and data deduplication at https://github.com/sail-sg/sailcraft
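For anyone curious what the deduplication step does before digging into the repo: the simplest form is exact-match dedup after light normalization. The sketch below is a generic illustration only (it is not sailcraft's actual API; the function names are made up for this example):

```python
import hashlib

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivially different copies hash alike.
    return " ".join(text.lower().split())

def exact_dedup(docs):
    # Keep the first occurrence of each normalized document, drop exact repeats.
    seen, kept = set(), []
    for doc in docs:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

docs = ["Hello  world", "hello world", "Another document"]
print(exact_dedup(docs))  # "hello world" is dropped as a duplicate after normalization
```

Real pretraining pipelines typically add near-duplicate detection (e.g. MinHash over n-grams) on top of this; see the repo for the actual implementation.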
SivilTaram changed discussion status to closed