The data size of Chinese is only 385GB
#4
by zxs1997zju - opened
Hi all,
Thank you for your amazing work.
When I use the code below to download the Chinese text data:
====================================
from datasets import load_dataset

# Download the Chinese (zh) subset of OSCAR-2301 into a local cache directory
dataset = load_dataset("oscar-corpus/OSCAR-2301",
                       cache_dir='/dataset/nlp/oscar/cache',
                       language="zh")
====================================
I only got 385GB of files, but the dataset card says Chinese (zh) has 1.4TB of data.
So has the data been cleaned or deduplicated to some extent?
Hello,
How are you saving the data on disk/checking its size?
If you want to download the raw data, using git LFS might be simpler!
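If it helps, you can also compare the compressed download size with the uncompressed Arrow size straight from the dataset metadata, without downloading anything. A minimal sketch using the `datasets` library (the `language` kwarg is the same config argument as in your snippet; both size fields may be `None` if the card does not record them, and gated datasets may require logging in first with `huggingface-cli login`):
====================================
from datasets import load_dataset_builder

# Resolve the builder and its metadata only; no data files are downloaded
builder = load_dataset_builder("oscar-corpus/OSCAR-2301", language="zh")

# download_size: bytes of the compressed files fetched from the Hub
# dataset_size:  bytes of the uncompressed Arrow data once prepared
for field in ("download_size", "dataset_size"):
    size = getattr(builder.info, field)
    print(field, f"{size / 1e9:.1f} GB" if size else "not recorded")
====================================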
My apologies, when I use save_to_disk, the file size turns out to be 1.4TB.
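For reference, the 385GB figure is most likely the compressed files in the download cache; save_to_disk writes uncompressed Arrow files, which is what the dataset card reports. A quick sketch to verify the on-disk size (the output path below is a placeholder):
====================================
import os

from datasets import load_dataset

dataset = load_dataset("oscar-corpus/OSCAR-2301",
                       cache_dir='/dataset/nlp/oscar/cache',
                       language="zh")

# save_to_disk writes uncompressed Arrow files, so the resulting
# directory reflects the size reported on the card (~1.4TB for zh)
dataset.save_to_disk("/dataset/nlp/oscar/zh_arrow")  # placeholder path

# Sum the size of every file under the saved directory
total_bytes = sum(
    os.path.getsize(os.path.join(root, name))
    for root, _, files in os.walk("/dataset/nlp/oscar/zh_arrow")
    for name in files
)
print(f"on-disk size: {total_bytes / 1e12:.2f} TB")
====================================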
uj changed discussion status to closed