Datasets:
How many samples should there be, and how large should it be on disk?
This dataset is tagged with the 10M-100M samples size category, but the metadata indicates it has slightly fewer than 10M samples.
When I decompressed the jsonl.xz files, I got:
# du -sh data/decompressed/
103G data/decompressed/
Similarly, the HF datasets cache directory shows:
In [4]: !du -sh hf_pol/
131G hf_pol/
In [5]: !du -sh hf_pol/pile-of-law___pile-of-law
99G hf_pol/pile-of-law___pile-of-law
Is this expected? I see that
In [8]: ds['train'].size_in_bytes / 1e9
Out[8]: 254.359455287
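For reference, here is roughly how I am reading the HF size metadata (a sketch; which of these fields are populated depends on what the repo publishes, and the labels in the comments are my own understanding):
import datasets

# Sketch: inspect the size metadata for the "all" config.
# These fields can be None if the repo does not publish them.
info = datasets.load_dataset_builder("pile-of-law/pile-of-law", "all").info
print("download_size (compressed files):", info.download_size)
print("dataset_size (generated Arrow tables):", info.dataset_size)
print("size_in_bytes (as I understand it, download + Arrow):", info.size_in_bytes)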
Thank you!
Thanks for filing this issue! We will look into this and circle back.
Hi, I haven't finished downloading and checking, but even partway through loading the dataset my on-disk usage has already surpassed your numbers (hf_pol/pile-of-law___pile-of-law is currently at 129GB and still growing). Is it possible that your download was interrupted somehow? To help us understand what's happening, could you share how you got the 103GB number, along with your versions of HF datasets and Python?
EDIT: Just following up. I managed to load the dataset onto a fresh machine. Running du over the .arrow files in the HF cache, I see 210GB+ of uncompressed data on disk. There is still some discrepancy between what the HF API reports and the space taken up on disk, which I'm still investigating; I'll have to dig through the HF API to figure it out. In the meantime, these are the numbers you should see for the total number of datapoints, which I verified by downloading to my local machine and printing the dataset info via HF:
DatasetDict({
    train: Dataset({
        features: ['text', 'created_timestamp', 'downloaded_timestamp', 'url'],
        num_rows: 7406292
    })
    validation: Dataset({
        features: ['text', 'created_timestamp', 'downloaded_timestamp', 'url'],
        num_rows: 2466152
    })
})
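If it helps to reproduce this check on your end, a minimal snippet along these lines should print the same summary (just a sketch; the cache_dir is whatever you used):
import datasets

# Reuses the local cache if the dataset has already been downloaded.
ds = datasets.load_dataset("pile-of-law/pile-of-law", "all", cache_dir="hf_pol")
print(ds)  # DatasetDict summary with features and num_rows per split
print(ds["train"].num_rows, ds["validation"].num_rows)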
For what it's worth, here's my HF datasets version:
>>> datasets.__version__
'2.7.1'
Running on macOS with Python 3.10.
I will continue investigating. Thank you for checking out our dataset!
Oh that's surprising.
I downloaded the data 2 different ways as a check.
HuggingFace native
First, I did
import datasets
ds = datasets.load_dataset("pile-of-law/pile-of-law", "all", cache_dir="hf_pol")
As far as I could tell from the output of the script, this download was successful. Then I ran the two commands I posted before:
$ du -sh hf_pol/
131G hf_pol/
$ du -sh hf_pol/pile-of-law___pile-of-law
99G hf_pol/pile-of-law___pile-of-law
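As a cross-check on du, the same total can also be computed from Python (a sketch; the exact cache layout under cache_dir is an HF implementation detail and may differ between versions):
from pathlib import Path

# Sum the on-disk size of the cached Arrow files.
cache_root = Path("hf_pol/pile-of-law___pile-of-law")
arrow_bytes = sum(p.stat().st_size for p in cache_root.rglob("*.arrow"))
print(f"{arrow_bytes / 1e9:.1f} GB of .arrow files under {cache_root}")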
JSONL.XZ directly
As a check against the previous method, I also downloaded the files directly from HuggingFace (script omitted), resulting in 91 files in a directory called data/. I then ran xz -d on every *.jsonl.xz file in there, making data/decompressed/. I then see:
$ du -sh data/decompressed/
103G data/decompressed/
Complicating factors
As mentioned, the Arrow-backed Dataset object reports a larger size:
In [8]: ds['train'].size_in_bytes / 1e9
Out[8]: 254.359455287
Also, iterating through the entire train dataset (to convert it to a binary format) and tokenizing ~2% of it (to estimate the total token count) gives me an estimate of 51B tokens, which is roughly what I would expect from a back-of-the-envelope calculation:
data_set_size_bytes = 256000000000
train_fraction = 0.75
bytes_per_char = 1.05  # ASCII characters predominate, but there is some Unicode
char_per_token = 3.5   # a guess
token_estimate = train_fraction * data_set_size_bytes / bytes_per_char / char_per_token
# ~52B
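For reference, the token estimate itself came from something like the following sketch (not my exact script; the GPT-2 tokenizer here is just a stand-in):
import datasets
from transformers import AutoTokenizer

ds = datasets.load_dataset("pile-of-law/pile-of-law", "all", cache_dir="hf_pol")
tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in tokenizer choice

# Sample ~2% of the train split (1 of 50 non-contiguous shards) and count tokens.
sample = ds["train"].shard(num_shards=50, index=0, contiguous=False)
sample_tokens = sum(len(tok(row["text"]).input_ids) for row in sample)

# Extrapolate back to the full split.
print(f"estimated train tokens: ~{sample_tokens * 50 / 1e9:.0f}B")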
Conclusion
So, given my token estimate and what you are seeing, it seems I do have the full dataset. I don't know why it takes up less space on disk for me. Maybe multiple factors are at play: perhaps the direct download hit an error somewhere, and perhaps HuggingFace is able to automatically compress the cached data on my disk?
EDIT: I too am running version '2.7.1' of datasets.
Thanks for providing those details. That is really quite strange. I will do my best to get to the bottom of this and try to reproduce what you're seeing. In the past we've run into some errors when using xz -d directly in bash (rather than reading via Python). Is it possible that some of the decompressions failed silently like this? Also, if you have a version number for xz, could you provide that as well? Like this:
% xz --version
xz (XZ Utils) 5.2.5
liblzma 5.2.5
It sounds like via HF you have the whole dataset, so hopefully that unblocks you. I'll continue to dig into this and also try the xz pathway to figure out what's causing the discrepancies. It might take a bit longer, though, since it's not immediately clear what is causing these differences (and the holidays are coming up).
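In the meantime, one way to check the downloaded archives for silent corruption without relying on xz -d is to stream them through Python's lzma module; this is just a sketch and assumes the compressed files sit under data/:
import lzma
from pathlib import Path

# A truncated or corrupt .xz stream raises LZMAError/EOFError here,
# which a failed shell decompression might not surface as clearly.
for path in sorted(Path("data").glob("*.jsonl.xz")):
    try:
        with lzma.open(path, "rb") as f:
            size = sum(len(chunk) for chunk in iter(lambda: f.read(1 << 20), b""))
        print(f"{path.name}: OK, {size / 1e9:.2f} GB uncompressed")
    except (lzma.LZMAError, EOFError) as err:
        print(f"{path.name}: FAILED ({err})")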
I am completely happy to stay in the HuggingFace ecosystem and not download the files manually or convert them with xz; that was only a backup check, since the numbers from the HuggingFace download were so strange. However, for completeness:
$ xz --version
xz (XZ Utils) 5.2.4
liblzma 5.2.4
I am running version '2.7.1' of datasets
So, despite the size on disk being very different for me than for you, the number of rows reported is the same and the token estimate checks out from first principles. I will assume things are fine, but boy do I wish I knew what was going on with the size on disk.
Hi. Just to follow up on this and close it out: we pushed a new version of the dataset with additional data, and we added xz --list output to the README describing the exact file sizes (compressed and uncompressed) that you should see. The latest version of the dataset should have ~291GB of uncompressed data (including metadata). If you have trouble with the new version, feel free to file a new issue. Thanks for your interest and patience!