How many rows are supposed to be in the train split?

#13
by yury-zyphra - opened

I couldn't download the dataset with datasets.load_dataset() because of the large number of tiny compressed JSONL files, so I cloned the repo with Git LFS and then pointed load_dataset() at the local copy. However, I see ~1.2B rows in the train split. Does that sound right?
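
Roughly the kind of call I mean (the local path and split name are just placeholders, not my exact command):

```python
from datasets import load_dataset

# Point load_dataset() at the locally cloned repo directory full of
# compressed JSONL shards; the path here is a placeholder.
ds = load_dataset("/path/to/local-clone", split="train")

# This is where I see ~1.2B rows instead of the expected count.
print(ds.num_rows)
```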

I have the same issue, not actually sure how to download this

OK, there should be ~0.6B rows. For some reason, using datasets.load_dataset() duplicates the dataset.

My workaround is basically to process the compressed JSONL files myself: I wrote a small script to merge them into a few larger JSONL shards, and then used datasets.Dataset.from_json() to get an HF Dataset. That way the row count comes out correct.
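
A rough sketch of that kind of re-sharding script (the glob pattern, shard size, and output file names are placeholders, not my actual script):

```python
# Rough sketch: glob pattern, shard size, and output names are placeholders.
import glob
import gzip

from datasets import Dataset


def flush(lines, shard_idx):
    """Write one plain JSONL shard and return its path."""
    path = f"reshard-{shard_idx:05d}.jsonl"
    with open(path, "w") as f:
        f.writelines(lines)
    return path


shard_paths, buffer, shard_idx = [], [], 0
lines_per_shard = 1_000_000  # arbitrary shard size

# Concatenate the many tiny compressed shards into a few large JSONL files.
for src in sorted(glob.glob("local-clone/**/*.jsonl.gz", recursive=True)):
    with gzip.open(src, "rt") as f:
        for line in f:
            buffer.append(line if line.endswith("\n") else line + "\n")
            if len(buffer) >= lines_per_shard:
                shard_paths.append(flush(buffer, shard_idx))
                buffer, shard_idx = [], shard_idx + 1
if buffer:
    shard_paths.append(flush(buffer, shard_idx))

# Build the HF Dataset directly from the re-sharded JSONL files; the row
# count should now match the expected total.
ds = Dataset.from_json(shard_paths)
print(ds.num_rows)
```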

What I did was just git clone the whole repo, which worked much better. You need to set up Git LFS first, though.

Yeah, I cloned the repo and then pointed load_dataset() at it; that's when I hit the duplication. I still wanted an HF Dataset object, so I had to jump through some hoops to get it working.