Train and Validation splits are identical

#2
by kapllan - opened

Hello,

I am writing to you about some issues that @joelito and I ran into when using your dataset. As you may know, "pile-of-law/pile-of-law" is part of the larger dataset "joelito/Multi_Legal_Pile". We wanted to provide some general information about each dataset that "joelito/Multi_Legal_Pile" consists of, including "pile-of-law/pile-of-law". When extracting the size of each source, such as "edgar" or "scotus_oral_arguments", we got very large numbers in GB. For example, for the entire dataset "pile-of-law/pile-of-law" (the source "all") we calculated a size of 473 GB, which is far more than 222 GB.

We then noticed that this might be due to an error in the splits of "pile-of-law/pile-of-law": the train and validation splits always appear to be identical, which cannot be right, at least according to your description, "There is a train/validation split for each subset of the data. 75%/25%". Please have a look at the screenshots attached to this message; a rough sketch of the kind of check we ran is included after them.

Could you please look into this and either fix the issue or explain why the train and validation sets seem to be identical?

Best
Veton Matoshi

[Attached screenshots: photo_2022-12-07_23-03-05.jpg, photo_2022-12-07_23-05-55.jpg]
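A minimal sketch of the kind of check we ran, using the r_legaladvice subset as an example (illustrative only, not our exact code):

from datasets import load_dataset

# Illustrative sketch: load one subset and compare the train and validation splits.
ds = load_dataset("pile-of-law/pile-of-law", "r_legaladvice")

print("train rows:     ", ds["train"].num_rows)
print("validation rows:", ds["validation"].num_rows)

# If the splits were truly identical, the first examples would match as well.
identical_head = all(
    ds["train"][i]["text"] == ds["validation"][i]["text"] for i in range(5)
)
print("first 5 examples identical:", identical_head)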

FYI, this isn't a code problem; if you look at the source data, the files look identical: https://huggingface.co/datasets/pile-of-law/pile-of-law/tree/main/data
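One way to check this directly is to query the repository's file metadata. A hedged sketch (files_metadata is a standard HfApi option; the filename filter below is only an example):

from huggingface_hub import HfApi

# List the data files of the dataset repo along with their sizes, so the
# train and validation files for a given subset can be compared directly.
api = HfApi()
info = api.dataset_info("pile-of-law/pile-of-law", files_metadata=True)

for sibling in info.siblings:
    if sibling.rfilename.startswith("data/") and "r_legaladvice" in sibling.rfilename:
        print(sibling.rfilename, sibling.size)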

Retracted!

Pile of Law org

What do you mean by that? The validation file is smaller than the train file.

This comment has been hidden
Pile of Law org

Thanks for flagging this! Unfortunately, I haven't been able to replicate it. For example, if I load r_legaladvice, I properly get the 75/25 split:

>>> x = load_dataset("pile-of-law/pile-of-law", "r_legaladvice")
Found cached dataset pile-of-law (/Users/breakend/.cache/huggingface/datasets/pile-of-law___pile-of-law/r_legaladvice/0.0.0/acacf3e29a952ba9026148b979cb438151ebd33f842f5779a213967033c88619)
100%|████████████████████████████████████████| 2/2 [00:00<00:00, 76.84it/s]
>>> x["train"]
Dataset({
    features: ['text', 'created_timestamp', 'downloaded_timestamp', 'url'],
    num_rows: 109740
})
>>> x["validation"]
Dataset({
    features: ['text', 'created_timestamp', 'downloaded_timestamp', 'url'],
    num_rows: 36931
})

Could you let me know which version of HF datasets you're using so I can try to replicate as close as possible to your setup?
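For reference, the installed version can be printed with a one-liner (trivial sketch, assuming a standard install):

import datasets

# Print the installed version of the HF datasets library.
print(datasets.__version__)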

Hello @breakend,

Thank you very much for your explanation. I tested everything on my new laptop, and now it works just fine (see the screenshot below). I cannot explain the behaviour on my other device, but the important thing is that it works now.

Thanks a lot!

[Attached screenshot: grafik.png]
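One plausible (but unconfirmed) explanation for the behaviour on the old device is a stale local cache. As a hedged sketch, forcing a fresh download would look like this (download_mode is a standard load_dataset argument):

from datasets import load_dataset

# Unverified suggestion: bypass any stale local cache by forcing a fresh
# download of the subset before comparing the splits again.
ds = load_dataset(
    "pile-of-law/pile-of-law",
    "r_legaladvice",
    download_mode="force_redownload",
)
print(ds)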

I am closing this thread now.

kapllan changed discussion status to closed
