Datasets:
Train and Validation splits are identical
Hello,
I am writing you concerning some issues that @joelito and I experienced when using your dataset. As you may know, βpile-of-law/pile-of-lawβ is part of the larger dataset βjoelito/Multi_Legal_Pileβ. We wanted to give some general information about each dataset βjoelito/Multi_Legal_Pileβ consists of, including βpile-of-law/pile-of-lawβ. When extracting information about the size of each source, such as "edgar", "scotus_oral_arguments" for example, we got very large numbers in GB. For example, for the entire dataset βpile-of-law/pile-of-lawβ (the source "all") we calculated a size of 473GB which is way more than 222GB. Afterwards we saw that this might be due some errors concerning the split in βpile-of-law/pile-of-lawβ. We saw that the train and validation split are always identical, which cannot be, at least according to your description βThere is a train/validation split for each subset of the data. 75%/25%β. You can have a look at the screenshots that I have attached to this message.
Could you please look into this and either fix the issue or explain why the train and validation sets appear to be identical?
Best
Veton Matoshi
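For context, a size calculation along these lines will double-count data if the two splits contain the same rows. This is a minimal, self-contained sketch (the function name and the toy data are hypothetical; a real check would iterate over the splits returned by `load_dataset`, and the `"text"` column name is assumed):

```python
def split_size_gb(rows, text_key="text"):
    """Sum the UTF-8 byte length of each row's text column, in GB."""
    total_bytes = sum(len(row[text_key].encode("utf-8")) for row in rows)
    return total_bytes / 1024**3

# Toy example: if "validation" is an identical copy of "train",
# summing both splits doubles the apparent dataset size.
train = [{"text": "a" * 1024}, {"text": "b" * 2048}]
validation = list(train)  # identical copy

total_gb = split_size_gb(train) + split_size_gb(validation)
```

Under this sketch, identical splits would inflate the total by exactly 2x, which is roughly the discrepancy we observed (473 GB vs. 222 GB).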
FYI, this isn't a code problem; if you look at the source data, the files look identical: https://huggingface.co/datasets/pile-of-law/pile-of-law/tree/main/data
Retracted!
How do you mean that? The validation file is smaller than the train file.
Thanks for flagging this! Unfortunately, I haven't been able to replicate it. For example, if I load r_legaladvice, I properly get the 75/25 split:
>>> x = load_dataset("pile-of-law/pile-of-law", "r_legaladvice")
Found cached dataset pile-of-law (/Users/breakend/.cache/huggingface/datasets/pile-of-law___pile-of-law/r_legaladvice/0.0.0/acacf3e29a952ba9026148b979cb438151ebd33f842f5779a213967033c88619)
100%|██████████| 2/2 [00:00<00:00, 76.84it/s]
>>> x["train"]
Dataset({
features: ['text', 'created_timestamp', 'downloaded_timestamp', 'url'],
num_rows: 109740
})
>>> x["validation"]
Dataset({
features: ['text', 'created_timestamp', 'downloaded_timestamp', 'url'],
num_rows: 36931
})
Could you let me know which version of HF datasets you're using so I can try to replicate as close as possible to your setup?
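In the meantime, one way to check directly whether two splits contain the same rows (independent of file sizes or caching) is to hash each example's text and compare the sets. This is a generic sketch, not part of the dataset's own tooling; the helper names and the `"text"` column are assumptions, and in practice you would pass `ds["train"]` and `ds["validation"]` from `load_dataset`:

```python
import hashlib

def row_hashes(rows, text_key="text"):
    """Return the set of SHA-256 digests of each row's text column."""
    return {hashlib.sha256(r[text_key].encode("utf-8")).hexdigest() for r in rows}

def split_overlap(train_rows, val_rows):
    """Fraction of distinct validation texts that also appear in train.

    1.0 means every validation text is in train (identical or subset);
    0.0 means the splits are fully disjoint.
    """
    val_h = row_hashes(val_rows)
    if not val_h:
        return 0.0
    return len(row_hashes(train_rows) & val_h) / len(val_h)

# Toy data: disjoint splits give 0.0, a duplicated split gives 1.0.
train = [{"text": "doc one"}, {"text": "doc two"}]
val_disjoint = [{"text": "doc three"}]
val_duplicate = [{"text": "doc one"}, {"text": "doc two"}]
```

A properly constructed 75/25 split should report an overlap at or near 0.0.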
I am closing this thread now.