Convert dataset to Parquet

#5 by albertvillanova, opened


albertvillanova changed pull request status to merged

Hello Albert,

I have been facing some problems since you updated the LexGLUE dataset. I believe the train/validation/test subdirectories under the different tasks may be missing. Of course, this could be an intentional change; however, most code written for the LexGLUE dataset, including the source code on GitHub (https://github.com/coastalcph/lex-glue/blob/main/experiments/eurlex.py) (please see the code snippet copied below), uses the train/validation/test splits when loading the datasets. Would it be possible to restore the subdirectories? If not, the source code could be corrected by removing the data_dir argument from the load_dataset calls.

```python
# Downloading and loading eurlex dataset from the hub.
if training_args.do_train:
    train_dataset = load_dataset("lex_glue", name=data_args.task, split="train", data_dir='data',
                                 cache_dir=model_args.cache_dir)

if training_args.do_eval:
    eval_dataset = load_dataset("lex_glue", name=data_args.task, split="validation", data_dir='data',
                                cache_dir=model_args.cache_dir)

if training_args.do_predict:
    predict_dataset = load_dataset("lex_glue", name=data_args.task, split="test", data_dir='data',
                                   cache_dir=model_args.cache_dir)
```
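For reference, the suggested fix would amount to dropping the data_dir argument, since the Parquet conversion serves the splits directly from the Hub configuration. A minimal sketch (a fragment, assuming the same training_args, data_args, and model_args objects as the snippet above):

```python
# Sketch of the corrected calls: identical to the snippet above,
# but without data_dir='data', so the datasets library resolves
# the train/validation/test splits from the Hub config itself.
if training_args.do_train:
    train_dataset = load_dataset("lex_glue", name=data_args.task, split="train",
                                 cache_dir=model_args.cache_dir)

if training_args.do_eval:
    eval_dataset = load_dataset("lex_glue", name=data_args.task, split="validation",
                                cache_dir=model_args.cache_dir)

if training_args.do_predict:
    predict_dataset = load_dataset("lex_glue", name=data_args.task, split="test",
                                   cache_dir=model_args.cache_dir)
```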

Cheers,
Sinan

Thanks for reporting, @singultek .

I have opened a dedicated issue to discuss this: #6
Let's continue the discussion there.
