| Column | Type | Value stats |
| --- | --- | --- |
| id | int64 | 599M to 3.29B |
| url | string | lengths 58 to 61 |
| html_url | string | lengths 46 to 51 |
| number | int64 | 1 to 7.72k |
| title | string | lengths 1 to 290 |
| state | string | 2 values |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | lengths 3 to 26 |
| labels | list | lengths 0 to 4 |
| body | string | lengths 0 to 228k |
| is_pull_request | bool | 2 classes |
619,968,480
https://api.github.com/repos/huggingface/datasets/issues/151
https://github.com/huggingface/datasets/pull/151
151
Fix JSON tests.
closed
0
2020-05-18T07:17:38
2020-05-18T07:21:52
2020-05-18T07:21:51
jplu
[]
true
619,809,645
https://api.github.com/repos/huggingface/datasets/issues/150
https://github.com/huggingface/datasets/pull/150
150
Add WNUT 17 NER dataset
closed
4
2020-05-17T22:19:04
2020-05-26T20:37:59
2020-05-26T20:37:59
stefan-it
[]
Hi, this PR adds the WNUT 17 dataset to `nlp`. > Emerging and Rare entity recognition > This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisation), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet “so.. kktny in 30 mins?” - even human experts find entity kktny hard to detect and resolve. This task will evaluate the ability to detect and classify novel, emerging, singleton named entities in noisy text. > > The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities. More information about the dataset can be found on the [shared task page](https://noisy-text.github.io/2017/emerging-rare-entities.html). Dataset is taken is taken from their [GitHub repository](https://github.com/leondz/emerging_entities_17), because the data provided in this repository contains minor fixes in the dataset format. ## Usage Then the WNUT 17 dataset can be used in `nlp` like this: ```python import nlp wnut_17 = nlp.load_dataset("./datasets/wnut_17/wnut_17.py") print(wnut_17) ``` This outputs: ```txt 'train': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 3394) 'validation': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 1009) 'test': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 1287) ``` Number are identical with the ones in [this paper](https://www.ijcai.org/Proceedings/2019/0702.pdf) and are the same as using the `dataset` reader in Flair. ## Features The following feature format is used to represent a sentence in the WNUT 17 dataset: | Feature | Example | Description | ---- | ---- | ----------------- | `id` | `0` | Number (id) of current sentence | `tokens` | `["AHFA", "extends", "deadline"]` | List of tokens (strings) for a sentence | `labels` | `["B-group", "O", "O"]` | List of labels (outer span) The following labels are used in WNUT 17: ```txt O B-corporation I-corporation B-location I-location B-product I-product B-person I-person B-group I-group B-creative-work I-creative-work ```
true
619,735,739
https://api.github.com/repos/huggingface/datasets/issues/149
https://github.com/huggingface/datasets/issues/149
149
[Feature request] Add Ubuntu Dialogue Corpus dataset
closed
1
2020-05-17T15:42:39
2020-05-18T17:01:46
2020-05-18T17:01:46
danth
[ "dataset request" ]
https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/
false
619,590,555
https://api.github.com/repos/huggingface/datasets/issues/148
https://github.com/huggingface/datasets/issues/148
148
_download_and_prepare() got an unexpected keyword argument 'verify_infos'
closed
2
2020-05-17T01:48:53
2020-05-18T07:38:33
2020-05-18T07:38:33
richarddwang
[ "dataset bug" ]
# Reproduce In Colab, ``` %pip install -q nlp %pip install -q apache_beam mwparserfromhell dataset = nlp.load_dataset('wikipedia') ``` get ``` Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0... --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-52471d2a0088> in <module>() ----> 1 dataset = nlp.load_dataset('wikipedia') 1 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 515 download_mode=download_mode, 516 ignore_verifications=ignore_verifications, --> 517 save_infos=save_infos, 518 ) 519 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 361 verify_infos = not save_infos and not ignore_verifications 362 self._download_and_prepare( --> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 364 ) 365 # Sync info TypeError: _download_and_prepare() got an unexpected keyword argument 'verify_infos' ```
false
619,581,907
https://api.github.com/repos/huggingface/datasets/issues/147
https://github.com/huggingface/datasets/issues/147
147
Error with sklearn train_test_split
closed
2
2020-05-17T00:28:24
2020-06-18T16:23:23
2020-06-18T16:23:23
ClonedOne
[ "enhancement" ]
It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code: ```python data = nlp.load_dataset('imdb', cache_dir=data_cache) f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed) ``` throws: ``` ValueError: Can only get row(s) (int or slice) or columns (string). ``` It's not a big deal, since there are other ways to split the data, but it would be a cool thing to have.
false
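As a point of reference for the request above, the split-slicing syntax that `nlp.load_dataset` already accepts elsewhere in this issue log (e.g. `split='validation[:10%]'`) can approximate a quick subset without sklearn. This is only a sketch with illustrative percentages, and unlike `train_test_split` it yields deterministic slices rather than a shuffled split:

```python
import nlp

# Deterministic 50/50 slicing of the IMDB training split via split strings;
# the percentages here are illustrative, not part of the original issue.
first_half = nlp.load_dataset("imdb", split="train[:50%]")
second_half = nlp.load_dataset("imdb", split="train[50%:]")
print(len(first_half), len(second_half))
```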
619,564,653
https://api.github.com/repos/huggingface/datasets/issues/146
https://github.com/huggingface/datasets/pull/146
146
Add BERTScore to metrics
closed
0
2020-05-16T22:09:39
2020-05-17T22:22:10
2020-05-17T22:22:09
felixgwu
[]
This PR adds [BERTScore](https://arxiv.org/abs/1904.09675) to metrics. Here is an example of how to use it. ```sh import nlp bertscore = nlp.load_metric('metrics/bertscore') # or simply nlp.load_metric('bertscore') after this is added to huggingface's s3 bucket predictions = ['example', 'fruit'] references = [['this is an example.', 'this is one example.'], ['apple']] results = bertscore.compute(predictions, references, lang='en') print(results) ```
true
619,480,549
https://api.github.com/repos/huggingface/datasets/issues/145
https://github.com/huggingface/datasets/pull/145
145
[AWS Tests] Follow-up PR from #144
closed
0
2020-05-16T13:53:46
2020-05-16T13:54:23
2020-05-16T13:54:22
patrickvonplaten
[]
I forgot to add this line in PR #145 .
true
619,477,367
https://api.github.com/repos/huggingface/datasets/issues/144
https://github.com/huggingface/datasets/pull/144
144
[AWS tests] AWS test should not run for canonical datasets
closed
0
2020-05-16T13:39:30
2020-05-16T13:44:34
2020-05-16T13:44:33
patrickvonplaten
[]
AWS tests should in general not run for canonical datasets. Only local tests will run in this case. This way a PR is able to pass when adding a new dataset. This PR changes the logic to the following: 1) All datasets that are present in `nlp/datasets` are tested only locally. This way when one adds a canonical dataset, the PR includes their dataset in the tests. 2) All datasets that are only present on AWS, such as `webis/tl_dr` at the moment, are tested only on AWS. I think the testing structure might need a bigger refactoring and better documentation very soon. Merging for now to unblock new PRs @thomwolf @mariamabarham.
true
619,457,641
https://api.github.com/repos/huggingface/datasets/issues/143
https://github.com/huggingface/datasets/issues/143
143
ArrowTypeError in squad metrics
closed
1
2020-05-16T12:06:37
2020-05-22T13:38:52
2020-05-22T13:36:48
patil-suraj
[ "metric bug" ]
`squad_metric.compute` is giving following error ``` ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type ``` This is how my predictions and references look like ``` predictions[0] # {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'} ``` ``` references[0] # {'answers': [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'} ``` These are structured as per the `squad_metric.compute` help string.
false
619,450,068
https://api.github.com/repos/huggingface/datasets/issues/142
https://github.com/huggingface/datasets/pull/142
142
[WMT] Add all wmt
closed
0
2020-05-16T11:28:46
2020-05-17T12:18:21
2020-05-17T12:18:20
patrickvonplaten
[]
This PR adds all WMT dataset scripts. At the moment the script is **not** functional for the language pairs "cs-en", "ru-en", "hi-en" because apparently it takes up to a week to get the manual data for these datasets: see http://ufal.mff.cuni.cz/czeng. The datasets are fully functional though for the "big" language pairs "de-en" and "fr-en". Overall I think the scripts are very messy and might need a big refactoring at some point. For now I think they are good to merge (most dataset configs can be used). I will add "cs", "ru" and "hi" when the manual data is available.
true
619,447,090
https://api.github.com/repos/huggingface/datasets/issues/141
https://github.com/huggingface/datasets/pull/141
141
[Clean up] remove bogus folder
closed
2
2020-05-16T11:13:42
2020-05-16T13:24:27
2020-05-16T13:24:26
patrickvonplaten
[]
@mariamabarham - I think you accidentally placed it there.
true
619,443,613
https://api.github.com/repos/huggingface/datasets/issues/140
https://github.com/huggingface/datasets/pull/140
140
[Tests] run local tests as default
closed
2
2020-05-16T10:56:06
2020-05-16T13:21:44
2020-05-16T13:21:43
patrickvonplaten
[]
This PR also enables local tests by default. I think it's safer for now to enable both local and AWS tests for every commit. The problem currently is that when we do a PR to add a dataset, the dataset is not yet on AWS and therefore not tested on the PR itself. Thus the PR will always be green even if the datasets are not correct. This PR aims at fixing this. ## Suggestion on how to commit to the repo from now on: Now since the repo is "online", I think we should adopt a couple of best practices: 1) - No direct committing to the repo anymore. Every change should be opened in a PR and be well documented so that we can find it later. 2) - Every PR has to be reviewed by at least x people (I guess @thomwolf you should decide here) because we now have to be much more careful when doing changes to the API for backward compatibility, etc...
true
619,327,409
https://api.github.com/repos/huggingface/datasets/issues/139
https://github.com/huggingface/datasets/pull/139
139
Add GermEval 2014 NER dataset
closed
4
2020-05-15T23:42:09
2020-05-16T13:56:37
2020-05-16T13:56:22
stefan-it
[]
Hi, this PR adds the GermEval 2014 NER dataset 😃 > The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation [1] with the following properties: > - The data was sampled from German Wikipedia and News Corpora as a collection of citations. > - The dataset covers over 31,000 sentences corresponding to over 590,000 tokens. > - The NER annotation uses the NoSta-D guidelines, which extend the Tübingen Treebank guidelines, using four main NER categories with sub-structure, and annotating embeddings among NEs such as [ORG FC Kickers [LOC Darmstadt]]. Dataset will be downloaded from the [official GermEval 2014 website](https://sites.google.com/site/germeval2014ner/data). ## Dataset format Here's an example of the dataset format from the original dataset: ```tsv # http://de.wikipedia.org/wiki/Manfred_Korfmann [2009-10-17] 1 Aufgrund O O 2 seiner O O 3 Initiative O O 4 fand O O 5 2001/2002 O O 6 in O O 7 Stuttgart B-LOC O 8 , O O 9 Braunschweig B-LOC O 10 und O O 11 Bonn B-LOC O 12 eine O O 13 große O O 14 und O O 15 publizistisch O O 16 vielbeachtete O O 17 Troia-Ausstellung B-LOCpart O 18 statt O O 19 , O O 20 „ O O 21 Troia B-OTH B-LOC 22 - I-OTH O 23 Traum I-OTH O 24 und I-OTH O 25 Wirklichkeit I-OTH O 26 “ O O 27 . O O ``` The sentence is encoded as one token per line (tab separated columns. The first column contains either a `#`, which signals the source the sentence is cited from and the date it was retrieved, or the token number within the sentence. The second column contains the token. Column three and four contain the named entity (in IOB2 scheme). Outer spans are encoded in the third column, embedded/nested spans in the fourth column. ## Features I decided to keep most information from the dataset. That means the so called "source" information (where the sentences come from + date information) is also returned for each sentence in the feature vector. For each sentence in the dataset, one feature vector (`nlp.Features` definition) will be returned: | Feature | Example | Description | ---- | ---- | ----------------- | `id` | `0` | Number (id) of current sentence | `source` | `http://de.wikipedia.org/wiki/Manfred_Korfmann [2009-10-17]` | URL and retrieval date as string | `tokens` | `["Schwartau", "sagte", ":"]` | List of tokens (strings) for a sentence | `labels` | `["B-PER", "O", "O"]` | List of labels (outer span) | `nested-labels` | `["O", "O", "O"]` | List of labels for nested span ## Example The following command downloads the dataset from the official GermEval 2014 page and pre-processed it: ```bash python nlp-cli test datasets/germeval_14 --all_configs ``` It then outputs the number for training, development and testset. The training set consists of 24,000 sentences, the development set of 2,200 and the test of 5,100 sentences. Now it can be imported and used with `nlp`: ```python import nlp germeval = nlp.load_dataset("./datasets/germeval_14/germeval_14.py") assert len(germeval["train"]) == 24000 # Show first sentence of training set: germeval["train"][0] ```
true
619,225,191
https://api.github.com/repos/huggingface/datasets/issues/138
https://github.com/huggingface/datasets/issues/138
138
Consider renaming to nld
closed
13
2020-05-15T20:23:27
2022-09-16T05:18:22
2020-09-28T00:08:10
honnibal
[ "generic discussion" ]
Hey :) Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing. The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This means the package makes `nlp` a bad variable name everywhere in the codebase. I've always used `nlp` as the canonical variable name of spaCy's `Language` objects, and this is a convention that a lot of other code has followed (Stanza, flair, etc). And actually, your `transformers` library uses `nlp` as the name for its `Pipeline` instance in your readme. If you stick with the `nlp` name for this package, if anyone uses it then they should rewrite all of that code. If `nlp` is a bad choice of variable anywhere, it's a bad choice of variable everywhere --- because you shouldn't have to notice whether some other function uses a module when you're naming variables within a function. You want to have one convention that you can stick to everywhere. If people use your `nlp` package and continue to use the `nlp` variable name, they'll find themselves with confusing bugs. There will be many many bits of code cut-and-paste from tutorials that give confusing results when combined with the data loading from the `nlp` library. The problem will be especially bad for shadowed modules (people might reasonably have a module named `nlp.py` within their codebase) and notebooks, as people might run notebook cells for data loading out-of-order. I don't think it's an exaggeration to say that if your library becomes popular, we'll all be answering issues around this about once a week for the next few years. That seems pretty unideal, so I do hope you'll reconsider. I suggest `nld` as a better name. It more accurately represents what the package actually does. It's pretty unideal to have a package named `nlp` that doesn't do any processing, and contains data about natural language generation or other non-NLP tasks. The name is equally short, and is sort of a visual pun on `nlp`, since a d is a rotated p.
false
619,211,018
https://api.github.com/repos/huggingface/datasets/issues/136
https://github.com/huggingface/datasets/pull/136
136
Update README.md
closed
1
2020-05-15T20:01:07
2020-05-17T12:17:28
2020-05-17T12:17:28
renaud
[]
small typo
true
619,206,708
https://api.github.com/repos/huggingface/datasets/issues/135
https://github.com/huggingface/datasets/pull/135
135
Fix print statement in READ.md
closed
1
2020-05-15T19:52:23
2020-05-17T12:14:06
2020-05-17T12:14:05
codehunk628
[]
The print statement was outputting a generator object instead of printing the names of available datasets/metrics.
true
619,112,641
https://api.github.com/repos/huggingface/datasets/issues/134
https://github.com/huggingface/datasets/pull/134
134
Update README.md
closed
1
2020-05-15T16:56:14
2020-05-28T08:21:49
2020-05-28T08:21:49
pranv
[]
true
619,094,954
https://api.github.com/repos/huggingface/datasets/issues/133
https://github.com/huggingface/datasets/issues/133
133
[Question] Using/adding a local dataset
closed
5
2020-05-15T16:26:06
2020-07-23T16:44:09
2020-07-23T16:44:09
zphang
[]
Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets. It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this. A notebook/example script demonstrating this would be very helpful.
false
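Other entries in this log (the WNUT 17 and GermEval 2014 PRs, and the wikihow PR) already show the pattern this question asks about: pointing `nlp.load_dataset` at a local processing script. A minimal sketch, where the script path is a placeholder:

```python
import nlp

# Load a dataset from a local processing script rather than a hosted one;
# "./datasets/my_dataset/my_dataset.py" is a placeholder path.
dataset = nlp.load_dataset("./datasets/my_dataset/my_dataset.py")
print(dataset)
```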
619,077,851
https://api.github.com/repos/huggingface/datasets/issues/132
https://github.com/huggingface/datasets/issues/132
132
[Feature Request] Add the OpenWebText dataset
closed
2
2020-05-15T15:57:29
2020-10-07T14:22:48
2020-10-07T14:22:48
LysandreJik
[ "dataset request" ]
The OpenWebText dataset is an open clone of OpenAI's WebText dataset. It can be used to train ELECTRA as is specified in the [README](https://www.github.com/google-research/electra). More information and the download link are available [here](https://skylion007.github.io/OpenWebTextCorpus/).
false
619,073,731
https://api.github.com/repos/huggingface/datasets/issues/131
https://github.com/huggingface/datasets/issues/131
131
[Feature request] Add Toronto BookCorpus dataset
closed
2
2020-05-15T15:50:44
2020-06-28T21:27:31
2020-06-28T21:27:31
jarednielsen
[ "dataset request" ]
I know the copyright/distribution of this one is complex, but it would be great to have! That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT.
false
619,035,440
https://api.github.com/repos/huggingface/datasets/issues/130
https://github.com/huggingface/datasets/issues/130
130
Loading GLUE dataset loads CoLA by default
closed
3
2020-05-15T14:55:50
2020-05-27T22:08:15
2020-05-27T22:08:15
zphang
[ "dataset bug" ]
If I run: ```python dataset = nlp.load_dataset('glue') ``` The resultant dataset seems to be CoLA by default, without throwing any error. This is in contrast to calling: ```python metric = nlp.load_metric("glue") ``` which throws an error telling the user that they need to specify a task in GLUE. Should the same apply for loading datasets?
false
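For contrast with the behaviour reported above, passing the task name as the second argument selects a specific GLUE subset. A sketch where "mnli" is just an example task, and where the assumption that `load_metric` accepts the task name the same way is mine, not the issue's:

```python
import nlp

# Explicitly request a GLUE task instead of relying on a silent default.
mnli = nlp.load_dataset("glue", "mnli")
mnli_metric = nlp.load_metric("glue", "mnli")  # assumed to take the task name as well
```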
618,997,725
https://api.github.com/repos/huggingface/datasets/issues/129
https://github.com/huggingface/datasets/issues/129
129
[Feature request] Add Google Natural Question dataset
closed
7
2020-05-15T14:14:20
2020-07-23T13:21:29
2020-07-23T13:21:29
elyase
[ "dataset request" ]
Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD.
false
618,951,117
https://api.github.com/repos/huggingface/datasets/issues/128
https://github.com/huggingface/datasets/issues/128
128
Some error inside nlp.load_dataset()
closed
2
2020-05-15T13:01:29
2020-05-15T13:10:40
2020-05-15T13:10:40
polkaYK
[]
First of all, nice work! I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb) In simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')` I get an error, which is connected with some inner code, I think: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-d848d3a99b8c> in <module>() 1 # Downloading and loading a dataset 2 ----> 3 dataset = nlp.load_dataset('squad', split='validation[:10%]') 8 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 515 download_mode=download_mode, 516 ignore_verifications=ignore_verifications, --> 517 save_infos=save_infos, 518 ) 519 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs) 361 verify_infos = not save_infos and not ignore_verifications 362 self._download_and_prepare( --> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 364 ) 365 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 414 try: 415 # Prepare split will record examples associated to the split --> 416 self._prepare_split(split_generator, **prepare_split_kwargs) 417 except OSError: 418 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or "")) /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator) 585 fname = "{}-{}.arrow".format(self.name, split_generator.name) 586 fpath = os.path.join(self._cache_dir, fname) --> 587 examples_type = self.info.features.type 588 writer = ArrowWriter(data_type=examples_type, path=fpath, writer_batch_size=self._writer_batch_size) 589 /usr/local/lib/python3.6/dist-packages/nlp/features.py in type(self) 460 @property 461 def type(self): --> 462 return get_nested_type(self) 463 464 @classmethod /usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema) 370 # Nested structures: we allow dict, list/tuples, sequences 371 if isinstance(schema, dict): --> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()}) 373 elif isinstance(schema, (list, tuple)): 374 assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type" /usr/local/lib/python3.6/dist-packages/nlp/features.py in <dictcomp>(.0) 370 # Nested structures: we allow dict, list/tuples, sequences 371 if isinstance(schema, dict): --> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()}) 373 elif isinstance(schema, (list, tuple)): 374 assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type" /usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema) 379 # We allow to reverse list of dict => dict of list for compatiblity with tfds 380 if isinstance(inner_type, pa.StructType): --> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type)) 382 return pa.list_(inner_type, schema.length) 383 /usr/local/lib/python3.6/dist-packages/nlp/features.py in <genexpr>(.0) 379 # We allow to reverse list of dict => dict of 
list for compatiblity with tfds 380 if isinstance(inner_type, pa.StructType): --> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type)) 382 return pa.list_(inner_type, schema.length) 383 TypeError: list_() takes exactly one argument (2 given) ```
false
618,909,042
https://api.github.com/repos/huggingface/datasets/issues/127
https://github.com/huggingface/datasets/pull/127
127
Update Overview.ipynb
closed
0
2020-05-15T11:46:48
2020-05-15T11:47:27
2020-05-15T11:47:25
patrickvonplaten
[]
update notebook
true
618,897,499
https://api.github.com/repos/huggingface/datasets/issues/126
https://github.com/huggingface/datasets/pull/126
126
remove webis
closed
0
2020-05-15T11:25:20
2020-05-15T11:31:24
2020-05-15T11:30:26
patrickvonplaten
[]
Remove webis from dataset folder. Our first dataset script that only lives on AWS :-) https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/datasets/webis/tl_dr/?region=us-east-1 @julien-c @jplu
true
618,869,048
https://api.github.com/repos/huggingface/datasets/issues/125
https://github.com/huggingface/datasets/pull/125
125
[Newsroom] add newsroom
closed
0
2020-05-15T10:34:34
2020-05-15T10:37:07
2020-05-15T10:37:02
patrickvonplaten
[]
I checked it with the data link of the mail you forwarded @thomwolf => works well!
true
618,864,284
https://api.github.com/repos/huggingface/datasets/issues/124
https://github.com/huggingface/datasets/pull/124
124
Xsum, require manual download of some files
closed
0
2020-05-15T10:26:13
2020-05-15T11:04:48
2020-05-15T11:04:46
mariamabarham
[]
true
618,820,140
https://api.github.com/repos/huggingface/datasets/issues/123
https://github.com/huggingface/datasets/pull/123
123
[Tests] Local => aws
closed
3
2020-05-15T09:12:25
2020-05-15T10:06:12
2020-05-15T10:03:26
patrickvonplaten
[]
## Change default Test from local => aws As a default we set` aws=True`, `Local=False`, `slow=False` ### 1. RUN_AWS=1 (default) This runs 4 tests per dataset script. a) Does the dataset script have a valid etag / Can it be reached on AWS? b) Can we load its `builder_class`? c) Can we load **all** dataset configs? d) _Most importantly_: Can we load the dataset? Important - we currently only test the first config of each dataset to reduce test time. Total test time is around 1min20s. ### 2. RUN_LOCAL=1 RUN_AWS=0 ***This should be done when debugging dataset scripts of the ./datasets folder*** This only runs 1 test per dataset test, which is equivalent to aws d) - Can we load the dataset from the local `datasets` directory? ### 3. RUN_SLOW=1 We should set up to run these tests maybe 1 time per week ? @thomwolf The `slow` tests include two more important tests. e) Can we load the dataset with all possible configs? This test will probably fail at the moment because a lot of dummy data is missing. We should add the dummy data step by step to be sure that all configs work. f) Test that the actual dataset can be loaded. This will take quite some time to run, but is important to make sure that the "real" data can be loaded. It will also test whether the dataset script has the correct checksums file which is currently not tested with `aws=True`. @lhoestq - is there an easy way to check cheaply whether the `dataset_info.json` is correct for each dataset script?
true
618,813,182
https://api.github.com/repos/huggingface/datasets/issues/122
https://github.com/huggingface/datasets/pull/122
122
Final cleanup of readme and metrics
closed
0
2020-05-15T09:00:52
2021-09-03T19:40:09
2020-05-15T09:02:22
thomwolf
[]
true
618,790,040
https://api.github.com/repos/huggingface/datasets/issues/121
https://github.com/huggingface/datasets/pull/121
121
make style
closed
0
2020-05-15T08:23:36
2020-05-15T08:25:39
2020-05-15T08:25:38
patrickvonplaten
[]
true
618,737,783
https://api.github.com/repos/huggingface/datasets/issues/120
https://github.com/huggingface/datasets/issues/120
120
🐛 `map` not working
closed
1
2020-05-15T06:43:08
2020-05-15T07:02:38
2020-05-15T07:02:38
astariul
[]
I'm trying to run a basic example (mapping function to add a prefix). [Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing) ```python import nlp dataset = nlp.load_dataset('squad', split='validation[:10%]') def test(sample): sample['title'] = "test prefix @@@ " + sample["title"] return sample print(dataset[0]['title']) dataset.map(test) print(dataset[0]['title']) ``` Output : > Super_Bowl_50 Super_Bowl_50 Expected output : > Super_Bowl_50 test prefix @@@ Super_Bowl_50
false
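A plausible reading of the behaviour above (an assumption based on how `map` works in later releases of the library, not on the issue's recorded resolution) is that `map` returns a new dataset instead of mutating the original, so the result has to be reassigned. A sketch:

```python
import nlp

dataset = nlp.load_dataset("squad", split="validation[:10%]")

def add_prefix(sample):
    sample["title"] = "test prefix @@@ " + sample["title"]
    return sample

# map is not in-place: capture the returned dataset.
dataset = dataset.map(add_prefix)
print(dataset[0]["title"])  # expected: "test prefix @@@ Super_Bowl_50"
```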
618,652,145
https://api.github.com/repos/huggingface/datasets/issues/119
https://github.com/huggingface/datasets/issues/119
119
🐛 Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
closed
2
2020-05-15T02:27:26
2020-05-15T05:11:22
2020-05-15T02:45:28
astariul
[]
I'm trying to load CNN/DM dataset on Colab. [Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing) But I meet this error : > AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
false
618,643,088
https://api.github.com/repos/huggingface/datasets/issues/118
https://github.com/huggingface/datasets/issues/118
118
❓ How to apply a map to all subsets ?
closed
1
2020-05-15T01:58:52
2020-05-15T07:05:49
2020-05-15T07:04:25
astariul
[]
I'm working with CNN/DM dataset, where I have 3 subsets : `train`, `test`, `validation`. Should I apply my map function on the subsets one by one ? ```python import nlp cnn_dm = nlp.load_dataset('cnn_dailymail') for corpus in ['train', 'test', 'validation']: cnn_dm[corpus] = cnn_dm[corpus].map(my_func) ``` Or is there a better way to do this ?
false
618,632,573
https://api.github.com/repos/huggingface/datasets/issues/117
https://github.com/huggingface/datasets/issues/117
117
❓ How to remove specific rows of a dataset ?
closed
4
2020-05-15T01:25:06
2022-07-15T08:36:44
2020-05-15T07:04:32
astariul
[]
I saw on the [example notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb#scrollTo=efFhDWhlvSVC) how to remove a specific column : ```python dataset.drop('id') ``` But I didn't find how to remove a specific row. **For example, how can I remove all sample with `id` < 10 ?**
false
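Later versions of the library expose a `filter` method for exactly this; whether it was available when the question was asked is not clear from this log, so the following is a hedged sketch rather than the answer given in the issue:

```python
import nlp

dataset = nlp.load_dataset("squad", split="validation[:10%]")

# Keep only the rows for which the predicate returns True,
# i.e. drop every sample whose title is "Super_Bowl_50".
filtered = dataset.filter(lambda sample: sample["title"] != "Super_Bowl_50")
print(len(dataset), len(filtered))
```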
618,628,264
https://api.github.com/repos/huggingface/datasets/issues/116
https://github.com/huggingface/datasets/issues/116
116
🐛 Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323
closed
5
2020-05-15T01:12:06
2020-05-28T23:43:07
2020-05-28T23:43:07
astariul
[ "metric bug" ]
I'm trying to use rouge metric. I have to files : `test.pred.tokenized` and `test.gold.tokenized` with each line containing a sentence. I tried : ```python import nlp rouge = nlp.load_metric('rouge') with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g: for lp, lg in zip(p, g): rouge.add(lp, lg) ``` But I meet following error : > pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 --- Full stack-trace : ``` Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/metric.py", line 224, in add self.writer.write_batch(batch) File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/arrow_writer.py", line 148, in write_batch pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema) File "pyarrow/table.pxi", line 1550, in pyarrow.lib.Table.from_pydict File "pyarrow/table.pxi", line 1503, in pyarrow.lib.Table.from_arrays File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 ``` (`nlp` installed from source)
false
618,615,855
https://api.github.com/repos/huggingface/datasets/issues/115
https://github.com/huggingface/datasets/issues/115
115
AttributeError: 'dict' object has no attribute 'info'
closed
2
2020-05-15T00:29:47
2020-05-17T13:11:00
2020-05-17T13:11:00
astariul
[]
I'm trying to access the information of CNN/DM dataset : ```python cnn_dm = nlp.load_dataset('cnn_dailymail') print(cnn_dm.info) ``` returns : > AttributeError: 'dict' object has no attribute 'info'
false
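The error is consistent with `load_dataset` returning a plain mapping of split name to `Dataset` when no `split` argument is given, in which case `info` lives on each split. A sketch of that reading (an assumption, not the reply recorded in the issue):

```python
import nlp

cnn_dm = nlp.load_dataset("cnn_dailymail")

# Without `split=...` the result maps split names to Dataset objects,
# so the metadata is read from an individual split.
print(cnn_dm["train"].info)
```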
618,611,310
https://api.github.com/repos/huggingface/datasets/issues/114
https://github.com/huggingface/datasets/issues/114
114
Couldn't reach CNN/DM dataset
closed
1
2020-05-15T00:16:17
2020-05-15T00:19:52
2020-05-15T00:19:51
astariul
[]
I can't get CNN / DailyMail dataset. ```python import nlp assert "cnn_dailymail" in [dataset.id for dataset in nlp.list_datasets()] cnn_dm = nlp.load_dataset('cnn_dailymail') ``` [Colab notebook](https://colab.research.google.com/drive/1zQ3bYAVzm1h0mw0yWPqKAg_4EUlSx5Ex?usp=sharing) gives following error : ``` ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/cnn_dailymail/cnn_dailymail.py ```
false
618,590,562
https://api.github.com/repos/huggingface/datasets/issues/113
https://github.com/huggingface/datasets/pull/113
113
Adding docstrings and some doc
closed
0
2020-05-14T23:14:41
2020-05-14T23:22:45
2020-05-14T23:22:44
thomwolf
[]
Some doc
true
618,569,195
https://api.github.com/repos/huggingface/datasets/issues/112
https://github.com/huggingface/datasets/pull/112
112
Qa4mre - add dataset
closed
0
2020-05-14T22:17:51
2020-05-15T09:16:43
2020-05-15T09:16:42
patrickvonplaten
[]
Added dummy data test only for the first config. Will do the rest later. I had to do add some minor hacks to an important function to make it work. There might be a cleaner way to handle it - can you take a look @thomwolf ?
true
618,528,060
https://api.github.com/repos/huggingface/datasets/issues/111
https://github.com/huggingface/datasets/pull/111
111
[Clean-up] remove under construction datastes
closed
0
2020-05-14T20:52:13
2020-05-14T20:52:23
2020-05-14T20:52:22
patrickvonplaten
[]
true
618,520,325
https://api.github.com/repos/huggingface/datasets/issues/110
https://github.com/huggingface/datasets/pull/110
110
fix reddit tifu dummy data
closed
0
2020-05-14T20:37:37
2020-05-14T20:40:14
2020-05-14T20:40:13
patrickvonplaten
[]
true
618,508,359
https://api.github.com/repos/huggingface/datasets/issues/109
https://github.com/huggingface/datasets/pull/109
109
[Reclor] fix reclor
closed
0
2020-05-14T20:16:26
2020-05-14T20:19:09
2020-05-14T20:19:08
patrickvonplaten
[]
- That's probably on me. Could have made the manual data test more flexible. @mariamabarham
true
618,386,394
https://api.github.com/repos/huggingface/datasets/issues/108
https://github.com/huggingface/datasets/pull/108
108
convert can use manual dir as second argument
closed
0
2020-05-14T16:52:32
2020-05-14T16:52:43
2020-05-14T16:52:42
patrickvonplaten
[]
@mariamabarham
true
618,373,045
https://api.github.com/repos/huggingface/datasets/issues/107
https://github.com/huggingface/datasets/pull/107
107
add writer_batch_size to GeneratorBasedBuilder
closed
1
2020-05-14T16:35:39
2020-05-14T16:50:30
2020-05-14T16:50:29
lhoestq
[]
You can now specify `writer_batch_size` in the builder arguments or directly in `load_dataset`
true
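As the PR body states, the writer batch size can be passed through `load_dataset`; a minimal sketch where the dataset name and the value 1000 are arbitrary choices of mine:

```python
import nlp

# Forward writer_batch_size to the builder so fewer examples are buffered
# per Arrow write, trading some speed for lower memory usage.
dataset = nlp.load_dataset("squad", writer_batch_size=1000)
```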
618,361,418
https://api.github.com/repos/huggingface/datasets/issues/106
https://github.com/huggingface/datasets/pull/106
106
Add data dir test command
closed
1
2020-05-14T16:18:39
2020-05-14T16:49:11
2020-05-14T16:49:10
lhoestq
[]
true
618,345,191
https://api.github.com/repos/huggingface/datasets/issues/105
https://github.com/huggingface/datasets/pull/105
105
[New structure on AWS] Adapt paths
closed
0
2020-05-14T15:55:57
2020-05-14T15:56:28
2020-05-14T15:56:27
patrickvonplaten
[]
Some small changes so that we have the correct paths. @julien-c
true
618,277,081
https://api.github.com/repos/huggingface/datasets/issues/104
https://github.com/huggingface/datasets/pull/104
104
Add trivia_q
closed
0
2020-05-14T14:27:19
2020-07-12T05:34:20
2020-05-14T20:23:32
patrickvonplaten
[]
Currently tested only for one config to pass tests. Needs to add more dummy data later.
true
618,233,637
https://api.github.com/repos/huggingface/datasets/issues/103
https://github.com/huggingface/datasets/pull/103
103
[Manual downloads] add logic proposal for manual downloads and add wikihow
closed
3
2020-05-14T13:30:36
2020-05-14T14:27:41
2020-05-14T14:27:40
patrickvonplaten
[]
Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset. The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`. The dataset can then be loaded via: ```python import nlp nlp.load_dataset("wikihow", data_dir="~/wikihow/manual_dir") ``` I added/changed so that there are explicit error messages when using manually downloaded files.
true
618,231,216
https://api.github.com/repos/huggingface/datasets/issues/102
https://github.com/huggingface/datasets/pull/102
102
Run save infos
closed
2
2020-05-14T13:27:26
2020-05-14T15:43:04
2020-05-14T15:43:03
lhoestq
[]
I replaced the old checksum file with the new `dataset_infos.json` by running the script on almost all the datasets we have. The only one that is still running on my side is the cornell dialog
true
618,111,651
https://api.github.com/repos/huggingface/datasets/issues/101
https://github.com/huggingface/datasets/pull/101
101
[Reddit] add reddit
closed
0
2020-05-14T10:25:02
2020-05-14T10:27:25
2020-05-14T10:27:24
patrickvonplaten
[]
- Everything worked fine @mariamabarham. Made my computer nearly crash, but all seems to be working :-)
true
618,081,602
https://api.github.com/repos/huggingface/datasets/issues/100
https://github.com/huggingface/datasets/pull/100
100
Add per type scores in seqeval metric
closed
4
2020-05-14T09:37:52
2020-05-14T23:21:35
2020-05-14T23:21:34
jplu
[]
This PR add a bit more detail in the seqeval metric. Now the usage and output are: ```python import nlp met = nlp.load_metric('metrics/seqeval') references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] met.compute(predictions, references) #Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8} ``` It is also possible to compute scores for non IOB notations, POS tagging for example hasn't this kind of notation. Add `suffix` parameter: ```python import nlp met = nlp.load_metric('metrics/seqeval') references = [['O', 'O', 'O', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']] predictions = [['O', 'O', 'MISC', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']] met.compute(predictions, references, metrics_kwargs={"suffix": True}) #Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.9} ```
true
618,026,700
https://api.github.com/repos/huggingface/datasets/issues/99
https://github.com/huggingface/datasets/pull/99
99
[Cmrc 2018] fix cmrc2018
closed
0
2020-05-14T08:22:03
2020-05-14T08:49:42
2020-05-14T08:49:41
patrickvonplaten
[]
true
617,957,739
https://api.github.com/repos/huggingface/datasets/issues/98
https://github.com/huggingface/datasets/pull/98
98
Webis tl-dr
closed
12
2020-05-14T06:22:18
2020-09-03T10:00:21
2020-05-14T20:54:16
jplu
[]
Add the Webis TL;DR dataset.
true
617,809,431
https://api.github.com/repos/huggingface/datasets/issues/97
https://github.com/huggingface/datasets/pull/97
97
[Csv] add tests for csv dataset script
closed
1
2020-05-13T23:06:11
2020-05-13T23:23:16
2020-05-13T23:23:15
patrickvonplaten
[]
Adds dummy data tests for csv.
true
617,739,521
https://api.github.com/repos/huggingface/datasets/issues/96
https://github.com/huggingface/datasets/pull/96
96
lm1b
closed
1
2020-05-13T20:38:44
2020-05-14T14:13:30
2020-05-14T14:13:29
jplu
[]
Add lm1b dataset.
true
617,703,037
https://api.github.com/repos/huggingface/datasets/issues/95
https://github.com/huggingface/datasets/pull/95
95
Replace checksums files by Dataset infos json
closed
2
2020-05-13T19:36:16
2020-05-14T08:58:43
2020-05-14T08:58:42
lhoestq
[]
### Better verifications when loading a dataset I replaced the `urls_checksums` directory that used to contain `checksums.txt` and `cached_sizes.txt`, by a single file `dataset_infos.json`. It's just a dict `config_name` -> `DatasetInfo`. It simplifies and improves how verifications of checksums and splits sizes are done, as they're all stored in `DatasetInfo` (one per config). Also, having already access to `DatasetInfo` enables to check disk space before running `download_and_prepare` for a given config. The dataset infos json file is user readable, you can take a look at the squad one that I generated in this PR. ### Renaming According to these changes, I did some renaming: `save_checksums` -> `save_infos` `ignore_checksums` -> `ignore_verifications` for example, when you are creating a dataset you have to run ```nlp-cli test path/to/my/dataset --save_infos --all_configs``` instead of ```nlp-cli test path/to/my/dataset --save_checksums --all_configs``` ### And now, the fun part We'll have to rerun the `nlp-cli test ... --save_infos --all_configs` for all the datasets ----------------- feedback appreciated !
true
617,571,340
https://api.github.com/repos/huggingface/datasets/issues/94
https://github.com/huggingface/datasets/pull/94
94
Librispeech
closed
1
2020-05-13T16:04:14
2020-05-13T21:29:03
2020-05-13T21:29:02
jplu
[]
Add librispeech dataset and remove some useless content.
true
617,522,029
https://api.github.com/repos/huggingface/datasets/issues/93
https://github.com/huggingface/datasets/pull/93
93
Cleanup notebooks and various fixes
closed
0
2020-05-13T14:58:58
2020-05-13T15:01:48
2020-05-13T15:01:47
thomwolf
[]
Fixes on dataset (more flexible) metrics (fix) and general clean ups
true
617,341,505
https://api.github.com/repos/huggingface/datasets/issues/92
https://github.com/huggingface/datasets/pull/92
92
[WIP] add wmt14
closed
0
2020-05-13T10:42:03
2020-05-16T11:17:38
2020-05-16T11:17:37
patrickvonplaten
[]
WMT14 takes forever to download :-/ - WMT is the first dataset that uses an abstract class IMO, so I had to modify the `load_dataset_module` a bit.
true
617,339,484
https://api.github.com/repos/huggingface/datasets/issues/91
https://github.com/huggingface/datasets/pull/91
91
[Paracrawl] add paracrawl
closed
0
2020-05-13T10:39:00
2020-05-13T10:40:15
2020-05-13T10:40:14
patrickvonplaten
[]
- Huge dataset - took ~1h to download - Also this PR reformats all dataset scripts and adds `datasets` to `make style`
true
617,311,877
https://api.github.com/repos/huggingface/datasets/issues/90
https://github.com/huggingface/datasets/pull/90
90
Add download gg drive
closed
2
2020-05-13T09:56:02
2020-05-13T12:46:28
2020-05-13T10:05:31
lhoestq
[]
We can now add datasets that download from google drive
true
617,295,069
https://api.github.com/repos/huggingface/datasets/issues/89
https://github.com/huggingface/datasets/pull/89
89
Add list and inspect methods - cleanup hf_api
closed
0
2020-05-13T09:30:15
2020-05-13T14:05:00
2020-05-13T09:33:10
thomwolf
[]
Add a bunch of methods to easily list and inspect the processing scripts up-loaded on S3: ```python nlp.list_datasets() nlp.list_metrics() # Copy and prepare the scripts at `local_path` for easy inspection/modification. nlp.inspect_dataset(path, local_path) # Copy and prepare the scripts at `local_path` for easy inspection/modification. nlp.inspect_metric(path, local_path) ``` Also clean up the `HfAPI` to use `dataclasses` for better user-experience
true
617,284,664
https://api.github.com/repos/huggingface/datasets/issues/88
https://github.com/huggingface/datasets/pull/88
88
Add wiki40b
closed
1
2020-05-13T09:16:01
2020-05-13T12:31:55
2020-05-13T12:31:54
lhoestq
[]
This one is a beam dataset that downloads files using tensorflow. I tested it on a small config and it works fine
true
617,267,118
https://api.github.com/repos/huggingface/datasets/issues/87
https://github.com/huggingface/datasets/pull/87
87
Add Flores
closed
0
2020-05-13T08:51:29
2020-05-13T09:23:34
2020-05-13T09:23:33
patrickvonplaten
[]
Beautiful language for sure!
true
617,260,972
https://api.github.com/repos/huggingface/datasets/issues/86
https://github.com/huggingface/datasets/pull/86
86
[Load => load_dataset] change naming
closed
0
2020-05-13T08:43:00
2020-05-13T08:50:58
2020-05-13T08:50:57
patrickvonplaten
[]
Rename leftovers @thomwolf
true
617,253,428
https://api.github.com/repos/huggingface/datasets/issues/85
https://github.com/huggingface/datasets/pull/85
85
Add boolq
closed
1
2020-05-13T08:32:27
2020-05-13T09:09:39
2020-05-13T09:09:38
lhoestq
[]
I just added the dummy data for this dataset. This one uses `tf.io.gfile.copy` to download the data, but I added support for custom downloads in the mock_download_manager. I also had to add a `tensorflow` dependency for tests.
true
617,249,815
https://api.github.com/repos/huggingface/datasets/issues/84
https://github.com/huggingface/datasets/pull/84
84
[TedHrLr] add left dummy data
closed
0
2020-05-13T08:27:20
2020-05-13T08:29:22
2020-05-13T08:29:21
patrickvonplaten
[]
true
616,863,601
https://api.github.com/repos/huggingface/datasets/issues/83
https://github.com/huggingface/datasets/pull/83
83
New datasets
closed
0
2020-05-12T18:22:27
2020-05-12T18:22:47
2020-05-12T18:22:45
mariamabarham
[]
true
616,805,194
https://api.github.com/repos/huggingface/datasets/issues/82
https://github.com/huggingface/datasets/pull/82
82
[Datasets] add ted_hrlr
closed
0
2020-05-12T16:46:50
2020-05-13T07:52:54
2020-05-13T07:52:53
patrickvonplaten
[]
@thomwolf - After looking at `xnli` I think it's better to leave the translation features and add a `translation` key to make them work in our framework. The result looks like this: ![Screenshot from 2020-05-12 18-34-43](https://user-images.githubusercontent.com/23423619/81721933-ee1faf00-9480-11ea-9e95-d6557cbd0ce0.png) you can see that each split has a `translation` key which value is the nlp.features.Translation object. That's a simple change. If it's ok for you, I will add dummy data for the other configs and treat the other translation scripts in the same way.
true
616,793,010
https://api.github.com/repos/huggingface/datasets/issues/81
https://github.com/huggingface/datasets/pull/81
81
add tests
closed
0
2020-05-12T16:28:19
2020-05-13T07:43:57
2020-05-13T07:43:56
lhoestq
[]
Tests for py_utils functions and for the BaseReader used to read from arrow and parquet. I also removed unused utils functions.
true
616,786,803
https://api.github.com/repos/huggingface/datasets/issues/80
https://github.com/huggingface/datasets/pull/80
80
Add nbytes + nexamples check
closed
1
2020-05-12T16:18:43
2020-05-13T07:52:34
2020-05-13T07:52:33
lhoestq
[]
### Save size and number of examples Now when you do `save_checksums`, it also create `cached_sizes.txt` right next to the checksum file. This new file stores the bytes sizes and the number of examples of each split that has been prepared and stored in the cache. Example: ``` # Cached sizes: <full_config_name> <num_bytes> <num_examples> hansards/house/1.0.0/test 22906629 122290 hansards/house/1.0.0/train 191459584 947969 hansards/senate/1.0.0/test 5711686 25553 hansards/senate/1.0.0/train 40324278 182135 ``` ### Check processing output If there is a `caches_sizes.txt`, then each time we run `download_and_prepare` it will make sure that the sizes match. You can set `ignore_checksums=True` if you don't want that to happen. ### Fill Dataset Info All the split infos and the checksums are now stored correctly in DatasetInfo after `download_and_prepare` ### Check space on disk before running `download_and_prepare` Check if the space is lower than the sum of the sizes of the files in `checksums.txt` and `cached_files.txt`. This is not ideal though as it considers the files for all configs. TODO: A better way to do it would be to have save the `DatasetInfo` instead of the `checksums.txt` and `cached_sizes.txt`, in order to have one file per dataset config (and therefore consider only the sizes of the files for one config and not all of them). It can also be the occasion to factorize all the `download_and_prepare` verifications. Maybe next PR ?
true
616,785,613
https://api.github.com/repos/huggingface/datasets/issues/79
https://github.com/huggingface/datasets/pull/79
79
[Convert] add new pattern
closed
0
2020-05-12T16:16:51
2020-05-12T16:17:10
2020-05-12T16:17:09
patrickvonplaten
[]
true
616,774,275
https://api.github.com/repos/huggingface/datasets/issues/78
https://github.com/huggingface/datasets/pull/78
78
[Tests] skip beam dataset tests for now
closed
2
2020-05-12T16:00:58
2020-05-12T16:16:24
2020-05-12T16:16:22
patrickvonplaten
[]
For now we will skip tests for Beam Datasets
true
616,674,601
https://api.github.com/repos/huggingface/datasets/issues/77
https://github.com/huggingface/datasets/pull/77
77
New datasets
closed
0
2020-05-12T13:51:59
2020-05-12T14:02:16
2020-05-12T14:02:15
mariamabarham
[]
true
616,579,228
https://api.github.com/repos/huggingface/datasets/issues/76
https://github.com/huggingface/datasets/pull/76
76
pin flake 8
closed
0
2020-05-12T11:25:29
2020-05-12T11:27:35
2020-05-12T11:27:34
patrickvonplaten
[]
Flake 8's new version does not like our format. Pinning the version for now.
true
616,520,163
https://api.github.com/repos/huggingface/datasets/issues/75
https://github.com/huggingface/datasets/pull/75
75
WIP adding metrics
closed
1
2020-05-12T09:52:00
2020-05-13T07:44:12
2020-05-13T07:44:10
thomwolf
[]
Adding the following metrics as identified by @mariamabarham: 1. BLEU: BiLingual Evaluation Understudy: https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/bleu.py (multilingual) 2. GLEU: Google-BLEU: https://github.com/cnap/gec-ranking/blob/master/scripts/compute_gleu 3. Sacrebleu: https://pypi.org/project/sacrebleu/1.4.8/ (pypi package), https://github.com/mjpost/sacrebleu (github implementation) 4. ROUGE: Recall-Oriented Understudy for Gisting Evaluation: https://github.com/google-research/google-research/tree/master/rouge, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/rouge.py (multilingual) 5. Seqeval: https://github.com/chakki-works/seqeval (github implementation), https://pypi.org/project/seqeval/0.0.12/ (pypi package) 6. Coval: coreference evaluation package for the CoNLL and ARRAU datasets https://github.com/ns-moosavi/coval 7. SQuAD v1 evaluation script 8. SQuAD V2 evaluation script: https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/ 9. GLUE 10. XNLI Not now: 1. Perplexity: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/perplexity.py 2. Spearman: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/spearman_correlation.py 3. F1_measure: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/f1_measure.py 4. Pearson_corelation: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/pearson_correlation.py 5. AUC: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/auc.py 6. Entropy: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/entropy.py
true
616,511,101
https://api.github.com/repos/huggingface/datasets/issues/74
https://github.com/huggingface/datasets/pull/74
74
fix overflow check
closed
0
2020-05-12T09:38:01
2020-05-12T10:04:39
2020-05-12T10:04:38
lhoestq
[]
I did some tests and unfortunately the test ``` pa_array.nbytes > MAX_BATCH_BYTES ``` doesn't work. Indeed for a StructArray, `nbytes` can be less than 2GB even if there is an overflow (it loops...). I don't think we can do a proper overflow test for the limit of 2GB... For now I replaced it with a sanity check on the first element.
true
616,417,845
https://api.github.com/repos/huggingface/datasets/issues/73
https://github.com/huggingface/datasets/pull/73
73
JSON script
closed
5
2020-05-12T07:11:22
2020-05-18T06:50:37
2020-05-18T06:50:36
jplu
[]
Add a JSON script to read JSON datasets from files.
true
616,225,010
https://api.github.com/repos/huggingface/datasets/issues/72
https://github.com/huggingface/datasets/pull/72
72
[README dummy data tests] README to better understand how the dummy data structure works
closed
0
2020-05-11T22:19:03
2020-05-11T22:26:03
2020-05-11T22:26:01
patrickvonplaten
[]
In this PR a README.md is added to tests to shine more light on how the dummy data structure works. I try to explain the different possible cases. IMO the best way to understand the logic is to checkout the dummy data structure of the different datasets I mention in the README.md since those are the "edge cases". @mariamabarham @thomwolf @lhoestq @jplu - I'd be happy to checkout the dummy data structure and get some feedback on possible improvements.
true
615,942,180
https://api.github.com/repos/huggingface/datasets/issues/71
https://github.com/huggingface/datasets/pull/71
71
Fix arrow writer for big datasets using writer_batch_size
closed
1
2020-05-11T14:45:36
2020-05-11T20:09:47
2020-05-11T20:00:38
lhoestq
[]
This PR fixes Yacine's bug. According to [this](https://github.com/apache/arrow/blob/master/docs/source/cpp/arrays.rst#size-limitations-and-recommendations), it is not recommended to have pyarrow arrays bigger than 2GB. Therefore I set a default batch size of 100,000 examples per batch. In general it shouldn't exceed 2GB. If it does, I reduce the batch_size on the fly, and I notify the user with a warning.
true
615,679,102
https://api.github.com/repos/huggingface/datasets/issues/70
https://github.com/huggingface/datasets/pull/70
70
adding RACE, QASC, Super_glue and Tiny_shakespear datasets
closed
1
2020-05-11T08:07:49
2020-05-12T13:21:52
2020-05-12T13:21:51
mariamabarham
[]
true
615,450,534
https://api.github.com/repos/huggingface/datasets/issues/69
https://github.com/huggingface/datasets/pull/69
69
fix cache dir in builder tests
closed
2
2020-05-10T18:39:21
2020-05-11T07:19:30
2020-05-11T07:19:28
lhoestq
[]
minor fix
true
614,882,655
https://api.github.com/repos/huggingface/datasets/issues/68
https://github.com/huggingface/datasets/pull/68
68
[CSV] re-add csv
closed
0
2020-05-08T17:38:29
2020-05-08T17:40:48
2020-05-08T17:40:46
patrickvonplaten
[]
Re-adding csv under the datasets under construction to keep circle ci happy - will have to see how to include it in the tests. @lhoestq noticed that I accidently deleted it in https://github.com/huggingface/nlp/pull/63#discussion_r422263729.
true
614,798,483
https://api.github.com/repos/huggingface/datasets/issues/67
https://github.com/huggingface/datasets/pull/67
67
[Tests] Test files locally
closed
1
2020-05-08T15:02:43
2020-05-08T19:50:47
2020-05-08T15:17:00
patrickvonplaten
[]
This PR adds a `aws` and a `local` decorator to the tests so that tests now run on the local datasets. By default, the `aws` is deactivated and `local` is activated and `slow` is deactivated, so that only 1 test per dataset runs on circle ci. **When local is activated all folders in `./datasets` are tested.** **Important** When adding a dataset, we should no longer upload it to AWS. The steps are: 1. Open a PR 2. Add a dataset as described in `datasets/README.md` 3. If all tests pass, push to master Currently we have 49 functional datasets in our code base. We have 6 datasets "under-construction" that don't pass the tests - so I put them in a folder "datasets_under_construction" - it would be nice to open a PR to fix them and put them in the `datasets` folder. **Important** when running tests locally, the datasets are cached so to rerun them delete your local cache via: `rm -r ~/.cache/huggingface/datasets/*` @thomwolf @mariamabarham @lhoestq
true
614,748,552
https://api.github.com/repos/huggingface/datasets/issues/66
https://github.com/huggingface/datasets/pull/66
66
[Datasets] ReadME
closed
0
2020-05-08T13:37:43
2020-05-08T13:39:23
2020-05-08T13:39:22
patrickvonplaten
[]
true
614,746,516
https://api.github.com/repos/huggingface/datasets/issues/65
https://github.com/huggingface/datasets/pull/65
65
fix math dataset and xcopa
closed
0
2020-05-08T13:33:55
2020-05-08T13:35:41
2020-05-08T13:35:40
patrickvonplaten
[]
- fixes math dataset and xcopa, uploaded both of them to S3
true
614,737,057
https://api.github.com/repos/huggingface/datasets/issues/64
https://github.com/huggingface/datasets/pull/64
64
[Datasets] Make master ready for datasets adding
closed
0
2020-05-08T13:17:00
2020-05-08T13:17:31
2020-05-08T13:17:30
patrickvonplaten
[]
Add all relevant files so that datasets can now be added on master
true
614,666,365
https://api.github.com/repos/huggingface/datasets/issues/63
https://github.com/huggingface/datasets/pull/63
63
[Dataset scripts] add all datasets scripts
closed
0
2020-05-08T10:50:15
2020-05-08T17:39:22
2020-05-08T11:34:00
patrickvonplaten
[]
As mentioned, we can have the canonical datasets in the master. For now I also want to include all the data as present on S3 to make the synchronization easier when uploading new datasets. @mariamabarham @lhoestq @thomwolf - what do you think? If this is ok for you, I can sync up the master with the `add_dataset` branch: https://github.com/huggingface/nlp/pull/37 so that master is up to date.
true
614,630,830
https://api.github.com/repos/huggingface/datasets/issues/62
https://github.com/huggingface/datasets/pull/62
62
[Cached Path] Better error message
closed
0
2020-05-08T09:39:47
2020-05-08T09:45:47
2020-05-08T09:45:47
patrickvonplaten
[]
IMO returning `None` in this function only leads to confusion and is never helpful.
true
614,607,474
https://api.github.com/repos/huggingface/datasets/issues/61
https://github.com/huggingface/datasets/pull/61
61
[Load] rename setup_module to prepare_module
closed
0
2020-05-08T08:54:22
2020-05-08T08:56:32
2020-05-08T08:56:16
patrickvonplaten
[]
Rename `setup_module` to `prepare_module` due to issues with pytest's `setup_module` function. See: PR #59.
true
614,372,553
https://api.github.com/repos/huggingface/datasets/issues/60
https://github.com/huggingface/datasets/pull/60
60
Update to simplify some datasets conversion
closed
6
2020-05-07T22:02:24
2020-05-08T10:38:32
2020-05-08T10:18:24
thomwolf
[]
This PR updates the encoding of `Values` like `integers`, `boolean` and `float` to use python casting and avoid having to cast in the dataset scripts (see the sketch below), as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r420176626

We could also change (not included in this PR yet):
- `supervized_keys` to make them a NamedTuple instead of a dataclass, and
- handle the `Translation` features specifically,

as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r421740236

@patrickvonplaten @mariamabarham tell me if you want these last two changes as well.
true
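The PR above moves simple casting into the feature encoding so dataset scripts can hand over raw values. The class below is purely illustrative and assumes nothing about the `nlp` library's real `Value` implementation: it just shows what "use python casting" can look like for a handful of dtypes.
```python
# Plain-Python casts per dtype; the dtype names are the usual Arrow-style ones (assumed here).
_PY_CASTS = {
    "int32": int,
    "int64": int,
    "float32": float,
    "float64": float,
    "bool": bool,
    "string": str,
}


class Value:
    """Minimal Value-like feature: casts raw inputs with Python constructors."""

    def __init__(self, dtype):
        if dtype not in _PY_CASTS:
            raise ValueError(f"Unsupported dtype: {dtype}")
        self.dtype = dtype

    def encode_example(self, value):
        return _PY_CASTS[self.dtype](value)


if __name__ == "__main__":
    print(Value("int32").encode_example("7"))      # -> 7
    print(Value("float64").encode_example("0.5"))  # -> 0.5
```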
614,366,045
https://api.github.com/repos/huggingface/datasets/issues/59
https://github.com/huggingface/datasets/pull/59
59
Fix tests
closed
5
2020-05-07T21:48:09
2020-05-08T10:57:57
2020-05-08T10:46:51
thomwolf
[]
@patrickvonplaten I've broken the tests a bit with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts. I'm trying to fix them here but I have a weird error, do you think you can have a look?
```bash
(datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./tests/test_dataset_common.py::DatasetTest::test_builder_class_snli
============================================================================= test session starts =============================================================================
platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Users/thomwolf/miniconda2/envs/datasets/bin/python
cachedir: .pytest_cache
rootdir: /Users/thomwolf/Documents/GitHub/datasets
plugins: xdist-1.31.0, forked-1.1.3
collected 1 item

tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR

=================================================================================== ERRORS ====================================================================================
____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________

file_path = <module 'tests.test_dataset_common' from '/Users/thomwolf/Documents/GitHub/datasets/tests/test_dataset_common.py'>
download_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True)
download_kwargs = {}

    def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder:
        r""" Download/extract/cache a dataset to add to the lib from a path or url which can be:
            - a path to a local directory containing the dataset processing python script
            - an url to a S3 directory with a dataset processing python script
        Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks)
        and using cloudpickle (among other things).

        Return: tuple of
            the unique id associated to the dataset
            the local path to the dataset
        """
        if download_config is None:
            download_config = DownloadConfig(**download_kwargs)
        download_config.extract_compressed_file = True
        download_config.force_extract = True

>       name = list(filter(lambda x: x, file_path.split("/")))[-1] + ".py"
E       AttributeError: module 'tests.test_dataset_common' has no attribute 'split'

src/nlp/load.py:169: AttributeError
============================================================================== warnings summary ===============================================================================
/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15
  /Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
    import imp

-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================================================================== short test summary info ===========================================================================
ERROR tests/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split'
========================================================================= 1 warning, 1 error in 3.63s =========================================================================
```
true
614,362,308
https://api.github.com/repos/huggingface/datasets/issues/58
https://github.com/huggingface/datasets/pull/58
58
Aborted PR - Fix tests
closed
1
2020-05-07T21:40:19
2020-05-07T21:48:01
2020-05-07T21:41:27
thomwolf
[]
@patrickvonplaten I've broken the tests a bit with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts. I'm trying to fix them here but I have a weird error, do you think you can have a look?
```bash
(datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./tests/test_dataset_common.py::DatasetTest::test_builder_class_snli
============================================================================= test session starts =============================================================================
platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Users/thomwolf/miniconda2/envs/datasets/bin/python
cachedir: .pytest_cache
rootdir: /Users/thomwolf/Documents/GitHub/datasets
plugins: xdist-1.31.0, forked-1.1.3
collected 1 item

tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR

=================================================================================== ERRORS ====================================================================================
____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________

file_path = <module 'tests.test_dataset_common' from '/Users/thomwolf/Documents/GitHub/datasets/tests/test_dataset_common.py'>
download_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True)
download_kwargs = {}

    def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder:
        r""" Download/extract/cache a dataset to add to the lib from a path or url which can be:
            - a path to a local directory containing the dataset processing python script
            - an url to a S3 directory with a dataset processing python script
        Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks)
        and using cloudpickle (among other things).

        Return: tuple of
            the unique id associated to the dataset
            the local path to the dataset
        """
        if download_config is None:
            download_config = DownloadConfig(**download_kwargs)
        download_config.extract_compressed_file = True
        download_config.force_extract = True

>       name = list(filter(lambda x: x, file_path.split("/")))[-1] + ".py"
E       AttributeError: module 'tests.test_dataset_common' has no attribute 'split'

src/nlp/load.py:169: AttributeError
============================================================================== warnings summary ===============================================================================
/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15
  /Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
    import imp

-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================================================================== short test summary info ===========================================================================
ERROR tests/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split'
========================================================================= 1 warning, 1 error in 3.63s =========================================================================
```
true
614,261,638
https://api.github.com/repos/huggingface/datasets/issues/57
https://github.com/huggingface/datasets/pull/57
57
Better cached path
closed
2
2020-05-07T18:36:00
2020-05-08T13:20:30
2020-05-08T13:20:28
lhoestq
[]
### Changes:
- The `cached_path` function no longer returns None if the file is missing or the url doesn't work. Instead, it can raise `FileNotFoundError` (missing file), `ConnectionError` (no cache and unreachable url) or `ValueError` (parsing error) - see the sketch below
- Fix requests to the firebase API that doesn't handle HEAD requests...
- Allow custom downloads in dataset scripts: this makes it possible to use `tf.io.gfile.copy`, for example, to download from google storage. I added an example: the `boolq` script
true
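As a hedged sketch of the error contract described in PR #57 (missing file -> `FileNotFoundError`, unreachable url with no cache -> `ConnectionError`, unparsable input -> `ValueError`), here is a simplified, stand-alone `cached_path`; the cache layout and helper logic are assumptions and not the library's real implementation.
```python
import os
from urllib.parse import urlparse

import requests


def cached_path(url_or_filename, cache_dir="~/.cache/example_datasets"):
    """Return a local path for `url_or_filename`, raising instead of returning None."""
    parsed = urlparse(url_or_filename)
    if parsed.scheme in ("http", "https"):
        cache_dir = os.path.expanduser(cache_dir)
        os.makedirs(cache_dir, exist_ok=True)
        local_path = os.path.join(cache_dir, os.path.basename(parsed.path) or "downloaded_file")
        if os.path.exists(local_path):
            return local_path  # already cached
        try:
            response = requests.get(url_or_filename, timeout=10)
            response.raise_for_status()
        except requests.RequestException as exc:
            raise ConnectionError(f"Couldn't reach {url_or_filename}") from exc
        with open(local_path, "wb") as f:
            f.write(response.content)
        return local_path
    if os.path.exists(url_or_filename):
        return url_or_filename  # local file, nothing to do
    if parsed.scheme == "":
        raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist")
    raise ValueError(f"Unable to parse {url_or_filename} as a URL or as a local path")
```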
614,236,869
https://api.github.com/repos/huggingface/datasets/issues/56
https://github.com/huggingface/datasets/pull/56
56
[Dataset] Tester add mock function
closed
0
2020-05-07T17:51:37
2020-05-07T17:52:51
2020-05-07T17:52:50
patrickvonplaten
[]
Need to add an empty `extract()` function to make the `hansard` dataset test work.
true
613,968,072
https://api.github.com/repos/huggingface/datasets/issues/55
https://github.com/huggingface/datasets/pull/55
55
Beam datasets
closed
4
2020-05-07T11:04:32
2020-05-11T07:20:02
2020-05-11T07:20:00
lhoestq
[]
# Beam datasets

## Intro

Beam datasets use beam pipelines for preprocessing (basically lots of `.map` over objects called PCollections). The advantage of apache beam is that you can choose which type of runner you want to use to preprocess your data. The main runners are:
- the `DirectRunner` to run the pipeline locally (default). However I encountered memory issues for big datasets (like the french or english wikipedia). Small datasets work fine
- Google Dataflow. I didn't play with it.
- Spark or Flink, two well known data processing frameworks. I tried to use the Spark/Flink local runners provided by apache beam for python and wasn't able to make them work properly though...

## From tfds beam datasets to our own beam datasets

Tensorflow datasets used beam and a complicated pipeline to shard the TFRecords files. To allow users to download beam datasets without having to preprocess them, they also allow downloading the already preprocessed datasets from their google storage (the beam pipeline doesn't run in that case).

On our side, we replace TFRecords by something else. Arrow or Parquet do the job but I chose Parquet as: 1) there is a builtin apache beam parquet writer that is quite convenient, and 2) reading parquet from the pyarrow library is also simple and effective (there is a mmap option! - see the sketch below).

Moreover we don't shard datasets into many many files like tfds (they were probably doing that mainly because of the limit of 2Gb per TFRecord file). Therefore we have a simpler pipeline that saves each split into one parquet file. We also removed the utilities to use their google storage (for now maybe? we'll have to discuss it).

## Main changes

- Added a BeamWriter to save the output of beam pipelines into parquet files and fill dataset infos
- Created a ParquetReader and refactored arrow_reader.py a bit

> **With this, we can now try to add beam datasets from tfds**

I already added the wikipedia one, and I will also try to add the Wiki40b dataset.

## Test the wikipedia script

You can download and run the beam pipeline for wikipedia (using the `DirectRunner` by default) like this:
```
>>> import nlp
>>> nlp.load("datasets/nlp/wikipedia", dataset_config="20200501.frr")
```
This wikipedia dataset (lang: frr, North Frisian) is a small one (~10Mb), but feel free to try bigger ones (and fill 20Gb of swap memory if you try the english one lol)

## Next

Should we allow downloading preprocessed datasets from the tfds google storage?
Should we try to optimize the beam pipelines to run locally without memory issues?
Should we try other data processing frameworks for big datasets, like spark?

## About this PR

It should be merged after #25

-----------------

I'd be happy to have your feedback and your ideas to improve the processing of big datasets like wikipedia :)
true
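The write-up above saves each split as a single Parquet file and reads it back with pyarrow, optionally memory-mapped. Below is a small hedged sketch of that read path; the function and the file name are illustrative and not the ParquetReader added in the PR.
```python
import pyarrow.parquet as pq


def read_split(parquet_path, memory_map=True):
    """Load a split that was saved as a single Parquet file.

    With memory_map=True pyarrow maps the file instead of reading it fully into
    RAM, which keeps loading large preprocessed splits cheap.
    """
    return pq.read_table(parquet_path, memory_map=memory_map)


# Example usage (hypothetical file name produced by a beam pipeline):
#   table = read_split("wikipedia-20200501.frr-train.parquet")
#   print(table.num_rows, table.schema)
```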
613,513,348
https://api.github.com/repos/huggingface/datasets/issues/54
https://github.com/huggingface/datasets/pull/54
54
[Tests] Improved Error message for dummy folder structure
closed
0
2020-05-06T18:11:48
2020-05-06T18:13:00
2020-05-06T18:12:59
patrickvonplaten
[]
Improved Error message
true
613,436,158
https://api.github.com/repos/huggingface/datasets/issues/53
https://github.com/huggingface/datasets/pull/53
53
[Features] Typo in generate_from_dict
closed
0
2020-05-06T16:05:23
2020-05-07T15:28:46
2020-05-07T15:28:45
patrickvonplaten
[]
Change `isinstance` test in features when generating features from dict.
true
613,339,071
https://api.github.com/repos/huggingface/datasets/issues/52
https://github.com/huggingface/datasets/pull/52
52
allow dummy folder structure to handle dict of lists
closed
0
2020-05-06T13:54:35
2020-05-06T13:55:19
2020-05-06T13:55:18
patrickvonplaten
[]
`esnli.py` needs that extension of the dummy data testing.
true
613,266,668
https://api.github.com/repos/huggingface/datasets/issues/51
https://github.com/huggingface/datasets/pull/51
51
[Testing] Improved testing structure
closed
1
2020-05-06T12:03:07
2020-05-07T22:07:19
2020-05-06T13:20:18
patrickvonplaten
[]
This PR refactors the test design a bit and puts the mock download manager in the `utils` files as it is just a test helper class.

As @mariamabarham pointed out, creating a dummy folder structure can be quite hard to grasp. This PR tries to change that to some extent. It follows the following logic for the `dummy` folder structure now:

1) The data builder has no config -> the `dummy` folder structure is:
`dummy/<version>/dummy_data.zip`

2) The data builder has >= 1 configs -> the `dummy` folder structure is:
`dummy/<config_name_1>/<version>/dummy_data.zip`
`dummy/<config_name_2>/<version>/dummy_data.zip`

Now, the difficult part is how to create the `dummy_data.zip` file. There are two cases:

A) The `data_urls` parameter inserted into the `download_and_extract` fn is a **string**:
-> the `dummy_data.zip` file zips the folder:
`dummy_data/<relative_path_of_folder_structure_of_url>`

B) The `data_urls` parameter inserted into the `download_and_extract` fn is a **dict**:
-> the `dummy_data.zip` file zips the folders:
`dummy_data/<relative_path_of_folder_structure_of_url_behind_key_1>`
`dummy_data/<relative_path_of_folder_structure_of_url_behind_key_2>`

By relative folder structure I mean `url_path.split('/')[-1]` (see the sketch below). As an example, the dataset **xquad** by deepmind has the following url path behind the key `de`: `https://github.com/deepmind/xquad/blob/master/xquad.de.json` -> This means that the relative url path should be `xquad.de.json`.

@mariamabarham B) is a change from how it was before and I think it makes more sense. While before the `dummy_data.zip` file for xquad with config `de` looked like `dummy_data/de`, it would now look like `dummy_data/xquad.de.json`. I think this is better and easier to understand.

Therefore there are currently 6 tests that would have to change their dummy folder structure, which can easily be done (30min).

I also added a function `print_dummy_data_folder_structure` that prints out the expected structures when testing, which should be quite helpful.
true
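The dummy-data convention above names each dummy file after the tail of the real download URL. The sketch below shows that mapping for both the string and the dict case, as the xquad example implies (tail after the last "/"); the function names are made up and this is not the repository's actual mock download manager.
```python
import os


def dummy_file_path(dummy_data_dir, url):
    """Map a real download URL to its expected location inside dummy_data/.

    e.g. 'https://github.com/deepmind/xquad/blob/master/xquad.de.json'
         -> 'dummy_data/xquad.de.json'
    """
    relative_name = url.split("/")[-1]
    return os.path.join(dummy_data_dir, relative_name)


def dummy_download_and_extract(data_urls, dummy_data_dir="dummy_data"):
    """Mimic download_and_extract for tests: handle a single url or a dict of urls."""
    if isinstance(data_urls, dict):
        return {key: dummy_file_path(dummy_data_dir, url) for key, url in data_urls.items()}
    return dummy_file_path(dummy_data_dir, data_urls)


if __name__ == "__main__":
    print(dummy_download_and_extract("https://github.com/deepmind/xquad/blob/master/xquad.de.json"))
    # -> dummy_data/xquad.de.json
```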