| column | dtype | values |
| --- | --- | --- |
| id | int64 | 599M – 3.18B |
| number | int64 | 1 – 7.65k |
| title | stringlengths | 1 – 290 |
| state | stringclasses | 2 values |
| body | stringlengths | 0 – 228k |
| is_pull_request | bool | 1 class |
| created_at | stringdate | 2020-04-14 10:18:02 – 2025-06-26 12:23:48 |
| updated_at | stringdate | 2020-04-27 16:04:17 – 2025-06-26 14:02:38 |
| closed_at | stringlengths | 20 – 20 |
| user_login | stringlengths | 3 – 26 |
| author_association | stringclasses | 4 values |
| pr_url | stringlengths | 46 – 49 |
| pr_merged_at | stringlengths | 20 – 20 |
| comments_count | int64 | 0 – 70 |
| reactions_total | int64 | 0 – 61 |
| reactions_plus1 | int64 | 0 – 39 |
| reactions_heart | int64 | 0 – 22 |
| draft | bool | 2 classes |
| locked | bool | 1 class |
| labels | listlengths | 0 – 4 |
| html_url | stringlengths | 46 – 51 |
| is_pr_url | bool | 2 classes |
| comments | listlengths | 0 – 30 |
621,929,428
175
[Manual data dir] Error message: nlp.load_dataset('xsum') -> TypeError
closed
v 0.1.0 from pip

```python
import nlp
xsum = nlp.load_dataset('xsum')
```

Issue is `dl_manager.manual_dir` is `None`:

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-42-8a32f066f3bd> in <module>
----> 1 xsum = nlp.load_dataset('xsum')

~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
    515         download_mode=download_mode,
    516         ignore_verifications=ignore_verifications,
--> 517         save_infos=save_infos,
    518     )
    519

~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
    361         verify_infos = not save_infos and not ignore_verifications
    362         self._download_and_prepare(
--> 363             dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    364         )
    365         # Sync info

~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    397         split_dict = SplitDict(dataset_name=self.name)
    398         split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 399         split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
    400         # Checksums verification
    401         if verify_infos:

~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/datasets/xsum/5c5fca23aaaa469b7a1c6f095cf12f90d7ab99bcc0d86f689a74fd62634a1472/xsum.py in _split_generators(self, dl_manager)
    102         with open(dl_path, "r") as json_file:
    103             split_ids = json.load(json_file)
--> 104         downloaded_path = os.path.join(dl_manager.manual_dir, "xsum-extracts-from-downloads")
    105         return [
    106             nlp.SplitGenerator(

~/miniconda3/envs/nb/lib/python3.7/posixpath.py in join(a, *p)
     78     will be discarded.  An empty last part will result in a path that
     79     ends with a separator."""
---> 80     a = os.fspath(a)
     81     sep = _get_sep(a)
     82     path = a

TypeError: expected str, bytes or os.PathLike object, not NoneType
```
true
2020-05-20T17:00:32Z
2020-05-20T18:18:50Z
2020-05-20T18:18:50Z
sshleifer
CONTRIBUTOR
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/175
false
[]
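The failure above reduces to `os.path.join` receiving `None` because no manual data directory was supplied, so `dl_manager.manual_dir` was never set. A minimal, self-contained reproduction of just that failure mode (no `nlp` install needed; the subfolder name is taken from the traceback):

```python
import os

# dl_manager.manual_dir is None when load_dataset is called without a
# manual data directory; os.path.join then raises the TypeError seen above.
manual_dir = None
try:
    os.path.join(manual_dir, "xsum-extracts-from-downloads")
except TypeError as err:
    message = str(err)

print(message)  # expected str, bytes or os.PathLike object, not NoneType
```

A clearer error message would check `dl_manager.manual_dir` before joining and tell the user which directory to provide.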
621,928,403
174
nlp.load_dataset('xsum') -> TypeError
closed
true
2020-05-20T16:59:09Z
2020-05-20T17:43:46Z
2020-05-20T17:43:46Z
sshleifer
CONTRIBUTOR
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/174
false
[]
621,764,932
173
Rm extracted test dirs
closed
All the dummy data used for tests were duplicated: for each dataset, we had one zip file but also its extracted directory. I removed all these directories. Furthermore, instead of extracting next to the dummy_data.zip file, we now extract into the temp `cached_dir` used for tests, so that all the extracted directories get removed after testing. Finally, there was a bug in the `mock_download_manager` that would let it create directories with invalid names, as in #172. I fixed that by encoding url arguments. I had to rename the dummy data for `scientific_papers` and `cnn_dailymail` (the aws tests don't pass for those 2 in this PR, but they will once aws is synced, as the local ones do). Let me know if it sounds good to you @patrickvonplaten. I'm still not entirely familiar with the mock downloader
true
2020-05-20T13:30:48Z
2020-05-22T16:34:36Z
2020-05-22T16:34:35Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/173
2020-05-22T16:34:35Z
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/173
true
[ "Thanks for cleaning up the extracted dummy data folders! Instead of changing the file_utils we could also just put these folders under `.gitignore` (or maybe already done?).", "Awesome! I guess you might have to add the changes for the MockDLManager now in a different file though because of my last PR - sorry!" ...
621,377,386
172
Clone not working on Windows environment
closed
Cloning in a Windows environment is not working because of the special character '?' used in a folder name. Please consider changing the folder name.

Reference to folder: nlp/datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs/dailymail/stories/

Error log:

```
fatal: cannot create directory at 'datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs': Invalid argument
```
true
2020-05-20T00:45:14Z
2020-05-23T12:49:13Z
2020-05-23T11:27:52Z
codehunk628
CONTRIBUTOR
null
null
2
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/172
false
[ "Should be fixed on master now :)", "Thanks @lhoestq 👍 Now I can uninstall WSL and get back to work with windows.🙂" ]
621,199,128
171
fix squad metric format
closed
The format of the squad metric was wrong. This should fix #143. I tested with:

```python
predictions = [
    {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}
]
references = [
    {'answers': [{'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'}
]
```
true
2020-05-19T18:37:36Z
2020-05-22T13:36:50Z
2020-05-22T13:36:48Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/171
2020-05-22T13:36:48Z
5
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/171
true
[ "One thing for SQuAD is that I wanted to be able to use the SQuAD dataset directly in the metrics and I'm not sure it will be possible with this format.\r\n\r\n(maybe it's not really possible in general though)", "This is kinda related to one thing I had in mind which is that we may want to be able to dump our mo...
621,119,747
170
Rename anli dataset
closed
What we have now as the `anli` dataset is actually the αNLI dataset from the ART challenge. This name is confusing because `anli` is also the name of adversarial NLI (see [https://github.com/facebookresearch/anli](https://github.com/facebookresearch/anli)). I renamed the current `anli` dataset to `art`.
true
2020-05-19T16:26:57Z
2020-05-20T12:23:09Z
2020-05-20T12:23:08Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/170
2020-05-20T12:23:07Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/170
true
[]
621,099,682
169
Adding Qanta (Quizbowl) Dataset
closed
This PR adds the qanta question answering datasets from [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792) and [Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples](https://www.aclweb.org/anthology/Q19-1029/) (adversarial fold).

This partially continues a discussion around fixing dummy data from https://github.com/huggingface/nlp/issues/161

I ran the following code to double check that it works and did some sanity checks on the output. The majority of the code itself is from our `allennlp` version of the dataset reader.

```python
import nlp

# Default is full question
data = nlp.load_dataset('./datasets/qanta')

# Four configs
# Primarily useful for training
data = nlp.load_dataset('./datasets/qanta', 'mode=sentences,char_skip=25')

# Primarily used in evaluation
data = nlp.load_dataset('./datasets/qanta', 'mode=first,char_skip=25')
data = nlp.load_dataset('./datasets/qanta', 'mode=full,char_skip=25')

# Primarily useful in evaluation and "live" play
data = nlp.load_dataset('./datasets/qanta', 'mode=runs,char_skip=25')
```
true
2020-05-19T16:03:01Z
2020-05-26T12:52:31Z
2020-05-26T12:52:31Z
EntilZha
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/169
null
5
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/169
true
[ "Hi @EntilZha - sorry for waiting so long until taking action here. We created a new command and a new recipe of how to add dummy_data. Can you maybe rebase to `master` as explained in 7. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp and check that your dummy data is cor...
620,959,819
168
Loading 'wikitext' dataset fails
closed
Loading the 'wikitext' dataset fails with an AttributeError.

Code to reproduce (from the example notebook):

```python
import nlp
wikitext_dataset = nlp.load_dataset('wikitext')
```

Error:

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-17-d5d9df94b13c> in <module>()
     11
     12 # Load a dataset and print the first examples in the training set
---> 13 wikitext_dataset = nlp.load_dataset('wikitext')
     14 print(wikitext_dataset['train'][0])

6 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
    518         download_mode=download_mode,
    519         ignore_verifications=ignore_verifications,
--> 520         save_infos=save_infos,
    521     )
    522

/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
    363         verify_infos = not save_infos and not ignore_verifications
    364         self._download_and_prepare(
--> 365             dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    366         )
    367         # Sync info

/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    416         try:
    417             # Prepare split will record examples associated to the split
--> 418             self._prepare_split(split_generator, **prepare_split_kwargs)
    419         except OSError:
    420             raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or ""))

/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator)
    594                 example = self.info.features.encode_example(record)
    595                 writer.write(example)
--> 596         num_examples, num_bytes = writer.finalize()
    597
    598         assert num_examples == num_examples, f"Expected to write {split_info.num_examples} but wrote {num_examples}"

/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in finalize(self, close_stream)
    173     def finalize(self, close_stream=True):
    174         if self.pa_writer is not None:
--> 175             self.write_on_file()
    176             self.pa_writer.close()
    177         if close_stream:

/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_on_file(self)
    124         else:
    125             # All good
--> 126             self._write_array_on_file(pa_array)
    127         self.current_rows = []
    128

/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in _write_array_on_file(self, pa_array)
     93     def _write_array_on_file(self, pa_array):
     94         """Write a PyArrow Array"""
---> 95         pa_batch = pa.RecordBatch.from_struct_array(pa_array)
     96         self._num_bytes += pa_array.nbytes
     97         self.pa_writer.write_batch(pa_batch)

AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
```
true
2020-05-19T13:04:29Z
2020-05-26T21:46:52Z
2020-05-26T21:46:52Z
itay1itzhak
NONE
null
null
6
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/168
false
[ "Hi, make sure you have a recent version of pyarrow.\r\n\r\nAre you using it in Google Colab? In this case, this error is probably the same as #128", "Thanks!\r\n\r\nYes I'm using Google Colab, it seems like a duplicate then.", "Closing as it is a duplicate", "Hi,\r\nThe squad bug seems to be fixed, but the l...
620,908,786
167
[Tests] refactor tests
closed
This PR separates AWS and Local tests to remove these ugly statements in the script:

```python
if "/" not in dataset_name:
    logging.info("Skip {} because it is a canonical dataset")
    return
```

To run an `aws` test, one should now run the following command:

```bash
pytest -s tests/test_dataset_common.py::AWSDatasetTest::test_builder_class_wmt14
```

The same `local` test can be run with:

```bash
pytest -s tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_wmt14
```
true
2020-05-19T11:43:32Z
2020-05-19T16:17:12Z
2020-05-19T16:17:10Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/167
2020-05-19T16:17:10Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/167
true
[ "Nice !" ]
620,850,218
166
Add a method to shuffle a dataset
closed
Could maybe be a `dataset.shuffle(generator=None, seed=None)` method. Also, we could maybe have a clear indication of which methods modify a dataset in place and which methods return/cache a modified dataset. I kinda like the torch convention of having an underscore suffix for all the methods which modify a dataset in place. What do you think?
true
2020-05-19T10:08:46Z
2020-06-23T15:07:33Z
2020-06-23T15:07:32Z
thomwolf
MEMBER
null
null
4
0
0
0
null
false
[ "generic discussion" ]
https://github.com/huggingface/datasets/issues/166
false
[ "+1 for the naming convention\r\n\r\nAbout the `shuffle` method, from my understanding it should be done in `Dataloader` (better separation between dataset processing - usage)", "+1 for shuffle in `Dataloader`. \r\nSome `Dataloader` just store idxs of dataset and just shuffle those idxs, which might(?) be faster ...
620,758,221
165
ANLI
closed
Can I recommend the following: For ANLI, use https://github.com/facebookresearch/anli. As that paper says, "Our dataset is not to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself αNLI, or ART.". Indeed, the paper cited under what is currently called anli says in the abstract "We introduce a challenge dataset, ART". The current naming will confuse people :)
true
2020-05-19T07:50:57Z
2020-05-20T12:23:07Z
2020-05-20T12:23:07Z
douwekiela
NONE
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/165
false
[]
620,540,250
164
Add Spanish POS and NER Datasets
closed
Hi guys, In order to cover multilingual support a little step could be adding standard Datasets used for Spanish NER and POS tasks. I can provide it in raw and preprocessed formats.
true
2020-05-18T22:18:21Z
2020-05-25T16:28:45Z
2020-05-25T16:28:45Z
mrm8488
CONTRIBUTOR
null
null
2
0
0
0
null
false
[ "dataset request" ]
https://github.com/huggingface/datasets/issues/164
false
[ "Hello @mrm8488, are these datasets official datasets published in an NLP/CL/ML venue?", "What about this one: https://github.com/ccasimiro88/TranslateAlignRetrieve?" ]
620,534,307
163
[Feature request] Add cos-e v1.0
closed
I noticed the second release of cos-e (v1.11) is included in this repo. I wanted to request inclusion of v1.0, since this is the version on which results are reported on in [the paper](https://www.aclweb.org/anthology/P19-1487/), and v1.11 has noted [annotation](https://github.com/salesforce/cos-e/issues/2) [issues](https://arxiv.org/pdf/2004.14546.pdf).
true
2020-05-18T22:05:26Z
2020-06-16T23:15:25Z
2020-06-16T18:52:06Z
sarahwie
NONE
null
null
10
0
0
0
null
false
[ "dataset request" ]
https://github.com/huggingface/datasets/issues/163
false
[ "Sounds good, @mariamabarham do you want to give a look?\r\nI think we should have two configurations so we can allow either version of the dataset to be loaded with the `1.0` version being the default maybe.\r\n\r\nCc some authors of the great cos-e: @nazneenrajani @bmccann", "cos_e v1.0 is related to CQA v1.0 b...
620,513,554
162
fix prev files hash in map
closed
Fix the `.map` issue in #160. This makes sure it takes the previous files when computing the hash.
true
2020-05-18T21:20:51Z
2020-05-18T21:36:21Z
2020-05-18T21:36:20Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/162
2020-05-18T21:36:20Z
3
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/162
true
[ "Awesome! ", "Hi, yes, this seems to fix #160 -- I cloned the branch locally and verified", "Perfect then :)" ]
620,487,535
161
Discussion on version identifier & MockDataLoaderManager for test data
open
Hi, I'm working on adding a dataset and ran into an error due to `download` not being defined on `MockDataLoaderManager`, while it is defined in `nlp/utils/download_manager.py`. Running this step from the readme triggers the error: `RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_localmydatasetname`. If I can get something to work, I can include it in my data PR once I'm done.
true
2020-05-18T20:31:30Z
2020-05-24T18:10:03Z
null
EntilZha
CONTRIBUTOR
null
null
12
0
0
0
null
false
[ "generic discussion" ]
https://github.com/huggingface/datasets/issues/161
false
[ "usually you can replace `download` in your dataset script with `download_and_prepare()` - could you share the code for your dataset here? :-) ", "I have an initial version here: https://github.com/EntilZha/nlp/tree/master/datasets/qanta Thats pretty close to what I'll do as a PR, but still want to do some more s...
620,448,236
160
caching in map causes same result to be returned for train, validation and test
closed
hello, I am working on a program that uses the `nlp` library with the `SST2` dataset. The rough outline of the program is:

```python
import nlp as nlp_datasets
...
parser.add_argument('--dataset', help='HuggingFace Datasets id', default=['glue', 'sst2'], nargs='+')
...
dataset = nlp_datasets.load_dataset(*args.dataset)
...
# Create feature vocabs
vocabs = create_vocabs(dataset.values(), vectorizers)
...
# Create a function to vectorize based on vectorizers and vocabs:

print('TS', train_set.num_rows)
print('VS', valid_set.num_rows)
print('ES', test_set.num_rows)

# factory method to create a `convert_to_features` function based on vocabs
convert_to_features = create_featurizer(vectorizers, vocabs)

train_set = train_set.map(convert_to_features, batched=True)
train_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])
train_loader = torch.utils.data.DataLoader(train_set, batch_size=args.batchsz)

valid_set = valid_set.map(convert_to_features, batched=True)
valid_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])
valid_loader = torch.utils.data.DataLoader(valid_set, batch_size=args.batchsz)

test_set = test_set.map(convert_to_features, batched=True)
test_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])
test_loader = torch.utils.data.DataLoader(test_set, batch_size=args.batchsz)

print('TS', train_set.num_rows)
print('VS', valid_set.num_rows)
print('ES', test_set.num_rows)
```

I'm not sure if I'm using it incorrectly, but the results are not what I expect. Namely, the `.map()` seems to grab the dataset from the cache and then loses track of what the specific dataset is, instead using my training data for all datasets:

```
TS 67349
VS 872
ES 1821
TS 67349
VS 67349
ES 67349
```

The behavior changes if I turn off the caching, but then the results fail:

```python
train_set = train_set.map(convert_to_features, batched=True, load_from_cache_file=False)
...
valid_set = valid_set.map(convert_to_features, batched=True, load_from_cache_file=False)
...
test_set = test_set.map(convert_to_features, batched=True, load_from_cache_file=False)
```

Now I get the right set of features back...

```
TS 67349
VS 872
ES 1821
100%|██████████| 68/68 [00:00<00:00, 92.78it/s]
100%|██████████| 1/1 [00:00<00:00, 75.47it/s]
  0%|          | 0/2 [00:00<?, ?it/s]TS 67349
VS 872
ES 1821
100%|██████████| 2/2 [00:00<00:00, 77.19it/s]
```

but I think it's losing track of the original training set:

```
Traceback (most recent call last):
  File "/home/dpressel/dev/work/baseline/api-examples/layers-classify-hf-datasets.py", line 148, in <module>
    for x in train_loader:
  File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/dpressel/anaconda3/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 338, in __getitem__
    output_all_columns=self._output_all_columns,
  File "/home/dpressel/anaconda3/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 294, in _getitem
    outputs = self._unnest(self._data.slice(key, 1).to_pydict())
  File "pyarrow/table.pxi", line 1211, in pyarrow.lib.Table.slice
  File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table
  File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 3: In chunk 0: Invalid: Length spanned by list offsets (15859698) larger than values array (length 100000)

Process finished with exit code 1
```

The full example program (minus the print stmts) is here: https://github.com/dpressel/mead-baseline/pull/620/files
true
2020-05-18T19:22:03Z
2020-05-18T21:36:20Z
2020-05-18T21:36:20Z
dpressel
NONE
null
null
7
0
0
0
null
false
[ "dataset bug" ]
https://github.com/huggingface/datasets/issues/160
false
[ "Hi @dpressel, \r\n\r\nthanks for posting your issue! Can you maybe add a complete code snippet that we can copy paste to reproduce the error? For example, I'm not sure where the variable `train_set` comes from in your code and it seems like you are loading multiple datasets at once? ", "Hi, the full example was...
620,420,700
159
How can we add more datasets to nlp library?
closed
true
2020-05-18T18:35:31Z
2020-05-18T18:37:08Z
2020-05-18T18:37:07Z
Tahsin-Mayeesha
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/159
false
[ "Found it. https://github.com/huggingface/nlp/tree/master/datasets" ]
620,396,658
158
add Toronto Books Corpus
closed
This PR adds the Toronto Books Corpus. It only considers the TMX and plain text (Moses) files defined in the table **Statistics and TMX/Moses Downloads** [here](http://opus.nlpl.eu/Books.php).
true
2020-05-18T17:54:45Z
2020-06-11T07:49:15Z
2020-05-19T07:34:56Z
mariamabarham
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/158
null
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/158
true
[]
620,356,542
157
nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)"
closed
I'm trying to load datasets from nlp, but there seems to be an error saying "TypeError: list_() takes exactly one argument (2 given)". A gist can be found here: https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a
true
2020-05-18T16:46:38Z
2020-06-05T08:08:58Z
2020-06-05T08:08:58Z
saahiluppal
NONE
null
null
11
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/157
false
[ "You can just run: \r\n`val = nlp.load_dataset('squad')` \r\n\r\nif you want to have just the validation script you can also do:\r\n\r\n`val = nlp.load_dataset('squad', split=\"validation\")`", "If you want to load a local dataset, make sure you include a `./` before the folder name. ", "This happens by just do...
620,263,687
156
SyntaxError with WMT datasets
closed
The following snippet produces a syntax error:

```python
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```

```
Traceback (most recent call last):
  File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-8-3206959998b9>", line 3, in <module>
    dataset = nlp.load_dataset('wmt14')
  File "/home/tom/.local/lib/python3.6/site-packages/nlp/load.py", line 505, in load_dataset
    builder_cls = import_main_class(module_path, dataset=True)
  File "/home/tom/.local/lib/python3.6/site-packages/nlp/load.py", line 56, in import_main_class
    module = importlib.import_module(module_path)
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/tom/.local/lib/python3.6/site-packages/nlp/datasets/wmt14/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2/wmt14.py", line 21, in <module>
    from .wmt_utils import Wmt, WmtConfig
  File "/home/tom/.local/lib/python3.6/site-packages/nlp/datasets/wmt14/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2/wmt_utils.py", line 659
    <<<<<<< HEAD
    ^
SyntaxError: invalid syntax
```

Python version: `3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]`. Running on Ubuntu 18.04, via a Jupyter notebook.
true
2020-05-18T14:38:18Z
2020-07-23T16:41:55Z
2020-07-23T16:41:55Z
tomhosking
NONE
null
null
7
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/156
false
[ "Jeez - don't know what happened there :D Should be fixed now! \r\n\r\nThanks a lot for reporting this @tomhosking !", "Hi @patrickvonplaten!\r\n\r\nI'm now getting the below error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError ...
620,067,946
155
Include more links in README, fix typos
closed
Include more links and fix typos in README
true
2020-05-18T09:47:08Z
2020-05-28T08:31:57Z
2020-05-28T08:31:57Z
bharatr21
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/155
2020-05-28T08:31:57Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/155
true
[ "I fixed a conflict :) thanks !" ]
620,059,066
154
add Ubuntu Dialogs Corpus datasets
closed
This PR adds the Ubuntu Dialog Corpus datasets version 2.0.
true
2020-05-18T09:34:48Z
2020-05-18T10:12:28Z
2020-05-18T10:12:27Z
mariamabarham
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/154
2020-05-18T10:12:27Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/154
true
[]
619,972,246
153
Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations
open
Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly referenced and the citation/specific homepage/etc are very visible and accessible and not only the generic citation of the meta-dataset itself. Let's take GLUE as an example: The configuration has the citation for each dataset included (e.g. [here](https://github.com/huggingface/nlp/blob/master/datasets/glue/glue.py#L154-L161)) but it should be copied inside the dataset info so that, when people access `dataset.info.citation` they get both the citation for GLUE and the citation for the specific datasets inside GLUE that they have loaded.
true
2020-05-18T07:24:22Z
2020-05-18T21:18:16Z
null
thomwolf
MEMBER
null
null
4
0
0
0
null
false
[ "generic discussion" ]
https://github.com/huggingface/datasets/issues/153
false
[ "As @yoavgo suggested, there should be the possibility to call a function like nlp.bib that outputs all bibtex ref from the datasets and models actually used and eventually nlp.bib.forreadme that would output the same info + versions numbers so they can be included in a readme.md file.", "Actually, double checki...
619,971,900
152
Add GLUE config name check
closed
Fixes #130 by adding a name check to the Glue class
true
2020-05-18T07:23:43Z
2020-05-27T22:09:12Z
2020-05-27T22:09:12Z
bharatr21
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/152
null
5
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/152
true
[ "If tests are being added, any guidance on where to add tests would be helpful!\r\n\r\nTagging @thomwolf for review", "Looks good to me. Is this compatible with the way we are doing tests right now @patrickvonplaten ?", "If the tests pass it should be fine :-) \r\n\r\n@Bharat123rox could you check whether the t...
619,968,480
151
Fix JSON tests.
closed
true
2020-05-18T07:17:38Z
2020-05-18T07:21:52Z
2020-05-18T07:21:51Z
jplu
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/151
2020-05-18T07:21:51Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/151
true
[]
619,809,645
150
Add WNUT 17 NER dataset
closed
Hi, this PR adds the WNUT 17 dataset to `nlp`.

> Emerging and Rare entity recognition
>
> This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisation), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet "so.. kktny in 30 mins?" - even human experts find entity kktny hard to detect and resolve. This task will evaluate the ability to detect and classify novel, emerging, singleton named entities in noisy text.
>
> The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities.

More information about the dataset can be found on the [shared task page](https://noisy-text.github.io/2017/emerging-rare-entities.html).

The dataset is taken from their [GitHub repository](https://github.com/leondz/emerging_entities_17), because the data provided in this repository contains minor fixes in the dataset format.

## Usage

The WNUT 17 dataset can then be used in `nlp` like this:

```python
import nlp

wnut_17 = nlp.load_dataset("./datasets/wnut_17/wnut_17.py")

print(wnut_17)
```

This outputs:

```txt
'train': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 3394)
'validation': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 1009)
'test': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 1287)
```

Numbers are identical with the ones in [this paper](https://www.ijcai.org/Proceedings/2019/0702.pdf) and are the same as using the `dataset` reader in Flair.

## Features

The following feature format is used to represent a sentence in the WNUT 17 dataset:

| Feature | Example | Description |
| ---- | ---- | ----------------- |
| `id` | `0` | Number (id) of current sentence |
| `tokens` | `["AHFA", "extends", "deadline"]` | List of tokens (strings) for a sentence |
| `labels` | `["B-group", "O", "O"]` | List of labels (outer span) |

The following labels are used in WNUT 17:

```txt
O
B-corporation
I-corporation
B-location
I-location
B-product
I-product
B-person
I-person
B-group
I-group
B-creative-work
I-creative-work
```
true
2020-05-17T22:19:04Z
2020-05-26T20:37:59Z
2020-05-26T20:37:59Z
stefan-it
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/150
2020-05-26T20:37:59Z
4
1
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/150
true
[ "The PR looks awesome! \r\nSince you have already added a dataset I imagine the tests as described in 5. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset all pass, right @stefan-it ?\r\n\r\nI think we are then good to merge this :-) @lhoestq ", "Nice !\r\n\r\nOne thing though...
619,735,739
149
[Feature request] Add Ubuntu Dialogue Corpus dataset
closed
https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/
true
2020-05-17T15:42:39Z
2020-05-18T17:01:46Z
2020-05-18T17:01:46Z
danth
NONE
null
null
1
0
0
0
null
false
[ "dataset request" ]
https://github.com/huggingface/datasets/issues/149
false
[ "@AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator) is added. Note that it requires a manual download by following the download instructions in the [repos]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator).\r\nMaybe we can close this issue for...
619,590,555
148
_download_and_prepare() got an unexpected keyword argument 'verify_infos'
closed
# Reproduce

In Colab,

```
%pip install -q nlp
%pip install -q apache_beam mwparserfromhell

dataset = nlp.load_dataset('wikipedia')
```

get

```
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0...
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-6-52471d2a0088> in <module>()
----> 1 dataset = nlp.load_dataset('wikipedia')

1 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
    515         download_mode=download_mode,
    516         ignore_verifications=ignore_verifications,
--> 517         save_infos=save_infos,
    518     )
    519 

/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
    361         verify_infos = not save_infos and not ignore_verifications
    362         self._download_and_prepare(
--> 363             dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    364         )
    365         # Sync info

TypeError: _download_and_prepare() got an unexpected keyword argument 'verify_infos'
```
true
2020-05-17T01:48:53Z
2020-05-18T07:38:33Z
2020-05-18T07:38:33Z
richarddwang
CONTRIBUTOR
null
null
2
2
2
0
null
false
[ "dataset bug" ]
https://github.com/huggingface/datasets/issues/148
false
[ "Same error for dataset 'wiki40b'", "Should be fixed on master :)" ]
619,581,907
147
Error with sklearn train_test_split
closed
It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code:

```python
data = nlp.load_dataset('imdb', cache_dir=data_cache)
f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed)
```

throws:

```
ValueError: Can only get row(s) (int or slice) or columns (string).
```

It's not a big deal, since there are other ways to split the data, but it would be a cool thing to have.
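In the meantime, the same effect can be had by splitting row *indices* instead of the dataset object itself. A minimal sketch (plain Python, no sklearn; `data['train']` is replaced by just its length, since only the index trick matters here):

```python
import random

# Stand-in for len(data['train']): we only need a row count, because
# we split indices rather than the dataset object itself.
n_rows = 100
idx = list(range(n_rows))
random.Random(42).shuffle(idx)

split = int(n_rows * 0.5)
f_idx, s_idx = idx[:split], idx[split:]

# Each half can then be materialized by integer indexing into the
# dataset, e.g. [data['train'][i] for i in f_idx].
```

This is only a workaround sketch, not a library API.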
true
2020-05-17T00:28:24Z
2020-06-18T16:23:23Z
2020-06-18T16:23:23Z
ClonedOne
NONE
null
null
2
0
0
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/147
false
[ "Indeed. Probably we will want to have a similar method directly in the library", "Related: #166 " ]
619,564,653
146
Add BERTScore to metrics
closed
This PR adds [BERTScore](https://arxiv.org/abs/1904.09675) to metrics. Here is an example of how to use it.

```python
import nlp

bertscore = nlp.load_metric('metrics/bertscore')  # or simply nlp.load_metric('bertscore') after this is added to huggingface's s3 bucket

predictions = ['example', 'fruit']
references = [['this is an example.', 'this is one example.'], ['apple']]
results = bertscore.compute(predictions, references, lang='en')
print(results)
```
true
2020-05-16T22:09:39Z
2020-05-17T22:22:10Z
2020-05-17T22:22:09Z
felixgwu
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/146
2020-05-17T22:22:09Z
0
3
0
3
false
false
[]
https://github.com/huggingface/datasets/pull/146
true
[]
619,480,549
145
[AWS Tests] Follow-up PR from #144
closed
I forgot to add this line in PR #144.
true
2020-05-16T13:53:46Z
2020-05-16T13:54:23Z
2020-05-16T13:54:22Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/145
2020-05-16T13:54:22Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/145
true
[]
619,477,367
144
[AWS tests] AWS test should not run for canonical datasets
closed
AWS tests should in general not run for canonical datasets. Only local tests will run in this case. This way a PR is able to pass when adding a new dataset.

This PR changes the logic to the following:

1) All datasets that are present in `nlp/datasets` are tested only locally. This way when one adds a canonical dataset, the PR includes his dataset in the tests.

2) All datasets that are only present on AWS, such as `webis/tl_dr` atm, are tested only on AWS.

I think the testing structure might need a bigger refactoring and better documentation very soon. Merging for now to unblock new PRs @thomwolf @mariamabarham.
true
2020-05-16T13:39:30Z
2020-05-16T13:44:34Z
2020-05-16T13:44:33Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/144
2020-05-16T13:44:33Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/144
true
[]
619,457,641
143
ArrowTypeError in squad metrics
closed
`squad_metric.compute` is giving the following error:

```
ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```

This is how my predictions and references look:

```python
predictions[0]
# {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}
```

```python
references[0]
# {'answers': [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'}
```

These are structured as per the `squad_metric.compute` help string.
true
2020-05-16T12:06:37Z
2020-05-22T13:38:52Z
2020-05-22T13:36:48Z
patil-suraj
CONTRIBUTOR
null
null
1
0
0
0
null
false
[ "metric bug" ]
https://github.com/huggingface/datasets/issues/143
false
[ "There was an issue in the format, thanks.\r\nNow you can do\r\n```python3\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take ...
619,450,068
142
[WMT] Add all wmt
closed
This PR adds all wmt dataset scripts. At the moment the script is **not** functional for the language pairs "cs-en", "ru-en", "hi-en" because apparently it takes up to a week to get the manual data for these datasets: see http://ufal.mff.cuni.cz/czeng. The datasets are fully functional though for the "big" language pairs "de-en" and "fr-en".

Overall I think the scripts are very messy and might need a big refactoring at some point. For now I think they are good to merge (most dataset configs can be used). I will add "cs", "ru" and "hi" when the manual data is available.
true
2020-05-16T11:28:46Z
2020-05-17T12:18:21Z
2020-05-17T12:18:20Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/142
2020-05-17T12:18:20Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/142
true
[]
619,447,090
141
[Clean up] remove bogus folder
closed
@mariamabarham - I think you accidentally placed it there.
true
2020-05-16T11:13:42Z
2020-05-16T13:24:27Z
2020-05-16T13:24:26Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/141
2020-05-16T13:24:25Z
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/141
true
[ "Same for the dataset_infos.json at the project root no ?", "Sorry guys, I haven't noticed. Thank you for mentioning it." ]
619,443,613
140
[Tests] run local tests as default
closed
This PR also enables local tests by default. I think it's safer for now to enable both local and AWS tests for every commit. The problem currently is that when we do a PR to add a dataset, the dataset is not yet on AWS and therefore not tested on the PR itself. Thus the PR will always be green even if the datasets are not correct. This PR aims at fixing this.

## Suggestion on how to commit to the repo from now on:

Now since the repo is "online", I think we should adopt a couple of best practices:

1) No direct committing to the repo anymore. Every change should be opened in a PR and be well documented so that we can find it later.

2) Every PR has to be reviewed by at least x people (I guess @thomwolf you should decide here) because we now have to be much more careful when doing changes to the API for backward compatibility, etc...
true
2020-05-16T10:56:06Z
2020-05-16T13:21:44Z
2020-05-16T13:21:43Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/140
2020-05-16T13:21:43Z
2
1
1
0
false
false
[]
https://github.com/huggingface/datasets/pull/140
true
[ "You are right and I think those are usual best practice :) I'm 100% fine with this^^", "Merging this for now to unblock other PRs." ]
619,327,409
139
Add GermEval 2014 NER dataset
closed
Hi, this PR adds the GermEval 2014 NER dataset 😃

> The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation [1] with the following properties:
> - The data was sampled from German Wikipedia and News Corpora as a collection of citations.
> - The dataset covers over 31,000 sentences corresponding to over 590,000 tokens.
> - The NER annotation uses the NoSta-D guidelines, which extend the Tübingen Treebank guidelines, using four main NER categories with sub-structure, and annotating embeddings among NEs such as [ORG FC Kickers [LOC Darmstadt]].

Dataset will be downloaded from the [official GermEval 2014 website](https://sites.google.com/site/germeval2014ner/data).

## Dataset format

Here's an example of the dataset format from the original dataset:

```tsv
# http://de.wikipedia.org/wiki/Manfred_Korfmann [2009-10-17]
1 Aufgrund O O
2 seiner O O
3 Initiative O O
4 fand O O
5 2001/2002 O O
6 in O O
7 Stuttgart B-LOC O
8 , O O
9 Braunschweig B-LOC O
10 und O O
11 Bonn B-LOC O
12 eine O O
13 große O O
14 und O O
15 publizistisch O O
16 vielbeachtete O O
17 Troia-Ausstellung B-LOCpart O
18 statt O O
19 , O O
20 „ O O
21 Troia B-OTH B-LOC
22 - I-OTH O
23 Traum I-OTH O
24 und I-OTH O
25 Wirklichkeit I-OTH O
26 “ O O
27 . O O
```

The sentence is encoded as one token per line (tab-separated columns). The first column contains either a `#`, which signals the source the sentence is cited from and the date it was retrieved, or the token number within the sentence. The second column contains the token. Columns three and four contain the named entity (in IOB2 scheme). Outer spans are encoded in the third column, embedded/nested spans in the fourth column.

## Features

I decided to keep most information from the dataset. That means the so-called "source" information (where the sentences come from + date information) is also returned for each sentence in the feature vector.

For each sentence in the dataset, one feature vector (`nlp.Features` definition) will be returned:

| Feature | Example | Description |
| ---- | ---- | ----------------- |
| `id` | `0` | Number (id) of current sentence |
| `source` | `http://de.wikipedia.org/wiki/Manfred_Korfmann [2009-10-17]` | URL and retrieval date as string |
| `tokens` | `["Schwartau", "sagte", ":"]` | List of tokens (strings) for a sentence |
| `labels` | `["B-PER", "O", "O"]` | List of labels (outer span) |
| `nested-labels` | `["O", "O", "O"]` | List of labels for nested span |

## Example

The following command downloads the dataset from the official GermEval 2014 page and pre-processes it:

```bash
python nlp-cli test datasets/germeval_14 --all_configs
```

It then outputs the numbers for the training, development and test set. The training set consists of 24,000 sentences, the development set of 2,200 and the test set of 5,100 sentences.

Now it can be imported and used with `nlp`:

```python
import nlp

germeval = nlp.load_dataset("./datasets/germeval_14/germeval_14.py")

assert len(germeval["train"]) == 24000

# Show first sentence of training set:
germeval["train"][0]
```
true
2020-05-15T23:42:09Z
2020-05-16T13:56:37Z
2020-05-16T13:56:22Z
stefan-it
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/139
2020-05-16T13:56:22Z
4
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/139
true
[ "Had really fun playing around with this new library :heart: ", "That's awesome - thanks @stefan-it :-) \r\n\r\nCould you maybe rebase to master and check if all dummy data tests are fine. I should have included the local tests directly in the test suite so that all PRs are fully checked: #140 - sorry :D ", "@p...
619,225,191
138
Consider renaming to nld
closed
Hey :)

Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.

The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This means the package makes `nlp` a bad variable name everywhere in the codebase. I've always used `nlp` as the canonical variable name of spaCy's `Language` objects, and this is a convention that a lot of other code has followed (Stanza, flair, etc). And actually, your `transformers` library uses `nlp` as the name for its `Pipeline` instance in your readme.

If you stick with the `nlp` name for this package, if anyone uses it then they should rewrite all of that code. If `nlp` is a bad choice of variable anywhere, it's a bad choice of variable everywhere --- because you shouldn't have to notice whether some other function uses a module when you're naming variables within a function. You want to have one convention that you can stick to everywhere.

If people use your `nlp` package and continue to use the `nlp` variable name, they'll find themselves with confusing bugs. There will be many many bits of code cut-and-paste from tutorials that give confusing results when combined with the data loading from the `nlp` library. The problem will be especially bad for shadowed modules (people might reasonably have a module named `nlp.py` within their codebase) and notebooks, as people might run notebook cells for data loading out-of-order.

I don't think it's an exaggeration to say that if your library becomes popular, we'll all be answering issues around this about once a week for the next few years. That seems pretty unideal, so I do hope you'll reconsider.

I suggest `nld` as a better name. It more accurately represents what the package actually does. It's pretty unideal to have a package named `nlp` that doesn't do any processing, and contains data about natural language generation or other non-NLP tasks. The name is equally short, and is sort of a visual pun on `nlp`, since a d is a rotated p.
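The shadowing failure mode is easy to demonstrate with any module name; here is a sketch using a stand-in module (`json` rather than `nlp`, so it runs anywhere):

```python
import json

# An ordinary-looking variable assignment...
json = '{"a": 1}'

# ...now shadows the module, so later module-style calls blow up:
try:
    json.loads('{"a": 1}')
except AttributeError as exc:
    shadowing_error = exc  # 'str' object has no attribute 'loads'
```

The same thing happens with `import nlp` followed by `nlp = spacy.load(...)` anywhere in the same namespace.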
true
2020-05-15T20:23:27Z
2022-09-16T05:18:22Z
2020-09-28T00:08:10Z
honnibal
NONE
null
null
13
33
33
0
null
false
[ "generic discussion" ]
https://github.com/huggingface/datasets/issues/138
false
[ "I would suggest `nlds`. NLP is a very general, broad and ambiguous term, the library is not about NLP (as in processing) per se, it is about accessing Natural Language related datasets. So the name should reflect its purpose.\r\n", "Chiming in to second everything @honnibal said, and to add that I think the curr...
619,211,018
136
Update README.md
closed
small typo
true
2020-05-15T20:01:07Z
2020-05-17T12:17:28Z
2020-05-17T12:17:28Z
renaud
NONE
https://github.com/huggingface/datasets/pull/136
null
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/136
true
[ "Thanks, this was fixed with #135 :)" ]
619,206,708
135
Fix print statement in READ.md
closed
The print statement was showing a generator object instead of printing the names of the available datasets/metrics.
true
2020-05-15T19:52:23Z
2020-05-17T12:14:06Z
2020-05-17T12:14:05Z
codehunk628
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/135
2020-05-17T12:14:05Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/135
true
[ "Indeed, thanks!" ]
619,112,641
134
Update README.md
closed
true
2020-05-15T16:56:14Z
2020-05-28T08:21:49Z
2020-05-28T08:21:49Z
pranv
NONE
https://github.com/huggingface/datasets/pull/134
null
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/134
true
[ "the readme got removed, closing this one" ]
619,094,954
133
[Question] Using/adding a local dataset
closed
Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets. It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this. A notebook/example script demonstrating this would be very helpful.
true
2020-05-15T16:26:06Z
2020-07-23T16:44:09Z
2020-07-23T16:44:09Z
zphang
NONE
null
null
5
6
6
0
null
false
[]
https://github.com/huggingface/datasets/issues/133
false
[ "Hi @zphang,\r\n\r\nSo you can just give the local path to a dataset script file and it should work.\r\n\r\nHere is an example:\r\n- you can download one of the scripts in the `datasets` folder of the present repo (or clone the repo)\r\n- then you can load it with `load_dataset('PATH/TO/YOUR/LOCAL/SCRIPT.py')`\r\n\...
619,077,851
132
[Feature Request] Add the OpenWebText dataset
closed
The OpenWebText dataset is an open clone of OpenAI's WebText dataset. It can be used to train ELECTRA as is specified in the [README](https://www.github.com/google-research/electra). More information and the download link are available [here](https://skylion007.github.io/OpenWebTextCorpus/).
true
2020-05-15T15:57:29Z
2020-10-07T14:22:48Z
2020-10-07T14:22:48Z
LysandreJik
MEMBER
null
null
2
2
2
0
null
false
[ "dataset request" ]
https://github.com/huggingface/datasets/issues/132
false
[ "We're experimenting with hosting the OpenWebText corpus on Zenodo for easier downloading. https://zenodo.org/record/3834942#.Xs1w8i-z2J8", "Closing since it's been added in #660 " ]
619,073,731
131
[Feature request] Add Toronto BookCorpus dataset
closed
I know the copyright/distribution of this one is complex, but it would be great to have! That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT.
true
2020-05-15T15:50:44Z
2020-06-28T21:27:31Z
2020-06-28T21:27:31Z
jarednielsen
CONTRIBUTOR
null
null
2
1
1
0
null
false
[ "dataset request" ]
https://github.com/huggingface/datasets/issues/131
false
[ "As far as I understand, `wikitext` is refer to `WikiText-103` and `WikiText-2` that created by researchers in Salesforce, and mostly used in traditional language modeling.\r\n\r\nYou might want to say `wikipedia`, a dump from wikimedia foundation.\r\n\r\nAlso I would like to have Toronto BookCorpus too ! Though it...
619,035,440
130
Loading GLUE dataset loads CoLA by default
closed
If I run:

```python
dataset = nlp.load_dataset('glue')
```

the resultant dataset seems to be CoLA by default, without throwing any error. This is in contrast to calling:

```python
metric = nlp.load_metric("glue")
```

which throws an error telling the user that they need to specify a task in GLUE. Should the same apply for loading datasets?
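The behaviour the metric loader already has (and that this issue asks the dataset loader to mirror) can be sketched like this; the config list below is an illustrative subset, not the real registry:

```python
# Sketch of the requested behaviour: refuse to pick a config silently
# when several are available. The registry below is illustrative only.
GLUE_CONFIGS = ["cola", "sst2", "mrpc", "qqp"]

def load_glue(name=None):
    if name is None:
        raise ValueError(
            "Config name is missing. Please pick one among: %s" % GLUE_CONFIGS
        )
    if name not in GLUE_CONFIGS:
        raise ValueError("Unknown config %r" % name)
    return name  # stand-in for the actual dataset object
```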
true
2020-05-15T14:55:50Z
2020-05-27T22:08:15Z
2020-05-27T22:08:15Z
zphang
NONE
null
null
3
0
0
0
null
false
[ "dataset bug" ]
https://github.com/huggingface/datasets/issues/130
false
[ "As a follow-up to this: It looks like the actual GLUE task name is supplied as the `name` argument. Is there a way to check what `name`s/sub-datasets are available under a grouping like GLUE? That information doesn't seem to be readily available in info from `nlp.list_datasets()`.\r\n\r\nEdit: I found the info und...
618,997,725
129
[Feature request] Add Google Natural Question dataset
closed
Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD.
true
2020-05-15T14:14:20Z
2020-07-23T13:21:29Z
2020-07-23T13:21:29Z
elyase
NONE
null
null
7
1
1
0
null
false
[ "dataset request" ]
https://github.com/huggingface/datasets/issues/129
false
[ "Indeed, I think this one is almost ready cc @lhoestq ", "I'm doing the latest adjustments to make the processing of the dataset run on Dataflow", "Is there an update to this? It will be very beneficial for the QA community!", "Still work in progress :)\r\nThe idea is to have the dataset already processed som...
618,951,117
128
Some error inside nlp.load_dataset()
closed
First of all, nice work!

I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb)

In the simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')` I get an error, which is connected with some inner code, I think:

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-8-d848d3a99b8c> in <module>()
      1 # Downloading and loading a dataset
      2 
----> 3 dataset = nlp.load_dataset('squad', split='validation[:10%]')

8 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
    515         download_mode=download_mode,
    516         ignore_verifications=ignore_verifications,
--> 517         save_infos=save_infos,
    518     )
    519 

/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
    361         verify_infos = not save_infos and not ignore_verifications
    362         self._download_and_prepare(
--> 363             dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    364         )
    365         # Sync info

/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    414         try:
    415             # Prepare split will record examples associated to the split
--> 416             self._prepare_split(split_generator, **prepare_split_kwargs)
    417         except OSError:
    418             raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or ""))

/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator)
    585         fname = "{}-{}.arrow".format(self.name, split_generator.name)
    586         fpath = os.path.join(self._cache_dir, fname)
--> 587         examples_type = self.info.features.type
    588         writer = ArrowWriter(data_type=examples_type, path=fpath, writer_batch_size=self._writer_batch_size)
    589 

/usr/local/lib/python3.6/dist-packages/nlp/features.py in type(self)
    460     @property
    461     def type(self):
--> 462         return get_nested_type(self)
    463 
    464     @classmethod

/usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema)
    370     # Nested structures: we allow dict, list/tuples, sequences
    371     if isinstance(schema, dict):
--> 372         return pa.struct({key: get_nested_type(value) for key, value in schema.items()})
    373     elif isinstance(schema, (list, tuple)):
    374         assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type"

/usr/local/lib/python3.6/dist-packages/nlp/features.py in <dictcomp>(.0)
    370     # Nested structures: we allow dict, list/tuples, sequences
    371     if isinstance(schema, dict):
--> 372         return pa.struct({key: get_nested_type(value) for key, value in schema.items()})
    373     elif isinstance(schema, (list, tuple)):
    374         assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type"

/usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema)
    379     # We allow to reverse list of dict => dict of list for compatiblity with tfds
    380     if isinstance(inner_type, pa.StructType):
--> 381         return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type))
    382     return pa.list_(inner_type, schema.length)
    383 

/usr/local/lib/python3.6/dist-packages/nlp/features.py in <genexpr>(.0)
    379     # We allow to reverse list of dict => dict of list for compatiblity with tfds
    380     if isinstance(inner_type, pa.StructType):
--> 381         return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type))
    382     return pa.list_(inner_type, schema.length)
    383 

TypeError: list_() takes exactly one argument (2 given)
```
true
2020-05-15T13:01:29Z
2020-05-15T13:10:40Z
2020-05-15T13:10:40Z
polkaYK
NONE
null
null
2
1
1
0
null
false
[]
https://github.com/huggingface/datasets/issues/128
false
[ "Google colab has an old version of Apache Arrow built-in.\r\nBe sure you execute the \"pip install\" cell and restart the notebook environment if the colab asks for it.", "Thanks for reply, worked fine!\r\n" ]
618,909,042
127
Update Overview.ipynb
closed
update notebook
true
2020-05-15T11:46:48Z
2020-05-15T11:47:27Z
2020-05-15T11:47:25Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/127
2020-05-15T11:47:25Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/127
true
[]
618,897,499
126
remove webis
closed
Remove webis from dataset folder. Our first dataset script that only lives on AWS :-) https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/datasets/webis/tl_dr/?region=us-east-1 @julien-c @jplu
true
2020-05-15T11:25:20Z
2020-05-15T11:31:24Z
2020-05-15T11:30:26Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/126
2020-05-15T11:30:26Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/126
true
[]
618,869,048
125
[Newsroom] add newsroom
closed
I checked it with the data link of the mail you forwarded @thomwolf => works well!
true
2020-05-15T10:34:34Z
2020-05-15T10:37:07Z
2020-05-15T10:37:02Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/125
2020-05-15T10:37:02Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/125
true
[]
618,864,284
124
Xsum, require manual download of some files
closed
true
2020-05-15T10:26:13Z
2020-05-15T11:04:48Z
2020-05-15T11:04:46Z
mariamabarham
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/124
2020-05-15T11:04:46Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/124
true
[]
618,820,140
123
[Tests] Local => aws
closed
## Change default Test from local => aws

As a default we set `aws=True`, `Local=False`, `slow=False`.

### 1. RUN_AWS=1 (default)

This runs 4 tests per dataset script.

a) Does the dataset script have a valid etag / Can it be reached on AWS?
b) Can we load its `builder_class`?
c) Can we load **all** dataset configs?
d) _Most importantly_: Can we load the dataset?

Important - we currently only test the first config of each dataset to reduce test time. Total test time is around 1min20s.

### 2. RUN_LOCAL=1 RUN_AWS=0

***This should be done when debugging dataset scripts of the ./datasets folder***

This only runs 1 test per dataset test, which is equivalent to aws d) - Can we load the dataset from the local `datasets` directory?

### 3. RUN_SLOW=1

We should set up to run these tests maybe 1 time per week? @thomwolf

The `slow` tests include two more important tests.

e) Can we load the dataset with all possible configs? This test will probably fail at the moment because a lot of dummy data is missing. We should add the dummy data step by step to be sure that all configs work.

f) Test that the actual dataset can be loaded. This will take quite some time to run, but is important to make sure that the "real" data can be loaded. It will also test whether the dataset script has the correct checksums file, which is currently not tested with `aws=True`.

@lhoestq - is there an easy way to check cheaply whether the `dataset_info.json` is correct for each dataset script?
true
2020-05-15T09:12:25Z
2020-05-15T10:06:12Z
2020-05-15T10:03:26Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/123
2020-05-15T10:03:26Z
3
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/123
true
[ "For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are corr...
618,813,182
122
Final cleanup of readme and metrics
closed
true
2020-05-15T09:00:52Z
2021-09-03T19:40:09Z
2020-05-15T09:02:22Z
thomwolf
MEMBER
https://github.com/huggingface/datasets/pull/122
2020-05-15T09:02:22Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/122
true
[]
618,790,040
121
make style
closed
true
2020-05-15T08:23:36Z
2020-05-15T08:25:39Z
2020-05-15T08:25:38Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/121
2020-05-15T08:25:38Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/121
true
[]
618,737,783
120
🐛 `map` not working
closed
I'm trying to run a basic example (mapping function to add a prefix). [Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing)

```python
import nlp

dataset = nlp.load_dataset('squad', split='validation[:10%]')

def test(sample):
    sample['title'] = "test prefix @@@ " + sample["title"]
    return sample

print(dataset[0]['title'])
dataset.map(test)
print(dataset[0]['title'])
```

Output:

> Super_Bowl_50
> Super_Bowl_50

Expected output:

> Super_Bowl_50
> test prefix @@@ Super_Bowl_50
true
2020-05-15T06:43:08Z
2020-05-15T07:02:38Z
2020-05-15T07:02:38Z
astariul
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/120
false
[ "I didn't assign the output 🤦‍♂️\r\n\r\n```python\r\ndataset.map(test)\r\n```\r\n\r\nshould be :\r\n\r\n```python\r\ndataset = dataset.map(test)\r\n```" ]
618,652,145
119
🐛 Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
closed
I'm trying to load the CNN/DM dataset on Colab. [Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing)

But I meet this error:

> AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
true
2020-05-15T02:27:26Z
2020-05-15T05:11:22Z
2020-05-15T02:45:28Z
astariul
NONE
null
null
2
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/119
false
[ "It's strange, after installing `nlp` on Colab, the `pyarrow` version seems fine from `pip` but not from python :\r\n\r\n```python\r\nimport pyarrow\r\n\r\n!pip show pyarrow\r\nprint(\"version = {}\".format(pyarrow.__version__))\r\n```\r\n\r\n> Name: pyarrow\r\nVersion: 0.17.0\r\nSummary: Python library for Apache ...
618,643,088
118
❓ How to apply a map to all subsets ?
closed
I'm working with the CNN/DM dataset, where I have 3 subsets: `train`, `test`, `validation`.

Should I apply my map function on the subsets one by one?

```python
import nlp

cnn_dm = nlp.load_dataset('cnn_dailymail')
for corpus in ['train', 'test', 'validation']:
    cnn_dm[corpus] = cnn_dm[corpus].map(my_func)
```

Or is there a better way to do this?
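A dict comprehension keeps it a one-liner over all splits. A minimal sketch with the dataset dict replaced by plain lists (`.map` swapped for a list comprehension, since only the iteration pattern matters here):

```python
def my_func(example):
    example = dict(example)
    example["text"] = example["text"].lower()
    return example

# Stand-in for the dict of splits returned by nlp.load_dataset(...)
cnn_dm = {split: [{"text": "HELLO"}] for split in ("train", "test", "validation")}

# Apply the function to every split in one expression:
cnn_dm = {split: [my_func(ex) for ex in subset] for split, subset in cnn_dm.items()}
```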
true
2020-05-15T01:58:52Z
2020-05-15T07:05:49Z
2020-05-15T07:04:25Z
astariul
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/118
false
[ "That's the way!" ]
618,632,573
117
❓ How to remove specific rows of a dataset ?
closed
I saw on the [example notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb#scrollTo=efFhDWhlvSVC) how to remove a specific column:

```python
dataset.drop('id')
```

But I didn't find how to remove a specific row.

**For example, how can I remove all samples with `id` < 10?**
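In case it helps frame the request: the effect I'm after is an index-based filter, sketched here in plain Python (the dataset replaced by a list of rows):

```python
# Stand-in for the dataset: rows with an "id" field.
rows = [{"id": i} for i in range(20)]

# Keep only the indices whose row satisfies the predicate...
keep = [i for i, row in enumerate(rows) if row["id"] >= 10]

# ...then materialize the filtered view by integer indexing.
filtered = [rows[i] for i in keep]
```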
true
2020-05-15T01:25:06Z
2022-07-15T08:36:44Z
2020-05-15T07:04:32Z
astariul
NONE
null
null
4
1
1
0
null
false
[]
https://github.com/huggingface/datasets/issues/117
false
[ "Hi, you can't do that at the moment.", "Can you do it by now? Coz it would be awfully helpful!", "you can convert dataset object to pandas and remove a feature and convert back to dataset .", "That's what I ended up doing too. but it feels like a workaround to a feature that should be added to the datasets c...
618,628,264
116
🐛 Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323
closed
I'm trying to use the rouge metric. I have two files, `test.pred.tokenized` and `test.gold.tokenized`, each containing one sentence per line. I tried: ```python import nlp rouge = nlp.load_metric('rouge') with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g: for lp, lg in zip(p, g): rouge.add(lp, lg) ``` But I get the following error: > pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 --- Full stack trace: ``` Traceback (most recent call last): File "<stdin>", line 3, in <module> File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/metric.py", line 224, in add self.writer.write_batch(batch) File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/arrow_writer.py", line 148, in write_batch pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema) File "pyarrow/table.pxi", line 1550, in pyarrow.lib.Table.from_pydict File "pyarrow/table.pxi", line 1503, in pyarrow.lib.Table.from_arrays File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 ``` (`nlp` installed from source)
true
2020-05-15T01:12:06Z
2020-05-28T23:43:07Z
2020-05-28T23:43:07Z
astariul
NONE
null
null
5
0
0
0
null
false
[ "metric bug" ]
https://github.com/huggingface/datasets/issues/116
false
[ "Can you share your data files or a minimally reproducible example?", "Sure, [here is a Colab notebook](https://colab.research.google.com/drive/1uiS89fnHMG7HV_cYxp3r-_LqJQvNNKs9?usp=sharing) reproducing the error.\r\n\r\n> ArrowInvalid: Column 1 named references expected length 36 but got length 56", "This is b...
618,615,855
115
AttributeError: 'dict' object has no attribute 'info'
closed
I'm trying to access the information of the CNN/DM dataset: ```python cnn_dm = nlp.load_dataset('cnn_dailymail') print(cnn_dm.info) ``` returns: > AttributeError: 'dict' object has no attribute 'info'
true
2020-05-15T00:29:47Z
2020-05-17T13:11:00Z
2020-05-17T13:11:00Z
astariul
NONE
null
null
2
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/115
false
[ "I could access the info by first accessing the different splits :\r\n\r\n```python\r\nimport nlp\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail')\r\nprint(cnn_dm['train'].info)\r\n```\r\n\r\nInformation seems to be duplicated between the subsets :\r\n\r\n```python\r\nprint(cnn_dm[\"train\"].info == cnn_dm[\"test\...
618,611,310
114
Couldn't reach CNN/DM dataset
closed
I can't get the CNN/DailyMail dataset. ```python import nlp assert "cnn_dailymail" in [dataset.id for dataset in nlp.list_datasets()] cnn_dm = nlp.load_dataset('cnn_dailymail') ``` [Colab notebook](https://colab.research.google.com/drive/1zQ3bYAVzm1h0mw0yWPqKAg_4EUlSx5Ex?usp=sharing) gives the following error: ``` ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/cnn_dailymail/cnn_dailymail.py ```
true
2020-05-15T00:16:17Z
2020-05-15T00:19:52Z
2020-05-15T00:19:51Z
astariul
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/114
false
[ "Installing from source (instead of Pypi package) solved the problem." ]
618,590,562
113
Adding docstrings and some doc
closed
Some doc
true
2020-05-14T23:14:41Z
2020-05-14T23:22:45Z
2020-05-14T23:22:44Z
thomwolf
MEMBER
https://github.com/huggingface/datasets/pull/113
2020-05-14T23:22:44Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/113
true
[]
618,569,195
112
Qa4mre - add dataset
closed
Added the dummy data test only for the first config. Will do the rest later. I had to add some minor hacks to an important function to make it work. There might be a cleaner way to handle it - can you take a look, @thomwolf?
true
2020-05-14T22:17:51Z
2020-05-15T09:16:43Z
2020-05-15T09:16:42Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/112
2020-05-15T09:16:42Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/112
true
[]
618,528,060
111
[Clean-up] remove under construction datastes
closed
true
2020-05-14T20:52:13Z
2020-05-14T20:52:23Z
2020-05-14T20:52:22Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/111
2020-05-14T20:52:22Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/111
true
[]
618,520,325
110
fix reddit tifu dummy data
closed
true
2020-05-14T20:37:37Z
2020-05-14T20:40:14Z
2020-05-14T20:40:13Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/110
2020-05-14T20:40:13Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/110
true
[]
618,508,359
109
[Reclor] fix reclor
closed
- That's probably on me. I could have made the manual data test more flexible. @mariamabarham
true
2020-05-14T20:16:26Z
2020-05-14T20:19:09Z
2020-05-14T20:19:08Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/109
2020-05-14T20:19:08Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/109
true
[]
618,386,394
108
convert can use manual dir as second argument
closed
@mariamabarham
true
2020-05-14T16:52:32Z
2020-05-14T16:52:43Z
2020-05-14T16:52:42Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/108
2020-05-14T16:52:42Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/108
true
[]
618,373,045
107
add writer_batch_size to GeneratorBasedBuilder
closed
You can now specify `writer_batch_size` in the builder arguments or directly in `load_dataset`
true
2020-05-14T16:35:39Z
2020-05-14T16:50:30Z
2020-05-14T16:50:29Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/107
2020-05-14T16:50:29Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/107
true
[ "Awesome that's great!" ]
618,361,418
106
Add data dir test command
closed
true
2020-05-14T16:18:39Z
2020-05-14T16:49:11Z
2020-05-14T16:49:10Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/106
2020-05-14T16:49:10Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/106
true
[ "Nice - I think we can merge this. I will update the checksums for `wikihow` then as well" ]
618,345,191
105
[New structure on AWS] Adapt paths
closed
Some small changes so that we have the correct paths. @julien-c
true
2020-05-14T15:55:57Z
2020-05-14T15:56:28Z
2020-05-14T15:56:27Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/105
2020-05-14T15:56:27Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/105
true
[]
618,277,081
104
Add trivia_q
closed
Currently tested only for one config to pass tests. Needs to add more dummy data later.
true
2020-05-14T14:27:19Z
2020-07-12T05:34:20Z
2020-05-14T20:23:32Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/104
2020-05-14T20:23:32Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/104
true
[]
618,233,637
103
[Manual downloads] add logic proposal for manual downloads and add wikihow
closed
Wikihow is an example that requires manually downloading two files, as stated in https://github.com/mahnazkoupaee/WikiHow-Dataset. The user can then store these files under hard-coded names, `wikihowAll.csv` and `wikihowSep.csv` in this case, in a directory of their choice, e.g. `~/wikihow/manual_dir`. The dataset can then be loaded via: ```python import nlp nlp.load_dataset("wikihow", data_dir="~/wikihow/manual_dir") ``` I added/changed the logic so that there are explicit error messages when using manually downloaded files.
true
2020-05-14T13:30:36Z
2020-05-14T14:27:41Z
2020-05-14T14:27:40Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/103
2020-05-14T14:27:40Z
3
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/103
true
[ "> Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.\r\n> \r\n> The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.\r\n...
618,231,216
102
Run save infos
closed
I replaced the old checksum file with the new `dataset_infos.json` by running the script on almost all the datasets we have. The only one that is still running on my side is the cornell dialog
true
2020-05-14T13:27:26Z
2020-05-14T15:43:04Z
2020-05-14T15:43:03Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/102
2020-05-14T15:43:03Z
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/102
true
[ "Haha that cornell dialogue dataset - that ran for 3h on my computer as well. The `generate_examples` method in this script is one of the most inefficient code samples I've ever seen :D ", "Indeed it's been 3 hours already\r\n```73111 examples [3:07:48, 2.40 examples/s]```" ]
618,111,651
101
[Reddit] add reddit
closed
- Everything worked fine @mariamabarham. Made my computer nearly crash, but all seems to be working :-)
true
2020-05-14T10:25:02Z
2020-05-14T10:27:25Z
2020-05-14T10:27:24Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/101
2020-05-14T10:27:24Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/101
true
[]
618,081,602
100
Add per type scores in seqeval metric
closed
This PR add a bit more detail in the seqeval metric. Now the usage and output are: ```python import nlp met = nlp.load_metric('metrics/seqeval') references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] met.compute(predictions, references) #Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8} ``` It is also possible to compute scores for non IOB notations, POS tagging for example hasn't this kind of notation. Add `suffix` parameter: ```python import nlp met = nlp.load_metric('metrics/seqeval') references = [['O', 'O', 'O', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']] predictions = [['O', 'O', 'MISC', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']] met.compute(predictions, references, metrics_kwargs={"suffix": True}) #Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.9} ```
true
2020-05-14T09:37:52Z
2020-05-14T23:21:35Z
2020-05-14T23:21:34Z
jplu
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/100
2020-05-14T23:21:34Z
4
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/100
true
[ "LGTM :-) Some small suggestions to shorten the code a bit :-) ", "Can you put the kwargs as normal kwargs instead of a dict? (And add them to the kwargs description As well)", "@thom Is-it what you meant?", "Yes and there is a dynamically generated doc string in the metric script KWARGS DESCRIPTION" ]
618,026,700
99
[Cmrc 2018] fix cmrc2018
closed
true
2020-05-14T08:22:03Z
2020-05-14T08:49:42Z
2020-05-14T08:49:41Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/99
2020-05-14T08:49:41Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/99
true
[]
617,957,739
98
Webis tl-dr
closed
Add the Webis TL;DR dataset.
true
2020-05-14T06:22:18Z
2020-09-03T10:00:21Z
2020-05-14T20:54:16Z
jplu
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/98
2020-05-14T20:54:15Z
12
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/98
true
[ "Should that rather be in an organization scope, @thomwolf @patrickvonplaten ?", "> Should that rather be in an organization scope, @thomwolf @patrickvonplaten ?\r\n\r\nI'm a bit indifferent - both would be fine for me!", "@jplu - if creating the dummy_data is too tedious, I can do it as well :-) ", "There is...
617,809,431
97
[Csv] add tests for csv dataset script
closed
Adds dummy data tests for csv.
true
2020-05-13T23:06:11Z
2020-05-13T23:23:16Z
2020-05-13T23:23:15Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/97
2020-05-13T23:23:15Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/97
true
[ "@thomwolf - can you check and merge if ok? " ]
617,739,521
96
lm1b
closed
Add lm1b dataset.
true
2020-05-13T20:38:44Z
2020-05-14T14:13:30Z
2020-05-14T14:13:29Z
jplu
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/96
2020-05-14T14:13:29Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/96
true
[ "I might have a different version of `isort` than others. It seems like I'm always reordering the imports of others. But isn't really a problem..." ]
617,703,037
95
Replace checksums files by Dataset infos json
closed
### Better verifications when loading a dataset I replaced the `urls_checksums` directory that used to contain `checksums.txt` and `cached_sizes.txt` by a single file, `dataset_infos.json`. It's just a dict `config_name` -> `DatasetInfo`. It simplifies and improves how verifications of checksums and split sizes are done, as they're all stored in `DatasetInfo` (one per config). Also, already having access to `DatasetInfo` makes it possible to check disk space before running `download_and_prepare` for a given config. The dataset infos json file is user-readable; you can take a look at the squad one that I generated in this PR. ### Renaming According to these changes, I did some renaming: `save_checksums` -> `save_infos` `ignore_checksums` -> `ignore_verifications` For example, when you are creating a dataset you have to run ```nlp-cli test path/to/my/dataset --save_infos --all_configs``` instead of ```nlp-cli test path/to/my/dataset --save_checksums --all_configs``` ### And now, the fun part We'll have to rerun `nlp-cli test ... --save_infos --all_configs` for all the datasets ----------------- Feedback appreciated!
true
2020-05-13T19:36:16Z
2020-05-14T08:58:43Z
2020-05-14T08:58:42Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/95
2020-05-14T08:58:42Z
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/95
true
[ "Great! LGTM :-) ", "> Ok, really clean!\r\n> I like the logic (not a huge fan of using `_asdict_inner` but it makes sense).\r\n> I think it's a nice improvement!\r\n> \r\n> How should we update the files in the repo? Run a big job on a server or on somebody's computer who has most of the datasets already downloa...
617,571,340
94
Librispeech
closed
Add librispeech dataset and remove some useless content.
true
2020-05-13T16:04:14Z
2020-05-13T21:29:03Z
2020-05-13T21:29:02Z
jplu
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/94
2020-05-13T21:29:02Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/94
true
[ "@jplu - I changed this weird archieve - iter method to something simpler. It's only one file to download anyways so I don't see the point of using weird iter methods...It's a huge file though :D 30 million lines of text. Took me quite some time to download :D " ]
617,522,029
93
Cleanup notebooks and various fixes
closed
Fixes on dataset (more flexible) metrics (fix) and general clean ups
true
2020-05-13T14:58:58Z
2020-05-13T15:01:48Z
2020-05-13T15:01:47Z
thomwolf
MEMBER
https://github.com/huggingface/datasets/pull/93
2020-05-13T15:01:47Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/93
true
[]
617,341,505
92
[WIP] add wmt14
closed
WMT14 takes forever to download :-/ - WMT is the first dataset that uses an abstract class IMO, so I had to modify the `load_dataset_module` a bit.
true
2020-05-13T10:42:03Z
2020-05-16T11:17:38Z
2020-05-16T11:17:37Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/92
2020-05-16T11:17:37Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/92
true
[]
617,339,484
91
[Paracrawl] add paracrawl
closed
- Huge dataset - took ~1h to download - Also this PR reformats all dataset scripts and adds `datasets` to `make style`
true
2020-05-13T10:39:00Z
2020-05-13T10:40:15Z
2020-05-13T10:40:14Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/91
2020-05-13T10:40:14Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/91
true
[]
617,311,877
90
Add download gg drive
closed
We can now add datasets that download from Google Drive.
true
2020-05-13T09:56:02Z
2020-05-13T12:46:28Z
2020-05-13T10:05:31Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/90
2020-05-13T10:05:31Z
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/90
true
[ "awesome - so no manual downloaded needed here? ", "Yes exactly. It works like a standard download" ]
617,295,069
89
Add list and inspect methods - cleanup hf_api
closed
Add a bunch of methods to easily list and inspect the processing scripts uploaded on S3: ```python nlp.list_datasets() nlp.list_metrics() # Copy and prepare the scripts at `local_path` for easy inspection/modification. nlp.inspect_dataset(path, local_path) # Copy and prepare the scripts at `local_path` for easy inspection/modification. nlp.inspect_metric(path, local_path) ``` Also clean up the `HfAPI` to use `dataclasses` for a better user experience.
true
2020-05-13T09:30:15Z
2020-05-13T14:05:00Z
2020-05-13T09:33:10Z
thomwolf
MEMBER
https://github.com/huggingface/datasets/pull/89
2020-05-13T09:33:10Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/89
true
[]
617,284,664
88
Add wiki40b
closed
This one is a beam dataset that downloads files using tensorflow. I tested it on a small config and it works fine
true
2020-05-13T09:16:01Z
2020-05-13T12:31:55Z
2020-05-13T12:31:54Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/88
2020-05-13T12:31:54Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/88
true
[ "Looks good to me. I have not really looked too much into the Beam Datasets yet though - so I think you can merge whenever you think is good for Beam datasets :-) " ]
617,267,118
87
Add Flores
closed
Beautiful language for sure!
true
2020-05-13T08:51:29Z
2020-05-13T09:23:34Z
2020-05-13T09:23:33Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/87
2020-05-13T09:23:33Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/87
true
[]
617,260,972
86
[Load => load_dataset] change naming
closed
Rename leftovers @thomwolf
true
2020-05-13T08:43:00Z
2020-05-13T08:50:58Z
2020-05-13T08:50:57Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/86
2020-05-13T08:50:57Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/86
true
[]
617,253,428
85
Add boolq
closed
I just added the dummy data for this dataset. This one uses `tf.io.gfile.copy` to download the data, but I added support for custom downloads in the mock_download_manager. I also had to add a `tensorflow` dependency for tests.
true
2020-05-13T08:32:27Z
2020-05-13T09:09:39Z
2020-05-13T09:09:38Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/85
2020-05-13T09:09:38Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/85
true
[ "Awesome :-) Thanks for adding the function to the Mock DL Manager" ]
617,249,815
84
[TedHrLr] add left dummy data
closed
true
2020-05-13T08:27:20Z
2020-05-13T08:29:22Z
2020-05-13T08:29:21Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/84
2020-05-13T08:29:21Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/84
true
[]
616,863,601
83
New datasets
closed
true
2020-05-12T18:22:27Z
2020-05-12T18:22:47Z
2020-05-12T18:22:45Z
mariamabarham
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/83
2020-05-12T18:22:45Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/83
true
[]
616,805,194
82
[Datasets] add ted_hrlr
closed
@thomwolf - After looking at `xnli`, I think it's better to leave the translation features and add a `translation` key to make them work in our framework. The result looks like this: ![Screenshot from 2020-05-12 18-34-43](https://user-images.githubusercontent.com/23423619/81721933-ee1faf00-9480-11ea-9e95-d6557cbd0ce0.png) You can see that each split has a `translation` key whose value is an nlp.features.Translation object. That's a simple change. If it's ok for you, I will add dummy data for the other configs and treat the other translation scripts in the same way.
true
2020-05-12T16:46:50Z
2020-05-13T07:52:54Z
2020-05-13T07:52:53Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/82
2020-05-13T07:52:52Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/82
true
[]
616,793,010
81
add tests
closed
Tests for py_utils functions and for the BaseReader used to read from arrow and parquet. I also removed unused utils functions.
true
2020-05-12T16:28:19Z
2020-05-13T07:43:57Z
2020-05-13T07:43:56Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/81
2020-05-13T07:43:56Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/81
true
[]
616,786,803
80
Add nbytes + nexamples check
closed
### Save size and number of examples Now when you do `save_checksums`, it also creates `cached_sizes.txt` right next to the checksum file. This new file stores the byte sizes and the number of examples of each split that has been prepared and stored in the cache. Example: ``` # Cached sizes: <full_config_name> <num_bytes> <num_examples> hansards/house/1.0.0/test 22906629 122290 hansards/house/1.0.0/train 191459584 947969 hansards/senate/1.0.0/test 5711686 25553 hansards/senate/1.0.0/train 40324278 182135 ``` ### Check processing output If there is a `cached_sizes.txt`, then each time we run `download_and_prepare` it will make sure that the sizes match. You can set `ignore_checksums=True` if you don't want that to happen. ### Fill Dataset Info All the split infos and the checksums are now stored correctly in DatasetInfo after `download_and_prepare` ### Check space on disk before running `download_and_prepare` Check whether the available space is lower than the sum of the sizes of the files in `checksums.txt` and `cached_sizes.txt`. This is not ideal though, as it considers the files for all configs. TODO: A better way to do it would be to save the `DatasetInfo` instead of the `checksums.txt` and `cached_sizes.txt`, in order to have one file per dataset config (and therefore consider only the sizes of the files for one config and not all of them). It can also be the occasion to factorize all the `download_and_prepare` verifications. Maybe in the next PR?
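The cached sizes file described above is plain whitespace-delimited text; a stdlib-only sketch of parsing it (a hypothetical helper for illustration, not part of the library):

```python
# each data line is "<full_config_name> <num_bytes> <num_examples>";
# lines starting with "#" are comments
sample = """\
# Cached sizes: <full_config_name> <num_bytes> <num_examples>
hansards/house/1.0.0/test 22906629 122290
hansards/house/1.0.0/train 191459584 947969
"""

cached_sizes = {}
for line in sample.splitlines():
    if not line.strip() or line.startswith("#"):
        continue
    name, num_bytes, num_examples = line.split()
    cached_sizes[name] = (int(num_bytes), int(num_examples))
```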
true
2020-05-12T16:18:43Z
2020-05-13T07:52:34Z
2020-05-13T07:52:33Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/80
2020-05-13T07:52:33Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/80
true
[ "Looks good to me! Should we hard code those numbers in the config classes and make sure that when loading a dataset that the numbers match? " ]
616,785,613
79
[Convert] add new pattern
closed
true
2020-05-12T16:16:51Z
2020-05-12T16:17:10Z
2020-05-12T16:17:09Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/79
2020-05-12T16:17:09Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/79
true
[]
616,774,275
78
[Tests] skip beam dataset tests for now
closed
For now we will skip tests for Beam Datasets
true
2020-05-12T16:00:58Z
2020-05-12T16:16:24Z
2020-05-12T16:16:22Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/78
2020-05-12T16:16:22Z
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/78
true
[ "@lhoestq - I moved the wkipedia file to the \"correct\" folder. ", "Nice thanks !" ]
616,674,601
77
New datasets
closed
true
2020-05-12T13:51:59Z
2020-05-12T14:02:16Z
2020-05-12T14:02:15Z
mariamabarham
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/77
2020-05-12T14:02:15Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/77
true
[]
616,579,228
76
pin flake 8
closed
Flake8's new version does not like our formatting. Pinning the version for now.
true
2020-05-12T11:25:29Z
2020-05-12T11:27:35Z
2020-05-12T11:27:34Z
patrickvonplaten
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/76
2020-05-12T11:27:34Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/76
true
[]
616,520,163
75
WIP adding metrics
closed
Adding the following metrics as identified by @mariamabarham: 1. BLEU: BiLingual Evaluation Understudy: https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/bleu.py (multilingual) 2. GLEU: Google-BLEU: https://github.com/cnap/gec-ranking/blob/master/scripts/compute_gleu 3. Sacrebleu: https://pypi.org/project/sacrebleu/1.4.8/ (pypi package), https://github.com/mjpost/sacrebleu (github implementation) 4. ROUGE: Recall-Oriented Understudy for Gisting Evaluation: https://github.com/google-research/google-research/tree/master/rouge, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/rouge.py (multilingual) 5. Seqeval: https://github.com/chakki-works/seqeval (github implementation), https://pypi.org/project/seqeval/0.0.12/ (pypi package) 6. Coval: coreference evaluation package for the CoNLL and ARRAU datasets https://github.com/ns-moosavi/coval 7. SQuAD v1 evaluation script 8. SQuAD v2 evaluation script: https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/ 9. GLUE 10. XNLI Not now: 1. Perplexity: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/perplexity.py 2. Spearman: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/spearman_correlation.py 3. F1_measure: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/f1_measure.py 4. Pearson_correlation: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/pearson_correlation.py 5. AUC: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/auc.py 6. Entropy: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/entropy.py
true
2020-05-12T09:52:00Z
2020-05-13T07:44:12Z
2020-05-13T07:44:10Z
thomwolf
MEMBER
https://github.com/huggingface/datasets/pull/75
2020-05-13T07:44:10Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/75
true
[ "It's all about my metric stuff so I'll probably merge it unless you want to have a look.\r\n\r\nTook the occasion to remove the old doc and requirements.txt" ]