Schema of the records shown below (each record lists its fields one per line, in this order):

| Column | Type | Min | Max |
|---|---|---|---|
| id | int64 | 599M | 3.29B |
| url | string (length) | 58 | 61 |
| html_url | string (length) | 46 | 51 |
| number | int64 | 1 | 7.72k |
| title | string (length) | 1 | 290 |
| state | string (2 classes) | | |
| comments | int64 | 0 | 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-08-01 05:15:45 |
| user_login | string (length) | 3 | 26 |
| labels | list (length) | 0 | 4 |
| body | string (length) | 0 | 228k |
| is_pull_request | bool (2 classes) | | |
652,424,048
https://api.github.com/repos/huggingface/datasets/issues/351
https://github.com/huggingface/datasets/pull/351
351
add pandas dataset
closed
0
2020-07-07T15:38:07
2020-07-08T14:15:16
2020-07-08T14:15:15
lhoestq
[]
Create a dataset from serialized pandas dataframes. Usage: ```python from nlp import load_dataset dset = load_dataset("pandas", data_files="df.pkl")["train"] ```
true
652,398,691
https://api.github.com/repos/huggingface/datasets/issues/350
https://github.com/huggingface/datasets/pull/350
350
add from_pandas and from_dict
closed
0
2020-07-07T15:03:53
2020-07-08T14:14:33
2020-07-08T14:14:32
lhoestq
[]
I added two new methods to the `Dataset` class: - `from_pandas()` to create a dataset from a pandas dataframe - `from_dict()` to create a dataset from a dictionary (keys = columns) It uses the `pa.Table.from_pandas` and `pa.Table.from_pydict` functions to do so. It is also possible to specify the feature types via `features=...` if there are ambiguities (null/nan values), otherwise the arrow schema is inferred from the data automatically by pyarrow. One question that I have right now: + Should we also add a `save()` method that would write the dataset to disk? Right now if we create a `Dataset` using those two new methods, the data are kept in RAM. Then to reload it we can call the `from_file()` method.
true
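A minimal sketch of the two methods described in #350, written against the current `datasets` package (the successor of `nlp`); the column names and values here are illustrative assumptions:

```python
import pandas as pd
from datasets import Dataset, Features, Value

# from_pandas(): pyarrow infers the Arrow schema from the dataframe.
df = pd.DataFrame({"text": ["hello", "world"], "label": [0, 1]})
dset_from_df = Dataset.from_pandas(df)

# from_dict(): keys become columns; pass `features` explicitly to resolve
# ambiguities such as null/nan values.
features = Features({"text": Value("string"), "label": Value("int64")})
dset_from_dict = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]}, features=features)

print(dset_from_df.column_names, dset_from_dict.num_rows)
```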
652,231,571
https://api.github.com/repos/huggingface/datasets/issues/349
https://github.com/huggingface/datasets/pull/349
349
Hyperpartisan news detection
closed
2
2020-07-07T11:06:37
2020-07-07T20:47:27
2020-07-07T14:57:11
ghomasHudson
[]
Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether the articles are hyper-partisan and what kinds of biases they display. Implementation notes: - As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before use. I've used the manual download stuff for this, although the dataset is provided under a Creative Commons Attribution 4.0 International License, so we could host a version if we wanted to? - The 'bias' attribute doesn't exist for the 'byarticle' configuration. I've added an empty string to the class labels to deal with this. Is there a more standard value for empty data? - Should we always subclass `nlp.BuilderConfig`?
true
652,158,308
https://api.github.com/repos/huggingface/datasets/issues/348
https://github.com/huggingface/datasets/pull/348
348
Add OSCAR dataset
closed
20
2020-07-07T09:22:07
2021-05-03T22:07:08
2021-02-09T10:19:19
pjox
[]
I don't know if the tests pass; when I run them, they try to download the whole corpus, which is around 3.5TB compressed, and I don't have that kind of space. I'll really need some help with it 😅 Thanks!
true
652,106,567
https://api.github.com/repos/huggingface/datasets/issues/347
https://github.com/huggingface/datasets/issues/347
347
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
closed
10
2020-07-07T08:14:23
2020-09-07T14:51:45
2020-09-07T14:51:45
cosmeowpawlitan
[ "dataset bug" ]
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to a Python source-encoding issue: my PC is trying to decode the source code with the wrong codec, perhaps: https://www.python.org/dev/peps/pep-0263/ I guess the error was triggered by the line `module = importlib.import_module(module_path)` at line 57 of nlp/src/nlp/load.py (https://github.com/huggingface/nlp/blob/911d5596f9b500e39af8642fe3d1b891758999c7/src/nlp/load.py#L51) Any ideas? P.S. I tried the same code on Colab and it runs perfectly.
false
652,044,151
https://api.github.com/repos/huggingface/datasets/issues/346
https://github.com/huggingface/datasets/pull/346
346
Add emotion dataset
closed
9
2020-07-07T06:35:41
2022-05-30T15:16:44
2020-07-13T14:39:38
lewtun
[]
Hello 🤗 team! I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)). With the current implementation, running ```bash python nlp-cli test datasets/emotion --save_infos --all_configs ``` throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace). Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`. Any pointers on what I'm doing wrong would be greatly appreciated! **Stack trace** ``` INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports. INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0) INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0... INFO:nlp.builder:Generating split train 0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490 Traceback (most recent call last): File "nlp-cli", line 37, in <module> service.run() File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run builder.download_and_prepare( File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare self._download_and_prepare( File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__ for obj in iterable: File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples data = pickle.load(f) _pickle.UnpicklingError: invalid load key, '<'. ```
true
651,761,201
https://api.github.com/repos/huggingface/datasets/issues/345
https://github.com/huggingface/datasets/issues/345
345
Supporting documents in ELI5
closed
2
2020-07-06T19:14:13
2020-10-27T15:38:45
2020-10-27T15:38:45
saverymax
[]
I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to facebook, the entire document collection is quite large. However, it would still be helpful to at least include a subset of the supporting documents i.e., having some data is better than having a block of cheese, in my case at least. If you choose not to include them, it would be helpful to have documentation mentioning this specifically. It is especially confusing because the hf nlp ELI5 dataset has the key `'document'` but there are no documents to be found :(
false
651,495,246
https://api.github.com/repos/huggingface/datasets/issues/344
https://github.com/huggingface/datasets/pull/344
344
Search qa
closed
1
2020-07-06T12:23:16
2020-07-16T08:58:16
2020-07-16T08:58:16
mariamabarham
[]
This PR adds the Search QA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. The dataset has the following config names: - raw_jeopardy: raw data - train_test_val: the split version. #336
true
651,419,630
https://api.github.com/repos/huggingface/datasets/issues/343
https://github.com/huggingface/datasets/pull/343
343
Fix nested tensorflow format
closed
0
2020-07-06T10:13:45
2020-07-06T13:11:52
2020-07-06T13:11:51
lhoestq
[]
In #339 and #337 we are thinking about adding a way to export datasets to tfrecords. However I noticed that it was not possible to do `dset.set_format("tensorflow")` on datasets with nested features like `squad`. I fixed that using nested map operations to convert features to `tf.ragged.constant`. I also added tests for the `set_format` function.
true
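To illustrate the conversion mentioned in #343: nested, variable-length features cannot be turned into dense tensors directly, but `tf.ragged.constant` handles them. A small standalone sketch (the nested list is a made-up stand-in for a feature like squad's answers):

```python
import tensorflow as tf

# Variable-length nested lists become a RaggedTensor instead of failing
# on conversion to a dense tensor.
nested = [[1, 2, 3], [4], [5, 6]]
ragged = tf.ragged.constant(nested)
print(ragged.shape)  # (3, None)
```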
651,333,194
https://api.github.com/repos/huggingface/datasets/issues/342
https://github.com/huggingface/datasets/issues/342
342
Features should be updated when `map()` changes schema
closed
1
2020-07-06T08:03:23
2020-07-23T10:15:16
2020-07-23T10:15:16
thomwolf
[]
`dataset.map()` can change the schema and column names. We should update the features in this case (with what is possible to infer).
false
650,611,969
https://api.github.com/repos/huggingface/datasets/issues/341
https://github.com/huggingface/datasets/pull/341
341
add fever dataset
closed
0
2020-07-03T13:53:07
2020-07-06T13:03:48
2020-07-06T13:03:47
mariamabarham
[]
This PR adds the FEVER dataset (https://fever.ai/) used in the paper FEVER: a large-scale dataset for Fact Extraction and VERification (https://arxiv.org/pdf/1803.05355.pdf). #336
true
650,533,920
https://api.github.com/repos/huggingface/datasets/issues/340
https://github.com/huggingface/datasets/pull/340
340
Update cfq.py
closed
1
2020-07-03T11:23:19
2020-07-03T12:33:50
2020-07-03T12:33:50
brainshawn
[]
Make the dataset name consistent with the paper: Compositional Freebase Question => Compositional Freebase Questions.
true
650,156,468
https://api.github.com/repos/huggingface/datasets/issues/339
https://github.com/huggingface/datasets/pull/339
339
Add dataset.export() to TFRecords
closed
18
2020-07-02T19:26:27
2020-07-22T09:16:12
2020-07-22T09:16:12
jarednielsen
[]
Fixes https://github.com/huggingface/nlp/issues/337 Some design decisions: - Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting. - Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193. - Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know. - There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know. Also, I noticed that ```python dataset = dataset.select(indices) dataset.set_format("tensorflow") # dataset._format_type is "tensorflow" ``` gives a different output than ```python dataset.set_format("tensorflow") dataset = dataset.select(indices) # dataset._format_type is None ``` The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
true
650,057,253
https://api.github.com/repos/huggingface/datasets/issues/338
https://github.com/huggingface/datasets/pull/338
338
Run `make style`
closed
0
2020-07-02T16:19:47
2020-07-02T18:03:10
2020-07-02T18:03:10
jarednielsen
[]
These files get changed when I run `make style` on an unrelated PR. Upstreaming these changes so development on a different branch can be easier.
true
650,035,887
https://api.github.com/repos/huggingface/datasets/issues/337
https://github.com/huggingface/datasets/issues/337
337
[Feature request] Export Arrow dataset to TFRecords
closed
0
2020-07-02T15:47:12
2020-07-22T09:16:12
2020-07-22T09:16:12
jarednielsen
[]
The TFRecord generation process is error-prone and requires complex separate Python scripts to download and preprocess the data. I propose to combine the user-friendly features of `nlp` with the speed and efficiency of TFRecords. Sample API: ```python # use these existing methods ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train") ds = ds.map(lambda ex: tokenizer(ex)) ds.set_format("tensorflow", columns=["input_ids", "token_type_ids", "attention_mask"]) # then add this method ds.export(folder="/my/tfrecords", prefix="myrecord", num_shards=8, format="tfrecord") ``` which would create files like so: ```bash /my/tfrecords/myrecord_1.tfrecord /my/tfrecords/myrecord_2.tfrecord ... ``` I would be happy to contribute this method. We could use a similar approach for PyTorch. Thoughts?
false
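A hand-rolled sketch of the serialization that the proposed `ds.export()` in #337 would automate, assuming TensorFlow is installed; the output file name and the toy character-level "tokenizer" are placeholders:

```python
import tensorflow as tf
from datasets import load_dataset

def to_tf_example(ids):
    # Wrap a list of ints as a tf.train.Example with a single int64 feature.
    feature = {"input_ids": tf.train.Feature(int64_list=tf.train.Int64List(value=ids))}
    return tf.train.Example(features=tf.train.Features(feature=feature))

ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:100]")

with tf.io.TFRecordWriter("myrecord_1.tfrecord") as writer:
    for row in ds:
        ids = [ord(c) for c in row["text"][:32]]  # stand-in for a real tokenizer
        writer.write(to_tf_example(ids).SerializeToString())
```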
649,914,203
https://api.github.com/repos/huggingface/datasets/issues/336
https://github.com/huggingface/datasets/issues/336
336
[Dataset requests] New datasets for Open Question Answering
closed
0
2020-07-02T13:03:03
2020-07-16T09:04:22
2020-07-16T09:04:22
thomwolf
[ "help wanted", "dataset request" ]
We are still missing a few datasets for Open Question Answering, which is currently a field in strong development. Namely, it would be really nice to add: - WebQuestions (Berant et al., 2013) [done] - CuratedTrec (Baudis et al. 2015) [not open-source] - MS-MARCO (Nguyen et al. 2016) [done] - SearchQA (Dunn et al. 2017) [done] - FEVER (Thorne et al. 2018) [done] All these datasets are cited in http://arxiv.org/abs/2005.11401
false
649,765,179
https://api.github.com/repos/huggingface/datasets/issues/335
https://github.com/huggingface/datasets/pull/335
335
BioMRC Dataset presented in BioNLP 2020 ACL Workshop
closed
2
2020-07-02T09:03:41
2020-07-15T08:02:07
2020-07-15T08:02:07
PetrosStav
[]
true
649,661,791
https://api.github.com/repos/huggingface/datasets/issues/334
https://github.com/huggingface/datasets/pull/334
334
Add dataset.shard() method
closed
1
2020-07-02T06:05:19
2020-07-06T12:35:36
2020-07-06T12:35:36
jarednielsen
[]
Fixes https://github.com/huggingface/nlp/issues/312
true
649,236,516
https://api.github.com/repos/huggingface/datasets/issues/333
https://github.com/huggingface/datasets/pull/333
333
fix variable name typo
closed
2
2020-07-01T19:13:50
2020-07-24T15:43:31
2020-07-24T08:32:16
stas00
[]
true
649,140,135
https://api.github.com/repos/huggingface/datasets/issues/332
https://github.com/huggingface/datasets/pull/332
332
Add wiki_dpr
closed
2
2020-07-01T17:12:00
2020-07-06T12:21:17
2020-07-06T12:21:16
lhoestq
[]
Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists of 21M passages from the English Wikipedia along with their 768-dim embeddings computed using DPR's context encoder. Notes on the implementation: - There are two configs: with and without the embeddings (73GB vs 14GB) - I used a non-fixed-size sequence of floats to describe the feature format of the embeddings. I wanted to use fixed-size sequences but I had issues reading the arrow file afterwards (for example `dataset[0]` was crashing) - I added the case of lists of urls as input to the download_manager
true
648,533,199
https://api.github.com/repos/huggingface/datasets/issues/331
https://github.com/huggingface/datasets/issues/331
331
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError`
closed
5
2020-06-30T22:21:33
2020-07-09T13:03:40
2020-07-09T13:03:40
jxmorris12
[ "dataset bug" ]
``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py", line 520, in load_dataset builder_instance.download_and_prepare( File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 431, in download_and_prepare self._download_and_prepare( File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 488, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py", line 70, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}] ```
false
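A possible workaround for the split-size check reported in #331, using the `ignore_verifications` flag that appears in the `load_dataset` signature shown in the traceback (the downloaded data may genuinely be incomplete, so treat this as a diagnosis aid rather than a fix):

```python
import nlp

# Skips the recorded-vs-expected split size verification that raises
# NonMatchingSplitsSizesError above.
dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", ignore_verifications=True)
print(dataset["train"].num_rows)
```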
648,525,720
https://api.github.com/repos/huggingface/datasets/issues/330
https://github.com/huggingface/datasets/pull/330
330
Doc red
closed
0
2020-06-30T22:05:31
2020-07-06T12:10:39
2020-07-05T12:27:29
ghomasHudson
[]
Adding [DocRED](https://github.com/thunlp/DocRED) - a relation extraction dataset which tests document-level RE. A few implementation notes: - There are 2 separate versions of the training set - *annotated* and *distant*. Instead of `nlp.Split.Train` I've used the splits `"train_annotated"` and `"train_distant"` to reflect this. - As well as the relation id, the full relation name is mapped from `rel_info.json` - I renamed the 'h', 'r', 't' keys to 'head', 'relation' and 'tail' to make them more readable. - Used the fix from #319 to allow nested sequences of dicts.
true
648,446,979
https://api.github.com/repos/huggingface/datasets/issues/329
https://github.com/huggingface/datasets/issues/329
329
[Bug] FileLock dependency incompatible with filesystem
closed
11
2020-06-30T19:45:31
2024-12-26T15:13:39
2020-06-30T21:33:06
jarednielsen
[]
I'm downloading a dataset successfully with `load_dataset("wikitext", "wikitext-2-raw-v1")` But when I attempt to cache it on an external volume, it hangs indefinitely: `load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount` The filesystem when hanging looks like this: ```bash /fsx ----downloads ----94be...73.lock ----wikitext ----wikitext-2-raw ----wikitext-2-raw-1.0.0.incomplete ``` It appears that on this filesystem, the FileLock object is forever stuck in its "acquire" stage. I have verified that the issue lies specifically with the `filelock` dependency: ```python open("/fsx/hello.txt").write("hello") # succeeds from filelock import FileLock with FileLock("/fsx/hello.lock"): open("/fsx/hello.txt").write("hello") # hangs indefinitely ``` Has anyone else run into this issue? I'd raise it directly on the FileLock repo, but that project appears abandoned with the last update over a year ago. Or if there's a solution that would remove the FileLock dependency from the project, I would appreciate that.
false
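A small diagnostic sketch for the hang described in #329: `filelock` accepts a `timeout`, so the acquire can fail fast instead of blocking forever on a filesystem that does not support its locking scheme (the /fsx paths are taken from the report above):

```python
from filelock import FileLock, Timeout

try:
    # Give up after 10 seconds instead of hanging indefinitely.
    with FileLock("/fsx/hello.lock", timeout=10):
        with open("/fsx/hello.txt", "w") as f:
            f.write("hello")
except Timeout:
    print("Could not acquire /fsx/hello.lock within 10s; the filesystem "
          "probably does not support the locking primitive filelock relies on.")
```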
648,326,841
https://api.github.com/repos/huggingface/datasets/issues/328
https://github.com/huggingface/datasets/issues/328
328
Fork dataset
closed
5
2020-06-30T16:42:53
2020-07-06T21:43:59
2020-07-06T21:43:59
timothyjlaurent
[]
We have a multi-task learning model training setup that I'm trying to convert to use the Arrow-based nlp dataset. We're currently training a custom TensorFlow model, but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow parses raw text and json with Entity and Relation annotations and creates 2 datasets for training NER and Relation prediction heads. Is there a good way to "fork" a dataset? E.g. 1. text + json -> Dataset1 2. Dataset1 -> DatasetNER 3. Dataset1 -> DatasetREL or 1. text + json -> Dataset1 2. Dataset1 -> DatasetNER 3. Dataset1 + DatasetNER -> DatasetREL
false
648,312,858
https://api.github.com/repos/huggingface/datasets/issues/327
https://github.com/huggingface/datasets/pull/327
327
set seed for suffling tests
closed
0
2020-06-30T16:21:34
2020-07-02T08:34:05
2020-07-02T08:34:04
lhoestq
[]
Some tests were randomly failing because of a missing seed in a test for `train_test_split(shuffle=True)`
true
648,126,103
https://api.github.com/repos/huggingface/datasets/issues/326
https://github.com/huggingface/datasets/issues/326
326
Large dataset in Squad2-format
closed
8
2020-06-30T12:18:59
2020-07-09T09:01:50
2020-07-09T09:01:50
flozi00
[]
At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contexts: 1,047,671 - Questions: 1,677,732 - Answers: 6,742,406 - Unanswerable: 377,398 It is already cleaned. <pre><code> train_data = [ { 'context': "this is the context", 'qas': [ { 'id': "00002", 'is_impossible': False, 'question': "whats is this", 'answers': [ { 'text': "answer", 'answer_start': 0 } ] }, { 'id': "00003", 'is_impossible': False, 'question': "question2", 'answers': [ { 'text': "answer2", 'answer_start': 1 } ] } ] } ] </code></pre> Because it is growing every day, we are thinking about a structure like this: we host a JSON file containing all the download links, and the script can load it dynamically. At the moment it is around ~20GB. Any advice on how to handle this, or a ready-to-use template?
false
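One possible template for the "hosted JSON index of tiles" idea in #326, sketched as a `datasets` loading script; the index URL, feature layout, and class name are assumptions, not an existing dataset:

```python
import json
import datasets  # successor of the `nlp` package

_INDEX_URL = "https://example.com/qa_tiles_index.json"  # hypothetical index listing every tile URL

class LargeSquadLikeQA(datasets.GeneratorBasedBuilder):
    """SQuAD2-style QA dataset split into many tiles listed in a hosted JSON index."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({
                "id": datasets.Value("string"),
                "context": datasets.Value("string"),
                "question": datasets.Value("string"),
                "is_impossible": datasets.Value("bool"),
                "answers": datasets.features.Sequence({
                    "text": datasets.Value("string"),
                    "answer_start": datasets.Value("int32"),
                }),
            })
        )

    def _split_generators(self, dl_manager):
        index_path = dl_manager.download(_INDEX_URL)
        with open(index_path, encoding="utf-8") as f:
            tile_urls = json.load(f)  # the hosted JSON lists every tile
        tile_paths = dl_manager.download(tile_urls)
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"paths": tile_paths})]

    def _generate_examples(self, paths):
        for path in paths:
            with open(path, encoding="utf-8") as f:
                for article in json.load(f):
                    for qa in article["qas"]:
                        yield qa["id"], {
                            "id": qa["id"],
                            "context": article["context"],
                            "question": qa["question"],
                            "is_impossible": qa["is_impossible"],
                            "answers": {
                                "text": [a["text"] for a in qa.get("answers", [])],
                                "answer_start": [a["answer_start"] for a in qa.get("answers", [])],
                            },
                        }
```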
647,601,592
https://api.github.com/repos/huggingface/datasets/issues/325
https://github.com/huggingface/datasets/pull/325
325
Add SQuADShifts dataset
closed
1
2020-06-29T19:11:16
2020-06-30T17:07:31
2020-06-30T17:07:31
millerjohnp
[]
This PR adds the four new variants of the SQuAD dataset used in [The Effect of Natural Distribution Shift on Question Answering Models](https://arxiv.org/abs/2004.14444) to facilitate evaluating model robustness to distribution shift.
true
647,525,725
https://api.github.com/repos/huggingface/datasets/issues/324
https://github.com/huggingface/datasets/issues/324
324
Error when calculating glue score
closed
4
2020-06-29T16:53:48
2020-07-09T09:13:34
2020-07-09T09:13:34
D-i-l-r-u-k-s-h-i
[]
I was trying glue score along with other metrics here. But glue gives me this error; ``` import nlp glue_metric = nlp.load_metric('glue',name="cola") glue_score = glue_metric.compute(predictions, references) ``` ``` --------------------------------------------------------------------------- --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-b9210a524504> in <module>() ----> 1 glue_score = glue_metric.compute(predictions, references) 6 frames /usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs) 191 """ 192 if predictions is not None: --> 193 self.add_batch(predictions=predictions, references=references) 194 self.finalize(timeout=timeout) 195 /usr/local/lib/python3.6/dist-packages/nlp/metric.py in add_batch(self, predictions, references, **kwargs) 207 if self.writer is None: 208 self._init_writer() --> 209 self.writer.write_batch(batch) 210 211 def add(self, prediction=None, reference=None, **kwargs): /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 155 if self.pa_writer is None: 156 self._build_writer(pa_table=pa.Table.from_pydict(batch_examples)) --> 157 pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema) 158 if writer_batch_size is None: 159 writer_batch_size = self.writer_batch_size /usr/local/lib/python3.6/dist-packages/pyarrow/types.pxi in __iter__() /usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray() /usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.6/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() TypeError: an integer is required (got type str) ``` I'm not sure whether I'm doing this wrong or whether it's an issue. I would like to know a workaround. Thank you.
false
647,521,308
https://api.github.com/repos/huggingface/datasets/issues/323
https://github.com/huggingface/datasets/pull/323
323
Add package path to sys when downloading package as github archive
closed
2
2020-06-29T16:46:01
2020-07-30T14:00:23
2020-07-30T14:00:23
yjernite
[]
This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh) @thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importlib` but there might be a more elegant method. This PR fixes https://github.com/huggingface/nlp/issues/305
true
647,483,850
https://api.github.com/repos/huggingface/datasets/issues/322
https://github.com/huggingface/datasets/pull/322
322
output nested dict in get_nearest_examples
closed
0
2020-06-29T15:47:47
2020-07-02T08:33:33
2020-07-02T08:33:32
lhoestq
[]
As we are using a columnar format like arrow as the backend for datasets, we expect to have a dictionary of columns when we slice a dataset like in this example: ```python my_examples = dataset[0:10] print(type(my_examples)) # >>> dict print(my_examples["my_column"][0]) # >>> this is the first element of the column 'my_column' ``` Therefore I wanted to keep this logic when calling `get_nearest_examples` that returns the top 10 nearest examples: ```python dataset.add_faiss_index(column="embeddings") scores, examples = dataset.get_nearest_examples("embeddings", query=my_numpy_embedding) print(type(examples)) # >>> dict ``` Previously it was returning a list[dict]. It was the only place that was using this output format. To make it work I had to implement `__getitem__(key)` where `key` is a list. This is different from `.select` because `.select` is a dataset transform (it returns a new dataset object) while `__getitem__` is an extraction method (it returns python dictionaries).
true
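A self-contained sketch of the dict-of-columns behaviour described in #322 (requires `faiss` to be installed; the tiny random embeddings are only for illustration):

```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({
    "text": ["first passage", "second passage", "third passage"],
    "embeddings": [np.random.rand(8).astype("float32") for _ in range(3)],
})

ds.add_faiss_index(column="embeddings")
scores, examples = ds.get_nearest_examples("embeddings", np.random.rand(8).astype("float32"), k=2)

print(type(examples))    # dict of columns
print(examples["text"])  # texts of the 2 nearest examples
```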
647,271,526
https://api.github.com/repos/huggingface/datasets/issues/321
https://github.com/huggingface/datasets/issues/321
321
ERROR:root:mwparserfromhell
closed
10
2020-06-29T11:10:43
2022-02-14T15:21:46
2022-02-14T15:21:46
Shiro-LK
[ "dataset bug" ]
Hi, I am trying to download some wikipedia data but I got this error for Spanish "es" (there may be other languages with the same error; I haven't tried all of them). `ERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.` The code I used was: `dataset = load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')`
false
647,188,167
https://api.github.com/repos/huggingface/datasets/issues/320
https://github.com/huggingface/datasets/issues/320
320
Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer
closed
2
2020-06-29T07:36:35
2020-06-29T14:44:42
2020-06-29T14:44:42
mariamabarham
[ "nlp-viewer" ]
Selecting `blog_authorship_corpus` in the nlp viewer throws the following error: ``` NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}] Traceback: File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 172, in <module> dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None) File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func return get_or_create_cached_value() File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/home/sasha/nlp-viewer/run.py", line 132, in get builder_instance.download_and_prepare() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 488, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) ``` @srush @lhoestq
false
646,792,487
https://api.github.com/repos/huggingface/datasets/issues/319
https://github.com/huggingface/datasets/issues/319
319
Nested sequences with dicts
closed
1
2020-06-27T23:45:17
2020-07-03T10:22:00
2020-07-03T10:22:00
ghomasHudson
[]
Am pretty much finished [adding a dataset](https://github.com/ghomasHudson/nlp/blob/DocRED/datasets/docred/docred.py) for [DocRED](https://github.com/thunlp/DocRED), but am getting an error when trying to add a nested `nlp.features.sequence(nlp.features.sequence({key:value,...}))`. The original data is in this format: ```python { 'title': "Title of wiki page", 'vertexSet': [ [ { 'name': "mention_name", 'sent_id': "mention in which sentence", 'pos': ["postion of mention in a sentence"], 'type': "NER_type"}, {another mention} ], [another entity] ] ... } ``` So to represent this I've attempted to write: ``` ... features=nlp.Features({ "title": nlp.Value("string"), "vertexSet": nlp.features.Sequence(nlp.features.Sequence({ "name": nlp.Value("string"), "sent_id": nlp.Value("int32"), "pos": nlp.features.Sequence(nlp.Value("int32")), "type": nlp.Value("string"), })), ... }), ... ``` This is giving me the error: ``` pyarrow.lib.ArrowTypeError: Could not convert [{'pos': [[0,2], [2,4], [3,5]], "type": ["ORG", "ORG", "ORG"], "name": ["Lark Force", "Lark Force", "Lark Force", "sent_id": [0, 3, 4]}..... with type list: was not a dict, tuple, or recognized null value for conversion to struct type ``` Do we expect the pyarrow stuff to break when doing this deeper nesting? I've checked that it still works when you do `nlp.features.Sequence(nlp.features.Sequence(nlp.Value("string"))` or `nlp.features.Sequence({key:value,...})` just not nested sequences with a dict. If it's not possible, I can always convert it to a shallower structure. I'd rather not change the DocRED authors' structure if I don't have to though.
false
646,682,840
https://api.github.com/repos/huggingface/datasets/issues/318
https://github.com/huggingface/datasets/pull/318
318
Multitask
closed
18
2020-06-27T13:27:29
2022-07-06T15:19:57
2022-07-06T15:19:57
ghomasHudson
[]
Following our discussion in #217, I've implemented a first working version of `MultiDataset`. There's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage. I've implemented many of the `nlp.Dataset` methods (cache_files, columns, nbytes, num_columns, num_rows, column_names, schema, shape). Some of the other methods are complicated as they change the number of examples. These raise `NotImplementedError`s at the moment. This will need some tests which I haven't written yet. There's definitely room for improvements but I think the general approach is sound.
true
646,555,384
https://api.github.com/repos/huggingface/datasets/issues/317
https://github.com/huggingface/datasets/issues/317
317
Adding a dataset with multiple subtasks
closed
1
2020-06-26T23:14:19
2020-10-27T15:36:52
2020-10-27T15:36:52
erickrf
[]
I intend to add the datasets of the MT Quality Estimation shared tasks to `nlp`. However, they have different subtasks -- such as word-level, sentence-level and document-level quality estimation -- each of which has different language pairs, and some of the data is reused across subtasks. For example, in [QE 2019](http://www.statmt.org/wmt19/qe-task.html) we had the same English-Russian and English-German data for word-level and sentence-level QE. I suppose these datasets could have both their word- and sentence-level labels inside `nlp.Features`; but what about other subtasks? Should they be considered a different dataset altogether? I read the discussion in #217 but the case of QE seems a lot simpler.
false
646,366,450
https://api.github.com/repos/huggingface/datasets/issues/316
https://github.com/huggingface/datasets/pull/316
316
add AG News dataset
closed
1
2020-06-26T16:11:58
2020-06-30T09:58:08
2020-06-30T08:31:55
jxmorris12
[]
adds support for the AG-News topic classification dataset
true
645,888,943
https://api.github.com/repos/huggingface/datasets/issues/315
https://github.com/huggingface/datasets/issues/315
315
[Question] Best way to batch a large dataset?
open
11
2020-06-25T22:30:20
2020-10-27T15:38:17
null
jarednielsen
[ "generic discussion" ]
I'm training on large datasets such as Wikipedia and BookCorpus. Following the instructions in [the tutorial notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb), I see the following recommended for TensorFlow: ```python train_tf_dataset = train_tf_dataset.filter(remove_none_values, load_from_cache_file=False) columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions'] train_tf_dataset.set_format(type='tensorflow', columns=columns) features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]} labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])} labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1]) ### Question about this last line ### tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8) ``` This code works for something like WikiText-2. However, scaling up to WikiText-103, the last line takes 5-10 minutes to run. I assume it is because tf.data.Dataset.from_tensor_slices() is pulling everything into memory, not lazily loading. This approach won't scale up to datasets 25x larger such as Wikipedia. So I tried manual batching using `dataset.select()`: ```python idxs = np.random.randint(len(dataset), size=bsz) batch = dataset.select(idxs).map(lambda example: {"input_ids": tokenizer(example["text"])}) tf_batch = tf.constant(batch["ids"], dtype=tf.int64) ``` This appears to create a new Apache Arrow dataset with every batch I grab, and then tries to cache it. The runtime of `dataset.select([0, 1])` appears to be much worse than `dataset[:2]`. So using `select()` doesn't seem to be performant enough for a training loop. Is there a performant scalable way to lazily load batches of nlp Datasets?
false
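One lazy-loading pattern consistent with the `from_generator` suggestion later adopted for TFRecord export (#339): stream rows from the Arrow-backed dataset into `tf.data` instead of materializing everything with `from_tensor_slices`. A sketch assuming a recent TensorFlow and the `transformers` tokenizer API:

```python
import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
ds = ds.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

columns = ["input_ids", "token_type_ids", "attention_mask"]

def gen():
    for row in ds:  # rows are read lazily from the memory-mapped Arrow file
        yield {k: row[k] for k in columns}

signature = {k: tf.TensorSpec(shape=(None,), dtype=tf.int64) for k in columns}
tf_ds = tf.data.Dataset.from_generator(gen, output_signature=signature).padded_batch(8)

for batch in tf_ds.take(1):
    print({k: v.shape for k, v in batch.items()})
```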
645,461,174
https://api.github.com/repos/huggingface/datasets/issues/314
https://github.com/huggingface/datasets/pull/314
314
Fixed singlular very minor spelling error
closed
1
2020-06-25T10:45:59
2020-06-26T08:46:41
2020-06-25T12:43:59
SchizoidBat
[]
An instance of "independantly" was changed to "independently". That's all.
true
645,390,088
https://api.github.com/repos/huggingface/datasets/issues/313
https://github.com/huggingface/datasets/pull/313
313
Add MWSC
closed
1
2020-06-25T09:22:02
2020-06-30T08:28:11
2020-06-30T08:28:11
ghomasHudson
[]
Adding the [Modified Winograd Schema Challenge](https://github.com/salesforce/decaNLP/blob/master/local_data/schema.txt) dataset which formed part of the [decaNLP](http://decanlp.com/) benchmark. Not sure how much use people would find for it it outside of the benchmark, but it is general purpose. Code is heavily borrowed from the [decaNLP repo](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L773-L877). There's a few (possibly overly opinionated) design choices I made: - I used the train/test/dev split [buried in the decaNLP code](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L852-L855) - I split out each example into the 2 alternatives. Originally the data uses the format: ``` The city councilmen refused the demonstrators a permit because they [feared/advocated] violence. Who [feared/advocated] violence? councilmen/demonstrators ``` I split into the 2 variants: ``` The city councilmen refused the demonstrators a permit because they feared violence. Who feared violence? councilmen/demonstrators The city councilmen refused the demonstrators a permit because they advocated violence. Who advocated violence? councilmen/demonstrators ``` I can't see any use for having the options combined into a single example (splitting them is [the way decaNLP processes](https://github.com/salesforce/decaNLP/blob/1e9605f246b9e05199b28bde2a2093bc49feeeaa/text/torchtext/datasets/generic.py#L846-L850)) them. You can't train on both versions with them combined, and splitting the examples later would be a pain to do. I think [winogrande.py](https://github.com/huggingface/nlp/blob/master/datasets/winogrande/winogrande.py) presents the data in this way? - I've not used the decaNLP framing (appending the options to the question e.g. `Who feared violence? -- councilmen or demonstrators?`) but left it more generic by adding the options as a new key: `"options":["councilmen","demonstrators"]` This should be an easy thing to change using `map` if needed by a specific application. Dataset is working as-is but if anyone has any thoughts/preferences on the design decisions here I'm definitely open to different choices.
true
645,025,561
https://api.github.com/repos/huggingface/datasets/issues/312
https://github.com/huggingface/datasets/issues/312
312
[Feature request] Add `shard()` method to dataset
closed
2
2020-06-24T22:48:33
2020-07-06T12:35:36
2020-07-06T12:35:36
jarednielsen
[]
Currently, to shard a dataset into 10 pieces on different ranks, you can run ```python rank = 3 # for example size = 10 dataset = nlp.load_dataset('wikitext', 'wikitext-2-raw-v1', split=f"train[{rank*10}%:{(rank+1)*10}%]") ``` However, this breaks down if you have a number of ranks that doesn't divide cleanly into 100, such as 64 ranks. Is there interest in adding a method shard() that looks like this? ```python rank = 3 size = 64 dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train").shard(rank=rank, size=size) ``` TensorFlow has a similar API: https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard. I'd be happy to contribute this code.
false
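A sketch of the requested API as it later landed, which uses `shard(num_shards, index)` rather than `shard(rank, size)`; written against the current `datasets` package, so treat the exact naming as an assumption if you are still on `nlp`:

```python
from datasets import load_dataset

rank, world_size = 3, 64  # e.g. a 64-GPU job
full = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

# Each worker keeps roughly len(full) / world_size examples.
my_shard = full.shard(num_shards=world_size, index=rank)
print(len(full), len(my_shard))
```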
645,013,131
https://api.github.com/repos/huggingface/datasets/issues/311
https://github.com/huggingface/datasets/pull/311
311
Add qa_zre
closed
0
2020-06-24T22:17:22
2020-06-29T16:37:38
2020-06-29T16:37:38
ghomasHudson
[]
Adding the QA-ZRE dataset from ["Zero-Shot Relation Extraction via Reading Comprehension"](http://nlp.cs.washington.edu/zeroshot/). A common processing step seems to be replacing the `XXX` placeholder with the `subject`. I've left this out as it's something you could easily do with `map`.
true
644,806,720
https://api.github.com/repos/huggingface/datasets/issues/310
https://github.com/huggingface/datasets/pull/310
310
add wikisql
closed
1
2020-06-24T18:00:35
2020-06-25T12:32:25
2020-06-25T12:32:25
ghomasHudson
[]
Adding the [WikiSQL](https://github.com/salesforce/WikiSQL) dataset. Interesting things to note: - Have copied the function (`_convert_to_human_readable`) which converts the SQL query to a human-readable (string) format as this is what most people will want when actually using this dataset for NLP applications. - `conds` was originally a tuple but is converted to a dictionary to support differing types. Would be nice to add the logical_form metrics too at some point.
true
644,783,822
https://api.github.com/repos/huggingface/datasets/issues/309
https://github.com/huggingface/datasets/pull/309
309
Add narrative qa
closed
11
2020-06-24T17:26:18
2020-09-03T09:02:10
2020-09-03T09:02:09
Varal7
[]
Test cases for dummy data don't pass. Only contains data for summaries (not the whole story).
true
644,195,251
https://api.github.com/repos/huggingface/datasets/issues/308
https://github.com/huggingface/datasets/pull/308
308
Specify utf-8 encoding for MRPC files
closed
0
2020-06-23T22:44:36
2020-06-25T12:52:21
2020-06-25T12:16:10
patpizio
[]
Fixes #307, again probably a Windows-related issue.
true
644,187,262
https://api.github.com/repos/huggingface/datasets/issues/307
https://github.com/huggingface/datasets/issues/307
307
Specify encoding for MRPC
closed
0
2020-06-23T22:24:49
2020-06-25T12:16:09
2020-06-25T12:16:09
patpizio
[]
Same as #242, but with MRPC: on Windows, I get a `UnicodeDecodeError` when I try to download the dataset: ```python dataset = nlp.load_dataset('glue', 'mrpc') ``` ```python Downloading and preparing dataset glue/mrpc (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\Users\Python\.cache\huggingface\datasets\glue\mrpc\1.0.0... --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in incomplete_dir(dirname) 369 try: --> 370 yield tmp_dir 371 if os.path.isdir(dirname): ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 430 verify_infos = not save_infos and not ignore_verifications --> 431 self._download_and_prepare( 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 482 # Prepare split will record examples associated to the split --> 483 self._prepare_split(split_generator, **prepare_split_kwargs) 484 except OSError: ~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _prepare_split(self, split_generator) 663 generator = self._generate_examples(**split_generator.gen_kwargs) --> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): 665 example = self.info.features.encode_example(record) ~\Miniconda3\envs\nlp\lib\site-packages\tqdm\notebook.py in __iter__(self, *args, **kwargs) 217 try: --> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): 219 # return super(tqdm...) will not catch exception ~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self) 1128 try: -> 1129 for obj in iterable: 1130 yield obj ~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_examples(self, data_file, split, mrpc_files) 514 examples = self._generate_example_mrpc_files(mrpc_files=mrpc_files, split=split) --> 515 for example in examples: 516 yield example["idx"], example ~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_example_mrpc_files(self, mrpc_files, split) 576 reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE) --> 577 for n, row in enumerate(reader): 578 is_row_in_dev = [row["#1 ID"], row["#2 ID"]] in dev_ids ~\Miniconda3\envs\nlp\lib\csv.py in __next__(self) 110 self.fieldnames --> 111 row = next(self.reader) 112 self.line_num = self.reader.line_num ~\Miniconda3\envs\nlp\lib\encodings\cp1252.py in decode(self, input, final) 22 def decode(self, input, final=False): ---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0] 24 UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1180: character maps to <undefined> ``` The fix is the same: specify `utf-8` encoding when opening the file. The previous fix didn't work as MRPC's download process is different from the others in GLUE. I am going to propose a new PR :)
false
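The fix proposed in #307/#308, spelled out: pass an explicit encoding so Windows does not fall back to cp1252 when reading the MRPC files. A sketch in which the file name is a placeholder for a local copy:

```python
import csv

path = "msr_paraphrase_train.txt"  # placeholder for the downloaded MRPC file

with open(path, encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for row in reader:
        pass  # each row now decodes correctly regardless of the OS default codec
```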
644,176,078
https://api.github.com/repos/huggingface/datasets/issues/306
https://github.com/huggingface/datasets/pull/306
306
add pg19 dataset
closed
12
2020-06-23T22:03:52
2020-07-06T07:55:59
2020-07-06T07:55:59
lucidrains
[]
https://github.com/huggingface/nlp/issues/274 Add a functioning PG19 dataset with dummy data. `cos_e.py` was just auto-linted by `make style`.
true
644,148,149
https://api.github.com/repos/huggingface/datasets/issues/305
https://github.com/huggingface/datasets/issues/305
305
Importing downloaded package repository fails
closed
0
2020-06-23T21:09:05
2020-07-30T16:44:23
2020-07-30T16:44:23
yjernite
[ "metric bug" ]
The `get_imports` function in `src/nlp/load.py` has a feature to download a package as a zip archive of the github repository and import functions from the unpacked directory. This is used for example in the `metrics/coval.py` file, and would be useful to add BLEURT (@ankparikh). Currently however, the code seems to have trouble with imports within the package. For example: ``` import nlp coval = nlp.load_metric('coval') ``` yields: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/yacine/Code/nlp/src/nlp/load.py", line 432, in load_metric metric_cls = import_main_class(module_path, dataset=False) File "/home/yacine/Code/nlp/src/nlp/load.py", line 57, in import_main_class module = importlib.import_module(module_path) File "/home/yacine/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval.py", line 21, in <module> from .coval_backend.conll import reader # From: https://github.com/ns-moosavi/coval File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval_backend/conll/reader.py", line 2, in <module> from conll import mention ModuleNotFoundError: No module named 'conll' ``` Not sure what the fix would be there.
false
644,091,970
https://api.github.com/repos/huggingface/datasets/issues/304
https://github.com/huggingface/datasets/issues/304
304
Problem while printing doc string when instantiating multiple metrics.
closed
0
2020-06-23T19:32:05
2020-07-22T09:50:58
2020-07-22T09:50:58
codehunk628
[ "metric bug" ]
When I load more than one metric and try to print the doc string of a particular metric, it shows the doc strings of all imported metrics one after the other, which looks quite confusing and clumsy. Attached is a [Colab](https://colab.research.google.com/drive/13H0ZgyQ2se0mqJ2yyew0bNEgJuHaJ8H3?usp=sharing) notebook clarifying the problem.
false
643,912,464
https://api.github.com/repos/huggingface/datasets/issues/303
https://github.com/huggingface/datasets/pull/303
303
allow to move files across file systems
closed
0
2020-06-23T14:56:08
2020-06-23T15:08:44
2020-06-23T15:08:43
lhoestq
[]
Users are allowed to use the `cache_dir` that they want. Therefore it can happen that we try to move files across filesystems. We were using `os.rename` that doesn't allow that, so I changed some of them to `shutil.move`. This should fix #301
true
643,910,418
https://api.github.com/repos/huggingface/datasets/issues/302
https://github.com/huggingface/datasets/issues/302
302
Question - Sign Language Datasets
closed
3
2020-06-23T14:53:40
2020-11-25T11:25:33
2020-11-25T11:25:33
AmitMY
[ "enhancement", "generic discussion" ]
An emerging field in NLP is SLP - sign language processing. I was wondering about adding datasets here, specifically because it's shaping up to be large and easily usable. The metrics for sign language to text translation are the same. So, what do you think about (me, or others) adding datasets here? An example dataset would be [RWTH-PHOENIX-Weather 2014 T](https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/) For every item in the dataset, the data object includes: 1. video_path - path to mp4 file 2. pose_path - a path to `.pose` file with human pose landmarks 3. openpose_path - a path to a `.json` file with human pose landmarks 4. gloss - string 5. text - string 6. video_metadata - height, width, frames, framerate ------ To make it a tad more complicated - what if sign language libraries add requirements to `nlp`? for example, sign language is commonly annotated using `ilex`, `eaf`, or `srt` files, which are all loadable as text, but there is no reason for the dataset to parse that file by itself, if libraries exist to do so.
false
643,763,525
https://api.github.com/repos/huggingface/datasets/issues/301
https://github.com/huggingface/datasets/issues/301
301
Setting cache_dir gives error on wikipedia download
closed
2
2020-06-23T11:31:44
2020-06-24T07:05:07
2020-06-24T07:05:07
hallvagi
[]
First of all thank you for a super handy library! I'd like to download large files to a specific drive so I set `cache_dir=my_path`. This works fine with e.g. imdb and squad. But on wikipedia I get an error: ``` nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=my_path) ``` ``` OSError Traceback (most recent call last) <ipython-input-2-23551344d7bc> in <module> 1 import nlp ----> 2 nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=path) ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 385 with utils.temporary_assignment(self, "_cache_dir", tmp_data_dir): 386 reader = ArrowReader(self._cache_dir, self.info) --> 387 reader.download_from_hf_gcs(self._cache_dir, self._relative_data_dir(with_version=True)) 388 downloaded_info = DatasetInfo.from_directory(self._cache_dir) 389 self.info.update(downloaded_info) ~/anaconda3/envs/fastai2/lib/python3.7/site-packages/nlp/arrow_reader.py in download_from_hf_gcs(self, cache_dir, relative_data_dir) 231 remote_dataset_info = os.path.join(remote_cache_dir, "dataset_info.json") 232 downloaded_dataset_info = cached_path(remote_dataset_info) --> 233 os.rename(downloaded_dataset_info, os.path.join(cache_dir, "dataset_info.json")) 234 if self._info is not None: 235 self._info.update(self._info.from_directory(cache_dir)) OSError: [Errno 18] Invalid cross-device link: '/home/local/NTU/nn/.cache/huggingface/datasets/025fa4fd4f04aaafc9e939260fbc8f0bb190ce14c61310c8ae1ddd1dcb31f88c.9637f367b6711a79ca478be55fe6989b8aea4941b7ef7adc67b89ff403020947' -> '/data/nn/nlp/wikipedia/20200501.de/1.0.0.incomplete/dataset_info.json' ```
false
643,688,304
https://api.github.com/repos/huggingface/datasets/issues/300
https://github.com/huggingface/datasets/pull/300
300
Fix bertscore references
closed
0
2020-06-23T09:38:59
2020-06-23T14:47:38
2020-06-23T14:47:37
lhoestq
[]
I added some type checking for metrics. There was an issue where a metric could interpret a string as a list. A `ValueError` is raised if a string is given instead of a list. Moreover I added support for both strings and lists of strings for `references` in `bertscore`, as is the case in the original code. Both ways work: ``` import nlp scorer = nlp.load_metric("bertscore") with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): scorer.add(lp, [lg]) score = scorer.compute(lang="en") ``` ``` import nlp scorer = nlp.load_metric("bertscore") with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): scorer.add(lp, lg) score = scorer.compute(lang="en") ``` This should fix #295 and #238
true
643,611,557
https://api.github.com/repos/huggingface/datasets/issues/299
https://github.com/huggingface/datasets/pull/299
299
remove some print in snli file
closed
1
2020-06-23T07:46:06
2020-06-23T08:10:46
2020-06-23T08:10:44
mariamabarham
[]
This PR removes unwanted `print` statements in some files such as `snli.py`
true
643,603,804
https://api.github.com/repos/huggingface/datasets/issues/298
https://github.com/huggingface/datasets/pull/298
298
Add searchable datasets
closed
8
2020-06-23T07:33:03
2020-06-26T07:50:44
2020-06-26T07:50:43
lhoestq
[]
# Better support for Numpy format + Add Indexed Datasets I was working on adding Indexed Datasets but in the meantime I had to also add more support for Numpy arrays in the lib. ## Better support for Numpy format New features: - New fast method to convert Numpy arrays from Arrow structure (up to x100 speed up) using Pandas. - Allow to output Numpy arrays in batched `.map`, which was the only missing part to fully support Numpy arrays. Pandas offers fast zero-copy Numpy arrays conversion from Arrow structures. Using it we can speed up the reading of memory-mapped Numpy array stored in Arrow format. With these changes you can easily compute embeddings of texts using `.map()`. For example: ```python def embed(text): tokenized_example = tokenizer.encode(text, return_tensors="pt") embeddings = bert_encoder(tokenized_examples).numpy() return embeddings dset_with_embeddings = dset.map(lambda example: {"embeddings": embed(example["text])}) ``` And then reading the embeddings from the arrow format is be very fast. PS1: Note that right now only 1d arrays are supported. PS2: It seems possible to do without pandas but it will require more _trickery_. PS3: I did a simple benchmark with google colab that you can view here: https://colab.research.google.com/drive/1QlLTR6LRwYOKGJ-hTHmHyolE3wJzvfFg?usp=sharing ## Add Indexed Datasets For many retrieval tasks it is convenient to index a dataset to be able to run fast queries. For example for models like DPR, REALM, RAG etc. that are models for Open Domain QA, the retrieval step is very important. Therefore I added two ways to add an index to a column of a dataset: 1) You can index it using a Dense Index like Faiss. It is used to index vectors. Faiss is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. 2) You can index it using a Sparse Index like Elasticsearch. It is used to index text and run queries based on BM25 similarity. Example of usage: ```python ds = nlp.load_dataset('crime_and_punish', split='train') ds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line']})) # `embed` outputs a `np.array` ds_with_embeddings.add_vector_index(column='embeddings') scores, retrieved_examples = ds_with_embeddings.get_nearest(column='embeddings', query=embed('my new query'), k=10) ``` ```python ds = nlp.load_dataset('crime_and_punish', split='train') es_client = elasticsearch.Elasticsearch() ds.add_text_index(column='line', es_client=es_client, index_name="my_es_index") scores, retrieved_examples = ds.get_nearest(column='line', query='my new query', k=10) ``` PS4: Faiss allows to specify many options for the [index](https://github.com/facebookresearch/faiss/wiki/The-index-factory) and for [GPU settings](https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU). I made sure that the user has full control over those settings. ## Tests I added tests for Faiss, Elasticsearch and indexed datasets. I had to edit the CI config because all the test scripts were not being run by CircleCI. ------------------ I'd be really happy to have some feedbacks :)
true
643,444,625
https://api.github.com/repos/huggingface/datasets/issues/297
https://github.com/huggingface/datasets/issues/297
297
Error in Demo for Specific Datasets
closed
3
2020-06-23T00:38:42
2020-07-17T17:43:06
2020-07-17T17:43:06
s-jse
[ "nlp-viewer" ]
Selecting `natural_questions` or `newsroom` dataset in the online demo results in an error similar to the following. ![image](https://user-images.githubusercontent.com/60150701/85347842-ac861900-b4ae-11ea-98c4-a53a00934783.png)
false
643,423,717
https://api.github.com/repos/huggingface/datasets/issues/296
https://github.com/huggingface/datasets/issues/296
296
snli -1 labels
closed
4
2020-06-22T23:33:30
2020-06-23T14:41:59
2020-06-23T14:41:58
jxmorris12
[]
I'm trying to train a model on the SNLI dataset. Why does it have so many -1 labels? ``` import nlp from collections import Counter data = nlp.load_dataset('snli')['train'] print(Counter(data['label'])) Counter({0: 183416, 2: 183187, 1: 182764, -1: 785}) ```
false
643,245,412
https://api.github.com/repos/huggingface/datasets/issues/295
https://github.com/huggingface/datasets/issues/295
295
Improve input warning for evaluation metrics
closed
0
2020-06-22T17:28:57
2020-06-23T14:47:37
2020-06-23T14:47:37
Tiiiger
[]
Hi, I am the author of `bert_score`. Recently, we received [an issue](https://github.com/Tiiiger/bert_score/issues/62) reporting a problem using `bert_score` from the `nlp` package (also see #238 in this repo). After looking into this, I realized that the problem arises from the format in which `nlp.Metric` takes input. Here is a minimal example: ```python import nlp scorer = nlp.load_metric("bertscore") with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): scorer.add(lp, lg) score = scorer.compute(lang="en") ``` The problem in the above code is that `scorer.add()` expects a list of strings as input for the references. As a result, the `scorer` here would take a list of characters in `lg` to be the references. The correct implementation would be calling ```python scorer.add(lp, [lg]) ``` I just want to raise this issue to you to prevent future user errors of a similar kind. I assume some simple type checking can prevent this from happening? Thanks!
false
643,181,179
https://api.github.com/repos/huggingface/datasets/issues/294
https://github.com/huggingface/datasets/issues/294
294
Cannot load arxiv dataset on MacOS?
closed
4
2020-06-22T15:46:55
2020-06-30T15:25:10
2020-06-30T15:25:10
JohnGiorgi
[ "dataset bug" ]
I am having trouble loading the `"arxiv"` config from the `"scientific_papers"` dataset on MacOS. When I try loading the dataset with: ```python arxiv = nlp.load_dataset("scientific_papers", "arxiv") ``` I get the following stack trace: ```bash JSONDecodeError Traceback (most recent call last) <ipython-input-2-8e00c55d5a59> in <module> ----> 1 arxiv = nlp.load_dataset("scientific_papers", "arxiv") ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 430 verify_infos = not save_infos and not ignore_verifications 431 self._download_and_prepare( --> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 433 ) 434 # Sync info ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 481 try: 482 # Prepare split will record examples associated to the split --> 483 self._prepare_split(split_generator, **prepare_split_kwargs) 484 except OSError: 485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or "")) ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _prepare_split(self, split_generator) 662 663 generator = self._generate_examples(**split_generator.gen_kwargs) --> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): 665 example = self.info.features.encode_example(record) 666 writer.write(example) ~/miniconda3/envs/t2t/lib/python3.7/site-packages/tqdm/std.py in __iter__(self) 1106 fp_write=getattr(self.fp, 'write', sys.stderr.write)) 1107 -> 1108 for obj in iterable: 1109 yield obj 1110 # Update and possibly print the progressbar. ~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/datasets/scientific_papers/107a416c0e1958cb846f5934b5aae292f7884a5b27e86af3f3ef1a093e058bbc/scientific_papers.py in _generate_examples(self, path) 114 # "section_names": list[str], list of section names. 
115 # "sections": list[list[str]], list of sections (list of paragraphs) --> 116 d = json.loads(line) 117 summary = "\n".join(d["abstract_text"]) 118 # In original paper, <S> and </S> are not used in vocab during training ~/miniconda3/envs/t2t/lib/python3.7/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 346 parse_int is None and parse_float is None and 347 parse_constant is None and object_pairs_hook is None and not kw): --> 348 return _default_decoder.decode(s) 349 if cls is None: 350 cls = JSONDecoder ~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in decode(self, s, _w) 335 336 """ --> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 338 end = _w(s, end).end() 339 if end != len(s): ~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in raw_decode(self, s, idx) 351 """ 352 try: --> 353 obj, end = self.scan_once(s, idx) 354 except StopIteration as err: 355 raise JSONDecodeError("Expecting value", s, err.value) from None JSONDecodeError: Unterminated string starting at: line 1 column 46983 (char 46982) 163502 examples [02:10, 2710.68 examples/s] ``` I am not sure how to trace back to the specific JSON file that has the "Unterminated string". Also, I do not get this error on colab so I suspect it may be MacOS specific. Copy pasting the relevant lines from `transformers-cli env` below: - Platform: Darwin-19.5.0-x86_64-i386-64bit - Python version: 3.7.5 - PyTorch version (GPU?): 1.5.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) Any ideas?
false
642,942,182
https://api.github.com/repos/huggingface/datasets/issues/293
https://github.com/huggingface/datasets/pull/293
293
Don't test community datasets
closed
0
2020-06-22T10:15:33
2020-06-22T11:07:00
2020-06-22T11:06:59
lhoestq
[]
This PR disables testing for community datasets on aws. It should fix the CI that is currently failing.
true
642,897,797
https://api.github.com/repos/huggingface/datasets/issues/292
https://github.com/huggingface/datasets/pull/292
292
Update metadata for x_stance dataset
closed
3
2020-06-22T09:13:26
2020-06-23T08:07:24
2020-06-23T08:07:24
jvamvas
[]
Thank you for featuring the x_stance dataset in your library. This PR updates some metadata: - Citation: Replace preprint with proceedings - URL: Use a URL with long-term availability
true
642,688,450
https://api.github.com/repos/huggingface/datasets/issues/291
https://github.com/huggingface/datasets/pull/291
291
break statement not required
closed
3
2020-06-22T01:40:55
2020-06-23T17:57:58
2020-06-23T09:37:02
mayurnewase
[]
true
641,978,286
https://api.github.com/repos/huggingface/datasets/issues/290
https://github.com/huggingface/datasets/issues/290
290
ConnectionError - Eli5 dataset download
closed
2
2020-06-19T13:40:33
2020-06-20T13:22:24
2020-06-20T13:22:24
JovanNj
[]
Hi, I have a problem with downloading the Eli5 dataset. When typing `nlp.load_dataset('eli5')`, I get ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/eli5/LFQA_reddit/1.0.0/explain_like_im_five-train_eli5.arrow I would appreciate it if you could help me with this issue.
false
641,934,194
https://api.github.com/repos/huggingface/datasets/issues/289
https://github.com/huggingface/datasets/pull/289
289
update xsum
closed
3
2020-06-19T12:28:32
2020-06-22T13:27:26
2020-06-22T07:20:07
mariamabarham
[]
This PR makes the following update to the xsum dataset: - Manual download is not required anymore - dataset can be loaded as follows: `nlp.load_dataset('xsum')` **Important** Instead of using an outdated url to download the data: "https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json" a more up-to-date url stored here: https://s3.amazonaws.com/datasets.huggingface.co/summarization/xsum.tar.gz is used, so that the user does not need to manually download the data anymore. There might be slight breaking changes here for xsum.
true
641,888,610
https://api.github.com/repos/huggingface/datasets/issues/288
https://github.com/huggingface/datasets/issues/288
288
Error at the first example in README: AttributeError: module 'dill' has no attribute '_dill'
closed
5
2020-06-19T11:01:22
2020-06-21T09:05:11
2020-06-21T09:05:11
wutong8023
[]
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:469: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:470: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:471: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:472: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:473: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:476: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) /Users/parasol_tree/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6 return f(*args, **kwds) /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. 
from ._conv import register_converters as _register_converters Traceback (most recent call last): File "/Users/parasol_tree/Resource/019 - Github/AcademicEnglishToolkit /test.py", line 7, in <module> import nlp File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/__init__.py", line 27, in <module> from .arrow_dataset import Dataset File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/arrow_dataset.py", line 31, in <module> from nlp.utils.py_utils import dumps File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/__init__.py", line 20, in <module> from .download_manager import DownloadManager, GenerateMode File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/download_manager.py", line 25, in <module> from .py_utils import flatten_nested, map_nested, size_str File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/py_utils.py", line 244, in <module> class Pickler(dill.Pickler): File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/py_utils.py", line 247, in Pickler dispatch = dill._dill.MetaCatchingDict(dill.Pickler.dispatch.copy()) AttributeError: module 'dill' has no attribute '_dill'
false
641,800,227
https://api.github.com/repos/huggingface/datasets/issues/287
https://github.com/huggingface/datasets/pull/287
287
fix squad_v2 metric
closed
0
2020-06-19T08:24:46
2020-06-19T08:33:43
2020-06-19T08:33:41
lhoestq
[]
Fix #280 The imports were wrong
true
641,585,758
https://api.github.com/repos/huggingface/datasets/issues/286
https://github.com/huggingface/datasets/pull/286
286
Add ANLI dataset.
closed
1
2020-06-18T22:27:30
2020-06-22T12:23:27
2020-06-22T12:23:27
easonnie
[]
I completed all the steps in https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset and pushed the code for ANLI. Please let me know if there are any errors.
true
641,360,702
https://api.github.com/repos/huggingface/datasets/issues/285
https://github.com/huggingface/datasets/pull/285
285
Consistent formatting of citations
closed
1
2020-06-18T16:25:23
2020-06-22T08:09:25
2020-06-22T08:09:24
mariamabarham
[]
#283
true
641,337,217
https://api.github.com/repos/huggingface/datasets/issues/284
https://github.com/huggingface/datasets/pull/284
284
Fix manual download instructions
closed
5
2020-06-18T15:59:57
2020-06-19T08:24:21
2020-06-19T08:24:19
patrickvonplaten
[]
This PR replaces the static `DatasetBuilder` variable `MANUAL_DOWNLOAD_INSTRUCTIONS` by a property function `manual_download_instructions()`. Some datasets like XTREME and all WMT need the manual data dir only for a small fraction of the possible configs. After some brainstorming with @mariamabarham and @lhoestq, we came to the conclusion that having a property function `manual_download_instructions()` gives us more flexibility to decide on a per-config basis in the dataset builder whether manual download instructions are needed. Also, this PR solves a bug with `wmt16 - ro-en`. @sshleifer, from this branch you should be able to successfully run ```python import nlp ds = nlp.load_dataset('./datasets/wmt16', 'ro-en') ``` and once this PR is merged S3 should be synched so that ```python import nlp ds = nlp.load_dataset("wmt16", "ro-en") ``` works as well. **Important**: Since `MANUAL_DOWNLOAD_INSTRUCTIONS` was not really exposed to the user, this PR should not be a problem regarding backward compatibility.
true
641,270,439
https://api.github.com/repos/huggingface/datasets/issues/283
https://github.com/huggingface/datasets/issues/283
283
Consistent formatting of citations
closed
0
2020-06-18T14:48:45
2020-06-22T17:30:46
2020-06-22T17:30:46
srush
[]
The citations are all in different formats: some have "```" with text inside, others are proper bibtex. Can we make it so that they are all proper citations, i.e. parseable by the bibtex spec: https://bibtexparser.readthedocs.io/en/master/
false
641,217,759
https://api.github.com/repos/huggingface/datasets/issues/282
https://github.com/huggingface/datasets/pull/282
282
Update dataset_info from gcs
closed
0
2020-06-18T13:41:15
2020-06-18T16:24:52
2020-06-18T16:24:51
lhoestq
[]
Some datasets are hosted on gcs (wikipedia for example). In this PR I make sure that, when a user loads such datasets, the file_instructions are built using the dataset_info.json from gcs and not from the info extracted from the local `dataset_infos.json` (the one that contains the info for each config). Indeed, local files may end up outdated. Furthermore, to avoid outdated dataset_infos.json, I now make sure that each time you run `load_dataset` it also tries to update the file locally.
true
641,067,856
https://api.github.com/repos/huggingface/datasets/issues/281
https://github.com/huggingface/datasets/issues/281
281
Private/sensitive data
closed
3
2020-06-18T09:47:27
2020-06-20T13:15:12
2020-06-20T13:15:12
MFreidank
[]
Hi all, Thanks for this fantastic library, it makes it very easy to do prototyping for NLP projects interchangeably between TF/Pytorch. Unfortunately, there is data that cannot easily be shared publicly as it may contain sensitive information. Is there support/a plan to support such data with NLP, e.g. by reading it from local sources? Use case flow could look like this: use NLP to prototype an approach on similar, public data and apply the resulting prototype on sensitive/private data without the need to rethink data processing pipelines. Many thanks for your responses ahead of time and kind regards, MFreidank
false
640,677,615
https://api.github.com/repos/huggingface/datasets/issues/280
https://github.com/huggingface/datasets/issues/280
280
Error with SquadV2 Metrics
closed
0
2020-06-17T19:10:54
2020-06-19T08:33:41
2020-06-19T08:33:41
avinregmi
[]
I can't seem to import squad v2 metrics. **squad_metric = nlp.load_metric('squad_v2')** **This throws me an error.:** ``` ImportError Traceback (most recent call last) <ipython-input-8-170b6a170555> in <module> ----> 1 squad_metric = nlp.load_metric('squad_v2') ~/env/lib64/python3.6/site-packages/nlp/load.py in load_metric(path, name, process_id, num_process, data_dir, experiment_id, in_memory, download_config, **metric_init_kwargs) 426 """ 427 module_path = prepare_module(path, download_config=download_config, dataset=False) --> 428 metric_cls = import_main_class(module_path, dataset=False) 429 metric = metric_cls( 430 name=name, ~/env/lib64/python3.6/site-packages/nlp/load.py in import_main_class(module_path, dataset) 55 """ 56 importlib.invalidate_caches() ---> 57 module = importlib.import_module(module_path) 58 59 if dataset: /usr/lib64/python3.6/importlib/__init__.py in import_module(name, package) 124 break 125 level += 1 --> 126 return _bootstrap._gcd_import(name[level:], package, level) 127 128 /usr/lib64/python3.6/importlib/_bootstrap.py in _gcd_import(name, package, level) /usr/lib64/python3.6/importlib/_bootstrap.py in _find_and_load(name, import_) /usr/lib64/python3.6/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_) /usr/lib64/python3.6/importlib/_bootstrap.py in _load_unlocked(spec) /usr/lib64/python3.6/importlib/_bootstrap_external.py in exec_module(self, module) /usr/lib64/python3.6/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds) ~/env/lib64/python3.6/site-packages/nlp/metrics/squad_v2/a15e787c76889174874386d3def75321f0284c11730d2a57e28fe1352c9b5c7a/squad_v2.py in <module> 16 17 import nlp ---> 18 from .evaluate import evaluate 19 20 _CITATION = """\ ImportError: cannot import name 'evaluate' ```
false
640,611,692
https://api.github.com/repos/huggingface/datasets/issues/279
https://github.com/huggingface/datasets/issues/279
279
Dataset Preprocessing Cache with .map() function not working as expected
closed
5
2020-06-17T17:17:21
2021-07-06T21:43:28
2021-04-18T23:43:49
sarahwie
[]
I've been having issues with reproducibility when loading and processing datasets with the `.map` function. I was only able to resolve them by clearing all of the cache files on my system. Is there a way to disable using the cache when processing a dataset? As I make minor processing changes on the same dataset, I want to be able to be certain the data is being re-processed rather than loaded from a cached file. Could you also help me understand a bit more about how the caching functionality is used for pre-processing? E.g. how is it determined when to load from a cache vs. reprocess. I was particularly having an issue where the correct dataset splits were loaded, but as soon as I applied the `.map()` function to each split independently, they somehow all exited this process having been converted to the test set. Thanks!
false
640,518,917
https://api.github.com/repos/huggingface/datasets/issues/278
https://github.com/huggingface/datasets/issues/278
278
MemoryError when loading German Wikipedia
closed
7
2020-06-17T15:06:21
2020-06-19T12:53:02
2020-06-19T12:53:02
gregburman
[]
Hi, first off let me say thank you for all the awesome work you're doing at Hugging Face across all your projects (NLP, Transformers, Tokenizers) - they're all amazing contributions to us working with NLP models :) I'm trying to download the German Wikipedia dataset as follows: ``` wiki = nlp.load_dataset("wikipedia", "20200501.de", split="train") ``` However, when I do so, I get the following error: ``` Downloading and preparing dataset wikipedia/20200501.de (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/ubuntu/.cache/huggingface/datasets/wikipedia/20200501.de/1.0.0... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/load.py", line 520, in load_dataset save_infos=save_infos, File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/builder.py", line 433, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/builder.py", line 824, in _download_and_prepare "\n\t`{}`".format(usage_example) nlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20200501.de', beam_runner='DirectRunner')` ``` So, following on from the example usage at the bottom, I tried specifying `beam_runner='DirectRunner`, however when I do this after about 20 min after the data has all downloaded, I get a `MemoryError` as warned. This isn't an issue for the English or French Wikipedia datasets (I've tried both), as neither seem to require that `beam_runner` be specified. Can you please clarify why this is an issue for the German dataset? My nlp version is 0.2.1. Thank you!
false
640,163,053
https://api.github.com/repos/huggingface/datasets/issues/277
https://github.com/huggingface/datasets/issues/277
277
Empty samples in glue/qqp
closed
2
2020-06-17T05:54:52
2020-06-21T00:21:45
2020-06-21T00:21:45
richarddwang
[]
``` qqp = nlp.load_dataset('glue', 'qqp') print(qqp['train'][310121]) print(qqp['train'][362225]) ``` ``` {'question1': 'How can I create an Android app?', 'question2': '', 'label': 0, 'idx': 310137} {'question1': 'How can I develop android app?', 'question2': '', 'label': 0, 'idx': 362246} ``` Notice that question 2 is an empty string. BTW, I have checked and these two are the only naughty ones in all splits of qqp.
false
639,490,858
https://api.github.com/repos/huggingface/datasets/issues/276
https://github.com/huggingface/datasets/pull/276
276
Fix metric compute (original_instructions missing)
closed
2
2020-06-16T08:52:01
2020-06-18T07:41:45
2020-06-18T07:41:44
lhoestq
[]
When loading arrow data we added in cc8d250 a way to specify the instructions that were used to store them with the loaded dataset. However, metrics load data the same way and don't need instructions (we use a single file). In this PR I just make `original_instructions` optional when reading files to load a `Dataset` object. This should fix #269
true
639,439,052
https://api.github.com/repos/huggingface/datasets/issues/275
https://github.com/huggingface/datasets/issues/275
275
NonMatchingChecksumError when loading pubmed dataset
closed
1
2020-06-16T07:31:51
2020-06-19T07:37:07
2020-06-19T07:37:07
DavideStenner
[ "dataset bug" ]
I get this error when i run `nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]')`. The error is: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-2-7742dea167d0> in <module>() ----> 1 df = nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]') 2 df = pd.DataFrame(df) 3 gc.collect() 3 frames /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 431 verify_infos = not save_infos and not ignore_verifications 432 self._download_and_prepare( --> 433 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 434 ) 435 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 468 # Checksums verification 469 if verify_infos: --> 470 verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums()) 471 for split_generator in split_generators: 472 if str(split_generator.split_info.name).lower() == "all": /usr/local/lib/python3.6/dist-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums) 34 bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]] 35 if len(bad_urls) > 0: ---> 36 raise NonMatchingChecksumError(str(bad_urls)) 37 logger.info("All the checksums matched successfully.") 38 NonMatchingChecksumError: ['https://drive.google.com/uc?id=1b3rmCSIoh6VhD4HKWjI4HOW-cSwcwbeC&export=download', 'https://drive.google.com/uc?id=1lvsqvsFi3W-pE1SqNZI0s8NR9rC1tsja&export=download'] ``` I'm currently working on google colab. That is quite strange because yesterday it was fine.
false
639,156,625
https://api.github.com/repos/huggingface/datasets/issues/274
https://github.com/huggingface/datasets/issues/274
274
PG-19
closed
4
2020-06-15T21:02:26
2020-07-06T15:35:02
2020-07-06T15:35:02
lucidrains
[ "dataset request" ]
Hi, and thanks for all your open-sourced work, as always! I was wondering if you would be open to adding PG-19 to your collection of datasets. https://github.com/deepmind/pg19 It is often used for benchmarking long-range language modeling.
false
638,968,054
https://api.github.com/repos/huggingface/datasets/issues/273
https://github.com/huggingface/datasets/pull/273
273
update cos_e to add cos_e v1.0
closed
0
2020-06-15T16:03:22
2020-06-16T08:25:54
2020-06-16T08:25:52
mariamabarham
[]
This PR updates the cos_e dataset to add v1.0 as requested here #163 @nazneenrajani
true
638,307,313
https://api.github.com/repos/huggingface/datasets/issues/272
https://github.com/huggingface/datasets/pull/272
272
asd
closed
0
2020-06-14T08:20:38
2020-06-14T09:16:41
2020-06-14T09:16:41
sn696
[]
true
638,135,754
https://api.github.com/repos/huggingface/datasets/issues/271
https://github.com/huggingface/datasets/pull/271
271
Fix allociné dataset configuration
closed
6
2020-06-13T10:12:10
2020-06-18T07:41:21
2020-06-18T07:41:20
TheophileBlard
[]
This is a patch for #244. According to the [live nlp viewer](url), the Allociné dataset must be loaded with: ```python dataset = load_dataset('allocine', 'allocine') ``` This is redundant, as there is only one "dataset configuration", and should only be: ```python dataset = load_dataset('allocine') ``` This is my mistake, because the code for [`allocine.py`](https://github.com/huggingface/nlp/blob/master/datasets/allocine/allocine.py) was inspired by [`imdb.py`](https://github.com/huggingface/nlp/blob/master/datasets/imdb/imdb.py), which also forces the user to specify the "dataset configuration" (even if there is only one). I believe this PR should solve this issue, making the Allociné dataset more convenient to use.
true
638,121,617
https://api.github.com/repos/huggingface/datasets/issues/270
https://github.com/huggingface/datasets/issues/270
270
c4 dataset is not viewable in nlpviewer demo
closed
1
2020-06-13T08:26:16
2020-10-27T15:35:29
2020-10-27T15:35:13
rajarsheem
[ "nlp-viewer" ]
I get the following error when I try to view the c4 dataset in [nlpviewer](https://huggingface.co/nlp/viewer/) ```python ModuleNotFoundError: No module named 'langdetect' Traceback: File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp_viewer/run.py", line 54, in <module> configs = get_confs(option.id) File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 591, in wrapped_func return get_or_create_cached_value() File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 575, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/home/sasha/nlp_viewer/run.py", line 48, in get_confs builder_cls = nlp.load.import_main_class(module_path, dataset=True) File "/home/sasha/.local/lib/python3.7/site-packages/nlp/load.py", line 57, in import_main_class module = importlib.import_module(module_path) File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/sasha/.local/lib/python3.7/site-packages/nlp/datasets/c4/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19/c4.py", line 29, in <module> from .c4_utils import ( File "/home/sasha/.local/lib/python3.7/site-packages/nlp/datasets/c4/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19/c4_utils.py", line 29, in <module> import langdetect ```
false
638,106,774
https://api.github.com/repos/huggingface/datasets/issues/269
https://github.com/huggingface/datasets/issues/269
269
Error in metric.compute: missing `original_instructions` argument
closed
0
2020-06-13T06:26:54
2020-06-18T07:41:44
2020-06-18T07:41:44
zphang
[ "metric bug" ]
I'm running into an error using metrics for computation in the latest master as well as version 0.2.1. Here is a minimal example: ```python import nlp rte_metric = nlp.load_metric('glue', name="rte") rte_metric.compute( [0, 0, 1, 1], [0, 1, 0, 1], ) ``` ``` 181 # Read the predictions and references 182 reader = ArrowReader(path=self.data_dir, info=None) --> 183 self.data = reader.read_files(node_files) 184 185 # Release all of our locks TypeError: read_files() missing 1 required positional argument: 'original_instructions' ``` I believe this might have been introduced with cc8d2508b75f7ba0e5438d0686ee02dcec43c7f4, which added the `original_instructions` argument. Elsewhere, an empty-string default is provided--perhaps that could be done here too?
false
637,848,056
https://api.github.com/repos/huggingface/datasets/issues/268
https://github.com/huggingface/datasets/pull/268
268
add Rotten Tomatoes Movie Review sentences sentiment dataset
closed
1
2020-06-12T15:53:59
2020-06-18T07:46:24
2020-06-18T07:46:23
jxmorris12
[]
Sentence-level movie reviews v1.0 from here: http://www.cs.cornell.edu/people/pabo/movie-review-data/
true
637,415,545
https://api.github.com/repos/huggingface/datasets/issues/267
https://github.com/huggingface/datasets/issues/267
267
How can I load/find WMT en-romanian?
closed
1
2020-06-12T01:09:37
2020-06-19T08:24:19
2020-06-19T08:24:19
sshleifer
[]
I believe it is from `wmt16`. When I run ```python wmt = nlp.load_dataset('wmt16') ``` I get: ```python AssertionError: The dataset wmt16 with config cs-en requires manual data. Please follow the manual download instructions: Some of the wmt configs here, require a manual download. Please look into wmt.py to see the exact path (and file name) that has to be downloaded. . Manual data can be loaded with `nlp.load(wmt16, data_dir='<path/to/manual/data>') ``` There is no wmt.py, as the error message suggests, and wmt16.py doesn't have manual download instructions. Any idea how to do this? Thanks in advance!
false
637,156,392
https://api.github.com/repos/huggingface/datasets/issues/266
https://github.com/huggingface/datasets/pull/266
266
Add sort, shuffle, test_train_split and select methods
closed
4
2020-06-11T16:22:20
2020-06-18T16:23:25
2020-06-18T16:23:24
thomwolf
[]
Add a bunch of methods to reorder/split/select rows in a dataset: - `dataset.select(indices)`: Create a new dataset with rows selected following the list/array of indices (which can have a different size than the dataset and contain duplicated indices, the only constraint is that all the integers in the list must be smaller than the dataset size, otherwise we're indexing outside the dataset...) - `dataset.sort(column_name)`: sort a dataset according to a column (has to be a column with a numpy compatible type) - `dataset.shuffle(seed)`: shuffle the dataset's rows - `dataset.train_test_split(test_size, train_size)`: Return a dictionary with two random train and test subsets (`train` and `test` ``Dataset`` splits) All these methods are **not** in-place, which means they return a new ``Dataset``. This is the default behavior in the library. Fix #147 #166 #259
true
637,139,220
https://api.github.com/repos/huggingface/datasets/issues/265
https://github.com/huggingface/datasets/pull/265
265
Add pyarrow warning colab
closed
0
2020-06-11T15:57:51
2020-08-02T18:14:36
2020-06-12T08:14:16
lhoestq
[]
When a user installs `nlp` on google colab, google colab doesn't update pyarrow, and the runtime needs to be restarted to use the updated version of pyarrow. This is an issue because `nlp` requires the updated version to work correctly. In this PR I added an error that is shown to the user in google colab if the user tries to `import nlp` without having restarted the runtime. The error tells the user to restart the runtime.
true
637,106,170
https://api.github.com/repos/huggingface/datasets/issues/264
https://github.com/huggingface/datasets/pull/264
264
Fix small issues creating dataset
closed
0
2020-06-11T15:20:16
2020-06-12T08:15:57
2020-06-12T08:15:56
lhoestq
[]
Fix many small issues mentioned in #249: - don't force to install apache beam for commands - fix None cache dir when using `dl_manager.download_custom` - added new extras in `setup.py` named `dev` that contains tests and quality dependencies - mock dataset sizes when running tests with dummy data - add a note about the naming convention of datasets (camel case - snake case) in CONTRIBUTING.md This should help users create their datasets. Next step is the `add_dataset.md` docs :)
true
637,028,015
https://api.github.com/repos/huggingface/datasets/issues/263
https://github.com/huggingface/datasets/issues/263
263
[Feature request] Support for external modality for language datasets
closed
5
2020-06-11T13:42:18
2022-02-10T13:26:35
2022-02-10T13:26:35
aleSuglia
[ "enhancement", "generic discussion" ]
# Background In recent years many researchers have advocated that learning meanings from text-based only datasets is just like asking a human to "learn to speak by listening to the radio" [[E. Bender and A. Koller,2020](https://openreview.net/forum?id=GKTvAcb12b), [Y. Bisk et. al, 2020](https://arxiv.org/abs/2004.10151)]. Therefore, the importance of multi-modal datasets for the NLP community is of paramount importance for next-generation models. For this reason, I raised a [concern](https://github.com/huggingface/nlp/pull/236#issuecomment-639832029) related to the best way to integrate external features in NLP datasets (e.g., visual features associated with an image, audio features associated with a recording, etc.). This would be of great importance for a more systematic way of representing data for ML models that are learning from multi-modal data. # Language + Vision ## Use case Typically, people working on Language+Vision tasks, have a reference dataset (either in JSON or JSONL format) and for each example, they have an identifier that specifies the reference image. For a practical example, you can refer to the [GQA](https://cs.stanford.edu/people/dorarad/gqa/download.html#seconddown) dataset. Currently, images are represented by either pooling-based features (average pooling of ResNet or VGGNet features, see [DeVries et.al, 2017](https://arxiv.org/abs/1611.08481), [Shekhar et.al, 2019](https://www.aclweb.org/anthology/N19-1265.pdf)) where you have a single vector for every image. Another option is to use a set of feature maps for every image extracted from a specific layer of a CNN (see [Xu et.al, 2015](https://arxiv.org/abs/1502.03044)). A more recent option, especially with large-scale multi-modal transformers [Li et. al, 2019](https://arxiv.org/abs/1908.03557), is to use FastRCNN features. For all these types of features, people use one of the following formats: 1. [HD5F](https://pypi.org/project/h5py/) 2. [NumPy](https://numpy.org/doc/stable/reference/generated/numpy.savez.html) 3. [LMDB](https://lmdb.readthedocs.io/en/release/) ## Implementation considerations I was thinking about possible ways of implementing this feature. As mentioned above, depending on the model, different visual features can be used. This step usually relies on another model (say ResNet-101) that is used to generate the visual features for each image used in the dataset. Typically, this step is done in a separate script that completes the feature generation procedure. The usual processing steps for these datasets are the following: 1. Download dataset 2. Download images associated with the dataset 3. Write a script that generates the visual features for every image and store them in a specific file 4. Create a DataLoader that maps the visual features to the corresponding language example In my personal projects, I've decided to ignore HD5F because it doesn't have out-of-the-box support for multi-processing (see this PyTorch [issue](https://github.com/pytorch/pytorch/issues/11929)). I've been successfully using a NumPy compressed file for each image so that I can store any sort of information in it. For ease of use of all these Language+Vision datasets, it would be really handy to have a way to associate the visual features with the text and store them in an efficient way. That's why I immediately thought about the HuggingFace NLP backend based on Apache Arrow. The assumption here is that the external modality will be mapped to a N-dimensional tensor so easily represented by a NumPy array. 
Looking forward to hearing your thoughts about it!
false
636,702,849
https://api.github.com/repos/huggingface/datasets/issues/262
https://github.com/huggingface/datasets/pull/262
262
Add new dataset ANLI Round 1
closed
1
2020-06-11T04:14:57
2020-06-12T22:03:03
2020-06-12T22:03:03
easonnie
[]
Adding the new dataset [ANLI](https://github.com/facebookresearch/anli/). I'm not familiar with how to add a new dataset. Let me know if there is any issue. I only include round 1 data here. There will be round 2, round 3 and more in the future, with potentially different formats. I think it will be better to separate them.
true
636,372,380
https://api.github.com/repos/huggingface/datasets/issues/261
https://github.com/huggingface/datasets/issues/261
261
Downloading dataset error with pyarrow.lib.RecordBatch
closed
2
2020-06-10T16:04:19
2020-06-11T14:35:12
2020-06-11T14:35:12
cuent
[]
I am trying to download `sentiment140` and I have the following error ``` /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 418 verify_infos = not save_infos and not ignore_verifications 419 self._download_and_prepare( --> 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 421 ) 422 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 472 try: 473 # Prepare split will record examples associated to the split --> 474 self._prepare_split(split_generator, **prepare_split_kwargs) 475 except OSError: 476 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or "")) /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator) 652 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): 653 example = self.info.features.encode_example(record) --> 654 writer.write(example) 655 num_examples, num_bytes = writer.finalize() 656 /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write(self, example, writer_batch_size) 143 self._build_writer(pa_table=pa.Table.from_pydict(example)) 144 if writer_batch_size is not None and len(self.current_rows) >= writer_batch_size: --> 145 self.write_on_file() 146 147 def write_batch( /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_on_file(self) 127 else: 128 # All good --> 129 self._write_array_on_file(pa_array) 130 self.current_rows = [] 131 /usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in _write_array_on_file(self, pa_array) 96 def _write_array_on_file(self, pa_array): 97 """Write a PyArrow Array""" ---> 98 pa_batch = pa.RecordBatch.from_struct_array(pa_array) 99 self._num_bytes += pa_array.nbytes 100 self.pa_writer.write_batch(pa_batch) AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array' ``` I installed the last version and ran the following command: ```python import nlp sentiment140 = nlp.load_dataset('sentiment140', cache_dir='/content') ```
false
636,261,118
https://api.github.com/repos/huggingface/datasets/issues/260
https://github.com/huggingface/datasets/pull/260
260
Consistency fixes
closed
0
2020-06-10T13:44:42
2020-06-11T10:34:37
2020-06-11T10:34:36
julien-c
[]
A few bugs I've found while hacking
true
636,239,529
https://api.github.com/repos/huggingface/datasets/issues/259
https://github.com/huggingface/datasets/issues/259
259
documentation missing how to split a dataset
closed
7
2020-06-10T13:18:13
2023-03-14T13:56:07
2020-06-18T22:20:24
fotisj
[]
I am trying to understand how to split a dataset (as an arrow_dataset). I know I can do something like this to access a split which is already in the original dataset: `ds_test = nlp.load_dataset('imdb', split='test')` But how can I split ds_test into a test and a validation set (without reading the data into memory and keeping the arrow_dataset as container)? I guess it has something to do with the module split :-) but there is no real documentation in the code, only a reference to a longer description: > See the [guide on splits](https://github.com/huggingface/nlp/tree/master/docs/splits.md) for more information. But the guide seems to be missing. To clarify: I know that this has been modelled after the tensorflow datasets and that some of the documentation there can be used [like this one](https://www.tensorflow.org/datasets/splits). But to come back to the example above: I cannot simply split the testset doing this: `ds_test = nlp.load_dataset('imdb', split='test[:5000]')` `ds_val = nlp.load_dataset('imdb', split='test[5000:]')` because the imdb test data is sorted by class (probably not a good idea anyway)
false
635,859,525
https://api.github.com/repos/huggingface/datasets/issues/258
https://github.com/huggingface/datasets/issues/258
258
Why is the dataset after tokenization far larger than the original one?
closed
4
2020-06-10T01:27:07
2020-06-10T12:46:34
2020-06-10T12:46:34
richarddwang
[]
I tokenize the wiki dataset with `map` and cache the results. ``` def tokenize_tfm(example): example['input_ids'] = hf_fast_tokenizer.convert_tokens_to_ids(hf_fast_tokenizer.tokenize(example['text'])) return example wiki = nlp.load_dataset('wikipedia', '20200501.en', cache_dir=cache_dir)['train'] wiki.map(tokenize_tfm, cache_file_name=cache_dir/"wikipedia/20200501.en/1.0.0/tokenized_wiki.arrow") ``` and when I check their sizes ``` ls -l --block-size=M 17460M wikipedia-train.arrow 47511M tokenized_wiki.arrow ``` the tokenized one is over 2x the size of the original one. Is there something I did wrong?
false
635,620,979
https://api.github.com/repos/huggingface/datasets/issues/257
https://github.com/huggingface/datasets/issues/257
257
Tokenizer pickling issue fix not landed in `nlp` yet?
closed
2
2020-06-09T17:12:34
2020-06-10T21:45:32
2020-06-09T17:26:53
sarahwie
[]
Unless I recreate an arrow_dataset from my loaded nlp dataset myself (which I think does not use the cache by default), I get the following error when applying the map function: ``` dataset = nlp.load_dataset('cos_e') tokenizer = GPT2TokenizerFast.from_pretrained('gpt2', cache_dir=cache_dir) for split in dataset.keys(): dataset[split].map(lambda x: some_function(x, tokenizer)) ``` ``` 06/09/2020 10:09:19 - INFO - nlp.builder - Constructing Dataset for split train[:10], from /home/sarahw/.cache/huggingface/datasets/cos_e/default/0.0.1 Traceback (most recent call last): File "generation/input_to_label_and_rationale.py", line 390, in <module> main() File "generation/input_to_label_and_rationale.py", line 263, in main dataset[split] = dataset[split].map(lambda x: input_to_explanation_plus_label(x, tokenizer, max_length, datasource=data_args.task_name, wt5=(model_class=='t5'), expl_only=model_args.rationale_only), batched=False) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/arrow_dataset.py", line 522, in map cache_file_name = self._get_cache_file_path(function, cache_kwargs) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/arrow_dataset.py", line 381, in _get_cache_file_path function_bytes = dumps(function) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 257, in dumps dump(obj, file) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 250, in dump Pickler(file).dump(obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 445, in dump StockPickler.dump(self, obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 485, in dump self.save(obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 1410, in save_function pickler.save_reduce(_create_function, (obj.__code__, File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 690, in save_reduce save(args) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 899, in save_tuple save(element) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 899, in save_tuple save(element) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 1147, in save_cell pickler.save_reduce(_create_cell, (f,), obj=obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 690, in save_reduce save(args) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 884, in save_tuple 
save(element) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 601, in save self.save_reduce(obj=obj, *rv) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 715, in save_reduce save(state) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 912, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 969, in save_dict self._batch_setitems(obj.items()) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 995, in _batch_setitems save(v) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 601, in save self.save_reduce(obj=obj, *rv) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 715, in save_reduce save(state) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save f(self, obj) # Call unbound method with explicit self File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 912, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 969, in save_dict self._batch_setitems(obj.items()) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 995, in _batch_setitems save(v) File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 576, in save rv = reduce(self.proto) TypeError: cannot pickle 'Tokenizer' object ``` Fix seems to be in the tokenizers [`0.8.0.dev1 pre-release`](https://github.com/huggingface/tokenizers/issues/87), which I can't install with any package managers.
false
635,596,295
https://api.github.com/repos/huggingface/datasets/issues/256
https://github.com/huggingface/datasets/issues/256
256
[Feature request] Add a feature to dataset
closed
5
2020-06-09T16:38:12
2020-06-09T16:51:42
2020-06-09T16:51:42
sarahwie
[]
Is there a straightforward way to add a field to the arrow_dataset, prior to performing map?
false
635,300,822
https://api.github.com/repos/huggingface/datasets/issues/255
https://github.com/huggingface/datasets/pull/255
255
Add dataset/piaf
closed
1
2020-06-09T10:16:01
2020-06-12T08:31:27
2020-06-12T08:31:27
RachelKer
[]
Small SQuAD-like French QA dataset [PIAF](https://www.aclweb.org/anthology/2020.lrec-1.673.pdf)
true
635,057,568
https://api.github.com/repos/huggingface/datasets/issues/254
https://github.com/huggingface/datasets/issues/254
254
[Feature request] Be able to remove a specific sample of the dataset
closed
1
2020-06-09T02:22:13
2020-06-09T08:41:38
2020-06-09T08:41:38
astariul
[]
As mentioned in #117, it's currently not possible to remove a sample from the dataset. But it is an important use case: after applying some preprocessing, some samples might be empty, for example. We should be able to remove these samples from the dataset, or at least mark them as `removed` so that when iterating the dataset, we don't iterate over these samples. I think it should be a feature. What do you think? --- Any workaround in the meantime?
false
634,791,939
https://api.github.com/repos/huggingface/datasets/issues/253
https://github.com/huggingface/datasets/pull/253
253
add flue dataset
closed
10
2020-06-08T17:11:09
2023-09-24T09:46:03
2020-07-16T07:50:59
mariamabarham
[]
This PR adds the Flue dataset as requested in issue #223. @lbourdois made a detailed description in that issue.
true
634,563,239
https://api.github.com/repos/huggingface/datasets/issues/252
https://github.com/huggingface/datasets/issues/252
252
NonMatchingSplitsSizesError error when reading the IMDB dataset
closed
4
2020-06-08T12:26:24
2021-08-27T15:20:58
2020-06-08T14:01:26
antmarakis
[]
Hi! I am trying to load the `imdb` dataset with this line: `dataset = nlp.load_dataset('imdb', data_dir='/A/PATH', cache_dir='/A/PATH')` but I am getting the following error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/load.py", line 517, in load_dataset save_infos=save_infos, File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 363, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 421, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=5929447, num_examples=4537, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] ``` Am I overlooking something? Thanks!
false