Dataset schema:

| Column | Type | Range |
|---|---|---|
| id | int64 | 599M to 3.29B |
| url | string | lengths 58 to 61 |
| html_url | string | lengths 46 to 51 |
| number | int64 | 1 to 7.72k |
| title | string | lengths 1 to 290 |
| state | string | 2 classes |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | lengths 3 to 26 |
| labels | list | lengths 0 to 4 |
| body | string | lengths 0 to 228k |
| is_pull_request | bool | 2 classes |
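As a quick orientation, here is a minimal sketch of loading a dataset with this schema and inspecting it; the repository id `user/github-issues` is a hypothetical placeholder, not something stated above.

```python
# Minimal sketch; "user/github-issues" is a hypothetical repository id used only to
# illustrate the schema summarized in the table above.
from datasets import load_dataset

issues = load_dataset("user/github-issues", split="train")
print(issues.features)  # column names and dtypes, matching the table above

# Separate issues from pull requests using the boolean column
pulls = issues.filter(lambda row: row["is_pull_request"])
print(f"{len(pulls)} of {len(issues)} rows are pull requests")
```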
754,422,710
https://api.github.com/repos/huggingface/datasets/issues/960
https://github.com/huggingface/datasets/pull/960
960
Add code to automate parts of the dataset card
closed
0
2020-12-01T14:04:51
2023-09-24T09:50:38
2021-04-26T07:56:01
patrickvonplaten
[]
Most parts of the "Dataset Structure" section can be generated automatically. This PR adds some code to do so.
true
754,418,610
https://api.github.com/repos/huggingface/datasets/issues/959
https://github.com/huggingface/datasets/pull/959
959
Add Tunizi Dataset
closed
0
2020-12-01T13:59:39
2020-12-03T14:21:41
2020-12-03T14:21:40
abhishekkrthakur
[]
true
754,404,095
https://api.github.com/repos/huggingface/datasets/issues/958
https://github.com/huggingface/datasets/pull/958
958
dataset(ncslgr): add initial loading script
closed
3
2020-12-01T13:41:17
2020-12-07T16:35:39
2020-12-07T16:35:39
AmitMY
[]
clean #789
true
754,380,073
https://api.github.com/repos/huggingface/datasets/issues/957
https://github.com/huggingface/datasets/pull/957
957
Isixhosa ner corpus
closed
0
2020-12-01T13:08:36
2020-12-01T18:14:58
2020-12-01T18:14:58
yvonnegitau
[]
true
754,368,378
https://api.github.com/repos/huggingface/datasets/issues/956
https://github.com/huggingface/datasets/pull/956
956
Add Norwegian NER
closed
1
2020-12-01T12:51:02
2020-12-02T08:53:11
2020-12-01T18:09:21
jplu
[]
This PR adds the [Norwegian NER](https://github.com/ljos/navnkjenner) dataset. I have added the `conllu` package as a test dependency. This is required to properly parse the `.conllu` files.
true
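As a side note, a minimal sketch (not taken from this PR; the file name is a hypothetical example) of how the `conllu` package mentioned above parses `.conllu` files:

```python
# Sketch of parsing a .conllu file with the `conllu` package; the file name is an
# assumption for illustration, not one referenced in the PR.
from conllu import parse

with open("no_bokmaal-ud-train.conllu", encoding="utf-8") as f:
    sentences = parse(f.read())

# Each sentence is a TokenList; each token behaves like a dict of CoNLL-U fields.
print([token["form"] for token in sentences[0]])
```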
754,367,291
https://api.github.com/repos/huggingface/datasets/issues/955
https://github.com/huggingface/datasets/pull/955
955
Added PragmEval benchmark
closed
10
2020-12-01T12:49:15
2020-12-04T10:43:32
2020-12-03T09:36:47
sileod
[]
true
754,362,012
https://api.github.com/repos/huggingface/datasets/issues/954
https://github.com/huggingface/datasets/pull/954
954
add prachathai67k
closed
3
2020-12-01T12:40:55
2020-12-02T05:12:11
2020-12-02T04:43:52
cstorm125
[]
`prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com. The prachathai-67k dataset was scraped from the news site Prachathai. We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles with 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by @lukkiddd and cleaned by @cstorm125. You can also see preliminary exploration at https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb
true
754,359,942
https://api.github.com/repos/huggingface/datasets/issues/953
https://github.com/huggingface/datasets/pull/953
953
added health_fact dataset
closed
1
2020-12-01T12:37:44
2020-12-01T23:11:33
2020-12-01T23:11:33
bhavitvyamalik
[]
Added dataset Explainable Fact-Checking for Public Health Claims (dataset_id: health_fact)
true
754,357,270
https://api.github.com/repos/huggingface/datasets/issues/952
https://github.com/huggingface/datasets/pull/952
952
Add orange sum
closed
0
2020-12-01T12:33:34
2020-12-01T15:44:00
2020-12-01T15:44:00
moussaKam
[]
Add OrangeSum, a French abstractive summarization dataset. Paper: [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321)
true
754,349,979
https://api.github.com/repos/huggingface/datasets/issues/951
https://github.com/huggingface/datasets/pull/951
951
Prachathai67k
closed
1
2020-12-01T12:21:52
2020-12-01T12:29:53
2020-12-01T12:28:26
cstorm125
[]
Add `prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com. The `prachathai-67k` dataset was scraped from the news site [Prachathai](https://prachathai.com). We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles with 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by [@lukkiddd](https://github.com/lukkiddd) and cleaned by [@cstorm125](https://github.com/cstorm125). Download the dataset [here](https://www.dropbox.com/s/fsxepdka4l2pr45/prachathai-67k.zip?dl=1). You can also see preliminary exploration in [exploration.ipynb](https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb). This dataset is a part of [pyThaiNLP](https://github.com/PyThaiNLP/) Thai text [classification-benchmarks](https://github.com/PyThaiNLP/classification-benchmarks). For the benchmark, we selected the following tags with substantial volume that resemble **classifying types of articles**: * `การเมือง` - politics * `สิทธิมนุษยชน` - human_rights * `คุณภาพชีวิต` - quality_of_life * `ต่างประเทศ` - international * `สังคม` - social * `สิ่งแวดล้อม` - environment * `เศรษฐกิจ` - economics * `วัฒนธรรม` - culture * `แรงงาน` - labor * `ความมั่นคง` - national_security * `ไอซีที` - ict * `การศึกษา` - education
true
754,318,686
https://api.github.com/repos/huggingface/datasets/issues/950
https://github.com/huggingface/datasets/pull/950
950
Support .xz file format
closed
0
2020-12-01T11:34:48
2020-12-01T13:39:18
2020-12-01T13:39:18
albertvillanova
[]
Add support to extract/uncompress files in .xz format.
true
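For context, a minimal sketch (an assumption, not the PR's actual code) of the kind of `.xz` extraction this adds support for, using only the standard library:

```python
# Sketch only; the file names are hypothetical and this is not the PR's implementation.
import lzma
import shutil

with lzma.open("corpus.txt.xz") as compressed, open("corpus.txt", "wb") as extracted:
    shutil.copyfileobj(compressed, extracted)  # stream-decompress the .xz payload
```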
754,317,777
https://api.github.com/repos/huggingface/datasets/issues/949
https://github.com/huggingface/datasets/pull/949
949
Add GermaNER Dataset
closed
1
2020-12-01T11:33:31
2020-12-03T14:06:41
2020-12-03T14:06:40
abhishekkrthakur
[]
true
754,306,260
https://api.github.com/repos/huggingface/datasets/issues/948
https://github.com/huggingface/datasets/pull/948
948
docs(ADD_NEW_DATASET): correct indentation for script
closed
0
2020-12-01T11:17:38
2020-12-01T11:25:18
2020-12-01T11:25:18
AmitMY
[]
true
754,286,658
https://api.github.com/repos/huggingface/datasets/issues/947
https://github.com/huggingface/datasets/pull/947
947
Add europeana newspapers
closed
0
2020-12-01T10:52:18
2020-12-02T09:42:35
2020-12-02T09:42:09
jplu
[]
This PR adds the [Europeana newspapers](https://github.com/EuropeanaNewspapers/ner-corpora) dataset.
true
754,278,632
https://api.github.com/repos/huggingface/datasets/issues/946
https://github.com/huggingface/datasets/pull/946
946
add PEC dataset
closed
3
2020-12-01T10:41:41
2020-12-03T02:47:14
2020-12-03T02:47:14
zhongpeixiang
[]
A persona-based empathetic conversation dataset published at EMNLP 2020.
true
754,273,920
https://api.github.com/repos/huggingface/datasets/issues/945
https://github.com/huggingface/datasets/pull/945
945
Adding Babi dataset - English version
closed
1
2020-12-01T10:35:36
2020-12-04T15:43:05
2020-12-04T15:42:54
thomwolf
[]
Adding the English version of bAbI. Samples are taken from ParlAI for consistency with the main users at the moment.
true
754,228,947
https://api.github.com/repos/huggingface/datasets/issues/944
https://github.com/huggingface/datasets/pull/944
944
Add German Legal Entity Recognition Dataset
closed
1
2020-12-01T09:38:22
2020-12-03T13:06:56
2020-12-03T13:06:55
abhishekkrthakur
[]
true
754,192,491
https://api.github.com/repos/huggingface/datasets/issues/943
https://github.com/huggingface/datasets/pull/943
943
The FLUE Benchmark
closed
0
2020-12-01T09:00:50
2020-12-01T15:24:38
2020-12-01T15:24:30
jplu
[]
This PR adds the [FLUE](https://github.com/getalp/Flaubert/tree/master/flue) benchmark, a set of different datasets to evaluate models for French content. Two datasets are missing: the French Treebank, which we can use only for research purposes and are not allowed to distribute, and the Word Sense Disambiguation for Nouns dataset, which will be added later.
true
754,162,318
https://api.github.com/repos/huggingface/datasets/issues/942
https://github.com/huggingface/datasets/issues/942
942
D
closed
0
2020-12-01T08:17:10
2020-12-03T16:42:53
2020-12-03T16:42:53
CryptoMiKKi
[]
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
754,141,321
https://api.github.com/repos/huggingface/datasets/issues/941
https://github.com/huggingface/datasets/pull/941
941
Add People's Daily NER dataset
closed
4
2020-12-01T07:48:53
2020-12-02T18:42:43
2020-12-02T18:42:41
JetRunner
[]
true
754,010,753
https://api.github.com/repos/huggingface/datasets/issues/940
https://github.com/huggingface/datasets/pull/940
940
Add MSRA NER dataset
closed
1
2020-12-01T05:02:11
2020-12-04T09:29:40
2020-12-01T07:25:53
JetRunner
[]
true
753,965,405
https://api.github.com/repos/huggingface/datasets/issues/939
https://github.com/huggingface/datasets/pull/939
939
add wisesight_sentiment
closed
4
2020-12-01T03:06:39
2020-12-02T04:52:38
2020-12-02T04:35:51
cstorm125
[]
Add `wisesight_sentiment` Social media messages in Thai language with sentiment label (positive, neutral, negative, question) Model Card: --- YAML tags: annotations_creators: - expert-generated language_creators: - found languages: - th licenses: - cc0-1.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-classification task_ids: - sentiment-classification --- # Dataset Card for wisesight_sentiment ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/PyThaiNLP/wisesight-sentiment - **Repository:** https://github.com/PyThaiNLP/wisesight-sentiment - **Paper:** - **Leaderboard:** https://www.kaggle.com/c/wisesight-sentiment/ - **Point of Contact:** https://github.com/PyThaiNLP/ ### Dataset Summary Wisesight Sentiment Corpus: Social media messages in Thai language with sentiment label (positive, neutral, negative, question) - Released to public domain under Creative Commons Zero v1.0 Universal license. - Labels: {"pos": 0, "neu": 1, "neg": 2, "q": 3} - Size: 26,737 messages - Language: Central Thai - Style: Informal and conversational. With some news headlines and advertisement. - Time period: Around 2016 to early 2019. With small amount from other period. - Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, car, hotels), with some current affairs. - Privacy: - Only messages that made available to the public on the internet (websites, blogs, social network sites). - For Facebook, this means the public comments (everyone can see) that made on a public page. - Private/protected messages and messages in groups, chat, and inbox are not included. - Alternations and modifications: - Keep in mind that this corpus does not statistically represent anything in the language register. - Large amount of messages are not in their original form. Personal data are removed or masked. - Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact. (Mis)spellings are kept intact. - Messages longer than 2,000 characters are removed. - Long non-Thai messages are removed. Duplicated message (exact match) are removed. 
- More characteristics of the data can be explore [this notebook](https://github.com/PyThaiNLP/wisesight-sentiment/blob/master/exploration.ipynb) ### Supported Tasks and Leaderboards Sentiment analysis / [Kaggle Leaderboard](https://www.kaggle.com/c/wisesight-sentiment/) ### Languages Thai ## Dataset Structure ### Data Instances ``` {'category': 'pos', 'texts': 'น่าสนนน'} {'category': 'neu', 'texts': 'ครับ #phithanbkk'} {'category': 'neg', 'texts': 'ซื้อแต่ผ้าอนามัยแบบเย็นมาค่ะ แบบว่าอีห่ากูนอนไม่ได้'} {'category': 'q', 'texts': 'มีแอลกอฮอลมั้ยคะ'} ``` ### Data Fields - `texts`: texts - `category`: sentiment of texts ranging from `pos` (positive; 0), `neu` (neutral; 1), `neg` (negative; 2) and `q` (question; 3) ### Data Splits | | train | valid | test | |-----------|-------|-------|-------| | # samples | 21628 | 2404 | 2671 | | # neu | 11795 | 1291 | 1453 | | # neg | 5491 | 637 | 683 | | # pos | 3866 | 434 | 478 | | # q | 476 | 42 | 57 | | avg words | 27.21 | 27.18 | 27.12 | | avg chars | 89.82 | 89.50 | 90.36 | ## Dataset Creation ### Curation Rationale Originally, the dataset was conceived for the [In-class Kaggle Competition](https://www.kaggle.com/c/wisesight-sentiment/) at Chulalongkorn university by [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University). It has since become one of the benchmarks for sentiment analysis in Thai. ### Source Data #### Initial Data Collection and Normalization - Style: Informal and conversational. With some news headlines and advertisement. - Time period: Around 2016 to early 2019. With small amount from other period. - Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, car, hotels), with some current affairs. - Privacy: - Only messages that made available to the public on the internet (websites, blogs, social network sites). - For Facebook, this means the public comments (everyone can see) that made on a public page. - Private/protected messages and messages in groups, chat, and inbox are not included. - Usernames and non-public figure names are removed - Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222) - If you see any personal data still remain in the set, please tell us - so we can remove them. - Alternations and modifications: - Keep in mind that this corpus does not statistically represent anything in the language register. - Large amount of messages are not in their original form. Personal data are removed or masked. - Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact. - (Mis)spellings are kept intact. - Messages longer than 2,000 characters are removed. - Long non-Thai messages are removed. Duplicated message (exact match) are removed. #### Who are the source language producers? Social media users in Thailand ### Annotations #### Annotation process - Sentiment values are assigned by human annotators. - A human annotator put his/her best effort to assign just one label, out of four, to a message. - Agreement, enjoyment, and satisfaction are positive. Disagreement, sadness, and disappointment are negative. - Showing interest in a topic or in a product is counted as positive. In this sense, a question about a particular product could has a positive sentiment value, if it shows the interest in the product. - Saying that other product or service is better is counted as negative. - General information or news title tend to be counted as neutral. 
#### Who are the annotators? Outsourced annotators hired by [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/) ### Personal and Sensitive Information - We trying to exclude any known personally identifiable information from this data set. - Usernames and non-public figure names are removed - Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222) - If you see any personal data still remain in the set, please tell us - so we can remove them. ## Considerations for Using the Data ### Social Impact of Dataset - `wisesight_sentiment` is the first and one of the few open datasets for sentiment analysis of social media data in Thai - There are risks of personal information that escape the anonymization process ### Discussion of Biases - A message can be ambiguous. When possible, the judgement will be based solely on the text itself. - In some situation, like when the context is missing, the annotator may have to rely on his/her own world knowledge and just guess. - In some cases, the human annotator may have an access to the message's context, like an image. These additional information are not included as part of this corpus. ### Other Known Limitations - The labels are imbalanced; over half of the texts are `neu` (neutral) whereas there are very few `q` (question). - Misspellings in social media texts make word tokenization process for Thai difficult, thus impacting the model performance ## Additional Information ### Dataset Curators Thanks [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp) community, [Kitsuchart Pasupa](http://www.it.kmitl.ac.th/~kitsuchart/) (Faculty of Information Technology, King Mongkut's Institute of Technology Ladkrabang), and [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University) for advice. The original Kaggle competition, using the first version of this corpus, can be found at https://www.kaggle.com/c/wisesight-sentiment/ ### Licensing Information - If applicable, copyright of each message content belongs to the original poster. - **Annotation data (labels) are released to public domain.** - [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/) helps facilitate the annotation, but does not necessarily agree upon the labels made by the human annotators. This annotation is for research purpose and does not reflect the professional work that Wisesight has been done for its customers. - The human annotator does not necessarily agree or disagree with the message. Likewise, the label he/she made to the message does not necessarily reflect his/her personal view towards the message. ### Citation Information Please cite the following if you make use of the dataset: Arthit Suriyawongkul, Ekapol Chuangsuwanich, Pattarawat Chormai, and Charin Polpanumas. 2019. **PyThaiNLP/wisesight-sentiment: First release.** September. BibTeX: ``` @software{bact_2019_3457447, author = {Suriyawongkul, Arthit and Chuangsuwanich, Ekapol and Chormai, Pattarawat and Polpanumas, Charin}, title = {PyThaiNLP/wisesight-sentiment: First release}, month = sep, year = 2019, publisher = {Zenodo}, version = {v1.0}, doi = {10.5281/zenodo.3457447}, url = {https://doi.org/10.5281/zenodo.3457447} } ```
true
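A minimal sketch of loading the dataset described in the card above; the `texts` and `category` field names come from the card's Data Fields section.

```python
# Sketch of loading wisesight_sentiment; field names follow the dataset card above.
from datasets import load_dataset

wisesight = load_dataset("wisesight_sentiment", split="train")
example = wisesight[0]
print(example["texts"], example["category"])  # message text and its sentiment label
```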
753,940,979
https://api.github.com/repos/huggingface/datasets/issues/938
https://github.com/huggingface/datasets/pull/938
938
V-1.0.0 of isizulu_ner_corpus
closed
1
2020-12-01T02:04:32
2020-12-01T23:34:36
2020-12-01T23:34:36
yvonnegitau
[]
true
753,921,078
https://api.github.com/repos/huggingface/datasets/issues/937
https://github.com/huggingface/datasets/issues/937
937
Local machine/cluster Beam Datasets example/tutorial
closed
2
2020-12-01T01:11:43
2024-03-15T16:05:14
2024-03-15T16:05:14
shangw-nvidia
[]
Hi, I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has a non-GCP or non-Dataflow example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner; however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get either runner to correctly produce the desired output. Thanks! Shang
false
753,915,603
https://api.github.com/repos/huggingface/datasets/issues/936
https://github.com/huggingface/datasets/pull/936
936
Added HANS parses and categories
closed
0
2020-12-01T00:58:16
2020-12-01T13:19:41
2020-12-01T13:19:40
TevenLeScao
[]
This pull request adds HANS missing information: the sentence parses, as well as the heuristic category.
true
753,863,055
https://api.github.com/repos/huggingface/datasets/issues/935
https://github.com/huggingface/datasets/pull/935
935
add PIB dataset
closed
4
2020-11-30T22:55:43
2020-12-01T23:17:11
2020-12-01T23:17:11
thevasudevgupta
[]
This pull request will add PIB dataset.
true
753,860,095
https://api.github.com/repos/huggingface/datasets/issues/934
https://github.com/huggingface/datasets/pull/934
934
small updates to the "add new dataset" guide
closed
1
2020-11-30T22:49:10
2020-12-01T04:56:22
2020-11-30T23:14:00
VictorSanh
[]
small updates (corrections/typos) to the "add new dataset" guide
true
753,854,272
https://api.github.com/repos/huggingface/datasets/issues/933
https://github.com/huggingface/datasets/pull/933
933
Add NumerSense
closed
0
2020-11-30T22:36:33
2020-12-01T20:25:50
2020-12-01T19:51:56
joeddav
[]
Adds the NumerSense dataset - Webpage/leaderboard: https://inklab.usc.edu/NumerSense/ - Paper: https://arxiv.org/abs/2005.00683 - Description: NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145 masked-word-prediction probes. Basically, it's a benchmark to see whether your MLM can figure out the right number in a fill-in-the-blank task based on commonsense knowledge (a bird has **two** legs)
true
753,840,300
https://api.github.com/repos/huggingface/datasets/issues/932
https://github.com/huggingface/datasets/pull/932
932
adding metooma dataset
closed
3
2020-11-30T22:09:49
2020-12-02T00:37:54
2020-12-02T00:37:54
akash418
[]
true
753,818,193
https://api.github.com/repos/huggingface/datasets/issues/931
https://github.com/huggingface/datasets/pull/931
931
[WIP] complex_webqa - Error zipfile.BadZipFile: Bad CRC-32
closed
1
2020-11-30T21:30:21
2022-10-03T09:40:09
2022-10-03T09:40:09
thomwolf
[ "dataset contribution" ]
Getting a `zipfile.BadZipFile: Bad CRC-32 for file 'web_snippets_train.json'` error when downloading the largest file from Dropbox: `https://www.dropbox.com/sh/7pkwkrfnwqhsnpo/AABVENv_Q9rFtnM61liyzO0La/web_snippets_train.json.zip?dl=1` Didn't manage to see how to solve that. Putting it aside for now.
true
753,801,204
https://api.github.com/repos/huggingface/datasets/issues/930
https://github.com/huggingface/datasets/pull/930
930
Lambada
closed
0
2020-11-30T21:02:33
2020-12-01T00:37:12
2020-12-01T00:37:11
VictorSanh
[]
Added LAMBADA dataset. A couple of points of attention (mostly because I am not sure) - The training data are compressed in a .tar file inside the main tar.gz file. I had to manually un-tar the training file to access the examples. - The dev and test splits don't have the `category` field so I put `None` by default. Happy to make changes if it doesn't respect the guidelines! Victor
true
753,737,794
https://api.github.com/repos/huggingface/datasets/issues/929
https://github.com/huggingface/datasets/pull/929
929
Add weibo NER dataset
closed
0
2020-11-30T19:22:47
2020-12-03T13:36:55
2020-12-03T13:36:54
abhishekkrthakur
[]
true
753,722,324
https://api.github.com/repos/huggingface/datasets/issues/928
https://github.com/huggingface/datasets/pull/928
928
Add the Multilingual Amazon Reviews Corpus
closed
0
2020-11-30T18:58:06
2020-12-01T16:04:30
2020-12-01T16:04:27
joeddav
[]
- **Name:** Multilingual Amazon Reviews Corpus* (`amazon_reviews_multi`) - **Description:** A collection of Amazon reviews in English, Japanese, German, French, Spanish and Chinese. - **Paper:** https://arxiv.org/abs/2010.02573 ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
true
753,679,020
https://api.github.com/repos/huggingface/datasets/issues/927
https://github.com/huggingface/datasets/issues/927
927
Hello
closed
0
2020-11-30T17:50:05
2020-11-30T17:50:30
2020-11-30T17:50:30
k125-ak
[]
false
753,676,069
https://api.github.com/repos/huggingface/datasets/issues/926
https://github.com/huggingface/datasets/pull/926
926
add inquisitive
closed
3
2020-11-30T17:45:22
2020-12-02T13:45:22
2020-12-02T13:40:13
patil-suraj
[]
Adding inquisitive qg dataset More info: https://github.com/wjko2/INQUISITIVE
true
753,672,661
https://api.github.com/repos/huggingface/datasets/issues/925
https://github.com/huggingface/datasets/pull/925
925
Add Turku NLP Corpus for Finnish NER
closed
1
2020-11-30T17:40:19
2020-12-03T14:07:11
2020-12-03T14:07:10
abhishekkrthakur
[]
true
753,631,951
https://api.github.com/repos/huggingface/datasets/issues/924
https://github.com/huggingface/datasets/pull/924
924
Add DART
closed
1
2020-11-30T16:42:37
2020-12-02T03:13:42
2020-12-02T03:13:41
lhoestq
[]
- **Name:** *DART* - **Description:** *DART is a large dataset for open-domain structured data record to text generation.* - **Paper:** *https://arxiv.org/abs/2007.02871* - **Data:** *https://github.com/Yale-LILY/dart#leaderboard* ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
true
753,569,220
https://api.github.com/repos/huggingface/datasets/issues/923
https://github.com/huggingface/datasets/pull/923
923
Add CC-100 dataset
closed
10
2020-11-30T15:23:22
2021-04-20T13:34:17
2021-04-20T13:34:17
albertvillanova
[ "wontfix" ]
Add CC-100. Close #773
true
753,559,130
https://api.github.com/repos/huggingface/datasets/issues/922
https://github.com/huggingface/datasets/pull/922
922
Add XOR QA Dataset
closed
4
2020-11-30T15:10:54
2020-12-02T03:12:21
2020-12-02T03:12:21
sumanthd17
[]
Added XOR Question Answering Dataset. The link to the dataset can be found [here](https://nlp.cs.washington.edu/xorqa/) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
true
753,445,747
https://api.github.com/repos/huggingface/datasets/issues/920
https://github.com/huggingface/datasets/pull/920
920
add dream dataset
closed
6
2020-11-30T12:40:14
2020-12-03T16:45:12
2020-12-02T15:39:12
patil-suraj
[]
Adding DREAM, a dataset for dialogue-based reading comprehension. More details: https://dataset.org/dream/ https://github.com/nlpdata/dream
true
753,434,472
https://api.github.com/repos/huggingface/datasets/issues/919
https://github.com/huggingface/datasets/issues/919
919
wrong length with datasets
closed
2
2020-11-30T12:23:39
2020-11-30T12:37:27
2020-11-30T12:37:26
rabeehk
[]
Hi, I have an MRPC dataset which I converted to seq2seq format, so it looks like this: `Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10) ` I feed it to a dataloader: ``` dataloader = DataLoader( train_dataset, batch_size=self.args.train_batch_size, sampler=train_sampler, collate_fn=self.data_collator, drop_last=self.args.dataloader_drop_last, num_workers=self.args.dataloader_num_workers, ) ``` Now if I call `len(dataloader)` it returns 1, which is wrong; it should be 10. Could you assist me please? Thanks
false
753,397,440
https://api.github.com/repos/huggingface/datasets/issues/918
https://github.com/huggingface/datasets/pull/918
918
Add conll2002
closed
0
2020-11-30T11:29:35
2020-11-30T18:34:30
2020-11-30T18:34:29
lhoestq
[]
Adding the Conll2002 dataset for NER. More info here : https://www.clips.uantwerpen.be/conll2002/ner/ ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
true
753,391,591
https://api.github.com/repos/huggingface/datasets/issues/917
https://github.com/huggingface/datasets/pull/917
917
Addition of Concode Dataset
closed
8
2020-11-30T11:20:59
2020-12-29T02:55:36
2020-12-29T02:55:36
reshinthadithyan
[]
## Overview The CONCODE dataset contains pairs of NL queries and the corresponding code (contextual code generation). Reference links: Paper = https://arxiv.org/pdf/1904.09086.pdf GitHub = https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/text-to-code
true
753,376,643
https://api.github.com/repos/huggingface/datasets/issues/916
https://github.com/huggingface/datasets/pull/916
916
Add Swedish NER Corpus
closed
2
2020-11-30T10:59:51
2020-12-02T03:10:50
2020-12-02T03:10:49
abhishekkrthakur
[]
true
753,118,481
https://api.github.com/repos/huggingface/datasets/issues/915
https://github.com/huggingface/datasets/issues/915
915
Shall we change the hashing to encoding to reduce potential replicated cache files?
open
2
2020-11-30T03:50:46
2020-12-24T05:11:49
null
zhuzilin
[ "enhancement", "generic discussion" ]
Hi there. For now, we are using `xxhash` to hash the transformations into a fingerprint, and we save a copy of the processed dataset to disk if there is a new hash value. However, some transformations are idempotent or commutative with each other. I think encoding the transformation chain as the fingerprint may help in those cases, for example using `base64.urlsafe_b64encode`. In this way, before we save a new copy, we can decode the transformation chain and normalize it so we don't miss potential reuse. As the main targets of this project are really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some writes. If you have interest in this, I'd love to help :).
false
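A purely illustrative sketch of the idea proposed above (none of these helpers exist in `datasets`): encode the transformation chain so it can be decoded and normalized before deciding whether a new cache file is needed.

```python
# Illustrative only; `datasets` does not provide these helpers, and the normalization
# shown (collapsing an immediately repeated transform) is deliberately naive.
import base64
import json

def encode_chain(transforms):
    payload = json.dumps(transforms, sort_keys=True).encode("utf-8")
    return base64.urlsafe_b64encode(payload).decode("ascii")

def decode_chain(fingerprint):
    return json.loads(base64.urlsafe_b64decode(fingerprint.encode("ascii")))

chain = [{"op": "map", "fn": "lowercase"}, {"op": "map", "fn": "lowercase"}]
normalized = [t for i, t in enumerate(chain) if i == 0 or t != chain[i - 1]]
assert decode_chain(encode_chain(normalized)) == normalized
```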
752,956,106
https://api.github.com/repos/huggingface/datasets/issues/914
https://github.com/huggingface/datasets/pull/914
914
Add list_github_datasets api for retrieving dataset name list in github repo
closed
4
2020-11-29T16:42:15
2020-12-02T07:21:16
2020-12-02T07:21:16
zhuzilin
[]
Thank you for your great effort on unifying data processing for NLP! This PR tries to add a new API, `list_github_datasets`, in the `inspect` module. The reason for it is that the current `list_datasets` API needs to access https://huggingface.co/api/datasets to get a large json. However, this connection can be really slow... (I was visiting from China) and from my own experience, most of the time `requests.get` fails to download the whole json after a long wait and triggers a fault in `r.json()`. I also noticed that the current implementation first tries to download from GitHub, which lets me smoothly run `load_dataset('squad')` in the example. Therefore, I think it would be better if we had an API to get the list of datasets that are available on GitHub; it would also improve newcomers' experience (it is a little frustrating if one cannot successfully run the first function in the README example) until we have a faster source for huggingface.co. As for the implementation, I've added a `dataset_infos.json` file under the `datasets` folder, and it has the following structure: ```json { "id": "aeslc", "folder": "datasets/aeslc", "dataset_infos": "datasets/aeslc/dataset_infos.json" }, ... { "id": "json", "folder": "datasets/json" }, ... ``` The script I used to generate this file is: ```python import json import os DATASETS_BASE_DIR = "/root/datasets" DATASET_INFOS_JSON = "dataset_infos.json" datasets = [] for item in os.listdir(os.path.join(DATASETS_BASE_DIR, "datasets")): if os.path.isdir(os.path.join(DATASETS_BASE_DIR, "datasets", item)): datasets.append(item) datasets.sort() total_ds_info = [] for ds in datasets: ds_dir = os.path.join("datasets", ds) ds_info_dir = os.path.join(ds_dir, DATASET_INFOS_JSON) if os.path.isfile(os.path.join(DATASETS_BASE_DIR, ds_info_dir)): total_ds_info.append({"id": ds, "folder": ds_dir, "dataset_infos": ds_info_dir}) else: total_ds_info.append({"id": ds, "folder": ds_dir}) with open(DATASET_INFOS_JSON, "w") as f: json.dump(total_ds_info, f) ``` The new `dataset_infos.json` is saved as formatted JSON so that it is easy to add new datasets. When calling `list_github_datasets`, the user gets the list of dataset names in this GitHub repo, and if `with_details` is set to `True`, they can get the URL of the specific dataset info. Thank you for your time reviewing this PR :).
true
752,892,020
https://api.github.com/repos/huggingface/datasets/issues/913
https://github.com/huggingface/datasets/pull/913
913
My new dataset PEC
closed
6
2020-11-29T11:10:37
2020-12-01T10:41:53
2020-12-01T10:41:53
zhongpeixiang
[]
A new dataset PEC published in EMNLP 2020.
true
752,806,215
https://api.github.com/repos/huggingface/datasets/issues/911
https://github.com/huggingface/datasets/issues/911
911
datasets module not found
closed
1
2020-11-29T01:24:15
2020-11-29T14:33:09
2020-11-29T14:33:09
sbassam
[]
Currently, running `from datasets import load_dataset` will throw a `ModuleNotFoundError: No module named 'datasets'` error.
false
752,772,723
https://api.github.com/repos/huggingface/datasets/issues/910
https://github.com/huggingface/datasets/issues/910
910
Grindr meeting app web.Grindr
closed
0
2020-11-28T21:36:23
2020-11-29T10:11:51
2020-11-29T10:11:51
jackin34
[]
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
false
752,508,299
https://api.github.com/repos/huggingface/datasets/issues/909
https://github.com/huggingface/datasets/pull/909
909
Add FiNER dataset
closed
9
2020-11-27T23:54:20
2020-12-07T16:56:23
2020-12-07T16:56:23
stefan-it
[]
Hi, this PR adds "A Finnish News Corpus for Named Entity Recognition" as a new `finer` dataset. The dataset is described in [this paper](https://arxiv.org/abs/1908.04212). The data is publicly available in [this GitHub repository](https://github.com/mpsilfve/finer-data). Notice: they provide two test sets. The additional test set taken from Wikipedia is named the "test_wikipedia" split.
true
752,428,652
https://api.github.com/repos/huggingface/datasets/issues/908
https://github.com/huggingface/datasets/pull/908
908
Add dependency on black for tests
closed
1
2020-11-27T19:12:48
2020-11-27T21:46:53
2020-11-27T21:46:52
albertvillanova
[]
Add package 'black' as an installation requirement for tests.
true
752,422,351
https://api.github.com/repos/huggingface/datasets/issues/907
https://github.com/huggingface/datasets/pull/907
907
Remove os.path.join from all URLs
closed
0
2020-11-27T18:55:30
2020-11-29T22:48:20
2020-11-29T22:48:19
albertvillanova
[]
Remove `os.path.join` from all URLs in dataset scripts.
true
752,403,395
https://api.github.com/repos/huggingface/datasets/issues/906
https://github.com/huggingface/datasets/pull/906
906
Fix url with backslash in windows for blimp and pg19
closed
0
2020-11-27T17:59:11
2020-11-27T18:19:56
2020-11-27T18:19:56
lhoestq
[]
Following #903, I also fixed blimp and pg19, which were using `os.path.join` to create URLs. cc @albertvillanova
true
752,395,456
https://api.github.com/repos/huggingface/datasets/issues/905
https://github.com/huggingface/datasets/pull/905
905
Disallow backslash in urls
closed
2
2020-11-27T17:38:28
2020-11-29T22:48:37
2020-11-29T22:48:36
lhoestq
[]
Following #903, @albertvillanova noticed that there is sometimes bad usage of `os.path.join` in dataset scripts to create URLs. However, this should be avoided since it doesn't work on Windows. I'm suggesting a test to make sure that none of the URLs in the dataset scripts contain backslashes. The test works by adding a callback feature to the MockDownloadManager used to test the dataset scripts. In a download callback I just make sure that the URL is valid.
true
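An illustrative sketch of the kind of check such a download callback could run (an assumption about its shape, not the actual test code):

```python
# Illustrative only; the real check lives in the datasets test suite. The callback
# simply rejects any URL containing a backslash, the failure mode from os.path.join
# on Windows described in #903.
def check_url_has_no_backslash(url: str) -> None:
    assert "\\" not in url, f"Invalid URL (contains a backslash): {url}"
```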
752,372,743
https://api.github.com/repos/huggingface/datasets/issues/904
https://github.com/huggingface/datasets/pull/904
904
Very detailed step-by-step on how to add a dataset
closed
1
2020-11-27T16:45:21
2020-11-30T09:56:27
2020-11-30T09:56:26
thomwolf
[]
Add very detailed step-by-step instructions to add a new dataset to the library.
true
752,360,614
https://api.github.com/repos/huggingface/datasets/issues/903
https://github.com/huggingface/datasets/pull/903
903
Fix URL with backslash in Windows
closed
8
2020-11-27T16:26:24
2020-11-27T18:04:46
2020-11-27T18:04:46
albertvillanova
[]
On Windows, `os.path.join` generates URLs containing backslashes when the first "path" does not end with a slash. In general, `os.path.join` should be avoided when generating URLs.
true
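A small sketch of the pitfall described above (the URL is a made-up example):

```python
# On Windows, os.path.join uses "\" as the separator, which corrupts URLs;
# posixpath.join keeps "/" on every platform. The URL below is a made-up example.
import os
import posixpath

base = "https://example.com/data"
print(os.path.join(base, "train.csv"))    # on Windows: https://example.com/data\train.csv
print(posixpath.join(base, "train.csv"))  # everywhere: https://example.com/data/train.csv
```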
752,345,739
https://api.github.com/repos/huggingface/datasets/issues/902
https://github.com/huggingface/datasets/pull/902
902
Follow cache_dir parameter to gcs downloader
closed
0
2020-11-27T16:02:06
2020-11-29T22:48:54
2020-11-29T22:48:53
lhoestq
[]
As noticed in #900, the cache_dir parameter was not propagated to the downloader in the case of an already processed dataset hosted on our Google storage (one of them is Natural Questions). Fix #900
true
752,233,851
https://api.github.com/repos/huggingface/datasets/issues/901
https://github.com/huggingface/datasets/pull/901
901
Addition of Nl2Bash Dataset
closed
3
2020-11-27T12:53:55
2020-11-29T18:09:25
2020-11-29T18:08:51
reshinthadithyan
[]
## Overview The NL2Bash data contains over 10,000 instances of Linux shell commands and their corresponding natural language descriptions provided by experts, from the Tellina system. The dataset features 100+ commonly used shell utilities. ## Footnotes This marks the first machine-learning-on-source-code dataset in the datasets module. It'll be really useful, as a lot of research in this direction involves Transformer-based models. Thanks. ### Reference Links > Paper Link = https://arxiv.org/pdf/1802.08979.pdf > Github Link = https://github.com/TellinaTool/nl2bash
true
752,214,066
https://api.github.com/repos/huggingface/datasets/issues/900
https://github.com/huggingface/datasets/issues/900
900
datasets.load_dataset() custom caching directory bug
closed
1
2020-11-27T12:18:53
2020-11-29T22:48:53
2020-11-29T22:48:53
SapirWeissbuch
[]
Hello, I'm having issue with loading a dataset with a custom `cache_dir`. Despite specifying the output dir, it is still downloaded to `~/.cache`. ## Environment info - `datasets` version: 1.1.3 - Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1 - Python version: 3.7.3 ## The code I'm running: ```python import datasets from pathlib import Path validation_dataset = datasets.load_dataset("natural_questions", split="validation[:5%]", cache_dir=Path("./data")) ``` ## The output: * The dataset is downloaded to my home directory's `.cache` * A new empty directory named "`natural_questions` is created in the specified directory `.data` * `tree data` in the shell outputs: ``` data └── natural_questions └── default └── 0.0.2 3 directories, 0 files ``` The output: ``` Downloading: 8.61kB [00:00, 5.11MB/s] Downloading: 13.6kB [00:00, 7.89MB/s] Using custom data configuration default Downloading and preparing dataset natural_questions/default (download: 41.97 GiB, generated: 92.95 GiB, post-processed: Unknown size, total: 134.92 GiB) to ./data/natural_questions/default/0.0.2/867dbbaf9137c1b8 3ecb19f5eb80559e1002ea26e702c6b919cfa81a17a8c531... Downloading: 100%|██████████████████████████████████████████████████| 13.6k/13.6k [00:00<00:00, 1.51MB/s] Downloading: 7%|███▎ | 6.70G/97.4G [03:46<1:37:05, 15.6MB/s] ``` ## Expected behaviour: The dataset "Natural Questions" should be downloaded to the directory "./data"
false
752,191,227
https://api.github.com/repos/huggingface/datasets/issues/899
https://github.com/huggingface/datasets/pull/899
899
Allow arrow based builder in auto dummy data generation
closed
0
2020-11-27T11:39:38
2020-11-27T13:30:09
2020-11-27T13:30:08
lhoestq
[]
Following #898 I added support for arrow based builder for the auto dummy data generator
true
752,148,284
https://api.github.com/repos/huggingface/datasets/issues/898
https://github.com/huggingface/datasets/pull/898
898
Adding SQA dataset
closed
2
2020-11-27T10:29:18
2020-12-15T12:54:40
2020-12-15T12:54:19
thomwolf
[]
As discussed in #880 Seems like automatic dummy-data generation doesn't work if the builder is a `ArrowBasedBuilder`, do you think you could take a look @lhoestq ?
true
752,100,256
https://api.github.com/repos/huggingface/datasets/issues/897
https://github.com/huggingface/datasets/issues/897
897
Dataset viewer issues
closed
5
2020-11-27T09:14:34
2021-10-31T09:12:01
2021-10-31T09:12:01
BramVanroy
[ "nlp-viewer" ]
I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though: - the URL is still under `nlp`, perhaps an alias for `datasets` can be made - when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user ```bash IndexError: list index out of range Traceback: File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 316, in <module> st.table(style) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta rv = marshall_element(msg.delta.new_element) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element return method(dg, element, *args, **kwargs) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table data_frame_proto.marshall_data_frame(data, element.table) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame _marshall_styles(proto_df.style, df, styler) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles translated_style = styler._translate() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate * (len(clabels[0]) - len(hidden_columns)) ``` - there seems to be **an encoding issue** in the default view, the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co/nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then some syntax highlighteris used, and the special characters are coded correctly.
false
751,834,265
https://api.github.com/repos/huggingface/datasets/issues/896
https://github.com/huggingface/datasets/pull/896
896
Add template and documentation for dataset card
closed
0
2020-11-26T21:30:25
2020-11-28T01:10:15
2020-11-28T01:10:15
yjernite
[]
This PR adds a template for dataset cards, as well as a guide to filling out the template and a completed example for the ELI5 dataset, building on the work of @mcmillanmajora. New pull requests adding datasets should now have a README.md file which serves both to hold the tags we will use to index the datasets and as a data statement. The template is designed to be pretty extensive. The idea is that the person who uploads the dataset should put in all the basic information (at least the Dataset Description section) and whatever else they feel comfortable adding, and leave the `[More Information Needed]` annotation everywhere else as a placeholder. We will then work with @mcmillanmajora to involve the data authors more directly in filling out the remaining information. Direct links to: - [Documentation](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/templates/README_guide.md) - [Empty template](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/templates/README.md) - [ELI5 example](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/datasets/eli5/README.md)
true
751,782,295
https://api.github.com/repos/huggingface/datasets/issues/895
https://github.com/huggingface/datasets/pull/895
895
Better messages regarding split naming
closed
0
2020-11-26T18:55:46
2020-11-27T13:31:00
2020-11-27T13:30:59
lhoestq
[]
I made the error message explicit when a bad split name is used. Also, I wanted to allow the `-` symbol in split names, but this symbol is actually used to name the arrow files `{dataset_name}-{dataset_split}.arrow`, so we should probably keep it this way, i.e. not allow the `-` symbol in split names. Moreover, in the future we might want to use `{dataset_name}-{dataset_split}-{shard_id}_of_{n_shards}.arrow` and reuse the `-` symbol.
true
751,734,905
https://api.github.com/repos/huggingface/datasets/issues/894
https://github.com/huggingface/datasets/pull/894
894
Allow several tags sets
closed
1
2020-11-26T17:04:13
2021-05-05T18:24:17
2020-11-27T20:15:49
lhoestq
[]
Hi! Currently we have three dataset cards: snli, cnn_dailymail and allocine. For each of those datasets a set of tags is defined. The set of tags contains fields like `multilinguality`, `task_ids`, `licenses` etc. For certain datasets like `glue`, for example, there exist several configurations: `sst2`, `mnli` etc. Therefore we should define one set of tags per configuration. However, the current format used for tags only supports one set of tags per dataset. In this PR I propose a simple change in the yaml format used for tags to allow for several sets of tags. Let me know what you think; especially @julien-c, let me know if it's good for you since it's going to be parsed by moon-landing.
true
751,703,696
https://api.github.com/repos/huggingface/datasets/issues/893
https://github.com/huggingface/datasets/pull/893
893
add metrec: arabic poetry dataset
closed
10
2020-11-26T16:10:16
2020-12-01T16:24:55
2020-12-01T15:15:07
zaidalyafeai
[]
true
751,658,262
https://api.github.com/repos/huggingface/datasets/issues/892
https://github.com/huggingface/datasets/pull/892
892
Add a few datasets of reference in the documentation
closed
3
2020-11-26T15:02:39
2020-11-27T18:08:45
2020-11-27T18:08:44
lhoestq
[]
I started making a small list of various datasets of reference in the documentation. Since many datasets share a lot in common, I think it's good to have a list of dataset scripts to get some inspiration from. Let me know what you think, and if you have ideas of other datasets that we may add to this list, please let me know.
true
751,576,869
https://api.github.com/repos/huggingface/datasets/issues/891
https://github.com/huggingface/datasets/pull/891
891
gitignore .python-version
closed
0
2020-11-26T13:05:58
2020-11-26T13:28:27
2020-11-26T13:28:26
patil-suraj
[]
ignore `.python-version` added by `pyenv`
true
751,534,050
https://api.github.com/repos/huggingface/datasets/issues/890
https://github.com/huggingface/datasets/pull/890
890
Add LER
closed
9
2020-11-26T11:58:23
2020-12-01T13:33:35
2020-12-01T13:26:16
JoelNiklaus
[]
true
751,115,691
https://api.github.com/repos/huggingface/datasets/issues/889
https://github.com/huggingface/datasets/pull/889
889
Optional per-dataset default config name
closed
3
2020-11-25T21:02:30
2020-11-30T17:27:33
2020-11-30T17:27:27
joeddav
[]
This PR adds a `DEFAULT_CONFIG_NAME` class attribute to `DatasetBuilder`. This allows a dataset to have a specified default config name when a dataset has more than one config but the user does not specify it. For example, after defining `DEFAULT_CONFIG_NAME = "combined"` in PolyglotNER, a user can now do the following: ```python ds = load_dataset("polyglot_ner") ``` which is equivalent to, ```python ds = load_dataset("polyglot_ner", "combined") ``` In effect (for this particular dataset configuration), this means that if the user doesn't specify a language, they are given the combined dataset including all languages. Since it doesn't always make sense to have a default config, this feature is opt-in. If `DEFAULT_CONFIG_NAME` is not defined and a user does not pass a config for a dataset with multiple configs available, a ValueError is raised like usual. Let me know what you think about this approach @lhoestq @thomwolf and I'll add some documentation and define a default for some of our existing datasets.
true
750,944,422
https://api.github.com/repos/huggingface/datasets/issues/888
https://github.com/huggingface/datasets/issues/888
888
Nested lists are zipped unexpectedly
closed
2
2020-11-25T16:07:46
2020-11-25T17:30:39
2020-11-25T17:30:39
AmitMY
[]
I might misunderstand something, but I expect that if I define: ```python "top": datasets.features.Sequence({ "middle": datasets.features.Sequence({ "bottom": datasets.Value("int32") }) }) ``` And I then create an example: ```python yield 1, { "top": [{ "middle": [ {"bottom": 1}, {"bottom": 2} ] }] } ``` I then load my dataset: ```python train = load_dataset("my dataset")["train"] ``` and expect to be able to access `data[0]["top"][0]["middle"][0]`. That is not the case. Here is `data[0]` as JSON: ```json {"top": {"middle": [{"bottom": [1, 2]}]}} ``` Clearly different than the thing I inputted. ```json {"top": [{"middle": [{"bottom": 1},{"bottom": 2}]}]} ```
false
750,868,831
https://api.github.com/repos/huggingface/datasets/issues/887
https://github.com/huggingface/datasets/issues/887
887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
open
14
2020-11-25T14:32:21
2021-09-09T17:03:40
null
AmitMY
[ "bug" ]
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work: > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
false
750,829,314
https://api.github.com/repos/huggingface/datasets/issues/886
https://github.com/huggingface/datasets/pull/886
886
Fix wikipedia custom config
closed
1
2020-11-25T13:44:12
2021-06-25T05:24:16
2020-11-25T15:42:13
lhoestq
[]
It should be possible to use the wikipedia dataset with any `language` and `date`. However it was not working as noticed in #784 . Indeed the custom wikipedia configurations were not enabled for some reason. I fixed that and was able to run ```python from datasets import load_dataset load_dataset("./datasets/wikipedia", language="zh", date="20201120", beam_runner='DirectRunner') ``` cc @stvhuang @SamuelCahyawijaya Fix #784
true
750,789,052
https://api.github.com/repos/huggingface/datasets/issues/885
https://github.com/huggingface/datasets/issues/885
885
Very slow cold-start
closed
3
2020-11-25T12:47:58
2021-01-13T11:31:25
2021-01-13T11:31:25
AmitMY
[ "dataset request" ]
Hi, I expect when importing `datasets` that nothing major happens in the background, so the import should be insignificant. When I load a metric or a dataset, it's fine that it takes time. The following ranges from 3 to 9 seconds: ``` python -m timeit -n 1 -r 1 'from datasets import load_dataset' ``` edit: sorry for the mis-tag, not sure how I added it.
false
749,862,034
https://api.github.com/repos/huggingface/datasets/issues/884
https://github.com/huggingface/datasets/pull/884
884
Auto generate dummy data
closed
3
2020-11-24T16:31:34
2020-11-26T14:18:47
2020-11-26T14:18:46
lhoestq
[]
When adding a new dataset to the library, dummy data creation can take some time. To make things easier I added a command line tool that automatically generates dummy data when possible. The tool only supports certain data files types: txt, csv, tsv, jsonl, json and xml. Here are some examples: ``` python datasets-cli dummy_data ./datasets/snli --auto_generate python datasets-cli dummy_data ./datasets/squad --auto_generate --json_field data python datasets-cli dummy_data ./datasets/iwslt2017 --auto_generate --xml_tag seg --match_text_files "train*" --n_lines 15 # --xml_tag seg => each sample corresponds to a "seg" tag in the xml tree # --match_text_files "train*" => also match text files that don't have a proper text file extension (no suffix like ".txt" for example) # --n_lines 15 => some text files have headers so we have to use at least 15 lines ``` and here is the command usage: ``` usage: datasets-cli <command> [<args>] dummy_data [-h] [--auto_generate] [--n_lines N_LINES] [--json_field JSON_FIELD] [--xml_tag XML_TAG] [--match_text_files MATCH_TEXT_FILES] [--keep_uncompressed] [--cache_dir CACHE_DIR] path_to_dataset positional arguments: path_to_dataset Path to the dataset (example: ./datasets/squad) optional arguments: -h, --help show this help message and exit --auto_generate Try to automatically generate dummy data --n_lines N_LINES Number of lines or samples to keep when auto- generating dummy data --json_field JSON_FIELD Optional, json field to read the data from when auto- generating dummy data. In the json data files, this field must point to a list of samples as json objects (ex: the 'data' field for squad-like files) --xml_tag XML_TAG Optional, xml tag name of the samples inside the xml files when auto-generating dummy data. --match_text_files MATCH_TEXT_FILES Optional, a comma separated list of file patterns that looks for line-by-line text files other than *.txt or *.csv. Example: --match_text_files *.label --keep_uncompressed Don't compress the dummy data folders when auto- generating dummy data. Useful for debugging for to do manual adjustements before compressing. --cache_dir CACHE_DIR Cache directory to download and cache files when auto- generating dummy data ``` The command generates all the necessary `dummy_data.zip` files (one per config). How it works: - it runs the split_generators() method of the dataset script to download the original data files - when downloading it records a mapping between the downloaded files and the corresponding expected dummy data files paths - then for each data file it creates the dummy data file keeping only the first samples (the strategy depends on the type of file) - finally it compresses the dummy data folders into dummy_zip files ready for dataset tests Let me know if that makes sense or if you have ideas to improve this tool ! I also added a unit test.
true
749,750,801
https://api.github.com/repos/huggingface/datasets/issues/883
https://github.com/huggingface/datasets/issues/883
883
Downloading/caching only a part of a datasets' dataset.
open
3
2020-11-24T14:25:18
2020-11-27T13:51:55
null
SapirWeissbuch
[ "enhancement", "question" ]
Hi, I want to use the validation data *only* (of Natural Questions). I don't want to have the whole dataset cached on my machine, just the dev set. Is this possible? I can't find a way to do it in the docs. Thank you, Sapir
false
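Editor's note on #883 above: a minimal sketch of two approaches that may help, assuming a reasonably recent version of `datasets` and that the Hub identifier `natural_questions` is the intended dataset. Passing a `split` alone still downloads the full source files for most datasets; streaming (added in later releases) avoids materializing the dataset on disk.

```python
from datasets import load_dataset

# Load only the validation split; the underlying data files are still
# downloaded and cached, but only the requested split is returned.
validation = load_dataset("natural_questions", split="validation")

# Streaming mode iterates over the remote files without caching the
# full dataset locally (available in newer releases of the library).
streamed = load_dataset("natural_questions", split="validation", streaming=True)
for example in streamed:
    break  # inspect the first example without downloading everything
```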
749,662,188
https://api.github.com/repos/huggingface/datasets/issues/882
https://github.com/huggingface/datasets/pull/882
882
Update README.md
closed
0
2020-11-24T12:23:52
2021-01-29T10:41:07
2021-01-29T10:41:07
vaibhavad
[]
"no label" is "-" in the original dataset but "-1" in Huggingface distribution.
true
749,548,107
https://api.github.com/repos/huggingface/datasets/issues/881
https://github.com/huggingface/datasets/pull/881
881
Use GCP download url instead of tensorflow custom download for boolq
closed
0
2020-11-24T09:47:11
2020-11-24T10:12:34
2020-11-24T10:12:33
lhoestq
[]
BoolQ is a dataset that used tf.io.gfile.copy to download the file from a GCP bucket. This prevented the dataset from being downloaded a second time, raising a FileAlreadyExistsError. Even though the error could be fixed by providing `overwrite=True` to the tf.io.gfile.copy call, I changed the script to use GCP download urls with regular downloads instead and removed the tensorflow dependency. Fix #875
true
748,949,606
https://api.github.com/repos/huggingface/datasets/issues/880
https://github.com/huggingface/datasets/issues/880
880
Add SQA
closed
3
2020-11-23T16:31:55
2020-12-23T13:58:24
2020-12-23T13:58:23
NielsRogge
[ "dataset request" ]
## Adding a Dataset - **Name:** SQA (Sequential Question Answering) by Microsoft. - **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total. - **Paper:** https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/ - **Data:** https://www.microsoft.com/en-us/download/details.aspx?id=54253 - **Motivation:** currently, the [Tapas](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https://github.com/huggingface/transformers/pull/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71). Note 1: this dataset actually consists of 2 types of files: 1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test) 2) a folder of csv files, which contain the actual tabular data Note 2: if you download the dataset straight from the download link above, then you will see that the `answer_coordinates` and `answer_text` columns are string lists of string tuples and strings respectively, which is not ideal. It would be better to make them true Python lists of tuples and strings respectively (using `ast.literal_eval`), before uploading them to the HuggingFace hub. Adding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https://github.com/ppasupat/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https://github.com/wenhuchen/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
false
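Editor's note on the preprocessing remark in #880 above: a hedged sketch of the `ast.literal_eval` conversion mentioned there, assuming the SQA TSV files store `answer_coordinates` and `answer_text` as stringified Python literals; the file name below is illustrative, not the actual release file name.

```python
import ast
import pandas as pd

# Hypothetical file name; the real SQA TSV names may differ.
df = pd.read_csv("sqa_train.tsv", sep="\t")

# Parse the stringified list columns back into real Python objects.
df["answer_text"] = df["answer_text"].apply(ast.literal_eval)
df["answer_coordinates"] = df["answer_coordinates"].apply(ast.literal_eval)

# Depending on how the coordinates are serialized, each element may itself be a
# stringified tuple such as "(0, 0)" and need a second literal_eval.
df["answer_coordinates"] = df["answer_coordinates"].apply(
    lambda coords: [ast.literal_eval(c) if isinstance(c, str) else c for c in coords]
)
```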
748,848,847
https://api.github.com/repos/huggingface/datasets/issues/879
https://github.com/huggingface/datasets/issues/879
879
boolq does not load
closed
3
2020-11-23T14:28:28
2022-10-05T12:23:32
2022-10-05T12:23:32
rabeehk
[ "dataset bug" ]
Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset dataset = self.load_dataset(split=split) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset return datasets.load_dataset(self.task.name, split=split) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been" FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
false
748,621,981
https://api.github.com/repos/huggingface/datasets/issues/878
https://github.com/huggingface/datasets/issues/878
878
Loading Data From S3 Path in Sagemaker
open
16
2020-11-23T09:17:22
2020-12-23T09:53:08
null
mahesh1amour
[ "enhancement", "question" ]
In SageMaker I'm trying to load the dataset from an S3 path as follows: `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I am getting the following error: `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when I try with pandas, it is able to load from S3. Does the datasets library support loading from an S3 path?
false
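Editor's note on #878 above: at the time, `load_dataset` did not accept `s3://` paths for the local file builders. A hedged workaround sketch, assuming the `s3fs` package is installed (pandas uses it transparently for `s3://` URLs); the bucket paths below are placeholders.

```python
import pandas as pd
from datasets import Dataset, DatasetDict

# Placeholder paths; replace with the real bucket and keys.
paths = {
    "train": "s3://my-bucket/prefix/train.csv",
    "validation": "s3://my-bucket/prefix/validation.csv",
    "test": "s3://my-bucket/prefix/test.csv",
}

# pandas can read s3:// URLs directly when s3fs is installed, so build the
# datasets from DataFrames instead of passing the URLs to load_dataset.
datasets = DatasetDict(
    {split: Dataset.from_pandas(pd.read_csv(path)) for split, path in paths.items()}
)
print(datasets)
```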
748,234,438
https://api.github.com/repos/huggingface/datasets/issues/877
https://github.com/huggingface/datasets/issues/877
877
DataLoader(datasets) become more and more slowly within iterations
closed
3
2020-11-22T12:41:10
2024-11-22T03:02:53
2020-11-29T15:45:12
shexuan
[]
Hello, when I loop over my dataloader, the loading speed becomes slower and slower! ``` dataset = load_from_disk(dataset_path) # around 21,000,000 lines lineloader = tqdm(DataLoader(dataset, batch_size=1)) for idx, line in enumerate(lineloader): # do something for each line ``` In the beginning, the loading speed is around 2000it/s, but a minute later it is much slower, just around 800it/s. And when I set `num_workers=4` in DataLoader, the loading speed drops further, to just 130it/s. Could you please help me with this problem? Thanks a lot!
false
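Editor's note on #877 above: a hedged sketch of two common mitigations rather than a confirmed fix for the reported slowdown: read larger batches so fewer row-by-row Arrow lookups happen, and set the dataset format to torch tensors for only the columns that are needed. The column names below are assumptions.

```python
from datasets import load_from_disk
from torch.utils.data import DataLoader
from tqdm import tqdm

dataset_path = "path/to/saved_dataset"  # as in the issue above
dataset = load_from_disk(dataset_path)

# Only materialize the columns that are actually needed (names are illustrative)
# and return torch tensors directly instead of Python objects.
dataset.set_format(type="torch", columns=["input_ids", "attention_mask"])

# Reading contiguous batches from the memory-mapped Arrow file is much cheaper
# than fetching 21M rows one at a time with batch_size=1.
loader = DataLoader(dataset, batch_size=256)
for batch in tqdm(loader):
    pass  # process a whole batch per step
```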
748,195,104
https://api.github.com/repos/huggingface/datasets/issues/876
https://github.com/huggingface/datasets/issues/876
876
imdb dataset cannot be loaded
closed
6
2020-11-22T08:24:43
2024-05-10T03:03:29
2020-12-24T17:38:47
rabeehk
[]
Hi I am trying to load the imdb train dataset `dataset = datasets.load_dataset("imdb", split="train")` getting following errors, thanks for your help ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=32660064, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=26476338, num_examples=20316, dataset_name='imdb')}, {'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] >>> dataset = datasets.load_dataset("imdb", split="train") ```
false
748,194,311
https://api.github.com/repos/huggingface/datasets/issues/875
https://github.com/huggingface/datasets/issues/875
875
bug in boolq dataset loading
closed
1
2020-11-22T08:18:34
2020-11-24T10:12:33
2020-11-24T10:12:33
rabeehk
[]
Hi I am trying to load boolq dataset: ``` import datasets datasets.load_dataset("boolq") ``` I am getting the following errors, thanks for your help ``` >>> import datasets 2020-11-22 09:16:30.070332: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2020-11-22 09:16:30.070389: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. >>> datasets.load_dataset("boolq") cahce dir /idiap/temp/rkarimi/cache_home/datasets cahce dir /idiap/temp/rkarimi/cache_home/datasets Using custom data configuration default Downloading and preparing dataset boolq/default (download: 8.36 MiB, generated: 7.47 MiB, post-processed: Unknown size, total: 15.83 MiB) to /idiap/temp/rkarimi/cache_home/datasets/boolq/default/0.1.0/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11... cahce dir /idiap/temp/rkarimi/cache_home/datasets cahce dir /idiap/temp/rkarimi/cache_home/datasets/downloads Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 149, in download_custom custom_download(url, path) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 516, in copy_v2 compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite) tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists ```
false
748,193,140
https://api.github.com/repos/huggingface/datasets/issues/874
https://github.com/huggingface/datasets/issues/874
874
trec dataset unavailable
closed
2
2020-11-22T08:09:36
2020-11-27T13:56:42
2020-11-27T13:56:42
rabeehk
[]
Hi when I try to load the trec dataset I am getting these errors, thanks for your help `datasets.load_dataset("trec", split="train") ` ``` File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ```
false
747,959,523
https://api.github.com/repos/huggingface/datasets/issues/873
https://github.com/huggingface/datasets/issues/873
873
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error
closed
13
2020-11-21T06:30:45
2023-08-03T12:07:03
2020-11-22T12:18:05
vishal-burman
[]
``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0') 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 608 download_config=download_config, 609 download_mode=download_mode, --> 610 ignore_verifications=ignore_verifications, 611 ) 612 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 513 if not downloaded_from_gcs: 514 self._download_and_prepare( --> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 516 ) 517 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 568 split_dict = SplitDict(dataset_name=self.name) 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 571 572 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` I have ran the code on Google Colab
false
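Editor's note on #873 above: this error typically means the cached archive is incomplete or was not extracted, so the expected `cnn/stories` directory never appears; whether that is the cause in this particular report is an assumption. A hedged sketch of forcing a clean re-download, assuming a `datasets` version that accepts the `"force_redownload"` string for `download_mode`.

```python
from datasets import load_dataset

# Ignore the possibly corrupted cached files and download/extract the archives again.
dataset = load_dataset(
    "cnn_dailymail",
    "3.0.0",
    download_mode="force_redownload",
)
```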
747,653,697
https://api.github.com/repos/huggingface/datasets/issues/872
https://github.com/huggingface/datasets/pull/872
872
Add IndicGLUE dataset and Metrics
closed
1
2020-11-20T17:09:34
2020-11-25T17:01:11
2020-11-25T15:26:07
sumanthd17
[]
Added IndicGLUE benchmark for evaluating models on 11 Indian Languages. The descriptions of the tasks and the corresponding paper can be found [here](https://indicnlp.ai4bharat.org/indic-glue/) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
true
747,470,136
https://api.github.com/repos/huggingface/datasets/issues/871
https://github.com/huggingface/datasets/issues/871
871
terminate called after throwing an instance of 'google::protobuf::FatalException'
closed
2
2020-11-20T12:56:24
2020-12-12T21:16:32
2020-12-12T21:16:32
rabeehk
[]
Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 63/63 [02:47<00:00, 2.18s/it][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): run_t5_base_eval.sh: line 19: 5795 Aborted
false
747,021,996
https://api.github.com/repos/huggingface/datasets/issues/870
https://github.com/huggingface/datasets/issues/870
870
[Feature Request] Add optional parameter in text loading script to preserve linebreaks
closed
2
2020-11-19T23:51:31
2022-06-01T15:25:53
2022-06-01T15:25:52
jncasey
[ "enhancement" ]
I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data. I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great. But the first time I processed all of my data into a dataset, I hadn't realized the text loader script was processing the source files line-by-line and stripping off the newlines. Once I caught the issue, I made my own data loader by modifying one line in the default text loader (changing `batch = batch.splitlines()` to `batch = batch.splitlines(True)` inside `_generate_tables`). And so I'm all set as far as my project is concerned. But if my use case is more general, it seems like it'd be pretty trivial to add a kwarg to the default text loader called keeplinebreaks or something, which would default to False and get passed to `splitlines()`.
false
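Editor's note on #870 above: an option along these lines was eventually added to the text loader. A hedged sketch assuming a version of `datasets` where the text builder exposes a `keep_linebreaks` parameter; on older versions, the author's `splitlines(True)` patch is the way to go. The file name is illustrative.

```python
from datasets import load_dataset

# Ask the text builder to keep the trailing "\n" on each line.
lyrics = load_dataset(
    "text",
    data_files={"train": "lyrics.txt"},  # illustrative file name
    keep_linebreaks=True,
)
print(repr(lyrics["train"][0]["text"]))  # line content with newline preserved
```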
746,495,711
https://api.github.com/repos/huggingface/datasets/issues/869
https://github.com/huggingface/datasets/pull/869
869
Update ner datasets infos
closed
1
2020-11-19T11:28:03
2020-11-19T14:14:18
2020-11-19T14:14:17
lhoestq
[]
Update the dataset_infos.json files for the changes made in #850 regarding the NER datasets' feature types (and the change to ClassLabel). I also fixed the NER types of conll2003.
true
745,889,882
https://api.github.com/repos/huggingface/datasets/issues/868
https://github.com/huggingface/datasets/pull/868
868
Consistent metric outputs
closed
2
2020-11-18T18:05:59
2023-09-24T09:50:25
2023-07-11T09:35:52
lhoestq
[ "transfer-to-evaluate" ]
To automate the use of metrics, they should return consistent outputs. In particular I'm working on adding a conversion of metrics to keras metrics. To achieve this we need two things: - have each metric return dictionaries mapping strings to floats, since each keras metric should return one float - define in the metric info the different fields of the output dictionary In this PR I'm adding these two features. I also fixed a few bugs in some metrics. #867 needs to be merged first
true
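Editor's note on #868 above: an illustration (not the PR's own code) of the dictionary-of-floats convention that makes a metric easy to wrap as a single keras-style scalar. `load_metric` is the metric API of that era; the metrics have since moved to the `evaluate` library.

```python
from datasets import load_metric

metric = load_metric("accuracy")
result = metric.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])

# Each metric returns a dict mapping output names to floats, so a keras
# wrapper can pick out one scalar per output name.
print(result)              # e.g. {'accuracy': 0.75}
print(result["accuracy"])  # 0.75
```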
745,773,955
https://api.github.com/repos/huggingface/datasets/issues/867
https://github.com/huggingface/datasets/pull/867
867
Fix some metrics feature types
closed
0
2020-11-18T15:46:11
2020-11-19T17:35:58
2020-11-19T17:35:57
lhoestq
[]
Replace the `int` feature type with `int32`, since `int` is not a pyarrow dtype, in these metrics: - accuracy - precision - recall - f1 I also added the sklearn citation and used keyword arguments to remove future warnings
true
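Editor's note on #867 above: a hedged sketch of the kind of feature declaration involved, written against the datasets feature API; the exact fields used in the PR may differ.

```python
import datasets

# Pyarrow has no plain "int" dtype, so integer prediction/reference columns
# are declared as int32 (or int64) explicitly.
features = datasets.Features(
    {
        "predictions": datasets.Value("int32"),
        "references": datasets.Value("int32"),
    }
)
print(features)
```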
745,719,222
https://api.github.com/repos/huggingface/datasets/issues/866
https://github.com/huggingface/datasets/issues/866
866
OSCAR from Inria group
closed
2
2020-11-18T14:40:54
2020-11-18T15:01:30
2020-11-18T15:01:30
jchwenger
[ "dataset request" ]
## Adding a Dataset - **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/). - **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.* - **Paper:** *[here](https://hal.inria.fr/hal-02148693)* - **Data:** *[here](https://oscar-corpus.com/)* - **Motivation:** *useful for unsupervised tasks in separate languages. In an ideal world, your team would be able to obtain the unshuffled version, that could be used to train GPT-2-like models (the shuffled version, I suppose, could be used for translation).* I am aware that you do offer the "colossal" Common Crawl dataset already, but this has the advantage to be available in many subcorpora for different languages.
false
745,430,497
https://api.github.com/repos/huggingface/datasets/issues/865
https://github.com/huggingface/datasets/issues/865
865
Have Trouble importing `datasets`
closed
1
2020-11-18T08:04:41
2020-11-18T08:16:35
2020-11-18T08:16:35
forest1988
[]
I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets. I cloned the newest version of datasets (master branch), and do `pip install -e .`. Then, `import datasets` causes the error below. ``` ~/workspace/Clone/datasets/src/datasets/utils/file_utils.py in <module> 116 sys.path.append(str(HF_MODULES_CACHE)) 117 --> 118 os.makedirs(HF_MODULES_CACHE, exist_ok=True) 119 if not os.path.exists(os.path.join(HF_MODULES_CACHE, "__init__.py")): 120 with open(os.path.join(HF_MODULES_CACHE, "__init__.py"), "w"): ~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/os.py in makedirs(name, mode, exist_ok) 221 return 222 try: --> 223 mkdir(name, mode) 224 except OSError: 225 # Cannot rely on checking for EEXIST, since the operating system FileNotFoundError: [Errno 2] No such file or directory: '<MY_HOME_DIRECTORY>/.cache/huggingface/modules' ``` The error occurs in `os.makedirs` in `file_utils.py`, even though `exist_ok = True` option is set. (I use Python 3.8, so `exist_ok` is expected to work.) I've checked some environment variables, and they are set as below. ``` *** NameError: name 'HF_MODULES_CACHE' is not defined *** NameError: name 'hf_cache_home' is not defined *** NameError: name 'XDG_CACHE_HOME' is not defined ``` Should I set some environment variables before using this library? And, do you have any idea why "No such file or directory" occurs even though the `exist_ok = True` option is set? Thank you in advance.
false
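Editor's note on #865 above: the modules cache path is derived from the user cache directory, so pointing the Hugging Face cache at a writable location before the first import usually avoids this kind of failure; whether it resolves this particular report is an assumption.

```python
import os

# Must be set before the first `import datasets`, since the cache paths are
# resolved at import time.
os.environ["HF_HOME"] = "/tmp/hf_home"  # any writable directory

import datasets  # noqa: E402
print(datasets.__version__)
```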
745,322,357
https://api.github.com/repos/huggingface/datasets/issues/864
https://github.com/huggingface/datasets/issues/864
864
Unable to download cnn_dailymail dataset
closed
6
2020-11-18T04:38:02
2020-11-20T05:22:11
2020-11-20T05:22:10
rohitashwa1907
[ "dataset bug" ]
### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-8-47c39c228935> in <module>() 1 from datasets import load_dataset 2 ----> 3 train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') 4 valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 469 if not downloaded_from_gcs: 470 self._download_and_prepare( --> 471 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 472 ) 473 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 524 split_dict = SplitDict(dataset_name=self.name) 525 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 526 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 527 528 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` Thanks for any suggestions.
false
744,954,534
https://api.github.com/repos/huggingface/datasets/issues/863
https://github.com/huggingface/datasets/pull/863
863
Add clear_cache parameter in the test command
closed
0
2020-11-17T17:52:29
2020-11-18T14:44:25
2020-11-18T14:44:24
lhoestq
[]
For certain datasets like OSCAR #348 there are lots of different configurations and each one of them can take a lot of disk space. I added a `--clear_cache` flag to the `datasets-cli test` command to be able to clear the cache after each configuration test to avoid filling up the disk. It should enable an easier generation for the `dataset_infos.json` file for OSCAR.
true
744,906,131
https://api.github.com/repos/huggingface/datasets/issues/862
https://github.com/huggingface/datasets/pull/862
862
Update head requests
closed
0
2020-11-17T16:49:06
2020-11-18T14:43:53
2020-11-18T14:43:50
lhoestq
[]
GET requests and HEAD requests didn't have the same parameters.
true
744,753,458
https://api.github.com/repos/huggingface/datasets/issues/861
https://github.com/huggingface/datasets/issues/861
861
Possible Bug: Small training/dataset file creates gigantic output
closed
7
2020-11-17T13:48:59
2021-03-30T14:04:04
2021-03-22T12:04:55
NebelAI
[ "enhancement", "question" ]
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file, but I can't even finish the preprocessing handled by datasets because this tiny 5 GB text file grows to more than 1 TB during processing. My system ran out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
false
744,750,691
https://api.github.com/repos/huggingface/datasets/issues/860
https://github.com/huggingface/datasets/issues/860
860
wmt16 cs-en does not donwload
closed
1
2020-11-17T13:45:35
2022-10-05T12:27:00
2022-10-05T12:26:59
rabeehk
[ "dataset bug" ]
Hi I am trying with wmt16, cs-en pair, thanks for the help, perhaps similar to the ro-en issue. thanks split="train", n_obs=data_args.n_train) for task in data_args.task} File "finetune_t5_trainer.py", line 109, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/home/rabeeh/internship/seq2seq/tasks/tasks.py", line 82, in get_dataset dataset = load_dataset("wmt16", self.pair, split=split) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/rabeeh/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz
false
743,917,091
https://api.github.com/repos/huggingface/datasets/issues/859
https://github.com/huggingface/datasets/pull/859
859
Integrate file_lock inside the lib for better logging control
closed
0
2020-11-16T15:13:39
2020-11-16T17:06:44
2020-11-16T17:06:42
lhoestq
[]
Previously the locking system of the lib was based on the file_lock package. However as noticed in #812 there were too many logs printed even when the datasets logging was set to warnings or errors. For example ```python import logging logging.basicConfig(level=logging.INFO) import datasets datasets.set_verbosity_warning() datasets.load_dataset("squad") ``` would still log the file lock events: ``` INFO:filelock:Lock 5737989232 acquired on /Users/quentinlhoest/.cache/huggingface/datasets/44801f118d500eff6114bfc56ab4e6def941f1eb14b70ac1ecc052e15cdac49d.85f43de978b9b25921cb78d7a2f2b350c04acdbaedb9ecb5f7101cd7c0950e68.py.lock INFO:filelock:Lock 5737989232 released on /Users/quentinlhoest/.cache/huggingface/datasets/44801f118d500eff6114bfc56ab4e6def941f1eb14b70ac1ecc052e15cdac49d.85f43de978b9b25921cb78d7a2f2b350c04acdbaedb9ecb5f7101cd7c0950e68.py.lock INFO:filelock:Lock 4393489968 acquired on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock INFO:filelock:Lock 4393489968 released on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock INFO:filelock:Lock 4393490808 acquired on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock Reusing dataset squad (/Users/quentinlhoest/.cache/huggingface/datasets/squad/plain_text/1.0.0/1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41) INFO:filelock:Lock 4393490808 released on /Users/quentinlhoest/.cache/huggingface/datasets/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock ``` With the integration of file_lock in the library, the ouput is much cleaner: ``` Reusing dataset squad (/Users/quentinlhoest/.cache/huggingface/datasets/squad/plain_text/1.0.0/1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41) ``` Since the file_lock package is only a 450 lines file I think it's fine to have it inside the lib. Fix #812
true
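Editor's note on #859 above: for versions that still rely on the external filelock package, a similar effect can be approximated from user code by raising the level of the `filelock` logger (the logger name comes from the log output quoted in the PR description); a small sketch using only the standard library.

```python
import logging

# Silence the INFO-level acquire/release messages emitted by the filelock package.
logging.getLogger("filelock").setLevel(logging.WARNING)

import datasets  # noqa: E402
datasets.load_dataset("squad")
```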