Column schema of the issues table below (dtype and observed value range per column):

| column | dtype | observed values |
|---|---|---|
| id | int64 | 599M to 3.29B |
| url | string | 58 to 61 chars |
| html_url | string | 46 to 51 chars |
| number | int64 | 1 to 7.72k |
| title | string | 1 to 290 chars |
| state | string | 2 classes |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 (nullable) |
| user_login | string | 3 to 26 chars |
| labels | list | 0 to 4 items |
| body | string | 0 to 228k chars (nullable) |
| is_pull_request | bool | 2 classes |
757,677,188
|
https://api.github.com/repos/huggingface/datasets/issues/1160
|
https://github.com/huggingface/datasets/pull/1160
| 1,160
|
adding TabFact dataset
|
closed
| 2
| 2020-12-05T13:05:52
| 2020-12-09T11:41:39
| 2020-12-09T09:12:41
|
patil-suraj
|
[] |
Adding TabFact: A Large-scale Dataset for Table-based Fact Verification.
https://github.com/wenhuchen/Table-Fact-Checking
- The tables are stored as individual CSV files, so we need to download 16,573 🤯 CSV files. As a result the `datasets_infos.json` file is huge (6.62 MB).
- The original dataset has a nested structure where a table is one example and each table has multiple statements; the structure is flattened here so that each statement is one example (see the sketch below).
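A minimal sketch of that flattening step, assuming an illustrative in-memory layout (the field names below are placeholders, not the actual `tab_fact` loading-script schema):

```python
def flatten_examples(tables):
    """Yield one example per (table, statement) pair from a nested mapping.

    `tables` maps a table id to a dict holding the table text plus the
    statements and labels attached to that table (illustrative layout only).
    """
    idx = 0
    for table_id, entry in tables.items():
        for statement, label in zip(entry["statements"], entry["labels"]):
            yield idx, {
                "table_id": table_id,
                "table_text": entry["table_text"],
                "statement": statement,
                "label": label,
            }
            idx += 1
```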
| true
|
757,661,128
|
https://api.github.com/repos/huggingface/datasets/issues/1159
|
https://github.com/huggingface/datasets/pull/1159
| 1,159
|
Add Roman Urdu dataset
|
closed
| 0
| 2020-12-05T11:36:43
| 2020-12-07T13:41:21
| 2020-12-07T09:59:03
|
jaketae
|
[] |
This PR adds the [Roman Urdu dataset](https://archive.ics.uci.edu/ml/datasets/Roman+Urdu+Data+Set#).
| true
|
757,658,926
|
https://api.github.com/repos/huggingface/datasets/issues/1158
|
https://github.com/huggingface/datasets/pull/1158
| 1,158
|
Add BBC Hindi NLI Dataset
|
closed
| 7
| 2020-12-05T11:25:34
| 2021-02-05T09:48:31
| 2021-02-05T09:48:31
|
avinsit123
|
[] |
# Dataset Card for BBC Hindi NLI Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- Homepage: https://github.com/midas-research/hindi-nli-data
- Paper: https://www.aclweb.org/anthology/2020.aacl-main.71
- Point of Contact: https://github.com/midas-research/hindi-nli-data
### Dataset Summary
- Dataset for Natural Language Inference in the Hindi language. The BBC Hindi Dataset consists of textual-entailment pairs.
- Each row of the dataset is made up of 4 columns: Premise, Hypothesis, Label and Topic.
- Context and Hypothesis are written in Hindi while Entailment_Label is in English.
- Entailment_label is of 2 types: entailed and not-entailed.
- The dataset can be used to train models for Natural Language Inference tasks in Hindi.
[More Information Needed]
### Supported Tasks and Leaderboards
- Natural Language Inference for Hindi
### Languages
The dataset is in Hindi.
## Dataset Structure
- Data is structured in TSV format.
- Train and test sets are provided in separate files.
### Data Instances
An example of 'train' looks as follows.
```
{'hypothesis': 'यह खबर की सूचना है|', 'label': 'entailed', 'premise': 'गोपनीयता की नीति', 'topic': '1'}
```
### Data Fields
- Each row contains 4 columns: Premise, Hypothesis, Label and Topic.
### Data Splits
- Train : 15553
- Valid : 2581
- Test : 2593
## Dataset Creation
- We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available BBC Hindi news text classification datasets in Hindi and pose them as TE problems
- In this recasting process, we build template hypotheses for each class in the label taxonomy
- Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples.
- For more information on the recasting process, refer to the paper https://www.aclweb.org/anthology/2020.aacl-main.71 (a minimal illustrative sketch follows).
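A minimal illustrative sketch of this recasting step, using made-up template hypotheses (the actual templates are defined by the authors of the paper above):

```python
# Illustrative templates only; the real hypotheses come from the paper/repository.
TEMPLATES = {
    "india": "यह खबर भारत के बारे में है।",
    "international": "यह खबर अंतरराष्ट्रीय है।",
    "sport": "यह खबर खेल के बारे में है।",
}

def recast_to_entailment(headline, gold_label):
    """Pair one classification example with every template hypothesis."""
    for label, hypothesis in TEMPLATES.items():
        yield {
            "premise": headline,
            "hypothesis": hypothesis,
            "label": "entailed" if label == gold_label else "not-entailed",
        }
```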
### Source Data
The source dataset for the recasting process is the BBC Hindi Headlines Dataset (https://github.com/NirantK/hindi2vec/releases/tag/bbc-hindi-v0.1).
#### Initial Data Collection and Normalization
- The BBC Hindi News Classification Dataset contains 4,335 Hindi news headlines tagged across 14 categories: India, Pakistan, news, International, entertainment, sport, science, China, learning english, social, southasia, business, institutional, multimedia
- We processed this dataset to combine two sets of relevant but low prevalence classes.
- Namely, we merged the samples from Pakistan, China, international, and southasia as one class called international.
- Likewise, we also merged samples from news, business, social, learning english, and institutional as news.
- Lastly, we also removed the class multimedia because there were very few samples.
#### Who are the source language producers?
Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Annotations
#### Annotation process
The annotation process is described in the Dataset Creation section.
#### Who are the annotators?
Annotation is done automatically.
### Personal and Sensitive Information
No personal or sensitive information is mentioned in the dataset.
## Considerations for Using the Data
Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Discussion of Biases
Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71
### Other Known Limitations
No other known limitations
## Additional Information
Please refer to this link: https://github.com/midas-research/hindi-nli-data
### Dataset Curators
As stated in the repo https://github.com/avinsit123/hindi-nli-data:
- This corpus can be used freely for research purposes.
- The paper listed below provides details of the creation and use of the corpus. If you use the corpus, then please cite the paper.
- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.
- If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus.
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications.
- Rather than redistributing the corpus, please direct interested parties to this page
- Please feel free to send us an email:
- with feedback regarding the corpus.
- with information on how you have used the corpus.
- if interested in having us analyze your data for natural language inference.
- if interested in a collaborative research project.
### Licensing Information
Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi).
Please contact the authors for any information on the dataset.
### Citation Information
```
@inproceedings{uppal-etal-2020-two,
title = "Two-Step Classification using Recasted Data for Low Resource Settings",
author = "Uppal, Shagun and
Gupta, Vivek and
Swaminathan, Avinash and
Zhang, Haimin and
Mahata, Debanjan and
Gosangi, Rakesh and
Shah, Rajiv Ratn and
Stent, Amanda",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.aacl-main.71",
pages = "706--719",
abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.",
}
```
| true
|
757,657,888
|
https://api.github.com/repos/huggingface/datasets/issues/1157
|
https://github.com/huggingface/datasets/pull/1157
| 1,157
|
Add dataset XhosaNavy English -Xhosa
|
closed
| 0
| 2020-12-05T11:19:54
| 2020-12-07T09:11:33
| 2020-12-07T09:11:33
|
spatil6
|
[] |
Add dataset XhosaNavy English -Xhosa
More info : http://opus.nlpl.eu/XhosaNavy.php
| true
|
757,656,094
|
https://api.github.com/repos/huggingface/datasets/issues/1156
|
https://github.com/huggingface/datasets/pull/1156
| 1,156
|
add telugu-news corpus
|
closed
| 0
| 2020-12-05T11:07:56
| 2020-12-07T09:08:48
| 2020-12-07T09:08:48
|
oostopitre
|
[] |
Adding Telugu News Corpus to datasets.
| true
|
757,652,517
|
https://api.github.com/repos/huggingface/datasets/issues/1155
|
https://github.com/huggingface/datasets/pull/1155
| 1,155
|
Add BSD
|
closed
| 5
| 2020-12-05T10:43:48
| 2020-12-07T09:27:46
| 2020-12-07T09:27:46
|
j-chim
|
[] |
This PR adds BSD, the Japanese-English business dialogue corpus by
[Rikters et al., 2020](https://www.aclweb.org/anthology/D19-5204.pdf).
| true
|
757,651,669
|
https://api.github.com/repos/huggingface/datasets/issues/1154
|
https://github.com/huggingface/datasets/pull/1154
| 1,154
|
Opus sardware
|
closed
| 0
| 2020-12-05T10:38:02
| 2020-12-05T17:05:45
| 2020-12-05T17:05:45
|
spatil6
|
[] |
Added Opus sardware dataset for machine translation English to Sardinian.
for more info : http://opus.nlpl.eu/sardware.php
| true
|
757,643,302
|
https://api.github.com/repos/huggingface/datasets/issues/1153
|
https://github.com/huggingface/datasets/pull/1153
| 1,153
|
Adding dataset for proto_qa in huggingface datasets library
|
closed
| 0
| 2020-12-05T09:43:28
| 2020-12-05T18:53:10
| 2020-12-05T18:53:10
|
bpatidar
|
[] |
Added dataset for ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning
Followed all steps for adding a new dataset.
| true
|
757,640,506
|
https://api.github.com/repos/huggingface/datasets/issues/1152
|
https://github.com/huggingface/datasets/pull/1152
| 1,152
|
hindi discourse analysis dataset commit
|
closed
| 9
| 2020-12-05T09:24:01
| 2020-12-14T19:44:48
| 2020-12-14T19:44:48
|
duttahritwik
|
[] | true
|
|
757,517,092
|
https://api.github.com/repos/huggingface/datasets/issues/1151
|
https://github.com/huggingface/datasets/pull/1151
| 1,151
|
adding psc dataset
|
closed
| 0
| 2020-12-05T02:40:01
| 2020-12-09T11:38:41
| 2020-12-09T11:38:41
|
abecadel
|
[] | true
|
|
757,512,441
|
https://api.github.com/repos/huggingface/datasets/issues/1150
|
https://github.com/huggingface/datasets/pull/1150
| 1,150
|
adding dyk dataset
|
closed
| 0
| 2020-12-05T02:11:42
| 2020-12-05T16:52:19
| 2020-12-05T16:52:19
|
abecadel
|
[] | true
|
|
757,504,068
|
https://api.github.com/repos/huggingface/datasets/issues/1149
|
https://github.com/huggingface/datasets/pull/1149
| 1,149
|
Fix typo in the comment in _info function
|
closed
| 0
| 2020-12-05T01:26:20
| 2020-12-05T16:19:26
| 2020-12-05T16:19:26
|
vinaykudari
|
[] | true
|
|
757,503,918
|
https://api.github.com/repos/huggingface/datasets/issues/1148
|
https://github.com/huggingface/datasets/pull/1148
| 1,148
|
adding polemo2 dataset
|
closed
| 0
| 2020-12-05T01:25:29
| 2020-12-05T16:51:39
| 2020-12-05T16:51:39
|
abecadel
|
[] | true
|
|
757,502,199
|
https://api.github.com/repos/huggingface/datasets/issues/1147
|
https://github.com/huggingface/datasets/pull/1147
| 1,147
|
Vinay/add/telugu books
|
closed
| 0
| 2020-12-05T01:17:02
| 2020-12-05T16:36:04
| 2020-12-05T16:36:04
|
vinaykudari
|
[] |
Real data tests are failing as this dataset needs to be manually downloaded
| true
|
757,498,565
|
https://api.github.com/repos/huggingface/datasets/issues/1146
|
https://github.com/huggingface/datasets/pull/1146
| 1,146
|
Add LINNAEUS
|
closed
| 0
| 2020-12-05T01:01:09
| 2020-12-05T16:35:53
| 2020-12-05T16:35:53
|
edugp
|
[] | true
|
|
757,477,349
|
https://api.github.com/repos/huggingface/datasets/issues/1145
|
https://github.com/huggingface/datasets/pull/1145
| 1,145
|
Add Species-800
|
closed
| 4
| 2020-12-04T23:44:51
| 2022-01-13T03:09:20
| 2020-12-05T16:35:01
|
edugp
|
[] | true
|
|
757,452,831
|
https://api.github.com/repos/huggingface/datasets/issues/1144
|
https://github.com/huggingface/datasets/pull/1144
| 1,144
|
Add JFLEG
|
closed
| 2
| 2020-12-04T22:36:38
| 2020-12-06T18:16:04
| 2020-12-06T18:16:04
|
j-chim
|
[] |
This PR adds [JFLEG ](https://www.aclweb.org/anthology/E17-2037/), an English grammatical error correction benchmark.
The tests were successful on real data, although it would be great if I could get some guidance on the **dummy data**. Basically, **for each source sentence there are 4 possible gold standard target sentences**. The original dataset comprises files in a flat structure, labelled by split and then by source/target (e.g., dev.src, dev.ref0, ..., dev.ref3). I am not sure what the best way of adding this is.
I imagine I can treat each distinct source-target pair as its own split? But having so many copies of the source sentence feels redundant, and it would make it less convenient for end users who might want to access multiple gold standard targets simultaneously (one alternative layout is sketched below).
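One possible layout, sketched under the assumption that the four references are stored together as a sequence per source sentence (field names are illustrative, not the merged script's schema):

```python
import datasets

# Keep every gold-standard rewrite next to its source sentence, so end users can
# access all references at once instead of juggling per-reference splits.
features = datasets.Features(
    {
        "sentence": datasets.Value("string"),
        "corrections": datasets.Sequence(datasets.Value("string")),  # ref0 .. ref3
    }
)
```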
| true
|
757,448,920
|
https://api.github.com/repos/huggingface/datasets/issues/1143
|
https://github.com/huggingface/datasets/pull/1143
| 1,143
|
Add the Winograd Schema Challenge
|
closed
| 0
| 2020-12-04T22:26:59
| 2020-12-09T15:11:31
| 2020-12-09T09:32:34
|
joeddav
|
[] |
Adds the Winograd Schema Challenge, including configs for the more canonical wsc273 as well as wsc285 with 12 new examples.
- https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html
The data format was a bit of a nightmare but I think I got it to a workable format.
| true
|
757,413,920
|
https://api.github.com/repos/huggingface/datasets/issues/1142
|
https://github.com/huggingface/datasets/pull/1142
| 1,142
|
Fix PerSenT
|
closed
| 0
| 2020-12-04T21:21:02
| 2020-12-14T13:39:34
| 2020-12-14T13:39:34
|
jeromeku
|
[] |
New PR for dataset PerSenT
| true
|
757,411,057
|
https://api.github.com/repos/huggingface/datasets/issues/1141
|
https://github.com/huggingface/datasets/pull/1141
| 1,141
|
Add GitHub version of ETH Py150 Corpus
|
closed
| 2
| 2020-12-04T21:16:08
| 2020-12-09T18:32:44
| 2020-12-07T10:00:24
|
bharatr21
|
[] |
Add the redistributable version of **ETH Py150 Corpus**
| true
|
757,399,142
|
https://api.github.com/repos/huggingface/datasets/issues/1140
|
https://github.com/huggingface/datasets/pull/1140
| 1,140
|
Add Urdu Sentiment Corpus (USC).
|
closed
| 2
| 2020-12-04T20:55:27
| 2020-12-07T03:27:23
| 2020-12-07T03:27:23
|
chaitnayabasava
|
[] |
Added the Urdu Sentiment Corpus. More details about the dataset are available <a href="https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus">here</a>.
| true
|
757,393,158
|
https://api.github.com/repos/huggingface/datasets/issues/1139
|
https://github.com/huggingface/datasets/pull/1139
| 1,139
|
Add ReFreSD dataset
|
closed
| 3
| 2020-12-04T20:45:11
| 2020-12-16T16:01:18
| 2020-12-16T16:01:18
|
mpariente
|
[] |
This PR adds the **ReFreSD dataset**.
The original data is hosted [on this github repo](https://github.com/Elbria/xling-SemDiv) and we use the `REFreSD_rationale` to expose all the data.
Need feedback on:
- I couldn't generate the dummy data. The file we download is a TSV file but has no extension, which I suppose is the problem. I'm sure there is a simple trick to make this work.
- The feature names.
- I don't know if it's better to stick to the classic `sentence1`, `sentence2` or to `sentence_en`, `sentence_fr` to be more explicit.
- There is a binary label (called `label`, no problem here) and a 3-class label called `#3_labels` in the original TSV. I changed it to `all_labels`, but I'm sure there is a better name.
- The rationales are lists of integers, extracted as a string at first. I wonder what's the best way to treat them, any ideas? Also, I couldn't manage to make a `Sequence` of `int8`, but I'm sure I've missed something simple (see the sketch below).
Thanks in advance
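For the last point, a minimal sketch of declaring a sequence of 8-bit integers with `datasets` (the feature names are placeholders, not the merged schema):

```python
import datasets

features = datasets.Features(
    {
        "sentence_en": datasets.Value("string"),
        "sentence_fr": datasets.Value("string"),
        "rationale_en": datasets.Sequence(datasets.Value("int8")),
        "rationale_fr": datasets.Sequence(datasets.Value("int8")),
    }
)
```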
| true
|
757,378,406
|
https://api.github.com/repos/huggingface/datasets/issues/1138
|
https://github.com/huggingface/datasets/pull/1138
| 1,138
|
updated after the class name update
|
closed
| 0
| 2020-12-04T20:19:43
| 2020-12-05T15:43:32
| 2020-12-05T15:43:32
|
timpal0l
|
[] |
@lhoestq <---
| true
|
757,358,145
|
https://api.github.com/repos/huggingface/datasets/issues/1137
|
https://github.com/huggingface/datasets/pull/1137
| 1,137
|
add wmt mlqe 2020 shared task
|
closed
| 1
| 2020-12-04T19:45:34
| 2020-12-06T19:59:44
| 2020-12-06T19:53:46
|
VictorSanh
|
[] |
First commit for Shared task 1 (wmt_mlqw_task1) of WMT20 MLQE (quality estimation of machine translation)
Note that I copied the tags in the README for only one (of the 7 configurations): `en-de`.
There is one configuration for each pair of languages.
| true
|
757,341,607
|
https://api.github.com/repos/huggingface/datasets/issues/1136
|
https://github.com/huggingface/datasets/pull/1136
| 1,136
|
minor change in description in paws-x.py and updated dataset_infos
|
closed
| 0
| 2020-12-04T19:17:49
| 2020-12-06T18:02:57
| 2020-12-06T18:02:57
|
bhavitvyamalik
|
[] | true
|
|
757,325,741
|
https://api.github.com/repos/huggingface/datasets/issues/1135
|
https://github.com/huggingface/datasets/pull/1135
| 1,135
|
added paws
|
closed
| 0
| 2020-12-04T18:52:38
| 2020-12-09T17:17:13
| 2020-12-09T17:17:13
|
bhavitvyamalik
|
[] |
Updating README and tags for dataset card in a while
| true
|
757,317,651
|
https://api.github.com/repos/huggingface/datasets/issues/1134
|
https://github.com/huggingface/datasets/pull/1134
| 1,134
|
adding xquad-r dataset
|
closed
| 0
| 2020-12-04T18:39:13
| 2020-12-05T16:50:47
| 2020-12-05T16:50:47
|
manandey
|
[] | true
|
|
757,307,660
|
https://api.github.com/repos/huggingface/datasets/issues/1133
|
https://github.com/huggingface/datasets/pull/1133
| 1,133
|
Adding XQUAD-R Dataset
|
closed
| 0
| 2020-12-04T18:22:29
| 2020-12-04T18:28:54
| 2020-12-04T18:28:49
|
manandey
|
[] | true
|
|
757,301,368
|
https://api.github.com/repos/huggingface/datasets/issues/1132
|
https://github.com/huggingface/datasets/pull/1132
| 1,132
|
Add Urdu Sentiment Corpus (USC).
|
closed
| 0
| 2020-12-04T18:12:24
| 2020-12-04T20:52:48
| 2020-12-04T20:52:48
|
chaitnayabasava
|
[] |
Added the Urdu Sentiment Corpus. More details about the dataset are available <a href="https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus">here</a>.
| true
|
757,278,341
|
https://api.github.com/repos/huggingface/datasets/issues/1131
|
https://github.com/huggingface/datasets/pull/1131
| 1,131
|
Adding XQUAD-R Dataset
|
closed
| 0
| 2020-12-04T17:35:43
| 2020-12-04T18:27:22
| 2020-12-04T18:27:22
|
manandey
|
[] | true
|
|
757,265,075
|
https://api.github.com/repos/huggingface/datasets/issues/1130
|
https://github.com/huggingface/datasets/pull/1130
| 1,130
|
adding discovery
|
closed
| 1
| 2020-12-04T17:16:54
| 2020-12-14T13:03:14
| 2020-12-14T13:03:14
|
sileod
|
[] | true
|
|
757,255,492
|
https://api.github.com/repos/huggingface/datasets/issues/1129
|
https://github.com/huggingface/datasets/pull/1129
| 1,129
|
Adding initial version of cord-19 dataset
|
closed
| 5
| 2020-12-04T17:03:17
| 2021-02-09T10:22:35
| 2021-02-09T10:18:06
|
ggdupont
|
[] |
Initial version only reading the metadata in CSV.
### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template and at least fill the tags
- [x] Both tests for the real data and the dummy data pass.
### TODO:
- [x] add more metadata
- [x] add full text
- [x] add pre-computed document embedding
| true
|
757,245,404
|
https://api.github.com/repos/huggingface/datasets/issues/1128
|
https://github.com/huggingface/datasets/pull/1128
| 1,128
|
Add xquad-r dataset
|
closed
| 0
| 2020-12-04T16:48:53
| 2020-12-04T18:14:30
| 2020-12-04T18:14:26
|
manandey
|
[] | true
|
|
757,229,684
|
https://api.github.com/repos/huggingface/datasets/issues/1127
|
https://github.com/huggingface/datasets/pull/1127
| 1,127
|
Add wikiqaar dataset
|
closed
| 0
| 2020-12-04T16:26:18
| 2020-12-07T16:39:41
| 2020-12-07T16:39:41
|
zaidalyafeai
|
[] |
Arabic Wiki Question Answering Corpus.
| true
|
757,197,735
|
https://api.github.com/repos/huggingface/datasets/issues/1126
|
https://github.com/huggingface/datasets/pull/1126
| 1,126
|
Adding babi dataset
|
closed
| 3
| 2020-12-04T15:42:34
| 2021-03-30T09:44:04
| 2021-03-30T09:44:04
|
thomwolf
|
[] |
Adding the English version of bAbI.
Samples are taken from ParlAI for consistency with the main users at the moment.
Supersedes #945 (problem with the rebase) and addresses the issues mentioned in the review (dummy data are smaller now and code comments are fixed).
| true
|
757,194,531
|
https://api.github.com/repos/huggingface/datasets/issues/1125
|
https://github.com/huggingface/datasets/pull/1125
| 1,125
|
Add Urdu fake news dataset.
|
closed
| 3
| 2020-12-04T15:38:17
| 2020-12-07T03:21:05
| 2020-12-07T03:21:05
|
chaitnayabasava
|
[] |
Added Urdu fake news dataset. More information about the dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
| true
|
757,186,983
|
https://api.github.com/repos/huggingface/datasets/issues/1124
|
https://github.com/huggingface/datasets/pull/1124
| 1,124
|
Add Xitsonga Ner
|
closed
| 1
| 2020-12-04T15:27:44
| 2020-12-06T18:31:35
| 2020-12-06T18:31:35
|
yvonnegitau
|
[] |
Clean Xitsonga Ner PR
| true
|
757,181,014
|
https://api.github.com/repos/huggingface/datasets/issues/1123
|
https://github.com/huggingface/datasets/pull/1123
| 1,123
|
adding cdt dataset
|
closed
| 2
| 2020-12-04T15:19:36
| 2020-12-04T17:05:56
| 2020-12-04T17:05:56
|
abecadel
|
[] | true
|
|
757,176,172
|
https://api.github.com/repos/huggingface/datasets/issues/1122
|
https://github.com/huggingface/datasets/pull/1122
| 1,122
|
Add Urdu fake news.
|
closed
| 0
| 2020-12-04T15:13:10
| 2020-12-04T15:20:07
| 2020-12-04T15:20:07
|
chaitnayabasava
|
[] |
Added Urdu fake news dataset. More information about the dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
| true
|
757,169,944
|
https://api.github.com/repos/huggingface/datasets/issues/1121
|
https://github.com/huggingface/datasets/pull/1121
| 1,121
|
adding cdt dataset
|
closed
| 0
| 2020-12-04T15:04:33
| 2020-12-04T15:16:49
| 2020-12-04T15:16:49
|
abecadel
|
[] | true
|
|
757,166,342
|
https://api.github.com/repos/huggingface/datasets/issues/1120
|
https://github.com/huggingface/datasets/pull/1120
| 1,120
|
Add conda environment activation
|
closed
| 0
| 2020-12-04T14:59:43
| 2020-12-04T18:34:48
| 2020-12-04T16:40:57
|
parmarsuraj99
|
[] |
Added activation of Conda environment before installing.
| true
|
757,156,781
|
https://api.github.com/repos/huggingface/datasets/issues/1119
|
https://github.com/huggingface/datasets/pull/1119
| 1,119
|
Add Google Great Code Dataset
|
closed
| 0
| 2020-12-04T14:46:28
| 2020-12-06T17:33:14
| 2020-12-06T17:33:13
|
abhishekkrthakur
|
[] | true
|
|
757,142,350
|
https://api.github.com/repos/huggingface/datasets/issues/1118
|
https://github.com/huggingface/datasets/pull/1118
| 1,118
|
Add Tashkeela dataset
|
closed
| 2
| 2020-12-04T14:26:18
| 2020-12-04T15:47:01
| 2020-12-04T15:46:51
|
zaidalyafeai
|
[] |
Arabic Vocalized Words Dataset.
| true
|
757,133,789
|
https://api.github.com/repos/huggingface/datasets/issues/1117
|
https://github.com/huggingface/datasets/pull/1117
| 1,117
|
Fix incorrect MRQA train+SQuAD URL
|
closed
| 3
| 2020-12-04T14:14:26
| 2020-12-06T17:14:11
| 2020-12-06T17:14:10
|
yuxiang-wu
|
[] |
Fix issue #1115
| true
|
757,133,502
|
https://api.github.com/repos/huggingface/datasets/issues/1116
|
https://github.com/huggingface/datasets/pull/1116
| 1,116
|
add dbpedia_14 dataset
|
closed
| 5
| 2020-12-04T14:13:59
| 2020-12-07T10:06:54
| 2020-12-05T15:36:23
|
hfawaz
|
[] |
This dataset corresponds to the DBpedia dataset requested in https://github.com/huggingface/datasets/issues/353.
| true
|
757,127,527
|
https://api.github.com/repos/huggingface/datasets/issues/1115
|
https://github.com/huggingface/datasets/issues/1115
| 1,115
|
Incorrect URL for MRQA SQuAD train subset
|
closed
| 1
| 2020-12-04T14:05:24
| 2020-12-06T17:14:22
| 2020-12-06T17:14:22
|
yuxiang-wu
|
[] |
https://github.com/huggingface/datasets/blob/4ef4c8f8b7a60e35c6fa21115fca9faae91c9f74/datasets/mrqa/mrqa.py#L53
The URL for the `train+SQuAD` subset of MRQA points to the dev set instead of the train set. It should be `https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz`.
| false
|
757,123,638
|
https://api.github.com/repos/huggingface/datasets/issues/1114
|
https://github.com/huggingface/datasets/pull/1114
| 1,114
|
Add sesotho ner corpus
|
closed
| 0
| 2020-12-04T13:59:41
| 2020-12-04T15:02:07
| 2020-12-04T15:02:07
|
yvonnegitau
|
[] |
Clean Sesotho PR
| true
|
757,115,557
|
https://api.github.com/repos/huggingface/datasets/issues/1113
|
https://github.com/huggingface/datasets/pull/1113
| 1,113
|
add qed
|
closed
| 0
| 2020-12-04T13:47:57
| 2020-12-05T15:46:21
| 2020-12-05T15:41:57
|
patil-suraj
|
[] |
adding QED: Dataset for Explanations in Question Answering
https://github.com/google-research-datasets/QED
https://arxiv.org/abs/2009.06354
| true
|
757,108,151
|
https://api.github.com/repos/huggingface/datasets/issues/1112
|
https://github.com/huggingface/datasets/pull/1112
| 1,112
|
Initial version of cord-19 dataset from AllenAI with only the abstract
|
closed
| 1
| 2020-12-04T13:36:39
| 2020-12-04T16:16:40
| 2020-12-04T16:16:24
|
ggdupont
|
[] |
Initial version only reading the metadata in CSV.
### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template and at least fill the tags
- [ ] Both tests for the real data and the dummy data pass.
### TODO:
- [ ] add more metadata
- [ ] add full text
- [ ] add pre-computed document embedding
| true
|
757,083,266
|
https://api.github.com/repos/huggingface/datasets/issues/1111
|
https://github.com/huggingface/datasets/pull/1111
| 1,111
|
Add Siswati Ner corpus
|
closed
| 0
| 2020-12-04T12:57:31
| 2020-12-04T14:43:01
| 2020-12-04T14:43:00
|
yvonnegitau
|
[] |
Clean Siswati PR
| true
|
757,082,677
|
https://api.github.com/repos/huggingface/datasets/issues/1110
|
https://github.com/huggingface/datasets/issues/1110
| 1,110
|
Using a feature named "_type" fails with certain operations
|
closed
| 1
| 2020-12-04T12:56:33
| 2022-01-14T18:07:00
| 2022-01-14T18:07:00
|
dcfidalgo
|
[] |
A column named `_type` leads to a `TypeError: unhashable type: 'dict'` for certain operations:
```python
from datasets import Dataset, concatenate_datasets
ds = Dataset.from_dict({"_type": ["whatever"]}).map()
concatenate_datasets([ds])
# or simply
Dataset(ds._data)
```
Context: We are using datasets to persist data coming from elasticsearch to feed to our pipeline, and elasticsearch has a `_type` field, hence the strange name of the column.
Not sure if you wish to support this specific column name, but if you do I would be happy to try a fix and provide a PR. I already had a look into it and I think the culprit is the `datasets.features.generate_from_dict` function. It uses the hard-coded `_type` string to figure out if it reached the end of the nested feature object from a serialized dict.
Best wishes and keep up the awesome work!
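Until the name is handled, one possible workaround is simply to avoid a top-level column literally called `_type` (a minimal sketch, not an official fix; the replacement name is arbitrary):

```python
from datasets import Dataset, concatenate_datasets

data = {"_type": ["whatever"]}
# Rename the key before building the Dataset so the serialized features never
# contain a user column named "_type".
data["es_type"] = data.pop("_type")

ds = Dataset.from_dict(data)
concatenate_datasets([ds])  # no longer hits the unhashable-dict TypeError
```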
| false
|
757,055,702
|
https://api.github.com/repos/huggingface/datasets/issues/1109
|
https://github.com/huggingface/datasets/pull/1109
| 1,109
|
add woz_dialogue
|
closed
| 0
| 2020-12-04T12:13:07
| 2020-12-05T15:41:23
| 2020-12-05T15:40:18
|
patil-suraj
|
[] |
Adding Wizard-of-Oz task oriented dialogue dataset
https://github.com/nmrksic/neural-belief-tracker/tree/master/data/woz
https://arxiv.org/abs/1604.04562
| true
|
757,054,732
|
https://api.github.com/repos/huggingface/datasets/issues/1108
|
https://github.com/huggingface/datasets/pull/1108
| 1,108
|
Add Sepedi NER corpus
|
closed
| 0
| 2020-12-04T12:11:24
| 2020-12-04T14:39:00
| 2020-12-04T14:39:00
|
yvonnegitau
|
[] |
Finally a clean PR for Sepedi
| true
|
757,031,179
|
https://api.github.com/repos/huggingface/datasets/issues/1107
|
https://github.com/huggingface/datasets/pull/1107
| 1,107
|
Add arsentd_lev dataset
|
closed
| 1
| 2020-12-04T11:31:04
| 2020-12-05T15:38:09
| 2020-12-05T15:38:09
|
moussaKam
|
[] |
Add The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV)
Paper: [ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets](https://arxiv.org/abs/1906.01830)
Homepage: http://oma-project.com/
| true
|
757,027,158
|
https://api.github.com/repos/huggingface/datasets/issues/1106
|
https://github.com/huggingface/datasets/pull/1106
| 1,106
|
Add Urdu fake news
|
closed
| 0
| 2020-12-04T11:24:14
| 2020-12-04T14:21:12
| 2020-12-04T14:21:12
|
chaitnayabasava
|
[] |
Added Urdu fake news dataset. More information about the dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
| true
|
757,024,162
|
https://api.github.com/repos/huggingface/datasets/issues/1105
|
https://github.com/huggingface/datasets/pull/1105
| 1,105
|
add xquad_r dataset
|
closed
| 2
| 2020-12-04T11:19:35
| 2020-12-04T16:37:00
| 2020-12-04T16:37:00
|
manandey
|
[] | true
|
|
757,020,934
|
https://api.github.com/repos/huggingface/datasets/issues/1104
|
https://github.com/huggingface/datasets/pull/1104
| 1,104
|
add TLC
|
closed
| 0
| 2020-12-04T11:14:58
| 2020-12-04T14:29:23
| 2020-12-04T14:29:23
|
chameleonTK
|
[] |
Added TLC dataset
| true
|
757,016,820
|
https://api.github.com/repos/huggingface/datasets/issues/1103
|
https://github.com/huggingface/datasets/issues/1103
| 1,103
|
Add support to download kaggle datasets
|
closed
| 2
| 2020-12-04T11:08:37
| 2023-07-20T15:22:24
| 2023-07-20T15:22:23
|
abhishekkrthakur
|
[
"enhancement"
] |
We can use an API key.
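A minimal sketch of what the user-side flow could look like, assuming the official `kaggle` package and an API key stored in `~/.kaggle/kaggle.json` or in the `KAGGLE_USERNAME`/`KAGGLE_KEY` environment variables (the dataset slug below is just an example):

```python
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()  # reads the API key from kaggle.json or the environment
api.dataset_download_files("zynicide/wine-reviews", path="./data", unzip=True)
```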
| false
|
757,016,515
|
https://api.github.com/repos/huggingface/datasets/issues/1102
|
https://github.com/huggingface/datasets/issues/1102
| 1,102
|
Add retries to download manager
|
closed
| 0
| 2020-12-04T11:08:11
| 2020-12-22T15:34:06
| 2020-12-22T15:34:06
|
abhishekkrthakur
|
[
"enhancement"
] | false
|
|
757,009,226
|
https://api.github.com/repos/huggingface/datasets/issues/1101
|
https://github.com/huggingface/datasets/pull/1101
| 1,101
|
Add Wikicorpus dataset
|
closed
| 1
| 2020-12-04T10:57:26
| 2020-12-09T18:13:10
| 2020-12-09T18:13:09
|
albertvillanova
|
[] |
Add dataset.
| true
|
756,998,433
|
https://api.github.com/repos/huggingface/datasets/issues/1100
|
https://github.com/huggingface/datasets/pull/1100
| 1,100
|
Urdu fake news
|
closed
| 0
| 2020-12-04T10:41:20
| 2020-12-04T11:19:00
| 2020-12-04T11:19:00
|
chaitnayabasava
|
[] |
Added the Bend the Truth Urdu fake news dataset. More information <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
| true
|
756,993,540
|
https://api.github.com/repos/huggingface/datasets/issues/1099
|
https://github.com/huggingface/datasets/pull/1099
| 1,099
|
Add tamilmixsentiment data
|
closed
| 0
| 2020-12-04T10:34:07
| 2020-12-06T06:32:22
| 2020-12-05T16:48:33
|
jamespaultg
|
[] | true
|
|
756,975,414
|
https://api.github.com/repos/huggingface/datasets/issues/1098
|
https://github.com/huggingface/datasets/pull/1098
| 1,098
|
Add ToTTo Dataset
|
closed
| 0
| 2020-12-04T10:07:25
| 2020-12-04T13:38:20
| 2020-12-04T13:38:19
|
abhishekkrthakur
|
[] |
Adds a brand new table-to-text dataset: https://github.com/google-research-datasets/ToTTo
| true
|
756,955,729
|
https://api.github.com/repos/huggingface/datasets/issues/1097
|
https://github.com/huggingface/datasets/pull/1097
| 1,097
|
Add MSRA NER labels
|
closed
| 0
| 2020-12-04T09:38:16
| 2020-12-04T13:31:59
| 2020-12-04T13:31:58
|
JetRunner
|
[] |
Fixes #940
| true
|
756,952,461
|
https://api.github.com/repos/huggingface/datasets/issues/1096
|
https://github.com/huggingface/datasets/pull/1096
| 1,096
|
FIX matinf link in ADD_NEW_DATASET.md
|
closed
| 0
| 2020-12-04T09:33:25
| 2020-12-04T14:25:35
| 2020-12-04T14:25:35
|
moussaKam
|
[] | true
|
|
756,934,964
|
https://api.github.com/repos/huggingface/datasets/issues/1095
|
https://github.com/huggingface/datasets/pull/1095
| 1,095
|
Add TupleInf Open IE Dataset
|
closed
| 2
| 2020-12-04T09:08:07
| 2020-12-04T15:40:54
| 2020-12-04T15:40:54
|
mattbui
|
[] |
For more information: https://allenai.org/data/tuple-ie
| true
|
756,927,060
|
https://api.github.com/repos/huggingface/datasets/issues/1094
|
https://github.com/huggingface/datasets/pull/1094
| 1,094
|
add urdu fake news dataset
|
closed
| 0
| 2020-12-04T08:57:38
| 2020-12-04T09:20:56
| 2020-12-04T09:20:56
|
chaitnayabasava
|
[] |
Added Urdu fake news dataset. The dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
| true
|
756,916,565
|
https://api.github.com/repos/huggingface/datasets/issues/1093
|
https://github.com/huggingface/datasets/pull/1093
| 1,093
|
Add NCBI Disease Corpus dataset
|
closed
| 0
| 2020-12-04T08:42:32
| 2020-12-04T11:15:12
| 2020-12-04T11:15:12
|
edugp
|
[] | true
|
|
756,913,134
|
https://api.github.com/repos/huggingface/datasets/issues/1092
|
https://github.com/huggingface/datasets/pull/1092
| 1,092
|
Add Coached Conversation Preference Dataset
|
closed
| 0
| 2020-12-04T08:36:49
| 2020-12-20T13:34:00
| 2020-12-04T13:49:50
|
vineeths96
|
[] |
Adding [Coached Conversation Preference Dataset](https://research.google/tools/datasets/coached-conversational-preference-elicitation/)
| true
|
756,841,254
|
https://api.github.com/repos/huggingface/datasets/issues/1091
|
https://github.com/huggingface/datasets/pull/1091
| 1,091
|
Add Google wellformed query dataset
|
closed
| 1
| 2020-12-04T06:25:54
| 2020-12-06T17:43:03
| 2020-12-06T17:43:02
|
thevasudevgupta
|
[] |
This pull request will add Google wellformed_query dataset. Link of dataset is https://github.com/google-research-datasets/query-wellformedness
| true
|
756,825,941
|
https://api.github.com/repos/huggingface/datasets/issues/1090
|
https://github.com/huggingface/datasets/pull/1090
| 1,090
|
add thaisum
|
closed
| 0
| 2020-12-04T05:54:48
| 2020-12-04T11:16:06
| 2020-12-04T11:16:06
|
cstorm125
|
[] |
ThaiSum, a large-scale corpus for Thai text summarization obtained from several online news websites, namely Thairath, ThaiPBS, Prachathai, and The Standard. This dataset consists of over 350,000 article-summary pairs written by journalists. We evaluate the performance of various existing summarization models on the ThaiSum dataset and analyse the characteristics of the dataset to present its difficulties.
| true
|
756,823,690
|
https://api.github.com/repos/huggingface/datasets/issues/1089
|
https://github.com/huggingface/datasets/pull/1089
| 1,089
|
add sharc_modified
|
closed
| 0
| 2020-12-04T05:49:49
| 2020-12-04T10:41:30
| 2020-12-04T10:31:44
|
patil-suraj
|
[] |
Adding modified ShARC dataset https://github.com/nikhilweee/neural-conv-qa
| true
|
756,822,017
|
https://api.github.com/repos/huggingface/datasets/issues/1088
|
https://github.com/huggingface/datasets/pull/1088
| 1,088
|
add xquad_r dataset
|
closed
| 0
| 2020-12-04T05:45:55
| 2020-12-04T10:58:13
| 2020-12-04T10:47:01
|
manandey
|
[] | true
|
|
756,794,430
|
https://api.github.com/repos/huggingface/datasets/issues/1087
|
https://github.com/huggingface/datasets/pull/1087
| 1,087
|
Add Big Patent dataset
|
closed
| 2
| 2020-12-04T04:37:30
| 2020-12-06T17:21:00
| 2020-12-06T17:20:59
|
mattbui
|
[] |
* More info on the dataset: https://evasharma.github.io/bigpatent/
* There's another raw version of the dataset available from tfds. However, they're quite large so I don't have the resources to fully test all the configs for that version yet. We'll try to add it later.
| true
|
756,720,643
|
https://api.github.com/repos/huggingface/datasets/issues/1086
|
https://github.com/huggingface/datasets/pull/1086
| 1,086
|
adding cdt dataset
|
closed
| 2
| 2020-12-04T01:28:11
| 2020-12-04T15:04:02
| 2020-12-04T15:04:02
|
abecadel
|
[] |
- **Name:** *Cyberbullying Detection Task*
- **Description:** *The Cyberbullying Detection task was part of the 2019 edition of the PolEval competition. The goal is to predict if a given Twitter message contains cyberbullying (harmful) content.*
- **Data:** *https://github.com/ptaszynski/cyberbullying-Polish*
- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for the Polish language understanding.*
| true
|
756,704,563
|
https://api.github.com/repos/huggingface/datasets/issues/1085
|
https://github.com/huggingface/datasets/pull/1085
| 1,085
|
add mutual friends conversational dataset
|
closed
| 1
| 2020-12-04T00:48:21
| 2020-12-16T15:58:31
| 2020-12-16T15:58:30
|
VictorSanh
|
[] |
Mutual friends dataset
WIP
TODO:
- scenario_kbs (bug with pyarrow conversion)
- download from codalab checksums bug
| true
|
756,688,727
|
https://api.github.com/repos/huggingface/datasets/issues/1084
|
https://github.com/huggingface/datasets/pull/1084
| 1,084
|
adding cdsc dataset
|
closed
| 0
| 2020-12-04T00:10:05
| 2020-12-04T10:41:26
| 2020-12-04T10:41:26
|
abecadel
|
[] |
- **Name**: *cdsc (domains: cdsc-e & cdsc-r)*
- **Description**: *Polish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource.*
- **Data**: *http://2019.poleval.pl/index.php/tasks/*
- **Motivation**: *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for the Polish language understanding.*
| true
|
756,687,101
|
https://api.github.com/repos/huggingface/datasets/issues/1083
|
https://github.com/huggingface/datasets/pull/1083
| 1,083
|
Add the multilingual Exams dataset
|
closed
| 1
| 2020-12-04T00:06:04
| 2020-12-04T17:12:00
| 2020-12-04T17:12:00
|
yjernite
|
[] |
https://github.com/mhardalov/exams-qa
`multilingual` configs have all languages mixed together
`crosslingual` mixes the languages for test but separates them for train and dev, so I've made one config per language for train/dev data and one config with the joint test set
| true
|
756,676,218
|
https://api.github.com/repos/huggingface/datasets/issues/1082
|
https://github.com/huggingface/datasets/pull/1082
| 1,082
|
Myanmar news dataset
|
closed
| 1
| 2020-12-03T23:39:00
| 2020-12-04T10:13:38
| 2020-12-04T10:13:38
|
mapmeld
|
[] |
Add a news topic classification dataset in the Myanmar / Burmese language.
This data was collected in 2017 by Aye Hninn Khine, and published on GitHub with a GPL license
https://github.com/ayehninnkhine/MyanmarNewsClassificationSystem
| true
|
756,672,527
|
https://api.github.com/repos/huggingface/datasets/issues/1081
|
https://github.com/huggingface/datasets/pull/1081
| 1,081
|
Add Knowledge-Enhanced Language Model Pre-training (KELM)
|
closed
| 0
| 2020-12-03T23:30:09
| 2020-12-04T16:36:28
| 2020-12-04T16:36:28
|
joeddav
|
[] |
Adds the KELM dataset.
- Webpage/repo: https://github.com/google-research-datasets/KELM-corpus
- Paper: https://arxiv.org/pdf/2010.12688.pdf
| true
|
756,663,464
|
https://api.github.com/repos/huggingface/datasets/issues/1080
|
https://github.com/huggingface/datasets/pull/1080
| 1,080
|
Add WikiANN NER dataset
|
closed
| 1
| 2020-12-03T23:09:24
| 2020-12-06T17:18:55
| 2020-12-06T17:18:55
|
lewtun
|
[] |
This PR adds the full set of 176 languages from the balanced train/dev/test splits of WikiANN / PAN-X from: https://github.com/afshinrahimi/mmner
Until now, only 40 of these languages were available in `datasets` as part of the XTREME benchmark
Courtesy of the dataset author, we can now download this dataset from a Dropbox URL without needing a manual download anymore 🥳, so at some point it would be worth updating the PAN-X subset of XTREME as well 😄
Link to gist with some snippets for producing dummy data: https://gist.github.com/lewtun/5b93294ab6dbcf59d1493dbe2cfd6bb9
P.S. @yjernite I think I was confused about needing to generate a set of YAML tags per config, so ended up just adding a single one in the README.
| true
|
756,652,427
|
https://api.github.com/repos/huggingface/datasets/issues/1079
|
https://github.com/huggingface/datasets/pull/1079
| 1,079
|
nkjp-ner
|
closed
| 0
| 2020-12-03T22:47:26
| 2020-12-04T09:42:06
| 2020-12-04T09:42:06
|
abecadel
|
[] |
- **Name:** *nkjp-ner*
- **Description:** *The NKJP-NER is based on a human-annotated part of NKJP. We extracted sentences with named entities of exactly one type. The task is to predict the type of the named entity.*
- **Data:** *https://klejbenchmark.com/tasks/*
- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for the Polish language understanding.*
| true
|
756,633,215
|
https://api.github.com/repos/huggingface/datasets/issues/1078
|
https://github.com/huggingface/datasets/pull/1078
| 1,078
|
add AJGT dataset
|
closed
| 0
| 2020-12-03T22:16:31
| 2020-12-04T09:55:15
| 2020-12-04T09:55:15
|
zaidalyafeai
|
[] |
Arabic Jordanian General Tweets.
| true
|
756,617,964
|
https://api.github.com/repos/huggingface/datasets/issues/1077
|
https://github.com/huggingface/datasets/pull/1077
| 1,077
|
Added glucose dataset
|
closed
| 0
| 2020-12-03T21:49:01
| 2020-12-04T09:55:53
| 2020-12-04T09:55:52
|
TevenLeScao
|
[] |
This PR adds the [Glucose](https://github.com/ElementalCognition/glucose) dataset.
| true
|
756,584,328
|
https://api.github.com/repos/huggingface/datasets/issues/1076
|
https://github.com/huggingface/datasets/pull/1076
| 1,076
|
quac quac / coin coin
|
closed
| 1
| 2020-12-03T20:55:29
| 2020-12-04T16:36:39
| 2020-12-04T09:15:20
|
VictorSanh
|
[] |
Add QUAC (Question Answering in Context)
I linearized most of the dictionaries to lists.
Referenced the authors' datasheet for the dataset card.
🦆🦆🦆
Coin coin
| true
|
756,501,235
|
https://api.github.com/repos/huggingface/datasets/issues/1075
|
https://github.com/huggingface/datasets/pull/1075
| 1,075
|
adding cleaned verion of E2E NLG
|
closed
| 0
| 2020-12-03T19:21:07
| 2020-12-03T19:43:56
| 2020-12-03T19:43:56
|
yjernite
|
[] |
Found at: https://github.com/tuetschek/e2e-cleaning
| true
|
756,483,172
|
https://api.github.com/repos/huggingface/datasets/issues/1074
|
https://github.com/huggingface/datasets/pull/1074
| 1,074
|
Swedish MT STS-B
|
closed
| 0
| 2020-12-03T19:06:25
| 2020-12-04T20:22:27
| 2020-12-03T20:44:28
|
timpal0l
|
[] |
Added a Swedish machine-translated version of the well-known STS-B Corpus.
| true
|
756,468,034
|
https://api.github.com/repos/huggingface/datasets/issues/1073
|
https://github.com/huggingface/datasets/pull/1073
| 1,073
|
Add DialogRE dataset
|
closed
| 0
| 2020-12-03T18:56:40
| 2020-12-20T13:34:48
| 2020-12-04T13:41:51
|
vineeths96
|
[] |
Adding the [DialogRE](https://github.com/nlpdata/dialogre) dataset Version 2.
- All tests passed successfully.
| true
|
756,454,511
|
https://api.github.com/repos/huggingface/datasets/issues/1072
|
https://github.com/huggingface/datasets/pull/1072
| 1,072
|
actually uses the previously declared VERSION on the configs in the template
|
closed
| 0
| 2020-12-03T18:44:27
| 2020-12-03T19:35:46
| 2020-12-03T19:35:46
|
yjernite
|
[] | true
|
|
756,447,296
|
https://api.github.com/repos/huggingface/datasets/issues/1071
|
https://github.com/huggingface/datasets/pull/1071
| 1,071
|
add xlrd to test package requirements
|
closed
| 0
| 2020-12-03T18:32:47
| 2020-12-03T18:47:16
| 2020-12-03T18:47:16
|
yjernite
|
[] |
Adds `xlrd` package to the test requirements to handle scripts that use `pandas` to load excel files
| true
|
756,442,481
|
https://api.github.com/repos/huggingface/datasets/issues/1070
|
https://github.com/huggingface/datasets/pull/1070
| 1,070
|
add conv_ai
|
closed
| 2
| 2020-12-03T18:25:20
| 2020-12-04T07:58:35
| 2020-12-04T06:44:34
|
patil-suraj
|
[] |
Adding ConvAI dataset https://github.com/DeepPavlov/convai/tree/master/2017
| true
|
756,425,737
|
https://api.github.com/repos/huggingface/datasets/issues/1069
|
https://github.com/huggingface/datasets/pull/1069
| 1,069
|
Test
|
closed
| 0
| 2020-12-03T18:01:45
| 2020-12-04T04:24:18
| 2020-12-04T04:24:11
|
manandey
|
[] | true
|
|
756,417,337
|
https://api.github.com/repos/huggingface/datasets/issues/1068
|
https://github.com/huggingface/datasets/pull/1068
| 1,068
|
Add Pubmed (citation + abstract) dataset (2020).
|
closed
| 4
| 2020-12-03T17:54:10
| 2020-12-23T09:52:07
| 2020-12-23T09:52:07
|
Narsil
|
[] | null | true
|
756,414,212
|
https://api.github.com/repos/huggingface/datasets/issues/1067
|
https://github.com/huggingface/datasets/pull/1067
| 1,067
|
add xquad-r dataset
|
closed
| 0
| 2020-12-03T17:50:01
| 2020-12-03T17:53:21
| 2020-12-03T17:53:15
|
manandey
|
[] | true
|
|
756,391,957
|
https://api.github.com/repos/huggingface/datasets/issues/1066
|
https://github.com/huggingface/datasets/pull/1066
| 1,066
|
Add ChrEn
|
closed
| 3
| 2020-12-03T17:17:48
| 2020-12-03T21:49:39
| 2020-12-03T21:49:39
|
yjernite
|
[] |
Adding the Cherokee English machine translation dataset of https://github.com/ZhangShiyue/ChrEn
| true
|
756,383,414
|
https://api.github.com/repos/huggingface/datasets/issues/1065
|
https://github.com/huggingface/datasets/pull/1065
| 1,065
|
add xquad-r dataset
|
closed
| 0
| 2020-12-03T17:06:23
| 2020-12-03T17:42:21
| 2020-12-03T17:42:03
|
manandey
|
[] | true
|
|
756,382,186
|
https://api.github.com/repos/huggingface/datasets/issues/1064
|
https://github.com/huggingface/datasets/issues/1064
| 1,064
|
Not support links with 302 redirect
|
closed
| 2
| 2020-12-03T17:04:43
| 2021-01-14T02:51:25
| 2021-01-14T02:51:25
|
chameleonTK
|
[
"bug",
"enhancement"
] |
I have an issue adding this download link https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz
It might be because it is not a direct link (it returns 302 and redirects to AWS, which returns 403 for HEAD requests).
```python
import requests

requests.head("https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz", allow_redirects=True)
# <Response [403]>
```
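For reference, a small sketch of the behaviour described above (exact status codes depend on the hosting side): the pre-signed AWS URL rejects `HEAD`, while a plain `GET` that follows the redirect is normally accepted:

```python
import requests

url = "https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz"

print(requests.head(url, allow_redirects=True).status_code)              # 403, as reported
print(requests.get(url, allow_redirects=True, stream=True).status_code)  # typically 200
```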
| false
|
756,376,374
|
https://api.github.com/repos/huggingface/datasets/issues/1063
|
https://github.com/huggingface/datasets/pull/1063
| 1,063
|
Add the Ud treebank
|
closed
| 1
| 2020-12-03T16:56:41
| 2020-12-04T16:11:54
| 2020-12-04T15:51:46
|
jplu
|
[] |
This PR adds the 183 datasets in 104 languages of the UD Treebank.
| true
|
756,373,187
|
https://api.github.com/repos/huggingface/datasets/issues/1062
|
https://github.com/huggingface/datasets/pull/1062
| 1,062
|
Add KorNLU dataset
|
closed
| 1
| 2020-12-03T16:52:39
| 2020-12-04T11:05:19
| 2020-12-04T11:05:19
|
sumanthd17
|
[] |
Added Korean NLU datasets. The link to the dataset can be found [here](https://github.com/kakaobrain/KorNLUDatasets) and the paper can be found [here](https://arxiv.org/abs/2004.03289)
**Note**: The MNLI TSV file is broken, so this code currently excludes the file. Please suggest an alternative if there is one, @lhoestq.
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data
| true
|
756,362,661
|
https://api.github.com/repos/huggingface/datasets/issues/1061
|
https://github.com/huggingface/datasets/pull/1061
| 1,061
|
add labr dataset
|
closed
| 0
| 2020-12-03T16:38:57
| 2020-12-03T18:25:44
| 2020-12-03T18:25:44
|
zaidalyafeai
|
[] |
Arabic Book Reviews dataset.
| true
|