Dataset schema (value ranges and string/list lengths observed over the full dump):

| column | dtype | min / shortest | max / longest |
| --- | --- | --- | --- |
| id | int64 | 599M | 3.29B |
| url | string (length) | 58 | 61 |
| html_url | string (length) | 46 | 51 |
| number | int64 | 1 | 7.72k |
| title | string (length) | 1 | 290 |
| state | string (2 classes) | | |
| comments | int64 | 0 | 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-08-01 05:15:45 |
| user_login | string (length) | 3 | 26 |
| labels | list (length) | 0 | 4 |
| body | string (length) | 0 | 228k |
| is_pull_request | bool (2 classes) | | |
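As a rough illustration of how a dump with this schema can be sliced with the `datasets` library — a minimal sketch; the repo id below is hypothetical, substitute the real dataset name:

```python
from datasets import load_dataset

# Hypothetical repo id — replace with the actual name of this issues dump.
ds = load_dataset("my-org/datasets-github-issues", split="train")

# is_pull_request separates PRs from plain issues.
prs = ds.filter(lambda row: row["is_pull_request"])
issues = ds.filter(lambda row: not row["is_pull_request"])

# Closed PRs that attracted discussion, most recent first.
discussed = prs.filter(lambda row: row["state"] == "closed" and row["comments"] > 0)
discussed = discussed.sort("created_at", reverse=True)

print(len(prs), len(issues))
print(discussed[0]["title"], discussed[0]["html_url"])
```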
#1260 · Added NewsPH Raw Dataset
pull request · closed · 1 comment · labels: [] · by jcblaisecruz02 · id 758,601,828
https://github.com/huggingface/datasets/pull/1260 · API: https://api.github.com/repos/huggingface/datasets/issues/1260
created 2020-12-07T15:17:53 · updated 2020-12-08T16:27:15 · closed 2020-12-08T16:27:15

Added the raw version of the NewsPH dataset, which was used to automatically generate the NewsPH-NLI corpus. Dataset of news articles in Filipino from mainstream Philippine news sites on the internet. Can be used as a language modeling dataset or to reproduce the NewsPH-NLI dataset.
Paper: https://arxiv.org/abs/2010.11574
Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks

#1259 · Add KorQPair dataset
pull request · closed · 2 comments · labels: [] · by jaketae · id 758,565,320
https://github.com/huggingface/datasets/pull/1259 · API: https://api.github.com/repos/huggingface/datasets/issues/1259
created 2020-12-07T14:33:57 · updated 2021-12-29T00:49:40 · closed 2020-12-08T15:11:41

This PR adds a [Korean paired question dataset](https://github.com/songys/Question_pair) containing labels indicating whether two questions in a given pair are semantically identical. This dataset was used to evaluate the performance of [KoGPT2](https://github.com/SKT-AI/KoGPT2#subtask-evaluations) on a phrase detection downstream task.

#1258 · arXiv dataset added
pull request · closed · 1 comment · labels: [] · by tanmoyio · id 758,557,169
https://github.com/huggingface/datasets/pull/1258 · API: https://api.github.com/repos/huggingface/datasets/issues/1258
created 2020-12-07T14:23:33 · updated 2020-12-08T14:07:15 · closed 2020-12-08T14:07:15

#1257 · Add Swahili news classification dataset
pull request · closed · 0 comments · labels: [] · by yvonnegitau · id 758,550,490
https://github.com/huggingface/datasets/pull/1257 · API: https://api.github.com/repos/huggingface/datasets/issues/1257
created 2020-12-07T14:15:13 · updated 2020-12-08T14:44:19 · closed 2020-12-08T14:44:19

Add Swahili news classification dataset

#1256 · adding LiMiT dataset
pull request · closed · 0 comments · labels: [] · by patil-suraj · id 758,531,980
https://github.com/huggingface/datasets/pull/1256 · API: https://api.github.com/repos/huggingface/datasets/issues/1256
created 2020-12-07T14:00:41 · updated 2020-12-08T14:58:28 · closed 2020-12-08T14:42:51

Adding LiMiT: The Literal Motion in Text Dataset. https://github.com/ilmgut/limit_dataset

#1255 · [doc] nlp/viewer ➡️datasets/viewer
pull request · closed · 0 comments · labels: [] · by julien-c · id 758,530,243
https://github.com/huggingface/datasets/pull/1255 · API: https://api.github.com/repos/huggingface/datasets/issues/1255
created 2020-12-07T13:58:41 · updated 2020-12-08T17:17:54 · closed 2020-12-08T17:17:53

cc @srush

#1254 · Added WikiText-TL-39
pull request · closed · 1 comment · labels: [] · by jcblaisecruz02 · id 758,518,774
https://github.com/huggingface/datasets/pull/1254 · API: https://api.github.com/repos/huggingface/datasets/issues/1254
created 2020-12-07T13:43:48 · updated 2020-12-08T16:00:58 · closed 2020-12-08T16:00:58

This PR adds the WikiText-TL-39 Filipino Language Modeling dataset.
Paper: https://arxiv.org/abs/1907.00409
Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks

#1253 · add thainer
pull request · closed · 0 comments · labels: [] · by cstorm125 · id 758,517,391
https://github.com/huggingface/datasets/pull/1253 · API: https://api.github.com/repos/huggingface/datasets/issues/1253
created 2020-12-07T13:41:54 · updated 2020-12-08T14:44:49 · closed 2020-12-08T14:44:49

ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence [unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/). It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp). The NER tags are annotated by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/) for 2,258 sentences and the rest by [@wannaphong](https://github.com/wannaphong/). The POS tags are done by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`. [@wannaphong](https://github.com/wannaphong/) is now the only maintainer of this dataset.

#1252 · Add Naver sentiment movie corpus
pull request · closed · 0 comments · labels: [] · by jaketae · id 758,511,388
https://github.com/huggingface/datasets/pull/1252 · API: https://api.github.com/repos/huggingface/datasets/issues/1252
created 2020-12-07T13:33:45 · updated 2020-12-08T14:32:33 · closed 2020-12-08T14:21:37

Supersedes #1168

> This PR adds the [Naver sentiment movie corpus](https://github.com/e9t/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https://www.aclweb.org/anthology/2020.lrec-1.199.pdf).

#1251 · Add Wiki Atomic Edits Dataset (43M edits)
pull request · closed · 1 comment · labels: [] · by abhishekkrthakur · id 758,503,689
https://github.com/huggingface/datasets/pull/1251 · API: https://api.github.com/repos/huggingface/datasets/issues/1251
created 2020-12-07T13:23:08 · updated 2020-12-14T10:05:01 · closed 2020-12-14T10:05:00

#1250 · added Nergrit dataset
pull request · closed · 0 comments · labels: [] · by cahya-wirawan · id 758,491,704
https://github.com/huggingface/datasets/pull/1250 · API: https://api.github.com/repos/huggingface/datasets/issues/1250
created 2020-12-07T13:06:12 · updated 2020-12-08T14:33:29 · closed 2020-12-08T14:33:29

Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis. This PR is only for the Named Entity Recognition.

#1249 · Add doc2dial dataset
pull request · closed · 2 comments · labels: [] · by KMFODA · id 758,472,863
https://github.com/huggingface/datasets/pull/1249 · API: https://api.github.com/repos/huggingface/datasets/issues/1249
created 2020-12-07T12:39:09 · updated 2020-12-14T16:17:14 · closed 2020-12-14T16:17:14

### Doc2dial: A Goal-Oriented Document-Grounded Dialogue Dataset v0.9

Once complete, this will add the [Doc2dial](https://doc2dial.github.io/data.html) dataset from the generic data sets list.

#1248 · Update step-by-step guide about the dataset cards
pull request · closed · 0 comments · labels: [] · by thomwolf · id 758,454,438
https://github.com/huggingface/datasets/pull/1248 · API: https://api.github.com/repos/huggingface/datasets/issues/1248
created 2020-12-07T12:12:12 · updated 2020-12-07T13:19:24 · closed 2020-12-07T13:19:23

Small update in the step-by-step guide about the dataset cards, to indicate that a card can be created and completed while exploring the dataset.

#1247 · Adding indonlu dataset
pull request · closed · 2 comments · labels: [] · by yasirabd · id 758,431,640
https://github.com/huggingface/datasets/pull/1247 · API: https://api.github.com/repos/huggingface/datasets/issues/1247
created 2020-12-07T11:38:45 · updated 2020-12-08T14:11:50 · closed 2020-12-08T14:11:50

The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for Bahasa Indonesia. It contains 12 datasets.

#1246 · arXiv dataset added
pull request · closed · 0 comments · labels: [] · by tanmoyio · id 758,418,652
https://github.com/huggingface/datasets/pull/1246 · API: https://api.github.com/repos/huggingface/datasets/issues/1246
created 2020-12-07T11:20:23 · updated 2020-12-07T14:22:58 · closed 2020-12-07T14:22:58

#1245 · Add Google Turkish Treebank Dataset
pull request · closed · 1 comment · labels: ["dataset contribution"] · by abhishekkrthakur · id 758,411,233
https://github.com/huggingface/datasets/pull/1245 · API: https://api.github.com/repos/huggingface/datasets/issues/1245
created 2020-12-07T11:09:17 · updated 2023-09-24T09:40:49 · closed 2022-10-03T09:39:32

body: null

#1244 · arxiv dataset added
pull request · closed · 0 comments · labels: [] · by tanmoyio · id 758,384,417
https://github.com/huggingface/datasets/pull/1244 · API: https://api.github.com/repos/huggingface/datasets/issues/1244
created 2020-12-07T10:32:54 · updated 2020-12-07T11:04:23 · closed 2020-12-07T11:04:23

#1243 · Add Google Noun Verb Dataset
pull request · closed · 1 comment · labels: ["dataset contribution"] · by abhishekkrthakur · id 758,378,904
https://github.com/huggingface/datasets/pull/1243 · API: https://api.github.com/repos/huggingface/datasets/issues/1243
created 2020-12-07T10:26:05 · updated 2023-09-24T09:40:54 · closed 2022-10-03T09:39:37

body: null

#1242 · adding bprec
pull request · closed · 2 comments · labels: [] · by kldarek · id 758,370,579
https://github.com/huggingface/datasets/pull/1242 · API: https://api.github.com/repos/huggingface/datasets/issues/1242
created 2020-12-07T10:15:49 · updated 2020-12-08T14:33:49 · closed 2020-12-08T14:33:48

#1241 · Opus elhuyar dataset for MT task having languages pair in Spanish to Basque
pull request · closed · 0 comments · labels: [] · by spatil6 · id 758,360,643
https://github.com/huggingface/datasets/pull/1241 · API: https://api.github.com/repos/huggingface/datasets/issues/1241
created 2020-12-07T10:03:34 · updated 2020-12-19T14:55:12 · closed 2020-12-09T15:12:48

Opus Elhuyar dataset for the MT task, with the Spanish-Basque language pair. More info: http://opus.nlpl.eu/Elhuyar.php

#1240 · Multi Domain Sentiment Analysis Dataset (MDSA)
pull request · closed · 9 comments · labels: ["dataset contribution"] · by abhishekkrthakur · id 758,355,523
https://github.com/huggingface/datasets/pull/1240 · API: https://api.github.com/repos/huggingface/datasets/issues/1240
created 2020-12-07T09:57:15 · updated 2023-09-24T09:40:59 · closed 2022-10-03T09:39:43

body: null

#1239 · add yelp_review_full dataset
pull request · closed · 1 comment · labels: [] · by hfawaz · id 758,339,593
https://github.com/huggingface/datasets/pull/1239 · API: https://api.github.com/repos/huggingface/datasets/issues/1239
created 2020-12-07T09:35:36 · updated 2020-12-08T15:43:24 · closed 2020-12-08T15:00:50

This corresponds to the Yelp-5 requested in https://github.com/huggingface/datasets/issues/353

#1238 · adding poem_sentiment
pull request · closed · 0 comments · labels: [] · by patil-suraj · id 758,321,688
https://github.com/huggingface/datasets/pull/1238 · API: https://api.github.com/repos/huggingface/datasets/issues/1238
created 2020-12-07T09:11:52 · updated 2020-12-09T16:36:10 · closed 2020-12-09T16:02:45

Adding the poem_sentiment dataset. https://github.com/google-research-datasets/poem-sentiment

#1237 · Add AmbigQA dataset
pull request · closed · 0 comments · labels: [] · by cceyda · id 758,318,353
https://github.com/huggingface/datasets/pull/1237 · API: https://api.github.com/repos/huggingface/datasets/issues/1237
created 2020-12-07T09:07:19 · updated 2020-12-08T13:38:52 · closed 2020-12-08T13:38:52

# AmbigQA: Answering Ambiguous Open-domain Questions Dataset

Adding the [AmbigQA](https://nlp.cs.washington.edu/ambigqa/) dataset as part of the sprint 🎉 (from Open dataset list for Dataset sprint)

Added both the light and full versions (as seen on the dataset homepage). The json format changes based on the value of one 'type' field, so I set the unavailable field to an empty list. This is explained in the README -> Data Fields.

```py
train_light_dataset = load_dataset('./datasets/ambig_qa', "light", split="train")
val_light_dataset = load_dataset('./datasets/ambig_qa', "light", split="validation")
train_full_dataset = load_dataset('./datasets/ambig_qa', "full", split="train")
val_full_dataset = load_dataset('./datasets/ambig_qa', "full", split="validation")

for example in train_light_dataset:
    for i, t in enumerate(example['annotations']['type']):
        if t == 'singleAnswer':
            # use example['annotations']['answer'][i]
            # example['annotations']['qaPairs'][i] -> is []
            print(example['annotations']['answer'][i])
        else:
            # use example['annotations']['qaPairs'][i]
            # example['annotations']['answer'][i] -> is []
            print(example['annotations']['qaPairs'][i])
```

- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)

#1236 · Opus finlex dataset of language pair Finnish and Swedish
pull request · closed · 0 comments · labels: [] · by spatil6 · id 758,263,012
https://github.com/huggingface/datasets/pull/1236 · API: https://api.github.com/repos/huggingface/datasets/issues/1236
created 2020-12-07T07:53:57 · updated 2020-12-08T13:30:33 · closed 2020-12-08T13:30:33

Added the Opus_finlex dataset for the Finnish-Swedish language pair. More info: http://opus.nlpl.eu/Finlex.php

#1235 · Wino bias
pull request · closed · 1 comment · labels: [] · by akshayb7 · id 758,234,511
https://github.com/huggingface/datasets/pull/1235 · API: https://api.github.com/repos/huggingface/datasets/issues/1235
created 2020-12-07T07:12:42 · updated 2020-12-10T20:48:12 · closed 2020-12-10T20:48:01

The PR will fail CircleCI tests because of the requirement of manual loading of data. Fresh PR because of the messed-up history of the previous one.

#1234 · Added ade_corpus_v2, with 3 configs for relation extraction and classification task
pull request · closed · 3 comments · labels: [] · by Nilanshrajput · id 758,229,304
https://github.com/huggingface/datasets/pull/1234 · API: https://api.github.com/repos/huggingface/datasets/issues/1234
created 2020-12-07T07:05:14 · updated 2020-12-14T17:49:14 · closed 2020-12-14T17:49:14

Adverse Drug Reaction Data: ADE-Corpus-V2. Added configs for the different tasks supported by the given data.

#1233 · Add Curiosity Dialogs Dataset
pull request · closed · 2 comments · labels: [] · by vineeths96 · id 758,188,699
https://github.com/huggingface/datasets/pull/1233 · API: https://api.github.com/repos/huggingface/datasets/issues/1233
created 2020-12-07T06:01:00 · updated 2020-12-20T13:34:09 · closed 2020-12-09T14:50:29

Add Facebook [Curiosity Dialogs](https://github.com/facebookresearch/curiosity) Dataset.

#1232 · Add Grail QA dataset
pull request · closed · 0 comments · labels: [] · by mattbui · id 758,180,669
https://github.com/huggingface/datasets/pull/1232 · API: https://api.github.com/repos/huggingface/datasets/issues/1232
created 2020-12-07T05:46:45 · updated 2020-12-08T13:03:19 · closed 2020-12-08T13:03:19

For more information: https://dki-lab.github.io/GrailQA/

#1231 · Add Urdu Sentiment Corpus (USC)
pull request · closed · 0 comments · labels: [] · by chaitnayabasava · id 758,121,398
https://github.com/huggingface/datasets/pull/1231 · API: https://api.github.com/repos/huggingface/datasets/issues/1231
created 2020-12-07T03:25:20 · updated 2020-12-07T18:05:16 · closed 2020-12-07T16:43:23

@lhoestq Opened a clean PR containing only the relevant files. Old PR: #1140

#1230 · Add Urdu fake news dataset
pull request · closed · 1 comment · labels: [] · by chaitnayabasava · id 758,119,342
https://github.com/huggingface/datasets/pull/1230 · API: https://api.github.com/repos/huggingface/datasets/issues/1230
created 2020-12-07T03:19:50 · updated 2020-12-07T18:04:55 · closed 2020-12-07T16:57:54

@lhoestq Opened a clean PR containing only the relevant files. Old PR: #1125

#1229 · Muchocine - Spanish movie reviews dataset
pull request · closed · 4 comments · labels: [] · by mapmeld · id 758,100,707
https://github.com/huggingface/datasets/pull/1229 · API: https://api.github.com/repos/huggingface/datasets/issues/1229
created 2020-12-07T02:23:29 · updated 2020-12-21T10:09:09 · closed 2020-12-21T10:09:09

#1228 · add opus_100 dataset
pull request · closed · 1 comment · labels: [] · by thevasudevgupta · id 758,049,068
https://github.com/huggingface/datasets/pull/1228 · API: https://api.github.com/repos/huggingface/datasets/issues/1228
created 2020-12-06T23:17:24 · updated 2020-12-09T14:54:00 · closed 2020-12-09T14:54:00

This PR will add the [opus100 dataset](http://opus.nlpl.eu/opus-100.php).

#1227 · readme: remove link to Google's responsible AI practices
pull request · closed · 0 comments · labels: [] · by stefan-it · id 758,049,060
https://github.com/huggingface/datasets/pull/1227 · API: https://api.github.com/repos/huggingface/datasets/issues/1227
created 2020-12-06T23:17:22 · updated 2020-12-07T08:35:19 · closed 2020-12-06T23:20:41

...maybe we'll find a company that really stands behind responsible AI practices ;)

#1226 · Add menyo_20k_mt dataset
pull request · closed · 2 comments · labels: [] · by yvonnegitau · id 758,036,979
https://github.com/huggingface/datasets/pull/1226 · API: https://api.github.com/repos/huggingface/datasets/issues/1226
created 2020-12-06T22:16:15 · updated 2020-12-10T19:22:14 · closed 2020-12-10T19:22:14

Add menyo_20k_mt dataset

#1225 · Add Winobias dataset
pull request · closed · 1 comment · labels: [] · by akshayb7 · id 758,035,501
https://github.com/huggingface/datasets/pull/1225 · API: https://api.github.com/repos/huggingface/datasets/issues/1225
created 2020-12-06T22:08:20 · updated 2020-12-07T06:45:59 · closed 2020-12-07T06:40:50

Pardon me for the different commits with the same message. There were conflicts after I rebased master while simultaneously pushing my changes to the local repo, hence the duplicate entries.

#1224 · adding conceptnet5
pull request · closed · 11 comments · labels: [] · by huu4ontocord · id 758,022,998
https://github.com/huggingface/datasets/pull/1224 · API: https://api.github.com/repos/huggingface/datasets/issues/1224
created 2020-12-06T21:06:53 · updated 2020-12-09T16:38:16 · closed 2020-12-09T14:37:17

Adding the conceptnet5 and omcs txt files used to create the conceptnet5 dataset. ConceptNet5 is a common sense dataset. More info can be found here: https://github.com/commonsense/conceptnet5/wiki

#1223 · 🇸🇪 Added Swedish Reviews dataset for sentiment classification in Sw…
pull request · closed · 0 comments · labels: [] · by timpal0l · id 758,022,208
https://github.com/huggingface/datasets/pull/1223 · API: https://api.github.com/repos/huggingface/datasets/issues/1223
created 2020-12-06T21:02:54 · updated 2020-12-08T10:54:56 · closed 2020-12-08T10:54:56

perhaps: @lhoestq 🤗

#1222 · Add numeric fused head dataset
pull request · closed · 2 comments · labels: [] · by ghomasHudson · id 758,018,953
https://github.com/huggingface/datasets/pull/1222 · API: https://api.github.com/repos/huggingface/datasets/issues/1222
created 2020-12-06T20:46:53 · updated 2020-12-08T11:17:56 · closed 2020-12-08T11:17:55

Adding the [NFH: Numeric Fused Head](https://nlp.biu.ac.il/~lazary/fh/) dataset. Everything looks sensible and I've included both the identification and resolution tasks. I haven't personally used this dataset in my research, so am unable to specify what the default configuration / supervised keys should be. I've filled out the basic info on the model card to the best of my knowledge, but it's a little tricky to understand exactly what the fields represent. Dataset author: @yanaiela

#1221 · Add HKCanCor
pull request · closed · 0 comments · labels: [] · by j-chim · id 758,016,032
https://github.com/huggingface/datasets/pull/1221 · API: https://api.github.com/repos/huggingface/datasets/issues/1221
created 2020-12-06T20:32:07 · updated 2020-12-09T16:34:18 · closed 2020-12-09T16:34:18

This PR adds the [Hong Kong Cantonese Corpus](http://compling.hss.ntu.edu.sg/hkcancor/), by [Luke and Wong 2015](http://compling.hss.ntu.edu.sg/hkcancor/data/LukeWong_Hong-Kong-Cantonese-Corpus.pdf). The dummy data included here was manually created, as the original dataset uses an XML-like format (see a copy hosted [here](https://github.com/fcbond/hkcancor/blob/master/sample/d1_v.txt) for example) that requires a few processing steps.

#1220 · add Korean HateSpeech dataset
pull request · closed · 5 comments · labels: [] · by stevhliu · id 758,015,894
https://github.com/huggingface/datasets/pull/1220 · API: https://api.github.com/repos/huggingface/datasets/issues/1220
created 2020-12-06T20:31:29 · updated 2020-12-08T15:21:09 · closed 2020-12-08T11:05:42

#1219 · Add Korean NER dataset
pull request · closed · 0 comments · labels: [] · by jaketae · id 758,013,368
https://github.com/huggingface/datasets/pull/1219 · API: https://api.github.com/repos/huggingface/datasets/issues/1219
created 2020-12-06T20:19:06 · updated 2021-12-29T00:50:59 · closed 2020-12-08T10:25:33

Supersedes #1177

> This PR adds the [Korean named entity recognition dataset](https://github.com/kmounlp/NER). This dataset has been used in many downstream tasks, such as training [KoBERT](https://github.com/SKTBrain/KoBERT) for NER, as seen in this [KoBERT-CRF implementation](https://github.com/eagle705/pytorch-bert-crf-ner).

#1218 · Add WMT20 MLQE 3 shared tasks
pull request · closed · 3 comments · labels: [] · by VictorSanh · id 758,009,113
https://github.com/huggingface/datasets/pull/1218 · API: https://api.github.com/repos/huggingface/datasets/issues/1218
created 2020-12-06T19:59:12 · updated 2020-12-15T15:27:30 · closed 2020-12-15T15:27:29

3 tasks for the WMT20 MLQE shared tasks -> 3 different datasets (I re-created #1137 because it was too messy). Note that at L199 of `task3.py`, I used `logging.warning` to print some missing data in the train set.

#1217 · adding DataCommons fact checking
pull request · closed · 0 comments · labels: [] · by yjernite · id 758,008,321
https://github.com/huggingface/datasets/pull/1217 · API: https://api.github.com/repos/huggingface/datasets/issues/1217
created 2020-12-06T19:56:12 · updated 2020-12-16T16:22:48 · closed 2020-12-16T16:22:48

Adding the data from: https://datacommons.org/factcheck/. Had to cheat a bit with the dummy data, as the test doesn't recognize `.txt.gz`: I had to manually rename the uncompressed files with the `.gz` extension, without actually compressing them.

#1216 · Add limit
pull request · closed · 1 comment · labels: [] · by j-chim · id 758,005,982
https://github.com/huggingface/datasets/pull/1216 · API: https://api.github.com/repos/huggingface/datasets/issues/1216
created 2020-12-06T19:46:18 · updated 2020-12-08T07:52:11 · closed 2020-12-08T07:52:11

This PR adds [LiMiT](https://github.com/ilmgut/limit_dataset), a dataset for literal motion classification/extraction by [Manotas et al., 2020](https://www.aclweb.org/anthology/2020.findings-emnlp.88.pdf).

#1215 · Add irc disentanglement
pull request · closed · 2 comments · labels: [] · by dhruvjoshi1998 · id 758,002,885
https://github.com/huggingface/datasets/pull/1215 · API: https://api.github.com/repos/huggingface/datasets/issues/1215
created 2020-12-06T19:30:46 · updated 2020-12-16T16:18:25 · closed 2020-12-16T16:18:25

Added files for the IRC disentanglement dataset. I was unable to test the dummy data as a result of VPN/proxy issues.

#1214 · adding medical-questions-pairs dataset
pull request · closed · 0 comments · labels: [] · by tuner007 · id 758,002,786
https://github.com/huggingface/datasets/pull/1214 · API: https://api.github.com/repos/huggingface/datasets/issues/1214
created 2020-12-06T19:30:12 · updated 2020-12-09T14:42:53 · closed 2020-12-09T14:42:53

This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors.
Dataset: https://github.com/curai/medical-question-pair-dataset
Paper: https://drive.google.com/file/d/1CHPGBXkvZuZc8hpr46HeHU6U6jnVze-s/view

#1213 · add taskmaster3
pull request · closed · 2 comments · labels: [] · by patil-suraj · id 757,983,884
https://github.com/huggingface/datasets/pull/1213 · API: https://api.github.com/repos/huggingface/datasets/issues/1213
created 2020-12-06T17:56:03 · updated 2020-12-09T11:05:10 · closed 2020-12-09T11:00:29

Adding the Taskmaster-3 dataset: https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020. The dataset structure is almost the same as the original dataset, with these two changes:

1. In the original dataset, each `apis` has an `args` field, which is a `dict` with variable keys representing the name and value of the args. Here that is converted to a `list` of `dict` with keys `arg_name` and `arg_value`. For example:
```python
args = {"name.movie": "Mulan", "name.theater": "Mountain AMC 16"}
```
becomes
```python
[
    {"arg_name": "name.movie", "arg_value": "Mulan"},
    {"arg_name": "name.theater", "arg_value": "Mountain AMC 16"}
]
```
2. Each `apis` has a `response`, which is also a `dict` with variable keys representing the response name/type and its value. As above, it is converted to a `list` of `dict` with keys `response_name` and `response_value`.

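The dict-to-list flattening described in point 1 is straightforward to express in code; a minimal sketch (helper name hypothetical, not part of the PR):

```python
def dict_to_kv_list(d, key_field="arg_name", value_field="arg_value"):
    """Flatten a variable-key dict into a list of fixed-schema records,
    so Arrow can assign the column a single consistent type."""
    return [{key_field: k, value_field: v} for k, v in (d or {}).items()]

args = {"name.movie": "Mulan", "name.theater": "Mountain AMC 16"}
print(dict_to_kv_list(args))
# [{'arg_name': 'name.movie', 'arg_value': 'Mulan'},
#  {'arg_name': 'name.theater', 'arg_value': 'Mountain AMC 16'}]
```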
#1212 · Add Sanskrit Classic texts in datasets
pull request · closed · 1 comment · labels: [] · by parmarsuraj99 · id 757,978,795
https://github.com/huggingface/datasets/pull/1212 · API: https://api.github.com/repos/huggingface/datasets/issues/1212
created 2020-12-06T17:31:31 · updated 2020-12-07T19:04:08 · closed 2020-12-07T19:04:08

#1211 · Add large spanish corpus
pull request · closed · 0 comments · labels: [] · by lewtun · id 757,973,719
https://github.com/huggingface/datasets/pull/1211 · API: https://api.github.com/repos/huggingface/datasets/issues/1211
created 2020-12-06T17:06:50 · updated 2020-12-09T13:36:36 · closed 2020-12-09T13:36:36

Adds a collection of Spanish corpora that can be useful for pretraining language models. Following a nice suggestion from @yjernite, we provide the user with three main ways to preprocess / load either:

* the whole corpus (17GB!)
* one specific sub-corpus
* the whole corpus, but returned as a single split; this is useful if you want to cache the whole preprocessing step once and interact with individual sub-corpora

See the dataset card for more details. Ready for review!

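For illustration, the three loading modes might look like this — a sketch only; the config names here are hypothetical, check the dataset card for the real ones:

```python
from datasets import load_dataset

# 1. the whole corpus (17 GB!)
everything = load_dataset("large_spanish_corpus")

# 2. one specific sub-corpus (config name hypothetical)
sub = load_dataset("large_spanish_corpus", "DGT")

# 3. the whole corpus as a single split, cached once
combined = load_dataset("large_spanish_corpus", "combined", split="train")
```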
#1210 · Add XSUM Hallucination Annotations Dataset
pull request · closed · 1 comment · labels: [] · by vineeths96 · id 757,966,959
https://github.com/huggingface/datasets/pull/1210 · API: https://api.github.com/repos/huggingface/datasets/issues/1210
created 2020-12-06T16:40:19 · updated 2020-12-20T13:34:56 · closed 2020-12-16T16:57:11

Adding Google [XSum Hallucination Annotations](https://github.com/google-research-datasets/xsum_hallucination_annotations) dataset.

#1209 · [AfriBooms] Dataset exists already
pull request · closed · 2 comments · labels: [] · by patrickvonplaten · id 757,965,934
https://github.com/huggingface/datasets/pull/1209 · API: https://api.github.com/repos/huggingface/datasets/issues/1209
created 2020-12-06T16:35:13 · updated 2020-12-07T16:52:24 · closed 2020-12-07T16:52:23

When trying to add "AfriBooms" (https://docs.google.com/spreadsheets/d/12ShVow0M6RavnzbBEabm5j5dv12zBaf0y-niwEPPlo4/edit#gid=1386399609), I noticed that the dataset exists already as a config of Universal Dependencies (universal_dependencies.py). I checked, and the data exactly matches, so the new data link does not give any new data. This PR improves the config's description a bit by linking to the paper.

#1208 · Add HKCanCor
pull request · closed · 0 comments · labels: [] · by j-chim · id 757,961,368
https://github.com/huggingface/datasets/pull/1208 · API: https://api.github.com/repos/huggingface/datasets/issues/1208
created 2020-12-06T16:14:43 · updated 2020-12-06T20:23:17 · closed 2020-12-06T20:21:54

(Apologies, didn't manage the branches properly and the PR got too messy. Going to open a new PR with everything in order.)

#1207 · Add msr_genomics_kbcomp Dataset
pull request · closed · 0 comments · labels: [] · by manandey · id 757,953,830
https://github.com/huggingface/datasets/pull/1207 · API: https://api.github.com/repos/huggingface/datasets/issues/1207
created 2020-12-06T15:40:05 · updated 2020-12-07T15:55:17 · closed 2020-12-07T15:55:11

#1206 · Adding Enriched WebNLG dataset
pull request · closed · 3 comments · labels: [] · by TevenLeScao · id 757,952,992
https://github.com/huggingface/datasets/pull/1206 · API: https://api.github.com/repos/huggingface/datasets/issues/1206
created 2020-12-06T15:36:20 · updated 2023-09-24T09:51:43 · closed 2020-12-09T09:40:32

This pull request adds the `en` and `de` versions of the [Enriched WebNLG](https://github.com/ThiagoCF05/webnlg) dataset.

#1205 · add lst20 with manual download
pull request · closed · 2 comments · labels: [] · by cstorm125 · id 757,942,403
https://github.com/huggingface/datasets/pull/1205 · API: https://api.github.com/repos/huggingface/datasets/issues/1205
created 2020-12-06T14:49:10 · updated 2020-12-09T16:33:10 · closed 2020-12-09T16:33:10

passed on local:
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_lst20
```
Not sure how to test:
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lst20
```
```
LST20 Corpus is a dataset for Thai language processing developed by National Electronics
and Computer Technology Center (NECTEC), Thailand. It offers five layers of linguistic
annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence
boundaries. At a large scale, it consists of 3,164,002 words, 288,020 named entities,
248,181 clauses, and 74,180 sentences, while it is annotated with 16 distinct POS tags.
All 3,745 documents are also annotated with one of 15 news genres. Regarding its sheer
size, this dataset is considered large enough for developing joint neural models for NLP.
Manually download at https://aiforthai.in.th/corpus.php
```

#1204 · adding meta_woz dataset
pull request · closed · 0 comments · labels: [] · by pacman100 · id 757,939,475
https://github.com/huggingface/datasets/pull/1204 · API: https://api.github.com/repos/huggingface/datasets/issues/1204
created 2020-12-06T14:34:13 · updated 2020-12-16T15:05:25 · closed 2020-12-16T15:05:24

#1203 · Add Neural Code Search Dataset
pull request · closed · 3 comments · labels: [] · by vinaykudari · id 757,935,170
https://github.com/huggingface/datasets/pull/1203 · API: https://api.github.com/repos/huggingface/datasets/issues/1203
created 2020-12-06T14:12:39 · updated 2020-12-09T16:40:15 · closed 2020-12-09T16:40:15

#1202 · Medical question pairs
pull request · closed · 0 comments · labels: [] · by tuner007 · id 757,934,408
https://github.com/huggingface/datasets/pull/1202 · API: https://api.github.com/repos/huggingface/datasets/issues/1202
created 2020-12-06T14:09:07 · updated 2020-12-06T17:41:28 · closed 2020-12-06T17:41:28

This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors.
Dataset: https://github.com/curai/medical-question-pair-dataset
Paper: https://drive.google.com/file/d/1CHPGBXkvZuZc8hpr46HeHU6U6jnVze-s/view

**No splits added**

#1201 · adding medical-questions-pairs
pull request · closed · 0 comments · labels: [] · by tuner007 · id 757,927,941
https://github.com/huggingface/datasets/pull/1201 · API: https://api.github.com/repos/huggingface/datasets/issues/1201
created 2020-12-06T13:36:52 · updated 2020-12-06T13:39:44 · closed 2020-12-06T13:39:32

#1200 · Update ADD_NEW_DATASET.md
pull request · closed · 0 comments · labels: [] · by BramVanroy · id 757,926,823
https://github.com/huggingface/datasets/pull/1200 · API: https://api.github.com/repos/huggingface/datasets/issues/1200
created 2020-12-06T13:31:32 · updated 2020-12-07T08:32:39 · closed 2020-12-07T08:32:39

Windows needs special treatment again: unfortunately, adding `torch` to the requirements does not work well (it crashes the installation). Users should first install torch manually and then continue with the other commands. This issue arises all the time when adding torch as a dependency, but because so many novice users seem to participate in adding datasets, it may be useful to add an explicit note for Windows users to ensure that they do not run into issues.

#1199 · Turkish NER dataset, script works fine, couldn't generate dummy data
pull request · closed · 2 comments · labels: [] · by merveenoyan · id 757,909,237
https://github.com/huggingface/datasets/pull/1199 · API: https://api.github.com/repos/huggingface/datasets/issues/1199
created 2020-12-06T12:00:03 · updated 2020-12-16T16:13:24 · closed 2020-12-16T16:13:24

I've written the script (Turkish_NER.py) that includes the dataset. The dataset is a zip inside another zip, and it is extracted as a .DUMP file. However, after preprocessing I only get an .arrow file. After I ran the script with no error messages, I got the .arrow file of the dataset, the LICENSE, and dataset_info.json.

#1198 · Add ALT
pull request · closed · 3 comments · labels: [] · by chameleonTK · id 757,903,453
https://github.com/huggingface/datasets/pull/1198 · API: https://api.github.com/repos/huggingface/datasets/issues/1198
created 2020-12-06T11:25:30 · updated 2020-12-10T04:18:12 · closed 2020-12-10T04:18:12

ALT dataset -- https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/

#1197 · add taskmaster-2
pull request · closed · 0 comments · labels: [] · by patil-suraj · id 757,900,160
https://github.com/huggingface/datasets/pull/1197 · API: https://api.github.com/repos/huggingface/datasets/issues/1197
created 2020-12-06T11:05:18 · updated 2020-12-07T15:22:43 · closed 2020-12-07T15:22:43

Adding the taskmaster-2 dataset. https://github.com/google-research-datasets/Taskmaster/tree/master/TM-2-2020

#1196 · Add IWSLT'15 English-Vietnamese machine translation Data
pull request · closed · 2 comments · labels: [] · by Nilanshrajput · id 757,894,920
https://github.com/huggingface/datasets/pull/1196 · API: https://api.github.com/repos/huggingface/datasets/issues/1196
created 2020-12-06T10:36:31 · updated 2020-12-11T18:26:51 · closed 2020-12-11T18:26:51

Preprocessed dataset for IWSLT'15 English-Vietnamese machine translation, from https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/

#1195 · addition of py_ast
pull request · closed · 5 comments · labels: [] · by reshinthadithyan · id 757,889,045
https://github.com/huggingface/datasets/pull/1195 · API: https://api.github.com/repos/huggingface/datasets/issues/1195
created 2020-12-06T10:00:52 · updated 2020-12-08T06:19:24 · closed 2020-12-08T06:19:24

The dataset consists of parsed ASTs that were used to train and evaluate the DeepSyn tool. The Python programs are collected from GitHub repositories by removing duplicate files, removing project forks (copies of other existing repositories), and keeping only programs that parse and have at most 30,000 nodes in the AST. We also aim to remove obfuscated files.

#1194 · Add msr_text_compression
pull request · closed · 1 comment · labels: [] · by jeromeku · id 757,880,647
https://github.com/huggingface/datasets/pull/1194 · API: https://api.github.com/repos/huggingface/datasets/issues/1194
created 2020-12-06T09:06:11 · updated 2020-12-09T10:53:45 · closed 2020-12-09T10:53:45

Add [MSR Abstractive Text Compression Dataset](https://msropendata.com/datasets/f8ce2ec9-7fbd-48f7-a8bb-2d2279373563)

#1193 · add taskmaster-1
pull request · closed · 0 comments · labels: [] · by patil-suraj · id 757,840,830
https://github.com/huggingface/datasets/pull/1193 · API: https://api.github.com/repos/huggingface/datasets/issues/1193
created 2020-12-06T04:09:57 · updated 2020-12-07T15:23:24 · closed 2020-12-07T15:08:39

Adding the Taskmaster-1 dataset. https://github.com/google-research-datasets/Taskmaster/tree/master/TM-1-2019

#1192 · Add NewsPH_NLI dataset
pull request · closed · 0 comments · labels: [] · by anaerobeth · id 757,839,671
https://github.com/huggingface/datasets/pull/1192 · API: https://api.github.com/repos/huggingface/datasets/issues/1192
created 2020-12-06T04:00:31 · updated 2020-12-07T15:39:43 · closed 2020-12-07T15:39:43

This PR adds the NewsPH-NLI Dataset, the first benchmark dataset for sentence entailment in the low-resource Filipino language. Constructed by exploiting the structure of news articles. Contains 600,000 premise-hypothesis pairs, in a 70-15-15 split for training, validation, and testing.
Link to the paper: https://arxiv.org/pdf/2010.11574.pdf
Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks

#1191 · Added Translator Human Parity Data For a Chinese-English news transla…
pull request · closed · 5 comments · labels: [] · by leoxzhao · id 757,836,654
https://github.com/huggingface/datasets/pull/1191 · API: https://api.github.com/repos/huggingface/datasets/issues/1191
created 2020-12-06T03:34:13 · updated 2020-12-09T13:22:45 · closed 2020-12-09T13:22:45

…tion system from Open dataset list for Dataset sprint, Microsoft Datasets tab.

#1190 · Add Fake News Detection in Filipino dataset
pull request · closed · 2 comments · labels: [] · by anaerobeth · id 757,833,698
https://github.com/huggingface/datasets/pull/1190 · API: https://api.github.com/repos/huggingface/datasets/issues/1190
created 2020-12-06T03:12:15 · updated 2020-12-07T15:39:27 · closed 2020-12-07T15:39:27

This PR adds the Fake News Filipino Dataset, a low-resource fake news detection corpus in Filipino. Contains 3,206 expertly-labeled news samples, half of which are real and half of which are fake.
Link to the paper: http://www.lrec-conf.org/proceedings/lrec2020/index.html
Link to the dataset/repo: https://github.com/jcblaisecruz02/Tagalog-fake-news

#1189 · Add Dengue dataset in Filipino
pull request · closed · 0 comments · labels: [] · by anaerobeth · id 757,831,035
https://github.com/huggingface/datasets/pull/1189 · API: https://api.github.com/repos/huggingface/datasets/issues/1189
created 2020-12-06T02:50:47 · updated 2020-12-07T15:38:58 · closed 2020-12-07T15:38:58

This PR adds the Dengue Dataset, a benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled with five classes. Each sample can be part of multiple classes. Collected as tweets.
Link to the paper: https://ieeexplore.ieee.org/document/8459963
Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks

#1188 · adding hind_encorp dataset
pull request · closed · 13 comments · labels: [] · by rahul-art · id 757,827,407
https://github.com/huggingface/datasets/pull/1188 · API: https://api.github.com/repos/huggingface/datasets/issues/1188
created 2020-12-06T02:18:45 · updated 2020-12-11T17:40:41 · closed 2020-12-11T17:40:41

Adding the Hindi_Encorp05 dataset.

#1187 · Added AQUA-RAT (Algebra Question Answering with Rationales) Dataset
pull request · closed · 1 comment · labels: [] · by arkhalid · id 757,826,707
https://github.com/huggingface/datasets/pull/1187 · API: https://api.github.com/repos/huggingface/datasets/issues/1187
created 2020-12-06T02:12:52 · updated 2020-12-07T15:37:12 · closed 2020-12-07T15:37:12

#1186 · all test passed
pull request · closed · 1 comment · labels: [] · by rahul-art · id 757,826,660
https://github.com/huggingface/datasets/pull/1186 · API: https://api.github.com/repos/huggingface/datasets/issues/1186
created 2020-12-06T02:12:32 · updated 2020-12-07T15:06:55 · closed 2020-12-07T15:06:55

Need help creating dummy data.

#1185 · Add Hate Speech Dataset in Filipino
pull request · closed · 0 comments · labels: [] · by anaerobeth · id 757,825,413
https://github.com/huggingface/datasets/pull/1185 · API: https://api.github.com/repos/huggingface/datasets/issues/1185
created 2020-12-06T02:01:56 · updated 2020-12-07T15:35:33 · closed 2020-12-07T15:35:33

This PR adds the Hate Speech Dataset, a text classification dataset in Filipino, consisting of 10k tweets (training set) that are labeled as hate speech or non-hate speech. Released with 4,232 validation and 4,232 testing samples. Collected during the 2016 Philippine Presidential Elections.
Link to the paper: https://pcj.csp.org.ph/index.php/pcj/issue/download/29/PCJ%20V14%20N1%20pp1-14%202019
Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks

#1184 · Add Adversarial SQuAD dataset
pull request · closed · 5 comments · labels: [] · by cceyda · id 757,807,583
https://github.com/huggingface/datasets/pull/1184 · API: https://api.github.com/repos/huggingface/datasets/issues/1184
created 2020-12-05T23:51:57 · updated 2020-12-16T16:12:58 · closed 2020-12-16T16:12:58

# Adversarial SQuAD

Adding the Adversarial [SQuAD](https://github.com/robinjia/adversarial-squad) dataset as part of the sprint 🎉

This dataset adds adversarial sentences to a subset of the SQuAD dataset's dev examples. How to get the original squad example id is explained in readme -> Data Instances. The whole data is intended for use in evaluation (which could of course also be used for training if one wants), so there is no classical train/val/test split, but a split based on the number of adversaries added. There are 2 splits of this dataset:

- AddSent: has up to five candidate adversarial sentences that don't answer the question, but have a lot of words in common with the question. This adversary does not query the model in any way.
- AddOneSent: similar to AddSent, but just one candidate sentence was picked at random. This adversary does not query the model in any way.

(The AddAny and AddCommon datasets mentioned in the paper are dynamically generated based on the model's output distribution and thus are not included here.)

The failing tests look like some unrelated timeout thing; they will probably clear if rerun.

- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)

#1183 · add mkb dataset
pull request · closed · 3 comments · labels: [] · by thevasudevgupta · id 757,806,570
https://github.com/huggingface/datasets/pull/1183 · API: https://api.github.com/repos/huggingface/datasets/issues/1183
created 2020-12-05T23:44:33 · updated 2020-12-09T09:38:50 · closed 2020-12-09T09:38:50

This PR will add the Mann Ki Baat dataset (parallel data for Indian languages).

#1182 · ADD COVID-QA dataset
pull request · closed · 2 comments · labels: [] · by olinguyen · id 757,804,877
https://github.com/huggingface/datasets/pull/1182 · API: https://api.github.com/repos/huggingface/datasets/issues/1182
created 2020-12-05T23:31:56 · updated 2020-12-28T13:23:14 · closed 2020-12-07T14:23:27

This PR adds the COVID-QA dataset, a question answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19.
Link to the paper: https://openreview.net/forum?id=JENSKEEzsoU
Link to the dataset/repo: https://github.com/deepset-ai/COVID-QA

#1181 · added emotions detection in arabic dataset
pull request · closed · 3 comments · labels: [] · by abdulelahsm · id 757,791,992
https://github.com/huggingface/datasets/pull/1181 · API: https://api.github.com/repos/huggingface/datasets/issues/1181
created 2020-12-05T22:08:46 · updated 2020-12-21T09:53:51 · closed 2020-12-21T09:53:51

Dataset for emotion detection in Arabic text. More info: https://github.com/AmrMehasseb/Emotional-Tone

#1180 · Add KorQuAD v2 Dataset
pull request · closed · 3 comments · labels: [] · by cceyda · id 757,784,612
https://github.com/huggingface/datasets/pull/1180 · API: https://api.github.com/repos/huggingface/datasets/issues/1180
created 2020-12-05T21:33:34 · updated 2020-12-16T16:10:30 · closed 2020-12-16T16:10:30

# The Korean Question Answering Dataset v2

Adding the [KorQuAD](https://korquad.github.io/) v2 dataset as part of the sprint 🎉

This dataset is very similar to SQuAD and is an extension of [squad_kor_v1](https://github.com/huggingface/datasets/pull/1178), which is why I added it as `squad_kor_v2`.

- Crowd-generated questions and answers (one answer per question) for Wikipedia articles. Differently from v1, it includes the html structure and markup, which makes it a different enough dataset (it doesn't share ids between v1 and v2 either).

- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)

Edit: 🤦 looks like the squad_kor_v1 commit sneaked in here too

#1179 · Small update to the doc: add flatten_indices in doc
pull request · closed · 0 comments · labels: [] · by thomwolf · id 757,784,074
https://github.com/huggingface/datasets/pull/1179 · API: https://api.github.com/repos/huggingface/datasets/issues/1179
created 2020-12-05T21:30:10 · updated 2020-12-07T13:42:57 · closed 2020-12-07T13:42:56

Small update to the doc: add flatten_indices in doc

#1178 · Add KorQuAD v1 Dataset
pull request · closed · 0 comments · labels: [] · by cceyda · id 757,783,435
https://github.com/huggingface/datasets/pull/1178 · API: https://api.github.com/repos/huggingface/datasets/issues/1178
created 2020-12-05T21:25:46 · updated 2020-12-07T13:41:37 · closed 2020-12-07T13:41:37

# The Korean Question Answering Dataset

Adding the [KorQuAD](https://korquad.github.io/KorQuad%201.0/) v1 dataset as part of the sprint 🎉

This dataset is very similar to SQuAD, which is why I added it as `squad_kor_v1`. There is also a v2, which I added [here](https://github.com/huggingface/datasets/pull/1180).

- Crowd-generated questions and answers (one answer per question) for Wikipedia articles.

- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)

#1177 · Add Korean NER dataset
pull request · closed · 1 comment · labels: [] · by jaketae · id 757,778,684
https://github.com/huggingface/datasets/pull/1177 · API: https://api.github.com/repos/huggingface/datasets/issues/1177
created 2020-12-05T20:56:00 · updated 2020-12-06T20:19:48 · closed 2020-12-06T20:19:48

This PR adds the [Korean named entity recognition dataset](https://github.com/kmounlp/NER). This dataset has been used in many downstream tasks, such as training [KoBERT](https://github.com/SKTBrain/KoBERT) for NER, as seen in this [KoBERT-CRF implementation](https://github.com/eagle705/pytorch-bert-crf-ner).

#1176 · Add OpenPI Dataset
pull request · closed · 14 comments · labels: ["dataset contribution"] · by bharatr21 · id 757,778,365
https://github.com/huggingface/datasets/pull/1176 · API: https://api.github.com/repos/huggingface/datasets/issues/1176
created 2020-12-05T20:54:06 · updated 2022-10-03T09:39:54 · closed 2022-10-03T09:39:54

Add the OpenPI Dataset by AI2 (AllenAI)

#1175 · added ReDial dataset
pull request · closed · 1 comment · labels: [] · by bhavitvyamalik · id 757,770,077
https://github.com/huggingface/datasets/pull/1175 · API: https://api.github.com/repos/huggingface/datasets/issues/1175
created 2020-12-05T20:04:18 · updated 2020-12-07T13:21:43 · closed 2020-12-07T13:21:43

Updating the README. Dataset link: https://redialdata.github.io/website/datasheet

#1174 · Add Universal Morphologies
pull request · closed · 2 comments · labels: [] · by yjernite · id 757,768,474
https://github.com/huggingface/datasets/pull/1174 · API: https://api.github.com/repos/huggingface/datasets/issues/1174
created 2020-12-05T19:54:43 · updated 2021-01-26T16:50:16 · closed 2021-01-26T16:41:48

Adding UniMorph universal morphology annotations for 110 languages, pfew!!! One lemma per row, with all possible forms and annotations. https://unimorph.github.io/

#1173 · add wikipedia biography dataset
pull request · closed · 7 comments · labels: [] · by alejandrocros · id 757,761,967
https://github.com/huggingface/datasets/pull/1173 · API: https://api.github.com/repos/huggingface/datasets/issues/1173
created 2020-12-05T19:14:50 · updated 2020-12-07T11:13:14 · closed 2020-12-07T11:13:14

My first PR containing the Wikipedia biographies dataset. I have followed all the steps in the [guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). It passes all the tests.

#1172 · Add proto_qa dataset
pull request · closed · 1 comment · labels: [] · by bpatidar · id 757,758,532
https://github.com/huggingface/datasets/pull/1172 · API: https://api.github.com/repos/huggingface/datasets/issues/1172
created 2020-12-05T18:55:04 · updated 2020-12-07T11:12:24 · closed 2020-12-07T11:12:24

Added dataset tags as required.

#1171 · Add imdb Urdu Reviews dataset.
pull request · closed · 1 comment · labels: [] · by chaitnayabasava · id 757,757,000
https://github.com/huggingface/datasets/pull/1171 · API: https://api.github.com/repos/huggingface/datasets/issues/1171
created 2020-12-05T18:46:05 · updated 2020-12-07T11:11:17 · closed 2020-12-07T11:11:17

Added the imdb Urdu reviews dataset. More info about the dataset over [here](https://github.com/mirfan899/Urdu).

#1170 · Fix path handling for Windows
pull request · closed · 1 comment · labels: [] · by edugp · id 757,754,378
https://github.com/huggingface/datasets/pull/1170 · API: https://api.github.com/repos/huggingface/datasets/issues/1170
created 2020-12-05T18:31:54 · updated 2020-12-07T10:47:23 · closed 2020-12-07T10:47:23

#1169 · Add Opus fiskmo dataset for Finnish and Swedish for MT task
pull request · closed · 1 comment · labels: [] · by spatil6 · id 757,747,997
https://github.com/huggingface/datasets/pull/1169 · API: https://api.github.com/repos/huggingface/datasets/issues/1169
created 2020-12-05T17:56:55 · updated 2020-12-07T11:04:11 · closed 2020-12-07T11:04:11

Adding fiskmo, a massive parallel corpus for Finnish and Swedish. For more info: http://opus.nlpl.eu/fiskmo.php

#1168 · Add Naver sentiment movie corpus
pull request · closed · 1 comment · labels: [] · by jaketae · id 757,740,780
https://github.com/huggingface/datasets/pull/1168 · API: https://api.github.com/repos/huggingface/datasets/issues/1168
created 2020-12-05T17:25:23 · updated 2020-12-07T13:34:09 · closed 2020-12-07T13:34:09

This PR adds the [Naver sentiment movie corpus](https://github.com/e9t/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https://www.aclweb.org/anthology/2020.lrec-1.199.pdf).

#1167 · ❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders
issue · closed · 2 comments · labels: ["question", "generic discussion"] · by pietrolesci · id 757,722,921
https://github.com/huggingface/datasets/issues/1167 · API: https://api.github.com/repos/huggingface/datasets/issues/1167
created 2020-12-05T17:02:56 · updated 2023-07-20T15:49:42 · closed 2023-07-20T15:49:42

Hi there,

I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" post [here](https://huggingface.co/blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step".

I've tried coming up with a solution that would combine both `datasets` and `tokenizers`, but did not manage to find a good pattern. I guess the solution would entail wrapping a dataset into a Pytorch dataset. As a concrete example from the [docs](https://huggingface.co/transformers/custom_datasets.html):

```python
import torch

class SquadDataset(torch.utils.data.Dataset):
    def __init__(self, encodings):
        # instead of doing this beforehand, I'd like to do tokenization on the fly
        self.encodings = encodings

    def __getitem__(self, idx):
        return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}

    def __len__(self):
        return len(self.encodings.input_ids)

train_dataset = SquadDataset(train_encodings)
```

How would one implement this with "on-the-fly" tokenization, exploiting the vectorized capabilities of tokenizers?

----

Edit: I have come up with this solution. It does what I want, but I feel it's not very elegant:

```python
import torch
from torch.utils.data import BatchSampler, DataLoader, Dataset, SequentialSampler
from transformers import BertTokenizerFast

class CustomPytorchDataset(Dataset):
    def __init__(self):
        self.dataset = some_hf_dataset(...)
        self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

    def __getitem__(self, batch_idx):
        instance = self.dataset[text_col][batch_idx]
        tokenized_text = self.tokenizer(instance, truncation=True, padding=True)
        return tokenized_text

    def __len__(self):
        return len(self.dataset)

    @staticmethod
    def collate_fn(batch):
        # batch is a list, however it will always contain 1 item because we should not use the
        # batch_size argument as batch_size is controlled by the sampler
        return {k: torch.tensor(v) for k, v in batch[0].items()}

torch_ds = CustomPytorchDataset()

# NOTE: batch_sampler returns lists of integers and since here we have SequentialSampler
# it returns: [1, 2, 3], [4, 5, 6], etc. - check by calling `list(batch_sampler)`
batch_sampler = BatchSampler(SequentialSampler(torch_ds), batch_size=3, drop_last=True)

# NOTE: no `batch_size`, as now it is controlled by the sampler!
dl = DataLoader(dataset=torch_ds, sampler=batch_sampler, collate_fn=torch_ds.collate_fn)
```

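A lazier alternative within `datasets` itself is `Dataset.set_transform`, which runs a function on each batch at access time rather than as a preprocessing step. A minimal sketch (dataset and checkpoint names are illustrative):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("imdb", split="train")  # any dataset with a "text" column

def tokenize(batch):
    # called lazily on each accessed batch, not on the whole dataset up front
    return tokenizer(batch["text"], truncation=True, padding=True, return_tensors="pt")

ds.set_transform(tokenize)
print(ds[:4]["input_ids"].shape)  # tokenized on the fly
```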
#1166 · Opus montenegrinsubs
pull request · closed · 1 comment · labels: [] · by spatil6 · id 757,721,208
https://github.com/huggingface/datasets/pull/1166 · API: https://api.github.com/repos/huggingface/datasets/issues/1166
created 2020-12-05T17:00:44 · updated 2020-12-07T11:02:49 · closed 2020-12-07T11:02:49

Opus montenegrinsubs, language pair en-me. More info: http://opus.nlpl.eu/MontenegrinSubs.php

#1165 · Add ar rest reviews
pull request · closed · 8 comments · labels: [] · by abdulelahsm · id 757,720,226
https://github.com/huggingface/datasets/pull/1165 · API: https://api.github.com/repos/huggingface/datasets/issues/1165
created 2020-12-05T16:56:42 · updated 2020-12-21T17:06:23 · closed 2020-12-21T17:06:23

Added restaurant reviews in Arabic for sentiment analysis tasks.

#1164 · Add DaNe dataset
pull request · closed · 1 comment · labels: [] · by ophelielacroix · id 757,716,575
https://github.com/huggingface/datasets/pull/1164 · API: https://api.github.com/repos/huggingface/datasets/issues/1164
created 2020-12-05T16:36:50 · updated 2020-12-08T12:50:18 · closed 2020-12-08T12:49:55

#1163 · Added memat : Xhosa-English parallel corpora
pull request · closed · 2 comments · labels: [] · by spatil6 · id 757,711,340
https://github.com/huggingface/datasets/pull/1163 · API: https://api.github.com/repos/huggingface/datasets/issues/1163
created 2020-12-05T16:08:50 · updated 2020-12-07T10:40:24 · closed 2020-12-07T10:40:24

Added memat: Xhosa-English parallel corpora. For more info: http://opus.nlpl.eu/memat.php

#1162 · Add Mocha dataset
pull request · closed · 0 comments · labels: [] · by mattbui · id 757,707,085
https://github.com/huggingface/datasets/pull/1162 · API: https://api.github.com/repos/huggingface/datasets/issues/1162
created 2020-12-05T15:45:14 · updated 2020-12-07T10:09:39 · closed 2020-12-07T10:09:39

More information: https://allennlp.org/mocha

#1161 · Linguisticprobing
pull request · closed · 1 comment · labels: ["dataset contribution"] · by sileod · id 757,705,286
https://github.com/huggingface/datasets/pull/1161 · API: https://api.github.com/repos/huggingface/datasets/issues/1161
created 2020-12-05T15:35:18 · updated 2022-10-03T09:40:04 · closed 2022-10-03T09:40:04

Adding linguistic probing datasets from "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties": https://www.aclweb.org/anthology/P18-1198/