| author | cardData | citation | description | disabled | downloads | gated | id | lastModified | paperswithcode_id | private | sha | siblings | tags | readme_url | readme |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | null | @InProceedings{niklaus-etal-2021-swiss,
author = {Niklaus, Joel
and Chalkidis, Ilias
and Stürmer, Matthias},
title = {Swiss-Court-Predict: A Multilingual Legal Judgment Prediction Benchmark},
booktitle = {Proceedings of the 2021 Natural Legal Language Processing Workshop},
year =... | Swiss-Judgment-Prediction is a multilingual, diachronic dataset of 85K Swiss Federal Supreme Court (FSCS) cases annotated with the respective binarized judgment outcome (approval/dismissal), posing a challenging text classification task. We also provide additional metadata, i.e., the publication year, the legal area an... | false | 1,950 | false | swiss_judgment_prediction | 2022-11-03T16:32:23.000Z | null | false | b08ec8b47d0a4b8c3e36e36d06a7b0492a64f55c | [] | [
"arxiv:2110.00806",
"arxiv:2209.12325",
"annotations_creators:found",
"language_creators:found",
"language:de",
"language:fr",
"language:it",
"language:en",
"license:cc-by-sa-4.0",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text... | https://huggingface.co/datasets/swiss_judgment_prediction/resolve/main/README.md | ---
pretty_name: Swiss-Judgment-Prediction
annotations_creators:
- found
language_creators:
- found
language:
- de
- fr
- it
- en
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
tags:
- judgement-predic... |
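The `readme` cells above open with YAML front matter between `---` fences (truncated here). A minimal Python sketch of pulling such a block apart, handling only the flat `key:` / `- item` subset these cards use; the helper name and the abridged card text are illustrative, not part of the original table:

```python
def read_front_matter(readme_text):
    """Parse the flat 'key:' / '- item' YAML subset these dataset cards use.
    Not a general YAML parser -- just enough for the metadata shown above."""
    block = readme_text.split("---")[1]          # text between the first two '---' fences
    meta, key = {}, None
    for raw in block.splitlines():
        line = raw.strip()
        if not line:
            continue
        if line.startswith("- ") and key is not None:
            meta[key].append(line[2:].strip())   # list item under the current key
        elif ":" in line:
            key, _, value = line.partition(":")
            key = key.strip()
            if value.strip():                    # scalar value on the same line
                meta[key] = value.strip()
                key = None
            else:
                meta[key] = []                   # start of a list

    return meta

# Abridged card text, mirroring the Swiss-Judgment-Prediction row above.
card = """---
pretty_name: Swiss-Judgment-Prediction
language:
- de
- fr
- it
- en
license:
- cc-by-sa-4.0
---
# Dataset Card
"""

meta = read_front_matter(card)
# meta["language"] -> ["de", "fr", "it", "en"]
```

For anything beyond this flat subset, a real YAML parser such as PyYAML would be the safer choice.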
null | null | @inproceedings{2019TabFactA,
title={TabFact : A Large-scale Dataset for Table-based Fact Verification},
  author={Wenhu Chen and Hongmin Wang and Jianshu Chen and Yunkai Zhang and Hong Wang and Shiyang Li and Xiyou Zhou and William Yang Wang},
booktitle = {International Conference on Learning Representations (ICLR)},
address = {Ad... | The problem of verifying whether a textual hypothesis holds the truth based on the given evidence, also known as fact verification, plays an important role in the study of natural language understanding and semantic representation. However, existing studies are restricted to dealing with unstructured textual evidence (... | false | 3,578 | false | tab_fact | 2022-11-03T16:32:39.000Z | tabfact | false | 45c5957bd8feb525cd77e5f5e580989546d17783 | [] | [
"arxiv:1909.02164",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:fact-checking"
] | https://huggingface.co/datasets/tab_fact/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
paperswithcode_id: tabfact
pretty_name: TabFact
dataset_... |
null | null | @inproceedings{chakravarthi-etal-2020-corpus,
title = "Corpus Creation for Sentiment Analysis in Code-Mixed {T}amil-{E}nglish Text",
author = "Chakravarthi, Bharathi Raja and
Muralidaran, Vigneshwaran and
Priyadharshini, Ruba and
McCrae, John Philip",
    booktitle = "Proceedings of the 1st... | The first gold-standard Tamil-English code-switched, sentiment-annotated corpus, containing 15,744 comment posts from YouTube (train: 11,335; validation: 1,260; test: 3,149). This makes it the largest general-domain sentiment dataset for this relatively low-resource language with the code-mixing phenomenon. The dataset cont... | false | 327 | false | tamilmixsentiment | 2022-11-03T16:07:53.000Z | null | false | 862046c7fdcc23007479c8516ade30a881ee2734 | [] | [
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language:en",
"language:ta",
"license:unknown",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:sentiment-classification"
] | https://huggingface.co/datasets/tamilmixsentiment/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
- ta
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: null
pretty_name:... |
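The `tags` cells flatten the same card metadata into `prefix:value` strings. A small sketch of regrouping such a list by prefix (the helper name is hypothetical; the sample values are taken from the tamilmixsentiment row above):

```python
from collections import defaultdict

def group_tags(tags):
    """Group flat 'prefix:value' dataset tags by prefix, splitting on the first colon only."""
    grouped = defaultdict(list)
    for tag in tags:
        prefix, _, value = tag.partition(":")
        grouped[prefix].append(value)
    return dict(grouped)

tags = [
    "annotations_creators:expert-generated",
    "language_creators:crowdsourced",
    "language:en",
    "language:ta",
    "license:unknown",
    "multilinguality:multilingual",
    "size_categories:10K<n<100K",
]

grouped = group_tags(tags)
# grouped["language"] -> ["en", "ta"]
```

`str.partition` splits only on the first colon, so values such as `10K<n<100K` survive intact.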
null | null | J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012) | This is a collection of Quran translations compiled by the Tanzil project.
The translations provided at this page are for non-commercial purposes only. If used otherwise, you need to obtain necessary permission from the translator or the publisher.
If you are using more than three of the following translations in a web... | false | 950 | false | tanzil | 2022-11-03T16:31:41.000Z | null | false | cf80f7db5d8b09252ff7c01c856acfa5a13c8822 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:am",
"language:ar",
"language:az",
"language:bg",
"language:bn",
"language:bs",
"language:cs",
"language:de",
"language:dv",
"language:en",
"language:es",
"language:fa",
"language:fr",
"language:ha",
"language:hi",
... | https://huggingface.co/datasets/tanzil/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- am
- ar
- az
- bg
- bn
- bs
- cs
- de
- dv
- en
- es
- fa
- fr
- ha
- hi
- id
- it
- ja
- ko
- ku
- ml
- ms
- nl
- 'no'
- pl
- pt
- ro
- ru
- sd
- so
- sq
- sv
- sw
- ta
- tg
- th
- tr
- tt
- ug
- ur
- uz
- zh
license:
- unknown
multilinguality:
-... |
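Every row carries a fixed-width (24-character) `lastModified` timestamp ending in `Z`. A stdlib-only sketch of parsing one (the helper name is illustrative; `datetime.fromisoformat` on older Python versions rejects the trailing `Z`, hence the replacement with an explicit offset):

```python
from datetime import datetime, timezone

def parse_last_modified(stamp):
    """Parse a 'lastModified' value like '2022-11-03T16:31:41.000Z' into an aware datetime."""
    return datetime.fromisoformat(stamp.replace("Z", "+00:00"))

ts = parse_last_modified("2022-11-03T16:31:41.000Z")
# ts.year -> 2022; ts.tzinfo compares equal to timezone.utc
```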
null | null | @dataset{scherrer_yves_2020_3707949,
author = {Scherrer, Yves},
title = {{TaPaCo: A Corpus of Sentential Paraphrases for 73 Languages}},
month = mar,
year = 2020,
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.3707949},
url = {https://d... | A freely available paraphrase corpus for 73 languages extracted from the Tatoeba database. Tatoeba is a crowdsourcing project mainly geared towards language learners. Its aim is to provide example sentences and translations for particular linguistic constructions and words. The paraphrase corpus is created by populatin... | false | 12,024 | false | tapaco | 2022-11-03T16:47:11.000Z | tapaco | false | fb939c2f45a647d598670267f5638b118c3574d3 | [] | [
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language:af",
"language:ar",
"language:az",
"language:be",
"language:ber",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:cbk",
"language:cmn",
"language:cs",
"language:da",
"language:de... | https://huggingface.co/datasets/tapaco/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- af
- ar
- az
- be
- ber
- bg
- bn
- br
- ca
- cbk
- cmn
- cs
- da
- de
- el
- en
- eo
- es
- et
- eu
- fi
- fr
- gl
- gos
- he
- hi
- hr
- hu
- hy
- ia
- id
- ie
- io
- is
- it
- ja
- jbo
- kab
- ko
- kw
- la
- lfn
- lt
- mk
- m... |
null | null | @article{zerrouki2017tashkeela,
title={Tashkeela: Novel corpus of Arabic vocalized texts, data for auto-diacritization systems},
author={Zerrouki, Taha and Balla, Amar},
journal={Data in brief},
volume={11},
pages={147},
year={2017},
publisher={Elsevier}
} | Arabic vocalized texts.
It contains 75 million fully vocalized words, drawn mainly from 97 books of classical and modern Arabic. | false | 321 | false | tashkeela | 2022-11-03T16:07:53.000Z | null | false | 8c3a388dcbcff57e0949d8dde6ddc4c566f63672 | [] | [
"annotations_creators:no-annotation",
"language_creators:found",
"language:ar",
"license:gpl-2.0",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-lan... | https://huggingface.co/datasets/tashkeela/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ar
license:
- gpl-2.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty... |
null | null | @inproceedings{48484,
title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset},
author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik},
year = {2019}
} | Taskmaster-1 is a goal-oriented conversational dataset. It includes 13,215 task-based dialogs comprising six domains. Two procedures were used to create this collection, each with unique advantages. The first involves a two-person, spoken "Wizard of Oz" (WOz) approach in which trained agents and crowdsourced workers i... | false | 758 | false | taskmaster1 | 2022-11-03T16:31:16.000Z | taskmaster-1 | false | d6401cea353aa0d1b7fedb82f38345567e8ef87e | [] | [
"arxiv:1909.05358",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue... | https://huggingface.co/datasets/taskmaster1/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: taskmaster-1
pretty_name: ... |
null | null | @inproceedings{48484,
title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset},
author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik},
year = {2019}
} | Taskmaster is a dataset for goal-oriented conversations. The Taskmaster-2 dataset consists of 17,289 dialogs in seven domains: restaurants, food ordering, movies, hotels, flights, music, and sports. Unlike Taskmaster-1, which includes both written "self-dialogs" and spoken two-person dialogs, Taskmaster-2... | false | 2,170 | false | taskmaster2 | 2022-11-03T16:32:19.000Z | taskmaster-2 | false | 519f8e6b70060eaf9a6f2d6bd4bf4c08f6bf7c01 | [] | [
"arxiv:1909.05358",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue... | https://huggingface.co/datasets/taskmaster2/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: taskmaster-2
pretty_name: ... |
null | null | @inproceedings{48484,
title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset},
author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik},
year = {2019}
} | Taskmaster is a dataset for goal-oriented conversations. The Taskmaster-3 dataset consists of 23,757 movie ticketing dialogs. By "movie ticketing" we mean conversations where the customer's goal is to purchase tickets after deciding on theater, time, movie name, number of tickets, and date, or to opt out of the transaction.... | false | 595 | false | taskmaster3 | 2022-11-03T16:30:39.000Z | null | false | 55b57b262cb27d3ed7a90ac98c1c7301946ec2fe | [] | [
"arxiv:1909.05358",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialog... | https://huggingface.co/datasets/taskmaster3/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: null
pretty_name: taskma... |
null | null | @InProceedings{TIEDEMANN12.463,
  author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
  booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey... | This is a collection of translated sentences from Tatoeba
359 languages, 3,403 bitexts
total number of files: 750
total number of tokens: 65.54M
total number of sentence fragments: 8.96M | false | 3,251 | false | tatoeba | 2022-11-03T16:32:34.000Z | tatoeba | false | f0b1d791cdd3b9439a9221c9fab50ed8841538f4 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:ab",
"language:acm",
"language:ady",
"language:af",
"language:afb",
"language:afh",
"language:aii",
"language:ain",
"language:ajp",
"language:akl",
"language:aln",
"language:am",
"language:an",
"language:ang",
"langua... | https://huggingface.co/datasets/tatoeba/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- ab
- acm
- ady
- af
- afb
- afh
- aii
- ain
- ajp
- akl
- aln
- am
- an
- ang
- aoz
- apc
- ar
- arq
- ary
- arz
- as
- ast
- avk
- awa
- ayl
- az
- ba
- bal
- bar
- be
- ber
- bg
- bho
- bjn
- bm
- bn
- bo
- br
- brx
- bs
- bua
- bvy
- bzt
- ca
-... |
null | null | @inproceedings{Ye2018WordEmbeddings,
author = {Qi, Ye and Sachan, Devendra and Felix, Matthieu and Padmanabhan, Sarguna and Neubig, Graham},
title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},
booktitle = {HLT-NAACL},
year = {2018},
} | Data sets derived from TED talk transcripts for comparing similar language pairs
where one is high resource and the other is low resource. | false | 2,419 | false | ted_hrlr | 2022-11-03T16:32:42.000Z | null | false | e9f767ab1634fb0948db8030985b1fa535faa4d4 | [] | [
"annotations_creators:crowdsourced",
"language:az",
"language:be",
"language:en",
"language:es",
"language:fr",
"language:gl",
"language:he",
"language:it",
"language:pt",
"language:ru",
"language:tr",
"language_creators:expert-generated",
"license:cc-by-nc-nd-4.0",
"multilinguality:tran... | https://huggingface.co/datasets/ted_hrlr/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language:
- az
- be
- en
- es
- fr
- gl
- he
- it
- pt
- ru
- tr
language_creators:
- expert-generated
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
pretty_name: TEDHrlr
size_categories:
- 1M<n<10M
source_datasets:
- extended|ted_talks_iwslt
task_categories:
- transl... |
null | null | J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012) | A parallel corpus of TED talk subtitles provided by CASMACAT: http://www.casmacat.eu/corpus/ted2013.html. The files are originally provided by https://wit3.fbk.eu.
15 languages, 14 bitexts
total number of files: 28
total number of tokens: 67.67M
total number of sentence fragments: 3.81M | false | 2,382 | false | ted_iwlst2013 | 2022-11-03T16:32:29.000Z | null | false | 844e40cdcafb4751b2a521e80ce4bb390fbbf8c0 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:ar",
"language:de",
"language:en",
"language:es",
"language:fa",
"language:fr",
"language:it",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sl",
"language:tr",
"language:zh",
... | https://huggingface.co/datasets/ted_iwlst2013/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- de
- en
- es
- fa
- fr
- it
- nl
- pl
- pt
- ro
- ru
- sl
- tr
- zh
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: nul... |
null | null | @InProceedings{qi-EtAl:2018:N18-2,
author = {Qi, Ye and Sachan, Devendra and Felix, Matthieu and Padmanabhan, Sarguna and Neubig, Graham},
title = {When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?},
booktitle = {Proceedings of the 2018 Conference of the North Amer... | Massively multilingual (60 language) data set derived from TED Talk transcripts.
Each record consists of parallel arrays of language and text. Missing and
incomplete translations will be filtered out. | false | 439 | false | ted_multi | 2022-11-03T16:16:22.000Z | null | false | 1c1fe0dfd340257fddd61424e7413e290eab5611 | [] | [] | https://huggingface.co/datasets/ted_multi/resolve/main/README.md | ---
pretty_name: TEDMulti
paperswithcode_id: null
dataset_info:
features:
- name: translations
dtype:
translation_variable_languages:
languages:
- ar
- az
- be
- bg
- bn
- bs
- calv
- cs
- da
- de
- el
... |
null | null | @inproceedings{cettolo-etal-2012-wit3,
title = "{WIT}3: Web Inventory of Transcribed and Translated Talks",
author = "Cettolo, Mauro and
Girardi, Christian and
Federico, Marcello",
booktitle = "Proceedings of the 16th Annual conference of the European Association for Machine Translation",
... | The core of WIT3 is the TED Talks corpus, that basically redistributes the original content published by the TED Conference website (http://www.ted.com). Since 2007,
the TED Conference, based in California, has been posting all video recordings of its talks together with subtitles in English
and their translations in m... | false | 3,392 | false | ted_talks_iwslt | 2022-10-28T16:41:35.000Z | null | false | c9b711fb8e09017bb430d0ae5b86caea7642c381 | [] | [
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language:af",
"language:am",
"language:ar",
"language:arq",
"language:art",
"language:as",
"language:ast",
"language:az",
"language:be",
"language:bg",
"language:bi",
"langua... | https://huggingface.co/datasets/ted_talks_iwslt/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- af
- am
- ar
- arq
- art
- as
- ast
- az
- be
- bg
- bi
- bn
- bo
- bs
- ca
- ceb
- cnh
- cs
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- ga
- gl
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hup
- hy
... |
null | null | @InProceedings{huggingface:dataset,
title = {Indic NLP - Natural Language Processing for Indian Languages},
author = {Sudalai Rajkumar and Anusha Motamarri},
year={2019}
} | This dataset was created by scraping Telugu novels from teluguone.com. It can be used for NLP tasks such as topic modeling, word embeddings, and transfer learning. | false | 322 | false | telugu_books | 2022-11-03T16:07:57.000Z | null | false | 5692fa3357c49771fc35f2770e2743fcff968b89 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:te",
"license:unknown",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_... | https://huggingface.co/datasets/telugu_books/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- te
license:
- unknown
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_i... |
null | null | @InProceedings{kaggle:dataset,
title = {Telugu News - Natural Language Processing for Indian Languages},
author={Sudalai Rajkumar and Anusha Motamarri},
year={2019}
} | This dataset contains Telugu language news articles along with respective
topic labels (business, editorial, entertainment, nation, sport) extracted from
the daily Andhra Jyoti. This dataset could be used to build Classification and Language Models. | false | 324 | false | telugu_news | 2022-11-03T16:08:15.000Z | null | false | 8784f3138f3bca223f04a808bdd236338769dda8 | [] | [
"annotations_creators:machine-generated",
"language_creators:other",
"language:te",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"... | https://huggingface.co/datasets/telugu_news/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- other
language:
- te
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- text-classification
task_ids:
- language-modeling
- masked-language-modelin... |
null | null | @InProceedings{pilevar2011tep,
title = {TEP: Tehran English-Persian Parallel Corpus},
booktitle = {Proceedings of the 12th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing-2011)},
author = {Pilevar, M. T. and Faili, H. and Pilevar, A. H.},
year = {2011}
} | TEP: Tehran English-Persian parallel corpus. The first free Eng-Per corpus, provided by the Natural Language and Text Processing Laboratory, University of Tehran. | false | 323 | false | tep_en_fa_para | 2022-11-03T16:08:03.000Z | null | false | 06ddfcbac5ce6b9a990ce113070a90fda83b46cc | [] | [
"annotations_creators:found",
"language_creators:found",
"language:en",
"language:fa",
"license:unknown",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:translation"
] | https://huggingface.co/datasets/tep_en_fa_para/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- fa
license:
- unknown
multilinguality:
- translation
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: TepEnFaPara
dataset_info:
features:
- name: tra... |
null | null | @INPROCEEDINGS{9401852, author={Levkovskyi, Oleksii and Li, Wei}, booktitle={SoutheastCon 2021}, title={Generating Predicate Logic Expressions from Natural Language}, year={2021}, volume={}, number={}, pages={1-8}, doi={10.1109/SoutheastCon45413.2021.9401852}} | The dataset contains about 100,000 simple English sentences selected and filtered from enTenTen15 and their translation into First Order Logic (FOL) Lambda Dependency-based Compositional Semantics using ccg2lambda. | false | 323 | false | text2log | 2022-11-03T16:15:15.000Z | null | false | bfa1a013d61207ed97e9f393a66f2578ca3076b2 | [] | [
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:translation"
] | https://huggingface.co/datasets/text2log/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: text2log
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
dataset_info:
features:
- name: sentence
... |
null | null | @article{sirihattasak2019annotation,
title={Annotation and Classification of Toxicity for Thai Twitter},
author={Sirihattasak, Sugan and Komachi, Mamoru and Ishikawa, Hiroshi},
year={2019}
} | Thai Toxicity Tweet Corpus contains 3,300 tweets annotated by humans with guidelines including a 44-word dictionary.
The author obtained 2,027 and 1,273 toxic and non-toxic tweets, respectively; these were labeled by three annotators. The result of corpus
analysis indicates that tweets that include toxic words are not ... | false | 353 | false | thai_toxicity_tweet | 2022-11-03T16:30:39.000Z | null | false | e886630685432cb54b0cef667f3f22275f5879ea | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:th",
"license:cc-by-nc-3.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:sentiment-classification"
] | https://huggingface.co/datasets/thai_toxicity_tweet/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- th
license:
- cc-by-nc-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: null
pretty_name: ThaiToxic... |
null | null | @misc{Wannaphong Phatthiyaphaibun_2019,
title={wannaphongcom/thai-ner: ThaiNER 1.3},
url={https://zenodo.org/record/3550546},
DOI={10.5281/ZENODO.3550546},
abstractNote={Thai Named Entity Recognition},
publisher={Zenodo},
author={Wannaphong Phatthiyaphaibun},
year={2019},
month={Nov}
} | ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence
[unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by
[Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/).
It is used to train NER taggers in [PyThaiNLP](h... | false | 338 | false | thainer | 2022-11-03T16:15:39.000Z | null | false | 873ab499a95b21e0736a63f21c7f1c75bd99b878 | [] | [
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:found",
"language_creators:expert-generated",
"language:th",
"license:cc-by-3.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|other-tirasaroj-aroonmanakun",
... | https://huggingface.co/datasets/thainer/resolve/main/README.md | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- found
- expert-generated
language:
- th
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-tirasaroj-aroonmanakun
task_categories:
- token-classification
task_ids:
- named... |
null | null | No clear citation guidelines from source:
https://aiforthai.in.th/corpus.php
SQuAD version:
https://github.com/PyThaiNLP/thaiqa_squad | `thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in
[SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, originally created by [NECTEC](https://www.nectec.or.th/en/) from
Wikipedia articles and adapted to [SQuAD](https://rajpurkar.git... | false | 356 | false | thaiqa_squad | 2022-11-03T16:15:52.000Z | null | false | cb2e0deca5a0ec4e3adbdf6f96aa12ecaaea57da | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:th",
"license:cc-by-nc-sa-3.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|other-thaiqa",
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa... | https://huggingface.co/datasets/thaiqa_squad/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- th
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-thaiqa
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
paperswithcode_id: null
p... |
null | null | @mastersthesis{chumpolsathien_2020,
title={Using Knowledge Distillation from Keyword Extraction to Improve the Informativeness of Neural Cross-lingual Summarization},
author={Chumpolsathien, Nakhun},
year={2020},
school={Beijing Institute of Technology}
} | ThaiSum is a large-scale corpus for Thai text summarization obtained from several online news websites, namely Thairath,
ThaiPBS, Prachathai, and The Standard. This dataset consists of over 350,000 article and summary pairs
written by journalists. | false | 324 | false | thaisum | 2022-11-03T16:16:06.000Z | null | false | 410513d8dfec72a2fb812c914bf5da039c096bda | [] | [
"annotations_creators:no-annotation",
"language_creators:found",
"language:th",
"license:mit",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:langua... | https://huggingface.co/datasets/thaisum/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- th
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcod... |
null | null | @misc{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Leo Gao and Stella Biderman and Sid Black and Laurence Golding and Travis Hoppe and Charles Foster and Jason Phang and Horace He and Anish Thite and Noa Nabeshima and Shawn Presser and Connor Leahy},
y... | The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality
datasets combined together. | false | 4,390 | false | the_pile | 2022-10-28T16:41:39.000Z | null | false | afce106881fec4ed022414974b3c25884539b1fe | [] | [
"arxiv:2101.00027",
"annotations_creators:no-annotation",
"language_creators:found",
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",... | https://huggingface.co/datasets/the_pile/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: The Pile
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# ... |
null | null | @article{pile,
title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
... | This dataset is Shawn Presser's work and is part of EleutherAi/The Pile dataset. It contains all of Bibliotik in plain .txt form, i.e., 197,000 books processed in exactly the same way as was done for bookcorpusopen (a.k.a. books1). It seems to be similar to OpenAI's mysterious "books2" dataset referenced in their paper... | false | 345 | false | the_pile_books3 | 2022-11-03T16:16:04.000Z | null | false | 8f2f68541fc37fa840eaa7623b83b38d6ae69adc | [] | [
"arxiv:2101.00027",
"annotations_creators:no-annotation",
"language_creators:found",
"language:en",
"license:mit",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",... | https://huggingface.co/datasets/the_pile_books3/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: Books3
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
dataset_i... |
null | null | @article{pile,
title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
... | OpenWebText2 is part of the EleutherAI/The Pile dataset and is an enhanced version of the original OpenWebTextCorpus covering all Reddit submissions from 2005 up until April 2020, with further months becoming available after the corresponding PushShift dump files are released. | false | 209 | false | the_pile_openwebtext2 | 2022-11-03T16:07:43.000Z | null | false | fffa31378926ead54603cbaccd6abc92ff29a32b | [] | [
"arxiv:2101.00027",
"annotations_creators:no-annotation",
"language_creators:found",
"language:en",
"license:mit",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classi... | https://huggingface.co/datasets/the_pile_openwebtext2/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: OpenWebText2
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- text-classification
task_ids:
- language-modeling
- maske... |
null | null | @article{pile,
title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
... | This dataset is part of the EleutherAI/The Pile dataset and is a dataset for language models built by processing the Stack Exchange data dump, an anonymized dump of all user-contributed content on the Stack Exchange network. | false | 326 | false | the_pile_stack_exchange | 2022-11-03T16:08:03.000Z | null | false | a2f26e6b9bc38da50c60b792da4233f5d3af523c | [] | [
"arxiv:2101.00027",
"annotations_creators:no-annotation",
"language_creators:found",
"language:en",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-mo... | https://huggingface.co/datasets/the_pile_stack_exchange/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Stack Exchange
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-mo... |
null | null | Roberts Rozis, Raivis Skadins, 2017, Tilde MODEL - Multilingual Open Data for EU Languages. Proceedings of the 21st Nordic Conference of Computational Linguistics NODALIDA 2017 | This is the Tilde MODEL Corpus – Multilingual Open Data for European Languages.
The data have been collected from sites that allow free use and reuse of their content, as well as from public-sector websites. The activities have been undertaken as part of the ODINE Open Data Incubator for Europe, which aims to support the ... | false | 955 | false | tilde_model | 2022-11-03T16:31:39.000Z | tilde-model-corpus | false | acc5759086bd8f4b341d50d0d8a256397ae25c83 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:hr",
"language:hu",
"language:is",
"language:it",
"language:lt",
... | https://huggingface.co/datasets/tilde_model/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- hr
- hu
- is
- it
- lt
- lv
- mt
- nl
- 'no'
- pl
- pt
- ro
- ru
- sk
- sl
- sq
- sr
- sv
- tr
- uk
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- n<1K
source_datasets:
... |
null | null | @inproceedings{qin-etal-2021-timedial,
title = "{TimeDial: Temporal Commonsense Reasoning in Dialog}",
author = "Qin, Lianhui and Gupta, Aditya and Upadhyay, Shyam and He, Luheng and Choi, Yejin and Faruqui, Manaal",
booktitle = "Proc. of ACL",
year = "2021"
} | TimeDial presents a crowdsourced English challenge set for temporal commonsense reasoning, formulated
as a multiple-choice cloze task with around 1.5k carefully curated dialogs. The dataset is derived from
DailyDialog (Li et al., 2017), a multi-turn dialog corpus.
In order to establish strong baselines a... | false | 323 | false | time_dial | 2022-11-03T16:07:53.000Z | timedial | false | c78882c22fe0de8f19220bd01e679b5e8f7d2c5c | [] | [
"arxiv:2106.04571",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:multi-label-classif... | https://huggingface.co/datasets/time_dial/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: 'TimeDial: Temporal Commonsense Reasoning in Dialog'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
t... |
null | null | @data{DVN/DPQMQH_2020,
author = {Kulkarni, Rohit},
publisher = {Harvard Dataverse},
title = {{Times of India News Headlines}},
year = {2020},
version = {V1},
doi = {10.7910/DVN/DPQMQH},
url = {https://doi.org/10.7910/DVN/DPQMQH}
} | This news dataset is a persistent historical archive of notable events in the Indian subcontinent from start-2001 to mid-2020, recorded in real time by the journalists of India. It contains approximately 3.3 million events published by Times of India. Times Group, as a news agency, reaches out to a very wide audience acros... | false | 322 | false | times_of_india_news_headlines | 2022-11-03T16:15:42.000Z | null | false | 3c5f34fa047bc69d278345997c9170bd2648cc91 | [] | [
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"language:en",
"license:cc0-1.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"task_categories:text2text-generation",
"task_categories:text-retrieval",
"task_ids:document-retrieva... | https://huggingface.co/datasets/times_of_india_news_headlines/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text2text-generation
- text-retrieval
task_ids:
- document-retrieval
- fact-checking-retrieval
- tex... |
null | null | @inproceedings{
title={TIMIT Acoustic-Phonetic Continuous Speech Corpus},
author={Garofolo, John S., et al},
ldc_catalog_no={LDC93S1},
DOI={https://doi.org/10.35111/17gk-bn40},
journal={Linguistic Data Consortium, Philadelphia},
year={1983}
} | The TIMIT corpus of read speech has been developed to provide speech data for acoustic-phonetic research studies
and for the evaluation of automatic speech recognition systems.
TIMIT contains high-quality recordings of 630 individuals/speakers across 8 different American English dialects,
with each individual reading... | false | 4,164 | false | timit_asr | 2022-10-28T16:41:41.000Z | timit | false | 1d0cd09f9ca7c40158e7e5377f45c9c718e53c68 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:automatic-speech-recognition"
] | https://huggingface.co/datasets/timit_asr/resolve/main/README.md | ---
pretty_name: TIMIT
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
license_details: "LDC-User-Agreement-for-Non-Members"
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- automatic-speech-recogniti... |
null | null | @misc{
author={Karpathy, Andrej},
title={char-rnn},
year={2015},
howpublished={\\url{https://github.com/karpathy/char-rnn}}
} | 40,000 lines of Shakespeare from a variety of Shakespeare's plays. Featured in Andrej Karpathy's blog post 'The Unreasonable Effectiveness of Recurrent Neural Networks': http://karpathy.github.io/2015/05/21/rnn-effectiveness/.
To use for e.g. character modelling:
```
d = datasets.load_dataset('tiny_shakespeare')... | false | 2,578 | false | tiny_shakespeare | 2022-11-03T16:32:19.000Z | null | false | 181a293227031ae3ce902ed21bf4ce924f004997 | [] | [] | https://huggingface.co/datasets/tiny_shakespeare/resolve/main/README.md | ---
paperswithcode_id: null
pretty_name: TinyShakespeare
dataset_info:
features:
- name: text
dtype: string
splits:
- name: test
num_bytes: 55780
num_examples: 1
- name: train
num_bytes: 1003864
num_examples: 1
- name: validation
num_bytes: 55780
num_examples: 1
download_size: ... |
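The tiny_shakespeare entry above alludes to character modelling without showing the preprocessing step. The following is a minimal, self-contained sketch of that step; the sample string stands in for the downloaded corpus, and the vocabulary/encoding scheme shown is an illustrative assumption, not part of the dataset itself.

```python
# Hedged sketch of character-level preprocessing for tiny_shakespeare.
# The sample string below stands in for the full downloaded text.
sample = "First Citizen:\nBefore we proceed any further, hear me speak.\n"

vocab = sorted(set(sample))                       # unique characters
char_to_id = {c: i for i, c in enumerate(vocab)}  # char -> integer id
id_to_char = {i: c for c, i in char_to_id.items()}

ids = [char_to_id[c] for c in sample]             # encoded sequence
decoded = "".join(id_to_char[i] for i in ids)     # lossless round trip
assert decoded == sample
```

With the real corpus, `sample` would be replaced by the `text` field of the loaded split.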
null | null | @misc{
author={Sawatphol, Jitkapat},
title={Thai Literature Corpora},
year={2019},
howpublished={\\url{https://attapol.github.io/tlc.html}}
} | Thai Literature Corpora (TLC): Corpora of machine-ingestible Thai classical literature texts.
Release: 6/25/19
It consists of two datasets:
## TLC set
It consists of texts from the [Vajirayana Digital Library](https://vajirayana.org/), stored by chapter and stanza (non-tokenized).
tlc v.2.0 (6/17/19): a total of 34 documents,... | false | 636 | false | tlc | 2022-11-03T16:31:06.000Z | null | false | df6c6030d67228b03b8db55c25073f4e036da83d | [] | [
"annotations_creators:expert-generated",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"language:th",
"license:unknown",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
... | https://huggingface.co/datasets/tlc/resolve/main/README.md | ---
pretty_name: Thai Literature Corpora (TLC)
annotations_creators:
- expert-generated
- no-annotation
language_creators:
- expert-generated
language:
- th
license:
- unknown
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- la... |
null | null | @inproceedings{yoshimura-etal-2020-reference,
title = "{SOME}: Reference-less Sub-Metrics Optimized for Manual Evaluations of Grammatical Error Correction",
author = "Yoshimura, Ryoma and
Kaneko, Masahiro and
Kajiwara, Tomoyuki and
Komachi, Mamoru",
booktitle = "Proceedings of the 28th ... | A dataset for GEC metrics with manual evaluations of grammaticality, fluency, and meaning preservation for system outputs. More detail about the creation of the dataset can be found in Yoshimura et al. (2020). | false | 568 | false | tmu_gfm_dataset | 2022-11-03T16:30:48.000Z | null | false | 0d777608043c765380778305df41387c763b0d49 | [] | [
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text2text-generation",
"tags:grammatical-error-correction"
] | https://huggingface.co/datasets/tmu_gfm_dataset/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: null
pretty_name: TMU-GFM-Dataset
tags:
- gramm... |
null | null | @article{DBLP:journals/corr/abs-2010-04543,
author = {Joao Augusto Leite and
Diego F. Silva and
Kalina Bontcheva and
Carolina Scarton},
title = {Toxic Language Detection in Social Media for Brazilian Portuguese:
New Dataset and Multilingual Analysis... | ToLD-Br is the largest dataset of toxic tweets in Brazilian Portuguese, crowdsourced
by 42 annotators selected from a pool of 129 volunteers. Annotators were selected with the aim
of creating a demographically diverse group (ethnicity, sexual orientation, age, gender).
Each tweet was labeled by three annotators in 6 p... | false | 603 | false | told-br | 2022-11-03T16:30:46.000Z | told-br | false | 6b603e832346c5177bc48d046ea78120a68bed09 | [] | [
"arxiv:2010.04543",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:pt",
"language_bcp47:pt-BR",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-classification",
"tags:hate-spe... | https://huggingface.co/datasets/told-br/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- pt
language_bcp47:
- pt-BR
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: ToLD-Br
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: t... |
null | null | @inproceedings{parikh2020totto,
title={{ToTTo}: A Controlled Table-To-Text Generation Dataset},
author={Parikh, Ankur P and Wang, Xuezhi and Gehrmann, Sebastian and Faruqui, Manaal and Dhingra, Bhuwan and Yang, Diyi and Das, Dipanjan},
booktitle={Proceedings of EMNLP},
year={2020}
} | ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description. | false | 400 | false | totto | 2022-11-03T16:16:21.000Z | totto | false | 03748f2dfcd14a6deb0cc0a36ea8c3ce99138f55 | [] | [
"arxiv:2004.14373",
"annotations_creators:expert-generated",
"language_creators:found",
"language:en",
"license:cc-by-sa-3.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:table-to-text"
] | https://huggingface.co/datasets/totto/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
paperswithcode_id: totto
pretty_name: ToTTo
dataset_info:
features:
- n... |
null | null | @inproceedings{li-roth-2002-learning,
title = "Learning Question Classifiers",
author = "Li, Xin and
Roth, Dan",
booktitle = "{COLING} 2002: The 19th International Conference on Computational Linguistics",
year = "2002",
url = "https://www.aclweb.org/anthology/C02-1150",
}
@inproceedings{hovy... | The Text REtrieval Conference (TREC) Question Classification dataset contains 5,500 labeled questions in the training set and another 500 in the test set.
The dataset has 6 coarse class labels and 50 fine class labels. The average sentence length is 10 words, and the vocabulary size is 8,700.
Data are collected from four sources: 4,500... | false | 103,666 | false | trec | 2022-11-03T16:47:43.000Z | trecqa | false | 2c7efd86065922a44b2b8739bd7dbc5825036267 | [] | [
"annotations_creators:expert-generated",
"language:en",
"language_creators:expert-generated",
"license:unknown",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:multi-class-classification"
] | https://huggingface.co/datasets/trec/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- expert-generated
license:
- unknown
multilinguality:
- monolingual
pretty_name: Text Retrieval Conference Question Answering
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-c... |
null | null | @article{2017arXivtriviaqa,
author = {{Joshi}, Mandar and {Choi}, Eunsol and {Weld},
Daniel and {Zettlemoyer}, Luke},
title = "{triviaqa: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension}",
journal = {arXiv e-prints},
year = 2017,
ei... | TriviaQA is a reading comprehension dataset containing over 650K
question-answer-evidence triples. TriviaQA includes 95K question-answer
pairs authored by trivia enthusiasts and independently gathered evidence
documents, six per question on average, that provide high quality distant
supervision for answering the ques... | false | 54,008 | false | trivia_qa | 2022-11-03T16:47:40.000Z | triviaqa | false | 35a534c59de67132b80dde63f37f9aed75aeef93 | [] | [
"arxiv:1705.03551",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:question-answering",
"task_cate... | https://huggingface.co/datasets/trivia_qa/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
paperswithcode_id: triviaqa
pretty_name: TriviaQA
size_categories:
- 10K<n<100K
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
- text2text-gener... |
null | null | @inproceedings{medhaffar-etal-2017-sentiment,
title = "Sentiment Analysis of {T}unisian Dialects: Linguistic Ressources and Experiments",
author = "Medhaffar, Salima and
Bougares, Fethi and
      Est{\`e}ve, Yannick and
Hadrich-Belguith, Lamia",
booktitle = "Proceedings of the Third {A}rabic N... | Tunisian Sentiment Analysis Corpus.
About 17k user comments manually annotated with positive and negative polarities. The corpus was collected from Facebook user comments written on the official pages of Tunisian radio and TV channels, namely Mosaique FM, JawhraFM, Shemes FM, HiwarElttounsi TV and Nessma TV. The corpus is ... | false | 321 | false | tsac | 2022-11-03T16:08:15.000Z | tsac | false | fda21e12f36f800f702a526b63f7471a71765235 | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:aeb",
"license:lgpl-3.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:sentiment-classification"
] | https://huggingface.co/datasets/tsac/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- aeb
license:
- lgpl-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: tsac
pretty_name: Tunisian S... |
null | null | @article{doi:10.5505/pajes.2018.15931,
author = {Yıldırım, Savaş and Yıldız, Tuğba},
title = {A comparative analysis of text classification for Turkish language},
journal = {Pamukkale Univ Muh Bilim Derg},
volume = {24},
number = {5},
pages = {879-886},
year = {2018},
doi = {10.5505/pajes.2018.15931},
note ={doi: 10.55... | The dataset is taken from the Kemik group
http://www.kemik.yildiz.edu.tr/
The data are pre-processed for text categorization: collocations are found, the character set is corrected, and so forth.
We named it TTC4900, mimicking the naming convention of the TTC-3600 dataset shared by the study http://journals.sagepub.com/doi/abs/1... | false | 349 | false | ttc4900 | 2022-11-03T16:16:00.000Z | null | false | 87a2db923c22846f5e03f5b14cc51c2868671077 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:tr",
"license:unknown",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text-classification",
"tags:news-category-classification"
] | https://huggingface.co/datasets/ttc4900/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- tr
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: null
pretty_name: TTC4900 - A Benchmark Data for Turkish Text Categ... |
null | null | @inproceedings{Chayma2020,
title={TUNIZI: a Tunisian Arabizi sentiment analysis Dataset},
author={Fourati, Chayma and Messaoudi, Abir and Haddad, Hatem},
booktitle={AfricaNLP Workshop, Putting Africa on the NLP Map. ICLR 2020, Virtual Event},
volume = {arXiv:3091079},
year = {2020},
url = {https://arxiv.org/submit/3091... | On social media, Arabic speakers tend to express themselves in their own local dialect. To do so, Tunisians use "Tunisian Arabizi", which consists of the Latin script supplemented with numerals rather than the Arabic alphabet. TUNIZI is the first Tunisian Arabizi dataset, including 3K balanced sentences covering differ... | false | 321 | false | tunizi | 2022-11-03T16:08:05.000Z | tunizi | false | 6a9b6535db54b6c701686b5988d32651a798c14d | [] | [
"arxiv:2004.14303",
"annotations_creators:expert-generated",
"language_creators:found",
"language:aeb",
"license:unknown",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:sentiment-classification"
] | https://huggingface.co/datasets/tunizi/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- aeb
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: tunizi
pretty_name: TUNIZI
data... |
null | null | @article{Khot2017AnsweringCQ,
title={Answering Complex Questions Using Open Information Extraction},
author={Tushar Khot and A. Sabharwal and Peter Clark},
journal={ArXiv},
year={2017},
volume={abs/1704.05572}
} | The TupleInf Open IE dataset contains Open IE tuples extracted from 263K sentences that were used by the solver in “Answering Complex Questions Using Open Information Extraction” (referred to as Tuple KB, T). These sentences were collected from a large Web corpus using training questions from 4th and 8th grade as queries.... | false | 643 | false | tuple_ie | 2022-11-03T16:31:04.000Z | tupleinf-open-ie-dataset | false | 3dfe2c3c76c6365261b38762a9a9da0fc68c6ca0 | [] | [
"annotations_creators:found",
"language_creators:machine-generated",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:other",
"tags:open-information-extraction"
] | https://huggingface.co/datasets/tuple_ie/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- machine-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: tupleinf-open-ie-dataset
pretty_name: TupleInf Open IE
tags:
- open-... |
null | null | @article{Xu-EtAl:2016:TACL,
author = {Wei Xu and Courtney Napoles and Ellie Pavlick and Quanze Chen and Chris Callison-Burch},
title = {Optimizing Statistical Machine Translation for Text Simplification},
journal = {Transactions of the Association for Computational Linguistics},
volume = {4},
year = {2016},
url ... | TURKCorpus is a dataset for evaluating sentence simplification systems that focus on lexical paraphrasing,
as described in "Optimizing Statistical Machine Translation for Text Simplification". The corpus is composed of 2000 validation and 359 test original sentences that were each simplified 8 times by different annota... | false | 675 | false | turk | 2022-11-03T16:31:10.000Z | null | false | d51a3c526cef6af652599ad016b5781ad099906d | [] | [
"annotations_creators:machine-generated",
"language_creators:found",
"language:en",
"license:gpl-3.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text2text-generation",
"task_ids:text-simplification"
] | https://huggingface.co/datasets/turk/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- en
license:
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- text-simplification
paperswithcode_id: null
pretty_name: TURK
dataset_info... |
null | null | @inproceedings{mirzakhalov2021large,
title={A Large-Scale Study of Machine Translation in Turkic Languages},
author={Mirzakhalov, Jamshidbek and Babu, Anoop and Ataman, Duygu and Kariev, Sherzod and Tyers, Francis and Abduraufov, Otabek and Hajili, Mammad and Ivanova, Sardana and Khaytbaev, Abror and Laverghetta Jr... | A Large-Scale Study of Machine Translation in Turkic Languages | false | 14,258 | false | turkic_xwmt | 2022-11-03T16:47:15.000Z | null | false | c2af8281cd4b0f7292cab5621f2c866bb06c80d3 | [] | [
"arxiv:2109.04593",
"annotations_creators:crowdsourced",
"language_creators:found",
"language:az",
"language:ba",
"language:en",
"language:kaa",
"language:kk",
"language:ky",
"language:ru",
"language:sah",
"language:tr",
"language:uz",
"license:mit",
"multilinguality:translation",
"siz... | https://huggingface.co/datasets/turkic_xwmt/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- az
- ba
- en
- kaa
- kk
- ky
- ru
- sah
- tr
- uz
license:
- mit
multilinguality:
- translation
pretty_name: turkic_xwmt
size_categories:
- n<1K
task_categories:
- translation
task_ids: []
source_datasets:
- extended|WMT 2020 News Translati... |
null | null | null | This dataset, collected from Kaggle, consists of Turkish movie reviews scored between 0 and 5. | false | 322 | false | turkish_movie_sentiment | 2022-11-03T16:07:48.000Z | null | false | e5cf0b256fbeda1b9b1c04ddf9f24d9108dc93c4 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:tr",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring"
] | https://huggingface.co/datasets/turkish_movie_sentiment/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- tr
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
- sentiment-scoring
paperswithcode_id: null
pretty_name: 'Tu... |
null | null | @article{DBLP:journals/corr/SahinTYES17,
author = {H. Bahadir Sahin and
Caglar Tirkaz and
Eray Yildiz and
Mustafa Tolga Eren and
Omer Ozan Sonmez},
title = {Automatically Annotated Turkish Corpus for Named Entity Recognition
... | Turkish Wikipedia Named-Entity Recognition and Text Categorization
(TWNERTC) dataset is a collection of automatically categorized and annotated
sentences obtained from Wikipedia. The authors constructed large-scale
gazetteers by using a graph crawler algorithm to extract
relevant entity and domain information
from a se... | false | 320 | false | turkish_ner | 2022-11-03T16:07:57.000Z | null | false | 7236c960e1327077a3c1d2f03f9fcb7d16fe0a41 | [] | [
"arxiv:1702.02363",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"language:tr",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:token-classification",
"task_ids:named-entity-recognition... | https://huggingface.co/datasets/turkish_ner/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- tr
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: null
pretty_name... |
null | null | null | Turkish Product Reviews.
This repository contains 235,165 product reviews collected online. There are 220,284 positive and 14,881 negative reviews. | false | 358 | false | turkish_product_reviews | 2022-11-03T16:16:25.000Z | null | false | e75c6875d7d89ec19de6c08ca2912ecd74e881c0 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:tr",
"license:unknown",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:sentiment-classification"
] | https://huggingface.co/datasets/turkish_product_reviews/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- tr
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: null
pretty_name: Turkish Product Reviews
... |
null | null | \ | Shrunken version (48 entity types) of the turkish_ner dataset.
Original turkish_ner dataset: Automatically annotated Turkish corpus for named entity recognition and text categorization using large-scale gazetteers. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under... | false | 321 | false | turkish_shrinked_ner | 2022-11-03T16:07:53.000Z | null | false | 19c2f04006bc1fc3a9f52a65fed35037a9302d11 | [] | [
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"language:tr",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|other-turkish_ner",
"task_categories:token-classification",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/turkish_shrinked_ner/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- tr
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|other-turkish_ner
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id... |
null | null | @inproceedings{luoma-etal-2020-broad,
title = "A Broad-coverage Corpus for {F}innish Named Entity Recognition",
author = {Luoma, Jouni and Oinonen, Miika and Pyyk{\"o}nen, Maria and Laippala, Veronika and Pyysalo, Sampo},
booktitle = "Proceedings of The 12th Language Resources and Evaluation Conference",
year = "2020",... | An open, broad-coverage corpus for Finnish named entity recognition presented in Luoma et al. (2020) A Broad-coverage Corpus for Finnish Named Entity Recognition. | false | 319 | false | turku_ner_corpus | 2022-11-03T16:07:47.000Z | null | false | da2ff600f5c220301aba1bb64e2ad264ae359b42 | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:fi",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:token-classification",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/turku_ner_corpus/resolve/main/README.md | ---
pretty_name: Turku NER corpus
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fi
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition... |
null | null | @inproceedings{barbieri2020tweeteval,
title={{TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification}},
author={Barbieri, Francesco and Camacho-Collados, Jose and Espinosa-Anke, Luis and Neves, Leonardo},
booktitle={Proceedings of Findings of EMNLP},
year={2020}
} | TweetEval consists of seven heterogenous tasks in Twitter, all framed as multi-class tweet classification. All tasks have been unified into the same benchmark, with each dataset presented in the same format and with fixed training, validation and test splits. | false | 181,327 | false | tweet_eval | 2022-11-03T16:47:42.000Z | tweeteval | false | 02fe433bab2e2aa5c2d58f715c7dfc57cd2889f2 | [] | [
"arxiv:2010.12421",
"annotations_creators:found",
"language_creators:found",
"language:en",
"license:unknown",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:extended|other-tweet-datas... | https://huggingface.co/datasets/tweet_eval/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- extended|other-tweet-datasets
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-clas... |
null | null | @inproceedings{xiong2019tweetqa,
title={TweetQA: A Social Media Focused Question Answering Dataset},
author={Xiong, Wenhan and Wu, Jiawei and Wang, Hong and Kulkarni, Vivek and Yu, Mo and Guo, Xiaoxiao and Chang, Shiyu and Wang, William Yang},
booktitle={Proceedings of the 57th Annual Meeting of the Association f... | TweetQA is the first dataset for QA on social media data by leveraging news media and crowdsourcing. | false | 1,613 | false | tweet_qa | 2022-11-03T16:31:46.000Z | tweetqa | false | ec89178234f05e28c6ab1d621aef0550ebd6e41e | [] | [
"arxiv:1907.06292",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:question-answering",
"task_ids:open-domain-qa"
] | https://huggingface.co/datasets/tweet_qa/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: tweetqa
pretty_name: TweetQA
data... |
null | null | @inproceedings{Mubarak2020bilingualtweets,
title={Constructing a Bilingual Corpus of Parallel Tweets},
author={Mubarak, Hamdy and Hassan, Sabit and Abdelali, Ahmed},
booktitle={Proceedings of 13th Workshop on Building and Using Comparable Corpora (BUCC)},
address={Marseille, France},
year={2020}
} | Twitter users often post parallel tweets—tweets that contain the same content but are
written in different languages. Parallel tweets can be an important resource for developing
machine translation (MT) systems among other natural language processing (NLP) tasks. This
resource is a result of a generic m... | false | 639 | false | tweets_ar_en_parallel | 2022-11-03T16:31:02.000Z | bilingual-corpus-of-arabic-english-parallel | false | ccf597b8124c68f1b2b4f83753193a78a2d21356 | [] | [
"annotations_creators:expert-generated",
"annotations_creators:no-annotation",
"language_creators:found",
"language:ar",
"language:en",
"license:apache-2.0",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:translation",
"tags:tweets-transl... | https://huggingface.co/datasets/tweets_ar_en_parallel/resolve/main/README.md | ---
annotations_creators:
- expert-generated
- no-annotation
language_creators:
- found
language:
- ar
- en
license:
- apache-2.0
multilinguality:
- translation
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: bilingual-corpus-of-arabic-english-para... |
null | null | @InProceedings{sharma2018hatespeech,
  title = {Sentimental Analysis of Tweets for Detecting Hate/Racist Speeches},
  author = {Roshan Sharma},
year={2018}
} | The objective of this task is to detect hate speech in tweets. For the sake of simplicity, we say a tweet contains hate speech if it has a racist or sexist sentiment associated with it. So, the task is to classify racist or sexist tweets from other tweets.
Formally, given a training sample of tweets and labels, where ... | false | 2,992 | false | tweets_hate_speech_detection | 2022-11-03T16:32:30.000Z | null | false | 461a9d1d2531d5ec7eda9dd2277714a5ee6fed54 | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:gpl-3.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:sentiment-classification"
] | https://huggingface.co/datasets/tweets_hate_speech_detection/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: null
pretty_name: Tweets Ha... |
null | null | @inproceedings{alabi-etal-2020-massive,
title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of Yoruba and {T}wi",
author = "Alabi, Jesujoba and
Amponsah-Kaakyire, Kwabena and
Adelani, David and
Espa{\\~n}a-Bonet, Cristina",
booktitle = "Proceedings of the 12th ... | Twi Text C3 is the largest Twi texts collected and used to train FastText embeddings in the
YorubaTwi Embedding paper: https://www.aclweb.org/anthology/2020.lrec-1.335/ | false | 324 | false | twi_text_c3 | 2022-11-03T16:15:20.000Z | null | false | abfb984f7cd500f89c9bc620cdbcd00e56e01496 | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:tw",
"license:cc-by-nc-4.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_i... | https://huggingface.co/datasets/twi_text_c3/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- tw
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id... |
null | null | @inproceedings{alabi-etal-2020-massive,
title = "Massive vs. Curated Embeddings for Low-Resourced Languages: the Case of {Y}or{\\`u}b{\\'a} and {T}wi",
author = "Alabi, Jesujoba and
Amponsah-Kaakyire, Kwabena and
Adelani, David and
Espa{\\~n}a-Bonet, Cristina",
booktitle = "Proceedings ... | A translation of the word pair similarity dataset wordsim-353 to Twi.
The dataset was presented in the paper
Alabi et al.: Massive vs. Curated Embeddings for Low-Resourced
Languages: the Case of Yorùbá and Twi (LREC 2020). | false | 321 | false | twi_wordsim353 | 2022-11-03T16:07:57.000Z | null | false | b21f0144299b74934915e2e22d52a969a1b075ed | [] | [
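Word-pair similarity datasets of the wordsim-353 family are usually evaluated by correlating human similarity judgments with a model's similarity scores using Spearman's rank correlation. A minimal pure-Python sketch of that metric (no tie handling, toy numbers; not the official evaluation code):

```python
def rankdata(values):
    """Rank values from 1..n (assumes no ties, so no tie correction)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

def spearman(xs, ys):
    """Spearman's rho via the rank-difference formula."""
    n = len(xs)
    rx, ry = rankdata(xs), rankdata(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Human similarity scores vs. a model's cosine similarities (invented numbers):
print(spearman([7.5, 3.2, 9.1], [0.61, 0.20, 0.83]))  # 1.0
```

With real data the two lists would hold the gold similarity score and the model score for each word pair, in the same order.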
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"language:en",
"language:tw",
"license:unknown",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-simil... | https://huggingface.co/datasets/twi_wordsim353/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- en
- tw
license:
- unknown
multilinguality:
- multilingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
- semantic-similarity-scoring
paperswithcode_id: null
... |
null | null | @article{tydiqa,
title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
year = {2020},
journal = {Transactions of... | TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language
expresses -- such that we expect models performing well on this set to generalize a... | false | 3,338 | false | tydiqa | 2022-11-03T16:46:41.000Z | tydi-qa | false | 6a707985b27f920840baf50a7889746c23bf4818 | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:ar",
"language:bn",
"language:en",
"language:fi",
"language:id",
"language:ja",
"language:ko",
"language:ru",
"language:sw",
"language:te",
"language:th",
"license:apache-2.0",
"multilinguality:multilingual"... | https://huggingface.co/datasets/tydiqa/resolve/main/README.md | ---
pretty_name: TyDi QA
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ar
- bn
- en
- fi
- id
- ja
- ko
- ru
- sw
- te
- th
license:
- apache-2.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
ta... |
null | null | @article{DBLP:journals/corr/LowePSP15,
author = {Ryan Lowe and
Nissan Pow and
Iulian Serban and
Joelle Pineau},
title = {The Ubuntu Dialogue Corpus: {A} Large Dataset for Research in Unstructured
Multi-Turn Dialogue Systems},
journal = {CoRR},
... | Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The data... | false | 503 | false | ubuntu_dialogs_corpus | 2022-11-03T16:16:37.000Z | ubuntu-dialogue-corpus | false | 05e0c9ff10c9d9377b0407389d788f7a1e34af00 | [] | [
"arxiv:1506.08909",
"annotations_creators:found",
"language:en",
"language_creators:found",
"license:unknown",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"task_categories:conversational",
"task_ids:dialogue-generation"
] | https://huggingface.co/datasets/ubuntu_dialogs_corpus/resolve/main/README.md | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: UDC (Ubuntu Dialogue Corpus)
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- conversational
task_ids:
- dialogue-generation
paperswithcode_id: ubuntu-dial... |
null | null | null | The Universal Declaration of Human Rights (UDHR) is a milestone document in the history of human rights. Drafted by
representatives with different legal and cultural backgrounds from all regions of the world, it set out, for the
first time, fundamental human rights to be universally protected. The Declaration was adopt... | false | 509 | false | udhr | 2022-11-03T16:16:11.000Z | null | false | 36a71b270bf3a7fa355fc656c8b96b7094350a3c | [] | [
"annotations_creators:no-annotation",
"language_creators:found",
"language:aa",
"language:ab",
"language:ace",
"language:acu",
"language:ada",
"language:ady",
"language:af",
"language:agr",
"language:aii",
"language:ajg",
"language:als",
"language:alt",
"language:am",
"language:amc",
... | https://huggingface.co/datasets/udhr/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- aa
- ab
- ace
- acu
- ada
- ady
- af
- agr
- aii
- ajg
- als
- alt
- am
- amc
- ame
- ami
- amr
- ar
- arl
- arn
- ast
- auc
- ay
- az
- ban
- bax
- bba
- bci
- be
- bem
- bfa
- bg
- bho
- bi
- bik
- bin
- blt
- bm
- bn
- bo
- boa
- br
- b... |
null | null | @unpublished{JaZeWordOrderIssues2011,
author = {Bushra Jawaid and Daniel Zeman},
title = {Word-Order Issues in {English}-to-{Urdu} Statistical Machine Translation},
year = {2011},
journal = {The Prague Bulletin of Mathematical Linguistics},
number = {95},
institution = {Univerzita Karlova},
a... | UMC005 English-Urdu is a parallel corpus of texts in English and Urdu language with sentence alignments. The corpus can be used for experiments with statistical machine translation.
The texts come from four different sources:
- Quran
- Bible
- Penn Treebank (Wall Street Journal)
- Emille corpus
The authors provide th... | false | 636 | false | um005 | 2022-11-03T16:31:00.000Z | umc005-english-urdu | false | 169a94976938ee3e28ae4ec1e131ed9ce9245009 | [] | [
"annotations_creators:no-annotation",
"language_creators:other",
"language:en",
"language:ur",
"license:unknown",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:translation"
] | https://huggingface.co/datasets/um005/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- other
language:
- en
- ur
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: umc005-english-urdu
pretty_name: UMC005 English-Urdu
dataset_... |
null | null | @inproceedings{unga_parallel_corpus,
title = "United Nations General Assembly Resolutions: a six-language parallel corpus",
abstract = "In this paper we describe a six-ways parallel public-domain corpus consisting of 2100 United Nations General Assembly Resolutions with translations in the six official languages of the United Nations, with ... | United nations general assembly resolutions: A six-language parallel corpus.
This is a collection of translated documents from the United Nations originally compiled into a translation memory by Alexandre Rafalovitch, Robert Dale (see http://uncorpora.org).
6 languages, 15 bitexts
total number of files: 6
total number ... | false | 2,525 | false | un_ga | 2022-11-03T16:32:30.000Z | null | false | 96329710690801f90526494e4e2b4254faed737e | [] | [
"annotations_creators:found",
"language_creators:found",
"language:ar",
"language:en",
"language:es",
"language:fr",
"language:ru",
"language:zh",
"license:unknown",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:translation",
"con... | https://huggingface.co/datasets/un_ga/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- en
- es
- fr
- ru
- zh
license:
- unknown
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: UnGa
configs:
- ar-to-en
- ar-... |
null | null | @inproceedings{eisele-chen-2010-multiun,
title = "{M}ulti{UN}: A Multilingual Corpus from United Nation Documents",
author = "Eisele, Andreas and
Chen, Yu",
booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}'10)",
month = may,
year = ... | This is a collection of translated documents from the United Nations. This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language | false | 3,487 | false | un_multi | 2022-11-03T16:46:41.000Z | multiun | false | 4ed67dc3b374c3253d1bf62196c19e2aed147109 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:ar",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:ru",
"language:zh",
"license:unknown",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:tra... | https://huggingface.co/datasets/un_multi/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- de
- en
- es
- fr
- ru
- zh
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: multiun
pretty_name: Multilingual Corpus fr... |
null | null | @inproceedings{ziemski-etal-2016-united,
title = "The {U}nited {N}ations Parallel Corpus v1.0",
author = "Ziemski, Micha{\\l} and
Junczys-Dowmunt, Marcin and
Pouliquen, Bruno",
booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
... | This parallel corpus consists of manually translated UN documents from the last 25 years (1990 to 2014) for the six official UN languages, Arabic, Chinese, English, French, Russian, and Spanish. | false | 2,534 | false | un_pc | 2022-11-03T16:32:32.000Z | united-nations-parallel-corpus | false | c1e829654a0189119404b15251203e93f66f941e | [] | [
"annotations_creators:found",
"language_creators:found",
"language:ar",
"language:en",
"language:es",
"language:fr",
"language:ru",
"language:zh",
"license:unknown",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"task_categories:translation",
"co... | https://huggingface.co/datasets/un_pc/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- en
- es
- fr
- ru
- zh
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: united-nations-parallel-corpus
pretty_name: Uni... |
null | null | null | Universal Dependencies is a project that seeks to develop cross-linguistically consistent treebank annotation for many languages, with the goal of facilitating multilingual parser development, cross-lingual learning, and parsing research from a language typology perspective. The annotation scheme is based on (universal... | false | 2,395 | false | universal_dependencies | 2022-11-03T16:46:46.000Z | universal-dependencies | false | 64d987ac1117dc33fd9300ac114cbb92f04b3e09 | [] | [
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language:af",
"language:aii",
"language:ajp",
"language:akk",
"language:am",
"language:apu",
"language:aqz",
"language:ar",
"language:be",
"language:bg",
"language:bho",
"language:bm",
"language:br",
"language:... | https://huggingface.co/datasets/universal_dependencies/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- af
- aii
- ajp
- akk
- am
- apu
- aqz
- ar
- be
- bg
- bho
- bm
- br
- bxr
- ca
- ckt
- cop
- cs
- cu
- cy
- da
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fo
- fr
- fro
- ga
- gd
- gl
- got
- grc
- gsw
- gun
- gv
- he
- hi
- hr
- ... |
null | null | @article{sylak2016composition,
title={The composition and use of the universal morphological feature schema (unimorph schema)},
author={Sylak-Glassman, John},
journal={Johns Hopkins University},
year={2016}
} | The Universal Morphology (UniMorph) project is a collaborative effort to improve how NLP handles complex morphology in the world’s languages.
The goal of UniMorph is to annotate morphological data in a universal schema that allows an inflected word from any language to be defined by its lexical meaning,
typically carri... | false | 17,129 | false | universal_morphologies | 2022-11-03T16:47:17.000Z | null | false | 684417b69a4027571bab75ae0ef5e9c08de179d3 | [] | [
"annotations_creators:expert-generated",
"language_creators:found",
"language:ady",
"language:ang",
"language:ar",
"language:arn",
"language:ast",
"language:az",
"language:ba",
"language:be",
"language:bg",
"language:bn",
"language:bo",
"language:br",
"language:ca",
"language:ckb",
"... | https://huggingface.co/datasets/universal_morphologies/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ady
- ang
- ar
- arn
- ast
- az
- ba
- be
- bg
- bn
- bo
- br
- ca
- ckb
- crh
- cs
- csb
- cu
- cy
- da
- de
- dsb
- el
- en
- es
- et
- eu
- fa
- fi
- fo
- fr
- frm
- fro
- frr
- fur
- fy
- ga
- gal
- gd
- gmh
- gml
- got
- grc
- gv
-... |
null | null | @article{MaazUrdufake2020,
author = {Amjad, Maaz and Sidorov, Grigori and Zhila, Alisa and G{\'o}mez-Adorno, Helena and Voronkov, Ilia and Gelbukh, Alexander},
title = {Bend the Truth: A Benchmark Dataset for Fake News Detection in Urdu and Its Evaluation},
journal={Journal of Intelligent & Fuzzy Systems},
volume={39}... | Urdu fake news datasets that contain news of 5 different news domains.
These domains are Sports, Health, Technology, Entertainment, and Business.
The real news was collected using manual approaches.
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:ur",
"license:unknown",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:fact-checking",
"task_ids:intent-classification"
] | https://huggingface.co/datasets/urdu_fake_news/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ur
license:
- unknown
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- fact-checking
- intent-classification
paperswithcode_id: null
pretty_... |
null | null | @inproceedings{khan2020usc,
title={Urdu Sentiment Corpus (v1.0): Linguistic Exploration and Visualization of Labeled Datasetfor Urdu Sentiment Analysis.},
author={Khan, Muhammad Yaseen and Nizami, Muhammad Suffian},
booktitle={2020 IEEE 2nd International Conference On Information Science & Communication Technolog... | “Urdu Sentiment Corpus” (USC) shares the data of Urdu tweets for sentiment analysis and polarity detection.
The dataset consists of tweets and, overall, comprises over 17,185 tokens
with 52% of the records positive and 48% negative.
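A class split like the one quoted above is easy to verify once labels are loaded; a sketch with invented stand-in labels (the "P"/"N" values and the list itself are illustrative, not the dataset's actual schema):

```python
from collections import Counter

def class_balance(labels):
    """Return each label's share of the total as a fraction."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# Invented stand-in for the corpus labels:
labels = ["P", "P", "N", "N"]  # P = positive, N = negative
print(class_balance(labels))  # {'P': 0.5, 'N': 0.5}
```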
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language:ur",
"license:unknown",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:sentiment-classification"
] | https://huggingface.co/datasets/urdu_sentiment_corpus/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- ur
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: urdu-sentiment-corpus
pre... |
null | null | @inproceedings{Veaux2017CSTRVC,
title = {CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit},
author = {Christophe Veaux and Junichi Yamagishi and Kirsten MacDonald},
year = 2017
} | The CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. | false | 387 | false | vctk | 2022-11-03T16:16:04.000Z | vctk | false | c90c871de916fbe962ea7b33e3b75642fab373f7 | [] | [
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:automatic-speech-recognition"
] | https://huggingface.co/datasets/vctk/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: VCTK
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
paperswithcode_id: vctk
train-eval-in... |
null | null | @inproceedings{luong-vu-2016-non,
title = "A non-expert {K}aldi recipe for {V}ietnamese Speech Recognition System",
author = "Luong, Hieu-Thi and
Vu, Hai-Quan",
booktitle = "Proceedings of the Third International Workshop on Worldwide Language Service Infrastructure and Second Workshop on Open... | \
VIVOS is a free Vietnamese speech corpus consisting of 15 hours of recording speech prepared for
Vietnamese Automatic Speech Recognition task.
The corpus was prepared by AILAB, a computer science lab of VNUHCM - University of Science, with Prof. Vu Hai Quan is the head of.
We publish this corpus in hope to attrac... | false | 364 | false | vivos | 2022-11-03T16:15:35.000Z | null | false | ee479c69d1b2aa2dfc5d04de03efd597d27f014c | [] | [
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language:vi",
"license:cc-by-nc-sa-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:automatic-speech-recognition"
] | https://huggingface.co/datasets/vivos/resolve/main/README.md | ---
pretty_name: VIVOS
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- vi
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
dataset_inf... |
null | null | @inproceedings{web_nlg,
author = {Claire Gardent and
Anastasia Shimorina and
Shashi Narayan and
Laura Perez{-}Beltrachini},
editor = {Regina Barzilay and
Min{-}Yen Kan},
title = {Creating Training Corpora for {NLG} Micro-Planners},
booktitle ... | The WebNLG challenge consists in mapping data to text. The training data consists
of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation
of these triples. For instance, given the 3 DBpedia triples shown in (a), the aim is to generate a text such as (b).
a. (John_E_... | false | 2,857 | false | web_nlg | 2022-11-03T16:32:34.000Z | webnlg | false | 5b9e2723c5c37a84bf771c4a1aa6f302f717d60e | [] | [
"annotations_creators:found",
"language_creators:crowdsourced",
"language:en",
"language:ru",
"license:cc-by-sa-3.0",
"license:cc-by-nc-sa-4.0",
"license:gfdl",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-db_pedia",
"source_datasets:original",
"... | https://huggingface.co/datasets/web_nlg/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- crowdsourced
language:
- en
- ru
license:
- cc-by-sa-3.0
- cc-by-nc-sa-4.0
- gfdl
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-db_pedia
- original
task_categories:
- tabular-to-text
task_ids:
- rdf-to-text
paperswit... |
null | null | @inproceedings{kowsari2017HDLTex,
title={HDLTex: Hierarchical Deep Learning for Text Classification},
author={Kowsari, Kamran and Brown, Donald E and Heidarysafa, Mojtaba and Jafari Meimandi, Kiana and and Gerber, Matthew S and Barnes, Laura E},
booktitle={Machine Learning and Applications (ICMLA), 2017 16th IEEE Inter... | The Web Of Science (WOS) dataset is a collection of data of published papers
available from the Web of Science. WOS has been released in three versions: WOS-46985, WOS-11967 and WOS-5736. WOS-46985 is the
full dataset. WOS-11967 and WOS-5736 are two subsets of WOS-46985. | false | 970 | false | web_of_science | 2022-11-03T16:31:52.000Z | web-of-science-dataset | false | 388204d5a9496c00da2ad20da0f84a5d5d1cb654 | [] | [
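Each WOS example pairs an abstract with a flat label plus its two hierarchy levels; as a data structure, a record can be sketched like this (field names follow the `dataset_info` features listed for this dataset; the sample values are invented):

```python
from dataclasses import dataclass

@dataclass
class WosExample:
    input_data: str      # abstract text
    label: int           # flat class id
    label_level_1: int   # top-level research domain
    label_level_2: int   # subdomain within that domain

# Invented sample record:
ex = WosExample(input_data="(abstract text)", label=3, label_level_1=0, label_level_2=3)
print(ex.label_level_1, ex.label_level_2)  # 0 3
```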
"language:en"
] | https://huggingface.co/datasets/web_of_science/resolve/main/README.md | ---
language:
- en
paperswithcode_id: web-of-science-dataset
pretty_name: Web of Science Dataset
dataset_info:
- config_name: WOS5736
features:
- name: input_data
dtype: string
- name: label
dtype: int32
- name: label_level_1
dtype: int32
- name: label_level_2
dtype: int32
splits:
- name: ... |
null | null | @inproceedings{berant-etal-2013-semantic,
title = "Semantic Parsing on {F}reebase from Question-Answer Pairs",
author = "Berant, Jonathan and
Chou, Andrew and
Frostig, Roy and
Liang, Percy",
booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Process... | This dataset consists of 6,642 question/answer pairs.
The questions are supposed to be answerable by Freebase, a large knowledge graph.
The questions are mostly centered around a single named entity.
The questions are popular ones asked on the web (at least in 2013). | false | 7,363 | false | web_questions | 2022-11-03T16:47:19.000Z | webquestions | false | 1882cf421e1e7beb2ff54318d7dcbeb16c82eabf | [] | [
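Since each question comes with a set of acceptable answers, a common way to score a model on open-domain QA data like this is exact match against any gold answer after light normalization. A minimal sketch (the normalization rules here are illustrative, not an official evaluation script):

```python
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation, collapse whitespace."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers) -> bool:
    """True if the prediction matches any acceptable gold answer."""
    pred = normalize(prediction)
    return any(pred == normalize(ans) for ans in gold_answers)

print(exact_match("Justin Bieber!", ["Justin Bieber", "Bieber"]))  # True
```

Aggregate accuracy is then just the mean of this boolean over the test split.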
"annotations_creators:crowdsourced",
"language:en",
"language_creators:found",
"license:unknown",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:question-answering",
"task_ids:open-domain-qa"
] | https://huggingface.co/datasets/web_questions/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: WebQuestions
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: webquestions
dataset_... |
null | null | null | Tags: PER (person name), LOC (location name), GPE (administrative region name), ORG (organization name)
Label Tag Meaning
PER PER.NAM personal name (e.g. 张三, "Zhang San")
PER.NOM generic person reference or category (e.g. 穷人, "the poor")
LOC LOC.NAM specific location name (e.g. 紫玉山庄)
LOC.NOM generic location term (e.g. 大峡谷 "canyon", 宾馆 "hotel")
GPE GPE.NAM administrative region name (e.g. 北京, "Beijing")
ORG ORG.NAM specific organization name (e.g. 通惠医院)
ORG.NOM generic or collective organization term (e.g. 文艺公司, "arts company")
"annotations_creators:expert-generated",
"language_creators:found",
"language:zh",
"license:unknown",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:token-classification",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/weibo_ner/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- zh
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: weibo-ner
pretty_name: Weibo NE... |
null | null | @inproceedings{bryant-etal-2019-bea,
title = "The {BEA}-2019 Shared Task on Grammatical Error Correction",
author = "Bryant, Christopher and
Felice, Mariano and
Andersen, {\O}istein E. and
Briscoe, Ted",
booktitle = "Proceedings of the Fourteenth Workshop on Innovative Use of NLP... | Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native
English students with their writing. Specifically, students from around the world submit letters,
stories, articles and essays in response to various prompts, and the W&I system provides instant
feedback. Since W&I went live ... | false | 491 | false | wi_locness | 2022-11-03T16:30:39.000Z | locness-corpus | false | 5fa73f5f59ab9b791d69ec171cf0319972b5c724 | [] | [
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language:en",
"license:other",
"multilinguality:monolingual",
"multilinguality:other-language-learner",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text2text-generation",
"configs:locness",
"... | https://huggingface.co/datasets/wi_locness/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- other
multilinguality:
- monolingual
- other-language-learner
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: locness-corpus
pretty_nam... |
null | null | @inproceedings{yang2016wider,
Author = {Yang, Shuo and Luo, Ping and Loy, Chen Change and Tang, Xiaoou},
Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
Title = {WIDER FACE: A Face Detection Benchmark},
  Year = {2016}} | The WIDER FACE dataset is a face detection benchmark whose images are
selected from the publicly available WIDER dataset. It comprises 32,203 images with
393,703 labeled faces exhibiting a high degree of variability in scale, pose and
occlusion, as depicted in the sample images. The WIDER FACE dataset is organized
based on 61 e... | false | 423 | false | wider_face | 2022-11-03T16:16:25.000Z | wider-face-1 | false | 1d6f5c398b3ef19d5429f85314c08b2106019385 | [] | [
"arxiv:1511.06523",
"annotations_creators:expert-generated",
"language_creators:found",
"language:en",
"license:cc-by-nc-nd-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-wider",
"task_categories:object-detection",
"task_ids:face-detection"
] | https://huggingface.co/datasets/wider_face/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-wider
task_categories:
- object-detection
task_ids:
- face-detection
paperswithcode_id: wider-face-1
pretty_nam... |
null | null | Cleaned-up text from 40+ Wikipedia language editions, for pages
corresponding to entities. The datasets have train/dev/test splits per language.
The dataset is cleaned up by page filtering to remove disambiguation pages,
redirect pages, deleted pages, and non-entity pages. Each example contains the
wikidata id of the entity, ... | false | 7,450 | false | wiki40b | 2022-11-03T16:47:00.000Z | wiki-40b | false | e16ba9daa2736eaac1819e0366cf79e00ec6953e | [] | [
"language:en"
] | https://huggingface.co/datasets/wiki40b/resolve/main/README.md | ---
language:
- en
paperswithcode_id: wiki-40b
pretty_name: Wiki-40B
dataset_info:
features:
- name: wikidata_id
dtype: string
- name: text
dtype: string
- name: version_id
dtype: string
config_name: en
splits:
- name: test
num_bytes: 522219464
num_examples: 162274
- name: train
... | |
null | null | @article{hayashi20tacl,
title = {WikiAsp: A Dataset for Multi-domain Aspect-based Summarization},
authors = {Hiroaki Hayashi and Prashant Budania and Peng Wang and Chris Ackerson and Raj Neervannan and Graham Neubig},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
year = ... | WikiAsp is a multi-domain, aspect-based summarization dataset in the encyclopedic
domain. In this task, models are asked to summarize cited reference documents of a
Wikipedia article into aspect-based summaries. Each of the 20 domains includes 10
domain-specific pre-defined aspects. | false | 3,383 | false | wiki_asp | 2022-11-03T16:32:42.000Z | wikiasp | false | 5a482711a88bcfc3deb1818bf57bb834efd54566 | [] | [
"arxiv:2011.07832",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-sa-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:summarization",
"tags:aspect-based-summarization"
] | https://huggingface.co/datasets/wiki_asp/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: wikiasp
pretty_name: WikiAsp
tags:
- aspect-based-su... |
null | null | @InProceedings{WikiAtomicEdits,
title = {{WikiAtomicEdits: A Multilingual Corpus of Wikipedia Edits for Modeling Language and Discourse}},
author = {Faruqui, Manaal and Pavlick, Ellie and Tenney, Ian and Das, Dipanjan},
booktitle = {Proc. of EMNLP},
year = {2018}
} | A dataset of atomic Wikipedia edits containing insertions and deletions of a contiguous chunk of text in a sentence. This dataset contains ~43 million edits across 8 languages.
An atomic edit is defined as an edit e applied to a natural language expression S as the insertion, deletion, or substitution of a sub-express... | false | 2,696 | false | wiki_atomic_edits | 2022-11-03T16:32:33.000Z | wikiatomicedits | false | a41c407c3f84a3a38ed6d38ffe9300f263153ad4 | [] | [
"annotations_creators:found",
"language_creators:found",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"language:ja",
"language:ru",
"language:zh",
"license:cc-by-sa-4.0",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"size_categories:10M<n<100M"... | https://huggingface.co/datasets/wiki_atomic_edits/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- de
- en
- es
- fr
- it
- ja
- ru
- zh
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10M<n<100M
- 1M<n<10M
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: wikiato... |
null | null | @inproceedings{acl/JiangMLZX20,
author = {Chao Jiang and
Mounica Maddela and
Wuwei Lan and
Yang Zhong and
Wei Xu},
editor = {Dan Jurafsky and
Joyce Chai and
Natalie Schluter and
Joel R. Tetreault},
title... | WikiAuto provides a set of aligned sentences from English Wikipedia and Simple English Wikipedia
as a resource to train sentence simplification systems. The authors first crowd-sourced a set of manual alignments
between sentences in a subset of the Simple English Wikipedia and their corresponding versions in English Wi... | false | 1,015 | false | wiki_auto | 2022-11-03T16:31:50.000Z | null | false | 07ecdd542118aa1333bb76cfe71d3d834ae463c6 | [] | [
"arxiv:2005.02324",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:found",
"language:en",
"license:cc-by-sa-3.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:extended|other-wikipedia",
"task_categories:text2text-ge... | https://huggingface.co/datasets/wiki_auto/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|other-wikipedia
task_categories:
- text2text-generation
task_ids:
- text-simplification
paperswithcode_id... |
null | null | @article{DBLP:journals/corr/LebretGA16,
author = {R{\'{e}}mi Lebret and
David Grangier and
Michael Auli},
title = {Generating Text from Structured Data with Application to the Biography
Domain},
journal = {CoRR},
volume = {abs/1603.07771},
  year = {... | This dataset gathers 728,321 biographies from Wikipedia. It aims at evaluating text generation
algorithms. For each article, we provide the first paragraph and the infobox (both tokenized).
For each article, we extracted the first paragraph (text), the infobox (structured data). Each
infobox is encoded as a list of (fi... | false | 19,610 | false | wiki_bio | 2022-11-03T16:47:23.000Z | wikibio | false | cade968fe01186bd7976043133cdba51d53595d8 | [] | [
"arxiv:1603.07771",
"annotations_creators:found",
"language_creators:found",
"language:en",
"license:cc-by-sa-3.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:table-to-text"
] | https://huggingface.co/datasets/wiki_bio/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
paperswithcode_id: wikibio
pretty_name: WikiBio
dataset_info:
features:
- name: in... |
null | null | @misc{karpukhin2020dense,
title={Dense Passage Retrieval for Open-Domain Question Answering},
author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
year={2020},
eprint={2004.04906},
archivePrefix={arXiv},
prima... | This is the Wikipedia split used to evaluate the Dense Passage Retrieval (DPR) model.
It contains 21M passages from Wikipedia along with their DPR embeddings.
The Wikipedia articles were split into multiple, disjoint text blocks of 100 words as passages. | false | 7,712 | false | wiki_dpr | 2022-11-03T16:46:57.000Z | null | false | 35acc55a94817a2d19807aa7e156477a58079989 | [] | [
"arxiv:2004.04906",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-sa-3.0",
"license:gfdl",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"task_categories:fill-mask",
"task_categories:text-generati... | https://huggingface.co/datasets/wiki_dpr/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gfdl
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
pret... |
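The `size_categories` tags above follow a decade-bucket convention (`10K<n<100K`, `10M<n<100M`, and so on). A small sketch of mapping an example count onto that convention (the bucket scheme is inferred from the tags in this dump, not an official Hub function):

```python
def size_category(n):
    """Map an example count onto the size_categories buckets used in the
    tags above (e.g. 85_000 -> '10K<n<100K'). Sketch of an inferred,
    decade-wide bucket convention; not an official Hub utility."""
    if n < 1000:
        return "n<1K"
    units = ["K", "M", "B"]

    def fmt(exp):
        # Render 10**exp in the K/M/B style: exp=4 -> '10K', exp=6 -> '1M'.
        return f"{10 ** (exp % 3)}{units[exp // 3 - 1]}"

    exp = len(str(n)) - 1  # decimal exponent of n
    return f"{fmt(exp)}<n<{fmt(exp + 1)}"

print(size_category(85_000))  # → 10K<n<100K
```

For instance, the 21M-passage wiki_dpr split above lands in `10M<n<100M`, matching its tag.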
null | null | @misc{welbl2018constructing,
title={Constructing Datasets for Multi-hop Reading Comprehension Across Documents},
author={Johannes Welbl and Pontus Stenetorp and Sebastian Riedel},
year={2018},
eprint={1710.06481},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | WikiHop is open-domain and based on Wikipedia articles; the goal is to recover Wikidata information by hopping through documents, answering text understanding queries by combining multiple facts that are spread across different documents. | false | 29,324 | false | wiki_hop | 2022-11-03T16:47:35.000Z | wikihop | false | 08050e62000fa615cea79e1da8828c827e0fdce0 | [] | [
"arxiv:1710.06481",
"annotations_creators:crowdsourced",
"language_creators:expert-generated",
"language:en",
"license:cc-by-sa-3.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:question-answering",
"task_ids:extractive-qa",
"tags:mul... | https://huggingface.co/datasets/wiki_hop/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- expert-generated
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: wikihop
pretty_name: WikiHop
t... |
null | null | @article{ladhak-wiki-2020,
title = {WikiLingua: A New Benchmark Dataset for Multilingual Abstractive Summarization},
authors = {Faisal Ladhak, Esin Durmus, Claire Cardie and Kathleen McKeown},
journal = {arXiv preprint arXiv:2010.03093},
year = {2020},
url = {https://arxiv.org/abs/2010.03093}
} | WikiLingua is a large-scale multilingual dataset for the evaluation of
crosslingual abstractive summarization systems. The dataset includes ~770k
article and summary pairs in 18 languages from WikiHow. The gold-standard
article-summary alignments across languages were done by aligning the images
that are used to describ... | false | 3,200 | false | wiki_lingua | 2022-11-03T16:32:41.000Z | wikilingua | false | dcc50d131d145d68eb01b575a16110a5c8d0b94b | [] | [
"arxiv:2010.03093",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:ar",
"language:cs",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:hi",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:nl",
"language:pt",
... | https://huggingface.co/datasets/wiki_lingua/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ar
- cs
- de
- en
- es
- fr
- hi
- id
- it
- ja
- ko
- nl
- pt
- ru
- th
- tr
- vi
- zh
license:
- cc-by-3.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
task_categories:
- summ... |
null | null | @misc{miller2016keyvalue,
title={Key-Value Memory Networks for Directly Reading Documents},
author={Alexander Miller and Adam Fisch and Jesse Dodge and Amir-Hossein Karimi and Antoine Bordes and Jason Weston},
year={2016},
eprint={1606.03126},
archivePrefix={arXiv},
primaryClass={cs.... | The WikiMovies dataset consists of roughly 100k (templated) questions over 75k entities based on questions with answers in the open movie database (OMDb). | false | 322 | false | wiki_movies | 2022-11-03T16:15:24.000Z | wikimovies | false | 8d5b5517732b6f30ce41ebe08462266080b604dc | [] | [
"arxiv:1606.03126",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-3.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:question-answering",
"task_ids:closed-domain-qa"
] | https://huggingface.co/datasets/wiki_movies/resolve/main/README.md | ---
pretty_name: WikiMovies
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
paperswithcode_id: wikimovies
... |
null | null | @InProceedings{YangYihMeek:EMNLP2015:WikiQA,
    author      = {Yang, Yi and Yih, Wen-tau and Meek, Christopher},
title = "{WikiQA: A Challenge Dataset for Open-Domain Question Answering}",
journal = {Association for Computational Linguistics},
year = 2015,
doi = {10.18653/v1/D15-12... | Wiki Question Answering corpus from Microsoft | false | 28,936 | false | wiki_qa | 2022-11-03T16:47:34.000Z | wikiqa | false | 3ea8b7eab368ef2482c5485b8c78b81b5e614ec9 | [] | [
"annotations_creators:crowdsourced",
"language:en",
"language_creators:found",
"license:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:question-answering",
"task_ids:open-domain-qa"
] | https://huggingface.co/datasets/wiki_qa/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: WikiQA
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: wikiqa
dataset_info:
feat... |
null | null | @InProceedings{YangYihMeek:EMNLP2015:WikiQA,
    author      = {Yang, Yi and Yih, Wen-tau and Meek, Christopher},
title = "{WikiQA: A Challenge Dataset for Open-Domain Question Answering}",
journal = {Association for Computational Linguistics},
year = 2015,
doi = {10.18653/v1/D15-12... | Arabic version of WikiQA, produced by automatic machine translation; the best translation was selected via crowdsourcing for incorporation into the corpus | false | 321 | false | wiki_qa_ar | 2022-11-03T16:07:58.000Z | wikiqaar | false | 8e3a2526b975d5cf7df9235f40860ab550b8b91a | [] | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language:ar",
"license:unknown",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:question-answering",
"task_ids:open-domain-qa"
] | https://huggingface.co/datasets/wiki_qa_ar/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: wikiqaar
pretty_name: English-Arabic Wi... |
null | null | @ONLINE {wikidump,
author = {Wikimedia Foundation},
title = {Wikimedia Downloads},
url = {https://dumps.wikimedia.org}
} | Wikipedia version split into plain text snippets for dense semantic indexing. | false | 800 | false | wiki_snippets | 2022-11-03T16:31:44.000Z | null | false | 30905e27a5b0753d0e1a7ef90878f6e2f2103762 | [] | [
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"language:en",
"license:unknown",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"source_datasets:extended|wiki40b",
"source_datasets:extended|wikipedia",
"task_categories:text-generation",
"task_categories:othe... | https://huggingface.co/datasets/wiki_snippets/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- multilingual
pretty_name: WikiSnippets
size_categories:
- 10M<n<100M
source_datasets:
- extended|wiki40b
- extended|wikipedia
task_categories:
- text-generation
- other
task_ids:
- language-m... |
null | null | @InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
... | 2 languages, total number of files: 132
total number of tokens: 1.80M
total number of sentence fragments: 78.36k | false | 321 | false | wiki_source | 2022-11-03T16:07:54.000Z | null | false | ed3c7ab60f400e36c1f4699ca194229557c710dd | [] | [
"annotations_creators:found",
"language_creators:found",
"language:en",
"language:sv",
"license:unknown",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:translation"
] | https://huggingface.co/datasets/wiki_source/resolve/main/README.md | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- sv
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: WikiSource
dataset_info:
features:
- name: id... |
null | null | @InProceedings{BothaEtAl2018,
title = {{Learning To Split and Rephrase From Wikipedia Edit History}},
author = {Botha, Jan A and Faruqui, Manaal and Alex, John and Baldridge, Jason and Das, Dipanjan},
booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},
pages = {... | One million English sentences, each split into two sentences that together preserve the original meaning, extracted from Wikipedia
Google's WikiSplit dataset was constructed automatically from the publicly available Wikipedia revision history. Although
the dataset contains some inherent noise, it can serve as valuable ... | false | 592 | false | wiki_split | 2022-11-03T16:31:07.000Z | wikisplit | false | 8d78341ef193634b30ad5cb2f029d12bb308bb1b | [] | [
"arxiv:1808.09468",
"annotations_creators:machine-generated",
"language:en",
"language_creators:found",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"task_categories:text2text-generation",
"tags:split-and-rephrase"
] | https://huggingface.co/datasets/wiki_split/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: WikiSplit
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: wikisplit
tags:
- split-and-... |
null | null |
@misc{Bert2BertWikiSummaryPersian,
author = {Mehrdad Farahani},
title = {Summarization using Bert2Bert model on WikiSummary dataset},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {https://github.com/m3hrdadfi/wiki-summary},
} | The dataset was extracted from Persian Wikipedia in the form of articles and highlights; it was cleaned into pairs of articles and highlights, and the articles' length (version 1.0.0 only) and highlights' length were reduced to a maximum of 512 and 128, respectively, to suit ParsBERT. | false | 334 | false | wiki_summary | 2022-11-03T16:15:33.000Z | null | false | a4ad028b495a4b45c52a1ce5858bf375d79ee1f5 | [] | [
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"language:fa",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text2text-generation",
"task_categories:translation",
"task_categories:question-ans... | https://huggingface.co/datasets/wiki_summary/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- fa
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
- translation
- question-answering
- summarization
task_ids:
- abstractive-qa
... |
null | null | @inproceedings{pan-etal-2017-cross,
title = "Cross-lingual Name Tagging and Linking for 282 Languages",
author = "Pan, Xiaoman and
Zhang, Boliang and
May, Jonathan and
Nothman, Joel and
Knight, Kevin and
Ji, Heng",
booktitle = "Proceedings of the 55th Annual Meeting of the... | WikiANN (sometimes called PAN-X) is a multilingual named entity recognition dataset consisting of Wikipedia articles annotated with LOC (location), PER (person), and ORG (organisation) tags in the IOB2 format. This version corresponds to the balanced train, dev, and test splits of Rahimi et al. (2019), which supports 1... | false | 48,058 | false | wikiann | 2022-11-03T16:47:36.000Z | wikiann-1 | false | 3f75cb74ff0a2ce480f94b3186cd8b53d76de71d | [] | [
"arxiv:1902.00193",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language:ace",
"language:af",
"language:als",
"language:am",
"language:an",
"language:ang",
"language:ar",
"language:arc",
"language:arz",
"language:as",
"language:ast",
"language:ay",
"lan... | https://huggingface.co/datasets/wikiann/resolve/main/README.md | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- ace
- af
- als
- am
- an
- ang
- ar
- arc
- arz
- as
- ast
- ay
- az
- ba
- bar
- be
- bg
- bh
- bn
- bo
- br
- bs
- ca
- cbk
- cdo
- ce
- ceb
- ckb
- co
- crh
- cs
- csb
- cv
- cy
- da
- de
- diq
- dv
- el
- eml
- en
- eo
- es
... |
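WikiANN sentences carry IOB2 tags (`B-PER`, `I-PER`, `O`, …), as the description above notes. A short illustrative decoder from a tag sequence to typed spans (a sketch of the IOB2 convention, not the dataset's own tooling):

```python
def iob2_to_spans(tags):
    """Decode IOB2 tags (e.g. B-PER, I-PER, O) into (type, start, end)
    spans with an exclusive end index. An I- tag whose type does not
    continue the open span is treated as starting a new span."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and etype != tag[2:]):
            if start is not None:
                spans.append((etype, start, i))  # close the open span
            start, etype = i, tag[2:]
        elif tag == "O":
            if start is not None:
                spans.append((etype, start, i))
            start, etype = None, None
    if start is not None:
        spans.append((etype, start, len(tags)))  # span running to the end
    return spans

print(iob2_to_spans(["B-PER", "I-PER", "O", "B-ORG"]))  # → [('PER', 0, 2), ('ORG', 3, 4)]
```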
null | null | @inproceedings{reese-etal-2010-wikicorpus,
title = "{W}ikicorpus: A Word-Sense Disambiguated Multilingual {W}ikipedia Corpus",
author = "Reese, Samuel and
Boleda, Gemma and
Cuadros, Montse and
Padr{\'o}, Llu{\'i}s and
Rigau, German",
booktitle = "Proceedings of the Seventh Intern... | The Wikicorpus is a trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia (based on a 2006 dump) and has been automatically enriched with linguistic information. In its present version, it contains over 750 million words. | false | 1,169 | false | wikicorpus | 2022-11-03T16:32:06.000Z | null | false | 709b6d7ccdfd86b2a3f6d1fe56d94adef427f79f | [] | [
"annotations_creators:machine-generated",
"annotations_creators:no-annotation",
"language_creators:found",
"language:ca",
"language:en",
"language:es",
"license:gfdl",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10M<n<100M",
"size_categories:1M<n<10M",
"source_... | https://huggingface.co/datasets/wikicorpus/resolve/main/README.md | ---
pretty_name: Wikicorpus
annotations_creators:
- machine-generated
- no-annotation
language_creators:
- found
language:
- ca
- en
- es
license:
- gfdl
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10M<n<100M
- 1M<n<10M
source_datasets:
- original
task_categories:
- fill-mask
- text-classification
- t... |
null | null | @misc{koupaee2018wikihow,
title={WikiHow: A Large Scale Text Summarization Dataset},
author={Mahnaz Koupaee and William Yang Wang},
year={2018},
eprint={1810.09305},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | WikiHow is a new large-scale dataset using the online WikiHow
(http://www.wikihow.com/) knowledge base.
There are two features:
  - text: WikiHow answer texts.
- headline: bold lines as summary.
There are two separate versions:
- all: consisting of the concatenation of all paragraphs as the articles and
... | false | 1,689 | false | wikihow | 2022-11-03T16:32:24.000Z | wikihow | false | b927546d24e82efa0271ad6cafcac03407e2cec5 | [] | [] | https://huggingface.co/datasets/wikihow/resolve/main/README.md | ---
paperswithcode_id: wikihow
pretty_name: WikiHow
dataset_info:
- config_name: all
features:
- name: text
dtype: string
- name: headline
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 18276023
num_examples: 5577
- name: train
num_bytes: 513238309
nu... |
null | null | @ONLINE {wikidump,
author = {Wikimedia Foundation},
title = {Wikimedia Downloads},
url = {https://dumps.wikimedia.org}
} | Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.). | false | 30,931 | false | wikipedia | 2022-11-03T16:47:25.000Z | null | false | 554b8f42900083defc42e8169bd0a3066417bf6e | [] | [
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"license:cc-by-sa-3.0",
"license:gfdl",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"source_datasets:original",
"multilinguality:multilingu... | https://huggingface.co/datasets/wikipedia/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
pretty_name: Wikipedia
paperswithcode_id: null
license:
- cc-by-sa-3.0
- gfdl
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
multilinguality:
- multilingual
si... |
null | null | @article{zhongSeq2SQL2017,
author = {Victor Zhong and
Caiming Xiong and
Richard Socher},
title = {Seq2SQL: Generating Structured Queries from Natural Language using
Reinforcement Learning},
journal = {CoRR},
volume = {abs/1709.00103},
year = {2017}... | A large crowd-sourced dataset for developing natural language interfaces for relational databases | false | 5,455 | false | wikisql | 2022-11-03T16:46:51.000Z | wikisql | false | 5d74604b67bb5e3b479990fb00eaf8e3166e5a4c | [] | [
"arxiv:1709.00103",
"annotations_creators:crowdsourced",
"language:en",
"language_creators:found",
"language_creators:machine-generated",
"license:unknown",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"task_categories:text2text-generation",
"tags:text... | https://huggingface.co/datasets/wikisql/resolve/main/README.md | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
- machine-generated
license:
- unknown
multilinguality:
- monolingual
pretty_name: WikiSQL
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: wikisql
tags:
- ... |
null | null | @misc{merity2016pointer,
title={Pointer Sentinel Mixture Models},
author={Stephen Merity and Caiming Xiong and James Bradbury and Richard Socher},
year={2016},
eprint={1609.07843},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike
License. | false | 196,430 | false | wikitext | 2022-11-03T16:47:48.000Z | wikitext-2 | false | 227f367c93579cf446b5ce6dcecb73661beb15c6 | [] | [
"arxiv:1609.07843",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"language:en",
"license:cc-by-sa-3.0",
"license:gfdl",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask"... | https://huggingface.co/datasets/wikitext/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
- gfdl
multilinguality:
- monolingual
paperswithcode_id: wikitext-2
pretty_name: WikiText
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- ... |
null | null | @article{cruz2019evaluating,
title={Evaluating Language Model Finetuning Techniques for Low-resource Languages},
author={Cruz, Jan Christian Blaise and Cheng, Charibeth},
journal={arXiv preprint arXiv:1907.00409},
year={2019}
} | Large-scale, unlabeled text dataset with 39 million tokens in the training set. Inspired by the original WikiText Long Term Dependency dataset (Merity et al., 2016). TL means "Tagalog." Originally published in Cruz & Cheng (2019). | false | 344 | false | wikitext_tl39 | 2022-11-03T16:15:46.000Z | wikitext-tl-39 | false | 94967b92a094be4822b40540dedd17cda6892dde | [] | [
"arxiv:1907.00409",
"annotations_creators:no-annotation",
"language_creators:found",
"language:fil",
"language:tl",
"license:gpl-3.0",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_id... | https://huggingface.co/datasets/wikitext_tl39/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- fil
- tl
license:
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: w... |
null | null | @dataset{thoma_martin_2018_841984,
author = {Thoma, Martin},
title = {{WiLI-2018 - Wikipedia Language Identification database}},
month = jan,
year = 2018,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.841984},
url = {https://doi.org/... | It is a benchmark dataset for language identification and contains 235000 paragraphs of 235 languages | false | 322 | false | wili_2018 | 2022-11-03T16:15:50.000Z | wili-2018 | false | abe6efd46d7ba968831a6eaae8184471a4524ba6 | [] | [
"arxiv:1801.07779",
"annotations_creators:no-annotation",
"language_creators:found",
"language:ace",
"language:af",
"language:als",
"language:am",
"language:an",
"language:ang",
"language:ar",
"language:arz",
"language:as",
"language:ast",
"language:av",
"language:ay",
"language:az",
... | https://huggingface.co/datasets/wili_2018/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ace
- af
- als
- am
- an
- ang
- ar
- arz
- as
- ast
- av
- ay
- az
- azb
- ba
- bar
- bcl
- be
- bg
- bho
- bjn
- bn
- bo
- bpy
- br
- bs
- bxr
- ca
- cbk
- cdo
- ce
- ceb
- chr
- ckb
- co
- crh
- cs
- csb
- cv
- cy
- da
- de
- diq
- dsb
... |
null | null | @article{DBLP:journals/corr/abs-1804-06876,
author = {Jieyu Zhao and
Tianlu Wang and
Mark Yatskar and
Vicente Ordonez and
Kai{-}Wei Chang},
title = {Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods},
journal = {CoRR},
vo... | WinoBias, a Winograd-schema dataset for coreference resolution focused on gender bias.
The corpus contains Winograd-schema style sentences with entities corresponding to people
referred to by their occupation (e.g. the nurse, the doctor, the carpenter). | false | 83,556 | false | wino_bias | 2022-11-03T16:47:48.000Z | winobias | false | 8f4025da48d0c9680bd04696d7db3f4b96a772b8 | [] | [
"arxiv:1804.06876",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:mit",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:token-classification",
"task_ids:coreference-resolution"
] | https://huggingface.co/datasets/wino_bias/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- coreference-resolution
paperswithcode_id: winobias
pretty_name: Wino... |
null | null | @inproceedings{levesque2012winograd,
title={The winograd schema challenge},
author={Levesque, Hector and Davis, Ernest and Morgenstern, Leora},
booktitle={Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning},
year={2012},
organization={Citeseer}
} | A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is
resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its
resolution. The schema takes its name from a well-known example by Terry Winograd:
> The city ... | false | 1,006 | false | winograd_wsc | 2022-11-03T16:31:51.000Z | wsc | false | 551cbb8f41c5ea9f821f3310cddd94c1c191c5bf | [] | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language:en",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"task_categories:multiple-choice",
"task_ids:multiple-choice-coreference-resolution"
] | https://huggingface.co/datasets/winograd_wsc/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-coreference-resolution
paperswithcode_id: wsc
pretty_na... |
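The description above defines a Winograd schema as a sentence pair differing in one or two words, where the ambiguity resolves in opposite ways across the pair. A minimal sketch of that structure, using Winograd's classic councilmen/demonstrators pair; the dict layout is hypothetical, not the dataset's actual format:

```python
# Illustrative sketch of a Winograd schema: two sentences that differ in a
# single word ("feared" vs. "advocated"), flipping which noun the pronoun
# "they" refers to. The layout is hypothetical, for illustration only.
schema = {
    "template": "The city councilmen refused the demonstrators a permit because they {verb} violence.",
    "options": ["the city councilmen", "the demonstrators"],
    "instances": [
        {"verb": "feared", "answer": "the city councilmen"},
        {"verb": "advocated", "answer": "the demonstrators"},
    ],
}

for inst in schema["instances"]:
    sentence = schema["template"].format(verb=inst["verb"])
    assert inst["answer"] in schema["options"]
    print(f"{sentence}  ->  'they' = {inst['answer']}")
```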
null | null | @InProceedings{ai2:winogrande,
title = {WinoGrande: An Adversarial Winograd Schema Challenge at Scale},
author = {Sakaguchi, Keisuke and Le Bras, Ronan and Bhagavatula, Chandra and Choi, Yejin},
year={2019}
} | WinoGrande is a new collection of 44k problems, inspired by Winograd Schema Challenge (Levesque, Davis, and Morgenstern
2011), but adjusted to improve scale and robustness against dataset-specific bias. Formulated as a
fill-in-a-blank task with binary options, the goal is to choose the right option for a given... | false | 107,419 | false | winogrande | 2022-11-03T16:47:46.000Z | winogrande | false | f3bc62cbae4a79ff4dd45bf81864560dbfed6b3d | [] | [
"language:en"
] | https://huggingface.co/datasets/winogrande/resolve/main/README.md | ---
language:
- en
paperswithcode_id: winogrande
pretty_name: WinoGrande
dataset_info:
- config_name: winogrande_xs
features:
- name: sentence
dtype: string
- name: option1
dtype: string
- name: option2
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 227649
... |
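The `dataset_info` block above declares four string features per WinoGrande example (`sentence`, `option1`, `option2`, `answer`). A sketch of what one record looks like under that schema — the fill-in-the-blank sentence here is the well-known trophy/suitcase example, used purely for illustration:

```python
# Hypothetical WinoGrande-style record matching the features declared in the
# card's dataset_info: four string fields, with "_" marking the blank and
# "answer" selecting option1 ("1") or option2 ("2").
record = {
    "sentence": "The trophy didn't fit in the suitcase because the _ was too big.",
    "option1": "trophy",
    "option2": "suitcase",
    "answer": "1",
}

features = ("sentence", "option1", "option2", "answer")
assert all(isinstance(record[f], str) for f in features)

chosen = record["option1"] if record["answer"] == "1" else record["option2"]
print(record["sentence"].replace("_", chosen))
# -> The trophy didn't fit in the suitcase because the trophy was too big.
```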
null | null | @article{wiqa,
author = {Niket Tandon and Bhavana Dalvi Mishra and Keisuke Sakaguchi and Antoine Bosselut and Peter Clark},
title = {WIQA: A dataset for "What if..." reasoning over procedural text},
journal = {arXiv:1909.04739v1},
year = {2019},
} | The WIQA dataset V1 has 39705 questions containing a perturbation and a possible effect in the context of a paragraph.
The dataset is split into 29808 train questions, 6894 dev questions and 3003 test questions. | false | 25,477 | false | wiqa | 2022-11-03T16:47:31.000Z | wiqa | false | f67cc524fd4455dc78725fa5d6c4bd21869b63b7 | [] | [
"language:en"
] | https://huggingface.co/datasets/wiqa/resolve/main/README.md | ---
language:
- en
paperswithcode_id: wiqa
pretty_name: What-If Question Answering
dataset_info:
features:
- name: question_stem
dtype: string
- name: question_para_step
sequence: string
- name: answer_label
dtype: string
- name: answer_label_as_choice
dtype: string
- name: choices
seque... |
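The WIQA description above gives per-split counts (29808 train, 6894 dev, 3003 test) alongside a 39705-question total. A trivial sketch cross-checking that the quoted splits sum to the stated total:

```python
# Sanity check: the WIQA split sizes quoted in the description should sum
# to the stated total of 39,705 questions.
splits = {"train": 29_808, "dev": 6_894, "test": 3_003}
total = sum(splits.values())
assert total == 39_705
print(total)  # -> 39705
```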