column             dtype            range / classes
_id                stringlengths    24 – 24
id                 stringlengths    5 – 121
author             stringlengths    2 – 42
disabled           bool             2 classes
gated              stringclasses    3 values
lastModified       stringlengths    24 – 24
likes              int64            0 – 5.42k
private            bool             1 class
sha                stringlengths    40 – 40
description        stringlengths    0 – 6.67k
downloads          int64            0 – 30.4M
paperswithcode_id  stringclasses    623 values
tags               sequencelengths  1 – 7.92k
createdAt          stringlengths    24 – 24
key                stringclasses    1 value
citation           stringlengths    0 – 10.7k
621ffdd236468d709f181d58
amirveyseh/acronym_identification
amirveyseh
false
False
2024-01-09T11:39:57.000Z
19
false
15ef643450d589d5883e289ffadeb03563e80a9e
Dataset Card for Acronym Identification Dataset Dataset Summary This dataset contains the training, validation, and test data for the Shared Task 1: Acronym Identification of the AAAI-21 Workshop on Scientific Document Understanding. Supported Tasks and Leaderboards The dataset supports an acronym-identification task, where the aim is to predict which tokens in a pre-tokenized sentence correspond to acronyms. The dataset was released for a Shared… See the full description on the dataset page: https://huggingface.co/datasets/amirveyseh/acronym_identification.
206
acronym-identification
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:mit", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2010.14678", "region:us", "acronym-identification" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d59
ade-benchmark-corpus/ade_corpus_v2
ade-benchmark-corpus
false
False
2024-01-09T11:42:58.000Z
25
false
4ba01c71687dd7c996597042449448ea312126cf
Dataset Card for Adverse Drug Reaction Data v2 Dataset Summary ADE-Corpus-V2 Dataset: Adverse Drug Reaction Data. This is a dataset for Classification if a sentence is ADE-related (True) or not (False) and Relation Extraction between Adverse Drug Event and Drug. DRUG-AE.rel provides relations between drugs and adverse effects. DRUG-DOSE.rel provides relations between drugs and dosages. ADE-NEG.txt provides all sentences in the ADE corpus that DO NOT contain… See the full description on the dataset page: https://huggingface.co/datasets/ade-benchmark-corpus/ade_corpus_v2.
244
null
[ "task_categories:text-classification", "task_categories:token-classification", "task_ids:coreference-resolution", "task_ids:fact-checking", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:unknown", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d5a
UCLNLP/adversarial_qa
UCLNLP
false
False
2023-12-21T14:20:00.000Z
32
false
c2d5f738db1ad21a4126a144dfbb00cb51e0a4a9
Dataset Card for adversarialQA Dataset Summary We have created three new Reading Comprehension datasets constructed using an adversarial model-in-the-loop. We use three different models; BiDAF (Seo et al., 2016), BERTLarge (Devlin et al., 2018), and RoBERTaLarge (Liu et al., 2019) in the annotation loop and construct three datasets; D(BiDAF), D(BERT), and D(RoBERTa), each with 10,000 training examples, 1,000 validation, and 1,000 test examples. The adversarial… See the full description on the dataset page: https://huggingface.co/datasets/UCLNLP/adversarial_qa.
135
adversarialqa
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2002.00293", "arxiv:1606.05250", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d5b
Yale-LILY/aeslc
Yale-LILY
false
False
2024-01-09T11:49:13.000Z
12
false
2305f2e63b68056f9b9037a3805c8c196e0d5581
Dataset Card for "aeslc" Dataset Summary A collection of email messages of employees in the Enron Corporation. There are two features: email_body: email body text. subject_line: email subject text. Supported Tasks and Leaderboards More Information Needed Languages Monolingual English (mainly en-US) with some exceptions. Dataset Structure Data Instances default Size of downloaded dataset… See the full description on the dataset page: https://huggingface.co/datasets/Yale-LILY/aeslc.
132
aeslc
[ "task_categories:summarization", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:unknown", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:1906.03497", "region:us", "aspect-based-summarization", "conversations-summarization", "multi-document-summarization", "email-headline-generation" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d5c
nwu-ctext/afrikaans_ner_corpus
nwu-ctext
false
False
2024-01-09T11:51:47.000Z
6
false
445834a997dce8b40e1d108638064381de80c497
Dataset Card for Afrikaans Ner Corpus Dataset Summary The Afrikaans Ner Corpus is an Afrikaans dataset developed by the Centre for Text Technology (CTexT), North-West University, South Africa. The data is based on documents from the South African government domain and crawled from gov.za websites. It was created to support the NER task for the Afrikaans language. The dataset uses CoNLL shared task annotation standards. Supported Tasks and Leaderboards [More… See the full description on the dataset page: https://huggingface.co/datasets/nwu-ctext/afrikaans_ner_corpus.
103
null
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "source_datasets:original", "language:af", "license:other", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d5d
fancyzhx/ag_news
fancyzhx
false
False
2024-03-07T12:02:37.000Z
124
false
eb185aade064a813bc0b7f42de02595523103ca4
Dataset Card for "ag_news" Dataset Summary AG is a collection of more than 1 million news articles. News articles have been gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. ComeToMyHead is an academic news search engine which has been running since July 2004. The dataset is provided by the academic community for research purposes in data mining (clustering, classification, etc.), information retrieval (ranking, search, etc.)… See the full description on the dataset page: https://huggingface.co/datasets/fancyzhx/ag_news.
6,975
ag-news
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:unknown", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d5e
allenai/ai2_arc
allenai
false
False
2023-12-21T15:09:48.000Z
116
false
210d026faf9955653af8916fad021475a3f00453
Dataset Card for "ai2_arc" Dataset Summary A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also including a corpus of over 14 million science sentences… See the full description on the dataset page: https://huggingface.co/datasets/allenai/ai2_arc.
836,988
null
[ "task_categories:question-answering", "task_ids:open-domain-qa", "task_ids:multiple-choice-qa", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:1803.05457", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d5f
google/air_dialogue
google
false
False
2024-03-07T15:22:15.000Z
15
false
dbdbe7bcef8d344bc3c68a05600f3d95917d6898
Dataset Card for air_dialogue Dataset Summary AirDialogue is a large dataset that contains 402,038 goal-oriented conversations. To collect this dataset, we create a context generator which provides travel and flight restrictions. Then the human annotators are asked to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions. News in v1.3: We have included the test split of the AirDialogue dataset. We… See the full description on the dataset page: https://huggingface.co/datasets/google/air_dialogue.
77
null
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:conversational", "task_ids:dialogue-generation", "task_ids:dialogue-modeling", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d60
komari6/ajgt_twitter_ar
komari6
false
False
2024-01-09T11:58:01.000Z
4
false
af3f2fa5462ac461b696cb300d66e07ad366057f
Dataset Card for Arabic Jordanian General Tweets Dataset Summary The Arabic Jordanian General Tweets (AJGT) Corpus consists of 1,800 tweets annotated as positive or negative, written in Modern Standard Arabic (MSA) or the Jordanian dialect. Supported Tasks and Leaderboards The dataset was published in this paper. Languages The dataset is based on Arabic. Dataset Structure Data Instances A binary dataset with negative… See the full description on the dataset page: https://huggingface.co/datasets/komari6/ajgt_twitter_ar.
135
null
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:ar", "license:unknown", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d61
legacy-datasets/allegro_reviews
legacy-datasets
false
False
2024-01-09T11:59:39.000Z
4
false
71593d1379934286885c53d147bc863ffe830745
Dataset Card for [Dataset Name] Dataset Summary Allegro Reviews is a sentiment analysis dataset consisting of 11,588 product reviews written in Polish and extracted from Allegro.pl, a popular e-commerce marketplace. Each review contains at least 50 words and has a rating on a scale from one (negative review) to five (positive review). We recommend using the provided train/dev/test split. The ratings for the test set reviews are kept hidden. You can evaluate your… See the full description on the dataset page: https://huggingface.co/datasets/legacy-datasets/allegro_reviews.
124
allegro-reviews
[ "task_categories:text-classification", "task_ids:sentiment-scoring", "task_ids:text-scoring", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:pl", "license:cc-by-sa-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d62
tblard/allocine
tblard
false
False
2024-01-09T12:02:24.000Z
10
false
a4654f4896408912913a62ace89614879a549287
Dataset Card for Allociné Dataset Summary The Allociné dataset is a French-language dataset for sentiment analysis. The texts are movie reviews written between 2006 and 2020 by members of the Allociné.fr community for various films. It contains 100k positive and 100k negative reviews divided into train (160k), validation (20k), and test (20k). Supported Tasks and Leaderboards text-classification, sentiment-classification: The dataset can be used… See the full description on the dataset page: https://huggingface.co/datasets/tblard/allocine.
342
allocine
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:fr", "license:mit", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d63
mutiyama/alt
mutiyama
false
False
2024-01-09T12:07:24.000Z
16
false
afbd92e198bbcf17f660e03076fd2938f5a4bbb2
Dataset Card for Asian Language Treebank (ALT) Dataset Summary The ALT project aims to advance state-of-the-art Asian natural language processing (NLP) techniques through open collaboration on developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began… See the full description on the dataset page: https://huggingface.co/datasets/mutiyama/alt.
156
alt
[ "task_categories:translation", "task_categories:token-classification", "task_ids:parsing", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "multilinguality:translation", "source_datasets:original", "language:bn", "language:en", "language:fil", "language:hi", "language:id", "language:ja", "language:km", "language:lo", "language:ms", "language:my", "language:th", "language:vi", "language:zh", "license:cc-by-4.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d64
fancyzhx/amazon_polarity
fancyzhx
false
False
2024-01-09T12:23:33.000Z
38
false
9d9c45c18f8c3cf1b23a3c27917b60cbf28f3289
Dataset Card for Amazon Review Polarity Dataset Summary The Amazon reviews dataset consists of reviews from Amazon. The data span a period of 18 years, including ~35 million reviews up to March 2013. Reviews include product and user information, ratings, and a plaintext review. Supported Tasks and Leaderboards text-classification, sentiment-classification: The dataset is mainly used for text classification: given the content and the title, predict… See the full description on the dataset page: https://huggingface.co/datasets/fancyzhx/amazon_polarity.
458
null
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:apache-2.0", "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:1509.01626", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d65
defunct-datasets/amazon_reviews_multi
defunct-datasets
false
False
2023-11-02T14:52:21.000Z
95
false
b6115b04af1d02b3c30849bdd4c55899bff0ae63
We provide an Amazon product reviews dataset for multilingual text classification. The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish, collected between November 1, 2015 and November 1, 2019. Each record in the dataset contains the review text, the review title, the star rating, an anonymized reviewer ID, an anonymized product ID and the coarse-grained product category (e.g. ‘books’, ‘appliances’, etc.) The corpus is balanced across stars, so each star rating constitutes 20% of the reviews in each language. For each language, there are 200,000, 5,000 and 5,000 reviews in the training, development and test sets respectively. The maximum number of reviews per reviewer is 20 and the maximum number of reviews per product is 20. All reviews are truncated after 2,000 characters, and all reviews are at least 20 characters long. Note that the language of a review does not necessarily match the language of its marketplace (e.g. reviews from amazon.de are primarily written in German, but could also be written in English, etc.). For this reason, we applied a language detection algorithm based on the work in Bojanowski et al. (2017) to determine the language of the review text and we removed reviews that were not written in the expected language.
90
null
[ "task_categories:summarization", "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-classification", "task_ids:text-scoring", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:sentiment-classification", "task_ids:sentiment-scoring", "task_ids:topic-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "multilinguality:multilingual", "source_datasets:original", "language:de", "language:en", "language:es", "language:fr", "language:ja", "language:zh", "license:other", "size_categories:100K<n<1M", "arxiv:2010.02573", "region:us" ]
2022-03-02T23:29:22.000Z
@inproceedings{marc_reviews, title={The Multilingual Amazon Reviews Corpus}, author={Keung, Phillip and Lu, Yichao and Szarvas, György and Smith, Noah A.}, booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing}, year={2020} }
621ffdd236468d709f181d66
defunct-datasets/amazon_us_reviews
defunct-datasets
false
False
2023-11-02T14:57:03.000Z
68
false
e1bfd57e2da5dc7dc4c748eb4a4a112c71e85162
Amazon Customer Reviews (a.k.a. Product Reviews) is one of Amazon's iconic products. In a period of over two decades since the first review in 1995, millions of Amazon customers have contributed over a hundred million reviews to express opinions and describe their experiences regarding products on the Amazon.com website. This makes Amazon Customer Reviews a rich source of information for academic researchers in the fields of Natural Language Processing (NLP), Information Retrieval (IR), and Machine Learning (ML), amongst others. Accordingly, we are releasing this data to further research in multiple disciplines related to understanding customer product experiences. Specifically, this dataset was constructed to represent a sample of customer evaluations and opinions, variation in the perception of a product across geographical regions, and promotional intent or bias in reviews. More than 130 million customer reviews are available to researchers as part of this release. The data is available in TSV files in the amazon-reviews-pds S3 bucket in AWS US East Region. Each line in the data files corresponds to an individual review (tab delimited, with no quote and escape characters). Each Dataset contains the following columns: - marketplace: 2 letter country code of the marketplace where the review was written. - customer_id: Random identifier that can be used to aggregate reviews written by a single author. - review_id: The unique ID of the review. - product_id: The unique Product ID the review pertains to. In the multilingual dataset the reviews for the same product in different countries can be grouped by the same product_id. - product_parent: Random identifier that can be used to aggregate reviews for the same product. - product_title: Title of the product. - product_category: Broad product category that can be used to group reviews (also used to group the dataset into coherent parts). - star_rating: The 1-5 star rating of the review.
- helpful_votes: Number of helpful votes. - total_votes: Number of total votes the review received. - vine: Review was written as part of the Vine program. - verified_purchase: The review is on a verified purchase. - review_headline: The title of the review. - review_body: The review text. - review_date: The date the review was written.
118
null
[ "task_categories:summarization", "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-classification", "task_ids:text-scoring", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:sentiment-classification", "task_ids:sentiment-scoring", "task_ids:topic-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:other", "size_categories:100M<n<1B", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d67
sewon/ambig_qa
sewon
false
False
2024-01-09T12:27:07.000Z
7
false
e969d0132f4dd28c2939d55be34f1788c00ccfe7
Dataset Card for AmbigQA: Answering Ambiguous Open-domain Questions Dataset Summary AmbigNQ is a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous. The types of ambiguity are diverse and sometimes subtle, many of which are only apparent after examining evidence provided by a very large text corpus. AMBIGNQ, a dataset with 14,042 annotations on NQ-OPEN questions… See the full description on the dataset page: https://huggingface.co/datasets/sewon/ambig_qa.
699
ambigqa
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:extended|natural_questions", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2004.10645", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d68
nala-cub/americas_nli
nala-cub
false
False
2024-01-23T09:18:27.000Z
3
false
1f3f4fa57acb59b2f352031de45ba08227d972c0
Dataset Card for AmericasNLI Dataset Summary AmericasNLI is an extension of XNLI (Conneau et al., 2018), a natural language inference (NLI) dataset covering 15 high-resource languages, to 10 low-resource indigenous languages spoken in the Americas: Ashaninka, Aymara, Bribri, Guarani, Nahuatl, Otomi, Quechua, Raramuri, Shipibo-Konibo, and Wixarika. As with MNLI, the goal is to predict textual entailment (does sentence A imply/contradict/neither sentence B) and is a… See the full description on the dataset page: https://huggingface.co/datasets/nala-cub/americas_nli.
1,429
null
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:multilingual", "multilinguality:translation", "source_datasets:extended|xnli", "language:ay", "language:bzd", "language:cni", "language:gn", "language:hch", "language:nah", "language:oto", "language:qu", "language:shp", "language:tar", "license:cc-by-sa-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2104.08726", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d69
legacy-datasets/ami
legacy-datasets
false
False
2024-01-18T11:01:45.000Z
14
false
81c6507a5cead40db13e77610fdcdf5c0f6261e4
The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals synchronized to a common timeline. These include close-talking and far-field microphones, individual and room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings, the participants also have unsynchronized pens available to them that record what is written. The meetings were recorded in English using three different rooms with different acoustic properties, and include mostly non-native speakers.
128
null
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-4.0", "size_categories:100K<n<1M", "region:us" ]
2022-03-02T23:29:22.000Z
@inproceedings{10.1007/11677482_3, author = {Carletta, Jean and Ashby, Simone and Bourban, Sebastien and Flynn, Mike and Guillemot, Mael and Hain, Thomas and Kadlec, Jaroslav and Karaiskos, Vasilis and Kraaij, Wessel and Kronenthal, Melissa and Lathoud, Guillaume and Lincoln, Mike and Lisowska, Agnes and McCowan, Iain and Post, Wilfried and Reidsma, Dennis and Wellner, Pierre}, title = {The AMI Meeting Corpus: A Pre-Announcement}, year = {2005}, isbn = {3540325492}, publisher = {Springer-Verlag}, address = {Berlin, Heidelberg}, url = {https://doi.org/10.1007/11677482_3}, doi = {10.1007/11677482_3}, abstract = {The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. It is being created in the context of a project that is developing meeting browsing technology and will eventually be released publicly. Some of the meetings it contains are naturally occurring, and some are elicited, particularly using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The corpus is being recorded using a wide range of devices including close-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, and individual pens, all of which produce output signals that are synchronized with each other. It is also being hand-annotated for many different phenomena, including orthographic transcription, discourse properties such as named entities and dialogue acts, summaries, emotions, and some head and hand gestures. We describe the data set, including the rationale behind using elicited material, and explain how the material is being recorded, transcribed and annotated.}, booktitle = {Proceedings of the Second International Conference on Machine Learning for Multimodal Interaction}, pages = {28–39}, numpages = {12}, location = {Edinburgh, UK}, series = {MLMI'05} }
621ffdd236468d709f181d6a
gavinxing/amttl
gavinxing
false
False
2024-01-09T12:28:18.000Z
2
false
271a5aa99e75e936e334b3c52ec178f08bced629
Dataset Card for AMTTL Dataset Summary [More Information Needed] Supported Tasks and Leaderboards [More Information Needed] Languages [More Information Needed] Dataset Structure Data Instances [More Information Needed] Data Fields [More Information Needed] Data Splits [More Information Needed] Dataset Creation Curation Rationale [More Information… See the full description on the dataset page: https://huggingface.co/datasets/gavinxing/amttl.
53
null
[ "task_categories:token-classification", "task_ids:parsing", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:zh", "license:mit", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d6b
facebook/anli
facebook
false
False
2023-12-21T15:34:02.000Z
33
false
8e4813d81f46d313dac7892e1c28076917cfcdf9
Dataset Card for "anli" Dataset Summary The Adversarial Natural Language Inference (ANLI) is a new large-scale NLI benchmark dataset. The dataset is collected via an iterative, adversarial human-and-model-in-the-loop procedure. ANLI is much more difficult than its predecessors including SNLI and MNLI. It contains three rounds. Each round has train/dev/test splits. Supported Tasks and Leaderboards More Information Needed Languages… See the full description on the dataset page: https://huggingface.co/datasets/facebook/anli.
6,777
anli
[ "task_categories:text-classification", "task_ids:natural-language-inference", "task_ids:multi-input-text-classification", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "source_datasets:extended|hotpot_qa", "language:en", "license:cc-by-nc-4.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:1910.14599", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d6c
sealuzh/app_reviews
sealuzh
false
False
2024-01-09T12:30:17.000Z
24
false
9eaa95f66364367e8752b0f34c00f67aafa95d15
Dataset Card for [Dataset Name] Dataset Summary It is a large dataset of Android applications belonging to 23 different app categories, which provides an overview of the types of feedback users report on the apps and documents the evolution of the related code metrics. The dataset contains about 395 applications of the F-Droid repository, including around 600 versions, 280,000 user reviews (extracted with specific text mining approaches) Supported… See the full description on the dataset page: https://huggingface.co/datasets/sealuzh/app_reviews.
177
null
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:sentiment-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:unknown", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d6d
deepmind/aqua_rat
deepmind
false
False
2024-01-09T12:33:06.000Z
40
false
33301c6a050c96af81f63cad5562cb5363e88971
Dataset Card for AQUA-RAT Dataset Summary A large-scale dataset consisting of approximately 100,000 algebraic word problems. The solution to each question is explained step-by-step using natural language. This data is used to train a program generation model that learns to generate the explanation, while generating the program that solves the question. Supported Tasks and Leaderboards Languages en Dataset Structure… See the full description on the dataset page: https://huggingface.co/datasets/deepmind/aqua_rat.
936
aqua-rat
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:apache-2.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:1705.04146", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d6e
google-research-datasets/aquamuse
google-research-datasets
false
False
2024-01-09T12:36:37.000Z
12
false
84df3ebd8bfe31e2875d242300161ea64ac2b06b
Dataset Card for AQuaMuSe Dataset Summary AQuaMuSe is a novel scalable approach to automatically mine dual query based multi-document summarization datasets for extractive and abstractive summaries, using a question answering dataset (Google Natural Questions) and large document corpora (Common Crawl). This dataset contains versions of automatically generated datasets for abstractive and extractive query-based multi-document summarization as described in AQuaMuSe… See the full description on the dataset page: https://huggingface.co/datasets/google-research-datasets/aquamuse.
53
aquamuse
[ "task_categories:other", "task_categories:question-answering", "task_categories:text2text-generation", "task_ids:abstractive-qa", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "source_datasets:extended|natural_questions", "source_datasets:extended|other-Common-Crawl", "source_datasets:original", "language:en", "license:unknown", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2010.12694", "region:us", "query-based-multi-document-summarization" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d6f
bigIR/ar_cov19
bigIR
false
False
2023-09-19T06:52:17.000Z
1
false
447b2a5a20c9e8ffaee0f14b31697be7b0dec403
ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from 27th of January till 30th of April 2020. ArCOV-19 is designed to enable research under several domains including natural language processing, information retrieval, and social computing, among others
86
arcov-19
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:ar", "size_categories:1M<n<10M", "arxiv:2004.05861", "region:us", "data-mining" ]
2022-03-02T23:29:22.000Z
@article{haouari2020arcov19, title={ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks}, author={Fatima Haouari and Maram Hasanain and Reem Suwaileh and Tamer Elsayed}, journal={arXiv preprint arXiv:2004.05861}, year={2020} }
621ffdd236468d709f181d70
hadyelsahar/ar_res_reviews
hadyelsahar
false
False
2024-01-09T12:38:13.000Z
5
false
d51bf2435d030e0041344f576c5e8d7154828977
Dataset Card for ArRestReviews Dataset Summary Dataset of 8364 restaurant reviews from qaym.com in Arabic for sentiment analysis Supported Tasks and Leaderboards [More Information Needed] Languages The dataset is based on Arabic. Dataset Structure Data Instances A typical data point comprises the following: "polarity": which is a string value of either 0 or 1 indicating the sentiment around the review… See the full description on the dataset page: https://huggingface.co/datasets/hadyelsahar/ar_res_reviews.
86
null
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:ar", "license:unknown", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d71
iabufarha/ar_sarcasm
iabufarha
false
False
2024-01-09T12:42:05.000Z
12
false
557bf94ac6177cc442f42d0b09b6e4b76e8f47c9
Dataset Card for ArSarcasm Dataset Summary ArSarcasm is a new Arabic sarcasm detection dataset. The dataset was created using previously available Arabic sentiment analysis datasets (SemEval 2017 and ASTD) and adds sarcasm and dialect labels to them. The dataset contains 10,547 tweets, 1,682 (16%) of which are sarcastic. For more details, please check the paper From Arabic Sentiment Analysis to Sarcasm Detection: The ArSarcasm Dataset Supported Tasks… See the full description on the dataset page: https://huggingface.co/datasets/iabufarha/ar_sarcasm.
89
null
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "source_datasets:extended|other-semeval_2017", "source_datasets:extended|other-astd", "language:ar", "license:mit", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "sarcasm-detection" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d72
abuelkhair-corpus/arabic_billion_words
abuelkhair-corpus
false
False
2024-01-18T11:01:47.000Z
22
false
c948146dc6e63d56b3469be209ea7e35a4ed5579
Abu El-Khair Corpus is an Arabic text corpus that includes more than five million newspaper articles. It contains over a billion and a half words in total, of which about three million are unique. The corpus is encoded with two types of encoding, namely UTF-8 and Windows CP-1256. It is also marked up with two markup languages, namely SGML and XML.
196
null
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:ar", "license:unknown", "size_categories:100K<n<1M", "arxiv:1611.04033", "region:us" ]
2022-03-02T23:29:22.000Z
@article{el20161, title={1.5 billion words arabic corpus}, author={El-Khair, Ibrahim Abu}, journal={arXiv preprint arXiv:1611.04033}, year={2016} }
621ffdd236468d709f181d73
QCRI/arabic_pos_dialect
QCRI
false
False
2024-01-09T12:43:34.000Z
7
false
897e2cecae33a242f5003922d3f1564f0c55c3dd
Dataset Card for Arabic POS Dialect Dataset Summary This dataset was created to support part of speech (POS) tagging in dialects of Arabic. It contains sets of 350 manually segmented and POS tagged tweets for each of four dialects: Egyptian, Levantine, Gulf, and Maghrebi. Supported Tasks and Leaderboards The dataset can be used to train a model for Arabic token segmentation and part of speech tagging in Arabic dialects. Success on this task is… See the full description on the dataset page: https://huggingface.co/datasets/QCRI/arabic_pos_dialect.
54
null
[ "task_categories:token-classification", "task_ids:part-of-speech", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:multilingual", "source_datasets:extended", "language:ar", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:1708.05891", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d74
halabi2016/arabic_speech_corpus
halabi2016
false
False
2024-08-14T14:21:32.000Z
24
false
a66b1d6ba1c5cc79570bffcd4d83b9ce566db2b4
This speech corpus has been developed as part of PhD work carried out by Nawar Halabi at the University of Southampton. The corpus was recorded in south Levantine Arabic (Damascian accent) using a professional studio. Synthesized speech produced using this corpus as output has a high-quality, natural voice. Note that in order to limit the required storage for preparing this dataset, the audio is stored in the .flac format and is not converted to a float32 array. To convert an audio file to a float32 array, please make use of the `.map()` function as follows:

```python
import soundfile as sf

def map_to_array(batch):
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
```
131
arabic-speech-corpus
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:ar", "license:cc-by-4.0", "size_categories:1K<n<10K", "region:us" ]
2022-03-02T23:29:22.000Z
@phdthesis{halabi2016modern, title={Modern standard Arabic phonetics for speech synthesis}, author={Halabi, Nawar}, year={2016}, school={University of Southampton} }
621ffdd236468d709f181d75
hsseinmz/arcd
hsseinmz
false
False
2024-01-09T12:44:24.000Z
5
false
cc6906b6eda547e4ffc63b8d88ccca7e0515187a
Dataset Card for "arcd" Dataset Summary The Arabic Reading Comprehension Dataset (ARCD) is composed of 1,395 questions posed by crowdworkers on Wikipedia articles. Supported Tasks and Leaderboards More Information Needed Languages More Information Needed Dataset Structure Data Instances plain_text Size of downloaded dataset files: 1.94 MB Size of the generated dataset: 1.70 MB Total amount… See the full description on the dataset page: https://huggingface.co/datasets/hsseinmz/arcd.
297
arcd
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:ar", "license:mit", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d76
ramybaly/arsentd_lev
ramybaly
false
False
2024-01-18T11:01:50.000Z
3
false
ce4d032917566e486a90330392bc7853280e7249
The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV) contains 4,000 tweets written in Arabic and equally retrieved from Jordan, Lebanon, Palestine and Syria.
34
arsentd-lev
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:topic-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:apc", "language:ajp", "license:other", "size_categories:1K<n<10K", "arxiv:1906.01830", "region:us" ]
2022-03-02T23:29:22.000Z
@article{ArSenTDLev2018, title={ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets}, author={Baly, Ramy and Khaddaj, Alaa and Hajj, Hazem and El-Hajj, Wassim and Bashir Shaban, Khaled}, journal={OSACT3}, pages={}, year={2018}}
621ffdd236468d709f181d77
allenai/art
allenai
false
False
2024-01-09T12:45:10.000Z
5
false
df6c96ba77462a86dc1cf530c12a69da47ea42e7
Dataset Card for "art" Dataset Summary ART consists of over 20k commonsense narrative contexts and 200k explanations. The Abductive Natural Language Inference Dataset from AI2. Supported Tasks and Leaderboards More Information Needed Languages More Information Needed Dataset Structure Data Instances anli Size of downloaded dataset files: 5.12 MB Size of the generated dataset: 34.36 MB… See the full description on the dataset page: https://huggingface.co/datasets/allenai/art.
30
art-dataset
[ "task_categories:multiple-choice", "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:unknown", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:1908.05739", "region:us", "abductive-natural-language-inference" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d78
arxiv-community/arxiv_dataset
arxiv-community
false
False
2024-01-18T11:01:52.000Z
84
false
c70944cb158dcdab8a5403b1fa20f28119f701a6
A dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces.
435
null
[ "task_categories:translation", "task_categories:summarization", "task_categories:text-retrieval", "task_ids:document-retrieval", "task_ids:entity-linking-retrieval", "task_ids:explanation-generation", "task_ids:fact-checking-retrieval", "task_ids:text-simplification", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc0-1.0", "size_categories:1M<n<10M", "arxiv:1905.00075", "region:us" ]
2022-03-02T23:29:22.000Z
@misc{clement2019arxiv, title={On the Use of ArXiv as a Dataset}, author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi}, year={2019}, eprint={1905.00075}, archivePrefix={arXiv}, primaryClass={cs.IR} }
621ffdd236468d709f181d79
tuanphong/ascent_kb
tuanphong
false
False
2024-01-09T14:44:26.000Z
2
false
9157196d77890cf20b57075353813b34dba3426e
Dataset Card for Ascent KB Dataset Summary This dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline developed at the Max Planck Institute for Informatics. The focus of this dataset is on everyday concepts such as elephant, car, laptop, etc. The current version of Ascent KB (v1.0.0) is approximately 19 times larger than ConceptNet (note that, in this comparison, non-commonsense knowledge in ConceptNet such as lexical relations is… See the full description on the dataset page: https://huggingface.co/datasets/tuanphong/ascent_kb.
15
ascentkb
[ "task_categories:other", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-4.0", "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2011.00905", "region:us", "knowledge-base" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d7a
achrafothman/aslg_pc12
achrafothman
false
False
2024-01-09T12:45:54.000Z
4
false
cb7cd272db8fcd4004ee04ddf50e194c15ea24d6
Dataset Card for "aslg_pc12" Dataset Summary Synthetic English-ASL Gloss Parallel Corpus 2012 Supported Tasks and Leaderboards More Information Needed Languages More Information Needed Dataset Structure Data Instances default Size of downloaded dataset files: 12.77 MB Size of the generated dataset: 13.50 MB Total amount of disk used: 26.27 MB An example of 'train' looks as follows. {… See the full description on the dataset page: https://huggingface.co/datasets/achrafothman/aslg_pc12.
16
aslg-pc12
[ "task_categories:translation", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:translation", "source_datasets:original", "language:ase", "language:en", "license:cc-by-nc-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d7b
AmazonScience/asnq
AmazonScience
false
False
2024-01-09T15:33:53.000Z
1
false
32291fc9663b9ee88abb97114e52501bdd58a129
Dataset Card for "asnq" Dataset Summary ASNQ is a dataset for answer sentence selection derived from Google's Natural Questions (NQ) dataset (Kwiatkowski et al. 2019). Each example contains a question, candidate sentence, label indicating whether or not the sentence answers the question, and two additional features -- sentence_in_long_answer and short_answer_in_sentence indicating whether or not the candidate sentence is contained in the long_answer and if the… See the full description on the dataset page: https://huggingface.co/datasets/AmazonScience/asnq.
14
asnq
[ "task_categories:multiple-choice", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:extended|natural_questions", "language:en", "license:cc-by-nc-sa-3.0", "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:1911.04118", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d7c
facebook/asset
facebook
false
False
2023-12-21T15:41:23.000Z
10
false
c7f2fa4bae55ae656091805d4416c1374582bb4e
Dataset Card for ASSET Dataset Summary ASSET (Alva-Manchego et al., 2020) is a multi-reference dataset for the evaluation of sentence simplification in English. The dataset uses the same 2,359 sentences from TurkCorpus (Xu et al., 2016) and each sentence is associated with 10 crowdsourced simplifications. Unlike previous simplification datasets, which contain a single transformation (e.g., lexical paraphrasing in TurkCorpus or sentence splitting in HSplit), the… See the full description on the dataset page: https://huggingface.co/datasets/facebook/asset.
55
asset
[ "task_categories:text-classification", "task_categories:text2text-generation", "task_ids:text-simplification", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "source_datasets:extended|other-turkcorpus", "language:en", "license:cc-by-sa-4.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "simplification-evaluation" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d7d
nilc-nlp/assin
nilc-nlp
false
False
2024-01-09T12:47:28.000Z
9
false
6535e48351178e07ade013b05b69f0e35cb28bbb
Dataset Card for ASSIN Dataset Summary The ASSIN (Avaliação de Similaridade Semântica e INferência textual) corpus is a corpus annotated with pairs of sentences written in Portuguese that is suitable for the exploration of textual entailment and paraphrasing classifiers. The corpus contains pairs of sentences extracted from news articles written in European Portuguese (EP) and Brazilian Portuguese (BP), obtained from Google News Portugal and Brazil… See the full description on the dataset page: https://huggingface.co/datasets/nilc-nlp/assin.
193
assin
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:pt", "license:unknown", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d7e
nilc-nlp/assin2
nilc-nlp
false
False
2024-01-09T12:48:38.000Z
12
false
0ff9c86779e06855536d8775ce5550550e1e5a2d
Dataset Card for ASSIN 2 Dataset Summary The ASSIN 2 corpus is composed of rather simple sentences. Following the procedures of SemEval 2014 Task 1. The training and validation data are composed, respectively, of 6,500 and 500 sentence pairs in Brazilian Portuguese, annotated for entailment and semantic similarity. Semantic similarity values range from 1 to 5, and text entailment classes are either entailment or none. The test data are composed of approximately… See the full description on the dataset page: https://huggingface.co/datasets/nilc-nlp/assin2.
472
assin2
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:pt", "license:unknown", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d7f
allenai/atomic
allenai
false
False
2024-01-18T11:01:54.000Z
12
false
a6ea1d221fa3a5c953b1e69f2594816046bb57c7
This dataset provides the template sentences and relationships defined in the ATOMIC common sense dataset. There are three splits - train, test, and dev. From the authors. Disclaimer/Content warning: the events in atomic have been automatically extracted from blogs, stories and books written at various times. The events might depict violent or problematic actions, which we left in the corpus for the sake of learning the (probably negative but still important) commonsense implications associated with the events. We removed a small set of truly out-dated events, but might have missed some so please email us (msap@cs.washington.edu) if you have any concerns.
51
atomic
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-4.0", "size_categories:100K<n<1M", "region:us", "common-sense-if-then-reasoning" ]
2022-03-02T23:29:22.000Z
@article{Sap2019ATOMICAA, title={ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning}, author={Maarten Sap and Ronan Le Bras and Emily Allaway and Chandra Bhagavatula and Nicholas Lourie and Hannah Rashkin and Brendan Roof and Noah A. Smith and Yejin Choi}, journal={ArXiv}, year={2019}, volume={abs/1811.00146} }
621ffdd236468d709f181d80
nwu-ctext/autshumato
nwu-ctext
false
False
2024-01-18T11:01:55.000Z
3
false
d1951a019d5dedcb8ce47f55bce6328d31f69956
Multilingual information access is stipulated in the South African constitution. In practice, this is hampered by a lack of resources and capacity to perform the large volumes of translation work required to realise multilingual information access. One of the aims of the Autshumato project is to develop machine translation systems for three South African language pairs.
84
null
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:multilingual", "source_datasets:original", "language:en", "language:tn", "language:ts", "language:zu", "license:cc-by-2.5", "size_categories:100K<n<1M", "region:us" ]
2022-03-02T23:29:22.000Z
@article{groenewald2010processing, title={Processing parallel text corpora for three South African language pairs in the Autshumato project}, author={Groenewald, Hendrik J and du Plooy, Liza}, journal={AfLaT 2010}, pages={27}, year={2010} }
621ffdd236468d709f181d81
facebook/babi_qa
facebook
false
False
2023-01-25T14:26:58.000Z
6
false
021d7aeb7307b7856dd0632f92827bc607dc2f1b
The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.
61
babi-1
[ "task_categories:question-answering", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-3.0", "size_categories:10K<n<100K", "modality:text", "library:datasets", "library:mlcroissant", "arxiv:1502.05698", "arxiv:1511.06931", "region:us", "chained-qa" ]
2022-03-02T23:29:22.000Z
@misc{weston2015aicomplete, title={Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks}, author={Jason Weston and Antoine Bordes and Sumit Chopra and Alexander M. Rush and Bart van Merriënboer and Armand Joulin and Tomas Mikolov}, year={2015}, eprint={1502.05698}, archivePrefix={arXiv}, primaryClass={cs.AI} }
621ffdd236468d709f181d82
legacy-datasets/banking77
legacy-datasets
false
False
2024-01-10T08:23:17.000Z
39
false
f54121560de48f2852f90be299010d1d6dc612ec
Dataset Card for BANKING77 Dataset Summary Deprecated: Dataset "banking77" is deprecated and will be deleted. Use "PolyAI/banking77" instead. Dataset composed of online banking queries annotated with their corresponding intents. BANKING77 dataset provides a very fine-grained set of intents in a banking domain. It comprises 13,083 customer service queries labeled with 77 intents. It focuses on fine-grained single-domain intent detection. Supported… See the full description on the dataset page: https://huggingface.co/datasets/legacy-datasets/banking77.
623
null
[ "task_categories:text-classification", "task_ids:intent-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2003.04807", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d83
phiwi/bbaw_egyptian
phiwi
false
False
2024-01-10T08:24:41.000Z
6
false
f9dde1200348af9b531e8fd09096bd9f9ddfeb34
Dataset Card for "bbaw_egyptian" Dataset Summary This dataset comprises parallel sentences of hieroglyphic encodings, transcription and translation as used in the paper Multi-Task Modeling of Phonographic Languages: Translating Middle Egyptian Hieroglyph. The data triples are extracted from the digital corpus of Egyptian texts compiled by the project "Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache". Supported Tasks and… See the full description on the dataset page: https://huggingface.co/datasets/phiwi/bbaw_egyptian.
11
null
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:multilingual", "source_datasets:extended|wikipedia", "language:egy", "language:de", "language:en", "license:cc-by-sa-4.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d84
midas/bbc_hindi_nli
midas
false
False
2024-01-10T10:00:44.000Z
2
false
bca982bebdd497ab9078feda251111aac4874318
Dataset Card for BBC Hindi NLI Dataset Dataset Summary Dataset for Natural Language Inference in Hindi Language. BBC Hindi Dataset consists of textual-entailment pairs. Each row of the dataset is made up of 4 columns - Premise, Hypothesis, Label and Topic. Context and Hypothesis are written in Hindi while Entailment_Label is in English. Entailment_label is of 2 types - entailed and not-entailed. Dataset can be used to train models for Natural Language Inference… See the full description on the dataset page: https://huggingface.co/datasets/midas/bbc_hindi_nli.
14
null
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:extended|bbc__hindi_news_classification", "language:hi", "license:mit", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d85
spyysalo/bc2gm_corpus
spyysalo
false
False
2024-01-10T10:03:04.000Z
6
false
dc0640510665bb3de7c88416ede4708cf6481b61
Dataset Card for bc2gm_corpus Dataset Summary [More Information Needed] Supported Tasks and Leaderboards [More Information Needed] Languages [More Information Needed] Dataset Structure Data Instances [More Information Needed] Data Fields id: Sentence identifier. tokens: Array of tokens composing a sentence. ner_tags: Array of tags, where 0 indicates no disease mentioned, 1 signals the… See the full description on the dataset page: https://huggingface.co/datasets/spyysalo/bc2gm_corpus.
104
null
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:unknown", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d86
AI-Lab-Makerere/beans
AI-Lab-Makerere
false
False
2024-01-03T12:06:51.000Z
30
false
27aa014ce09b193e1a6f58112d4a66e0eddb69c5
Dataset Card for Beans Dataset Summary Beans leaf dataset with images of diseased and healthy leaves. Supported Tasks and Leaderboards image-classification: Based on a leaf image, the goal of this task is to predict the disease type (Angular Leaf Spot and Bean Rust), if any. Languages English Dataset Structure Data Instances A sample from the training set is provided below: { 'image_file_path':… See the full description on the dataset page: https://huggingface.co/datasets/AI-Lab-Makerere/beans.
1,107
null
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:mit", "size_categories:1K<n<10K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d87
nectec/best2009
nectec
false
False
2024-01-10T10:08:29.000Z
0
false
685fffc4105dda00888f127d586c378bf6fa995e
Dataset Card for best2009 Dataset Summary best2009 is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by NECTEC (148,995/2,252 lines of train/test). It was created for BEST 2010: Word Tokenization Competition. The test set answers are not provided publicly. Supported Tasks and Leaderboards word tokenization Languages Thai Dataset Structure Data Instances {'char': ['?', 'ภ'… See the full description on the dataset page: https://huggingface.co/datasets/nectec/best2009.
43
null
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:th", "license:cc-by-nc-sa-3.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "word-tokenization" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d88
Helsinki-NLP/bianet
Helsinki-NLP
false
False
2024-02-23T15:06:25.000Z
1
false
48a45ed77f0604997882ea2f8202b765f2d0e8b1
Dataset Card for Bianet Dataset Summary A new open-source parallel corpus consisting of news articles collected from the Bianet magazine, an online newspaper that publishes Turkish news, often along with their translations in English and Kurdish. A parallel news corpus in Turkish, Kurdish and English; Bianet collects 3,214 Turkish articles with their sentence-aligned Kurdish or English translations from the Bianet online newspaper. Bianet's Numbers: Languages: 3… See the full description on the dataset page: https://huggingface.co/datasets/Helsinki-NLP/bianet.
25
bianet
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "source_datasets:original", "language:en", "language:ku", "language:tr", "license:cc-by-sa-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:1805.05095", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d89
Helsinki-NLP/bible_para
Helsinki-NLP
false
False
2024-01-18T11:01:58.000Z
14
false
0a2c121b0224b552e05f281fc71c55e3180b3d00
This is a multilingual parallel corpus created from translations of the Bible compiled by Christos Christodoulopoulos and Mark Steedman. 102 languages, 5,148 bitexts; total number of files: 107; total number of tokens: 56.43M; total number of sentence fragments: 2.84M
33
null
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "source_datasets:original", "language:acu", "language:af", "language:agr", "language:ake", "language:am", "language:amu", "language:ar", "language:bg", "language:bsn", "language:cak", "language:ceb", "language:ch", "language:chq", "language:chr", "language:cjp", "language:cni", "language:cop", "language:crp", "language:cs", "language:da", "language:de", "language:dik", "language:dje", "language:djk", "language:dop", "language:ee", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fi", "language:fr", "language:gbi", "language:gd", "language:gu", "language:gv", "language:he", "language:hi", "language:hr", "language:hu", "language:hy", "language:id", "language:is", "language:it", "language:ja", "language:jak", "language:jiv", "language:kab", "language:kbh", "language:kek", "language:kn", "language:ko", "language:la", "language:lt", "language:lv", "language:mam", "language:mi", "language:ml", "language:mr", "language:my", "language:ne", "language:nhg", "language:nl", "language:no", "language:ojb", "language:pck", "language:pes", "language:pl", "language:plt", "language:pot", "language:ppk", "language:pt", "language:quc", "language:quw", "language:ro", "language:rom", "language:ru", "language:shi", "language:sk", "language:sl", "language:sn", "language:so", "language:sq", "language:sr", "language:ss", "language:sv", "language:syr", "language:te", "language:th", "language:tl", "language:tmh", "language:tr", "language:uk", "language:usp", "language:vi", "language:wal", "language:wo", "language:xh", "language:zh", "language:zu", "license:cc0-1.0", "size_categories:10K<n<100K", "region:us" ]
2022-03-02T23:29:22.000Z
OPUS and A massively parallel corpus: the Bible in 100 languages, Christos Christodoulopoulos and Mark Steedman, *Language Resources and Evaluation*, 49 (2)
621ffdd236468d709f181d8a
NortheasternUniversity/big_patent
NortheasternUniversity
false
False
2024-01-18T11:01:59.000Z
54
false
e807b1d5492aa5f4fac08f3f6c7c85c72887ca12
BIGPATENT, consisting of 1.3 million records of U.S. patent documents along with human-written abstractive summaries. Each US patent application is filed under a Cooperative Patent Classification (CPC) code. There are nine such classification categories: A (Human Necessities), B (Performing Operations; Transporting), C (Chemistry; Metallurgy), D (Textiles; Paper), E (Fixed Constructions), F (Mechanical Engineering; Lighting; Heating; Weapons; Blasting), G (Physics), H (Electricity), and Y (General tagging of new or cross-sectional technology). There are two features: - description: detailed description of the patent. - abstract: patent abstract.
89
bigpatent
[ "task_categories:summarization", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-4.0", "size_categories:100K<n<1M", "arxiv:1906.03741", "region:us", "patent-summarization" ]
2022-03-02T23:29:22.000Z
@misc{sharma2019bigpatent, title={BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization}, author={Eva Sharma and Chen Li and Lu Wang}, year={2019}, eprint={1906.03741}, archivePrefix={arXiv}, primaryClass={cs.CL} }
621ffdd236468d709f181d8b
FiscalNote/billsum
FiscalNote
false
False
2024-03-27T16:01:38.000Z
38
false
3d8510441c06a3d9dfb32eb0d7f80151730bcc4f
Dataset Card for "billsum" Dataset Summary BillSum, summarization of US Congressional and California state bills. There are several features: text: bill text. summary: summary of the bills. title: title of the bills. The following features exist for US bills only (CA bills do not have them): text_len: number of chars in text. sum_len: number of chars in summary. Supported Tasks and Leaderboards More Information Needed Languages More Information Needed… See the full description on the dataset page: https://huggingface.co/datasets/FiscalNote/billsum.
351
billsum
[ "task_categories:summarization", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc0-1.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:1910.00523", "region:us", "bills-summarization" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d8c
microsoft/bing_coronavirus_query_set
microsoft
false
False
2024-01-10T10:17:05.000Z
0
false
77f70b572508c4571927c95e3b9bec64e4275d39
Dataset Card for BingCoronavirusQuerySet Dataset Summary Please note that you can specify the start and end date of the data. You can get start and end dates from here: https://github.com/microsoft/BingCoronavirusQuerySet/tree/master/data/2020 example: load_dataset("bing_coronavirus_query_set", queries_by="state", start_date="2020-09-01", end_date="2020-09-30") You can also load the data by country by using queries_by="country". Supported Tasks and… See the full description on the dataset page: https://huggingface.co/datasets/microsoft/bing_coronavirus_query_set.
14
null
[ "task_categories:text-classification", "task_ids:intent-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:other", "size_categories:100K<n<1M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d8d
nlpaueb/biomrc
nlpaueb
false
False
2024-01-18T11:02:01.000Z
4
false
5bb4def0bfa1570a933f18af2d8c13c22c2e2b94
We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard.
26
biomrc
[ "language:en", "region:us" ]
2022-03-02T23:29:22.000Z
@inproceedings{pappas-etal-2020-biomrc, title = "{B}io{MRC}: A Dataset for Biomedical Machine Reading Comprehension", author = "Pappas, Dimitris and Stavropoulos, Petros and Androutsopoulos, Ion and McDonald, Ryan", booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.bionlp-1.15", pages = "140--149", abstract = "We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard.", }
621ffdd236468d709f181d8e
tabilab/biosses
tabilab
false
False
2024-01-10T10:20:02.000Z
5
false
2394a2eda8dae34a30f68f0770775fd5c2e863bd
Dataset Card for BIOSSES Dataset Summary BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article. The… See the full description on the dataset page: https://huggingface.co/datasets/tabilab/biosses.
14
biosses
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:semantic-similarity-scoring", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:gpl-3.0", "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d8f
TheBritishLibrary/blbooks
TheBritishLibrary
false
False
2024-08-08T06:15:12.000Z
14
false
de11fed4c2a3bfb17a750347db93da52d9fa58c4
A dataset comprising text created by OCR from the 49,455 digitised books, equating to 65,227 volumes (25+ million pages), published between c. 1510 and c. 1900. The books cover a wide range of subject areas including philosophy, history, poetry and literature.
69
null
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:other", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:multilingual", "source_datasets:original", "language:de", "language:en", "language:es", "language:fr", "language:it", "language:nl", "license:cc0-1.0", "size_categories:100K<n<1M", "region:us", "digital-humanities-research" ]
2022-03-02T23:29:22.000Z
@misc{BritishLibraryBooks2021, author = {British Library Labs}, title = {Digitised Books. c. 1510 - c. 1900. JSONL (OCR derived text + metadata)}, year = {2021}, publisher = {British Library}, howpublished={https://doi.org/10.23636/r7w6-zy15} }
621ffdd236468d709f181d90
TheBritishLibrary/blbooksgenre
TheBritishLibrary
false
False
2023-06-01T14:59:51.000Z
4
false
de087348b4ef8c44c2978f8ff819e9e3862089e6
This dataset contains metadata for resources belonging to the British Library’s digitised printed books (18th-19th century) collection (bl.uk/collection-guides/digitised-printed-books). This metadata has been extracted from British Library catalogue records. The metadata held within our main catalogue is updated regularly. This metadata dataset should be considered a snapshot of this metadata.
31
null
[ "task_categories:text-classification", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "source_datasets:original", "language:de", "language:en", "language:fr", "language:nl", "license:cc0-1.0", "size_categories:10K<n<100K", "modality:text", "library:datasets", "library:mlcroissant", "region:us" ]
2022-03-02T23:29:22.000Z
@misc{british library_genre, title={ 19th Century Books - metadata with additional crowdsourced annotations}, url={https://doi.org/10.23636/BKHQ-0312}, author={{British Library} and Morris, Victoria and van Strien, Daniel and Tolfo, Giorgia and Afric, Lora and Robertson, Stewart and Tiney, Patricia and Dogterom, Annelies and Wollner, Ildi}, year={2021}}
621ffdd236468d709f181d91
ParlAI/blended_skill_talk
ParlAI
false
False
2024-01-10T10:22:26.000Z
65
false
d7b0093243439fa5f0cd9663125cc47575ced2ea
Dataset Card for "blended_skill_talk" Dataset Summary A dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge. Supported Tasks and Leaderboards More Information Needed Languages More Information Needed Dataset Structure Data Instances default Size of downloaded dataset files: 38.11 MB Size… See the full description on the dataset page: https://huggingface.co/datasets/ParlAI/blended_skill_talk.
391
blended-skill-talk
[ "task_ids:dialogue-generation", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:unknown", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2004.08449", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d92
nyu-mll/blimp
nyu-mll
false
False
2024-01-23T09:58:08.000Z
33
false
877fba0801ffb7cbd8c39c1ff314a46f053f6036
Dataset Card for "blimp" Dataset Summary BLiMP is a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each containing 1000 minimal pairs isolating specific contrasts in syntax, morphology, or semantics. The data is automatically generated according to expert-crafted grammars. Supported Tasks and Leaderboards More Information Needed Languages… See the full description on the dataset page: https://huggingface.co/datasets/nyu-mll/blimp.
599
blimp
[ "task_categories:text-classification", "task_ids:acceptability-classification", "annotations_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:1912.00582", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d93
barilan/blog_authorship_corpus
barilan
false
False
2023-06-06T16:16:13.000Z
11
false
728947f6c98ade87aa396004440cb3b58f173cb8
The Blog Authorship Corpus consists of the collected posts of 19,320 bloggers gathered from blogger.com in August 2004. The corpus incorporates a total of 681,288 posts and over 140 million words - or approximately 35 posts and 7250 words per person. Each blog is presented as a separate file, the name of which indicates a blogger id# and the blogger’s self-provided gender, age, industry and astrological sign. (All are labeled for gender and age but for many, industry and/or sign is marked as unknown.) All bloggers included in the corpus fall into one of three age groups: - 8240 "10s" blogs (ages 13-17), - 8086 "20s" blogs (ages 23-27), - 2994 "30s" blogs (ages 33-47). For each age group there are an equal number of male and female bloggers. Each blog in the corpus includes at least 200 occurrences of common English words. All formatting has been stripped with two exceptions. Individual posts within a single blogger are separated by the date of the following post and links within a post are denoted by the label urllink. The corpus may be freely used for non-commercial research purposes.
776
blog-authorship-corpus
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:unknown", "size_categories:10K<n<100K", "region:us" ]
2022-03-02T23:29:22.000Z
@inproceedings{schler2006effects, title={Effects of age and gender on blogging.}, author={Schler, Jonathan and Koppel, Moshe and Argamon, Shlomo and Pennebaker, James W}, booktitle={AAAI spring symposium: Computational approaches to analyzing weblogs}, volume={6}, pages={199--205}, year={2006} }
621ffdd236468d709f181d94
rezacsedu/bn_hate_speech
rezacsedu
false
False
2024-01-10T10:29:39.000Z
1
false
99612296bc093f0720cac7d7cbfcb67eecf1ca2f
Dataset Card for Bengali Hate Speech Dataset Dataset Summary The Bengali Hate Speech Dataset is a Bengali-language dataset of news articles collected from various Bengali media sources and categorized based on the type of hate in the text. The dataset was created to provide greater support for under-resourced languages like Bengali on NLP tasks, and serves as a benchmark for multiple types of classification tasks. Supported Tasks and Leaderboards… See the full description on the dataset page: https://huggingface.co/datasets/rezacsedu/bn_hate_speech.
45
bengali-hate-speech
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:bn", "license:mit", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2004.07807", "region:us", "hate-speech-topic-classification" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d95
bnl-data/bnl_newspapers
bnl-data
false
False
2024-01-24T16:24:00.000Z
2
false
fd671e637acfbe911650fa398ec203f4205d128c
Dataset Card for BnL Historical Newspapers Dataset Summary The BnL has digitised over 800.000 pages of Luxembourg newspapers. This dataset currently has one configuration covering a subset of these newspapers, which sit under the "Processed Datasets" collection. The BNL: processed all newspapers and monographs that are in the public domain and extracted the full text and associated meta data of every single article, section, advertisement… The result is a large… See the full description on the dataset page: https://huggingface.co/datasets/bnl-data/bnl_newspapers.
9
null
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "source_datasets:original", "language:ar", "language:da", "language:de", "language:fi", "language:fr", "language:lb", "language:nl", "language:pt", "license:cc0-1.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d96
bookcorpus/bookcorpus
bookcorpus
false
False
2024-05-03T13:48:33.000Z
256
false
d917559bbe9cf49c638fc331c37c4bf239e3b637
Books are a rich source of both fine-grained information, such as how a character, an object or a scene looks, as well as high-level semantics, such as what someone is thinking or feeling and how these states evolve through a story. This work aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets.
2,796
bookcorpus
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:unknown", "size_categories:10M<n<100M", "arxiv:2105.05241", "region:us" ]
2022-03-02T23:29:22.000Z
@InProceedings{Zhu_2015_ICCV, title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books}, author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja}, booktitle = {The IEEE International Conference on Computer Vision (ICCV)}, month = {December}, year = {2015} }
621ffdd236468d709f181d97
defunct-datasets/bookcorpusopen
defunct-datasets
false
False
2023-11-24T14:42:08.000Z
33
false
817f291474dcb4fa865ed7c8298e709cd8a20266
Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This version of bookcorpus has 17868 dataset items (books). Each item contains two fields: title and text. The title is the name of the book (just the file name) while text contains unprocessed book text. The bookcorpus has been prepared by Shawn Presser and is generously hosted by The-Eye. The-Eye is a non-profit, community driven platform dedicated to the archiving and long-term preservation of any and all data including but by no means limited to... websites, books, games, software, video, audio, other digital-obscura and ideas.
22
bookcorpus
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:unknown", "size_categories:10K<n<100K", "arxiv:2105.05241", "region:us" ]
2022-03-02T23:29:22.000Z
@InProceedings{Zhu_2015_ICCV, title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books}, author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja}, booktitle = {The IEEE International Conference on Computer Vision (ICCV)}, month = {December}, year = {2015} }
621ffdd236468d709f181d98
google/boolq
google
false
False
2024-01-22T09:16:26.000Z
58
false
35b264d03638db9f4ce671b711558bf7ff0f80d5
Dataset Card for Boolq Dataset Summary BoolQ is a question answering dataset for yes/no questions containing 15942 examples. These questions are naturally occurring: they are generated in unprompted and unconstrained settings. Each example is a triplet of (question, passage, answer), with the title of the page as optional additional context. The text-pair classification setup is similar to existing natural language inference tasks. Supported Tasks… See the full description on the dataset page: https://huggingface.co/datasets/google/boolq.
8,805
boolq
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:1905.10044", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d99
clarin-pl/bprec
clarin-pl
false
False
2024-01-18T11:02:04.000Z
0
false
45f1ac8242a87d96645e04bd6c1c645c85bf61ed
Dataset consisting of Polish language texts annotated to recognize brand-product relations.
11
null
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "source_datasets:original", "language:pl", "license:unknown", "size_categories:1K<n<10K", "region:us" ]
2022-03-02T23:29:22.000Z
@inproceedings{inproceedings, author = {Janz, Arkadiusz and Kopociński, Łukasz and Piasecki, Maciej and Pluwak, Agnieszka}, year = {2020}, month = {05}, pages = {}, title = {Brand-Product Relation Extraction Using Heterogeneous Vector Space Representations} }
621ffdd236468d709f181d9a
allenai/break_data
allenai
false
False
2024-01-11T07:39:12.000Z
1
false
42d29b59a18aec2be0986d24469bf67b6291cb27
Dataset Card for "break_data" Dataset Summary Break is a human annotated dataset of natural language questions and their Question Decomposition Meaning Representations (QDMRs). Break consists of 83,978 examples sampled from 10 question answering datasets over text, images and databases. This repository contains the Break dataset along with information on the exact data format. Supported Tasks and Leaderboards More Information Needed… See the full description on the dataset page: https://huggingface.co/datasets/allenai/break_data.
20
break
[ "task_categories:text2text-generation", "task_ids:open-domain-abstractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:unknown", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d9b
UFRGS/brwac
UFRGS
false
False
2024-01-18T11:02:06.000Z
17
false
3475bc217e5241f9a5c833b2f8ae9b74a2d7e44d
The BrWaC (Brazilian Portuguese Web as Corpus) is a large corpus constructed following the Wacky framework, which was made public for research purposes. The current corpus version, released in January 2017, is composed of 3.53 million documents, 2.68 billion tokens and 5.79 million types. Please note that this resource is available solely for academic research purposes, and users agree not to use it for any commercial applications. Manually download at https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC
30
brwac
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:pt", "license:unknown", "size_categories:1M<n<10M", "region:us" ]
2022-03-02T23:29:22.000Z
@inproceedings{wagner2018brwac, title={The brwac corpus: A new open resource for brazilian portuguese}, author={Wagner Filho, Jorge A and Wilkens, Rodrigo and Idiart, Marco and Villavicencio, Aline}, booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)}, year={2018} }
621ffdd236468d709f181d9c
ryo0634/bsd_ja_en
ryo0634
false
False
2024-01-11T07:36:44.000Z
9
false
ed6539dc16c18c481ff3574376b79d7a83a57fb2
Dataset Card for Business Scene Dialogue Dataset Summary This is the Business Scene Dialogue (BSD) dataset, a Japanese-English parallel corpus containing written conversations in various business scenarios. The dataset was constructed in 3 steps: selecting business scenes, writing monolingual conversation scenarios according to the selected scenes, and translating the scenarios into the other language. Half of the monolingual scenarios were written in… See the full description on the dataset page: https://huggingface.co/datasets/ryo0634/bsd_ja_en.
270
business-scene-dialogue
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:translation", "source_datasets:original", "language:en", "language:ja", "license:cc-by-nc-sa-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "business-conversations-translation" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d9d
community-datasets/bswac
community-datasets
false
False
2024-01-11T12:54:46.000Z
0
false
1dbdabb101d60471e705f84ae821cdb804399dd7
The Bosnian web corpus bsWaC was built by crawling the .ba top-level domain in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Bosnian vs. Croatian vs. Serbian). Version 1.0 of this corpus is described in http://www.aclweb.org/anthology/W14-0405. Version 1.1 contains newer and better linguistic annotations.
22
null
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:bs", "license:cc-by-sa-3.0", "size_categories:100M<n<1B", "region:us" ]
2022-03-02T23:29:22.000Z
@misc{11356/1062, title = {Bosnian web corpus {bsWaC} 1.1}, author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip}, url = {http://hdl.handle.net/11356/1062}, note = {Slovenian language resource repository {CLARIN}.{SI}}, copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)}, year = {2016} }
621ffdd236468d709f181d9e
dataset-org/c3
dataset-org
false
False
2024-01-11T08:12:46.000Z
10
false
28e91a21a22b95987a90a46cb6d7741c7aad8158
Dataset Card for C3 Dataset Summary Machine reading comprehension tasks require a machine reader to answer questions relevant to the given document. In this paper, we present the first free-form multiple-Choice Chinese machine reading Comprehension dataset (C^3), containing 13,369 documents (dialogues or more formally written mixed-genre texts) and their associated 19,577 multiple-choice free-form questions collected from Chinese-as-a-second-language examinations.… See the full description on the dataset page: https://huggingface.co/datasets/dataset-org/c3.
40
c3
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "source_datasets:original", "language:zh", "license:other", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:1904.09679", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181d9f
legacy-datasets/c4
legacy-datasets
false
False
2024-03-05T08:44:26.000Z
229
false
21e98d7063e4037e836a0299d7fbb7efd484e6c3
A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "https://commoncrawl.org". This is the processed version of Google's C4 dataset by AllenAI.
59
c4
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "source_datasets:original", "language:en", "license:odc-by", "size_categories:100M<n<1B", "arxiv:1910.10683", "region:us" ]
2022-03-02T23:29:22.000Z
@article{2019t5, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {arXiv e-prints}, year = {2019}, archivePrefix = {arXiv}, eprint = {1910.10683}, }
621ffdd236468d709f181da0
china-ai-law-challenge/cail2018
china-ai-law-challenge
false
False
2024-01-16T15:08:12.000Z
13
false
775098da3ba75f033781f8061900b62503e9bea0
Dataset Card for CAIL 2018 Dataset Summary [More Information Needed] Supported Tasks and Leaderboards [More Information Needed] Languages [More Information Needed] Dataset Structure Data Instances [More Information Needed] Data Fields [More Information Needed] Data Splits [More Information Needed] Dataset Creation Curation Rationale [More… See the full description on the dataset page: https://huggingface.co/datasets/china-ai-law-challenge/cail2018.
24
chinese-ai-and-law-cail-2018
[ "task_categories:other", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:zh", "license:unknown", "size_categories:1M<n<10M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:1807.02478", "region:us", "judgement-prediction" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181da1
community-datasets/caner
community-datasets
false
False
2024-01-16T13:38:20.000Z
1
false
4749e1d6950c2377b62a2e424147e68406cca9dd
Dataset Card for CANER Dataset Summary The Classical Arabic Named Entity Recognition corpus is a new corpus of tagged data that can be useful for handling the issues in recognition of Arabic named entities. Supported Tasks and Leaderboards Named Entity Recognition Languages Classical Arabic Dataset Structure Data Instances An example from the dataset: {'ner_tag': 1, 'token': 'الجامع'} Where 1 stands… See the full description on the dataset page: https://huggingface.co/datasets/community-datasets/caner.
11
null
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "source_datasets:original", "language:ar", "license:unknown", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181da2
soarescmsa/capes
soarescmsa
false
False
2024-01-16T10:30:24.000Z
2
false
42c1ec984cc5461418a24fec2cd9ab8c8d4aa99c
Dataset Card for CAPES Dataset Summary A parallel corpus of theses and dissertations abstracts in English and Portuguese were collected from the CAPES website (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) - Brazil. The corpus is sentence aligned for all language pairs. Approximately 240,000 documents were collected and aligned using the Hunalign algorithm. Supported Tasks and Leaderboards The underlying task is machine translation.… See the full description on the dataset page: https://huggingface.co/datasets/soarescmsa/capes.
12
capes
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "source_datasets:original", "language:en", "language:pt", "license:unknown", "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:1905.01715", "region:us", "dissertation-abstracts-translation", "theses-translation" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181da3
kchawla123/casino
kchawla123
false
False
2024-01-16T13:53:39.000Z
4
false
290898d2d08b6591db17005504e40ce00ac1028e
Dataset Card for Casino Dataset Summary We provide a novel dataset (referred to as CaSiNo) of 1030 negotiation dialogues. Two participants take the role of campsite neighbors and negotiate for Food, Water, and Firewood packages, based on their individual preferences and requirements. This design keeps the task tractable, while still facilitating linguistically rich and personal conversations. This helps to overcome the limitations of prior negotiation datasets… See the full description on the dataset page: https://huggingface.co/datasets/kchawla123/casino.
23
casino
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-4.0", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181da4
community-datasets/catalonia_independence
community-datasets
false
False
2024-01-16T13:54:09.000Z
3
false
cf24d44e517efa534f048e5fc5981f399ed25bee
Dataset Card for Catalonia Independence Corpus Dataset Summary This dataset contains two corpora in Spanish and Catalan that consist of annotated Twitter messages for automatic stance detection. The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia. Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express… See the full description on the dataset page: https://huggingface.co/datasets/community-datasets/catalonia_independence.
58
cic
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:ca", "language:es", "license:cc-by-nc-sa-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "stance-detection" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181da5
microsoft/cats_vs_dogs
microsoft
false
False
2024-08-08T05:35:11.000Z
26
false
b5ae3589204019bc2cc97e99e4914a54589333ef
Dataset Card for Cats Vs. Dogs Dataset Summary A large set of images of cats and dogs. There are 1,738 corrupted images, which are dropped. This dataset is part of a now-closed Kaggle competition and represents a subset of the so-called Asirra dataset. From the competition page: The Asirra data set Web services are often protected with a challenge that's supposed to be easy for people to solve, but difficult for computers. Such a challenge is often called a CAPTCHA… See the full description on the dataset page: https://huggingface.co/datasets/microsoft/cats_vs_dogs.
953
cats-vs-dogs
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:unknown", "size_categories:10K<n<100K", "format:parquet", "modality:image", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181da6
community-datasets/cawac
community-datasets
false
False
2024-01-16T15:50:41.000Z
0
false
7dc6be007333a09f1b5d2474508c43d18551859d
Dataset Card for caWaC Dataset Summary caWaC is a 780-million-token web corpus of Catalan built from the .cat top-level-domain in late 2013. Supported Tasks and Leaderboards [More Information Needed] Languages Dataset is monolingual in Catalan language. Dataset Structure Data Instances [More Information Needed] Data Fields [More Information Needed] Data Splits [More… See the full description on the dataset page: https://huggingface.co/datasets/community-datasets/cawac.
8
cawac
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:ca", "license:cc-by-sa-3.0", "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181da7
cam-cst/cbt
cam-cst
false
False
2024-01-16T16:01:16.000Z
12
false
72b5c46b1248e3316360f0f2f0b2c39e773b68e4
Dataset Card for CBT Dataset Summary The Children’s Book Test (CBT) is designed to measure directly how well language models can exploit wider linguistic context. The CBT is built from books that are freely available. This dataset contains four different configurations: V: where the answers to the questions are verbs. P: where the answers to the questions are pronouns. NE: where the answers to the questions are named entities. CN: where the answers to the… See the full description on the dataset page: https://huggingface.co/datasets/cam-cst/cbt.
72
cbt
[ "task_categories:other", "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:gfdl", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:1511.02301", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181da8
statmt/cc100
statmt
false
False
2024-03-05T12:15:34.000Z
66
false
8c658c983d32eab9170d77d416252cfaa0c23e96
This corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises monolingual data for 100+ languages and also includes data for romanized languages (indicated by *_rom). It was constructed using the URLs and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file consists of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. No claims of intellectual property are made on the work of preparation of the corpus.
406
cc100
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "source_datasets:original", "language:af", "language:am", "language:ar", "language:as", "language:az", "language:be", "language:bg", "language:bn", "language:br", "language:bs", "language:ca", "language:cs", "language:cy", "language:da", "language:de", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:ff", "language:fi", "language:fr", "language:fy", "language:ga", "language:gd", "language:gl", "language:gn", "language:gu", "language:ha", "language:he", "language:hi", "language:hr", "language:ht", "language:hu", "language:hy", "language:id", "language:ig", "language:is", "language:it", "language:ja", "language:jv", "language:ka", "language:kk", "language:km", "language:kn", "language:ko", "language:ku", "language:ky", "language:la", "language:lg", "language:li", "language:ln", "language:lo", "language:lt", "language:lv", "language:mg", "language:mk", "language:ml", "language:mn", "language:mr", "language:ms", "language:my", "language:ne", "language:nl", "language:no", "language:ns", "language:om", "language:or", "language:pa", "language:pl", "language:ps", "language:pt", "language:qu", "language:rm", "language:ro", "language:ru", "language:sa", "language:sc", "language:sd", "language:si", "language:sk", "language:sl", "language:so", "language:sq", "language:sr", "language:ss", "language:su", "language:sv", "language:sw", "language:ta", "language:te", "language:th", "language:tl", "language:tn", "language:tr", "language:ug", "language:uk", "language:ur", "language:uz", "language:vi", "language:wo", "language:xh", "language:yi", "language:yo", "language:zh", "language:zu", "license:unknown", "size_categories:10M<n<100M", "arxiv:1911.02116", "arxiv:1911.00359", "region:us" ]
2022-03-02T23:29:22.000Z
@inproceedings{conneau-etal-2020-unsupervised, title = "Unsupervised Cross-lingual Representation Learning at Scale", author = "Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin", editor = "Jurafsky, Dan and Chai, Joyce and Schluter, Natalie and Tetreault, Joel", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.acl-main.747", doi = "10.18653/v1/2020.acl-main.747", pages = "8440--8451", } @inproceedings{wenzek-etal-2020-ccnet, title = "{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data", author = "Wenzek, Guillaume and Lachaux, Marie-Anne and Conneau, Alexis and Chaudhary, Vishrav and Guzm{\\'a}n, Francisco and Joulin, Armand and Grave, Edouard", editor = "Calzolari, Nicoletta and B{\\'e}chet, Fr{\\'e}d{\\'e}ric and Blache, Philippe and Choukri, Khalid and Cieri, Christopher and Declerck, Thierry and Goggi, Sara and Isahara, Hitoshi and Maegaard, Bente and Mariani, Joseph and Mazo, H{\\'e}l{\\`e}ne and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios", booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2020.lrec-1.494", pages = "4003--4012", language = "English", ISBN = "979-10-95546-34-4", }
621ffdd236468d709f181da9
vblagoje/cc_news
vblagoje
false
False
2024-01-04T06:45:02.000Z
45
false
81eb2ce0d2a9dad6ad16b68ef750ec290880fa36
Dataset Card for CC-News Dataset Summary The CC-News dataset contains news articles from news sites all over the world. The data is available on AWS S3 in the Common Crawl bucket at /crawl-data/CC-NEWS/. This version of the dataset has been prepared using news-please - an integrated web crawler and information extractor for news. It contains 708,241 English-language news articles published between January 2017 and December 2019. It represents a small portion of the… See the full description on the dataset page: https://huggingface.co/datasets/vblagoje/cc_news.
814
cc-news
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:unknown", "size_categories:100K<n<1M", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181daa
ahelk/ccaligned_multilingual
ahelk
false
False
2024-01-18T11:02:11.000Z
5
false
732e0c60b22e16ea2fddcf7b10e4eeff64f88caa
CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web documents and ensuring that the corresponding language codes appeared in the URLs of the web documents. This pattern-matching approach yielded more than 100 million aligned documents paired with English. Recognizing that each English document was often aligned to multiple documents in different target languages, we can join on English documents to obtain aligned documents that directly pair two non-English documents (e.g., Arabic-French).
11
ccaligned
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:translation", "source_datasets:original", "language:af", "language:ak", "language:am", "language:ar", "language:as", "language:ay", "language:az", "language:be", "language:bg", "language:bm", "language:bn", "language:br", "language:bs", "language:ca", "language:ceb", "language:ckb", "language:cs", "language:cy", "language:de", "language:dv", "language:el", "language:eo", "language:es", "language:fa", "language:ff", "language:fi", "language:fo", "language:fr", "language:fy", "language:ga", "language:gl", "language:gn", "language:gu", "language:he", "language:hi", "language:hr", "language:hu", "language:id", "language:ig", "language:is", "language:it", "language:iu", "language:ja", "language:ka", "language:kac", "language:kg", "language:kk", "language:km", "language:kn", "language:ko", "language:ku", "language:ky", "language:la", "language:lg", "language:li", "language:ln", "language:lo", "language:lt", "language:lv", "language:mg", "language:mi", "language:mk", "language:ml", "language:mn", "language:mr", "language:ms", "language:mt", "language:my", "language:ne", "language:nl", "language:no", "language:nso", "language:ny", "language:om", "language:or", "language:pa", "language:pl", "language:ps", "language:pt", "language:rm", "language:ro", "language:ru", "language:rw", "language:sc", "language:sd", "language:se", "language:shn", "language:si", "language:sk", "language:sl", "language:sn", "language:so", "language:sq", "language:sr", "language:ss", "language:st", "language:su", "language:sv", "language:sw", "language:syc", "language:szl", "language:ta", "language:te", "language:tg", "language:th", "language:ti", "language:tl", "language:tn", "language:tr", "language:ts", "language:tt", "language:ug", "language:uk", "language:ur", "language:uz", "language:ve", "language:vi", "language:war", "language:wo", "language:xh", "language:yi", "language:yo", "language:zgh", "language:zh", "language:zu", "language:zza", "license:unknown", "size_categories:n<1K", "region:us" ]
2022-03-02T23:29:22.000Z
@inproceedings{elkishky_ccaligned_2020, author = {El-Kishky, Ahmed and Chaudhary, Vishrav and Guzm{\'a}n, Francisco and Koehn, Philipp}, booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)}, month = {November}, title = {{CCAligned}: A Massive Collection of Cross-lingual Web-Document Pairs}, year = {2020}, address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.480", doi = "10.18653/v1/2020.emnlp-main.480", pages = "5960--5969" }
621ffdd236468d709f181dab
community-datasets/cdsc
community-datasets
false
False
2024-01-18T08:46:51.000Z
0
false
b54010592d87b35ea7e007a1de9e6a3ed7d35f8b
Dataset Card for [Dataset Name] Dataset Summary Polish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource. Supported Tasks and Leaderboards… See the full description on the dataset page: https://huggingface.co/datasets/community-datasets/cdsc.
11
polish-cdscorpus
[ "task_categories:other", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "source_datasets:original", "language:pl", "license:cc-by-nc-sa-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "sentences entailment and relatedness" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181dac
ptaszynski/cdt
ptaszynski
false
False
2024-01-18T14:08:18.000Z
0
false
6c872f54a00a2bd65b1e502b5221dd1161d30789
Dataset Card for [Dataset Name] Dataset Summary The Cyberbullying Detection task was part of the 2019 edition of the PolEval competition. The goal is to predict whether a given Twitter message contains cyberbullying (harmful) content. Supported Tasks and Leaderboards [More Information Needed] Languages Polish Dataset Structure Data Instances [More Information Needed] Data Fields sentence: an… See the full description on the dataset page: https://huggingface.co/datasets/ptaszynski/cdt.
10
null
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "source_datasets:original", "language:pl", "license:bsd-3-clause", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181dad
sagteam/cedr_v1
sagteam
false
False
2024-01-18T14:11:21.000Z
6
false
abafbe63cf92c33791b217e8f4f3460f816f1d96
Dataset Card for [cedr] Dataset Summary The Corpus for Emotions Detecting in Russian-language text sentences of different social sources (CEDR) contains 9,410 comments labeled for 5 emotion categories (joy, sadness, surprise, fear, and anger). There are 2 dataset configurations: "main" - contains "text", "labels", and "source" features; "enriched" - includes all "main" features and "sentences". The dataset has predefined train/test splits. Supported… See the full description on the dataset page: https://huggingface.co/datasets/sagteam/cedr_v1.
16
null
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:multi-label-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:ru", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "emotion-classification" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181dae
google-research-datasets/cfq
google-research-datasets
false
False
2024-01-18T14:16:34.000Z
3
false
6627f9390245fe11ef09f349b82f6c89f577aabf
Dataset Card for "cfq" Dataset Summary The Compositional Freebase Questions (CFQ) is a dataset that is specifically designed to measure compositional generalization. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also provides for each question a corresponding SPARQL query against the Freebase knowledge base. This means that CFQ can also be used for semantic parsing. Supported Tasks and Leaderboards More… See the full description on the dataset page: https://huggingface.co/datasets/google-research-datasets/cfq.
14
cfq
[ "task_categories:question-answering", "task_categories:other", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-4.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:1912.09713", "region:us", "compositionality" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181daf
shiyue/chr_en
shiyue
false
False
2024-01-18T14:19:36.000Z
3
false
1b111eca2b6f2c08ff347b916a3b9cf05642a135
Dataset Card for ChrEn Dataset Summary ChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English. ChrEn is extremely low-resource, containing 14k sentence pairs in total, split in ways that facilitate both in-domain and out-of-domain evaluation. ChrEn also contains 5k Cherokee monolingual sentences to enable semi-supervised learning. Supported Tasks and Leaderboards The dataset is intended to use… See the full description on the dataset page: https://huggingface.co/datasets/shiyue/chr_en.
8
chren
[ "task_categories:fill-mask", "task_categories:text-generation", "task_categories:translation", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "annotations_creators:found", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "multilinguality:multilingual", "multilinguality:translation", "source_datasets:original", "language:chr", "language:en", "license:other", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2010.04791", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181db0
uoft-cs/cifar10
uoft-cs
false
False
2024-01-04T06:53:11.000Z
53
false
0b2714987fa478483af9968de7c934580d0bb9a2
Dataset Card for CIFAR-10 Dataset Summary The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may… See the full description on the dataset page: https://huggingface.co/datasets/uoft-cs/cifar10.
20,119
cifar-10
[ "task_categories:image-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:extended|other-80-Million-Tiny-Images", "language:en", "license:unknown", "size_categories:10K<n<100K", "format:parquet", "modality:image", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181db1
uoft-cs/cifar100
uoft-cs
false
False
2024-01-04T06:57:47.000Z
31
false
aadb3af77e9048adbea6b47c21a81e47dd092ae5
Dataset Card for CIFAR-100 Dataset Summary The CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, with 600 images per class. There are 500 training images and 100 testing images per class. There are 50000 training images and 10000 test images. The 100 classes are grouped into 20 superclasses. There are two labels per image - fine label (actual class) and coarse label (superclass). Supported Tasks and Leaderboards… See the full description on the dataset page: https://huggingface.co/datasets/uoft-cs/cifar100.
2,303
cifar-100
[ "task_categories:image-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:extended|other-80-Million-Tiny-Images", "language:en", "license:unknown", "size_categories:10K<n<100K", "format:parquet", "modality:image", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181db2
google-research-datasets/circa
google-research-datasets
false
False
2024-01-18T14:21:12.000Z
3
false
faa1b5a78dd926a899bcd4da289c2e3abe8061a9
Dataset Card for CIRCA Dataset Summary The Circa (meaning ‘approximately’) dataset aims to help machine learning systems to solve the problem of interpreting indirect answers to polar questions. The dataset contains pairs of yes/no questions and indirect answers, together with annotations for the interpretation of the answer. The data is collected in 10 different social conversational situations (eg. food preferences of a friend). The following are the situational… See the full description on the dataset page: https://huggingface.co/datasets/google-research-datasets/circa.
9
circa
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2010.03450", "region:us", "question-answer-pair-classification" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181db3
google/civil_comments
google
false
False
2024-01-25T08:23:15.000Z
10
false
f2970eb3a55777454c94069077cc8d9b5866312d
Dataset Card for "civil_comments" Dataset Summary The comments in this dataset come from an archive of the Civil Comments platform, a commenting plugin for independent news sites. These public comments were created between 2015 and 2017 and appeared on approximately 50 English-language news sites across the world. When Civil Comments shut down in 2017, they chose to make the public comments available in a lasting open archive to enable future research. The original… See the full description on the dataset page: https://huggingface.co/datasets/google/civil_comments.
1,479
civil-comments
[ "task_categories:text-classification", "task_ids:multi-label-classification", "language:en", "license:cc0-1.0", "size_categories:1M<n<10M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:1903.04561", "region:us", "toxic-comment-classification" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181db4
community-datasets/clickbait_news_bg
community-datasets
false
False
2024-01-18T14:25:02.000Z
1
false
116216daae4af666df84c0c3296c92d2ff9bcb29
Dataset Card for Clickbait/Fake News in Bulgarian Dataset Summary This is a corpus of Bulgarian news over a fixed period of time, whose factuality had been questioned. The news articles come from 377 different sources across various domains, including politics, interesting facts, and tips & tricks. The dataset was prepared for the Hack the Fake News hackathon. It was provided by the Bulgarian Association of PR Agencies and is available in Gitlab. The corpus was automatically… See the full description on the dataset page: https://huggingface.co/datasets/community-datasets/clickbait_news_bg.
11
null
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "source_datasets:original", "language:bg", "license:unknown", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181db5
tdiggelm/climate_fever
tdiggelm
false
False
2024-01-18T14:28:07.000Z
18
false
ae61ccb9320a78109a246414139ff3a2bd677b8b
Dataset Card for ClimateFever Dataset Summary A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected from the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute, or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple… See the full description on the dataset page: https://huggingface.co/datasets/tdiggelm/climate_fever.
131
climate-fever
[ "task_categories:text-classification", "task_categories:text-retrieval", "task_ids:text-scoring", "task_ids:fact-checking", "task_ids:fact-checking-retrieval", "task_ids:semantic-similarity-scoring", "task_ids:multi-input-text-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:extended|wikipedia", "source_datasets:original", "language:en", "license:unknown", "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2012.00614", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181db6
clinc/clinc_oos
clinc
false
False
2024-01-18T14:33:10.000Z
11
false
155b9c710419136e17307b80d0a13e68cd46b4ec
Dataset Card for CLINC150 Dataset Summary Task-oriented dialog systems need to know when a query falls outside their range of supported intents, but current text classification corpora only define label sets that cover every example. We introduce a new dataset that includes queries that are out-of-scope (OOS), i.e., queries that do not fall into any of the system's supported intents. This poses a new challenge because models cannot assume that every query at… See the full description on the dataset page: https://huggingface.co/datasets/clinc/clinc_oos.
161
clinc150
[ "task_categories:text-classification", "task_ids:intent-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-3.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181db7
clue/clue
clue
false
False
2024-01-17T07:48:08.000Z
40
false
28178267a609dd08bdc703dd6c931dfc2c2f4431
Dataset Card for "clue" Dataset Summary CLUE, A Chinese Language Understanding Evaluation Benchmark (https://www.cluebenchmarks.com/) is a collection of resources for training, evaluating, and analyzing Chinese language understanding systems. Supported Tasks and Leaderboards More Information Needed Languages More Information Needed Dataset Structure Data Instances afqmc Size of downloaded… See the full description on the dataset page: https://huggingface.co/datasets/clue/clue.
4,896
clue
[ "task_categories:text-classification", "task_categories:multiple-choice", "task_ids:topic-classification", "task_ids:semantic-similarity-scoring", "task_ids:natural-language-inference", "task_ids:multiple-choice-qa", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "source_datasets:original", "language:zh", "license:unknown", "size_categories:100K<n<1M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2004.05986", "region:us", "coreference-nli", "qa-nli" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181db8
hfl/cmrc2018
hfl
false
False
2024-08-08T06:11:44.000Z
20
false
137f2c45a24275fb68f6961c4d357f46288886aa
Dataset Card for "cmrc2018" Dataset Summary A span-extraction dataset for Chinese machine reading comprehension, intended to add language diversity in this area. The dataset is composed of nearly 20,000 real questions annotated on Wikipedia paragraphs by human experts. We also annotated a challenge set which contains questions that require comprehensive understanding and multi-sentence inference throughout the context. Supported Tasks and Leaderboards More… See the full description on the dataset page: https://huggingface.co/datasets/hfl/cmrc2018.
139
cmrc-2018
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:zh", "license:cc-by-sa-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181db9
festvox/cmu_hinglish_dog
festvox
false
False
2024-01-18T14:36:48.000Z
7
false
19796c6fb32020154cb2745d48704fa73e29b17d
Dataset Card for CMU Document Grounded Conversations Dataset Summary This is a collection of text conversations in Hinglish (code-mixing between Hindi and English) and their corresponding English versions. It can be used for translation between the two. The dataset has been provided by Prof. Alan Black's group from CMU. Supported Tasks and Leaderboards abstractive-mt Languages Dataset Structure Data Instances… See the full description on the dataset page: https://huggingface.co/datasets/festvox/cmu_hinglish_dog.
11
null
[ "task_categories:translation", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "multilinguality:translation", "source_datasets:original", "language:en", "language:hi", "license:cc-by-sa-3.0", "license:gfdl", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:1809.07358", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181dba
abisee/cnn_dailymail
abisee
false
False
2024-01-18T15:31:34.000Z
203
false
96df5e686bee6baa90b8bee7c28b81fa3fa6223d
Dataset Card for CNN Dailymail Dataset Dataset Summary The CNN / DailyMail Dataset is an English-language dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail. The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering. Supported Tasks and Leaderboards… See the full description on the dataset page: https://huggingface.co/datasets/abisee/cnn_dailymail.
8,394
cnn-daily-mail-1
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:apache-2.0", "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
2022-03-02T23:29:22.000Z
null
621ffdd236468d709f181dbb
google-research-datasets/coached_conv_pref
google-research-datasets
false
False
2024-01-18T09:16:22.000Z
2
false
c5a050c88eea9927dc6b914184b1c2b2d031cd07
A dataset consisting of 502 English dialogs with 12,000 annotated utterances between a user and an assistant discussing movie preferences in natural language. It was collected using a Wizard-of-Oz methodology between two paid crowd-workers, where one worker plays the role of an 'assistant', while the other plays the role of a 'user'. The 'assistant' elicits the 'user’s' preferences about movies following a Coached Conversational Preference Elicitation (CCPE) method. The assistant asks questions designed to minimize the bias in the terminology the 'user' employs to convey his or her preferences as much as possible, and to obtain these preferences in natural language. Each dialog is annotated with entity mentions, preferences expressed about entities, descriptions of entities provided, and other statements of entities.
20
coached-conversational-preference-elicitation
[ "task_categories:other", "task_categories:text-generation", "task_categories:fill-mask", "task_categories:token-classification", "task_ids:dialogue-modeling", "task_ids:parsing", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "size_categories:n<1K", "region:us", "Conversational Recommendation" ]
2022-03-02T23:29:22.000Z
@inproceedings{48414, title = {Coached Conversational Preference Elicitation: A Case Study in Understanding Movie Preferences}, author = {Filip Radlinski and Krisztian Balog and Bill Byrne and Karthik Krishnamoorthi}, year = {2019}, booktitle = {Proceedings of the Annual SIGdial Meeting on Discourse and Dialogue} }

Hugging Face Hub Stats

Updated Daily