| classes | text |
|---|---|
false |
# Dataset Card for malromur_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fie... |
false | # AutoTrain Dataset for project: image-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project image-classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as fo... |
true |
# NLU Few-shot Benchmark - English and German
This is a few-shot training dataset from the domain of human-robot interaction.
It contains texts in German and English language with 64 different utterances (classes).
Each utterance (class) has exactly 20 samples in the training set.
This leads to a total of 1280 differe... |
false | # Dataset Card for "lexFridmanPodcast-transcript-audio"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [D... |
false |
# FLEURS
## Dataset Description
- **Fine-Tuning script:** [pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition)
- **Paper:** [FLEURS: Few-shot Learning Evaluation of
Universal Representations of Speech](https://arxiv.org/abs/2205.12446)
- **Total amou... |
false |
# Dataset Card for Universal Dependencies Treebank
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instan... |
false |
# Dataset Card for `mmarco/fr/dev`
The `mmarco/fr/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/fr/dev).
# Data
This dataset provides:
- `queries` (i.e., topics); count=101,093
- ... |
false |
# Dataset Card for `mmarco/it`
The `mmarco/it` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/it).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=8,841,823
... |
false |
# Dataset Card for `nfcorpus`
The `nfcorpus` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/nfcorpus#nfcorpus).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=5,371
This... |
false |
# Dataset Card for Wikipedia
This repo is a fork of the original Hugging Face Wikipedia repo [here](https://huggingface.co/datasets/wikipedia).
The difference is that this fork does away with the need for `apache-beam`, and this fork is very fast if you have a lot of CPUs on your machine.
It will use all CPUs availab... |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://github.com/liyucheng09/Metaphor_Generator
- **Repository:** https://github.com/liyucheng09/Metaphor_Generator
- **Paper:** CM-Gen: A Neural Framework for Chinese Metaphor Generation with Explicit Context Modelling
- **Leaderboard:**
- **... |
false |
# MIRACL (bn) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-bn-queries-22-12](https://huggingface.... |
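The precomputed embeddings are typically used for nearest-neighbor search. Below is a minimal sketch of dot-product retrieval over unit-normalized embeddings; the toy 3-dimensional vectors stand in for the model's real output, and names such as `top_k` are illustrative, not from this card:

```python
def dot(a, b):
    # Plain dot product over two equal-length vectors
    return sum(x * y for x, y in zip(a, b))

def top_k(query_emb, corpus, k=2):
    # corpus: list of (doc_id, embedding) pairs; embeddings assumed
    # unit-normalized, so the dot product equals cosine similarity
    scored = sorted(corpus, key=lambda d: dot(query_emb, d[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy embeddings standing in for the real multilingual-22-12 vectors
corpus = [("d1", [1.0, 0.0, 0.0]), ("d2", [0.0, 1.0, 0.0]), ("d3", [0.7, 0.7, 0.0])]
query = [1.0, 0.1, 0.0]
print(top_k(query, corpus))  # d1 scores 1.0, d3 scores 0.77, d2 scores 0.1
```

In practice the same dot-product ranking is done with a vectorized library or an ANN index rather than a Python loop.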
false |
# Snow Mountain
## Dataset Description
- **Paper:** https://arxiv.org/abs/2206.01205
- **Point of Contact:** Joel Mathew
### Dataset Summary
The Snow Mountain dataset contains the audio recordings (in .mp3 format) and the corresponding text of The Bible (contains both Old Testament (OT) and New Testament (NT)) in ... |
false |
# Dataset Card for PlotQA
## Dataset Description
- **PlotQA from here:** [PlotQA](https://github.com/NiteshMethani/PlotQA)
### Dataset Summary
PlotQA is a VQA dataset with 28.9 million question-answer pairs grounded over 224,377 plots on data from real-world sources and questions based on crowd-sourced question te... |
false | # Dataset Card for "Brazilian_Coffee_Scenes"
## Dataset Description
- **Paper** [Do deep features generalize from everyday objects to remote sensing and aerial scenes domains?](https://www.cv-foundation.org/openaccess/content_cvpr_workshops_2015/W13/papers/Penatti_Do_Deep_Features_2015_CVPR_paper.pdf)
### Licensing ... |
false | # Dataset Card for "RSI-CB256"
## Dataset Description
- **Paper** [Exploring Models and Data for Remote Sensing Image Caption Generation](https://ieeexplore.ieee.org/iel7/36/4358825/08240966.pdf)
### Licensing Information
For academic purposes.
## Citation Information
[Exploring Models and Data for Remote Sensi... |
false | # Dataset Card for "bashkir-russian-parallel-corpora"
### How the dataset was assembled.
1. Find the text in two languages; it can be a translated book or an internet page (Wikipedia, a news site).
2. Our algorithm tries to match Bashkir sentences with their Russian translations.
3. We give these pairs to people to che... |
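The card does not say which matching algorithm is used in step 2. As a rough illustration only, a naive length-ratio filter over sentence pairs (a much-simplified stand-in for real alignment methods such as Gale-Church) could look like:

```python
def align_by_length(src_sentences, tgt_sentences, max_ratio=1.6):
    # Greedy 1-1 alignment: pair sentences in order and keep a pair only
    # when the character-length ratio is plausible for a translation.
    pairs = []
    for src, tgt in zip(src_sentences, tgt_sentences):
        ratio = max(len(src), len(tgt)) / max(1, min(len(src), len(tgt)))
        if ratio <= max_ratio:
            pairs.append((src, tgt))
    return pairs

ba = ["Һаумыһығыҙ!", "Был китап бик ҡыҙыҡлы."]
ru = ["Здравствуйте!", "Эта книга очень интересная."]
print(align_by_length(ba, ru))  # both pairs pass the ratio check
```

Real aligners additionally handle 1-2 and 2-1 merges and use bilingual lexicons or embeddings; this sketch only shows the filtering idea before human checking.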
false | # Binhvq News
- Source: https://github.com/binhvq/news-corpus
- Num examples: 19,365,593
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/binhvq_news_vi")
``` |
false |
# m2m3_qualitative_analysis_ocr_ptrn_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the **nested NER task** using the independent NER layers approach [M1].
It contains Paris... |
false | # Opus100
- Source: https://huggingface.co/datasets/opus100
- Num examples:
- 1,000,000 (train)
- 2,000 (validation)
- 192,744 (test)
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/opus100_envi")
```
- Format for Translation task
```python
def preprocess(sample):
e... |
false |
# Bengali Abstractive News Summarization (BANS)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [BANS PAPER](https://doi.org/10.1007/978-981-33-4673-4_4)
- **Leaderboard:**
- **Point of Contact:** [Prithwiraj Bhattacharjee](mailto:prithwiraj_cse@lus.ac.bd)
### Dataset Summary
Nowadays news or tex... |
false | # Dataset Card for "french_simplified"
Files taken from: https://github.com/psawa/alector_corpus/tree/master/corpus |
false |
A cleaned and tokenized version of the English data from [Mozilla Common Voice 11 dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/tree/main).
Cleaning steps:
* Filtered to samples with >2 upvotes and <1 downvote
* Removed non-voice audio at the start and end using PyTorch VAD
Tokenizati... |
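The vote filter in the first cleaning step can be sketched as a plain-Python predicate. The field names `up_votes`/`down_votes` follow the Common Voice schema; the toy samples below are invented for illustration:

```python
def keep_sample(sample):
    # Cleaning rule from the card: more than 2 upvotes and no downvotes
    return sample["up_votes"] > 2 and sample["down_votes"] < 1

samples = [
    {"sentence": "hello world", "up_votes": 3, "down_votes": 0},
    {"sentence": "noisy clip", "up_votes": 1, "down_votes": 0},
    {"sentence": "disputed clip", "up_votes": 5, "down_votes": 2},
]
cleaned = [s for s in samples if keep_sample(s)]
print([s["sentence"] for s in cleaned])  # ['hello world']
```

With the `datasets` library the same predicate would typically be passed to `Dataset.filter`.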
false | |
false |
# Dataset Card for naamapadam
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields... |
false |
# Dataset Card for Wikipedia
## Table of Contents
- [Dataset Card for "wikipedia"](#dataset-card-for-wikipedia)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboar... |
false |
# Dataset Card for liver-disease
** The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/liver-disease
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
liver-disease
### Supported Tasks and Leader... |
false |
# Dataset Card for wine-labels
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/wine-labels
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
wine-labels
### Supported Tasks and Leaderboards... |
true | # Dataset Card for English Historical Quotes
# I-Dataset Summary
english_historical_quotes is a dataset of historical quotes.
This dataset can be used for multi-label text classification and text generation. The content of each quote is in English.
# II-Supported Tasks and Leaderboards
Multi-label text class... |
false |
# Dataset Card for "jomleh"
## Dataset Summary
"Jomleh" is a high-quality Farsi language dataset consisting of sentences that have been carefully preprocessed to ensure they contain only Farsi characters, without any contamination from other languages. The data has been sourced from multiple sources and undergone a ... |
false | # Dataset Card for AbLit
## Dataset Description
- **Homepage:** https://github.com/roemmele/AbLit
- **Repository:** https://github.com/roemmele/AbLit
- **Paper:** https://arxiv.org/pdf/2302.06579.pdf
- **Point of Contact:** melissa@roemmele.io
### Dataset Summary
The AbLit dataset contains **ab**ridged versions of ... |
true |
# Dataset Card for "EmoNoBa"
### Dataset Summary
Detecting multi-labeled emotion across 6 categories: Love, Joy, Surprise, Anger, Sadness, and Fear.
### Citation Information
```
@inproceedings{islam2022emonoba,
title={EmoNoBa: A Dataset for Analyzing Fine-Grained Emotions on Noisy Bangla Texts},
autho... |
true |
# Dataset Card for "EmoNoBa"
### Dataset Summary
Detecting multi-labeled emotion across 6 categories: Love, Joy, Surprise, Anger, Sadness, and Fear.
### Citation Information
```
@inproceedings{islam2022emonoba,
title={EmoNoBa: A Dataset for Analyzing Fine-Grained Emotions on Noisy Bangla Texts},
autho... |
false |
# Anti-Spoofing dataset: replay
The dataset consists of 40,000 videos and selfies of unique people: 15,000 attack replays from 4,000 unique devices, 10,000 attacks with A4 printouts, and 10,000 attacks with cut-out printouts.
# File with the extension .csv
It includes the following information for each media file:
- *... |
true |
# Dataset Card for "SentNoB"
### Dataset Summary
Social Media User Comments' Sentiment Analysis Dataset. Each user comment is labeled as either positive (1), negative (2), or neutral (0).
### Citation Information
```
@inproceedings{islam2021sentnob,
title={SentNoB: A Dataset for Analysing Sentiment on Noisy... |
false |
# prompt3M
3M+ unique prompts collated from multiple sources
```
3129340 rows x 1 columns
'prompt'
```
|
false | |
false | # Summary
`EVILDolly` is an open source dataset of instruction-following records with wrong answers derived from [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k).
The dataset includes answers that are wrong, but appear to be correct and reasonable. The goal is to provide negativ... |
false |
Translated by @Nekofoxtweet (me).
Twitter source: @RindouMikoto |
false | |
true |
# typescript-instruct
A dataset of TypeScript snippets, processed from the typescript subset of [the-stack-smol](https://huggingface.co/datasets/bigcode/the-stack-smol).
# Processing
- Each source file is parsed with the TypeScript AST and queried for 'semantic chunks' of the following types.
```
ClassDeclaration -... |
true | |
false |
## CRAN packages dataset
R and Rmd source code for CRAN packages.
The dataset has been constructed using the following steps:
- Downloaded the latest version of all packages on CRAN (see last updated). The source code was downloaded from the [GitHub mirror](https://github.com/cran).
- Identified the licenses f... |
false |
# Dataset Card for ConflictQA
## Dataset Description
- **Repository:** https://github.com/OSU-NLP-Group/LLM-Knowledge-Conflict
- **Paper:** https://arxiv.org/abs/2305.13300
- **Point of Contact:** [Jian Xie](mailto:jianx0321@gmail.com)
## Citation
If our paper or related resources prove valuable to y... |
false |
# ParsiGoo Dataset Card
This is a Persian multispeaker dataset for text-to-speech purposes. The dataset includes the following speakers:
- ariana_Male2
- moujeze_Female1
- ariana_Male1
- ariana_Female1
## Technical details
#### Non-speech parts at the beginning and end trimmed
#### Sample rate: 22050 Hz
#### D... |
false | # Crossmodal-3600: A Massively Multilingual Multimodal Evaluation Dataset
## Abstract
Research in massively multilingual image captioning has been severely hampered by a lack of high-quality evaluation datasets. In this paper we present the Crossmodal-3600 dataset (XM3600 in short), a geographically-diverse set of 36... |
false |
# Intro
This dataset is a compilation of audio-to-text transcripts from the Lex Fridman Podcast. The podcast, hosted by MIT AI researcher Lex Fridman, is a deep dive into a broad range of topics touching on science, technology, history, philosophy, and the nature of intelligence, consciousnes...
true |
# Dataset Card for "amazon_us_reviews"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-i... |
false | # alpaca-cleaned-ru
Translated version of [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) into Russian.
> WIP. Code prompts and answers are translated incorrectly.
## Dataset Description
- **Repository:** https://github.com/gururise/AlpacaDataCleaned |
false | # toxic_dvach_detoxified
Toxic subset of the [marriamaslova/toxic_dvach](https://huggingface.co/datasets/marriamaslova/toxic_dvach) dataset with a detoxified column produced by the [s-nlp/ruT5-base-detox](https://huggingface.co/s-nlp/ruT5-base-detox) model. |
false |
# Dataset Card for Flores 101
## Table of Contents
- [Dataset Card for Flores 101](#dataset-card-for-flores-101)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderbo... |
true |
# Dataset Card for MeLiSA (Mercado Libre for Sentiment Analysis)
**NOTE: THIS CARD IS UNDER CONSTRUCTION**
**NOTE 2: THE RELEASED VERSION OF THIS DATASET IS A DEMO VERSION.**
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks... |
false |
# Dataset Card for World Bank Project Documents
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structur... |
false | # Dataset Card for XSum NL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)... |
false |
# Dataset Card for "Widdd"
## Dataset Description
WiDDD stands for WIkiData Disambig with Descriptions. The original dataset comes from the [Cetoli et al.](https://arxiv.org/pdf/1810.09164.pdf) paper and is aimed at solving Named Entity Disambiguation. This dataset tries to extract relevant information from entities de...
true | |
true | |
false |
# Dataset Card for notional-python
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Considerations for Using th... |
false |
# Dataset Card for Tilde-MODEL-Catalan
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dat... |
false |
# Dataset Card for ca-text-corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-st... |
false |
# Dataset Card for ca-text-corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-st... |
false |
# Dataset Card for open-source-english-catalan-corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset St... |
false |
# TREC Cast 2019
[TREC Cast](http://www.treccast.ai) has released a document collection with topics and qrels, of which a subset has been annotated so that it is suitable for multi-turn conversational search.
## Dataset statistics
- Passages: 38,426,252
- Topics: 20
- Queries: 173
## Subsets
### CAR + MS... |
false | # Dataset Card for "nostradamus-propheties"
## Dataset Description
### Dataset Summary
The Nostradamus propheties dataset is a set of structured files containing the "Propheties" by Nostradamus, translated into modern English.
The original text consists of 10 "Centuries", every century containing 100 numbered quatrai... |
true | # AutoNLP Dataset for project: devign_raw_test
## Dataset Description
This dataset has been automatically processed by AutoNLP for project devign_raw_test.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json... |
false |
# Dataset Card for Architext
## Dataset Description
This is the raw training data used to train the Architext models referenced in "Architext: Language-Driven Generative Architecture Design".
- **Homepage:** https://architext.design/
- **Paper:** https://arxiv.org/abs/2303.07519
- **Point of Contact:** Theodoros G... |
true |
# Dataset Card for [readability-es-caes]
## Dataset Description
### Dataset Summary
This dataset is a compilation of short articles from websites dedicated to learning Spanish as a second language. These articles have been compiled from the following sources:
- [CAES corpus](http://galvan.usc.es/caes/) (Martínez et ... |
true | This file contains news texts (sentences) belonging to different writing styles. The original dataset, created by Upeksha, D., Wijayarathna, C., Siriwardena, M., Lasandun, L., Wimalasuriya, C., de Silva, N., and Dias, G. (2015), *Implementing a corpus for Sinhala language*, has been processed and cleaned.
If you use this... |
If you use this... |
false |
# Dataset Card for "twitter-pos-vcb"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-ins... |
false |
# Dataset Card for Aksharantar
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-struc... |
true |
# Sinhala-English-Code-Mixed-Code-Switched-Dataset
This dataset contains 10,000 comments that have been annotated at the sentence level for sentiment analysis, humor detection, hate speech detection, aspect identification, and language identification.
The following is the tag scheme.
* Sentiment - Positive, Negativ... |
false |
# Dataset Card for BBNLI
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#da... |
true | # AutoTrain Dataset for project: osdg-sdg-classifier
## Dataset Description
This dataset has been pre-processed using standard Python cleaning functions and further automatically processed by AutoTrain for project osdg-sdg-classifier.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Struc... |
false | # Dataset Card for tweet_eval
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
... |
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instan... |
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instan... |
true |
# SPOLIN
[![CC BY-NC 4.0][cc-by-nc-shield]][cc-by-nc]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Available SPOLIN Versions](#available-spolin-versions)
- [Relevant Links](#relevant-links)
- [Dataset Structure](#dataset-structure)
- [Dataset Stati... |
false |
# Dataset Card for SRSD-Feynman (Easy set)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#d... |
false |
# Dataset Card for answersumm
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields... |
false |
# Rendered SST-2
The [Rendered SST-2 Dataset](https://github.com/openai/CLIP/blob/main/data/rendered-sst2.md) from OpenAI.
Rendered SST-2 is an image classification dataset used to evaluate a model's capability at optical character recognition. This dataset was generated by rendering sentences in the Stanford Sent...
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instan... |
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instan... |
false |
# Dataset Card for ERWT Heritage Made Digital Newspapers training data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languag... |
true | # Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional t... |
true | # Dataset Card for Multilingual HateCheck
## Dataset Description
Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish.
For each language, there are 25+ functional t... |
false | How to use it:
```
from datasets import load_dataset
remote_dataset = load_dataset("VanessaSchenkel/translation-en-pt", field="data")
remote_dataset
```
Output:
```
DatasetDict({
train: Dataset({
features: ['id', 'translation'],
num_rows: 260482
})
})
```
Example:
```
remote_dataset["trai... |
false | # Collection of Korean Proverbs (한국어 속담 모음) v1.0
This dataset was built by cleaning the proverbs in Urimalsaem (우리말샘), the open dictionary of the National Institute of Korean Language.
- Removed proverbs containing words that no longer fit modern usage
- Removed variants expressed in parentheses
- Merged duplicate entries
## Getting the original data
The original data, including explanations of each proverb, can be downloaded from Urimalsaem.
> Proverbs listed in the National Institute of Korean Language's online dictionaries can be browsed with the 'advanced search' feature. Go to 'advanced search' in Urimalsaem, the dictionary that contains the most proverbs, and select 'proverb' to get a list of every proverb in the dictionary.
https://opendict.korean.go.kr/
... |
false |
These are the Chinese generation datasets collected by TextBox, including:
- LCSTS (lcsts)
- CSL (csl)
- ADGEN (adgen)
The detail and leaderboard of each dataset can be found in [TextBox page](https://github.com/RUCAIBox/TextBox#dataset). |
false |
These are the data-to-text generation datasets collected by TextBox, including:
- WebNLG v2.1 (webnlg)
- WebNLG v3.0 (webnlg2)
- WikiBio (wikibio)
- E2E (e2e)
- DART (dart)
- ToTTo (totto)
- ENT-DESC (ent)
- AGENDA (agenda)
- GenWiki (genwiki)
- TEKGEN (tekgen)
- LogicNLG (logicnlg)
- WikiTableT (wikit)
- WEATHERGOV (wg... |
false |
These are the open dialogue datasets collected by TextBox, including:
- PersonaChat (pc)
- DailyDialog (dd)
- DSTC7-AVSD (da)
- SGD (sgd)
- Topical-Chat (tc)
- Wizard of Wikipedia (wow)
- Movie Dialog (md)
- Cleaned OpenSubtitles Dialogs (cos)
- Empathetic Dialogues (ed)
- Curiosity (curio)
- CMU Document Grounded Conve... |
true |
# Popular Surname Nationality Mapping
A sample of popular surnames from 30+ countries, labeled with nationality (language).
|
false |
This is a copy of the [Multi-News](https://huggingface.co/datasets/multi_news) dataset, except that the input source documents of its `test` split have been replaced by documents retrieved with a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `tra... |
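As a rough sketch of what a sparse retriever does in this pipeline, the following toy TF-IDF scorer ranks whitespace-tokenized documents against a query. This is an illustration under simplified assumptions, not the actual retriever used to build the dataset:

```python
import math
from collections import Counter

def idf(term, corpus_tokens):
    # Smoothed inverse document frequency
    df = sum(1 for doc in corpus_tokens if term in doc)
    return math.log((1 + len(corpus_tokens)) / (1 + df)) + 1

def score(query, doc_tokens, corpus_tokens):
    # Sum of term-frequency * idf over the query terms
    tf = Counter(doc_tokens)
    return sum(tf[t] * idf(t, corpus_tokens) for t in query.split())

corpus = {
    "d1": "storm damages coastal towns after heavy rain",
    "d2": "parliament passes new budget bill",
    "d3": "rain and storm warnings issued for the coast",
}
tokens = {k: v.split() for k, v in corpus.items()}
query = "storm rain coast"
ranked = sorted(corpus, key=lambda k: score(query, tokens[k], list(tokens.values())),
                reverse=True)
print(ranked)  # ['d3', 'd1', 'd2']
```

Here the summary would play the role of `query` and the pooled train/validation/test documents the role of `corpus`; production sparse retrievers typically use BM25 over an inverted index instead of this brute-force loop.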
false |
This is a copy of the [WCEP-10](https://huggingface.co/datasets/ccdv/WCEP-10) dataset, except that the input source documents of its `test` split have been replaced by documents retrieved with a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `trai... |
false |
# Dataset Card for Fashionpedia_4_categories
This dataset is a variation of the fashionpedia dataset available [here](https://huggingface.co/datasets/detection-datasets/fashionpedia), with 2 key differences:
- It contains only 4 categories:
- Clothing
- Shoes
- Bags
- Accessories
- New splits were created:
... |
false | # Dataset Card for Flickr_bw_rgb
An image-caption dataset that stores groups of black-and-white and color images with corresponding captions describing the content of each image, with a 'colorized photograph of' or 'black and white photograph of' suffix.
This dataset can then be used for fine-tuning image-to-te... |
false |
# Dataset Card for OLM September/October 2022 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 16% of the September/October 2022 Common Crawl snapshot.
Note: `last_modified_timestamp` was parsed from whatever a website retur... |
false | # Dataset Card for sberdevices_golos_100h_farfield
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instance... |
false | # Dataset Card for COYO-Labeled-300M
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-s... |
false |
# Dataset Card for ACL Anthology Corpus
[](https://creativecommons.org/licenses/by-nc-sa/4.0/)
This repository provides full-text and metadata to the ACL anthology collection (80k articles/posters as of September 2022) also including .pd... |
false |
# Dataset Card for OLM November/December 2022 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 15% of the November/December 2022 Common Crawl snapshot.
Note: `last_modified_timestamp` was parsed from whatever a website retur... |