classes (bool, 2 values) | text (string, 0-664k chars) |
|---|---|
false | # Dataset Card for "github-code-scala"
This contains just the Scala data in [github-code-clean](https://huggingface.co/datasets/codeparrot/github-code). There are 817k samples with a total download size of 1.52 GB. |
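A minimal loading sketch for such a subset, assuming the `languages` filter documented on the upstream [github-code](https://huggingface.co/datasets/codeparrot/github-code) card (the exact repo id of this Scala subset is not shown here, so the sketch filters the upstream dataset instead):
```python
from datasets import load_dataset

# Sketch, not the card's own instructions: stream only Scala files
# from the upstream codeparrot/github-code dataset via its documented
# `languages` filter. trust_remote_code is required for script-based
# datasets in recent versions of the `datasets` library.
ds = load_dataset(
    "codeparrot/github-code",
    split="train",
    streaming=True,
    languages=["Scala"],
    trust_remote_code=True,
)
print(next(iter(ds))["code"][:200])
```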
false |
# Dataset Card for Habr QnA
## Table of Contents
- [Dataset Card for Habr QnA](#dataset-card-for-habr-qna)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
... |
false | |
false | |
true | |
true |
# Cyrillic dataset of 8 Turkic languages spoken in Russia and former USSR
## Dataset Description
The dataset is part of the [Leipzig Corpora (Wiki) Collection](https://corpora.uni-leipzig.de/).
For the text-classification comparison, Russian has been included in the dataset.
**Paper:**
Dirk Goldhahn, Thomas Eck... |
true | # Dataset Card for "TASTESet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false |
This dataset is curated by [GIZ Data Service Center](https://www.giz.de/expertise/html/63018.html) in the form of a SQuAD-style dataset with features `question`, `answer`, `answer_start`, `context` and `language`.
The source dataset for this comes from [Changing Transport Tracker](https://changing-transport.org/tracker/),
w... |
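To make the SQuAD-style layout concrete, here is a sketch with invented values; only the field names come from the card. `answer_start` is a character offset into `context`, so the answer text can be recovered by slicing:
```python
# Invented example record; field names follow the card above.
example = {
    "question": "What year is the target?",
    "context": "The plan sets a target for 2030.",
    "answer": "2030",
    "answer_start": 27,
    "language": "en",
}

# `answer_start` anchors the answer inside `context`.
start = example["answer_start"]
end = start + len(example["answer"])
assert example["context"][start:end] == example["answer"]
```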
false | # Dataset Card for "spanish-chinese"
All sentences are extracted from the United Nations Parallel Corpus v1.0.
The parallel corpus consists of manually translated United Nations documents for the six
official UN languages: Arabic, Chinese, English, French, Russian, and Spanish.
The corpus is freely available for downloa... |
false | # AutoTrain Dataset for project: vision-tcg
## Dataset Description
This dataset has been automatically processed by AutoTrain for project vision-tcg.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
... |
false | tags:
- autotrain
- translation
language:
- en
- unk
datasets:
- Maghrebi/autotrain-data-90
co2_eq_emissions:
emissions: 0.0075699862718682795 |
false | # AutoTrain Dataset for project: data-image
## Dataset Description
This dataset has been automatically processed by AutoTrain for project data-image.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
... |
false | |
false |
This dataset contains an automatically generated set of Question and Answers extracted from the "TESTO UNICO SULLA SALUTE E SICUREZZA SUL LAVORO 81/08" document [link](https://www.lavoro.gov.it/documenti-e-norme/studi-e-statistiche/Documents/Testo%20Unico%20sulla%20Salute%20e%20Sicurezza%20sul%20Lavoro/Testo-Unico-81-... |
true |
# Constructive and Toxic Speech Detection for Open-domain Social Media Comments in Vietnamese
This is the official repository for the UIT-ViCTSD dataset from the paper [Constructive and Toxic Speech Detection for Open-domain Social Media Comments in Vietnamese](https://arxiv.org/pdf/2103.10069.pdf), which was accepted... |
true |
# MagicPrompt_SD_Washed
It is a cleaned ("washed") version of the [Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion) dataset.
When I tried to train a model on the original data, some bad prompts broke the model and wasted a lot of time.
So I washed the original dataset:
1. 😄 delete some mea... |
false | Test Only |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains more than 2.1 million negative user reviews (reviews rated 1 or 2 stars) from 9775 apps across 48 categories from Google Pl... |
false | # Dataset Card for "tv_dialogue"
This dataset contains transcripts for famous movies and TV shows from multiple sources.
An example dialogue would be:
```
[PERSON 1] Hello
[PERSON 2] Hello Person 2!
How's it going?
(they are both talking)
[PERSON 1] I like being an example
on Huggingface!
They are examples on Hugg... |
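A sketch of parsing this layout into (speaker, utterance) turns; the parsing rules (bracketed speaker tags, parenthesized stage directions, untagged continuation lines) are inferred from the example above, not stated by the card:
```python
import re

# Sketch under the assumptions above: split a transcript into turns.
# Lines in (parentheses) are treated as stage directions; untagged
# lines continue the previous utterance.
transcript = """\
[PERSON 1] Hello
[PERSON 2] Hello Person 2!
How's it going?
(they are both talking)
[PERSON 1] I like being an example
on Huggingface!"""

turns = []
for line in transcript.splitlines():
    m = re.match(r"\[([^\]]+)\]\s*(.*)", line)
    if m:
        speaker, text = m.groups()
        turns.append([speaker, text])
    elif line.startswith("("):
        turns.append(["STAGE", line.strip("()")])
    elif turns:
        turns[-1][1] += " " + line.strip()

for spk, text in turns:
    print(f"{spk}: {text}")
```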
false | = |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
false | just a test to see how this works |
false |
Dataset generated from HKR train set using Stackmix
=========================================
Number of images: 2476836
Sources:
* [HKR dataset](https://github.com/abdoelsayed2016/HKR_Dataset)
* [Stackmix code](https://github.com/ai-forever/StackMix-OCR)
|
true | # Dataset Card for "torch-forum"
Dataset structure
```
{
  title: str,
  category: str,
  posts: List[{
    poster: str,
    contents: str,
    likes: int,
    isAccepted: bool
  }]
}
``` |
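Given that structure, here is a sketch of pulling the accepted answers out of a thread; the record values below are invented, and only the field names follow the structure above:
```python
# Invented thread record; field names follow the documented structure.
thread = {
    "title": "How do I move a tensor to the GPU?",
    "category": "vision",
    "posts": [
        {"poster": "alice", "contents": "Use .to('cuda').",
         "likes": 3, "isAccepted": True},
        {"poster": "bob", "contents": "Or .cuda(), same thing.",
         "likes": 1, "isAccepted": False},
    ],
}

# Collect the contents of every accepted post.
accepted = [p["contents"] for p in thread["posts"] if p["isAccepted"]]
print(thread["title"], "->", accepted)
```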
false | |
false | # Dataset Card for "ignatius"
This dataset was created to participate in the Keras DreamBooth sprint. It is based on the Spanish comedian [Ignatius Farray](https://es.wikipedia.org/wiki/Ignatius_Farray)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-th... |
false | |
false | |
false | |
false |
Dataset generated from cyrillic train set using Stackmix
========================================================
Number of images: 3700269
Sources:
* [Cyrillic dataset](https://www.kaggle.com/datasets/constantinwerner/cyrillic-handwriting-dataset)
* [Stackmix code](https://github.com/ai-forever/StackMix-OCR)
|
false | # Dataset Card for "Babelscape-wikineural-joined"
This dataset is a merged version of [wikineural](https://huggingface.co/datasets/Babelscape/wikineural)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
<pre><code>
@inproceedings{te... |
false |
# Dataset Card for SAIL 2017
### Dataset Summary
The dataset was part of the Shared Task on Sentiment Analysis in Indian Languages (SAIL) Tweets, presented at FIRE 2017.
### Languages
Code-Mixed sentences in English and Hindi
### Source Data
http://amitavadas.com/SAIL/data.html
#### Initial Data Collection... |
false |
Dataset generated using handwritten fonts
=========================================
Number of images: 2634473
Sources:
* [Handwriting generation code](https://github.com/NastyBoget/HandwritingGeneration)
The code was executed with the `hkr` option (with fewer augmentations)
|
false |
# Mnist-Ambiguous
This dataset contains MNIST-like images, but with unclear ground truth. For each image, there are two classes that could be considered true.
Robust and uncertainty-aware DNNs should thus detect and flag these issues.
### Features
Same as mnist, the supervised dataset has an `image` (28x28 int ... |
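One common way such flagging is done is predictive entropy over the softmax output; the card itself does not prescribe a method, and the model outputs below are invented:
```python
import numpy as np

# Sketch (not from the card): an image that plausibly belongs to two
# classes yields a much higher predictive entropy than a clear-cut one.
def predictive_entropy(probs: np.ndarray) -> float:
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())

clear = np.array([0.91] + [0.01] * 9)           # hypothetical outputs
ambiguous = np.array([0.46, 0.46] + [0.01] * 8)  # two plausible classes

print(predictive_entropy(clear))      # low
print(predictive_entropy(ambiguous))  # high -> flag for review
```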
false | |
false | # AutoTrain Dataset for project: test
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test.
### Languages
The BCP-47 code for the dataset's language is fr.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"tokens... |
false |
Dataset generated using handwritten fonts
=========================================
Number of images: 3700269
Sources:
* [Handwriting generation code](https://github.com/NastyBoget/HandwritingGeneration)
The code was executed with the `cyrillic` option (with more augmentations)
|
false | |
false | # Mtet
- Source: https://github.com/vietai/mTet
- Num examples:
- 8,327,706 (train)
- 3,106 (validation)
- 2,536 (test)
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/mtet-prompt-envi")
``` |
false | # AutoTrain Dataset for project: skill2go_summ_mbart
## Dataset Description
This dataset has been automatically processed by AutoTrain for project skill2go_summ_mbart.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as foll... |
false | # AutoTrain Dataset for project: t5-autotrain
## Dataset Description
This dataset has been automatically processed by AutoTrain for project t5-autotrain.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
... |
true | # AutoTrain Dataset for project: cv-sentiment
## Dataset Description
This dataset has been automatically processed by AutoTrain for project cv-sentiment.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[... |
false |
## Corpus Summary
This corpus has 192,050 entries consisting of sentences describing the faces in the CelebA dataset.
The corpus was preprocessed by translating the CelebA captions into Spanish with the algorithm used in [Text2FaceGAN](https://arxiv.org/pdf/1911.11378.pdf).
In particular, a... |
false |
Dataset generated from HKR train set using ScrabbleGAN
======================================================
Number of images: 2476836
Sources:
* [HKR dataset](https://github.com/abdoelsayed2016/HKR_Dataset)
* [ScrabbleGAN code](https://github.com/ai-forever/ScrabbleGAN)
|
false |
## Corpus Summary
This corpus contains 250,000 entries, each a pair of sentences in Spanish with their respective similarity value in the range 0 to 1. This corpus was used in the training of the
[sentence-transformer](https://www.sbert.net/) library to improve the efficiency of the [RoBERTa-large-bne](https://hu... |
false |
# Latvian text dataset
Dataset of Latvian-language texts, intended for use in AI tool development such as speech recognition or spellcheckers.
## Data sources used
* Latvian Wikisource articles - https://wikisource.org/wiki/Category:Latvian
* Literary works of Rainis - https://repository.clarin.lv/repository/xmlui/ha... |
false | |
false | |
false | ## Spongebob Transcripts Dataset 🧽
The Spongebob Transcripts Dataset is a collection of transcripts from the beloved animated television series, Spongebob Squarepants. This dataset includes information on each line of dialogue spoken by a character, including the character's name, their line, and the episo... |
false | # Dataset Card for "wavenet_flashback"
https://cloud.google.com/text-to-speech/docs/reference/rest/v1/text/synthesize#AudioConfig
sv-SE-Wavenet-{voice}
https://spraakbanken.gu.se/resurser/flashback-dator |
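A sketch of the synthesis call those links describe, using the standard Google Cloud TTS Python client; the concrete voice letter ("A") and the sample sentence are assumptions:
```python
from google.cloud import texttospeech

# Sketch (requires Google Cloud credentials): synthesize Swedish
# speech with an sv-SE Wavenet voice, matching the
# sv-SE-Wavenet-{voice} pattern named above. Voice "A" is assumed.
client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Hej, hur mår du?"),
    voice=texttospeech.VoiceSelectionParams(
        language_code="sv-SE",
        name="sv-SE-Wavenet-A",
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.LINEAR16,
    ),
)

with open("output.wav", "wb") as f:
    f.write(response.audio_content)
```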
false | # Dataset Card for "QM9"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is based on the mozilla-foundation/common_voice_11_0 dataset on Hugging Face.
It is still not finished; I'll adjust it.
This dataset card aims to be... |
false | |
false |
The [`tatsu-lab/alpaca` dataset](https://huggingface.co/datasets/tatsu-lab/alpaca) was split into train/test/val with the goal of training text-to-text generation models to generate instruction prompts corresponding to arbitrary text.
To do this, you would use
- `output` as **the text2text model** input column
- ... |
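Concretely, the described setup might look like the following sketch. The target column is presumably `instruction`, per the stated goal of generating instruction prompts; the split repo id is not shown above, so the sketch loads the upstream dataset:
```python
from datasets import load_dataset

# Sketch under the assumptions above: use `output` as the text2text
# source and `instruction` as the target.
ds = load_dataset("tatsu-lab/alpaca", split="train")  # upstream dataset

def to_text2text(example):
    return {
        "source": example["output"],       # text2text model input
        "target": example["instruction"],  # text the model learns to emit
    }

pairs = ds.map(to_text2text, remove_columns=ds.column_names)
print(pairs[0])
```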
false | # Dataset Card for "OIG_small_chip2_portuguese_brasil"
This dataset was translated into Brazilian Portuguese from [here](https://huggingface.co/datasets/0-hero/OIG-small-chip2)
The data was translated with the *MarianMT* model and weights [Helsinki-NLP/opus-mt-en-ROMANCE](https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANC... |
false | |
false | # Dataset Card for Fragment Of Bookcorpus
## Dataset Description
A smaller sample of the bookcorpus dataset, which includes around 100,000 lines of text
(in comparison to the original bookcorpus's ~74.1 million lines of text).
### Dataset Summary
Modified and uploaded to the Hugging Face library as a part of ... |
false | # Dataset Card for "letter_recognition"
Images in this dataset were generated using the script defined below. The original dataset in CSV format, along with more information about it, is available at [A-Z Handwritten Alphabets in .csv format](https://www.kaggle.com/datasets/sachinpatel21/az-handwritten-alphabets-... |
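The card's actual script is truncated above; as a hedged stand-in, a conversion along these lines would work, assuming the Kaggle CSV's distributed layout of one label column (0-25 for A-Z) followed by 784 grayscale pixel values per row, and its usual file name:
```python
import csv

import numpy as np
from PIL import Image

# Hedged stand-in for the card's (truncated) generation script: each
# CSV row is one label plus a flattened 28x28 grayscale image.
with open("A_Z Handwritten Data.csv", newline="") as f:  # assumed file name
    for i, row in enumerate(csv.reader(f)):
        label = chr(ord("A") + int(row[0]))
        pixels = np.array([int(v) for v in row[1:]], dtype=np.uint8)
        Image.fromarray(pixels.reshape(28, 28), mode="L").save(f"{label}_{i}.png")
        if i >= 9:  # demo: convert only the first ten rows
            break
```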
false | |
false | # AutoTrain Dataset for project: amber-mines
## Dataset Description
This dataset has been automatically processed by AutoTrain for project amber-mines.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
... |
false | - **StorySmithGPT** - You are StorySmithGPT and you excel at crafting immersive and engaging stories. Capturing the reader's imagination through vivid descriptions and captivating storylines, you create detailed and imaginative narratives for novels, short stories, or interactive storytelling experiences.
- **TimeWarp... |
false | # AutoTrain Dataset for project: t5baseparaphrase
## Dataset Description
This dataset has been automatically processed by AutoTrain for project t5baseparaphrase.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
... |
false | |
false |
# SlovAlpaca dataset
This dataset was created using machine translation (DeepL) of the original Alpaca dataset published here: https://github.com/tatsu-lab/stanford_alpaca
Here is an example of the first record...
```json
[
{
"instruction": "Uveďte tri tipy, ako si udržať zdravie.",
"input": "",
... |
false |
Dataset generated using handwritten fonts
=========================================
Number of images: 300000
Sources:
* [Handwriting generation code](https://github.com/NastyBoget/HandwritingGeneration)
The code was executed with the `hkr` option (with fewer augmentations) |
false | |
false | https://huggingface.co/datasets/Samuelcr8/Eva |
false |
Single- and multi-turn dialogue corpora extracted from novels and other sources. |
true | # Dataset Card for "split-imdb"
|
false |
Dataset generated from cyrillic train set using Stackmix
========================================================
Number of images: 300000
Sources:
* [Cyrillic dataset](https://www.kaggle.com/datasets/constantinwerner/cyrillic-handwriting-dataset)
* [Stackmix code](https://github.com/ai-forever/StackMix-OCR)
|
false |
Dataset generated from HKR train set using ScrabbleGAN
======================================================
Number of images: 300000
Sources:
* [HKR dataset](https://github.com/abdoelsayed2016/HKR_Dataset)
* [ScrabbleGAN code](https://github.com/ai-forever/ScrabbleGAN) |
true | # Dataset Card for "sentences-and-emotions"
Recognizing Emotion Cause in Conversations. Soujanya Poria, Navonil Majumder, Devamanyu Hazarika, Deepanway Ghosal, Rishabh Bhardwaj, Samson Yu Bai Jian, Pengfei Hong, Romila Ghosh, Abhinaba Roy, Niyati Chhaya, Alexander Gelbukh, Rada Mihalcea. Cognitive Computation (2021). |
false | |
false | # Dataset Card for Banc Trawsgrifiadau Bangor
This dataset is a bank of 20 hours 6 minutes and 49 seconds of segments of natural speech from over 50 contributors in mp3 file format, together with corresponding 'verbatim' transcripts of the speech in .tsv file format. The majority of the speech is spontaneous, natural ... |
false | # Disclaimer
This was inspired from https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions
# Dataset Card for A subset of Vivian Maier's photographs BLIP captions
The captions are generated with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
For each row the dataset contains `image` and `... |
false | # AutoTrain Dataset for project: meme-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project meme-classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as foll... |
true |
# MORFITT
## Data ([Zenodo](https://zenodo.org/record/7893841#.ZFLFDnZBybg)) | Publication ([arXiv](TODO) / [HAL](TODO) / [ACL Anthology](TODO))
[Yanis LABRAK](https://www.linkedin.com/in/yanis-labrak-8a7412145/), [Richard DUFOUR](https://cv.hal.science/richard-dufour), [Mickaël ROUVIER](https://cv.hal.science/micka... |
false | # Dataset Card for "somos-alpaca-es"
This dataset is a translated version of the Alpaca dataset, in Spanish.
This dataset serves as the reference for the collaborative effort to clean and improve the dataset during the SomosNLP 2023 hackathon.
The more people and teams that participate, the higher the quality f... |
true |
# Dataset Card for IMDB 3000 Sphere
- **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
## Dataset Summary
Large Movie Review Dataset.
This is a 3000 item selection from the `imdb` dataset for binary sentiment classification for use in the Sphere course on ... |
true | dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: stri... |
false | # AutoTrain Dataset for project: tree-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project tree-classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as foll... |
false |
Dataset generated from Cyrillic train set using ScrabbleGAN
======================================================
Number of images: 300000
Sources:
* [Cyrillic dataset](https://www.kaggle.com/datasets/constantinwerner/cyrillic-handwriting-dataset)
* [ScrabbleGAN code](https://github.com/ai-forever/ScrabbleGAN) |
false | # AutoTrain Dataset for project: clasificacion_pisicinas
## Dataset Description
This dataset has been automatically processed by AutoTrain for project clasificacion_pisicinas.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks... |
false | # Dataset Card for "gen-qm-17000"
### Dataset Summary
Dataset for converting requests into queries and extracting model names.
DEV/VAL/TEST: 90/10/10
SIZE: 17000
### Supported Tasks and Leaderboards
The tasks represented in GEN-QM cover text2text generation for producing queries based on requests or extracting mode... |
false | # Tree-disease
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<259x194 RGB PIL image>",
"target": 1
},
{
"image": "<275x183 RGB PIL image>",
"target": 16
}
]
```
##... |
false | ### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<265x190 RGB PIL image>",
"target": 10
},
{
"image": "<800x462 RGB PIL image>",
"target": 6
}
]
```
### Dataset Fields... |
false | |
true |
# ParaDetox: Detoxification with Parallel Data (English). Paraphrase Task Negative Results
This repository contains information about **Paraphrase Task** markup from [English Paradetox dataset](https://huggingface.co/datasets/s-nlp/paradetox) collection pipeline.
In this dataset, the samples that were marked as *"can... |
false |
Redistribution of data from https://www.sciencebase.gov/catalog/item/573ccf18e4b0dae0d5e4b109. Some files renamed for consistency. Corrupted or missing files replaced with data from https://landsat.usgs.gov/landsat-7-cloud-cover-assessment-validation-data.
Landsat Data Distribution Policy: https://www.usgs.gov/media/... |
false | # AutoTrain Dataset for project: leaf
## Dataset Description
This dataset has been automatically processed by AutoTrain for project leaf.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image... |
true |
# ParaDetox: Detoxification with Parallel Data (Russian). Paraphrase Task Negative Results
This repository contains information about **Paraphrase Task** markup from [Russian Paradetox dataset](https://huggingface.co/datasets/s-nlp/ru_paradetox) collection pipeline.
## ParaDetox Collection Pipeline
The ParaDetox D... |
true |
# ParaDetox: Detoxification with Parallel Data (Russian). Toxicity Task Results
This repository contains information about **Toxicity Task** markup from [Russian Paradetox dataset](https://huggingface.co/datasets/s-nlp/ru_paradetox) collection pipeline.
## ParaDetox Collection Pipeline
The ParaDetox Dataset collec... |
false |
Redistribution of data from https://landsat.usgs.gov/landsat-8-cloud-cover-assessment-validation-data, masks modified to add georeferencing metadata.
Landsat Data Distribution Policy: https://www.usgs.gov/media/files/landsat-data-distribution-policy |
false | # Dataset Card for PIEs corpus
### Dataset Summary
This corpus is a collection of 57,170 potentially idiomatic expressions (PIEs) based on the British National Corpus, prepared for the NER task.
Each object comes with a contextual set of tokens, BIO tags, and a boolean label.
The data sources are:
* [MAGPIE corpu... |
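To make that record layout concrete, a hypothetical PIE object might look like this; the field names paraphrase the description above and the values are invented:
```python
# Hypothetical PIE record (illustrative only): tokens, BIO tags, and a
# boolean label saying whether the expression is used idiomatically.
example = {
    "tokens": ["He", "kicked", "the", "bucket", "yesterday"],
    "tags": ["O", "B-PIE", "I-PIE", "I-PIE", "O"],  # BIO tags over tokens
    "idiomatic": True,
}

# The PIE span can be recovered by scanning for the B-/I- run.
span = [t for t, tag in zip(example["tokens"], example["tags"]) if tag != "O"]
print(" ".join(span))  # -> "kicked the bucket"
```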
false | # AutoTrain Dataset for project: paraphrases
## Dataset Description
This dataset has been automatically processed by AutoTrain for project paraphrases.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
... |
false |
# Dataset Card for "deep-research"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structu... |
true | # Dataset Card for "GTA V Myths"
List of Myths in GTA V, extracted from [Caylus's Channel](https://www.youtube.com/watch?v=bKKOBbWy2sQ&ab_channel=Caylus)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false |
# Dataset Card for "luganda_english_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Dataset might contain a few mistakes, especially in the one-word translations. Indicators for verbs and nouns (v.i and n.i) may not have ... |
false | # AutoTrain Dataset for project: pegasus-reddit-summarizer
## Dataset Description
This dataset has been automatically processed by AutoTrain for project pegasus-reddit-summarizer.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset lo... |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The DACCORD dataset is a collection of 1034 sentence pairs annotated as a binary classification task for automatic detection of contradictions bet... |