| classes | text |
|---|---|
false |
# Do what you will with the data: these are old photos of crafts I used to make. Just abide by the licence above and you're good to go! |
false | # Alexa Answers from [alexaanswers.amazon.com](https://alexaanswers.amazon.com/)
The Alexa Answers community helps to improve Alexa’s knowledge and answer questions asked by Alexa users. It contains some very quirky and hard questions, such as:
Q: what percent of the population has blackhair
A: The most common hair col... |
false | ## To use this dataset for your research, please cite the following preprint. The full paper will be available soon.
[Preprint](https://arxiv.org/abs/2212.02842)
### Citation:
@article{thambawita2022visem,
title={VISEM-Tracking: Human Spermatozoa Tracking Dataset},
author={Thambawita, Vajira and Hicks, Steven A and Sto... |
false | # OpenSubtitles
- Source: https://huggingface.co/datasets/open_subtitles
- Num examples: 3,505,276
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/open_subtitles_envi")
```
- Format for Translation task
```python
def preprocess(sample):
eng = sample['en']
vie = sample[... |
false |
# m1_fine_tuning_ocr_ptrn_cmbert_io
## Introduction
This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for a **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories entries from the 1... |
false |
# m2m3_qualitative_analysis_ocr_ptrn_cmbert_io
## Introduction
This dataset was used to perform a **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on a **nested NER task** using the Independent NER layers approach [M1].
It contains Paris t... |
false | # MFAQ
- Source: https://huggingface.co/datasets/clips/mfaq
- Num examples:
- 26,494 (train)
- 663 (validation)
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/mfag_vi")
```
- Format for QA task
```python
def preprocess(sample):
question = sample['question']
an... |
false | # MFAQ
- Source: https://huggingface.co/datasets/clips/mfaq
- Num examples:
- 3,567,659 (train)
- 151,825 (validation)
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/mfaq_en")
```
- Format for QA task
```python
def preprocess(sample):
question = sample['question']
... |
false | |
false |
## Shaded relief image dataset for geomorphological studies of Polish postglacial landscape
This dataset contains 138 PNG images of shaded relief cut into 128x128 arrays. The area that the dataset covers lies within the
two main geomorphological spheres in Poland - postglacial denuded and nonden... |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** <https://duskfallcrew.carrd.co/>
- **Discord:** <https://discord.gg/Da7s8d3KJ7>
### Dataset Summary
A mixture of photography and other goods from Duskfall Crew that has been either curated or taken by Duskfall Crew. Some may or may not be AI generated.
This templa... |
false |
# Do not resell the data; you don't own the data, but you do own the outputs of your training. See the main license for details |
false | # Negative outputs from various models of Stable Diffusion - use at your will to train textual inversions or other things. |
false | # Dataset Card for DuskfallCrewArtStyle_Lora
## Dataset Description
- **Homepage:https://duskfallcrew.carrd.co/**
- **Point of Contact: See the Carrd website for contact info, or DM us on HF**
### Dataset Summary
This data set is the basis for the LoRa that is in this repository.
### Supported Tasks and Lea... |
false |
<h2>Dataset to make the galactic-diffusion</h2>
<h5>num: 133</h5>
<h5>source: <b><i>Entergalactic</i></b> on Netflix</h5>
<h5>including: male, female, male and female, indoor scene, outdoor scene</h5> |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hug... |
false |
# xlsum
- Source: https://huggingface.co/datasets/GEM/xlsum
- Num examples:
- 32,108 (train)
- 4,013 (validation)
- 4,013 (test)
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/xlsum_vi")
```
- Format for Summarization task
```python
def preprocess(sample):
titl... |
false | This is an image dataset for object detection of wildlife in the mixed coniferous broad-leaved forest.
A total of 25,657 images in this dataset were generated from video clips taken by infrared cameras in the Northeast Tiger and Leopard National Park, including 17 main species (15 wild animals and 2 major domestic ani... |
true | # Dataset Card for "turkishSMS-ds"
The dataset was utilized in the following study. It consists of Turkish SMS spam and legitimate data.
Uysal, A. K., Gunal, S., Ergin, S., & Gunal, E. S. (2013). The impact of feature extraction and selection on SMS spam filtering. Elektronika ir Elektrotechnika, 19(5), 67-72.
[More... |
false |
# Alloprof dataset
This is the dataset referred to in our paper:
Alloprof: a new French question-answer education dataset and its use in an information retrieval case study (https://arxiv.org/abs/2302.07738)
This dataset was provided by [AlloProf](https://www.alloprof.qc.ca/), an organisation in Quebec, Canada offer... |
false |
# Dataset Card for "REDDIT_comments"
## Dataset Description
- **Homepage:**
- **Paper: https://arxiv.org/abs/2001.08435**
### Dataset Summary
Comments from 50 high-quality subreddits, extracted from the Reddit Pushshift data dumps (from 2006 to Jan 2023).
### Supported Tasks
These comments can be used for text genera... |
true |
# Dataset Card for Fandom23K
*The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.*
## Dataset Description
- **Homepage:** (TODO) https://docs.ryokoai.com/docs/training/dataset#Fandom22K
- **Repository:** <https://github.com/RyokoAI/BigKnow2022>
- **... |
false |
## Description
This is a cleaned version of the AllenAI mC4 PtBR section. The original dataset can be found here: https://huggingface.co/datasets/allenai/c4
## Cleaning procedure
We applied the same cleaning procedure as explained here: https://gitlab.com/yhavinga/c4nlpreproc.git
The repository offers two strategies. The... |
false |
Dataset generated from HKR train set using Stackmix
===================================================
Number of images: 300000
Sources:
* [HKR dataset](https://github.com/abdoelsayed2016/HKR_Dataset)
* [Stackmix code](https://github.com/ai-forever/StackMix-OCR)
|
false |
# SwissNER
A multilingual test set for named entity recognition (NER) on Swiss news articles.
## Description
SwissNER is a dataset for named entity recognition based on manually annotated news articles in Swiss Standard German, French, Italian, and Romansh Grischun.
We have manually annotated a selection of article... |
true | # Dataset Card for unarXive IMRaD classification
## Dataset Description
* **Homepage:** [https://github.com/IllDepence/unarXive](https://github.com/IllDepence/unarXive)
* **Paper:** [unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network](https://arxiv.org/abs... |
false | # LandCover.ai: Dataset for Automatic Mapping of Buildings, Woodlands, Water and Roads from Aerial Imagery
My project based on the dataset, can be found on Github: https://github.com/MortenTabaka/Semantic-segmentation-of-LandCover.ai-dataset
The dataset used in this project is the [Landcover.ai Dataset](https://landc... |
true | # Mathematics StackExchange Dataset
This dataset contains questions and answers from Mathematics StackExchange (math.stackexchange.com). The data was collected using the Stack Exchange API. A total of 465,295 questions were collected.
## Data Format
The dataset is provided in JSON Lines format, with one JSON object per line. ... |
true | # Dataset Validated from https://huggingface.co/spaces/dariolopez/argilla-reddit-c-ssrs-suicide-dataset-es
https://huggingface.co/spaces/dariolopez/argilla-reddit-c-ssrs-suicide-dataset-es |
false | This dataset splits the original [Self-instruct dataset](https://huggingface.co/datasets/yizhongw/self_instruct) into training (90%) and test (10%). |
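A 90/10 split like the one above can be sketched with a seeded shuffle; this is a minimal stand-in on toy data, not the exact procedure used for this dataset:

```python
import random

# Toy stand-in for the Self-instruct examples (contents are illustrative)
examples = [f"instruction {i}" for i in range(100)]

random.seed(42)                 # seed so the split is reproducible
random.shuffle(examples)

cut = int(0.9 * len(examples))  # 90% train, 10% test
train, test = examples[:cut], examples[cut:]
print(len(train), len(test))    # 90 10
```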
false |
# Dataset Card for AdvertiseGen
- **formal url:** https://www.luge.ai/#/luge/dataDetail?id=9
## Dataset Description
Dataset introduction
AdvertiseGen is an e-commerce advertising copy generation dataset.
AdvertiseGen is built on the correspondence between product web page tags and their copy; as a typical open-ended generation task, factual consistency between the generated copy and the key-value input needs particular attention.
- Task description: given a key-value list (kv-list) of product keywords and attributes, generate advertising copy (adv) suitable for the product;
- Data scale: 114k training set, 1k validation set, test set...
false | # Dataset Card for "petfinder-dogs"
## Dataset Description
- **Homepage:** https://www.petfinder.com/
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Contains 700k+ 300px-wide images of 150k+ distinct dogs extracted from the PetFinder API in March 2023.
Only those having ... |
true | # Dataset Validated from https://huggingface.co/spaces/dariolopez/argilla-elena-reddit-c-ssrs-suicide-dataset-es
https://dariolopez-argilla-elena-reddit-c-ssrs-suic-00dc6af.hf.space |
false | # DailyDialog
- Source: https://huggingface.co/datasets/daily_dialog
- Num examples:
- 11,118 (train)
- 1,000 (validation)
- 1,000 (test)
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/daily_dialog_en")
``` |
false | ## Introduction
* We build a large-scale dataset called the Theme and Aesthetics Dataset with 66K images (TAD66K), which is specifically designed for image aesthetics assessment (IAA). Specifically, (1) it is a theme-oriented dataset containing 66K images covering 47 popular themes. All images were carefully selected by hand based on the theme. (2)...
false | # Alpaca-Cleaned
- Source: https://huggingface.co/datasets/yahma/alpaca-cleaned
- Num examples: 51,848
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/alpaca_en")
```
- Format for Instruction task
```python
def preprocess(sample):
instruction = sample['instruction']
in... |
false |
## Dataset Multi30k: English-Ukrainian variation
The Multi30K dataset is designed to support multilingual multimodal research.
Initially, it extended the Flickr30K dataset by adding German translations. The descriptions were collected from a crowdsourcing platform, while the translations were collected from pr...
false |
# Self instruct
- Source: https://github.com/yizhongw/self-instruct
- Num examples: 82,612
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/self_instruct_en")
```
- Format for Instruction task
```python
def preprocess(sample):
instruction = sample['instruction']
input ... |
false |
# Dataset Card for cloud-types
** The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/cloud-types
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
cloud-types
### Supported Tasks and Leaderboards... |
false | |
false | |
true |
# Dataset Description
* Example model using the dataset: https://huggingface.co/hackathon-somos-nlp-2023/roberta-base-bne-finetuned-suicide-es
* Example space using the dataset: https://huggingface.co/spaces/hackathon-somos-nlp-2023/suicide-comments-es
* Language: Spanish
## Dataset Summary
The dataset consists of ... |
false | |
false | |
false | ### Dataset Summary
First 10k rows of the scientific_papers["pubmed"] dataset. 8:1:1 split (10000:1250:1250).
### Usage
```python
from datasets import load_dataset
train_dataset = load_dataset("ronitHF/pubmed-10k-8.1.1", split="train")
val_dataset = load_dataset("ronitHF/pubmed-10k-8.1.1", split="validation")
test_datase... |
false |
Please see [repo](https://github.com/niizam/4chan-datasets) to turn the text file into json/csv format
Deleted some boards, since they are already archived by https://archive.4plebs.org/ |
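As a rough illustration of the text-to-JSON step the linked repo handles, one record per line could be converted like this (the one-post-per-line format is an assumption, not the repo's actual layout):

```python
import io
import json

# Hypothetical raw dump: one post per line (the real files may differ)
raw = io.StringIO("first post\nsecond post\n")

records = [
    {"id": i, "text": line.strip()}
    for i, line in enumerate(raw)
    if line.strip()
]
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```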
false |
# TempoFunk S(mall)Dance
10k samples of metadata and encoded latents & prompts of videos themed around **dance**.
## Data format
- Video frame latents
- Numpy arrays
- 120 frames, 512x512 source size
- Encoded shape (120, 4, 64, 64)
- CLIP (openai) encoded prompts
- Video description (as seen in metadata)
... |
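As a back-of-the-envelope check on the encoded shape above, one sample's latents hold about 2M float32 values (roughly 7.5 MiB); a quick sketch:

```python
# Sanity check on the latent shape stated above: (120, 4, 64, 64), float32
frames, channels, h, w = 120, 4, 64, 64
n_values = frames * channels * h * w
size_mib = n_values * 4 / (1024 * 1024)  # float32 = 4 bytes per value
print(n_values, round(size_mib, 1))      # 1966080 7.5
```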
false | |
false |
19K Multilingual VQA Alignment Dataset, in the format of Mini-GPT4 dataset.
With 1.1K images from COCO-2017, resized.
|
false | # Face Mask Detection
The dataset includes 250,000 images of 4 types of masks worn by 28,000 unique faces. All images were collected using the Toloka.ai crowdsourcing service and validated by TrainingData.pro
# File with the extension .csv
It includes the following information for each media file:
- **WorkerId**: the identifie... |
false |
# The Portrait and 26 Photos (272 people)
Each set includes 27 photos of one person. Each person provided two types of photos: one photo in profile (portrait_1), and 26 photos from their life (photo_1, photo_2, …, photo_26).
# The Portrait
The portrait photo is a photo that shows a person in profile. Mandatory conditions... |
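Following the naming in the description, each person's 27-photo set can be enumerated like this (the exact file names in the release may differ):

```python
# Build the 27 expected photo names per person, per the description above
names = ["portrait_1"] + [f"photo_{i}" for i in range(1, 27)]
print(len(names), names[:3])  # 27 ['portrait_1', 'photo_1', 'photo_2']
```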
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
true |
# Dataset Card for blognone-20230430
## Dataset Summary
[Blognone](https://www.blognone.com/) posts from January 1, 2020 to April 30, 2023.
## Features
- title: (str)
- author: (str)
- date: (str)
- tags: (list)
- content: (str)
## Licensing Information
Blognone posts are licensed under the [Creati...
false | |
true | |
false |
# License Plates
Over **1.2 million** annotated license plates from vehicles around the world. This dataset is tailored for **License Plate Recognition tasks** and includes images from both YouTube and PlatesMania.
Annotation details are provided in the About section below.
# About
## Variables in .csv files:
- **... |
true |
# Dataset Card for Review Helpfulness Prediction (RHP) Dataset
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [On the Role of Reviewer Expertise in Temporal Review Helpfulness Prediction](https://aclanthology.org/2023.findings-eacl.125/)
- **Leaderboard:**
### Dataset Summary
The success o... |
false | |
false | # Dataset Card for "Trad_food"
- info: This dataset comes from the ANSES-CIQUAL 2020 Table in English in XML format, found on https://www.data.gouv.fr/fr/datasets/table-de-composition-nutritionnelle-des-aliments-ciqual/ .
I made some minor changes on it in order to have it meets my needs (removed/added words ... |
true |
> I am not the author of this dataset. [View on GitHub](https://github.com/ye-kyaw-thu/khPOS).
# khPOS (draft released 1.0)
khPOS (Khmer Part-of-Speech) Corpus for Khmer NLP Research and Developments
## License
Creative Commons Attribution-NonCommercial-Share Alike 4.0 International (CC BY-NC-SA 4.0) License
[Det... |
false | |
false |
# Dataset Card for AIO Version 2.0 with Japanese Wikipedia
This dataset is used for baseline systems of AIO (AI王), a competition to promote research on question answering systems for the Japanese language.
Each data point consists of a question, the answers, and positive and negative passages for the question.
Please... |
false |
# Dataset Card for multilingual tatoeba translations with ~3M entries (llama supported languages only).
### Dataset Summary
~3M entries. Just a more user-friendly version that combines all of the entries of the original dataset in a single file (llama supported languages only):
https://huggingface.co/datasets/Helsinki-NL... |
false |
ESLO audio dataset
configs:
- no_overlap_no_hesitation
- no_hesitation
- no_overlap
- raw
Licence: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
Dependencies:
- ffmpeg: `sudo apt-get install ffmpeg`
- ffmpeg-python: `pip install ffmpeg-python`
`... |
false |
# Summary
This is a 🇹🇭 Thai-instructed dataset translated using Google Cloud Translation from [GPTeacher](https://github.com/teknium1/GPTeacher), a collection of modular datasets generated by GPT-4 (General-Instruct & Roleplay-Instruct).
It comprises around 20,000 examples after deduplication. The dataset was a...
false |
# Summary
This is a 🇹🇭 Thai-instructed dataset translated using Google Cloud Translation from [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3)
(a total of **24K**: 17K reddit_eli5, 4K finance, 1.2K medicine, 1.2K open_qa, and 0.8K wiki_csai)
The first human-ChatGPT comparison corpus which is introduce... |
true | # Universal Text Classification Dataset (UTCD)
## Load dataset
```python
from datasets import load_dataset
dataset = load_dataset('claritylab/utcd', name='in-domain')
```
## Description
UTCD is a curated compilation of 18 datasets revised for Zero-shot Text Classification spanning 3 aspect categories of Sentiment, ... |
false | |
false | # Gaepago (Gae8J/gaepago_s)
## How to use
### 1. Install dependencies
```bash
pip install datasets==2.10.1
pip install soundfile==0.12.1
pip install librosa==0.10.0.post2
```
### 2. Load the dataset
```python
from datasets import load_dataset
dataset = load_dataset("Gae8J/gaepago_s")
```
Outputs
```
DatasetDict({
... |
false | |
false |
# Dataset Card for CIFAKE_autotrain_compatible
## Dataset Description
- **Homepage:** [Kaggle data card](https://www.kaggle.com/datasets/birdy654/cifake-real-and-ai-generated-synthetic-images?resource=download)
- **Paper:** Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images.
... |
false | # AutoTrain Dataset for project: cilantroperejil
## Dataset Description
This dataset has been automatically processed by AutoTrain for project cilantroperejil.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
``... |
false |
# Breast Histopathology Image dataset
- This dataset is just a rearrangement of the Original dataset at Kaggle: https://www.kaggle.com/datasets/paultimothymooney/breast-histopathology-images
- Data Citation: https://www.ncbi.nlm.nih.gov/pubmed/27563488 , http://spie.org/Publications/Proceedings/Paper/10.1117/12.20438... |
true | |
false | # Dataset Card for Acapella Evaluation Dataset
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/CCMUSIC/acapella_evaluation>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.h... |
false |
# Summary
This is a 🇹🇭 Thai-translated (GCP) dataset based on English 74K [Alpaca-CoT](https://github.com/PhoebusSi/alpaca-CoT) instruction dataset.
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Thai
Version: 1.0
--- |
true | |
true | # Dataset Card for "pubmed-rct-200k_indexed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
true |
# Dataset Card for d0rj/conv_ai_3_ru
## Dataset Description
- **Homepage:** https://github.com/aliannejadi/ClariQ
- **Repository:** https://github.com/aliannejadi/ClariQ
- **Paper:** https://arxiv.org/abs/2009.11352
### Dataset Summary
This is translated version of [conv_ai_3](https://huggingface.co/datasets/conv_... |
false | # Dataset Card for "piqa_ru"
This is translated version of [piqa dataset](https://huggingface.co/datasets/piqa) into Russian. |
false | |
false |
# Dataset Card for multi-figqa
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Field... |
false |
# Audio Dataset
This dataset consists of audio data for the following categories:
* Coughing
* Running water
* Toilet flush
* Other sounds
Although this data is unbalanced, data augmentation can be applied when processing it for audio classification. The file structure looks as follows:
- audio/
 ... |
false | |
false | # Race
- Source: https://huggingface.co/datasets/race
- Num examples:
- 87,866 (train)
- 4,887 (validation)
- 4,934 (test)
- Language: English
```python
from datasets import load_dataset
load_dataset("vietgpt/race_en")
```
- Format for QA task
```python
def preprocess_qa(sample):
article = sample['articl... |
false | |
true |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:https://github.com/kaistAI/CoT-Collection**
- **Repository:https://github.com/kaistAI/CoT-Collection**
- **Paper:https://arxiv.org/abs/2305.14045**
- **Point of Contact:sejune@lklab.io**
### Dataset Summary
This dataset card aims to be a base ... |
true |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:https://github.com/kaistAI/CoT-Collection**
- **Repository:https://github.com/kaistAI/CoT-Collection**
- **Paper:https://arxiv.org/abs/2305.14045**
- **Point of Contact:sejune@lklab.io**
### Dataset Summary
This dataset card aims to be a base ... |
true | # Dataset Card for "HC3-ru"
This is translated version of [Hello-SimpleAI/HC3 dataset](https://huggingface.co/datasets/Hello-SimpleAI/HC3) into Russian.
## Citation
Check out this paper [arxiv: 2301.07597](https://arxiv.org/abs/2301.07597)
```
@article{guo-etal-2023-hc3,
title = "How Close is ChatGPT to Human E... |
false | Includes six common time-series forecasting datasets:
* ETTsmall
- ETTh1
- ETTh2
- ETTm1
- ETTm2
* traffic
* electricity
* illness
* exchange_rate |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
false | |
false | # curation-corpus
## Dataset Description
- **Homepage:** [https://github.com/CurationCorp/curation-corpus](https://github.com/CurationCorp/curation-corpus)
- **Repository:** [https://github.com/CurationCorp/curation-corpus](https://github.com/CurationCorp/curation-corpus)
## Source
Data from [this official repo](ht... |
false | # FETV
**FETV** is a benchmark for **F**ine-grained **E**valuation of open-domain **T**ext-to-**V**ideo generation
## Overview
FETV consists of a diverse set of text prompts, categorized based on three orthogonal aspects: major content, attribute control, and prompt complexity.
 leads to better results. With symbol tuning, labels are replaced with arbitrary symbols (e.g. foo/bar), which makes learning in context a key condition to learn t... |
true |
M3KE, or Massive Multi-Level Multi-Subject Knowledge Evaluation, is a benchmark developed to assess the knowledge acquired by large Chinese language models by evaluating their multitask accuracy in both zero- and few-shot settings. The benchmark comprises 20,477 questions spanning 71 tasks. For further information ab... |
true |
### Dataset Description
- **Homepage:** https://github.com/sunnweiwei/user-satisfaction-simulation
- **Repository:** https://github.com/sunnweiwei/user-satisfaction-simulation
- **Paper:** https://arxiv.org/pdf/2105.03748.pdf
- **View records using Datasette:** [datasette-link](https://lite.datasette.io/?parquet=http... |
true |
# French Grammatical Errors
This dataset contains pairs of sentences and an explanation:
- "phrase1" is a french sentence containing a grammatical error
- "phrase2" is the same sentence without any error (please reach out if you think
an error is present -- I could not see any)
- "explication" is some text explain... |
false |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.