false
# Plantations Segmentation The images consist of aerial photography of agricultural plantations with crops such as cabbage and zucchini. The dataset addresses agricultural tasks such as plant detection and counting, health assessment, and irrigation planning. The dataset consists of plantations' photographs with objec...
false
# M2CRB ## How to get the data with a given language combination ``` from datasets import load_dataset def get_dataset(prog_lang, nat_lang): test_data = load_dataset("blindsubmissions/M2CRB") test_data = test_data.filter( lambda example: example["docstring_language"] == nat_lang and example[...
false
## Instruction Tuning: GeoSignal Scientific domain adaptation has two main steps during instruction tuning. - Instruction tuning with general instruction-tuning data. Here we use Alpaca-GPT4. - Instruction tuning with restructured domain knowledge, which we call expertise instruction tuning. For K2, we use knowledg...
false
# Benchmark: GeoBenchmark In GeoBenchmark, we collect 183 multiple-choice questions in NPEE, and 1,395 in AP Test, for objective tasks. Meanwhile, we gather all 939 subjective questions in NPEE to be the subjective tasks set and use 50 to measure the baselines with human evaluation.
false
true
false
Alpaca tasks dataset translated into Greek using GPT-3.5. Translation is done in chunks of 10K.
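The chunked-translation batching described above can be sketched as follows; the helper name and placeholder records are illustrative assumptions, not part of the original release.

```python
def chunked(items, size=10_000):
    """Yield successive fixed-size chunks of a list, mirroring the
    10K-chunk translation batching described above (names are illustrative)."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Example: 25,000 placeholder records split into 10K chunks.
records = [f"task-{i}" for i in range(25_000)]
batches = list(chunked(records))
# -> 3 batches of sizes 10,000 + 10,000 + 5,000
```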
false
1111
false
# Google Conceptual Captions in Vietnamese This is the Vietnamese version of the Google Conceptual Captions dataset. It has more than 3.3 million image urls with captions. It was built by using the Google Translate API. The Vietnamese version has exactly the same metadata as the English one. The only difference is the caption content. I ...
false
# Outdoor Garbage Dataset The dataset consists of photographs of garbage cans of various capacities and types. It is well suited to training a neural network to monitor the timely removal of garbage and to organize vehicle logistics for garbage collection. The dataset is useful for recommendation systems, optimization and automation of the w...
false
# Docstring to code data ## Licensing Information M2CRB is a subset filtered and pre-processed from [The Stack](https://huggingface.co/datasets/bigcode/the-stack), a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in M2CRB must abide by the terms of the or...
false
# Dataset of bald people The dataset consists of 5000 photos of people at 7 stages of hair loss according to the Norwood scale. The dataset is useful for training neural networks for recommendation systems, optimizing the work processes of trichologists, and applications in the Med / Beauty spheres. # Get the Dataset T...
false
# COCO 2017 image captions in Vietnamese The dataset was first introduced in [dinhanhx/VisualRoBERTa](https://github.com/dinhanhx/VisualRoBERTa/tree/main). I use VinAI tools to translate [COCO 2017 image captions](https://cocodataset.org/#download) (2017 Train/Val annotations) from English to Vietnamese. Then we merge...
false
Original Dataset [JeanKaddour/minipile](https://huggingface.co/datasets/JeanKaddour/minipile) See the [Thought Tokens Repository](https://github.com/ZelaAI/thought-tokens) for demonstration of streaming usage of this dataset and specific implementation of how this dataset was prepared. Tokenized with the GPTNeoX to...
true
false
## GitHub R repositories dataset R source files from GitHub. This dataset has been created using the public GitHub datasets from Google BigQuery. This is the actual query that has been used to export the data: ``` EXPORT DATA OPTIONS ( uri = 'gs://your-bucket/gh-r/*.parquet', format = 'PARQUET') as ( sele...
false
# Dataset Card for the Qur'anic Reading Comprehension Dataset (QRCD) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages...
false
Dataset Name: Eng-Sinhala Translation Dataset Description: This dataset contains approximately 80,000 lines of English-Sinhala translation pairs. It can be used to train models for machine translation tasks and other natural language processing applications. Files: 1. src.txt: This file contains the source sentences...
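Parallel line-aligned files like `src.txt` above can be paired line-by-line; a minimal sketch assuming a matching line-aligned target file (the target-file name and the sample sentences are hypothetical, since the excerpt truncates before listing the remaining files).

```python
import io

# Simulated contents of the two line-aligned files; the target file and
# its contents are hypothetical, for illustration only.
src = io.StringIO("Hello\nGood morning\n")          # stands in for src.txt
tgt = io.StringIO("හෙලෝ\nසුබ උදෑසනක්\n")            # stands in for a target file

# Pair source and target sentences by line position.
pairs = [(s.rstrip("\n"), t.rstrip("\n")) for s, t in zip(src, tgt)]
print(pairs[0])  # ('Hello', 'හෙලෝ')
```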
false
# Dataset Card for OpenFire ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structur...
true
false
# Dataset Card for "TALI-small" ## Table of Contents 1. Dataset Description 1. Abstract 2. Brief Description 2. Dataset Information 1. Modalities 2. Dataset Variants 3. Dataset Statistics 4. Data Fields 5. Data Splits 3. Dataset Creation 4. Dataset Use 5. Additional Information ## Dataset Description #...
false
# EVJVQA - Multilingual Visual Question Answering ## Abstract Visual Question Answering (VQA) is a challenging task of natural language processing (NLP) and computer vision (CV), attracting significant attention from researchers. English is a resource-rich language that has witnessed various developments in datasets...
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hug...
false
﷽ # Dataset Card for Tarteel AI's EveryAyah Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Inst...
false
true
false
# Vision-CAIR cc_sbu_align in multilang These are Google-translated versions of [Vision-CAIR/cc_sbu_align](https://huggingface.co/datasets/Vision-CAIR/cc_sbu_align). Please visit [2. Second finetuning stage](https://huggingface.co/datasets/Vision-CAIR/cc_sbu_align#training) to understand how the English one was created....
false
# Dataset Card for OKD-CL ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingfac...
true
### Labels |label|meaning| |:---|:-----------| |achievement_P | in favor of achievement | |achievement_N | against achievement | |power_dominance_P | in favor of power: dominance | |power_dominance_N | against power: dominance | |power_resources_P | in favor of power: resourc...
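The label scheme above encodes a value name plus a stance suffix (`_P` in favor, `_N` against). A minimal sketch of splitting such a label, assuming the suffix convention holds for every entry in the table:

```python
def split_label(label: str):
    """Split a stance label like 'power_dominance_P' into
    (value_name, stance). Assumes every label ends in '_P' or '_N'."""
    value, polarity = label.rsplit("_", 1)
    if polarity not in {"P", "N"}:
        raise ValueError(f"unexpected polarity suffix: {label}")
    return value, "in favor" if polarity == "P" else "against"

print(split_label("achievement_P"))       # ('achievement', 'in favor')
print(split_label("power_dominance_N"))   # ('power_dominance', 'against')
```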
true
# Dataset Card for Dataset Name ## Name Motivación Diaria ## Dataset Description - **Author:** Rubén Darío Jaramillo - **Email:** rubend18@hotmail.com - **WhatsApp:** +593 93 979 6676 ### Dataset Summary Scraped from http://www.motivaciondiaria.com/ ### Languages [Spanish]
false
**F**unds **R**eport **F**ront **P**age **E**ntities (FRFPE) is a dataset for document understanding and token classification. It contains 356 titles/front pages of annual and semi-annual reports as well as extracted text and annotations for five different token categories. FRFPE serves as an example of how to tr...
false
# Sol: Simian Operational Lexicon The dataset
true
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg...
false
Dataset redistributed without change with permission from the author. If you use this dataset in your research, please cite the following paper: https://doi.org/10.3390/rs6064907
false
false
Instructions created from the Amazon ESCI dataset in the Alpaca style; includes 20k instruction pairs. Used for *query generation*. Following the schema: ```json [ ..., { "instruction": "Generate a search query from the given product description.", "input": "FLYDAY Flying Disc with LED Lights ...",...
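A single record in the Alpaca-style schema quoted above might be parsed like this; the `output` field and its value are assumptions, since the excerpt truncates before showing the full record.

```python
import json

# Hypothetical record following the Alpaca-style schema quoted above;
# the "output" field is assumed, as the excerpt cuts off before it.
raw = '''
[
  {
    "instruction": "Generate a search query from the given product description.",
    "input": "FLYDAY Flying Disc with LED Lights ...",
    "output": "led flying disc"
  }
]
'''
records = json.loads(raw)
# Each record carries at least the instruction and input fields.
assert {"instruction", "input"} <= records[0].keys()
```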
true
# IPCC Confidence in Climate Statements _What do LLMs know about climate? Let's find out!_ ## ICCS Dataset We introduce the **ICCS dataset (IPCC Confidence in Climate Statements)**, a novel, curated, expert-labeled natural language dataset of 8094 statements extracted or paraphrased from the IPCC Assessment Repor...
true
# Dataset Card for "super_glue" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instance...
false
false
# million-faces Welcome to "million-faces", one of the largest facesets available to the public. Comprising a staggering one million faces, all images in this dataset are entirely AI-generated. Due to the nature of AI-generated images, please be aware that some artifacts may be present in the dataset. The dataset i...
false
#### Warning: Due to the nature of the source, certain images are very large. Large number of artistic images, mostly (but hardly exclusively) sourced from Wikimedia Commons. <br> Pull requests are allowed, and even encouraged.
true
# AutoTrain Dataset for project: bhaav-sentiment ## Dataset Description This dataset has been automatically processed by AutoTrain for project bhaav-sentiment. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```...
false
# FICLE Dataset The dataset can be loaded and utilized through the following: ```python from datasets import load_dataset ficle_data = load_dataset("tathagataraha/ficle") ``` ## Dataset Description * **GitHub Repo:** https://github.com/blitzprecision/FICLE * **Paper:** * **Poi...
false
# VQAv2 in Vietnamese This is the Google-translated version of [VQAv2](https://visualqa.org/) in Vietnamese. The process of building the Vietnamese version is as follows: - In `en/` folder, - Download `v2_OpenEnded_mscoco_train2014_questions.json` and `v2_mscoco_train2014_annotations.json` from [VQAv2](https://visualqa.org/). ...
false
false
![Screenshot 2023-06-11 at 23.19.31.png](https://s3.amazonaws.com/moonup/production/uploads/6226bae1c8655fec3995a41d/cO9OKcYBO7-MbDZJopF6J.png) General information The overall ACDC dataset was created from real clinical exams acquired at the University Hospital of Dijon. Acquired data were fully anonymized and handl...
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hug...
false
# Helmet Detection Dataset The dataset consists of photographs of construction workers at work. The dataset provides helmet detection using bounding boxes, and addresses public safety tasks such as ensuring compliance with safety regulations, automating the identification of rule violations and ...
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage: m2sodai.jonggyu.me** - **Repository: temporarily private** - **Paper: under review** - **Point of Contact: jgjang0123 [at] gmail [dot] com** ### Dataset Summary The M<sup>2</sup>SODAI dataset is the first multi-modal, bounding-box-labeled, and...
false
false
# ORCHESTRA-simple-1M GitHub: [nk2028/ORCHESTRA-dataset](https://github.com/nk2028/ORCHESTRA-dataset) **Introduction** ORCHESTRA (c**O**mp**R**ehensive **C**lassical c**H**in**ES**e poe**TR**y d**A**taset) is a comprehensive dataset of classical Chinese poetry, with data sourced from [搜韻網](https://sou-yun.cn/). The dataset was converted to this format and released by [nk2028](https://nk2028.shn.hk/), in the hope that making high-quality classical Chinese poetry data publicly available will promote...
true
false
# TextCaps in Vietnamese This is the Vietnamese version of the [TextCaps dataset](https://textvqa.org/textcaps/). It has 109765 image-caption pairs for training, and 15830 for validation. It was built by using the Google Translate API. The Vietnamese version has almost the same metadata as the English one. The Vietnamese version does...
false
# TextVQA in Vietnamese This is the Google-translated version of [TextVQA](https://textvqa.org/) in Vietnamese. The process of building the Vietnamese version is as follows: - In en/ folder, - Download `TextVQA_0.5.1_train.json`, `TextVQA_0.5.1_val.json`. - By using the [set data structure](https://docs.python.org/3/tutorial/dat...
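The set-based step mentioned above (collecting the unique questions before translating them) can be sketched as follows; the `question` key and the sample entries are assumptions based on the TextVQA JSON layout, for illustration only.

```python
# Sketch of deduplicating questions with a set before translation, as the
# card describes; the "question" key and the entries are hypothetical.
data = [
    {"question": "what is the brand?", "image_id": 1},
    {"question": "what color is the sign?", "image_id": 2},
    {"question": "what is the brand?", "image_id": 3},  # duplicate text
]
# Only the unique strings need to be sent to the translation API.
unique_questions = sorted({item["question"] for item in data})
print(len(unique_questions))  # 2
```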
false
# Dataset Card for "code_x_glue_tc_text_to_code" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances...
false
# curation-corpus-ru ## Dataset Description - **Repository:** [https://github.com/CurationCorp/curation-corpus](https://github.com/CurationCorp/curation-corpus) Translated version of [d0rj/curation-corpus](https://huggingface.co/datasets/d0rj/curation-corpus) into Russian.
false
false
# OK-VQA in multilang These are Google-translated versions of [OK-VQA](https://okvqa.allenai.org/index.html) in many languages. Each language version resides in its own folder. The process of building the Vietnamese version is as follows: - In `en/` folder, - From [OK-VQA](https://okvqa.allenai.org/index.html), obtain all json f...
false
# Grocery Shelves Dataset ## Facing is the process of arranging products on shelves and counters. The dataset consists of labeled photographs of grocery store shelves. The Grocery Shelves Dataset can be used to analyze and optimize product placement data, develop strategies for increasing product visibility, maximize...
false
false
false
# Dataset Card for Common Voice Corpus 6.1 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#da...
false
# AutoTrain Dataset for project: aniaitokenclassification ## Dataset Description This dataset has been automatically processed by AutoTrain for project aniaitokenclassification. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset look...
false
# RED<sup>FM</sup>: a Filtered and Multilingual Relation Extraction Dataset This is the human-filtered dataset from the 2023 ACL paper [RED^{FM}: a Filtered and Multilingual Relation Extraction Dataset](https://arxiv.org/abs/2306.09802). If you use the model, please reference this work in your paper: @inproceedin...
false
# Ayaka/MoeDict-cmn-hak-10k
false
true
true
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt). - `label.txt`: the label name for each class - `train.jsonl`: The original training set. - `valid.jsonl`: The original validation set. - `test.jsonl`:...
true
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt). - `label.txt`: the label name for each class - `train.jsonl`: The original training set. - `valid.jsonl`: The original validation set. - `test.jsonl`...
true
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt). - `label.txt`: the label name for each class - `train.jsonl`: The original training set. - `valid.jsonl`: The original validation set. - `test.jsonl`...
true
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt). - `label.txt`: the label name for each class - `train.jsonl`: The original training set. - `valid.jsonl`: The original validation set. - `test.jsonl`: ...
true
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt). - `label.txt`: the label name for each class - `train.jsonl`: The original training set. - `valid.jsonl`: The original validation set. - `test.jsonl`:...
true
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt). - `label.txt`: the label name for each class - `train.jsonl`: The original training set. - `valid.jsonl`: The original validation set. - `test.jsonl`:...
true
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt). - `label.txt`: the label name for each class - `train.jsonl`: The original training set. - `valid.jsonl`: The original validation set. - `test.jsonl`:...
false
# Basketball Tracking ## Tracking is a deep learning process where the algorithm tracks the movement of an object. The dataset consists of screenshots from videos of basketball games with the ball labeled with a bounding box. The dataset can be used to train a neural network in ball control recognition. The dataset i...
false
false
# Dataset Card for Never Ending Language Learning (NELL) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data ...
true
This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt). - `label.txt`: the label name for each class - `train.jsonl`: The original training set. - `valid.jsonl`: The original validation set. - `test.jsonl`:...
false
# AutoTrain Dataset for project: fhdd_arabic_chatbot ## Dataset Description This dataset has been automatically processed by AutoTrain for project fhdd_arabic_chatbot. ### Languages The BCP-47 code for the dataset's language is en2ar. ## Dataset Structure ### Data Instances A sample from this dataset looks as fo...
true
# Dataset Card for "pandassdcctest" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
false
# Dataset Card for OpusBooks ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) ...
false
# Dataset Card for Invoices (Sparrow) This dataset contains 500 invoice documents annotated and processed to be ready for Donut ML model fine-tuning. Annotation and data preparation task was done by [Katana ML](https://www.katanaml.io) team. [Sparrow](https://github.com/katanaml/sparrow/tree/main) - open-source dat...
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage** https://sites.google.com/view/v-lol/home - **Repository** https://github.com/ml-research/vlol-dataset-gen - **Paper** https://arxiv.org/abs/2306.07743 - **Point of Contact:** lukas_henrik.helff@tu-darmstadt.de ### Dataset Summary This diagnostic ...
false
false
# Dataset Card for "symbolic-instruction-tuning-sql" Original component (=no Flan) from the symbolic instruction tuning dataset, with flan column names. [From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning](https://arxiv.org/abs/2304.07995). The training code can be found in [here](https:/...
true
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg...
false
false
Mostly unfiltered anime-style images generated by various text to image models, collected from various sources (some were submitted for inclusion by their creators).<br> Includes a subset of [p1atdev/niji-v5](https://huggingface.co/datasets/p1atdev/niji-v5/), albeit captioned differently than the source. <br> Contains ...
false
true
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The RTE3-FR dataset is the French translation of the Textual Entailment English dataset used in the [RTE-3 Challenge](https://nlp.stanford.edu/RT...
false
# HaVG: Hausa Visual Genome ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Hausa Visual Genome (HaVG) dataset contains the description of an image or a section within the image in Hausa and its equivalent in English. The ...
true
<p align="center"> <img src="https://raw.githubusercontent.com/afrisenti-semeval/afrisent-semeval-2023/main/images/afrisenti-twitter.png" width="700" height="500"> -------------------------------------------------------------------------------- ## Dataset Description - **Homepage:** https://github.com/afrisenti-s...
true
<p align="center"> <img src="https://raw.githubusercontent.com/hausanlp/NaijaSenti/main/image/naijasenti_logo1.png" width="500"> -------------------------------------------------------------------------------- ## Dataset Description - **Homepage:** https://github.com/hausanlp/NaijaSenti - **Repository:** [GitHub]...
false
# wikisum ## Dataset Description - **Homepage:** https://registry.opendata.aws/wikisum/ - **Repository:** https://github.com/tensorflow/tensor2tensor/tree/master/tensor2tensor/data_generators/wikisum - **Paper:** [Generating Wikipedia by Summarizing Long Sequences](https://arxiv.org/abs/1801.10198) - **Leaderboard:**...
false
false
true
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hug...
false
## Data Origins Original dataset: https://huggingface.co/datasets/jondurbin/rosettacode-raw/ Cleaner code: https://github.com/the-crypt-keeper/rosettacode-parser ## Data Fields |Field|Type|Description| |---|---|---| |title|string|problem title| |task|string|problem description| |language|string|solution language/v...
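Given records with the fields listed above, filtering solutions by language is straightforward; a minimal in-memory sketch (the sample records are hypothetical, mirroring the field layout in the table):

```python
# Hypothetical records mirroring the field layout in the table above.
records = [
    {"title": "FizzBuzz", "task": "...", "language": "Python"},
    {"title": "FizzBuzz", "task": "...", "language": "C"},
    {"title": "Quine",    "task": "...", "language": "Python"},
]

# Keep only the solutions written in a given language.
python_solutions = [r for r in records if r["language"] == "Python"]
print([r["title"] for r in python_solutions])  # ['FizzBuzz', 'Quine']
```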
false