| classes (bool, 2 values) | text (string, length 0–664k) |
|---|---|
true | # AutoTrain Dataset for project: severe-js100-sentiment
## Dataset Description
This dataset has been automatically processed by AutoTrain for project severe-js100-sentiment.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks a... |
true | |
true | # Dataset Card for "alpaca-gigo-detector"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | # AutoTrain Dataset for project: pegasus-subreddit-comments-summarizer
## Dataset Description
This dataset has been automatically processed by AutoTrain for project pegasus-subreddit-comments-summarizer.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sam... |
false | |
false | |
false | |
true | |
false | Source: https://dumps.wikimedia.org/kkwiki/latest/ [kkwiki-latest-pages-articles.xml.bz2] |
false | # Dataset Card for "igbo-translation"
## Dataset Summary
This dataset contains data translated from English to Igbo for use in training general-purpose translation models.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | # Silver Ukrainian Coreference Dataset
## Dataset Description
### Dataset Summary
A silver coreference resolution dataset for the Ukrainian language. The dataset was generated automatically by applying a word-alignment method to the following English dataset: https://github.com/d5555/Coreference-dataset.
T... |
true | |
false | |
true | |
false | I have no idea how to add data |
false | |
false | |
true |
# Dataset Card for ScribbleHub17K
*The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** <https://github.com/RyokoAI/BigKnow2022>
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Con... |
false | |
false | |
true |
# Dataset Card for Honeyfeed3600
*The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** <https://github.com/RyokoAI/BigKnow2022>
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Cont... |
false | # Preview[.](https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/blob/main/README.md)
**简体中文**|
[English](https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/blob/main/README_EN.md)|
[日本語](https://huggingface.co/datasets/jiaheillu/sovits_audio_preview/blob/main/README_JP.md)
This repository is for previewing so-vits-svc-4.0 training... |
false | This is a text2video model for diffusers, fine-tuned with a [modelscope](https://huggingface.co/damo-vilab/text-to-video-ms-1.7b) to have an anime-style appearance.
It was trained at 384x384 resolution.
It still generates unstable content often.
The usage is the same as with the original modelscope model.
exam... |
true | # AutoTrain Dataset for project: roulette-prediction-next-sequence
## Dataset Description
This dataset has been automatically processed by AutoTrain for project roulette-prediction-next-sequence.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from... |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hug... |
true | Reference: Ponnarassery, Sreeja (2017), “Poem Emotion Recognition Corpus (PERC)”, Mendeley Data, V1, doi: 10.17632/n9vbc8g9cx.1 |
false |
The dataset was translated into Polish using this model: "gsarti/opus-mt-tc-en-pl"
### How to use
```python
from datasets import load_dataset
dataset = load_dataset("Aspik101/translated_polish_alpaca")
```
|
false | # AutoTrain Dataset for project: arp_summ_1
## Dataset Description
This dataset has been automatically processed by AutoTrain for project arp_summ_1.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
... |
false |
09/04/2023 Update:
New instructions added from: https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM
Original Version: https://github.com/tatsu-lab/stanford_alpaca#data-release
AI-based translation of Stanford Alpaca from English to Turkish.
For academic use only; please cite before using it.
Taşar, D. E. T. (2023)... |
true |
# Dataset Card for XNLI Parallel Corpus
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
Binary mode classification (spoken vs written)
### Languages
- English
- German
- French
## Data... |
false | # Mtet
- Num examples:
- 5,072 (test)
- 6,212 (validation)
- Language: English, Vietnamese
## Prompts
"Translate the following sentence into <target>: ",
"What is the <target> translation for: ",
"What is the <target> equivalent of: ",
"What does the following sentence means in <ta... |
false |
This dataset is made from this repo [here](https://github.com/janelleshane/DnD_bios)
and it contains 2322 character bios to be used |
false |
# Ukrainian StackExchange Dataset
This repository contains a dataset collected from the Ukrainian StackExchange website.
The parsed date is 02/04/2023.
The dataset is in JSON format and includes text data parsed from the website https://ukrainian.stackexchange.com/.
## Dataset Description
The Ukrainian StackExchan... |
false | |
true | # Dataset Card for "MULTI_VALUE_wnli_reduced_relative"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
true |
## Dataset Description
- **BioStars Homepage:** https://www.biostars.org/
- **BioStars Paper:** https://doi.org/10.1371/journal.pcbi.1002216
- **Code Repository (This Dataset):** https://github.com/cannin/biostars_qa
### Dataset Summary
This dataset contains 4803 question/answer pairs extracted from the [BioStars]... |
true |
# Dataset Card for CNNovel125K
*The BigKnow2022 dataset and its subsets are not yet complete. Not all information here may be accurate or accessible.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** <https://github.com/RyokoAI/BigKnow2022>
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contac... |
true | # Dataset Card for "DiagTrast"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structu... |
false | |
true | ### Dataset summary
This is a Spanish-language dataset, extracted from the documentation centre of the Fundación Secretariado Gitano, presenting various discriminatory situations experienced by the Roma people. Since the goal of the model is to create a system for generating interventions that... |
true | |
true |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a set of (title, integer category) descriptions taken from The Pirate Bay via
[123dw's](https://thepiratebay.org/search.php?q=user:123dw)... |
false | # AutoTrain Dataset for project: syn
## Dataset Description
This dataset has been automatically processed by AutoTrain for project syn.
### Languages
The BCP-47 code for the dataset's language is it.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"tokens":... |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hug... |
false | |
false | # Synthetic Dataset for Product Descriptions and Ads
The basic process was as follows:
1. Prompt GPT-4 to create a list of 100 sample clothing items and descriptions for those items.
2. Split the output into the desired format `{"product" : "<PRODUCT NAME>", "description" : "<DESCRIPTION>"}`
3. Prompt GPT-4 to create adve... |
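The splitting step described above could be sketched as follows; this is a minimal sketch, and the assumption that GPT-4 returned a numbered `name: description` list (and the helper name `split_output`) are mine, not from the card:

```python
import json
import re

def split_output(raw: str) -> list:
    """Parse a numbered 'name: description' list (assumed GPT-4 output
    shape) into {"product": ..., "description": ...} records."""
    records = []
    for line in raw.splitlines():
        # e.g. "1. T-Shirt: A soft cotton tee."
        m = re.match(r"\s*\d+\.\s*([^:]+):\s*(.+)", line)
        if m:
            records.append({"product": m.group(1).strip(),
                            "description": m.group(2).strip()})
    return records

raw = "1. T-Shirt: A soft cotton tee.\n2. Jeans: Classic denim."
print(json.dumps(split_output(raw), indent=2))
```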
false | # 🚢 Stanford Human Preferences Dataset (SHP) (Italian Translation)
The Stanford Human Preferences Dataset (SHP) is a collection of responses to questions and instructions in 18 different subject areas, ranging from cooking to legal advice. This version of the dataset is a **partial** Italian translation of the origi... |
false | # Chess King-Rook vs King-Pawn
The [Chess King-Rook vs King-Pawn dataset](https://archive-beta.ics.uci.edu/dataset/22/chess+king+rook+vs+king+pawn) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|--------------... |
false |
Armenian Wikipedia as of 04.2023
80M tokens
296,539 articles |
false |
0.7M tokens |
true | |
false | |
false | ## Dataset
This FLAN dataset is built for instruction tuning of causal language models. Text with improper encoding/decoding was cleaned. The dataset includes the Dialog Zero Shot Options task.
## List of Mixtures
We've broken down the Flan Collection into several sub-mixtures. These are "flan" (Flan 2021), "t0" (P3 excluding Fl... |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** lambdasec@okyasoft.com
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://gith... |
false | # Mtet
- Num examples:
- 5,072 (test)
- 6,212 (validation)
- Language: English, Vietnamese |
false | # Dataset Card for DIALOGSum Corpus
## Dataset Description
### Links
- **Homepage:** https://aclanthology.org/2021.findings-acl.449
- **Repository:** https://github.com/cylnlp/dialogsum
- **Paper:** https://aclanthology.org/2021.findings-acl.449
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Sum... |
false | |
false |
# English Malayalam names
This dataset has 27,814,162 person names in both English and Malayalam.
The source for this dataset is various electoral rolls published by the Government.
Potential usages:
1. English <-> Malayalam name transliteration tasks
2. Named entity recognition
3. Person name recognition
## License
C... |
true |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This repository contains a French version of the [GQNLI](https://github.com/ruixiangcui/GQNLI) challenge dataset, originally written in English. ... |
false | # xnli_vi
- Num examples:
- 5,010 (test)
- 2,490 (validation)
- 392,702 (train)
- Language: Vietnamese, English |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hug... |
true |
# Northwind Invoices and Related Documents
This dataset contains a collection of invoices and related documents from the Northwind database, a sample database used by Microsoft for demonstrating database functionalities.
The invoices include information about the customer, the salesperson, the order date, order ID,... |
false |
<img src="https://s3.amazonaws.com/moonup/production/uploads/632eed9e04b24dbdb9eaa6d4/ToFJ26XGVkO2FTJ4dH-yH.png" width="256" height="256"> |
false | # Open_subtitles
- Num examples:
- 3,505,276 (train)
- Language: English, Vietnamese
|
true |
# Northwind Shipping Orders and Related Documents
This dataset contains a collection of Shipping Orders and related documents from the Northwind database, a sample database used by Microsoft for demonstrating database functionalities.
The Shipping Orders include information about the ship name, Address, Region, po... |
false | This dataset was collected from Wikipedia : https://hu.wikipedia.org/wiki/Magyarorsz%C3%A1gon_anyak%C3%B6nyvezhet%C5%91_ut%C3%B3nevek_list%C3%A1ja |
false | # Musk
The [Musk dataset](https://archive.ics.uci.edu/ml/datasets/Musk) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
A molecule classification dataset: the task is to predict whether a molecule, described by its conformations, is a musk or a non-musk.
# Configurations and tasks
| **Configuration** | **Task** | **Descrip... |
false | |
false | # Dataset Card for "hy_eanc_2023"
5M tokens
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false |
# Word Sense Disambiguation for FLUE
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** https://arxiv.org/pdf/1905.05677.pdf
- **Leaderboard:**
- **Point of Contact:** loic.vial@univ-grenoble-alpes.fr
### Dataset Summary
This dataset is split into 3 sub-datasets: FrenchSemEval-Task12, French WNGT and an automatic... |
false | |
false | # FrenchSemEval
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** https://aclanthology.org/W19-0422.pdf
- **Leaderboard:**
- **Point of Contact:** vincent.segonne@univ-grenoble-alpes.fr
### Dataset Summary
This dataset corresponds to FrenchSemEval, in which verb occurrences were manually annotated with Wiktionar... |
true |
#### Purchase Orders Dataset
This dataset consists of purchase orders from various companies. It was created by [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/) with the help of ChatGPT for the purpose of document classi... |
false | # MIT-Adobe FiveK Dataset
The MIT-Adobe FiveK Dataset [[1]]( #references ) is a publicly available dataset providing the following items.
1. 5,000 RAW images in DNG format
2. retouched images of each RAW image by five experts in TIFF format (25,000 images, 16 bits per channel, ProPhoto RGB color space, and lossless co... |
true |
# Northwind Stock Report Dataset
This dataset was created by [CHERGUELAINE Ayoub](https://www.linkedin.com/in/ayoub-cherguelaine/) & [BOUBEKRI Faycal](https://www.linkedin.com/in/faycal-boubekri-832848199/) for the purpose of document classification and analytics. The dataset contains monthly stock reports and month... |
true | |
false |
_The Dataset Teaser is now enabled instead! Isn't this better?_

# TD 02: Urban Surface Textures
This dataset contains multi-photo texture captures in outdoor nature scenes — many... |
true |
## General concept
The **'inappropriateness'** we tried to collect in the dataset and detect with the model **is NOT a substitute for toxicity**; it is rather a derivative of toxicity.
So the model based on our dataset could serve as **an additional layer of inappropriateness filtering after toxicity and... |
true |
## General concept of the model
Sensitive topics are topics that have a high chance of initiating a toxic conversation: homophobia, politics, racism, etc. This dataset covers 18 topics.
More details can be found [in this article ](https://www.aclweb.org/anthology/2021.bsnlp-1.4/) presented at the workshop for B... |
false |
# MegaInstruct
A large instruction dataset, merging multiple datasets into the Alpaca format
### Note:
Both the gpt4all and vicuna datasets have usernames appended to them, so hopefully username aware chatbot datasets can be added on top of this! |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
It's still not finished, I'll adjust it
This dataset card aims to be a base template for new datasets. It has been generated using [this raw templa... |
false | An implementation based on K-SportsSum: https://github.com/krystalan/k-sportssum. The original authors describe the approach but did not implement this particular step; this dataset implements the "pairing news and commentary sentences by similarity" part of that dataset.
Method: iterate over the news sentences, extracting each sentence's time information (when present) in a pointer-like pass; then, using the two pointers as a range, iterate over the news sentences within that range, look up commentary sentences within the same time window, score the candidates, select the highest-scoring one, and delete that sentence to prevent duplicates, finally yielding one news sentence paired with one commentary sentence.
I used BERTScore and ROUGE, combined as a weighted score at a 7:3 ratio.
*Suggestion*: the dataset includes each pair's score; consider filtering out low-scoring bad pairs, e.g. using the mean as a threshold. |
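The 7:3 weighted scoring and greedy best-match selection could be sketched as follows; this is a minimal sketch where `bert_score` and `rouge` are caller-supplied stand-ins for the real metric implementations, and the function name is illustrative:

```python
def pair_news_with_commentary(news, commentary, bert_score, rouge):
    """Greedily pair each news sentence with its best-scoring commentary
    sentence, removing each match so it cannot be reused.
    `bert_score` and `rouge` are similarity functions supplied by the caller."""
    pairs = []
    remaining = list(commentary)
    for sent in news:
        if not remaining:
            break
        # 7:3 weighted combination of the two metrics
        scored = [(0.7 * bert_score(sent, c) + 0.3 * rouge(sent, c), c)
                  for c in remaining]
        best_score, best = max(scored)
        pairs.append((sent, best, best_score))
        remaining.remove(best)  # delete the match to prevent duplicate pairing
    return pairs
```

In the real pipeline the candidate set would also be restricted to commentary sentences inside the news sentence's time window before scoring.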
false |
Western Armenian Wikipedia, 04.2023
4M tokens
10,785 articles |
false | # AutoTrain Dataset for project: pro
## Dataset Description
This dataset has been automatically processed by AutoTrain for project pro.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": ... |
false | # Dataset Card for "mfm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | # Opus100 Prompt
- Num examples:
- 1,000,000 (train)
- 2,000 (validation)
- 2,000 (test)
- Language: English, Vietnamese
|
false | # AutoTrain Dataset for project: ethnicity-test_v003
## Dataset Description
This dataset has been automatically processed by AutoTrain for project ethnicity-test_v003.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as foll... |
false | ## Dataset
This FLAN dataset is built for instruction tuning of causal language models. Text with improper encoding/decoding was cleaned. The dataset includes the Dialog Few Shot Options task.
## List of Mixtures
We've broken down the Flan Collection into several sub-mixtures. These are "flan" (Flan 2021), "t0" (P3 excluding Fla... |
false |
# Dataset Card for Quora Chat Dutch
## Dataset Description
- **Homepage:** N/A
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Bram Vanroy
### Dataset Summary
This dataset contains 54,444 conversations between an AI assistant and a (fake) "Human" (generated) in Dutch. They ar... |
false | ## Dataset
This FLAN dataset is built for instruction tuning of causal language models. Text with improper encoding/decoding was cleaned. The dataset includes the Dialog submix.
## List of Mixtures
We've broken down the Flan Collection into several sub-mixtures. These are "flan" (Flan 2021), "t0" (P3 excluding Flan 2021), "niv2"... |
false |
# Persian_ChatBot_dataset_Fine_Tuning_Alpaca_Model
Persian ChatBot dataset, fine-tune LLaMa on instructed data (preprocessed alpaca dataset). [GitHub](https://github.com/AliEdalat/ChatBot_for_persian_LLaMA_fine_tune.git)
- we use [preprocessed alpaca dataset](https://github.com/thisserand/alpaca-lora-finetune-languag... |
false | # Dataset Card for "NTU-Stem"

The NTU Tree Dataset is a high-resolution few-shot learning dataset of the stem images of 15 different tree species found in the National Taiwan University (NTU) campus. The dataset was collected using personal cellphones in an effort to increase familiarity with th... |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
true |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hug... |
true | # Dataset Card for "Hugging Face GitHub Issues
## Dataset Description
- **Point of Contact:** [Ben Chan](benchan79@gmail.com)
### Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets [repository](https://github.com/huggingface/datasets). It is inte... |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
false | Samples of ~10-15 seconds of Luis Alberto Spinetta singing.
Clean, with no instruments and no silences.
Songs from Pescado Rabioso, Almendra, Invisible, and his solo career. |
true |
# Dataset Card for XNLI Code-Mixed Corpus
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
Binary mode classification (spoken vs written)
### Languages
- English
- German
- French
- Germ... |
false |
# Dataset Card for Stack Overflow Chat Dutch
## Dataset Description
- **Homepage:** N/A
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Bram Vanroy
### Dataset Summary
This dataset contains 56,964 conversations between an AI assistant and a (fake) "Human" (generated) in Dutch... |