text-classification
| classes (bool, 2 values) | text (string, lengths 0–664k) |
|---|---|
true | |
false | |
This is a transformed copy of [Alpaca Cleaned Dutch](https://huggingface.co/datasets/BramVanroy/alpaca-cleaned-dutch) to make it suitable for the format used in [Baize](https://github.com/project-baize/baize-chatbot). Please refer to that dataset for more information, which includes:
- licensing information;
- biase... |
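As an illustration of what such a transformation might look like (not taken from the card: the Baize transcript template and the field names `instruction`, `input`, and `output` below are assumptions), a minimal sketch:

```python
def alpaca_to_baize(record):
    """Convert one Alpaca-style record (instruction/input/output) into a
    single-turn Baize-style dialogue transcript.
    The template below is an assumption, not the exact Baize format."""
    prompt = record["instruction"]
    if record.get("input"):
        prompt += "\n" + record["input"]
    return (
        "The conversation between human and AI assistant.\n"
        f"[|Human|] {prompt}\n"
        f"[|AI|] {record['output']}"
    )

example = {
    "instruction": "Vertaal naar het Engels.",
    "input": "Hallo wereld",
    "output": "Hello world",
}
print(alpaca_to_baize(example))
```

The real conversion would also need to handle multi-turn dialogues and any speaker tokens Baize actually expects.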
true |
Argument Mining in Scientific Reviews (AMSR)
We release a new dataset of peer reviews from different computer science conferences with annotated arguments, called AMSR (**A**rgument **M**ining in **S**cientific **R**eviews).
1. Raw Data
conferences_raw/ contains directories for each conference we scraped (e.g., [icl... |
false |
## E Dataset
This is the card for e dataset |
false |
# Dataset Card for Snippet-MLSUM-500
### Dataset Summary
This dataset is a sample of ~500 news articles from the [MLSUM](https://huggingface.co/datasets/mlsum) dataset, augmented with machine generated news snippets.
### Supported Tasks
This dataset was created to support the task of generating news snippets such... |
false |
# Roi'adan V'anzey Lycoris
As of right now, this is yet to be tested in A1111.
The alpha edition was trained at 110 steps for 10 epochs.
I don't think that's right LOL.
I'll fill out more of this when I'm sure it's OK.
It's working; I just need time to refill this card: https://civitai.com/models/25430 |
false |
# NorPaca Norwegian Bokmål
This dataset is a translation to Norwegian Bokmål of [alpaca_gpt4_data.json](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM), a clean version of the [Alpaca dataset made at Stanford](https://huggingface.co/datasets/tatsu-lab/alpaca), but generated with GPT4.
# Prompt to generat... |
false | # Dataset Card for Genshin Voice
## Dataset Description
### Dataset Summary
The Genshin Voice dataset is a text-to-voice dataset of different Genshin Impact characters unpacked from the game.
### Languages
The text in the dataset is in Mandarin.
## Dataset Creation
### Source Data
#### Initial Data Collection a... |
false | # AutoTrain Dataset for project: colors-1
## Dataset Description
This dataset has been automatically processed by AutoTrain for project colors-1.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
... |
false | |
true | This is the Spanish version of Winogrande Small (640 instances) for training only.
The translation was done manually by a group of experts. The dataset will still be improved in the future.
We also acknowledge Somos-NLP for this achievement. |
true | # Dataset Card
**Paper**: On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research
**Abstract**: Perception of toxicity evolves over time and often differs between geographies and cultural backgrounds. Similarly, black-box commercially available APIs for detecting toxicity, such as the Perspectiv... |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
false |
# Dataset Card for Instruct-Snippet-MLSUM-500
### Dataset Summary
This is a multitask instruction finetuning dataset for the task of news snippet generation. It is built from a sample of ~500 news articles from the [MLSUM](https://huggingface.co/datasets/mlsum) dataset, augmented with machine generated n... |
true | <div align="center">
<article style="display: flex; flex-direction: column; align-items: center; justify-content: center;">
<p align="center"><img width="300" src="https://user-images.githubusercontent.com/25022954/209616423-9ab056be-5d62-4eeb-b91d-3b20f64cfcf8.svg" /></p>
<h1 style="width: 100%; text-align: ce... |
false | Rana is an alter in Duskfall Crew's system -
the Virtual World Lycoris sets are based on Dissociative Identity Disorder.
Actually wait... Rana is a FORMER alter, and is now fused with Tobias and Tori lol. |
false | Alex Brightman Lycoris |
true |
# Dataset Card for XNLI Code-Mixed Corpus (Sampled)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
Binary mode classification (spoken vs written)
### Languages
- English
- German
- Fre... |
false | # XSUM NO
A Norwegian summarization dataset custom-made for evaluating or fine-tuning GPT models.
## Data Collection
Data was scraped from Aftenposten.no and Vg.no; the summarization column consists of the article's title and ingress (lead paragraph).
## How to Use
```python
from datasets import load_dataset
data = load_dataset(... |
false | |
false | |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
true | |
false |
# AnimeHeadsv3 Object Detection Dataset
The AnimeHeadsv3 Object Detection Dataset is a collection of anime and art images, including manga pages, that have been annotated with object bounding boxes for use in object detection tasks.
## Contents
There are two versions of the dataset available:
The dataset contains a ... |
true | # PARARULE-Plus-Depth-2
This branch contains the Depth=2 subset of PARARULE-Plus. PARARULE-Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the dataset of PARARULE (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assu... |
true | # PARARULE-Plus-Depth-3
This branch contains the Depth=3 subset of PARARULE-Plus. PARARULE-Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the dataset of PARARULE (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assu... |
true | # PARARULE-Plus-Depth-4
This branch contains the Depth=4 subset of PARARULE-Plus. PARARULE-Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the dataset of PARARULE (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assu... |
true | # PARARULE-Plus-Depth-5
This branch contains the Depth=5 subset of PARARULE-Plus. PARARULE-Plus is a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on the dataset of PARARULE (Peter Clark et al., 2020). Both PARARULE and PARARULE-Plus follow the closed-world assu... |
false | # English to Colloquial Indonesian Dataset (EColIndo)
The first large-scale, high-quality English-to-colloquial-Indonesian dataset.
Generated entirely via zero-shot translation with ChatGPT.
Author: Yonathan Setiawan
|
false |
---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 19334450
num_examples: 79168
- name: test
num_bytes: 2134369
num_examples: 8757
---
# Dataset Card for NQ-GAR |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
false |
# Dataset Card for "squad"
## Table of Contents
- [Dataset Card for "squad"](#dataset-card-for-squad)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- ... |
false | |
true | # Dataset: sentiment_analysis-IT-dataset
## Dataset Description
Our data was collected by annotating Italian-language tweets from a broad range of topics. In total, we have 2,037 tweets annotated with an emotion label. More details can be found in our paper (https://aclanthology.org/2021.wassa-1.8/).
### Lang... |
false |
# Dataset Card for April 2023 Polish Wikipedia
A Wikipedia dataset containing cleaned Polish-language articles.
The dataset has been built from the Wikipedia dump (https://dumps.wikimedia.org/)
using the [OLM Project](https://github.com/huggingface/olm-datasets).
Each example contains the content of one full Wikip... |
false |
# Dataset Card for Snippet-MLSUM-500-V2
### Dataset Summary
This dataset is a sample of ~500 news articles from the [MLSUM](https://huggingface.co/datasets/mlsum) dataset, augmented with machine generated news snippets.
### Supported Tasks
This dataset was created to support the task of generating news snippets s... |
false |
# Dataset Card for Instruct-Snippet-MLSUM-500-V2
### Dataset Summary
This is a multitask instruction finetuning dataset for the task of news snippet generation. It is built from a sample of ~500 news articles from the [MLSUM](https://huggingface.co/datasets/mlsum) dataset, augmented with machine generate... |
true | # Dataset Card for "applescript-lines-100k-non-annotated"
## Description
Dataset of 100,000 unique lines of AppleScript code scraped from GitHub and GitHub Gists. The dataset has been de-duplicated, comments have been removed (both single and multi-line), and effort has been made to merge multi-line structures such a... |
true | |
false |
# Ruozhiba Jokes Dataset
Ruozhiba (弱智吧) is a very popular forum on Baidu Tieba, famous for short, punchy deadpan jokes. These jokes typically rely on puns, unusual sentence breaks, absurd logic, and similar devices. Even today's most advanced language models struggle to fully understand Ruozhiba jokes.
[Ruozhiba](https://tieba.baidu.com/f?ie=utf-8&kw=%E5%BC%B1%E6%99%BA)
I collected 100 Ruozhiba jokes from the internet, of which 45 are declarative sentences and 55 are questions. Combining manual work with language models, I analyzed these jokes and built this small dataset.
## Declarative Jokes
Declarative jokes usually end with a period and are not easily mistaken by language models for ordinary questions.
For example: "出人头地常年盛产人头。"
## Question Jokes
Question... |
false | |
false |
The dataset contains (almost) the entire OpenSubtitles database for Japanese:
- Over 7,000 TV shows and/or movies.
- The subtitles are human-generated.
- The dataset has been parsed, cleaned, and converted to UTF-8.
File contents:
- OpenSubtitles.parquet: The text and the time data.
- OpenSubtitles_meta.parquet: The... |
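A minimal sketch of joining the two files, assuming pandas is available and that both share an identifier column (hypothetically called `file_id` here; the real column names are not shown in the card):

```python
import pandas as pd

# Hypothetical stand-ins for the two files; in practice you would load them with
# pd.read_parquet("OpenSubtitles.parquet") and pd.read_parquet("OpenSubtitles_meta.parquet").
subs = pd.DataFrame({
    "file_id": [1, 1, 2],
    "start": ["00:00:01", "00:00:05", "00:00:02"],
    "text": ["こんにちは", "元気ですか", "さようなら"],
})
meta = pd.DataFrame({
    "file_id": [1, 2],
    "title": ["Show A", "Movie B"],
})

# Attach per-title metadata to every subtitle line.
joined = subs.merge(meta, on="file_id", how="left")
print(joined[["title", "start", "text"]])
```

The left join keeps every subtitle line even if a title is missing from the metadata file.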
false | # AutoTrain Dataset for project: fine-tune
## Dataset Description
This dataset has been automatically processed by AutoTrain for project fine-tune.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
... |
true | |
false |
OK, so as usual we haven't had time to test these, though many of the poses may well have been covered by testing - the images are in the grids, and we've included sample images for the face landmarks.
...Don't mock us LOL, we literally found a face landmark demo on huggingface, and went nuts making dumb ... |
false | # AutoTrain Dataset for project: xx
## Dataset Description
This dataset has been automatically processed by AutoTrain for project xx.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_db_i... |
false | [Original dataset] - This dataset is simply a translation of the [gsm8k] dataset.
[Original dataset]: <https://huggingface.co/datasets/gsm8k>
[gsm8k]: <https://huggingface.co/datasets/gsm8k> |
false | |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
true | # Dataset Card for "cr"
## Dataset Description
Product review dataset from SentEval.
## Data Fields
- `sentence`: Complete sentence expressing an opinion about a product.
- `label`: Sentiment of the opinion, either "negative" (0) or "positive" (1).
[More Information needed](https://github.com/huggingface/datasets/blo... |
false | # AutoTrain Dataset for project: cancer-lakera
## Dataset Description
This dataset has been automatically processed by AutoTrain for project cancer-lakera.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```jso... |
false | |
false | |
false |
# character_similarity
This is a dataset used for training models to determine whether two anime images (containing only one person) depict the same character. The dataset includes the following versions:
| Version | Filename | Characters | Images | Information ... |
false | Everything in here should be under CreativeML Open Rail.
We hope that you enjoy the content in here.
We're not at risk for anything you do with it.
Go on, use it! |
false | |
false | [Original dataset] - This dataset is simply a translation of the [qasc] dataset.
[Original dataset]: <https://huggingface.co/datasets/qasc>
[qasc]: <https://huggingface.co/datasets/qasc> |
false |
# Dataset Card for German REBEL Dataset
### Dataset Summary
This dataset is the German version of Babelscape/rebel-dataset. It has been generated using [CROCODILE](https://github.com/Babelscape/crocodile).
The Wikipedia Version is from November 2022.
### Languages
- German
## Dataset Structure
```
{"docid": "94... |
false | # Dataset Card for Anything v3.0 Glazed Samples
## Dataset Description
### Dataset Summary
This dataset contains image samples originally generated by [Linaqruf/anything-v3.0](https://huggingface.co/Linaqruf/anything-v3.0)
and subsequently processed with the [Glaze](https://glaze.cs.uchicago.edu/) tool.
### Supported Ta... |
false | |
true | |
true | ### Dataset Description
This dataset, compiled by Brendan Dolan-Gavitt, contains ~100 thousand C++ functions and GPT-3.5-turbo-generated summaries of each function's purpose.
An example of Brendan's original prompt and GPT-3.5's summary may be found below.
```
int gg_set_focus_pos(gg_widget_t *widget, int x, int y) {
r... |
false |
# Dataset Card for GPT4All-Community-Discussions
## Dataset Description
This dataset contains ethically gathered discussions from the community, who shared their experiences with various open source discussion models using the GPT4All-ui tool. The dataset is open for any use, including commercial use, as long as pro... |
false |
ESLO audio dataset
configs:
- no_overlap_no_hesitation
- no_hesitation
- no_overlap
- raw
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
Dependencies:
- ffmpeg: `sudo apt-get install ffmpeg`
- ffmpeg-python: `pip install ffmpeg-python`
`... |
false | |
true |
The dataset is stored at the OSF [here](https://osf.io/ksdnm/)
MLRegTest is a benchmark for sequence classification, containing training, development, and test sets from 1,800 regular languages.
Regular languages are formal languages, which are sets of sequences definable with certain kinds of formal grammars, includ... |
false | A cleaned-up version of the ChangePerson dataset from https://github.com/Koziev/NLP_Datasets/tree/master/ChangePerson |
false | |
false | |
false | |
false | |
true | Great |
false | # Dataset Card for "open-instruct-v1_deduped"
- Deduplicated version of [Isotonic/open-instruct-v1](https://huggingface.co/datasets/Isotonic/open-instruct-v1)
- Deduplicated with min Jaccard similarity of 0.8
- Uses Stability's system prompt
```
### System: StableLM Tuned (Alpha version)
- StableLM is a helpful and h... |
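For intuition, a dedup pass with a minimum Jaccard similarity of 0.8 could look like the following sketch over word sets; the card does not specify the actual implementation (large-scale pipelines typically approximate this with MinHash/LSH rather than exact pairwise comparison):

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two strings."""
    sa, sb = set(a.split()), set(b.split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def deduplicate(texts, threshold=0.8):
    """Keep a text only if it is less than `threshold`-similar to every
    already-kept text. O(n^2) pairwise comparison -- fine for a sketch,
    too slow at scale, hence MinHash/LSH in real pipelines."""
    kept = []
    for t in texts:
        if all(jaccard(t, k) < threshold for k in kept):
            kept.append(t)
    return kept

docs = ["a b c d e", "a b c d e f", "x y z"]
# "a b c d e f" is dropped: its similarity to "a b c d e" is 5/6 >= 0.8.
print(deduplicate(docs))
```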
false | |
false | # AutoTrain Dataset for project: finalbartmodel
## Dataset Description
This dataset has been automatically processed by AutoTrain for project finalbartmodel.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```j... |
false |
# Dataset Card for AfriQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)... |
false | # AutoTrain Dataset for project: shawt
## Dataset Description
This dataset has been automatically processed by AutoTrain for project shawt.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"ima... |
false | test |
false | - info: This dataset comes from the ANSES-CIQUAL 2020 Table in English, in XML format, found at https://www.data.gouv.fr/fr/datasets/table-de-composition-nutritionnelle-des-aliments-ciqual/ |
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hug... |
false | # AutoTrain Dataset for project: suzume-questioner
## Dataset Description
This dataset has been automatically processed by AutoTrain for project suzume-questioner.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:... |
false | LORA EDITION - for you LORA-MERGING NERDS!
We're going to re-do this in LyCORIS for you lyco-hoarding nerds.
Also, we're not at fault for anything you do with this; don't do anything illegal with it, and please, SERIOUSLY, if she shows up in the middle of the night don't feed her - you've watched Gremlins, you know how this... |
false | |
true |
# Victorian Era Authorship Attribution Data Set
> Gungor, Abdulmecit, *Benchmarking Authorship Attribution Techniques Using Over A Thousand Books by Fifty Victorian Era Novelists*, Purdue University master's thesis, April 2018
## NOTICE
This dataset was downloaded from the [UCI Machine Learning Repository](https://archive.ics.uci... |
false | # Source Datasets
1. News from the website of the Komi administration (https://rkomi.ru/)
2. Komi media library (http://videocorpora.ru/)
3. Millet porridge by Ivan Toropov (adaptation)

# Authors
Shilova Nadezhda
Chernousov Georgy
|
false |
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [U... |
true | # AutoTrain Dataset for project: car0fil-001
## Dataset Description
This dataset has been automatically processed by AutoTrain for project car0fil-001.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
... |
false | https://osf.io/dwsnm/ |
false | |
false | |
false | # Machine translation dataset for NLU (Virtual Assistant) with slot transfer between languages
## Dataset Summary
Disclaimer: This is for research purposes only. Please have a look at the license section below. Some of the datasets used to construct IVA_MT have an unknown license.
IVA_MT is a machine translation datas... |
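To illustrate what "slot transfer between languages" can mean in NLU MT data, here is a hypothetical sketch (the inline-tag markup and the slot names `time` and `date` are assumptions for illustration, not IVA_MT's actual format): slot values are wrapped in tags in both source and target, so the translation preserves which span fills which slot.

```python
import re

def extract_slots(annotated: str):
    """Pull slot spans out of an utterance annotated with inline
    <slot_name>value</slot_name> tags (a hypothetical convention),
    returning the plain text and a slot dictionary."""
    slots = dict(re.findall(r"<(\w+)>(.*?)</\1>", annotated))
    plain = re.sub(r"</?\w+>", "", annotated)
    return plain, slots

src = "Set an alarm for <time>7 am</time> on <date>Monday</date>"
tgt = "Ustaw budzik na <time>7 rano</time> w <date>poniedziałek</date>"

src_text, src_slots = extract_slots(src)
tgt_text, tgt_slots = extract_slots(tgt)

# The slot *keys* line up across languages even though the values are translated.
assert src_slots.keys() == tgt_slots.keys()
print(src_text, src_slots)
print(tgt_text, tgt_slots)
```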
false |
# Dataset Card for "george-chou/AAL-statistics-volumn"
## Usage
```
from datasets import load_dataset
data = load_dataset("george-chou/AAL-statistics-volumn",
data_files='AAL_statistics_volumn_labelled.csv', split='train')
for item in data:
print(item)
```
## Maintenance
```
git clone git@h... |
true | |
false | |
false | # Dataset Card for "github-code-haskell-file"
Rows: 339k
Download Size: 806M
This dataset is extracted from [github-code-clean](https://huggingface.co/datasets/codeparrot/github-code-clean).
Each row also contains attribute values for my personal analysis project.
12.6% (43k) of the rows have cyclomatic complexity a... |
false | # :page_with_curl: Spanish Paraphrase Corpora

Manually paraphrased corpus in Spanish
## The Sushi Corpus
This [corpus](https://github.com/GIL-UNAM/SpanishParaphraseCorpora/tree/main/Sushi) is designed to assess the similarity between a... |
false | # Selfies, ID Images dataset
**4083** sets, each including *2 photos of a person from their documents and 13 selfies*. **571** sets of Hispanics and **3512** sets of Caucasians.
The photo documents contain only the person's photo; all personal information from the document is hidden.
## File with the extension .csv
includ... |
false | # Anti-Spoofing dataset: real
The dataset consists of 40,000 videos and selfies of unique people, plus 15,000 attack replays from 4,000 unique devices.
# File with the extension .csv
includes the following information for each media file:
- **phone**: the device used to capture the media files,
- **selfie_link**: the ... |
true | The final version is recommended; it contains XSS, SQL injection, and similar attack data, with benign samples drawn from part of SST-2