false |
# m2m3_fine_tuning_ref_cmbert_iob2
## Introduction
This dataset was used to fine-tune [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories' entries from the 19th century.
## Dataset ... |
false |
# m2m3_fine_tuning_ref_ptrn_cmbert_iob2
## Introduction
This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories' entries from t... |
false |
# m0_qualitative_analysis_ref_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the **flat NER task** using the Flat NER approach [M0].
It contains 19th-century Paris trade directories' entries.
## Dat... |
false |
# m2m3_fine_tuning_ocr_cmbert_io
## Introduction
This dataset was used to fine-tune [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories' entries from the 19th century.
## Dataset pa... |
false |
# m0_qualitative_analysis_ref_ptrn_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the **flat NER task** using the Flat NER approach [M0].
It contains 19th-century Paris trade ... |
false |
# m2m3_fine_tuning_ocr_ptrn_cmbert_io
## Introduction
This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories' entries from the... |
false |
# m0_qualitative_analysis_ocr_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the **flat NER task** using the Flat NER approach [M0].
It contains 19th-century Paris trade directories' entries.
## Dat... |
false |
# m2m3_fine_tuning_ocr_cmbert_iob2
## Introduction
This dataset was used to fine-tune [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) for the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories' entries from the 19th century.
## Dataset ... |
false |
# m0_qualitative_analysis_ocr_ptrn_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the **flat NER task** using the Flat NER approach [M0].
It contains 19th-century Paris trade ... |
false |
# m2m3_fine_tuning_ocr_ptrn_cmbert_iob2
## Introduction
This dataset was used to fine-tune [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) for the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories' entries from t... |
false |
# m1_qualitative_analysis_ref_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories' entries from t... |
false |
# m1_qualitative_analysis_ref_ptrn_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris tra... |
false |
# m1_qualitative_analysis_ref_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories' entries from... |
false |
# m1_qualitative_analysis_ref_ptrn_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris t... |
false |
# m1_qualitative_analysis_ocr_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories' entries from t... |
false |
# m1_qualitative_analysis_ocr_ptrn_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris tra... |
false |
# m1_qualitative_analysis_ocr_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories' entries from... |
false |
# m1_qualitative_analysis_ocr_ptrn_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris t... |
false |
# m2m3_qualitative_analysis_ref_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories' entries from... |
false |
# m2m3_qualitative_analysis_ref_ptrn_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris t... |
false |
# m2m3_qualitative_analysis_ref_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories' entries fr... |
false |
# m2m3_qualitative_analysis_ref_ptrn_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris... |
false |
# m2m3_qualitative_analysis_ocr_cmbert_io
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories' entries from... |
false |
# m2m3_qualitative_analysis_ocr_cmbert_iob2
## Introduction
This dataset was used to perform **qualitative analysis** of [Jean-Baptiste/camembert-ner](https://huggingface.co/Jean-Baptiste/camembert-ner) on the **nested NER task** using the Independent NER layers approach [M1].
It contains Paris trade directories' entries fr... |
false | # vinbigdata_asr_vlsp_2020
- Source:
- Num examples: 46,494
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/vinbigdata_asr_vlsp_2020_vi")
``` |
false | # wikiquote_filtered
- Source: https://huggingface.co/datasets/bigscience-data/roots_en_wikiquote
- Num examples: 449
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/wikiquote_vi")
``` |
false | # wikiquote_filtered
- Source: https://huggingface.co/datasets/bigscience-data/roots_en_wikiquote
- Num examples: 31,929
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/wikiquote_en")
``` |
false | # wikivoyage_filtered
- Source: https://huggingface.co/datasets/bigscience-data/roots_en_wikivoyage
- Num examples: 24,838
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/wikivoyage_en")
``` |
false | # wikivoyage_filtered
- Source: https://huggingface.co/datasets/bigscience-data/roots_en_wikivoyage
- Num examples: 1,527
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/wikivoyage_vi")
``` |
false | # wikibooks_filtered
- Source: https://huggingface.co/datasets/bigscience-data/roots_en_wikibooks
- Num examples: 54,773
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/wikibooks_en")
``` |
false | # wikibooks_filtered
- Source: https://huggingface.co/datasets/bigscience-data/roots_en_wikibooks
- Num examples: 3,832
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/wikibooks_vi")
``` |
false | # vinbigdata_monolingual_vlsp_2020
- Source:
- Num examples: 18,579,972
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/vinbigdata_monolingual_vlsp_2020_vi")
``` |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/jjiiaa/mj-prompts/
- **Repository:** https://huggingface.co/datasets/jjiiaa/mj-prompts/
### Dataset Summary
adding soon
## Dataset Structure
adding soon
### Data Splits
adding soon
### Licensing Information
adding... |
true | |
false | # wiki_lingua
- Source: https://huggingface.co/datasets/GEM/wiki_lingua
- Num examples: 6,616
- Language: Vietnamese
```python
from datasets import load_dataset
load_dataset("tdtunlp/wiki_lingua_vi")
``` |
false | # wiki_lingua
- Source: https://huggingface.co/datasets/GEM/wiki_lingua
- Num examples: 57,945
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/wiki_lingua_en")
``` |
false |
# Dataset Card for UTS_Text
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structur... |
true | |
true | AG's News Topic Classification Dataset
Version 3, Updated 09/09/2015
ORIGIN
AG is a collection of more than 1 million news articles. News articles have been gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. ComeToMyHead is an academic news search engine which has been runni... |
false |
# App Flow
This dataset consists of hourly maximum traffic flow for 128 systems deployed on 16 logical data centers, resulting in 1083 different time series in total.
The length of each series is more than 4 months. Each time series is divided into two segments for training and testing with a ratio of 32:1.
This datase... |
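As an illustration of the 32:1 train/test split described above (a sketch, not the dataset's own tooling; the function name and the synthetic hourly series are assumptions):

```python
# Sketch: split a single time series into train and test segments with a
# 32:1 ratio. Illustrative only; not part of the App Flow dataset's tooling.

def split_32_to_1(series):
    """Return (train, test) where len(train):len(test) is approximately 32:1."""
    cut = len(series) * 32 // 33  # 32 parts train, 1 part test
    return series[:cut], series[cut:]

# Example: an hourly series covering 33 days (one value per hour).
hourly_flow = list(range(33 * 24))
train, test = split_32_to_1(hourly_flow)
print(len(train), len(test))  # 768 24
```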
true | # AutoTrain Dataset for project: stratefied-processing
## Dataset Description
This dataset has been automatically processed by AutoTrain for project stratefied-processing.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as ... |
true | # AutoTrain Dataset for project: i-bert-twitter-sentiment
## Dataset Description
This dataset has been automatically processed by AutoTrain for project i-bert-twitter-sentiment.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset loo... |
false |
## Source:
Copied from the [original dataset](https://archive.ics.uci.edu/ml/datasets/breast+cancer+wisconsin+%28diagnostic%29)
### Creators:
1. Dr. William H. Wolberg, General Surgery Dept.
University of Wisconsin, Clinical Sciences Center
Madison, WI 53792
wolberg '@' eagle.surgery.wisc.edu
2. W. Nick Street, Computer... |
false |
# Dataset Card for Neon Isometric
Do I know what I'm doing? No. XD So if anyone wants to help with this, go for gold.
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datase... |
false | TRAINING_CORPUS.txt:
The TRAINING_CORPUS is the collection of 12 books (The Modern Prometheus, The Lair of the White Worm by Bram Stoker, The Vampyre; a Tale, Nightmare Abbey by Thomas Love Peacock, The History of Caliph Vathek by William Beckford, The Lock and Key Library: Classic Mystery and Detective Stories: Old... |
false | TRAINING_CORPUS.txt
The TRAINING_CORPUS is the collection of 12 books (The Modern Prometheus, The Lair of the White Worm by Bram Stoker, The Vampyre; a Tale, Nightmare Abbey by Thomas Love Peacock, The History of Caliph Vathek by William Beckford, The Lock and Key Library: Classic Mystery and Detective Stories: Old T... |
false | # Vector store of embeddings for books
- **"1984" by George Orwell**
- **"The Almanac of Naval Ravikant" by Eric Jorgenson**
This is a [faiss](https://github.com/facebookresearch/faiss) vector store created with [instructor embeddings](https://github.com/HKUNLP/instructor-embedding) using [LangChain](https://langchai... |
false |
## Source:
Creator:
David J. Slate
Odesta Corporation; 1890 Maple Ave; Suite 115; Evanston, IL 60201
Donor:
David J. Slate (dave '@' math.nwu.edu) (708) 491-3867
## Data Set Information:
The objective is to identify each of a large number of black-and-white rectangular pixel displays as one of the 26 capital let... |
true | |
true | # AutoTrain Dataset for project: tax_issues
## Dataset Description
This dataset has been automatically processed by AutoTrain for project tax_issues.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
... |
false | |
false | # AutoTrain Dataset for project: klasifikasi-tutupan-lahan
## Dataset Description
This dataset has been automatically processed by AutoTrain for project klasifikasi-tutupan-lahan.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset l... |
false |
Thanks and please support:
Ecigator is one of the well-known vape brands spun off from Giftsoar Technology Co., Ltd. It has been an ISO-certified [disposable vape manufacturer](https://ecigator.com/) for OEMs, ODMs, and OBMs since 2010.
[https://ecigator.com/](https://ecigator.com/) |
false |
# Dataset Card for dev_mode-wtq
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fiel... |
false |
# Dataset Card for UTS_Dictionary
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-st... |
false | # Dataset Card for "ISSAI_KSC_335RS_v_1_1"
Kazakh Speech Corpus (KSC)
Identifier: SLR102
Summary: A crowdsourced open-source Kazakh speech corpus developed by ISSAI (330 hours)
Category: Speech
License: Attribution 4.0 International (CC BY 4.0)
Downloads (use a mirror closer to you):
ISSAI_KSC_335RS_v1.1_flac.tar.... |
true |
# Dataset Card for Recept
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More I... |
false |
## Source
Source: [UCI](https://archive.ics.uci.edu/ml/datasets/BlogFeedback)
## Data Set Information:
This data originates from blog posts. The raw HTML documents of the blog posts were crawled and processed.
The prediction task associated with the data is the prediction
of the number of comments in the upcoming 24... |
false | Concatenated and edited collection of fairy tales taken from Project Gutenberg.
Texts:
https://www.gutenberg.org/files/2591/2591-0.txt
https://www.gutenberg.org/files/503/503-0.txt
https://www.gutenberg.org/files/7277/7277-0.txt
https://www.gutenberg.org/cache/epub/35862/pg35862.txt
https://www.gutenberg.org/cache/epub... |
false | # AutoTrain Dataset for project: guitarsproject
## Dataset Description
This dataset has been automatically processed by AutoTrain for project guitarsproject.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```j... |
false | |
false | |
false |
## Source
https://www.kaggle.com/datasets/dhoogla/unswnb15?resource=download
## Dataset
This is an academic intrusion detection dataset. All the credit goes to the original authors: Dr. Nour Moustafa and Dr. Jill Slay.
Please cite their original paper and all other appropriate articles listed on the UNSW-NB15 page.... |
false | |
false | |
false | |
false | |
false | # Dataset Card
## Table of Contents
- [Dataset Card](#dataset-card)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Datase... |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
false |
# Dataset Card for semantic-domains-greek-lemmatized
## Dataset Description
- **Point of Contact:** https://huggingface.co/ryderwishart / https://github.com/ryderwishart
### Dataset Summary
Semantic domains aligned to tokens, broken down by sentences. Tokens have been lemmatized according to data in [Clear-Bible/m... |
true | Basic SQL database that classifies claims as true (1) or false (0).
Cleaned the FEVER and FEVEROUS datasets, and scraped and cleaned the PolitiFact website, into this DB file. |
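A minimal sketch of querying such a DB file with Python's built-in sqlite3 module. The table and column names (`claims`, `claim`, `label`) and the sample rows are assumptions for illustration, since the actual schema is not documented here:

```python
import sqlite3

# Hypothetical schema; the real table/column names in the DB file may differ.
conn = sqlite3.connect(":memory:")  # replace :memory: with the actual .db path
conn.execute("CREATE TABLE claims (claim TEXT, label INTEGER)")  # label: 1=true, 0=false
conn.execute("INSERT INTO claims VALUES (?, ?)", ("The Eiffel Tower is in Paris.", 1))
conn.execute("INSERT INTO claims VALUES (?, ?)", ("The moon is made of cheese.", 0))

# Count claims per label.
rows = conn.execute(
    "SELECT label, COUNT(*) FROM claims GROUP BY label ORDER BY label"
).fetchall()
print(rows)  # [(0, 1), (1, 1)]
conn.close()
```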
false | |
false | # AutoTrain Dataset for project: auto_train
## Dataset Description
This dataset has been automatically processed by AutoTrain for project auto_train.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
... |
false | |
false |
The blob dataset!
-----
This dataset consists of a collection of 100,000 images containing randomly generated blobs over a random noise background. Each image is annotated with its number of blobs and whether they are large or small.
The task consists of learning at the same time a quantitative guess (number of blobs) and a qualit... |
false | |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
Multi-answer questioning, token classification
### Languages
English
## Dataset Structure
### Data Inst... |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
false |
# Dataset Card for Star Villain Marvel Comics LoRa
## Dataset Description
- **https://duskfallcrew.carrd.co/**
- **https://civitai.com/models/14831/star-ryan-ripley**
# Data set for Duskfallcrew/Star_Marvel_comics_LoRa
Trained with: https://colab.research... |
false | |
false | |
true | # AutoTrain Dataset for project: fake_news_fine_tuned_v4
## Dataset Description
This dataset has been automatically processed by AutoTrain for project fake_news_fine_tuned_v4.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks... |
false | # AutoTrain Dataset for project: cylonix_summarize
## Dataset Description
This dataset has been automatically processed by AutoTrain for project cylonix_summarize.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:... |
false | # AutoTrain Dataset for project: size
## Dataset Description
This dataset has been automatically processed by AutoTrain for project size.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image... |
true | # Movie Review Data
* Original source: sentence polarity dataset v1.0 http://www.cs.cornell.edu/people/pabo/movie-review-data/
* Seems to be the same as https://huggingface.co/datasets/rotten_tomatoes, but with a different split.
## Original README
=======
Introduction
This README v1.0 (June, 2005) for the v1.0 sentence polarit... |
true | # Dataset Card for "reklamation24_haus-reinigung-full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | # CNN Dailymail
- Source: https://huggingface.co/datasets/cnn_dailymail
- Num examples:
- 287,113 (train)
- 13,368 (validation)
- 11,490 (test)
- Language: English
```python
from datasets import load_dataset
load_dataset("tdtunlp/cnn_dailymail_en")
```
- Format for summarization task
```python
import re
def... |
true | # AutoTrain Dataset for project: sentiment_analysis
## Dataset Description
This dataset has been automatically processed by AutoTrain for project sentiment_analysis.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows... |
false | |
false |
### Source Corpus
```
@misc{inel-kamas-1-0,
title={INEL Kamas Corpus},
DOI={10.25592/uhhfdm.9752},
author={Gusev, Valentin and Klooster, Tiina and Wagner-Nagy, Beáta},
year={2019},
month={Dec}
}
``` |
true | |
false | |
false |
# Dataset Card Creation Guide
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderbo... |
false | |
false | # AutoTrain Dataset for project: english-tokipona
## Dataset Description
This dataset has been automatically processed by AutoTrain for project english-tokipona.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
... |
false | # AutoTrain Dataset for project: map_no_map_twitter_demo
## Dataset Description
This dataset has been automatically processed by AutoTrain for project map_no_map_twitter_demo.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks... |
false | |
false | # Dataset name: "modified_anthropic_convo_data"
# Dataset Card for Conversational AI Bot
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/Anthropic/hh-rlhf
### Dataset Summary
- This dataset is the augmented version of the same dataset found here https://huggingface.co/datasets/Anthropic/hh-... |
false | |
false |