---
license: mit
task_categories:
- text-classification
language:
- en
pretty_name: Sentiment Merged
size_categories:
- 100K<n<1M
---
# Dataset Card for Sentiment Merged (SST-3, DynaSent R1, R2)
This is a dataset for 3-way sentiment classification of reviews (negative, neutral, positive). It is a merge of Stanford Sentiment Treebank (SST-3) and DynaSent Rounds 1 and 2, licensed under Apache 2.0 and Creative Commons Attribution 4.0 respectively.
## Dataset Details
The SST-3, DynaSent R1, and DynaSent R2 datasets were randomly mixed to form a new dataset with 102,097 Train examples, 5,421 Validation examples, and 6,530 Test examples. See Table 1 for the distribution of labels within this merged dataset.
Table 1: Label Distribution for the Merged Dataset
| Split | Negative | Neutral | Positive |
|---|---|---|---|
| Train | 21,910 | 49,148 | 31,039 |
| Validation | 1,868 | 1,669 | 1,884 |
| Test | 2,352 | 1,829 | 2,349 |
Table 2: Contribution of Sources to the Merged Dataset
| Dataset | Samples | Percent (%) |
|---|---|---|
| DynaSent R1 Train | 80,488 | 78.83 |
| DynaSent R2 Train | 13,065 | 12.80 |
| SST-3 Train | 8,544 | 8.37 |
| Total | 102,097 | 100.00 |
### Dataset Description
SST-5 is the Stanford Sentiment Treebank 5-way classification dataset (positive, somewhat positive, neutral, somewhat negative, negative). To create SST-3 (positive, neutral, negative), the 'somewhat positive' class was merged into 'positive', and the 'somewhat negative' class was merged into 'negative'.
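A minimal sketch of this label collapsing, assuming the SST-5 labels are the string names listed above (actual SST releases may instead encode them as integers 0-4):

```python
# Map SST-5 fine-grained labels onto the three SST-3 classes.
SST5_TO_SST3 = {
    "negative": "negative",
    "somewhat negative": "negative",
    "neutral": "neutral",
    "somewhat positive": "positive",
    "positive": "positive",
}

def to_sst3(label: str) -> str:
    """Collapse a 5-way SST label into negative/neutral/positive."""
    return SST5_TO_SST3[label]
```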
DynaSent is a sentiment analysis dataset and dynamic benchmark with three classification labels: positive, negative, and neutral. The dataset was created in two rounds. In Round 1, a RoBERTa model was fine-tuned on a variety of datasets including SST-3, IMDB, and Yelp. The authors then extracted challenging sentences that fooled the model and validated them with human annotators. For Round 2, a new RoBERTa model was trained on similar (but different) data, including the Round 1 dataset, and the Dynabench platform was used to collect sentences, written by crowdworkers, that fooled this model.
It’s worth noting that the source datasets all have class imbalances: in SST-3, positive and negative each have about twice as many examples as neutral; in DynaSent R1, neutral has more than three times as many examples as negative; and in DynaSent R2, positive has more than double neutral. Although this imbalance may be by design for DynaSent (to focus on the more challenging neutral class), it still leaves each source imbalanced, and the risk is that a model mostly learns the dominant class. Merging the data helps mitigate this: although neutral is still the plurality class in the training split, the neutral-to-negative ratio drops from 3.21 in DynaSent R1 to 2.24 in the merged dataset.
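The merged-dataset ratio cited above can be checked directly against the Train row of Table 1:

```python
# Train-split label counts from Table 1.
train_counts = {"negative": 21910, "neutral": 49148, "positive": 31039}

# Neutral-to-negative ratio in the merged training split.
ratio = train_counts["neutral"] / train_counts["negative"]
print(round(ratio, 2))  # 2.24
```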
Another potential issue is that models may overfit to the dominant source dataset, DynaSent R1. See Table 2 for a breakdown of each source's contribution to the merged dataset.
- Curated by: Jim Beno
- Language(s) (NLP): English
- License: MIT
### Dataset Sources
- Repository: jbeno/sentiment
- Paper: ELECTRA and GPT-4o: Cost-Effective Partners for Sentiment Analysis (arXiv:2501.00062)
## Citation
If you use this material in your research, please cite:
```bibtex
@article{beno-2024-electragpt,
  title={ELECTRA and GPT-4o: Cost-Effective Partners for Sentiment Analysis},
  author={James P. Beno},
  journal={arXiv preprint arXiv:2501.00062},
  year={2024},
  eprint={2501.00062},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2501.00062},
}
```
## Uses
The dataset is intended to be used for 3-way sentiment classification of reviews (negative, neutral, positive).
## Dataset Structure
There are three CSV files: train_all.csv, val_all.csv, and test_all.csv. Each contains the merged train, validation, or test split, respectively, as defined by the original source datasets.
| Column | Description |
|---|---|
| sentence | The review sentence |
| label | The class label: negative, neutral, or positive |
| source | The source dataset: sst_local, dynasent_r1, or dynasent_r2 |
| split | The split: train, validation, or test |
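As a sketch of working with this schema, rows can be parsed with Python's standard csv module and filtered by source; the two sample rows below are invented for illustration:

```python
import csv
import io

# Two invented rows in the train_all.csv schema described above.
sample = io.StringIO(
    "sentence,label,source,split\n"
    "The service was outstanding.,positive,dynasent_r2,train\n"
    "The store opens at noon.,neutral,sst_local,train\n"
)
rows = list(csv.DictReader(sample))

# Filter to a single source dataset, e.g. DynaSent Round 2.
r2_rows = [r for r in rows if r["source"] == "dynasent_r2"]
print(len(r2_rows))  # 1
```

In practice, `io.StringIO(sample)` would be replaced with an open file handle for one of the three CSVs.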
## Dataset Creation
### Curation Rationale
The dataset was created to fine-tune models on sentiment classification. The idea was to create a diverse 3-way sentiment classification dataset with challenging reviews.
### Source Data
See Stanford Sentiment Treebank and DynaSent for details.