---
task_categories:
  - text-classification
language:
  - en
size_categories:
  - 100K<n<1M
---

NLP: Sentiment Classification Dataset

This is a bundled dataset for the NLP task of sentiment classification in English.

There is a sample project using this dataset: GURA-gru-unit-for-recognizing-affect.

Content

  • myanimelist-sts: This dataset is derived from MyAnimeList, a social networking and cataloging service for anime and manga fans. It includes user reviews with ratings, which we summarized using skip-thoughts. You can find the original source at myanimelist-comment-dataset; the version used is 2023-05-11.

  • aclImdb: The ACL IMDB dataset is a large movie review dataset collected for sentiment analysis tasks. It contains 50,000 highly polar movie reviews, split evenly into 25,000 training and 25,000 test reviews, with an equal number of positive and negative reviews in each set. You can find the source at sentiment.

  • MR: Movie Review Data (MR) contains 5,331 positive and 5,331 negative processed sentences/lines. It is suitable for binary sentiment classification tasks and is a good starting point for text classification models. You can find the source at movie-review-data, under the Sentiment scale datasets section (see the loading sketch after this list).

  • MPQA: The Multi-Perspective Question Answering (MPQA) dataset is a resource for opinion detection and sentiment analysis research. It consists of news articles from a wide variety of sources, annotated for opinions and other private states. You can get the source from MPQA.

  • SST2: The Stanford Sentiment Treebank version 2 (SST2) is a popular benchmark for sentence-level sentiment analysis. It includes movie review sentences with corresponding sentiment labels (positive or negative). You can obtain the dataset from SST2.

  • SUBJ: The Subjectivity dataset is used for sentiment analysis research. It consists of 5,000 subjective and 5,000 objective processed sentences, which can help a model distinguish between subjective and objective (factual) statements. You can find the source at movie-review-data, under the Subjectivity datasets section.
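
As a quick sanity check, the sketch below loads one of the bundled pickles and prints a few rows. It assumes, based on the tokenizer script further down, that MR.pkl (like MPQA.pkl, SST2.pkl, and SUBJ.pkl) deserializes to a pandas DataFrame with a sentence column; adjust the path and column name to the file you actually downloaded.

import pickle

# Load one sub-dataset; the DataFrame layout (a `sentence` column) is an
# assumption inferred from the tokenizer script below.
with open('./MR.pkl', 'rb') as p:
    mr = pickle.load(p)

print(type(mr))            # expected: a pandas DataFrame
print(len(mr))             # expected: 10,662 rows (5,331 positive + 5,331 negative)
print(mr.sentence.head())  # first few review sentences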

Tokenizer

from pathlib import Path
import pickle
from tensorflow.keras.preprocessing.text import Tokenizer

def check_data_path(file_path: str) -> bool:
    """Return True if the given file exists; print a status line either way."""
    if Path(file_path).exists():
        print(f'[Path][OK] {file_path}')
        return True
    print(f'[Path][FAILED] {file_path}')
    return False

# Collect raw sentences from every sub-dataset; used only to fit the tokenizer.
sentences = []

# =====================
# Anime Reviews
# =====================
dataset = './myanimelist-sts.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        X, Y = pickle.load(p)
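        # X and Y appear to be the two text fields of each pair (the review
        # and its skip-thoughts summary, per the description above); both
        # sides feed the tokenizer.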
        sentences.extend(X)
        sentences.extend(Y)


# =====================
# MPQA
# =====================
dataset = './MPQA.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        mpqa = pickle.load(p)
        sentences.extend(list(mpqa.sentence))


# =====================
# IMDB
# =====================
dataset = './aclImdb.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        x_test, y_test, x_train, y_train = pickle.load(p)
        # Fit on the review texts from both splits; the y_* arrays hold
        # sentiment labels rather than text, so they are not added here.
        sentences.extend(x_train)
        sentences.extend(x_test)

# =====================
# MR
# =====================
dataset = './MR.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        mr = pickle.load(p)
        sentences.extend(list(mr.sentence))

# =====================
# SST2
# =====================
dataset = './SST2.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        sst2 = pickle.load(p)
        sentences.extend(list(sst2.sentence))

# =====================
# SUBJ
# =====================
dataset = './SUBJ.pkl'
if check_data_path(dataset):
    with open(dataset, 'rb') as p:
        subj = pickle.load(p)
        sentences.extend(list(subj.sentence))

# Coerce every entry to str so the tokenizer can handle any non-string values.
sentences = map(str, sentences)

# Tokenize the sentences; num_words keeps only the most frequent tokens when
# converting text to sequences, and rarer words fall back to the {OOV} token.
myTokenizer = Tokenizer(
    num_words=100,
    oov_token="{OOV}"
)
myTokenizer.fit_on_texts(sentences)
print(myTokenizer.word_index)

with open('./big-tokenizer.pkl', 'wb') as p:
    pickle.dump(myTokenizer, p)
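
To reuse the fitted tokenizer elsewhere, here is a minimal sketch (not part of the original script): it reloads big-tokenizer.pkl and converts raw text into padded integer sequences. The maxlen value is an illustrative choice, not something defined by this dataset card.

import pickle
from tensorflow.keras.preprocessing.sequence import pad_sequences

with open('./big-tokenizer.pkl', 'rb') as p:
    tokenizer = pickle.load(p)

texts = ['the movie was surprisingly good', 'a dull and lifeless plot']
sequences = tokenizer.texts_to_sequences(texts)  # words outside the top num_words map to {OOV}
padded = pad_sequences(sequences, maxlen=64, padding='post')
print(padded.shape)  # (2, 64)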