
T5-base fine-tuned for Sentiment Analysis 🎞️👍👎

Google's T5 base fine-tuned on the IMDB dataset for the Sentiment Analysis downstream task.

Details of T5

The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Here is the abstract:

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.
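
Concretely, the text-to-text framing means this model treats sentiment classification as string-in, string-out: the review is the input sequence and the literal word "positive" or "negative" is the target sequence. A minimal sketch (the example pairs below are illustrative, not taken from the training data):

# Sentiment classification in T5's text-to-text format: both sides are plain strings.
examples = [
    ("A gripping, beautifully shot film.", "positive"),          # illustrative pair
    ("Two hours of my life I will never get back.", "negative"),  # illustrative pair
]

for source_text, target_text in examples:
    # The encoder reads the review; the decoder is trained to emit the label word.
    print(f"input: {source_text}  ->  target: {target_text}")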


Details of the downstream task (Sentiment analysis) - Dataset 📚

IMDB

This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing.
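
If you want to inspect the data yourself, the corpus is available through the Hugging Face datasets library; a minimal sketch, assuming the standard imdb configuration with its text and label columns:

from datasets import load_dataset

# The standard 'imdb' config exposes 'train' and 'test' splits
# of 25,000 reviews each, labeled 0 (negative) or 1 (positive).
imdb = load_dataset("imdb")

print(imdb["train"][0]["text"][:200])  # first 200 characters of a review
print(imdb["train"][0]["label"])       # 0 = negative, 1 = positive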

Model fine-tuning 🏋️

The training script is a slightly modified version of this Colab Notebook created by Suraj Patil, so all credit goes to him!
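
The notebook has the full details; as a rough sketch of the core idea only (not the author's exact script), fine-tuning T5 for this task boils down to mapping the integer IMDB labels to the target words "negative"/"positive" and training with the usual seq2seq cross-entropy loss:

from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

label_words = {0: "negative", 1: "positive"}

def training_step(batch_texts, batch_labels):
    # The reviews become the encoder input...
    inputs = tokenizer(batch_texts, padding=True, truncation=True,
                       max_length=512, return_tensors="pt")
    # ...and the label *words* become the decoder target sequence.
    targets = tokenizer([label_words[l] for l in batch_labels],
                        padding=True, return_tensors="pt")
    labels = targets.input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # mask padding in the loss
    out = model(input_ids=inputs.input_ids,
                attention_mask=inputs.attention_mask,
                labels=labels)
    return out.loss  # standard seq2seq cross-entropy

loss = training_step(["What a wonderful movie!"], [1])
loss.backward()  # hook this into any optimizer/scheduler loop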

Test set metrics 🧾

|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| negative     | 0.95      | 0.95   | 0.95     | 12500   |
| positive     | 0.95      | 0.95   | 0.95     | 12500   |
| accuracy     |           |        | 0.95     | 25000   |
| macro avg    | 0.95      | 0.95   | 0.95     | 25000   |
| weighted avg | 0.95      | 0.95   | 0.95     | 25000   |
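
This layout matches what sklearn's classification_report prints; a sketch of how such a table could be reproduced, assuming y_true/y_pred hold the gold labels and the strings returned by the get_sentiment function shown below over the 25,000 test reviews:

from sklearn.metrics import classification_report

# Hypothetical stand-ins: in practice these would be the 25,000 gold test
# labels and the model's predicted label words.
y_true = ["negative", "positive", "positive", "negative"]
y_pred = ["negative", "positive", "negative", "negative"]

print(classification_report(y_true, y_pred, digits=2))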

Model in Action 🚀

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM  # AutoModelWithLMHead is deprecated for T5

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-imdb-sentiment")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-base-finetuned-imdb-sentiment")

def get_sentiment(text):
    # Append the EOS token explicitly, as the original training script did.
    input_ids = tokenizer.encode(text + '</s>', return_tensors='pt')

    # max_length=2 leaves room for exactly one generated token
    # after the decoder start token: the label word.
    output = model.generate(input_ids=input_ids, max_length=2)

    # skip_special_tokens=True strips the pad/decoder-start token,
    # leaving only 'positive' or 'negative'.
    return tokenizer.decode(output[0], skip_special_tokens=True)

get_sentiment("I dislike a lot that film")

# Output: 'negative'
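
The same model can also be queried through the transformers text2text-generation pipeline; an alternative to the snippet above, not the card's original code:

from transformers import pipeline

sentiment = pipeline("text2text-generation",
                     model="mrm8488/t5-base-finetuned-imdb-sentiment")

print(sentiment("I dislike a lot that film")[0]["generated_text"])
# Expected output: 'negative'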

Created by Manuel Romero/@mrm8488 | LinkedIn

Made with ♥ in Spain
