---
language: en
datasets:
- imdb
---
# T5-small fine-tuned for Sentiment Analysis 🎞️👍👎
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) [small](https://huggingface.co/t5-small) fine-tuned on the [IMDB](https://huggingface.co/datasets/imdb) dataset for the **Sentiment Analysis** downstream task.
## Details of T5
The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.
![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
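To make the text-to-text framing concrete for this task (an illustrative example, not taken from the paper): both the review and its label are plain strings, so sentiment classification reduces to generating a one-word target.

```python
# Hypothetical training pair under the text-to-text framing:
# the IMDB review is the source string, the label word is the target string.
example = {
    "input_text": "An unforgettable film, beautifully shot.",  # the review
    "target_text": "positive",                                 # the label, as text
}
```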
## Details of the downstream task (Sentiment analysis) - Dataset 📚
[IMDB](https://huggingface.co/datasets/imdb)
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. It provides a set of **25,000** highly polar movie reviews for training, and **25,000** for testing.
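If you want to reproduce the setup, the dataset can be pulled straight from the Hub with the `datasets` library:

```python
from datasets import load_dataset

# Download IMDB from the Hugging Face Hub; the train/test splits match
# the 25,000 / 25,000 figures quoted above.
imdb = load_dataset("imdb")
print(imdb["train"].num_rows, imdb["test"].num_rows)  # 25000 25000
print(imdb["train"][0]["text"][:80], imdb["train"][0]["label"])
```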
## Model fine-tuning 🏋️
The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits to him!
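The notebook has the full details; as a rough sketch (an assumption, with illustrative names and lengths, not taken from the notebook) of the preprocessing step such a fine-tuning script performs:

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")

def encode_example(review: str, label_text: str):
    # Source side: the raw review; target side: the label word ("positive"/"negative").
    source = tokenizer(review, max_length=512, truncation=True,
                       padding="max_length", return_tensors="pt")
    target = tokenizer(label_text, max_length=2, truncation=True,
                       padding="max_length", return_tensors="pt")
    return {"input_ids": source.input_ids,
            "attention_mask": source.attention_mask,
            "labels": target.input_ids}
```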
## Test set metrics 🧾
|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| negative     | 0.92      | 0.93   | 0.92     | 12500   |
| positive     | 0.93      | 0.92   | 0.92     | 12500   |
| accuracy     |           |        | 0.92     | 25000   |
| macro avg    | 0.92      | 0.92   | 0.92     | 25000   |
| weighted avg | 0.92      | 0.92   | 0.92     | 25000   |
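The table has the shape of scikit-learn's `classification_report`; a minimal sketch of how such numbers can be produced (an assumption about the evaluation code, with dummy lists standing in for the 25,000 test predictions):

```python
from sklearn.metrics import classification_report

# y_true: gold labels; y_pred: the label strings generated by the model
# for each test review. Dummy lists keep the sketch self-contained.
y_true = ["negative", "positive", "positive", "negative"]
y_pred = ["negative", "positive", "negative", "negative"]
print(classification_report(y_true, y_pred, digits=2))
```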
## Model in Action 🚀
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-imdb-sentiment")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-small-finetuned-imdb-sentiment")

def get_sentiment(text):
    # T5 expects an end-of-sequence token at the end of the input
    input_ids = tokenizer.encode(text + '</s>', return_tensors='pt')
    # max_length=2: one slot for the decoder start token, one for the label
    output = model.generate(input_ids=input_ids, max_length=2)
    # Strip special tokens so only the label word remains
    dec = [tokenizer.decode(ids, skip_special_tokens=True) for ids in output]
    return dec[0].strip()

get_sentiment("I dislike a lot that film")
# Output: 'negative'
```
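For scoring several reviews at once, a straightforward batched variant of the snippet above (assumed, reusing the same tokenizer and model):

```python
# Batched inference; the T5 tokenizer appends the </s> token itself here.
reviews = ["A masterpiece from start to finish.",
           "Two hours of my life I want back."]
batch = tokenizer(reviews, padding=True, return_tensors="pt")
outputs = model.generate(**batch, max_length=2)
print([tokenizer.decode(o, skip_special_tokens=True) for o in outputs])
# Illustrative output: ['positive', 'negative']
```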
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)
> Made with <span style="color: #e25555;">&hearts;</span> in Spain