
# T5-base fine-tuned for Sentiment Analysis πŸ‘πŸ‘Ž

Google's T5 base fine-tuned on the SST-2 dataset for the sentiment analysis downstream task.

## Details of T5

The T5 model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu.

## Model fine-tuning πŸ‹οΈ

The model was fine-tuned for 10 epochs with standard hyperparameters.
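
The card does not report the training configuration beyond the epoch count, but a minimal sketch of such a fine-tuning setup with πŸ€— Transformers' `Seq2SeqTrainer` could look as follows. The batch size, learning rate, and the single-character `'p'`/`'n'` targets are assumptions made for illustration, not the author's recorded settings:

```python
from datasets import load_dataset
from transformers import (
    T5Tokenizer,
    T5ForConditionalGeneration,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# SST-2 from the GLUE benchmark: sentences labeled 0 (negative) / 1 (positive)
dataset = load_dataset("glue", "sst2")
label_map = {0: "n", 1: "p"}  # assumed to match the card's 'p'/'n' outputs

def preprocess(batch):
    # Cast classification into T5's text-to-text format with a task prefix
    inputs = tokenizer(
        ["sentiment: " + s for s in batch["sentence"]],
        max_length=128, truncation=True,
    )
    targets = tokenizer(text_target=[label_map[l] for l in batch["label"]])
    inputs["labels"] = targets["input_ids"]
    return inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-sst2-sentiment",
    num_train_epochs=10,             # as stated in the card
    per_device_train_batch_size=16,  # assumed; not reported in the card
    learning_rate=3e-4,              # assumed; a common T5 fine-tuning value
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```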

## Validation set metrics 🧾

|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| negative     | 0.95      | 0.95   | 0.95     | 428     |
| positive     | 0.94      | 0.96   | 0.95     | 444     |
| accuracy     |           |        | 0.95     | 872     |
| macro avg    | 0.95      | 0.95   | 0.95     | 872     |
| weighted avg | 0.95      | 0.95   | 0.95     | 872     |
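
Numbers like these could be reproduced by running the model over the SST-2 validation split (872 examples, matching the support column) and scoring the decoded outputs with scikit-learn's `classification_report`. A minimal sketch, assuming the `datasets` and `scikit-learn` packages and mapping the model's `'p'`/`'n'` outputs back to label names:

```python
from datasets import load_dataset
from sklearn.metrics import classification_report
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_id = "michelecafagna26/t5-base-finetuned-sst2-sentiment"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

val = load_dataset("glue", "sst2", split="validation")  # 872 examples
label_names = {0: "negative", 1: "positive"}

preds, refs = [], []
for example in val:
    inputs = tokenizer("sentiment: " + example["sentence"], max_length=128,
                       truncation=True, return_tensors="pt").input_ids
    out = model.generate(inputs)
    decoded = tokenizer.batch_decode(out, skip_special_tokens=True)[0]
    preds.append("positive" if decoded.startswith("p") else "negative")
    refs.append(label_names[example["label"]])

print(classification_report(refs, preds, digits=2))
```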

## Model in Action πŸš€

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("michelecafagna26/t5-base-finetuned-sst2-sentiment")
model = T5ForConditionalGeneration.from_pretrained("michelecafagna26/t5-base-finetuned-sst2-sentiment")

def get_sentiment(text):
    # Prepend the "sentiment:" task prefix used during fine-tuning, then tokenize
    inputs = tokenizer("sentiment: " + text, max_length=128,
                       truncation=True, return_tensors="pt").input_ids
    # Generate the label as text and decode it back to a string
    preds = model.generate(inputs)
    decoded_preds = tokenizer.batch_decode(sequences=preds, skip_special_tokens=True)
    return decoded_preds

get_sentiment("This movie is awesome")
# Labels are 'p' for 'positive' and 'n' for 'negative'
# Output: ['p']
```
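
For multiple inputs, the same model can be called on a batch. A hypothetical variant of the helper above (not part of the original card), with padding so the sentences share one tensor:

```python
# Hypothetical batched variant; second example sentence is illustrative
texts = ["This movie is awesome", "The plot was dull and predictable"]
batch = tokenizer(["sentiment: " + t for t in texts], max_length=128,
                  truncation=True, padding=True, return_tensors="pt")
preds = model.generate(batch.input_ids, attention_mask=batch.attention_mask)
print(tokenizer.batch_decode(preds, skip_special_tokens=True))
# Expected to print something like ['p', 'n'] under the card's label scheme
```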

This model card is based on [mrm8488/t5-base-finetuned-imdb-sentiment](https://huggingface.co/mrm8488/t5-base-finetuned-imdb-sentiment) by Manuel Romero (@mrm8488).
