---
license: apache-2.0
language: en
datasets:
- sst2
metrics:
- precision
- recall
- f1
tags:
- text-classification
---

# GPT-2 medium fine-tuned for Sentiment Analysis 👍👎

[OpenAI's GPT-2](https://openai.com/blog/tags/gpt-2/) medium fine-tuned on the [SST-2](https://huggingface.co/datasets/sst2) dataset for the **Sentiment Analysis** downstream task.

> This model card is based on "mrm8488/t5-base-finetuned-imdb-sentiment" by Manuel Romero/@mrm8488.

## Details of GPT-2

The **GPT-2** model was presented in [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) by *Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever*.

## Model fine-tuning 🏋️‍

The model was fine-tuned for 10 epochs with standard hyperparameters (see the sketch at the end of this card).

## Val set metrics 🧾

|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| negative     | 0.92      | 0.92   | 0.92     | 428     |
| positive     | 0.92      | 0.93   | 0.92     | 444     |
| accuracy     |           |        | 0.92     | 872     |
| macro avg    | 0.92      | 0.92   | 0.92     | 872     |
| weighted avg | 0.92      | 0.92   | 0.92     | 872     |

## Model in Action 🚀

```python
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

tokenizer = GPT2Tokenizer.from_pretrained("michelecafagna26/gpt2-medium-finetuned-sst2-sentiment")
model = GPT2ForSequenceClassification.from_pretrained("michelecafagna26/gpt2-medium-finetuned-sst2-sentiment")

inputs = tokenizer("I love it", return_tensors="pt")

# 1: positive, 0: negative
model(**inputs).logits.argmax(dim=-1)

# Output: tensor([1])
```
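For quick checks, the same checkpoint can also be queried through the `pipeline` API. This is a convenience sketch; the label names shown assume the default `id2label` mapping (`LABEL_0`/`LABEL_1`) rather than anything documented for this model.

```python
from transformers import pipeline

# Convenience inference via the text-classification pipeline.
# Assumption: the config keeps the default id2label mapping,
# i.e. LABEL_0 = negative, LABEL_1 = positive.
classifier = pipeline(
    "text-classification",
    model="michelecafagna26/gpt2-medium-finetuned-sst2-sentiment",
)

print(classifier("I love it"))
# e.g. [{'label': 'LABEL_1', 'score': ...}]
```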
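## Fine-tuning sketch

The exact training setup is not documented beyond the 10 epochs noted above. The script below is a minimal sketch assuming the Hugging Face `Trainer`, the `sst2` dataset, and placeholder hyperparameters (batch size, defaults for the rest); it is not the original training code for this checkpoint.

```python
from datasets import load_dataset
from transformers import (
    GPT2Tokenizer,
    GPT2ForSequenceClassification,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("sst2")

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

def tokenize(batch):
    return tokenizer(
        batch["sentence"], truncation=True, padding="max_length", max_length=128
    )

dataset = dataset.map(tokenize, batched=True)

model = GPT2ForSequenceClassification.from_pretrained("gpt2-medium", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id  # required for batched inputs

training_args = TrainingArguments(
    output_dir="gpt2-medium-sst2",   # hypothetical output path
    num_train_epochs=10,             # matches the 10 epochs reported above
    per_device_train_batch_size=8,   # assumption: not documented
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```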