michelecafagna26 committed on
Commit 1348a11 • 1 Parent(s): e895b50

Update README.md

Files changed (1)
  1. README.md +52 -0
README.md CHANGED
---
license: apache-2.0
language: en
datasets:
- sst2
---

# T5-base fine-tuned for Sentiment Analysis 👍👎

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) base fine-tuned on the [SST-2](https://huggingface.co/datasets/sst2) dataset for the **Sentiment Analysis** downstream task.

## Details of T5

The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*.

## Model fine-tuning 🏋️

The model was fine-tuned for 10 epochs with standard hyperparameters.

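The exact training script and hyperparameters are not included in this card. Below is a minimal sketch of how such a run could look with the Hugging Face `Seq2SeqTrainer`, treating SST-2 as a text-to-text task; only the 10 epochs come from the card, while the batch size, learning rate, and output directory are illustrative assumptions.

```python
# Hypothetical fine-tuning sketch; only "10 epochs" comes from the card,
# the remaining hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments, T5ForConditionalGeneration,
                          T5Tokenizer)

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

dataset = load_dataset("sst2")        # columns: "sentence", "label" (0 = negative, 1 = positive)
label_words = ["negative", "positive"]

def preprocess(batch):
    # T5 is text-to-text: prefix the input and use the label word as the target sequence.
    model_inputs = tokenizer(["sentiment: " + s for s in batch["sentence"]],
                             max_length=128, truncation=True)
    targets = tokenizer([label_words[l] for l in batch["label"]])
    model_inputs["labels"] = targets["input_ids"]
    return model_inputs

train_ds = dataset["train"].map(preprocess, batched=True,
                                remove_columns=dataset["train"].column_names)
val_ds = dataset["validation"].map(preprocess, batched=True,
                                   remove_columns=dataset["validation"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="t5-finetune-sst2",
    num_train_epochs=10,              # as stated in the card
    per_device_train_batch_size=16,   # assumption
    learning_rate=3e-4,               # assumption
    evaluation_strategy="epoch",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=val_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```
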
## Val set metrics 🧾

|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| negative     | 1.00      | 1.00   | 1.00     | 428     |
| positive     | 1.00      | 1.00   | 1.00     | 444     |
| accuracy     |           |        | 1.00     | 872     |
| macro avg    | 1.00      | 1.00   | 1.00     | 872     |
| weighted avg | 1.00      | 1.00   | 1.00     | 872     |

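The table follows the layout of scikit-learn's `classification_report`. A minimal sketch of how such validation metrics could be reproduced, assuming the checkpoint name from the snippet below and the SST-2 validation split:

```python
# Hypothetical evaluation sketch: generate a label word for each SST-2 validation
# example and score the predictions with scikit-learn (checkpoint name and the
# per-example loop are assumptions, not the card's original evaluation script).
from datasets import load_dataset
from sklearn.metrics import classification_report
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-finetune-sst2")
model = T5ForConditionalGeneration.from_pretrained("t5-finetune-sst2")

val = load_dataset("sst2", split="validation")   # 872 examples: 428 negative, 444 positive
label_words = ["negative", "positive"]

preds = []
for example in val:
    inputs = tokenizer("sentiment: " + example["sentence"],
                       max_length=128, truncation=True, return_tensors="pt").input_ids
    out = model.generate(inputs)
    preds.append(tokenizer.batch_decode(out, skip_special_tokens=True)[0])

print(classification_report([label_words[l] for l in val["label"]], preds))
```
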
## Model in Action 🚀

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-finetune-sst2")
model = T5ForConditionalGeneration.from_pretrained("t5-finetune-sst2")

def get_sentiment(text):
    # Prepend the same "sentiment:" task prefix used during fine-tuning and tokenize.
    inputs = tokenizer("sentiment: " + text, max_length=128, truncation=True, return_tensors="pt").input_ids
    # The model generates the label itself as text ("positive" or "negative").
    preds = model.generate(inputs)
    decoded_preds = tokenizer.batch_decode(sequences=preds, skip_special_tokens=True)

    return decoded_preds

get_sentiment("This movie is awesome")

# Output: ['positive']
```

> This model card is based on "mrm8488/t5-base-finetuned-imdb-sentiment" by Manuel Romero/@mrm8488