---
language: en
datasets:
- emotion
---

# T5-small fine-tuned for Emotion Recognition 😂😢😡😃😯

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) [small](https://huggingface.co/t5-small) fine-tuned on the [emotion recognition](https://github.com/dair-ai/emotion_dataset) dataset for the **Emotion Recognition** downstream task.

## Details of T5

The **T5** model was presented in [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) by *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*. Here is the abstract:

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://i.imgur.com/jVFMMWR.png)
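
For this model, that text-to-text framing means emotion classification reduces to mapping a string to a string: the model generates the emotion word itself as its output. A tiny illustration (the example pairs below are hypothetical, merely in the style of the dataset):

```python
# In the text-to-text framing, classification is plain string-to-string:
# the target sequence is simply the emotion word.
pairs = [
    ("i have a feeling i kinda lost my best friend", "sadness"),
    ("i am so excited about the release tomorrow", "joy"),
]
for text, target in pairs:
    print(f"input: {text!r} -> target: {target!r}")
```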

## Details of the downstream task (Emotion Recognition) - Dataset 📚

[Elvis Saravia](https://twitter.com/omarsar0) has gathered a great [dataset](https://github.com/dair-ai/emotion_dataset) for emotion recognition. It allows you to classify text into one of the following **6** emotions:

- sadness 😢
- joy 😃
- love 🥰
- anger 😡
- fear 😱
- surprise 😯
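
To explore the data yourself, it is also available on the Hugging Face Hub; a quick look, assuming the hub id `emotion` declared in this card's metadata:

```python
from datasets import load_dataset

# Load the emotion dataset from the Hugging Face Hub.
dataset = load_dataset("emotion")

print(dataset)               # train / validation / test splits
print(dataset["train"][0])   # {'text': ..., 'label': ...}
```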

## Model fine-tuning 🏋️

The training script is a slightly modified version of [this Colab Notebook](https://github.com/patil-suraj/exploring-T5/blob/master/t5_fine_tuning.ipynb) created by [Suraj Patil](https://github.com/patil-suraj), so all credits go to him!
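
The core idea in that notebook is standard seq2seq fine-tuning: the text is tokenized as the encoder input and the emotion word as the decoder target. A minimal sketch of a single training step under that assumption (the real script adds batching, an optimizer, and a learning-rate schedule):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Encoder input is the raw text; the target is the emotion word itself.
inputs = tokenizer("i have a feeling i kinda lost my best friend",
                   return_tensors="pt")
labels = tokenizer("sadness", return_tensors="pt").input_ids

# Passing labels makes the model return the cross-entropy loss directly.
loss = model(input_ids=inputs.input_ids,
             attention_mask=inputs.attention_mask,
             labels=labels).loss
loss.backward()  # optimizer.step() would follow in a full training loop
```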

## Test set metrics 🧾

|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| anger        | 0.92      | 0.93   | 0.92     | 275     |
| fear         | 0.90      | 0.90   | 0.90     | 224     |
| joy          | 0.97      | 0.91   | 0.94     | 695     |
| love         | 0.75      | 0.89   | 0.82     | 159     |
| sadness      | 0.96      | 0.97   | 0.96     | 581     |
| surprise     | 0.73      | 0.80   | 0.76     | 66      |
| accuracy     |           |        | 0.92     | 2000    |
| macro avg    | 0.87      | 0.90   | 0.88     | 2000    |
| weighted avg | 0.93      | 0.92   | 0.92     | 2000    |
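
The layout of this table matches scikit-learn's `classification_report`; a sketch of how such a report can be produced from label strings (the toy lists below merely stand in for the real 2,000-example test split):

```python
from sklearn.metrics import classification_report

# Toy stand-ins: in practice these are the gold test labels and the
# strings returned by get_emotion() for each test example.
gold = ["joy", "sadness", "anger", "joy", "fear"]
pred = ["joy", "sadness", "fear", "joy", "fear"]

print(classification_report(gold, pred, digits=2))
```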

Confusion Matrix

![CM](https://i.imgur.com/JBtAwPx.png)

## Model in Action 🚀

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-small-finetuned-emotion")

# AutoModelWithLMHead is deprecated; AutoModelForSeq2SeqLM is the
# current equivalent for encoder-decoder models such as T5.
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/t5-small-finetuned-emotion")

def get_emotion(text):
    # Recent tokenizers append the </s> end-of-sequence token automatically;
    # it is added explicitly here for compatibility with older versions.
    input_ids = tokenizer.encode(text + '</s>', return_tensors='pt')

    # The label is a single word, so two generated tokens suffice.
    output = model.generate(input_ids=input_ids, max_length=2)

    # skip_special_tokens drops the leading <pad> token from the prediction.
    dec = [tokenizer.decode(ids, skip_special_tokens=True) for ids in output]
    return dec[0].strip()

get_emotion("i feel as if i havent blogged in ages are at least truly blogged i am doing an update cute")  # Output: 'joy'

get_emotion("i have a feeling i kinda lost my best friend")  # Output: 'sadness'
```
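
For quick experiments, the same model should also work through the high-level `pipeline` API (a sketch; the `text2text-generation` pipeline wraps the tokenizer/model pair used above):

```python
from transformers import pipeline

nlp = pipeline("text2text-generation", model="mrm8488/t5-small-finetuned-emotion")

result = nlp("i have a feeling i kinda lost my best friend")
print(result[0]["generated_text"])  # expected: 'sadness'
```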

> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain