---
tags:
- generated_from_keras_callback
model-index:
- name: twitter-roberta-base-emotion-multilabel-latest
  results: []
pipeline_tag: text-classification
language:
- en
---


# twitter-roberta-base-emotion-multilabel-latest

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2021-124m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) on [SemEval 2018 - Task 1: Affect in Tweets](https://aclanthology.org/S18-1001/) (subtask E-c: multi-label emotion classification).



## Performance

The following metrics are achieved on the test split (see the evaluation sketch below):

- F1 (micro): 0.7218
- F1 (macro): 0.5746
- Jaccard Index (samples): 0.6073
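
For reference, here is a minimal sketch (not the original evaluation script) of how these multi-label metrics can be computed with scikit-learn, using toy indicator matrices in place of the real test split:

```python
import numpy as np
from sklearn.metrics import f1_score, jaccard_score

# Toy binary indicator matrices: rows = tweets, columns = emotions.
# Replace with the real gold labels and the thresholded model predictions.
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 1]])

print("F1 (micro):", f1_score(y_true, y_pred, average="micro"))
print("F1 (macro):", f1_score(y_true, y_pred, average="macro"))
print("Jaccard (samples):", jaccard_score(y_true, y_pred, average="samples"))
```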

## Usage
#### 1. [tweetnlp](https://pypi.org/project/tweetnlp/)
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in Python.
```python
import tweetnlp

model = tweetnlp.load_model('topic_classification', model_name='cardiffnlp/twitter-roberta-base-emotion-multilabel-latest')

model.predict("I am so happy and sad at the same time")

>> {'label': ['joy', 'sadness']}

```
#### 2. pipeline
Alternatively, load the model through the `transformers` pipeline. Install the required TensorFlow version first:
```shell
pip install -U tensorflow==2.10
```

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="cardiffnlp/twitter-roberta-base-emotion-multilabel-latest", return_all_scores=True)

pipe("I am so happy and sad at the same time")

>> [[{'label': 'anger', 'score': 0.0059011634439229965},
  {'label': 'anticipation', 'score': 0.024502484127879143},
  {'label': 'disgust', 'score': 0.016748998314142227},
  {'label': 'fear', 'score': 0.20184014737606049},
  {'label': 'joy', 'score': 0.9260002970695496},
  {'label': 'love', 'score': 0.13167349994182587},
  {'label': 'optimism', 'score': 0.32711178064346313},
  {'label': 'pessimism', 'score': 0.08952841907739639},
  {'label': 'sadness', 'score': 0.8542942404747009},
  {'label': 'surprise', 'score': 0.059213291853666306},
  {'label': 'trust', 'score': 0.01618659868836403}]]

```
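
The pipeline returns an independent score for each of the 11 emotions. As a small follow-up (the 0.5 cut-off is an illustrative assumption, not part of the model card), the scores can be thresholded to obtain a multi-label prediction:

```python
from transformers import pipeline

pipe = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-emotion-multilabel-latest",
    return_all_scores=True,
)

scores = pipe("I am so happy and sad at the same time")[0]

# Keep every emotion whose score exceeds the (illustrative) 0.5 threshold.
predicted = [item["label"] for item in scores if item["score"] > 0.5]
print(predicted)  # e.g. ['joy', 'sadness']
```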


## Reference
```bibtex
@inproceedings{camacho-collados-etal-2022-tweetnlp,
    title = "{T}weet{NLP}: {C}utting-{E}dge {N}atural {L}anguage {P}rocessing for {S}ocial {M}edia",
    author = "Camacho-Collados, Jose and Rezaee, Kiamehr and Riahi, Talayeh and Ushio, Asahi and Loureiro, Daniel and Antypas, Dimosthenis and Boisson, Joanne and Espinosa-Anke, Luis and Liu, Fangyu and Mart{\'\i}nez-C{\'a}mara, Eugenio and others",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = nov,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}

```