
EmoMosaic-large

EmoMosaic-large is a model for classifying emotions in text that demonstrates strong performance across multiple domains. It outperforms recent state-of-the-art models on the SemEval-2018 Task 1: Affect in Tweets and GoEmotions datasets. The model was developed as part of my master's thesis.

Author

Author: Bc. Vít Tlustoš (tlustos.vit@gmail.com)

Supervisor: doc. Malik Aamir Saeed, Ph.D.

Thesis Text

Once the thesis has been defended, the text will be accessible at https://www.vut.cz/studenti/zav-prace/detail/153407. You are welcome to read it. Should you have any questions, please don't hesitate to contact me via the provided email.

Demo Application

As part of the solution, we developed a Gradio application and deployed it on the Hugging Face Spaces platform. Once the thesis is made public, you can access it at: https://huggingface.co/spaces/vtlustos/EmoMosaic-space. This allows anyone to experiment with the models easily, without requiring any technical skills or setup.

Models

To utilize these models within your application, first install all the necessary dependencies.

pip install torch transformers datasets

To utilize these models within your application, integrate the following code and format your samples as context</s><s>sentence. The context is optional and represents the sentences preceding the sentence to be classified, while sentence refers to the actual sentence undergoing classification. This example demonstrates how to use the EmoMosaic-base model. If you prefer its larger counterpart, replace vtlustos/EmoMosaic-base with vtlustos/EmoMosaic-large.

import torch
from transformers import RobertaTokenizer
from transformers import RobertaForSequenceClassification

# 1. initialize the model
tokenizer = RobertaTokenizer.from_pretrained(
    "vtlustos/EmoMosaic-base"
)
model = RobertaForSequenceClassification.from_pretrained(
    "vtlustos/EmoMosaic-base"
).to('cuda:0')  # use 'cpu' instead if no GPU is available

# 2. tokenize the sentences
tokens = tokenizer(
    [
        "All your work was lost when the computer crashed.</s><s>Oh my god. I spent a whole week on that."
    ],
    truncation=True,
    padding=True,
    return_tensors="pt"
)

# 3. make the prediction
with torch.no_grad():
    logits = model(
        tokens["input_ids"].to('cuda:0'), 
        tokens["attention_mask"].to('cuda:0')
    ).logits

# 4. convert to probabilities
preds = torch.sigmoid(logits)

print(preds)
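The context</s><s>sentence input format used above can be assembled with a small helper. This is an illustrative sketch; the name format_sample is ours and not part of the released code:

```python
def format_sample(sentence, context=None):
    """Join an optional context with the target sentence using RoBERTa's
    separator tokens, producing the context</s><s>sentence format."""
    if context:
        return f"{context}</s><s>{sentence}"
    # context is optional: a lone sentence is passed through unchanged
    return sentence

print(format_sample(
    "Oh my god. I spent a whole week on that.",
    context="All your work was lost when the computer crashed.",
))
```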

After executing the code, you will receive a tensor with dimensions [S, E], where S represents the number of samples and E denotes the number of emotions. To associate individual probabilities with their respective emotions, use the dictionary provided below:

ix2label = {
    "0": "admiration",
    "1": "amusement",
    "2": "anger",
    "3": "annoyance",
    "4": "anticipation",
    "5": "approval",
    "6": "caring",
    "7": "confusion",
    "8": "curiosity",
    "9": "desire",
    "10": "disappointment",
    "11": "disapproval",
    "12": "disgust",
    "13": "embarrassment",
    "14": "excitement",
    "15": "fear",
    "16": "gratitude",
    "17": "grief",
    "18": "happiness",
    "19": "joy",
    "20": "love",
    "21": "nervousness",
    "22": "optimism",
    "23": "pessimism",
    "24": "pride",
    "25": "realization",
    "26": "relief",
    "27": "remorse",
    "28": "sadness",
    "29": "surprise",
    "30": "trust"
}
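Since the outputs are independent per-emotion probabilities, a sample can carry several labels at once. One way to turn a probability row into a label set is to threshold it and look the surviving indices up in ix2label. A minimal sketch; the decode helper and the 0.5 threshold are our choices, not part of the released model:

```python
def decode(prob_rows, ix2label, threshold=0.5):
    """Map each row of an [S, E] probability matrix (e.g. preds.tolist())
    to the emotion labels whose probability exceeds the threshold."""
    return [
        [ix2label[str(i)] for i, p in enumerate(row) if p > threshold]
        for row in prob_rows
    ]

# toy example with a three-emotion subset of the dictionary
toy = {"0": "anger", "1": "joy", "2": "sadness"}
print(decode([[0.1, 0.8, 0.6]], toy))  # [['joy', 'sadness']]
```

In practice the threshold can be tuned per application, or even per emotion, to trade precision against recall.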

Results

Here we present a brief overview of the results. For an in-depth analysis and discussion, please refer to the text of the thesis. The analysis covers model training, comparisons with other methods, assessments of performance at the level of individual categories, calibration, and qualitative evaluations across various scenarios.

SemEval-2018 Task 1: Affect in Tweets

| Model | Accuracy | P (macro) | R (macro) | F1 (macro) | P (micro) | R (micro) | F1 (micro) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| EmoMosaic-base | 20.65 | 54.96 | 62.58 | 58.44 | 64.63 | 73.62 | 68.83 |
| EmoMosaic-large | 22.49 | 57.97 | 64.12 | 60.72 | 67.44 | 75.27 | 71.14 |

Note: P and R denote precision and recall, respectively. Results are shown for our two top-performing models measured on the test set of the SemEval-2018 Task 1: Affect in Tweets dataset.

GoEmotions

| Model | Accuracy | P (macro) | R (macro) | F1 (macro) | P (micro) | R (micro) | F1 (micro) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| EmoMosaic-base | 46.47 | 51.41 | 57.81 | 53.72 | 52.70 | 62.53 | 57.19 |
| EmoMosaic-large | 46.67 | 51.35 | 58.34 | 53.93 | 52.86 | 63.39 | 57.65 |

Note: P and R denote precision and recall, respectively. Results are shown for our two top-performing models measured on the test set of the GoEmotions dataset.

XED

| Model | Accuracy | P (macro) | R (macro) | F1 (macro) | P (micro) | R (micro) | F1 (micro) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| EmoMosaic-base | 51.78 | 48.47 | 63.00 | 54.67 | 48.62 | 63.86 | 55.21 |
| EmoMosaic-large | 52.59 | 50.35 | 66.54 | 57.19 | 50.43 | 67.43 | 57.70 |

Note: P and R denote precision and recall, respectively. Results are shown for our two top-performing models measured on the test set of the XED dataset.

DailyDialog

| Model | Accuracy | P (macro) | R (macro) | F1 (macro) | P (micro) | R (micro) | F1 (micro) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| EmoMosaic-base | 84.85 | 46.34 | 49.60 | 46.94 | 53.44 | 64.81 | 58.57 |
| EmoMosaic-large | 85.05 | 47.20 | 53.80 | 49.65 | 54.24 | 68.77 | 60.65 |

Note: P and R denote precision and recall, respectively. Results are shown for our two top-performing models measured on the test set of the DailyDialog dataset.

Per-Emotion Performance

EmoMosaic-base

| Emotion | Precision | Recall | F1 |
| --- | --- | --- | --- |
| admiration | 63.82 | 80.16 | 71.06 |
| amusement | 74.11 | 94.32 | 83.00 |
| anger | 63.46 | 74.08 | 68.36 |
| annoyance | 35.15 | 44.37 | 39.23 |
| anticipation | 39.09 | 55.15 | 45.75 |
| approval | 43.40 | 45.87 | 44.60 |
| caring | 45.67 | 42.96 | 44.27 |
| confusion | 36.10 | 56.86 | 44.16 |
| curiosity | 48.48 | 67.25 | 56.34 |
| desire | 53.09 | 51.81 | 52.44 |
| disappointment | 35.57 | 35.10 | 35.33 |
| disapproval | 40.00 | 49.44 | 44.22 |
| disgust | 62.05 | 71.31 | 66.36 |
| embarrassment | 57.69 | 40.54 | 47.62 |
| excitement | 37.40 | 44.66 | 40.71 |
| fear | 61.93 | 68.69 | 65.13 |
| gratitude | 93.29 | 90.91 | 92.09 |
| grief | 66.67 | 66.67 | 66.67 |
| happiness | 58.10 | 70.76 | 63.81 |
| joy | 73.43 | 81.18 | 77.11 |
| love | 64.95 | 73.74 | 69.07 |
| nervousness | 33.33 | 43.48 | 37.74 |
| optimism | 64.33 | 76.00 | 69.68 |
| pessimism | 42.31 | 52.80 | 46.98 |
| pride | 66.67 | 37.50 | 48.00 |
| realization | 32.71 | 24.14 | 27.78 |
| relief | 55.56 | 45.45 | 50.00 |
| remorse | 55.56 | 89.29 | 68.49 |
| sadness | 58.65 | 70.14 | 63.88 |
| surprise | 40.02 | 51.29 | 44.96 |
| trust | 35.33 | 47.01 | 40.34 |

EmoMosaic-large

| Emotion | Precision | Recall | F1 |
| --- | --- | --- | --- |
| admiration | 65.25 | 79.37 | 71.62 |
| amusement | 73.87 | 93.18 | 82.41 |
| anger | 64.29 | 76.00 | 69.66 |
| annoyance | 33.81 | 44.06 | 38.26 |
| anticipation | 42.10 | 57.99 | 48.78 |
| approval | 42.66 | 44.73 | 43.67 |
| caring | 40.26 | 45.93 | 42.91 |
| confusion | 38.76 | 52.94 | 44.75 |
| curiosity | 48.40 | 74.65 | 58.73 |
| desire | 65.08 | 49.40 | 56.16 |
| disappointment | 34.36 | 37.09 | 35.67 |
| disapproval | 39.14 | 47.94 | 43.10 |
| disgust | 63.62 | 72.30 | 67.68 |
| embarrassment | 58.33 | 37.84 | 45.90 |
| excitement | 39.82 | 43.69 | 41.67 |
| fear | 64.22 | 71.24 | 67.55 |
| gratitude | 91.01 | 92.05 | 91.53 |
| grief | 66.67 | 66.67 | 66.67 |
| happiness | 58.21 | 75.23 | 65.63 |
| joy | 74.55 | 83.53 | 78.78 |
| love | 64.13 | 76.13 | 69.62 |
| nervousness | 42.86 | 39.13 | 40.91 |
| optimism | 66.98 | 79.38 | 72.66 |
| pessimism | 43.66 | 47.73 | 45.61 |
| pride | 63.64 | 43.75 | 51.85 |
| realization | 34.29 | 24.83 | 28.80 |
| relief | 33.33 | 36.36 | 34.78 |
| remorse | 57.78 | 92.86 | 71.23 |
| sadness | 61.08 | 72.67 | 66.37 |
| surprise | 44.02 | 55.67 | 49.16 |
| trust | 40.59 | 48.26 | 44.09 |