Model Description

This model, bdotloh/distilbert-base-uncased-empathetic-dialogues-context, is a distilbert-base-uncased checkpoint fine-tuned to classify the (emotional) context of dialogues in the EmpatheticDialogues dataset.
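For reference, a minimal inference sketch using the transformers pipeline API; the model id is taken from this card, and the returned labels are assumed to be the EmpatheticDialogues emotion contexts:

```python
from transformers import pipeline

# Load the fine-tuned context classifier from the Hugging Face Hub.
classifier = pipeline(
    "text-classification",
    model="bdotloh/distilbert-base-uncased-empathetic-dialogues-context",
    top_k=5,  # return the five highest-scoring emotion contexts
)

# Labels are assumed to be EmpatheticDialogues contexts (e.g. "proud", "afraid").
print(classifier("I finally got the job I had been hoping for!"))
```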

Limitations and bias

EmpatheticDialogues:

  1. The degree of cultural specificity in the context a respondent described for a given emotion label cannot be ascertained (i.e., p(description | emotion, culture) is unknown)
  2. ...

Training data

See the EmpatheticDialogues dataset.
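As a pointer, a minimal sketch of loading the original EmpatheticDialogues corpus with the datasets library; the dataset id and field names below refer to the public Hub release and are assumptions about the exact copy used for fine-tuning:

```python
from datasets import load_dataset

# Original EmpatheticDialogues release on the Hugging Face Hub:
# "context" is the emotion label, "prompt" the situation description.
ds = load_dataset("empathetic_dialogues")
print(ds["train"][0]["context"], "->", ds["train"][0]["prompt"])
```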

Training procedure

Preprocessing
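As an illustration only (the field name, truncation, and max_length below are assumptions rather than the recorded configuration), a typical DistilBERT setup tokenizes the situation descriptions with the distilbert-base-uncased tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def preprocess(batch):
    # "prompt" holds the free-text situation description; the emotion
    # "context" would serve as the classification label.
    return tokenizer(batch["prompt"], truncation=True, max_length=128)
```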

Evaluation results

Test results

Top-1 accuracy: 53.4
Top-5 accuracy: 86.1
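Top-k accuracy counts a prediction as correct when the true context is among the k highest-scoring classes; a minimal sketch of the computation (array names are illustrative):

```python
import numpy as np

def top_k_accuracy(logits, labels, k):
    # logits: (n_examples, n_classes); labels: (n_examples,) integer ids.
    top_k_preds = np.argsort(logits, axis=-1)[:, -k:]
    hits = (top_k_preds == labels[:, None]).any(axis=-1)
    return hits.mean()

# top_k_accuracy(test_logits, test_labels, k=1) and k=5 correspond to
# the top-1 and top-5 figures reported above.
```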
