Unexpected prediction for the input "I feel lonely and hopeless. Nothing seems to bring me joy."

#3011
by richardmtp - opened

Example Case (Incorrect Prediction):
I tested the model with the following input: "I feel lonely and hopeless. Nothing seems to bring me joy."
Expected Emotion: Sadness
Model Output:
{
'Anger': 0.00085,
'Disgust': 0.00256,
'Fear': 0.00203,
'Joy': 0.00098,
'Sadness': 0.00518,
'Surprise': 0.98749 # Unexpectedly high!
}
Questions for the Community:
Why is "Surprise" predicted instead of "Sadness"?
Could this be due to the dataset the model was fine-tuned on?
Has anyone faced similar misclassifications, and how did you address them?
Would fine-tuning the model on a mental health-specific dataset improve accuracy? If so, what datasets would you recommend?
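For reproducibility, here is a minimal sketch of how such a model can be queried with the `transformers` text-classification pipeline. The model ID is a placeholder (substitute the actual checkpoint under discussion), and the `top_emotion` helper just picks the highest-scoring label from the returned distribution:

```python
# Minimal reproduction sketch. The model ID below is a placeholder --
# replace it with the actual checkpoint being discussed.

def top_emotion(scores: dict) -> str:
    """Return the label with the highest predicted probability."""
    return max(scores, key=scores.get)

# Scores reported above, reproduced here for illustration:
reported = {
    'Anger': 0.00085,
    'Disgust': 0.00256,
    'Fear': 0.00203,
    'Joy': 0.00098,
    'Sadness': 0.00518,
    'Surprise': 0.98749,
}
print(top_emotion(reported))  # -> 'Surprise'

RUN_MODEL = False  # set True to query the actual model (needs transformers + the checkpoint)
if RUN_MODEL:
    from transformers import pipeline

    classifier = pipeline(
        "text-classification",
        model="your-org/your-emotion-model",  # placeholder model ID
        top_k=None,  # return scores for every label, not just the top one
    )
    text = "I feel lonely and hopeless. Nothing seems to bring me joy."
    results = classifier(text)[0]  # list of {'label': ..., 'score': ...}
    scores = {r["label"]: r["score"] for r in results}
    print(scores, "->", top_emotion(scores))
```

If your scores differ from the ones above, it may be worth checking which revision of the checkpoint you downloaded, since the hosted weights can be updated between runs.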


Hello, have you identified the cause of this issue yet?
I tried your example today, and my results were even more disappointing.
You might want to update to the latest checkpoint and run it again to see if that helps.
