
Sarcasm Detection Model for Customer Reviews

For access to the synthetic dataset used for training, please contact: deniz.bilgin@uni-konstanz.de.

Model Description

This model is a fine-tuned version of microsoft/mdeberta-v3-base designed specifically to detect sarcasm in customer reviews. It is optimized to classify text as either sarcastic or nonsarcastic, helping users better understand the intent behind seemingly positive customer feedback that may, in fact, be sarcastic. This model is intended to complement a sentiment analysis model, which can first filter out negative comments, allowing sarcasm detection to focus on positive comments that may contain hidden criticism or irony.

  • Model Type: mDeBERTa-v3 (microsoft/mdeberta-v3-base, fine-tuned for sarcasm detection)
  • Language: English
  • Labels: nonsarcasm, sarcasm
  • Intended Use: Customer reviews, specifically to distinguish between genuinely positive feedback and sarcastic remarks in positive comments.
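The model can be loaded for inference with the Hugging Face transformers pipeline. A minimal sketch, assuming the repository id shown on this page; the exact label strings returned depend on the model's config:

```python
from transformers import pipeline

# Load the fine-tuned sarcasm classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="dnzblgn/Sarcasm-Detection-Customer-Reviews",
)

# Classify a seemingly positive review
result = classifier("Oh great, another update that breaks everything. Love it!")[0]
print(result["label"], round(result["score"], 3))
```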

Use Cases and Recommendations

Suggested Workflow

  1. Sentiment Analysis: Use a sentiment analysis model to classify reviews as positive or negative.
  2. Sarcasm Detection: Apply this sarcasm detection model to reviews classified as positive. This approach allows the model to focus on distinguishing genuinely positive comments from sarcastic ones, enhancing the understanding of customer sentiment.
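The two-stage workflow above can be sketched as follows. The choice of sentiment model and the label strings (`NEGATIVE`, `sarcasm`) are assumptions for illustration; substitute whatever sentiment classifier and label mapping your deployment uses:

```python
from transformers import pipeline

# Stage 1: any sentiment classifier (the default English model is used here as a placeholder)
sentiment = pipeline("sentiment-analysis")
# Stage 2: this sarcasm detector, applied only to positive reviews
sarcasm = pipeline(
    "text-classification",
    model="dnzblgn/Sarcasm-Detection-Customer-Reviews",
)

def analyze_review(text: str) -> str:
    # Negative reviews need no sarcasm screening
    if sentiment(text)[0]["label"] == "NEGATIVE":
        return "negative"
    # Positive reviews are checked for hidden criticism
    label = sarcasm(text)[0]["label"]
    return "sarcastic-positive" if label == "sarcasm" else "genuine-positive"
```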

Potential Applications

  • Customer Service Analysis: Identify sarcastic comments within positive feedback, which could help businesses address underlying customer dissatisfaction.
  • Social Media Monitoring: Detect sarcasm in online reviews and feedback, improving sentiment tracking accuracy.
  • User Feedback Interpretation: Assist NLP systems in understanding the nuanced meanings behind user feedback, reducing misunderstandings caused by sarcasm.

Model Performance

This sarcasm detection model is evaluated on a dataset with a heavy skew toward nonsarcastic examples. While it achieves high accuracy, the evaluation focuses specifically on the precision and recall trade-off for the sarcasm class, ensuring it is adept at detecting sarcasm without over-identifying it in nonsarcastic comments.

Sarcasm Class Performance

  • Precision: 0.9964
  • Recall: 0.9986
  • F1-Score: 0.9975

Nonsarcasm Class Performance

  • Precision: 0.9995
  • Recall: 0.9989
  • F1-Score: 0.9992
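The F1-scores above are the harmonic mean of the corresponding precision and recall, which can be checked directly:

```python
def f1(precision: float, recall: float) -> float:
    # F1 is the harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.9964, 0.9986), 4))  # sarcasm class -> 0.9975
print(round(f1(0.9995, 0.9989), 4))  # nonsarcasm class -> 0.9992
```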

Precision and Recall Trade-off

Given the subtle nature of sarcasm, achieving a balance between precision and recall is crucial. A model with high precision but low recall may be overly cautious, missing many sarcastic statements. Conversely, a high-recall, low-precision model may over-detect sarcasm, resulting in false positives. This model achieves a high degree of both precision and recall for sarcasm detection, which is particularly beneficial for applications where accurately capturing sarcastic sentiment is critical.
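For reference, precision and recall for the sarcasm class are computed from the confusion-matrix counts as sketched below (a plain-Python illustration; the toy labels are invented for the example):

```python
def precision_recall(y_true, y_pred, positive="sarcasm"):
    # True positives: sarcastic reviews correctly flagged
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    # False positives: nonsarcastic reviews wrongly flagged (hurts precision)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    # False negatives: sarcastic reviews missed (hurts recall)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = ["sarcasm", "sarcasm", "nonsarcasm", "nonsarcasm"]
y_pred = ["sarcasm", "nonsarcasm", "sarcasm", "nonsarcasm"]
print(precision_recall(y_true, y_pred))  # (0.5, 0.5)
```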

Training Data

The model was fine-tuned on a custom dataset tailored for sarcasm detection in customer feedback. This dataset contains both sarcastic and nonsarcastic comments, though it is imbalanced toward nonsarcasm, to ensure that the model is well-equipped to handle sarcasm accurately. For further details or access to this dataset, please use the contact address listed above.

Training Hyperparameters

  • Learning Rate: 2e-5
  • Epochs: 3
  • Batch Size: 4
  • Gradient Accumulation Steps: 2
  • Weight Decay: 0.015
  • Warm-up Ratio: 0.1
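These hyperparameters map onto a transformers TrainingArguments configuration as sketched below. The output directory is a hypothetical placeholder, and the surrounding Trainer setup (model, tokenizer, datasets) is omitted:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="sarcasm-detector",   # hypothetical path
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,   # effective batch size of 8
    weight_decay=0.015,
    warmup_ratio=0.1,
)
```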

Evaluation Metrics

The model was evaluated with precision, recall, and F1-score, with particular attention to the sarcasm class given the class imbalance in the test set. The per-class scores are reported in the Model Performance section above.

Model Details

  • Format: Safetensors
  • Model Size: 279M parameters
  • Tensor Type: F32

Model Tree

dnzblgn/Sarcasm-Detection-Customer-Reviews is fine-tuned from microsoft/mdeberta-v3-base.