
Model Card for sarcasm_minus

This model is a facebook/bart-large model fine-tuned on sarcastic comments from the raquiba/Sarcasm_News_Headline dataset.

Model Details

This model is not intended for plain inference, as it is very likely to generate sarcastic content. It is instead intended to be used as a "utility model" for detecting and fixing sarcastic content, since its token probability distributions will likely differ from those of comparable models not trained or fine-tuned on sarcastic data. Its name, sarcasm_minus, refers to the G- (anti-expert) model in Detoxifying Text with MARCO: Controllable Revision with Experts and Anti-Experts.
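
For instance, here is a minimal sketch (illustrative, not part of the original card) of comparing this model's per-token distributions against the base facebook/bart-large. The input sentence and the KL-based scoring rule are assumptions chosen for demonstration:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
base = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large").eval()
anti = AutoModelForSeq2SeqLM.from_pretrained("trustyai/sarcasm_minus").eval()

text = "scientists reveal shocking truth about breakfast"  # illustrative input
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Score the input against itself (BART denoising setup): each model
    # yields per-position distributions over the vocabulary.
    base_logits = base(**enc, labels=enc["input_ids"]).logits
    anti_logits = anti(**enc, labels=enc["input_ids"]).logits

# Per-token KL divergence of the base model's distribution from the
# anti-expert's; positions where it spikes are candidate sarcasm markers.
kl = torch.nn.functional.kl_div(
    anti_logits.log_softmax(-1), base_logits.softmax(-1), reduction="none"
).sum(-1)
for token, score in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), kl[0]):
    print(f"{token:>12}  {score.item():.3f}")
```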

Model Description

  • Developed by: tteofili
  • Shared by: tteofili
  • License: apache-2.0
  • Fine-tuned from model: facebook/bart-large

Bias, Risks, and Limitations

This model is fine-tuned on sarcastic comments from raquiba/Sarcasm_News_Headline and is very likely to produce sarcastic content. For this reason, it should only be used in combination with other models for detecting and fixing sarcastic content, as sketched below; see, for example, Detoxifying Text with MARCO: Controllable Revision with Experts and Anti-Experts.
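
The following is a rough sketch of the experts/anti-experts idea with this model in the anti-expert role. The logit-subtraction rule, the alpha value, and the prompt are simplifying assumptions, not the exact MARCO procedure:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
base = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large").eval()
anti = AutoModelForSeq2SeqLM.from_pretrained("trustyai/sarcasm_minus").eval()

alpha = 0.5  # steering strength; an illustrative value, not from the paper
enc = tokenizer("area man discovers thing everyone already knew", return_tensors="pt")
# BART decoding starts with the decoder start token followed by a forced BOS.
decoder_ids = torch.tensor([[base.config.decoder_start_token_id, tokenizer.bos_token_id]])

with torch.no_grad():
    for _ in range(32):  # greedy decoding with anti-expert steering
        base_logits = base(**enc, decoder_input_ids=decoder_ids).logits[:, -1]
        anti_logits = anti(**enc, decoder_input_ids=decoder_ids).logits[:, -1]
        # Down-weight tokens the sarcasm anti-expert favors.
        next_id = (base_logits - alpha * anti_logits).argmax(-1, keepdim=True)
        decoder_ids = torch.cat([decoder_ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break

print(tokenizer.decode(decoder_ids[0], skip_special_tokens=True))
```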

Evaluation

This section describes the evaluation protocol and provides the results.

Testing Data, Factors & Metrics

Testing Data

This model was tested on the raquiba/Sarcasm_News_Headline test set.

Metrics

The model was evaluated using perplexity (on the masked language modeling task).

Results

Perplexity: 1.00
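
The card does not spell out the exact evaluation protocol. Below is a minimal sketch of one way such a perplexity figure could be reproduced; the dataset split and field names, the sample size, and the reconstruction setup (headline as both source and target) are all assumptions:

```python
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("trustyai/sarcasm_minus")
model = AutoModelForSeq2SeqLM.from_pretrained("trustyai/sarcasm_minus").eval()

# Split and field names are assumptions about the dataset's schema.
data = load_dataset("raquiba/Sarcasm_News_Headline", split="test")

losses = []
with torch.no_grad():
    for row in data.select(range(100)):  # small sample, for illustration only
        enc = tokenizer(row["headline"], return_tensors="pt", truncation=True)
        # Reconstruction loss with the headline as both source and target.
        losses.append(model(**enc, labels=enc["input_ids"]).loss.item())

print("perplexity:", math.exp(sum(losses) / len(losses)))
```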

