---
license: apache-2.0
datasets:
- raquiba/Sarcasm_News_Headline
language:
- en
metrics:
- perplexity
---

# Model Card for `sarcasm_minus`

This model is a `facebook/bart-large` model fine-tuned on sarcastic comments from the `raquiba/Sarcasm_News_Headline` dataset.

## Model Details

This model is not intended for plain inference, as it is very likely to generate sarcastic content. It is instead intended as a "utility model" for detecting and fixing sarcastic content: its token probability distributions will likely differ from those of comparable models not trained or fine-tuned on sarcastic data.

Its name, `sarcasm_minus`, refers to the _G-_ (anti-expert) model in [Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts](https://aclanthology.org/2023.acl-short.21.pdf).

### Model Description

- **Developed by:** [tteofili]
- **Shared by:** [tteofili]
- **License:** apache-2.0
- **Finetuned from model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)

## Uses

## Bias, Risks, and Limitations

This model is fine-tuned on sarcastic comments from `raquiba/Sarcasm_News_Headline` and is very likely to produce sarcastic content. For this reason it should only be used in combination with other models for detecting and fixing sarcastic content; see, for example, [Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts](https://aclanthology.org/2023.acl-short.21.pdf).

## Evaluation

This section describes the evaluation protocol and provides the results.

### Testing Data, Factors & Metrics

#### Testing Data

The model was tested on the `raquiba/Sarcasm_News_Headline` test set.

#### Metrics

The model was evaluated using `perplexity` (on the MLM task).

### Results

Perplexity: _1.00_
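
Perplexity is the exponential of the average negative log-likelihood the model assigns to the reference tokens, so a perplexity of 1.00 corresponds to the model assigning (near-)certain probability to every masked token. A minimal sketch of the metric itself, using toy token probabilities rather than actual model outputs:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-likelihood) over reference tokens."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# A model assigning probability 1.0 to every reference token reaches
# the minimum perplexity of 1.0.
print(perplexity([1.0, 1.0, 1.0]))   # → 1.0

# Less confident predictions raise perplexity (here: sqrt(8) ≈ 2.83).
print(round(perplexity([0.5, 0.25]), 2))  # → 2.83
```

In practice the per-token probabilities would come from the model's softmaxed logits at each masked position; the helper above only illustrates how the reported number is aggregated.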