tteofili committed · Commit e29fb05 (verified) · Parent(s): c1ea051

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED:

```diff
@@ -7,7 +7,7 @@ language:
 metrics:
 - perplexity
 ---
-# Model Card for `gminus`
+# Model Card for `sarcasm_minus`
 
 This model is a `facebook/bart-large` fine-tuned on sarcastic comments from `raquiba/Sarcasm_News_Headline` dataset.
 
@@ -15,7 +15,7 @@ This model is a `facebook/bart-large` fine-tuned on sarcastic comments from `raq
 
 This model is not intended to be used for plain inference as it is very likely to predict sarcastic content.
 It is intended to be used instead as "utility model" for detecting and fixing sarcastic content as its token probability distributions will likely differ from comparable models not trained/fine-tuned over sarcastic data.
-Its name `gminus` refers to the _G-_ model in [Detoxifying Text with MARCO: Controllable Revision with Experts and Anti-Experts](https://aclanthology.org/2023.acl-short.21.pdf).
+Its name `sarcasm_minus` refers to the _G-_ model in [Detoxifying Text with MARCO: Controllable Revision with Experts and Anti-Experts](https://aclanthology.org/2023.acl-short.21.pdf).
 
 ### Model Description
 
```
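The intended use described in the card (contrasting the anti-expert's token probability distribution with a comparable model not fine-tuned on sarcasm, as with the _G-_ expert in MARCO) can be made concrete with a short script. The sketch below is not part of the commit: it scores the same headline under the fine-tuned checkpoint and plain `facebook/bart-large`, then prints the per-token log-probability gap; tokens the anti-expert rates far more likely are candidate sarcasm markers. The hub id `tteofili/sarcasm_minus` is an assumption for illustration, not confirmed by the commit.

```python
# Minimal sketch: contrast per-token log-probabilities of a sarcasm-tuned
# anti-expert BART against the base model it was fine-tuned from.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
base = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
anti = BartForConditionalGeneration.from_pretrained("tteofili/sarcasm_minus")  # assumed hub id

def token_logprobs(model, text: str) -> torch.Tensor:
    """Log-probability the model assigns to each token of `text`
    when asked to reconstruct it (text as both input and target)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(input_ids=enc.input_ids, labels=enc.input_ids)
    logprobs = out.logits.log_softmax(dim=-1)
    # pick out the log-prob of each actual target token
    return logprobs[0].gather(1, enc.input_ids[0].unsqueeze(-1)).squeeze(-1)

text = "Area man wins argument with toaster."
gap = token_logprobs(anti, text) - token_logprobs(base, text)
for tok, g in zip(tokenizer.convert_ids_to_tokens(tokenizer(text).input_ids), gap):
    # Large positive gap: the sarcasm anti-expert finds this token much
    # more plausible than the base model does.
    print(f"{tok:>12}  {g.item():+.2f}")
```

The same per-token gap could drive a rewriting loop in the spirit of MARCO, masking and regenerating the highest-gap spans with the base model.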