---
language: "en"
tags:
- exbert
- commonsense
- semeval2020
- comve
license: "mit"
datasets:
- ComVE
metrics:
- bleu
widget:
- text: "Chicken can swim in water. <|continue|>"
---

# ComVE-distilgpt2

## Model description

Model finetuned on the Commonsense Validation and Explanation (ComVE) dataset introduced in [SemEval2020 Task4](https://competitions.codalab.org/competitions/21080) using a causal language modeling (CLM) objective.
The model is able to generate a reason explaining why a given natural language statement is against commonsense.

## Intended uses & limitations

You can use the raw model to generate reasons explaining why natural language statements are against commonsense.

#### How to use

You can use this model directly to generate a reason why a given statement is against commonsense with the [`generate.sh`](https://github.com/AliOsm/SemEval2020-Task4-ComVE/tree/master/TaskC-Generation) script.

*Note:* make sure that you are using version `2.4.1` of the `transformers` package. Newer versions have an issue in text generation that causes the model to repeat the last generated token over and over.
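
If you prefer calling the model from Python instead of the script, a minimal sketch looks like this (the sampling parameters are illustrative, not the values used in `generate.sh`):

```python
# Minimal generation sketch; assumes transformers==2.4.1, per the note above.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("aliosm/ComVE-distilgpt2")
model = GPT2LMHeadModel.from_pretrained("aliosm/ComVE-distilgpt2")

# The statement is followed by the `<|continue|>` separator used during finetuning.
prompt = "Chicken can swim in water. <|continue|>"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=64,
    do_sample=True,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```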

#### Limitations and bias

The model is usually biased toward negating the input sentence rather than producing a factual reason.

## Training data

The model is initialized from the [distilgpt2](https://github.com/huggingface/transformers/blob/master/model_cards/distilgpt2-README.md) model and finetuned on the [ComVE](https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation) dataset, which contains 10K against-commonsense sentences, each paired with three reference reasons.

## Training procedure

Each natural language statement that is against commonsense is concatenated with its reference reason, with `<|continue|>` as a separator; the model is then finetuned using the CLM objective.
The model was trained on an Nvidia Tesla P100 GPU from the Google Colab platform with a 5e-5 learning rate, 15 epochs, a maximum sequence length of 128, and a batch size of 64.
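
As a concrete illustration of that format, the following sketch builds one training example (the exact whitespace and any end-of-text token are assumptions, not taken from the authors' preprocessing code):

```python
# Hypothetical assembly of a ComVE finetuning example: statement and
# reference reason joined by the `<|continue|>` separator, matching the
# format expected at generation time.
statement = "Chicken can swim in water."
reason = "Chickens can't swim."

training_example = statement + " <|continue|> " + reason
print(training_example)
# -> Chicken can swim in water. <|continue|> Chickens can't swim.
```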

<center>
<img src="https://i.imgur.com/xKbrwBC.png">
</center>

## Eval results

The model achieved BLEU scores of 13.7582 and 13.8026 on the SemEval2020 Task4: Commonsense Validation and Explanation development and testing datasets, respectively.
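
For reference, BLEU against the three reference reasons per statement can be computed along these lines (the card does not state which BLEU implementation the official evaluation used, so treat this NLTK version as an assumption):

```python
# Multi-reference BLEU sketch with NLTK; the official SemEval2020 Task4
# evaluation may use a different BLEU implementation. Toy data only.
from nltk.translate.bleu_score import SmoothingFunction, corpus_bleu

hypotheses = ["chickens cannot swim".split()]  # one generated reason
references = [[  # three reference reasons for the same statement
    "chickens can't swim".split(),
    "chickens do not know how to swim".split(),
    "a chicken is not able to swim".split(),
]]

score = corpus_bleu(references, hypotheses,
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.4f}")
```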

### BibTeX entry and citation info

```bibtex
@article{fadel2020justers,
  title={JUSTers at SemEval-2020 Task 4: Evaluating Transformer Models Against Commonsense Validation and Explanation},
  author={Fadel, Ali and Al-Ayyoub, Mahmoud and Cambria, Erik},
  year={2020}
}
```

<a href="https://huggingface.co/exbert/?model=aliosm/ComVE-distilgpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>