julien-c (HF staff) committed
Commit f2eedc2 · 1 parent: 3445a96

Migrate model card from transformers-repo

Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/aliosm/ComVE-gpt2-large/README.md

Files changed (1): README.md (added, +68 −0)

---
language: "en"
tags:
- gpt2
- exbert
- commonsense
- semeval2020
- comve
license: "mit"
datasets:
- https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation
metrics:
- bleu
widget:
- text: "Chicken can swim in water. <|continue|>"
---

# ComVE-gpt2-large

## Model description

A model fine-tuned on the Commonsense Validation and Explanation (ComVE) dataset introduced in [SemEval2020 Task4](https://competitions.codalab.org/competitions/21080) using a causal language modeling (CLM) objective.
The model generates a reason explaining why a given natural language statement is against common sense.

## Intended uses & limitations

You can use the raw model for text generation, producing reasons why natural language statements are against common sense.

#### How to use

You can use this model directly to generate reasons why a given statement is against common sense using the [`generate.sh`](https://github.com/AliOsm/SemEval2020-Task4-ComVE/tree/master/TaskC-Generation) script.

*Note:* make sure that you are using version `2.4.1` of the `transformers` package. Newer versions have an issue in text generation that makes the model repeat the last generated token again and again.
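
As an alternative to the script, the sketch below shows one way to load the model and generate a reason with the `transformers` Python API. The decoding parameters (`max_length`, `num_beams`) are illustrative assumptions, not the settings used by the authors.

```python
# Minimal sketch: generate a reason for a statement that is against
# common sense, assuming the standard transformers generation API.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("aliosm/ComVE-gpt2-large")
model = GPT2LMHeadModel.from_pretrained("aliosm/ComVE-gpt2-large")

# The statement and its reason are separated by the <|continue|> token.
input_ids = tokenizer.encode("Chicken can swim in water. <|continue|>",
                             return_tensors="pt")

output = model.generate(input_ids, max_length=64, num_beams=5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```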

#### Limitations and bias

The model is usually biased toward negating the entered sentence instead of producing a factual reason.

## Training data

The model is initialized from the [gpt2-large](https://github.com/huggingface/transformers/blob/master/model_cards/gpt2-README.md) model and fine-tuned on the [ComVE](https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation) dataset, which contains 10K statements that are against common sense, each paired with three reference reasons.

## Training procedure

Each natural language statement that is against common sense is concatenated with its reference reason, with `<|continue|>` as a separator, then the model is fine-tuned using the CLM objective.
The model was trained on an Nvidia Tesla P100 GPU from the Google Colab platform with a 5e-5 learning rate, 5 epochs, a maximum sequence length of 128, and a batch size of 64.
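
For illustration, a single training example is laid out roughly as follows (the reason text here is a hypothetical stand-in for an actual reference reason from the dataset):

```python
# Hypothetical sketch of how one ComVE training instance is assembled
# before tokenization: statement, separator token, reference reason.
statement = "Chicken can swim in water."
reason = "Chickens are unable to swim."  # stand-in for a real reference reason
training_text = statement + " <|continue|> " + reason
print(training_text)
# Chicken can swim in water. <|continue|> Chickens are unable to swim.
```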

<center>
<img src="https://i.imgur.com/xKbrwBC.png">
</center>

## Eval results

The model achieved 16.5110/15.9299 BLEU scores on the SemEval2020 Task4: Commonsense Validation and Explanation development and testing datasets, respectively.
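
Since each statement comes with three reference reasons, BLEU is computed against multiple references per hypothesis. A toy illustration with NLTK (not the official task scorer, and with made-up strings):

```python
# Illustrative only: corpus BLEU of one generated reason against its
# three (hypothetical) reference reasons, using NLTK.
from nltk.translate.bleu_score import corpus_bleu

hypotheses = ["Chickens are unable to swim in water .".split()]
references = [[
    "Chickens can not swim .".split(),
    "A chicken is unable to swim .".split(),
    "Chickens do not swim in water .".split(),
]]
print(corpus_bleu(references, hypotheses))
```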

### BibTeX entry and citation info

```bibtex
@article{fadel2020justers,
  title={JUSTers at SemEval-2020 Task 4: Evaluating Transformer Models Against Commonsense Validation and Explanation},
  author={Fadel, Ali and Al-Ayyoub, Mahmoud and Cambria, Erik},
  year={2020}
}
```

<a href="https://huggingface.co/exbert/?model=aliosm/ComVE-gpt2-large">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>