---
license: apache-2.0
---

# viniLM-2021-qa-evaluator

This model takes a question-answer pair as input and outputs a value representing its prediction of whether the input is a valid question-answer pair. The model is a pretrained [viniLM-2021](https://huggingface.co/VMware/vinilm-2021-from-large) with a sequence classification head.

Observationally, this model produces results similar to the original [BERT-Base-cased QA Evaluator](https://huggingface.co/iarfmoose/bert-base-cased-qa-evaluator), but inference is twice as fast.

## Intended uses

The QA evaluator was originally designed to be used with the [t5-base-question-generator](https://huggingface.co/iarfmoose/t5-base-question-generator) for evaluating the quality of generated questions.

The input for the QA evaluator follows the format for `BertForSequenceClassification`, with the question and answer as the two sequences. Inputs should take the following format:
```
[CLS] <question> [SEP] <answer> [SEP]
```
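
Below is a minimal inference sketch using the Transformers Auto classes. The repo id `VMware/vinilm-2021-qa-evaluator` is an assumption based on this card's title; substitute the model's actual path. Encoding the question and answer as a text pair reproduces the layout above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repo id; replace with the actual model path if it differs.
model_name = "VMware/vinilm-2021-qa-evaluator"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

question = "What is the capital of France?"
answer = "The capital of France is Paris."

# Passing the two texts as a pair yields
# [CLS] <question> [SEP] <answer> [SEP] for BERT-style tokenizers.
inputs = tokenizer(question, answer, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# A higher score for the positive class suggests a valid QA pair.
print(logits)
```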

## Limitations and bias

The model is trained to evaluate whether a question and answer are semantically related, but it cannot determine whether an answer is actually true/correct.

## Training data

This model was trained on the same [dataset](https://huggingface.co/datasets/iarfmoose/qa_evaluator) as the original [BERT-Base-cased QA Evaluator](https://huggingface.co/iarfmoose/bert-base-cased-qa-evaluator), which is made up of question-answer pairs from the following datasets:
- [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/)
- [RACE](http://www.cs.cmu.edu/~glai1/data/race/)
- [CoQA](https://stanfordnlp.github.io/coqa/)
- [MSMARCO](https://microsoft.github.io/msmarco/)

## Training procedure

For 50% of training examples, the question and answer were simply concatenated; for the other 50%, a corruption operation was applied first (either swapping the answer for an unrelated answer, or copying part of the question into the answer). The model was then trained to predict whether the input sequence represented one of the original QA pairs or a corrupted input.
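
As an illustration of this procedure (not the original training code), the corruption step might be sketched as follows; `make_training_example` and `answer_pool` are hypothetical names introduced here:

```python
import random

def make_training_example(question, answer, answer_pool):
    """Return ((question, text), label): label 1 for a genuine pair,
    0 for a corrupted one. Illustrative sketch of the procedure above."""
    if random.random() < 0.5:
        return (question, answer), 1  # genuine pair, left intact
    if random.random() < 0.5:
        # Swap in an unrelated answer drawn from the rest of the data.
        corrupted = random.choice([a for a in answer_pool if a != answer])
    else:
        # Copy part of the question into the answer.
        words = question.split()
        corrupted = " ".join(words[: max(1, len(words) // 2)])
    return (question, corrupted), 0
```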