lewtun (HF staff) committed on
Commit 1c7f3a0
1 Parent(s): 8c84cb4

Add evaluation results on the adversarialQA config of adversarial_qa


Beep boop, I am a bot from Hugging Face's automatic model evaluator 👋!
Your model has been evaluated on the adversarialQA config of the [adversarial_qa](https://huggingface.co/datasets/adversarial_qa) dataset by @nbroad, using the predictions stored [here](https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-adversarial_qa-92a1abad-1303449870).
Accept this pull request to see the results displayed on the [Hub leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=adversarial_qa).
Evaluate your model on more datasets [here](https://huggingface.co/spaces/autoevaluate/model-evaluator?dataset=adversarial_qa).
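
For reference, a minimal sketch of how an evaluation like this can be reproduced locally with the `datasets`, `transformers`, and `evaluate` libraries (this is not the evaluator's actual pipeline). The model id `nbroad/rob-base-superqa2` is an assumption based on the card name, and the sketch scores the validation split, whereas the hosted evaluator scored the test split recorded in the metadata below.

```python
# Minimal sketch, not the evaluator's actual pipeline.
from datasets import load_dataset
from transformers import pipeline
import evaluate

# adversarialQA config of the adversarial_qa dataset; the validation split is
# used here because it ships with reference answers.
dataset = load_dataset("adversarial_qa", "adversarialQA", split="validation")

# Assumed model id, derived from the card name; the real repository may differ.
qa = pipeline("question-answering", model="nbroad/rob-base-superqa2")
squad_metric = evaluate.load("squad")  # reports exact_match and f1, as in this PR

predictions, references = [], []
for example in dataset:
    answer = qa(question=example["question"], context=example["context"])
    predictions.append({"id": example["id"], "prediction_text": answer["answer"]})
    references.append({"id": example["id"], "answers": example["answers"]})

print(squad_metric.compute(predictions=predictions, references=references))
# e.g. {'exact_match': ..., 'f1': ...}
```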

Files changed (1)
  1. README.md +18 -1
README.md CHANGED
@@ -9,7 +9,24 @@ datasets:
 - duorc
 model-index:
 - name: rob-base-superqa2
-  results: []
+  results:
+  - task:
+      type: question-answering
+      name: Question Answering
+    dataset:
+      name: adversarial_qa
+      type: adversarial_qa
+      config: adversarialQA
+      split: test
+    metrics:
+    - name: Exact Match
+      type: exact_match
+      value: 12.4
+      verified: true
+    - name: F1
+      type: f1
+      value: 12.4
+      verified: true
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You