lewtun (HF staff) committed
Commit 9d48a50
Parent: 932875d

Add evaluation results on the adversarialQA config of adversarial_qa


Beep boop, I am a bot from Hugging Face's automatic model evaluator 👋!
Your model has been evaluated on the adversarialQA config of the [adversarial_qa](https://huggingface.co/datasets/adversarial_qa) dataset by @ceyda, using the predictions stored [here](https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205629).
Accept this pull request to see the results displayed on the [Hub leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=adversarial_qa).
Evaluate your model on more datasets [here](https://huggingface.co/spaces/autoevaluate/model-evaluator?dataset=adversarial_qa).

Files changed (1): README.md (+17, -0)
README.md CHANGED

```diff
@@ -23,6 +23,23 @@ model-index:
       type: f1
       value: 78.6191
       verified: true
+  - task:
+      type: question-answering
+      name: Question Answering
+    dataset:
+      name: adversarial_qa
+      type: adversarial_qa
+      config: adversarialQA
+      split: validation
+    metrics:
+    - name: Exact Match
+      type: exact_match
+      value: 23.1333
+      verified: true
+    - name: F1
+      type: f1
+      value: 34.5358
+      verified: true
 ---
 
 # bert-base-uncased for QA
```
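For context, the two metrics reported in the diff, `exact_match` and `f1`, are SQuAD-style answer-overlap scores. Below is a minimal sketch of how they are computed; it is simplified (real implementations also strip punctuation and the articles a/an/the, and take the maximum score over multiple gold answers):

```python
import collections

def normalize(s):
    # Simplified SQuAD-style normalization: lowercase and split on whitespace.
    # (Full implementations also remove punctuation and articles.)
    return s.lower().split()

def exact_match(prediction, reference):
    # 1.0 if the normalized answers are identical, else 0.0
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction, reference):
    # Token-level F1: harmonic mean of precision and recall over
    # the multiset of shared tokens.
    pred_tokens = normalize(prediction)
    ref_tokens = normalize(reference)
    common = collections.Counter(pred_tokens) & collections.Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

The dataset-level values shown above (e.g. F1 = 34.5358) are these per-example scores averaged over the validation split and scaled to percentages.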