lewtun (HF staff) committed
Commit 85791e7
1 Parent(s): 54d10aa

Add evaluation results on the adversarialQA config of adversarial_qa


Beep boop, I am a bot from Hugging Face's automatic model evaluator 👋!
Your model has been evaluated on the adversarialQA config of the [adversarial_qa](https://huggingface.co/datasets/adversarial_qa) dataset by @mbartolo, using the predictions stored [here](https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-adversarial_qa-58460439-11825575).
Accept this pull request to see the results displayed on the [Hub leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=adversarial_qa).
Evaluate your model on more datasets [here](https://huggingface.co/spaces/autoevaluate/model-evaluator?dataset=adversarial_qa).

Files changed (1)
  1. README.md +17 -0
README.md CHANGED

@@ -27,6 +27,23 @@ model-index:
       type: f1
       value: 91.1623
       verified: true
+  - task:
+      type: question-answering
+      name: Question Answering
+    dataset:
+      name: adversarial_qa
+      type: adversarial_qa
+      config: adversarialQA
+      split: validation
+    metrics:
+    - name: Exact Match
+      type: exact_match
+      value: 41.9333
+      verified: true
+    - name: F1
+      type: f1
+      value: 56.3652
+      verified: true
 ---
 # deberta-v3-large for QA
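The lines added by this diff follow the Hub's `model-index` YAML schema, which is machine-readable. As a quick sanity check before accepting, the new result block can be parsed with PyYAML; this is a minimal sketch in which the `snippet` string is copied from the diff, not read from the live repository:

```python
import yaml

# The results entry added by this PR (values copied from the diff).
snippet = """
- task:
    type: question-answering
    name: Question Answering
  dataset:
    name: adversarial_qa
    type: adversarial_qa
    config: adversarialQA
    split: validation
  metrics:
  - name: Exact Match
    type: exact_match
    value: 41.9333
    verified: true
  - name: F1
    type: f1
    value: 56.3652
    verified: true
"""

results = yaml.safe_load(snippet)

# Print each reported metric with its verification flag.
for metric in results[0]["metrics"]:
    print(f'{metric["name"]}: {metric["value"]} (verified={metric["verified"]})')
# Exact Match: 41.9333 (verified=True)
# F1: 56.3652 (verified=True)
```

The same check works against a full model card: `huggingface_hub.ModelCard.load(...)` exposes the front matter, so the metrics never need to be re-typed by hand.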