lewtun committed
Commit eb99045
1 Parent(s): 9530fb8

Add evaluation results on the adversarialQA config of adversarial_qa


Beep boop, I am a bot from Hugging Face's automatic model evaluator 👋!
Your model has been evaluated on the adversarialQA config of the [adversarial_qa](https://huggingface.co/datasets/adversarial_qa) dataset by @nbroad, using the predictions stored [here](https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-adversarial_qa-0243fffc-1303549871).
Accept this pull request to see the results displayed on the [Hub leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=adversarial_qa).
Evaluate your model on more datasets [here](https://huggingface.co/spaces/autoevaluate/model-evaluator?dataset=adversarial_qa).
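
The Exact Match and F1 values reported in this PR are SQuAD-style metrics. As a rough illustration only (this is not the evaluator's actual pipeline, and the model id below is a placeholder, not the model in this PR), scores of this kind could be reproduced locally along these lines:

```python
# Minimal sketch: score a QA model on the adversarialQA validation split
# with SQuAD-style Exact Match / F1. The model id is a placeholder.
from datasets import load_dataset
from transformers import pipeline
import evaluate

ds = load_dataset("adversarial_qa", "adversarialQA", split="validation")
qa = pipeline("question-answering", model="your-username/your-qa-model")  # placeholder

predictions, references = [], []
for example in ds:  # iterating the full split is slow; shown for clarity
    out = qa(question=example["question"], context=example["context"])
    predictions.append({"id": example["id"], "prediction_text": out["answer"]})
    references.append({"id": example["id"], "answers": example["answers"]})

squad_metric = evaluate.load("squad")  # reports exact_match and f1 on a 0-100 scale
print(squad_metric.compute(predictions=predictions, references=references))
```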

Files changed (1)
  1. README.md +17 -0
README.md CHANGED
@@ -44,6 +44,23 @@ model-index:
       type: f1
       value: 12.4
       verified: true
+  - task:
+      type: question-answering
+      name: Question Answering
+    dataset:
+      name: adversarial_qa
+      type: adversarial_qa
+      config: adversarialQA
+      split: validation
+    metrics:
+    - name: Exact Match
+      type: exact_match
+      value: 42.3667
+      verified: true
+    - name: F1
+      type: f1
+      value: 53.3255
+      verified: true
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
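
The added block is standard model-index metadata living in the README's YAML front matter. As a minimal sketch (not part of this PR, and assuming PyYAML is installed), the same metadata could be read back from a local copy of the README like so:

```python
# Sketch: parse the model-index block from a local README.md front matter.
import yaml

with open("README.md", encoding="utf-8") as f:
    text = f.read()

# The metadata sits between the two leading "---" markers at the top of the file.
front_matter = text.split("---")[1]
metadata = yaml.safe_load(front_matter)

for result in metadata["model-index"][0]["results"]:
    dataset = result["dataset"]
    for metric in result["metrics"]:
        print(dataset["type"], dataset.get("config"), metric["type"], metric["value"])
```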