lewtun (HF staff) committed
Commit b1b5b12
1 Parent(s): 089becf

Add evaluation results on the adversarialQA config of adversarial_qa


Beep boop, I am a bot from Hugging Face's automatic model evaluator 👋!
Your model has been evaluated on the adversarialQA config of the [adversarial_qa](https://huggingface.co/datasets/adversarial_qa) dataset by @ceyda, using the predictions stored [here](https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-adversarial_qa-e34332b7-12205625).
Accept this pull request to see the results displayed on the [Hub leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=adversarial_qa).
Evaluate your model on more datasets [here](https://huggingface.co/spaces/autoevaluate/model-evaluator?dataset=adversarial_qa).

Files changed (1)
  1. README.md +17 -0
README.md CHANGED
@@ -25,6 +25,23 @@ model-index:
       type: f1
       value: 84.8886
       verified: true
+  - task:
+      type: question-answering
+      name: Question Answering
+    dataset:
+      name: adversarial_qa
+      type: adversarial_qa
+      config: adversarialQA
+      split: validation
+    metrics:
+    - name: Exact Match
+      type: exact_match
+      value: 30.2333
+      verified: true
+    - name: F1
+      type: f1
+      value: 43.3606
+      verified: true
 ---
 
 # Multilingual XLM-RoBERTa large for QA on various languages
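For reference, the model-index fragment this PR adds is plain YAML, so the recorded metrics can be read back programmatically. A minimal sketch using PyYAML; the exact indentation of the fragment is assumed from the diff, and the field names (`task`, `dataset`, `metrics`) follow the Hub's model-index schema:

```python
import yaml

# The results entry added by this PR, reproduced as a YAML list
# (indentation assumed from the diff above).
fragment = """
- task:
    type: question-answering
    name: Question Answering
  dataset:
    name: adversarial_qa
    type: adversarial_qa
    config: adversarialQA
    split: validation
  metrics:
  - name: Exact Match
    type: exact_match
    value: 30.2333
    verified: true
  - name: F1
    type: f1
    value: 43.3606
    verified: true
"""

results = yaml.safe_load(fragment)
# Collect metric values keyed by their type identifier.
metrics = {m["type"]: m["value"] for m in results[0]["metrics"]}
print(metrics)  # {'exact_match': 30.2333, 'f1': 43.3606}
```

A check like this is one way to confirm the YAML stays well-formed before accepting the PR, since a malformed model-index block would break the card's metadata.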