Commit 423989e by lewtun (HF staff)
Parent: 92d1d2a

Add evaluation results on the default config of quoref

Beep boop, I am a bot from Hugging Face's automatic model evaluator 👋!

Your model has been evaluated on the default config of the [quoref](https://huggingface.co/datasets/quoref) dataset by @nbroad, using the predictions stored [here](https://huggingface.co/datasets/autoevaluate/autoeval-eval-project-quoref-9c01ff03-1305849901).

Accept this pull request to see the results displayed on the [Hub leaderboard](https://huggingface.co/spaces/autoevaluate/leaderboards?dataset=quoref).

Evaluate your model on more datasets [here](https://huggingface.co/spaces/autoevaluate/model-evaluator?dataset=quoref).
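Because the raw predictions are public, the reported Exact Match and F1 can in principle be recomputed locally. Below is a minimal sketch using the `squad` metric from the `evaluate` library, which returns both scores; it assumes the predictions repo follows a SQuAD-style layout (the split name and the `id`/`prediction_text` column names are assumptions, not confirmed by this PR):

```python
from datasets import load_dataset
import evaluate

# Predictions saved by the model evaluator (repo id taken from this PR).
preds = load_dataset(
    "autoevaluate/autoeval-eval-project-quoref-9c01ff03-1305849901",
    split="train",  # assumption: predictions live in a single split
)

# Reference answers from the split the model was evaluated on.
refs = load_dataset("quoref", split="validation")

# SQuAD-style metric: computes both exact_match and f1.
squad_metric = evaluate.load("squad")

results = squad_metric.compute(
    # Column names here are assumptions about the predictions repo layout.
    predictions=[
        {"id": p["id"], "prediction_text": p["prediction_text"]} for p in preds
    ],
    references=[{"id": r["id"], "answers": r["answers"]} for r in refs],
)
print(results)  # e.g. {'exact_match': ..., 'f1': ...}
```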

Files changed (1)
  1. README.md +17 -0
README.md CHANGED
```diff
@@ -39,6 +39,23 @@ model-index:
       type: f1
       value: 82.336
       verified: true
+  - task:
+      type: question-answering
+      name: Question Answering
+    dataset:
+      name: quoref
+      type: quoref
+      config: default
+      split: validation
+    metrics:
+    - name: Exact Match
+      type: exact_match
+      value: 78.8581
+      verified: true
+    - name: F1
+      type: f1
+      value: 82.8261
+      verified: true
   task:
   - question-answering
   datasets:
```
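Once merged, this entry becomes part of the model card's YAML front matter and can be read programmatically. A minimal sketch using `huggingface_hub` (the repo id `your-username/your-model` is a placeholder, since this PR does not name the target model):

```python
from huggingface_hub import ModelCard

# Placeholder repo id: substitute the model this PR was opened against.
card = ModelCard.load("your-username/your-model")

# eval_results is parsed from the model-index block added by this PR.
for result in card.data.eval_results:
    print(
        f"{result.dataset_name} ({result.dataset_config}/{result.dataset_split}): "
        f"{result.metric_name} = {result.metric_value} "
        f"(verified={result.verified})"
    )
```

Reading the metrics back through `ModelCard` rather than parsing the YAML by hand keeps the script aligned with however the Hub evolves the model-index schema.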