Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: expert-generated
Annotations Creators: expert-generated
Source Datasets: extended|s2orc
Sasha Luccioni committed
Commit: b377720
Parent(s): 7114a9c

Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment (#4336)


* Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, PiQA, Poem Sentiment, QAsper
* Update README.md: fixing header
* Update datasets/piqa/README.md
  Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
* Update README.md: changing MSRA NER metric to `seqeval`
* Update README.md: removing ROUGE args
* Update README.md: removing duplicate information
* Update README.md: removing eval for now
* Update README.md: removing eval for now

Co-authored-by: sashavor <sasha.luccioni@huggingface.co>
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

Commit from https://github.com/huggingface/datasets/commit/095d12ff7414df118f60e00cd6494299a881743a

Files changed (1): README.md (+9, -9)
```diff
@@ -66,9 +66,9 @@ QASPER is a dataset for question answering on scientific research papers. It con
 
 ### Supported Tasks and Leaderboards
 
-- `question-answering`: The dataset can be used to train a model for Question Answering. Success on this task is typically measured by achieving a *high* [F1 score](https://huggingface.co/metrics/f1). The [official baseline model](https://github.com/allenai/qasper-led-baseline) currently achieves 33.63 Token F1 score & uses [Longformer](https://huggingface.co/transformers/model_doc/longformer.html). This task has an active leaderboard which can be found [here](https://paperswithcode.com/sota/question-answering-on-qasper)
+- `question-answering`: The dataset can be used to train a model for Question Answering. Success on this task is typically measured by achieving a *high* [F1 score](https://huggingface.co/metrics/f1). The [official baseline model](https://github.com/allenai/qasper-led-baseline) currently achieves 33.63 Token F1 score & uses [Longformer](https://huggingface.co/transformers/model_doc/longformer.html). This task has an active leaderboard which can be found [here](https://paperswithcode.com/sota/question-answering-on-qasper)
 
-- `evidence-selection`: The dataset can be used to train a model for Evidence Selection. Success on this task is typically measured by achieving a *high* [F1 score](https://huggingface.co/metrics/f1). The [official baseline model](https://github.com/allenai/qasper-led-baseline) currently achieves 39.37 F1 score & uses [Longformer](https://huggingface.co/transformers/model_doc/longformer.html). This task has an active leaderboard which can be found [here](https://paperswithcode.com/sota/evidence-selection-on-qasper)
+- `evidence-selection`: The dataset can be used to train a model for Evidence Selection. Success on this task is typically measured by achieving a *high* [F1 score](https://huggingface.co/metrics/f1). The [official baseline model](https://github.com/allenai/qasper-led-baseline) currently achieves 39.37 F1 score & uses [Longformer](https://huggingface.co/transformers/model_doc/longformer.html). This task has an active leaderboard which can be found [here](https://paperswithcode.com/sota/evidence-selection-on-qasper)
 
 
 ### Languages
@@ -95,17 +95,17 @@ A typical instance in the dataset:
 'answer': [{
 'unanswerable':False,
 'extractive_spans':["q1_answer1_extractive_span1","q1_answer1_extractive_span2"],
-'yes_no':False,
-'free_form_answer':"q1_answer1",
-'evidence':["q1_answer1_evidence1","q1_answer1_evidence2",..],
+'yes_no':False,
+'free_form_answer':"q1_answer1",
+'evidence':["q1_answer1_evidence1","q1_answer1_evidence2",..],
 'highlighted_evidence':["q1_answer1_highlighted_evidence1","q1_answer1_highlighted_evidence2",..]
 },
 {
 'unanswerable':False,
 'extractive_spans':["q1_answer2_extractive_span1","q1_answer2_extractive_span2"],
-'yes_no':False,
-'free_form_answer':"q1_answer2",
-'evidence':["q1_answer2_evidence1","q1_answer2_evidence2",..],
+'yes_no':False,
+'free_form_answer':"q1_answer2",
+'evidence':["q1_answer2_evidence1","q1_answer2_evidence2",..],
 'highlighted_evidence':["q1_answer2_highlighted_evidence1","q1_answer2_highlighted_evidence2",..]
 }],
 'worker_id':["q1_answer1_worker_id","q1_answer2_worker_id"]
@@ -152,7 +152,7 @@ Unanswerable answers have "unanswerable" set to true. The remaining answers have
 
 ### Data Splits
 
-| | Train | Valid |
+| | Train | Valid |
 | ----- | ------ | ----- |
 | Number of papers | 888 | 281 |
 | Number of questions | 2593 | 1005 |
```
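The task descriptions in the diff measure success by token-level F1. As a rough illustration only (this is not the official `allenai/qasper` evaluation script, and the dict below just mirrors the `answer` fields shown in the diff with placeholder values), a minimal token-overlap F1 can be sketched as:

```python
from collections import Counter

# Hypothetical answer fragment following the schema in the README diff;
# field names come from the dataset card, values are placeholders.
answer = {
    "unanswerable": False,
    "extractive_spans": ["span1", "span2"],
    "yes_no": False,
    "free_form_answer": "q1_answer1",
    "evidence": ["evidence1", "evidence2"],
    "highlighted_evidence": ["highlighted1"],
}

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted and a reference answer.

    An illustrative sketch of the metric family the task descriptions
    refer to, not the official QASPER evaluator."""
    pred, ref = prediction.split(), reference.split()
    # Multiset intersection counts shared tokens, respecting repetitions.
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

A prediction matching two of three reference tokens, for example, scores precision = recall = 2/3, hence F1 = 2/3.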