aaditya committed on
Commit
d55f554
1 Parent(s): 5c26998

Add MedMCQA dataset (#4064)


* adding medmcqa dataset

* Update datasets/medmcqa/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/medmcqa/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/medmcqa/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/medmcqa/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update medmcqa.py

* Update README.md

* Update README.md

* Update README.md

* Update README.md

* Add files via upload

* Delete dummy_data.zip

* Update medmcqa.py

* Create eadme.txt

* Add files via upload

* Delete eadme.txt

* Create medmcqa.py

* Update medmcqa.py

* Delete dataset_infos.json

* Add files via upload

* Update medmcqa.py

* Update medmcqa.py

* Update datasets/medmcqa/README.md

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Delete dummy_data.zip

* Delete medmcqa.py

* Add files via upload

* fix dummy_data location and YAML tags

* Update README.md

* Update datasets/medmcqa/medmcqa.py

Co-authored-by: Albert Villanova del Moral <8515462+albertvillanova@users.noreply.github.com>

* Update datasets/medmcqa/medmcqa.py

Co-authored-by: Albert Villanova del Moral <8515462+albertvillanova@users.noreply.github.com>

* Update datasets/medmcqa/README.md

Co-authored-by: Albert Villanova del Moral <8515462+albertvillanova@users.noreply.github.com>

* Update datasets/medmcqa/medmcqa.py

Co-authored-by: Albert Villanova del Moral <8515462+albertvillanova@users.noreply.github.com>

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
Co-authored-by: Quentin Lhoest <lhoest.q@gmail.com>
Co-authored-by: Albert Villanova del Moral <8515462+albertvillanova@users.noreply.github.com>

Commit from https://github.com/huggingface/datasets/commit/0d6add1e54c88592e77f749ecd106dccc910f169

Files changed (4)
  1. README.md +233 -0
  2. dataset_infos.json +1 -0
  3. dummy/1.1.0/dummy_data.zip +3 -0
  4. medmcqa.py +116 -0
README.md ADDED
@@ -0,0 +1,233 @@
---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
languages:
- en
licenses:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
- multiple-choice
task_ids:
- multiple-choice-qa
- open-domain-qa
paperswithcode_id: medmcqa
pretty_name: MedMCQA
---

# Dataset Card for MedMCQA

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://medmcqa.github.io
- **Repository:** https://github.com/medmcqa/medmcqa
- **Paper:** https://arxiv.org/abs/2203.14371
- **Leaderboard:** https://paperswithcode.com/dataset/medmcqa
- **Point of Contact:** [Aaditya Ura](mailto:aadityaura@gmail.com)

### Dataset Summary

MedMCQA is a large-scale multiple-choice question answering (MCQA) dataset designed to address real-world medical entrance exam questions.

It contains more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects, with an average question length of 12.77 tokens and high topical diversity.

Each sample contains a question, the correct answer(s), and the other options, and requires deeper language understanding, since answering tests a model across 10+ reasoning abilities and a wide range of medical subjects and topics. A detailed explanation of the solution is provided along with the above information.

MedMCQA provides an open-source dataset for the Natural Language Processing community. It is expected that this dataset will facilitate future research toward better QA systems. The dataset contains questions about the following subjects:

- Anesthesia
- Anatomy
- Biochemistry
- Dental
- ENT
- Forensic Medicine (FM)
- Obstetrics and Gynecology (O&G)
- Medicine
- Microbiology
- Ophthalmology
- Orthopedics
- Pathology
- Pediatrics
- Pharmacology
- Physiology
- Psychiatry
- Radiology
- Skin
- Preventive & Social Medicine (PSM)
- Surgery

### Supported Tasks and Leaderboards

multiple-choice-QA, open-domain-QA: The dataset can be used to train models for multiple-choice question answering and open-domain question answering. Questions in these exams are challenging and generally require deeper domain and language understanding, as they test 10+ reasoning abilities across a wide range of medical subjects and topics.

### Languages

The questions and answers are available in English.

## Dataset Structure

### Data Instances

```
{
  "question": "A 40-year-old man presents with 5 days of productive cough and fever. Pseudomonas aeruginosa is isolated from a pulmonary abscess. CBC shows an acute effect characterized by marked leukocytosis (50,000 mL) and the differential count reveals a shift to left in granulocytes. Which of the following terms best describes these hematologic findings?",
  "exp": "Circulating levels of leukocytes and their precursors may occasionally reach very high levels (>50,000 WBC mL). These extreme elevations are sometimes called leukemoid reactions because they are similar to the white cell counts observed in leukemia, from which they must be distinguished. The leukocytosis occurs initially because of the accelerated release of granulocytes from the bone marrow (caused by cytokines, including TNF and IL-1) There is a rise in the number of both mature and immature neutrophils in the blood, referred to as a shift to the left. In contrast to bacterial infections, viral infections (including infectious mononucleosis) are characterized by lymphocytosis Parasitic infestations and certain allergic reactions cause eosinophilia, an increase in the number of circulating eosinophils. Leukopenia is defined as an absolute decrease in the circulating WBC count.",
  "cop": 1,
  "opa": "Leukemoid reaction",
  "opb": "Leukopenia",
  "opc": "Myeloid metaplasia",
  "opd": "Neutrophilia",
  "subject_name": "Pathology",
  "topic_name": "Basic Concepts and Vascular changes of Acute Inflammation",
  "id": "4e1715fe-0bc3-494e-b6eb-2d4617245aef",
  "choice_type": "single"
}
```
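Resolving the correct option text for a raw record like the one above is straightforward. A minimal sketch in plain Python (the record is trimmed for brevity and is illustrative only):

```python
# Map the raw 1-indexed `cop` field of a MedMCQA record to its option
# letter and answer text. Record trimmed to the relevant fields.
record = {
    "question": "Which of the following terms best describes these hematologic findings?",
    "opa": "Leukemoid reaction",
    "opb": "Leukopenia",
    "opc": "Myeloid metaplasia",
    "opd": "Neutrophilia",
    "cop": 1,  # 1-indexed in the raw JSON: 1 -> option A
}

letter = "abcd"[record["cop"] - 1]  # "a"
answer = record["op" + letter]      # "Leukemoid reaction"
print(letter, answer)
```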
### Data Fields

- `id` : a string question identifier for each example
- `question` : question text (a string)
- `opa` : Option A
- `opb` : Option B
- `opc` : Option C
- `opd` : Option D
- `cop` : Correct option; 1-4 in the raw data, loaded as a 0-indexed `ClassLabel` over `["a", "b", "c", "d"]`
- `choice_type` ({"single", "multi"}): Question choice type.
  - "single": Single-choice question, where each choice contains a single option.
  - "multi": Multi-choice question, where each choice contains a combination of multiple suboptions.
- `exp` : Expert's explanation of the answer
- `subject_name` : Medical subject name of the particular question
- `topic_name` : Medical topic name from the particular subject

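The loading script shifts the raw `cop` value down by one so it matches the `ClassLabel`. A small sketch of that mapping in plain Python (no `datasets` dependency; the class names follow this commit's `dataset_infos.json`):

```python
# Sketch of the cop conversion performed by the loading script:
# raw cop is 1-4 (or absent), the loaded feature is a ClassLabel with
# names ["a", "b", "c", "d"]; an absent value becomes -1 ("no label").
CLASS_NAMES = ["a", "b", "c", "d"]

def convert_cop(raw: dict) -> int:
    """Mirror of `data["cop"] = int(data.get("cop", 0)) - 1` in medmcqa.py."""
    return int(raw.get("cop", 0)) - 1

label = convert_cop({"cop": 1})
print(label, CLASS_NAMES[label])  # 0 a
```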
### Data Splits

The goal of MedMCQA is to emulate the rigor of real-world medical exams. To enable that, a predefined split of the dataset is provided. The split is by exam rather than by individual question, which also supports the reusability and generalization ability of models.

The training set of MedMCQA consists of all the collected mock & online test series, the test set consists of all AIIMS PG exam MCQs (1991-present), and the development set consists of NEET PG exam MCQs (2001-present) to approximate real exam evaluation.

Similar questions across the train, test, and dev sets were removed based on similarity. The final split sizes are as follows:

|                     | Train   | Test   | Valid  |
| ------------------- | ------- | ------ | ------ |
| Questions           | 182,822 | 6,150  | 4,183  |
| Vocab               | 94,231  | 11,218 | 10,800 |
| Max question tokens | 220     | 135    | 88     |
| Max answer tokens   | 38      | 21     | 25     |

## Dataset Creation

### Curation Rationale

Before this attempt, little work had been done to construct biomedical MCQA datasets (Vilares and Gómez-Rodríguez, 2019), and existing ones are (1) mostly small, containing at most a few thousand questions, and (2) cover a limited number of medical topics and subjects. MedMCQA addresses these limitations: it is a new large-scale multiple-choice question answering (MCQA) dataset designed to address real-world medical entrance exam questions.

### Source Data

#### Initial Data Collection and Normalization

Historical exam questions from official websites: AIIMS & NEET PG (1991-present).
The raw data was collected from open websites and books.

#### Who are the source language producers?

The dataset was created by Ankit Pal, Logesh Kumar Umapathi and Malaikannan Sankarasubbu.

### Annotations

#### Annotation process

The dataset does not contain any additional annotations.

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

Apache License 2.0

### Citation Information

If you find this dataset useful in your research, please consider citing the dataset paper:

```
@InProceedings{pmlr-v174-pal22a,
  title     = {MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering},
  author    = {Pal, Ankit and Umapathi, Logesh Kumar and Sankarasubbu, Malaikannan},
  booktitle = {Proceedings of the Conference on Health, Inference, and Learning},
  pages     = {248--260},
  year      = {2022},
  editor    = {Flores, Gerardo and Chen, George H and Pollard, Tom and Ho, Joyce C and Naumann, Tristan},
  volume    = {174},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--08 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v174/pal22a/pal22a.pdf},
  url       = {https://proceedings.mlr.press/v174/pal22a.html},
  abstract  = {This paper introduces MedMCQA, a new large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. More than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity. Each sample contains a question, correct answer(s), and other options which requires a deeper language understanding as it tests the 10+ reasoning abilities of a model across a wide range of medical subjects & topics. A detailed explanation of the solution, along with the above information, is provided in this study.}
}
```

### Contributions

Thanks to [@monk1337](https://github.com/monk1337) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
{"default": {"description": "MedMCQA is a large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. \nMedMCQA has more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity.\nThe dataset contains questions about the following topics: Anesthesia, Anatomy, Biochemistry, Dental, ENT, Forensic Medicine (FM)\nObstetrics and Gynecology (O&G), Medicine, Microbiology, Ophthalmology, Orthopedics Pathology, Pediatrics, Pharmacology, Physiology, \nPsychiatry, Radiology Skin, Preventive & Social Medicine (PSM) and Surgery\n", "citation": "CHILL'2022", "homepage": "https://medmcqa.github.io", "license": "Apache License 2.0", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "opa": {"dtype": "string", "id": null, "_type": "Value"}, "opb": {"dtype": "string", "id": null, "_type": "Value"}, "opc": {"dtype": "string", "id": null, "_type": "Value"}, "opd": {"dtype": "string", "id": null, "_type": "Value"}, "cop": {"num_classes": 4, "names": ["a", "b", "c", "d"], "id": null, "_type": "ClassLabel"}, "choice_type": {"dtype": "string", "id": null, "_type": "Value"}, "exp": {"dtype": "string", "id": null, "_type": "Value"}, "subject_name": {"dtype": "string", "id": null, "_type": "Value"}, "topic_name": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "med_mcqa", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 131904057, "num_examples": 182822, "dataset_name": "med_mcqa"}, "test": {"name": "test", "num_bytes": 1447829, "num_examples": 6150, "dataset_name": "med_mcqa"}, "validation": {"name": "validation", "num_bytes": 2221468, "num_examples": 4183, "dataset_name": "med_mcqa"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=15VkJdq5eyWIkfb_aoD3oS8i4tScbHYky": {"num_bytes": 55285460, "checksum": "16c1fbc6f47d548d2af7837b18e893aa45f45c0be9bda0a9adfff3c625bf9262"}}, "download_size": 55285460, "post_processing_size": null, "dataset_size": 135573354, "size_in_bytes": 190858814}}
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f53f5b9041a97e523757de4fd2b5574fd40f5cdae57845ea8584934eceb5345e
size 3791
medmcqa.py ADDED
@@ -0,0 +1,116 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""MedMCQA : A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering"""


import json
import os

import datasets


_DESCRIPTION = """\
MedMCQA is a large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions.
MedMCQA has more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity.
The dataset contains questions about the following topics: Anesthesia, Anatomy, Biochemistry, Dental, ENT, Forensic Medicine (FM)
Obstetrics and Gynecology (O&G), Medicine, Microbiology, Ophthalmology, Orthopedics Pathology, Pediatrics, Pharmacology, Physiology,
Psychiatry, Radiology Skin, Preventive & Social Medicine (PSM) and Surgery
"""


_HOMEPAGE = "https://medmcqa.github.io"

_LICENSE = "Apache License 2.0"
_URL = "https://drive.google.com/uc?export=download&id=15VkJdq5eyWIkfb_aoD3oS8i4tScbHYky"
_CITATION = """\
@InProceedings{pmlr-v174-pal22a,
  title = {MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering},
  author = {Pal, Ankit and Umapathi, Logesh Kumar and Sankarasubbu, Malaikannan},
  booktitle = {Proceedings of the Conference on Health, Inference, and Learning},
  pages = {248--260},
  year = {2022},
  editor = {Flores, Gerardo and Chen, George H and Pollard, Tom and Ho, Joyce C and Naumann, Tristan},
  volume = {174},
  series = {Proceedings of Machine Learning Research},
  month = {07--08 Apr},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v174/pal22a/pal22a.pdf},
  url = {https://proceedings.mlr.press/v174/pal22a.html},
  abstract = {This paper introduces MedMCQA, a new large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions. More than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token length of 12.77 and high topical diversity. Each sample contains a question, correct answer(s), and other options which requires a deeper language understanding as it tests the 10+ reasoning abilities of a model across a wide range of medical subjects & topics. A detailed explanation of the solution, along with the above information, is provided in this study.}
}
"""


class MedMCQA(datasets.GeneratorBasedBuilder):
    """MedMCQA : A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering"""

    VERSION = datasets.Version("1.1.0")

    def _info(self):

        features = datasets.Features(
            {
                "id": datasets.Value("string"),
                "question": datasets.Value("string"),
                "opa": datasets.Value("string"),
                "opb": datasets.Value("string"),
                "opc": datasets.Value("string"),
                "opd": datasets.Value("string"),
                "cop": datasets.features.ClassLabel(names=["a", "b", "c", "d"]),
                "choice_type": datasets.Value("string"),
                "exp": datasets.Value("string"),
                "subject_name": datasets.Value("string"),
                "topic_name": datasets.Value("string"),
            }
        )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        data_dir = dl_manager.download_and_extract(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": os.path.join(data_dir, "train.json"),
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "filepath": os.path.join(data_dir, "test.json"),
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "filepath": os.path.join(data_dir, "dev.json"),
                },
            ),
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for key, row in enumerate(f):
                data = json.loads(row)
                data["cop"] = int(data.get("cop", 0)) - 1
                data["exp"] = data.get("exp", "")
                yield key, data
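The per-row conversion in `_generate_examples` can be exercised in isolation. A self-contained sketch, using hypothetical sample records written to a temporary file (the second record omits `cop` and `exp`, the case the script's `.get()` defaults handle):

```python
import json
import tempfile

# Two hypothetical JSON-lines records in the shape the script expects.
rows = [
    {"id": "q1", "question": "Q1?", "opa": "A", "opb": "B", "opc": "C", "opd": "D",
     "cop": 2, "exp": "B is correct.", "choice_type": "single",
     "subject_name": "Anatomy", "topic_name": "General"},
    {"id": "q2", "question": "Q2?", "opa": "A", "opb": "B", "opc": "C", "opd": "D",
     "choice_type": "single", "subject_name": "Surgery", "topic_name": "General"},
]

def generate_examples(filepath):
    # Same logic as MedMCQA._generate_examples above, without the builder class.
    with open(filepath, encoding="utf-8") as f:
        for key, row in enumerate(f):
            data = json.loads(row)
            data["cop"] = int(data.get("cop", 0)) - 1
            data["exp"] = data.get("exp", "")
            yield key, data

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    tmp.write("\n".join(json.dumps(r) for r in rows))

examples = dict(generate_examples(tmp.name))
print(examples[0]["cop"], examples[1]["cop"])  # 1 -1 (0-indexed; -1 = "no label")
```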