Modalities: Tabular, Text
Formats: parquet
Languages: Korean
ArXiv: 2403.01469
Libraries: Datasets, pandas
sean0042 committed
Commit f351d7e
Parent: 4f00fad

Update README.md

Files changed (1):
1. README.md (+19, -5)
README.md CHANGED
@@ -218,16 +218,30 @@ dataset_info:
 
 # KorMedMCQA: Multi-Choice Question Answering Benchmark for Korean Healthcare Professional Licensing Examinations
 
- We introduce KorMedMCQA, the first Korean multiple-choice question answering (MCQA) benchmark derived from Korean healthcare professional licensing examinations, covering the years 2012 to 2023.
- This dataset consists of a selection of questions from the licensing examinations for doctors, nurses, and pharmacists, featuring a diverse array of subjects.
- We conduct baseline experiments on various large language models, including proprietary/open-source, multilingual/Korean additionally-pretrained, and clinical-context pretrained models, highlighting the potential for further enhancements.
- We make our data publicly available on HuggingFace and provide an evaluation script via LM-Harness, inviting further exploration and advancement in Korean healthcare environments.
+ We present KorMedMCQA, the first Korean Medical Multiple-Choice Question
+ Answering benchmark, derived from professional healthcare licensing
+ examinations conducted in Korea between 2012 and 2024. The dataset contains
+ 7,469 questions from examinations for doctors, nurses, pharmacists, and dentists,
+ covering a wide range of medical disciplines. We evaluate the performance of 59
+ large language models, spanning proprietary and open-source models,
+ multilingual and Korean-specialized models, and those fine-tuned for clinical
+ applications. Our results show that applying Chain-of-Thought (CoT) reasoning
+ can enhance model performance by up to 4.5% compared to direct answering
+ approaches. We also investigate whether MedQA, one of the most widely used
+ medical benchmarks derived from the U.S. Medical Licensing Examination, can
+ serve as a reliable proxy for evaluating model performance in other regions, in
+ this case Korea. Our correlation analysis between model scores on KorMedMCQA
+ and MedQA reveals that these two benchmarks align no better than benchmarks
+ from entirely different domains (e.g., MedQA and MMLU-Pro). This finding
+ underscores the substantial linguistic and clinical differences between Korean
+ and U.S. medical contexts, reinforcing the need for region-specific medical QA
+ benchmarks.
 
 Paper: https://arxiv.org/abs/2403.01469
 
 ## Notice
 
- In collaboration with [Hugging Face](https://huggingface.co/datasets/ChuGyouk/KorMedMCQA_edited), we have made the following updates to the KorMedMCQA dataset:
+ We have made the following updates to the KorMedMCQA dataset:
 
 1. **Dentist Exam**: Incorporated exam questions from 2021 to 2024.
 2. **Updated Test Sets**: Added the 2024 exam questions for the doctor, nurse, and pharmacist test sets.
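For context on using the updated dataset, here is a minimal sketch of loading one subject split with the 🤗 Datasets library listed in the card's sidebar. The repo id `sean0042/KorMedMCQA` and the config name `doctor` are assumptions inferred from this page (the committer's namespace and the exam names above); check the dataset card for the exact identifiers.

```python
# Minimal sketch: load one KorMedMCQA subject split with the Datasets library.
# Assumptions (not confirmed by this page): repo id "sean0042/KorMedMCQA"
# and config name "doctor"; check the dataset card for the exact identifiers.
from datasets import load_dataset

ds = load_dataset("sean0042/KorMedMCQA", "doctor", split="test")
print(ds[0])  # one MCQA record

# The sidebar also lists pandas; a Dataset converts in one call.
df = ds.to_pandas()
print(df.head())
```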
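The removed abstract line mentions an evaluation script via LM-Harness (EleutherAI's lm-evaluation-harness). Below is a hedged sketch of how such an evaluation is commonly driven from Python; the task name `kormedmcqa_doctor` and the model id are assumptions, not confirmed by this page.

```python
# Hedged sketch: scoring a model on KorMedMCQA with EleutherAI's
# lm-evaluation-harness (pip install lm-eval). The task name
# "kormedmcqa_doctor" and the model id below are assumptions; run
# `lm_eval --tasks list` to find the actual task names.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Llama-3.1-8B-Instruct",
    tasks=["kormedmcqa_doctor"],
    batch_size=8,
)
print(results["results"])
```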
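The updated abstract describes a correlation analysis between per-model scores on KorMedMCQA and MedQA. A toy sketch of what that computation looks like is below; the score arrays are made-up placeholders, not the paper's results.

```python
# Toy sketch of the abstract's correlation analysis: given per-model
# accuracies on two benchmarks, how strongly do scores and rankings agree?
# All numbers below are made-up placeholders, not the paper's results.
from scipy.stats import pearsonr, spearmanr

kormedmcqa_acc = [0.42, 0.48, 0.55, 0.61, 0.70]  # hypothetical model scores
medqa_acc      = [0.50, 0.52, 0.64, 0.58, 0.75]  # hypothetical model scores

r, _ = pearsonr(kormedmcqa_acc, medqa_acc)       # linear agreement of scores
rho, _ = spearmanr(kormedmcqa_acc, medqa_acc)    # agreement of model rankings
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
```

Spearman's rho is the more informative statistic here, since the abstract's claim is about whether the two benchmarks rank models similarly rather than about their raw accuracy values.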