---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: MedQA Textbook (English) Corpus
size_categories:
- 10K<n<100K
source_datasets:
- med_qa
tags:
- medical
- clinical medicine
- biology
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for MedQA English Textbooks
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
![image/png](https://huggingface.co/datasets/cogbuji/medqa_corpus_en/resolve/main/shelves.png?download=true)
## Dataset Description
### Dataset Summary
[MedQA](https://github.com/jind11/MedQA) includes
> "prepared text materials from a total of 18 English medical textbooks that have been widely used by medical students and USMLE takers" [Jin, Di, et al. 2020].
This dataset is derived from that medical textbook content (the English-language portion), providing subsets that coincide with medical
subspecialties for use in pre-training medical LLMs on gold-standard domain text.
### Languages
English
## Dataset Structure
### Data Instances
Records have the following structure
```json
{"text": "The manifestations of acute intestinal obstruction depend on the nature of the underlying [..]",
"source": "textbooks/en/InternalMed_Harrison.txt"}
```
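Since each record is just a `text`/`source` pair, grouping or filtering chunks by their source textbook is straightforward. A minimal sketch (the sample records below are illustrative, not drawn from the corpus):

```python
from collections import Counter


def records_per_source(records):
    """Count how many chunks come from each source textbook file."""
    return Counter(r["source"] for r in records)


def from_textbook(records, name):
    """Keep only chunks whose source path ends with the given filename."""
    return [r for r in records if r["source"].endswith(name)]
```

The same predicate works unchanged with `Dataset.filter` once the corpus is loaded via `datasets`.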
## Dataset Creation
### Curation Rationale
The MedQA dataset includes a raw text corpus that is excluded from most of its [derivations](https://huggingface.co/datasets/bigbio/med_qa)
and their [dataset loading scripts](https://huggingface.co/datasets/bigbio/med_qa/blob/main/med_qa.py). This raw text is valuable for pre-training medical LLMs.
### Source Data
#### Initial Data Collection and Normalization
LangChain's `RecursiveCharacterTextSplitter` is used for chunking, and the most commonly appearing non-ASCII characters
are replaced with readable equivalents. Chunks comprising less than 90% ASCII characters were excluded. The textbooks
were then broken into separate subsets, listed below along with the textbook source(s) they comprise:
- Core Clinical Medicine (_*core_clinical*_)
- Anatomy_Gray.txt (1,736 records), First_Aid_Step1.txt (489 records), First_Aid_Step2.txt (800 records), Immunology_Janeway.txt (2,996 records), InternalMed_Harrison.txt (20,583 records), Neurology_Adams.txt (7,732 records), Obstentrics_Williams.txt (5,392 records), Pathoma_Husain.txt (280 records), Pediatrics_Nelson.txt (2,575 records), and Surgery_Schwartz.txt (7,803 records)
- Basic Biology (_*basic_biology*_)
- Biochemistry_Lippincott.txt (1,193 records), Cell_Biology_Alberts.txt (4,275 records), Histology_Ross.txt (2,685 records), Pathology_Robbins.txt (3,156 records), and Physiology_Levy.txt (2,627 records)
- Pharmacology (_*pharmacology*_)
- Pharmacology_Katzung.txt (4,505 records)
- Psychiatry (_*psychiatry*_)
- Psichiatry_DSM-5.txt (2,414 records)
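The normalization and ASCII-ratio filter described above can be sketched as follows. This is a hedged illustration: the replacement map shown here is an assumption (the card does not list the exact substitutions), and chunking itself is done with LangChain's `RecursiveCharacterTextSplitter`, not reproduced here.

```python
# Illustrative map of common non-ASCII characters to readable ASCII
# equivalents; the actual set used during curation may differ.
REPLACEMENTS = {
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2013": "-", "\u2014": "-",   # en and em dashes
}


def normalize(chunk: str) -> str:
    """Replace common non-ASCII characters with ASCII equivalents."""
    for src, dst in REPLACEMENTS.items():
        chunk = chunk.replace(src, dst)
    return chunk


def ascii_ratio(chunk: str) -> float:
    """Fraction of characters in the chunk that are ASCII."""
    return sum(c.isascii() for c in chunk) / max(len(chunk), 1)


def keep(chunk: str, threshold: float = 0.9) -> bool:
    """Exclude chunks comprising less than 90% ASCII characters."""
    return ascii_ratio(normalize(chunk)) >= threshold
```

Normalization runs before the ratio check, so text whose only non-ASCII content is typographic punctuation is retained.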
For example, you can load the basic biology subset of the corpus via:
```python
In [1]: import datasets
In [2]: ds = datasets.load_dataset('cogbuji/medqa_corpus_en', 'basic_biology')
Generating train split: 50386 examples [00:00, 92862.56 examples/s]
In [3]: ds
Out[3]:
DatasetDict({
train: Dataset({
features: ['text', 'source'],
num_rows: 50386
})
})
```