---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: MedQA Textbook (English) Corpus
size_categories:
- 10K<n<100K
---

# Dataset Card for MedQA Textbook (English) Corpus

## Dataset Description

The MedQA dataset provides "prepared text materials from a total of 18 English medical textbooks that have been widely used by medical students and USMLE takers" [Jin, Di, et al. 2020]. This dataset is derived from that medical textbook content (those in English), providing subsets that correspond to medical subspecialties for use in pre-training medical LLMs with gold-standard domain text.

### Languages

English

## Dataset Structure

### Data Instances

Records have the following structure:

```json
{
  "text": "The manifestations of acute intestinal obstruction depend on the nature of the underlying [..]",
  "source": "textbooks/en/InternalMed_Harrison.txt"
}
```

## Dataset Creation

### Curation Rationale

The MedQA dataset includes a raw text corpus that is excluded from most of its [derivations](https://huggingface.co/datasets/bigbio/med_qa) and their [dataset loading scripts](https://huggingface.co/datasets/bigbio/med_qa/blob/main/med_qa.py). This raw text is valuable for pre-training medical LLMs.

### Source Data

#### Initial Data Collection and Normalization

LangChain's RecursiveCharacterTextSplitter was used for chunking, and the most commonly appearing non-ASCII characters were replaced with readable equivalents. Chunks comprising less than 90% ASCII characters were excluded (a rough sketch of this preprocessing appears at the end of this card).

The textbooks were then broken into separate subsets, indicated below along with the textbook source(s) they comprise:

- Core Clinical Medicine (_*core_clinical*_)
  - Anatomy_Gray.txt (1,736 records), First_Aid_Step1.txt (489 records), First_Aid_Step2.txt (800 records), Immunology_Janeway.txt (2,996 records), InternalMed_Harrison.txt (20,583 records), Neurology_Adams.txt (7,732 records), Obstentrics_Williams.txt (5,392 records), Pathoma_Husain.txt (280 records), Pediatrics_Nelson.txt (2,575 records), and Surgery_Schwartz.txt (7,803 records)
- Basic Biology (_*basic_biology*_)
  - Biochemistry_Lippincott.txt (1,193 records), Cell_Biology_Alberts.txt (4,275 records), Histology_Ross.txt (2,685 records), Pathology_Robbins.txt (3,156 records), and Physiology_Levy.txt (2,627 records)
- Pharmacology (_*pharmacology*_)
  - Pharmacology_Katzung.txt (4,505 records)
- Psychiatry (_*psychiatry*_)
  - Psichiatry_DSM-5.txt (2,414 records)

So, for example, you can load the core clinical medicine subset of the corpus via:

```python
In [1]: import datasets

In [2]: ds = datasets.load_dataset('cogbuji/medqa_corpus_en', 'core_clinical')
Generating train split: 50386 examples [00:00, 92862.56 examples/s]

In [3]: ds
Out[3]:
DatasetDict({
    train: Dataset({
        features: ['text', 'source'],
        num_rows: 50386
    })
})
```
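
Each record carries its originating textbook in the `source` field, so a loaded split can be narrowed to a single textbook with the standard `datasets` filtering API. The snippet below is a minimal sketch; the textbook filename used is just one of the files listed above.

```python
import datasets

# Load the core clinical medicine subset (as in the example above)
ds = datasets.load_dataset('cogbuji/medqa_corpus_en', 'core_clinical')

# Keep only chunks drawn from InternalMed_Harrison.txt,
# using the 'source' field carried by every record
harrison = ds['train'].filter(
    lambda record: record['source'].endswith('InternalMed_Harrison.txt')
)

print(harrison.num_rows)         # expected to match the record count listed above (20,583)
print(harrison[0]['text'][:80])  # first 80 characters of the first matching chunk
```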
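
For reference, a rough sketch of the preprocessing described under "Initial Data Collection and Normalization" is shown below. It is an illustrative approximation rather than the actual curation script: the chunk size, chunk overlap, and character-replacement table used to build the corpus are not documented here, so those values are assumptions.

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Assumed splitter parameters; the actual values used for this corpus are not documented here
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

# Assumed replacement table for commonly appearing non-ASCII characters
REPLACEMENTS = {
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2013": "-", "\u2014": "-",   # en/em dashes
}


def ascii_ratio(text: str) -> float:
    """Fraction of characters in `text` that are plain ASCII."""
    return sum(ord(ch) < 128 for ch in text) / max(len(text), 1)


def normalize_textbook(raw_text: str) -> list[str]:
    # Replace common non-ASCII characters with readable equivalents
    for src, dst in REPLACEMENTS.items():
        raw_text = raw_text.replace(src, dst)
    # Chunk the cleaned text, then drop chunks that are less than 90% ASCII
    return [chunk for chunk in splitter.split_text(raw_text) if ascii_ratio(chunk) >= 0.9]
```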