---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: MedQA Textbook (English) Corpus
size_categories:
- 10K<n<100K
---

# Dataset Card for MedQA Textbook (English) Corpus

## Dataset Description

### Dataset Summary

The MedQA dataset incorporates "prepared text materials from a total of 18 English medical textbooks that have been widely used by medical students and USMLE takers" [Jin, Di, et al. 2020]. This dataset is derived from those medical textbooks (the English ones), providing subsets that coincide with medical subspecialties for use in pre-training medical LLMs with gold-standard domain text.

### Languages

English

## Dataset Structure

### Data Instances

Records have the following structure:

```json
{"text": "The manifestations of acute intestinal obstruction depend on the nature of the underlying [..]",
 "source": "textbooks/en/InternalMed_Harrison.txt"}
```

## Dataset Creation

### Curation Rationale

The MedQA dataset includes a raw text corpus that is excluded from most of its derivations, and this raw text is valuable for the pre-training of medical LLMs.

### Source Data

#### Initial Data Collection and Normalization

Langchain's RecursiveCharacterTextSplitter is used for chunking, and the most commonly appearing non-ASCII characters are replaced with readable equivalents (a minimal sketch of this step appears at the end of this card). The textbooks are then broken into separate subsets, indicated below along with the textbooks they comprise:

- Core Clinical Medicine (_*core_clinical*_)
  - Anatomy_Gray.txt, First_Aid_Step1.txt, First_Aid_Step2.txt, Immunology_Janeway.txt, InternalMed_Harrison.txt, Neurology_Adams.txt, Obstentrics_Williams.txt, Pathoma_Husain.txt, Pediatrics_Nelson.txt, and Surgery_Schwartz.txt
- Basic Biology (_*basic_biology*_)
  - Biochemistry_Lippincott.txt, Cell_Biology_Alberts.txt, Histology_Ross.txt, Pathology_Robbins.txt, and Physiology_Levy.txt
- Pharmacology (_*pharmacology*_)
  - Pharmacology_Katzung.txt
- Psychiatry (_*psychiatry*_)
  - Psichiatry_DSM-5.txt

So, you can load the basic biology subset of the corpus via:

```python
In [1]: import datasets

In [2]: ds = datasets.load_dataset('cogbuji/medqa_corpus_en', 'basic_biology')
Generating train split: 50386 examples [00:00, 92862.56 examples/s]

In [3]: ds
Out[3]:
DatasetDict({
    train: Dataset({
        features: ['text', 'source'],
        num_rows: 50386
    })
})
```
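Once loaded, the `source` field can be used to restrict the corpus to a single textbook. The snippet below is a usage sketch built on the standard `datasets.Dataset.filter` API; the filename is the one shown in the JSON example above:

```python
# Keep only the chunks drawn from Harrison's Principles of Internal Medicine.
harrison = ds['train'].filter(
    lambda rec: rec['source'] == 'textbooks/en/InternalMed_Harrison.txt'
)
```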
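For reference, the chunking and normalization described under "Initial Data Collection and Normalization" can be approximated as follows. This is a minimal sketch: the chunk size, overlap, and character-replacement map are illustrative assumptions, not the exact parameters used to build this dataset.

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Illustrative map of commonly appearing non-ASCII characters to readable
# ASCII equivalents (an assumption; not the exact map used for this corpus).
CHAR_MAP = {
    "\u201c": '"',   # left double quotation mark
    "\u201d": '"',   # right double quotation mark
    "\u2018": "'",   # left single quotation mark
    "\u2019": "'",   # right single quotation mark
    "\u2013": "-",   # en dash
    "\u2014": "-",   # em dash
}

def normalize(text: str) -> str:
    """Replace commonly appearing non-ASCII characters with readable equivalents."""
    for non_ascii, ascii_equivalent in CHAR_MAP.items():
        text = text.replace(non_ascii, ascii_equivalent)
    return text

# Chunk size and overlap are assumed values for illustration.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

def chunk_textbook(path: str) -> list[dict]:
    """Produce {"text", "source"} records like those in this dataset."""
    with open(path, encoding="utf-8") as f:
        raw = f.read()
    return [{"text": chunk, "source": path}
            for chunk in splitter.split_text(normalize(raw))]
```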