cogbuji committed
Commit f75a317
1 Parent(s): bd24481

Upload folder using huggingface_hub

Files changed (1)
README.md +21 -5
README.md CHANGED
@@ -88,11 +88,27 @@ Langchain's RecursiveCharacterTextSplitter is used for chunking and the most com
  are replaced with readable equivalents. The textbooks are then broken into separate subsets, indicated below along with
  the textbooks they comprise:
 
- - _*Core Clinical Medicine*_
+ - Core Clinical Medicine (_*core_clinical*_)
    - Anatomy_Gray.txt, First_Aid_Step1.txt, First_Aid_Step2.txt, Immunology_Janeway.txt, InternalMed_Harrison.txt, Neurology_Adams.txt, Obstentrics_Williams.txt, Pathoma_Husain.txt, Pediatrics_Nelson.txt, and Surgery_Schwartz.txt
- - _*Basic Biology*_
+ - Basic Biology (_*basic_biology*_)
    - Biochemistry_Lippincott.txt, Cell_Biology_Alberts.txt, Histology_Ross.txt, Pathology_Robbins.txt, and Physiology_Levy.txt
- - _*Pharmacology*_
+ - Pharmacology (_*pharmacology*_)
    - Pharmacology_Katzung.txt
- - _*Psychiatry*_
- - Psichiatry_DSM-5.txt
+ - Psychiatry (_*psychiatry*_)
+   - Psichiatry_DSM-5.txt
+
+ So, you can load the basic biology subset of the corpus via:
+
+ ```python
+ In [1]: import datasets
+ In [2]: ds = datasets.load_dataset('cogbuji/medqa_corpus_en', 'basic_biology')
+ In [3]: ds
+ Generating train split: 50386 examples [00:00, 92862.56 examples/s]
+ Out[3]:
+ DatasetDict({
+     train: Dataset({
+         features: ['text', 'source'],
+         num_rows: 50386
+     })
+ })
+ ```
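
Each record in a loaded subset carries a `source` column alongside `text` (as shown in the output above), so a subset can be narrowed to a single textbook after loading. The snippet below is a minimal sketch of that, assuming the `source` value contains the originating textbook's filename (e.g. a string containing `Histology_Ross`); check the actual column values before relying on this match.

```python
import datasets

# Load one of the configurations: core_clinical, basic_biology,
# pharmacology, or psychiatry.
ds = datasets.load_dataset('cogbuji/medqa_corpus_en', 'basic_biology')

# Assumption: the 'source' column identifies the textbook each chunk came
# from, so a substring match can isolate chunks from a single book.
histology = ds['train'].filter(lambda record: 'Histology_Ross' in record['source'])

print(len(histology), 'chunks from Histology_Ross')
print(histology[0]['text'][:200])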
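
For context on the chunking mentioned at the start of this hunk, the sketch below shows how splitting a textbook with Langchain's RecursiveCharacterTextSplitter typically looks; the chunk size, overlap, and the local `Histology_Ross.txt` path are illustrative assumptions, not the settings actually used to build the corpus.

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Illustrative settings only; the chunk size/overlap used for this corpus
# are not given in the commit.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

with open('Histology_Ross.txt') as f:
    raw_text = f.read()

# split_text recursively splits on paragraph, sentence, and word boundaries
# until each chunk fits within chunk_size characters.
chunks = splitter.split_text(raw_text)
print(f'{len(chunks)} chunks')
```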