albertvillanova HF staff committed on
Commit
a806618
1 Parent(s): 0da2d0f

Add configs with processed data to medical_dialog dataset (#4127)


* Add configs with processed data in medical_dialog dataset

* Update metadata JSON

* Update dataset card

* Rename dummy data dirs

* Fix script

Commit from https://github.com/huggingface/datasets/commit/eab78694e17f10c200bceb60c6f21a2f70eadf68

README.md CHANGED
@@ -14,7 +14,7 @@ licenses:
14
  multilinguality:
15
  - monolingual
16
  size_categories:
17
- - n<1K
18
  source_datasets:
19
  - original
20
  task_categories:
@@ -25,7 +25,7 @@ paperswithcode_id: null
25
  pretty_name: MedDialog
26
  ---
27
 
28
- # Dataset Card for [Dataset Name]
29
 
30
  ## Table of Contents
31
  - [Dataset Description](#dataset-description)
@@ -53,11 +53,11 @@ pretty_name: MedDialog
53
 
54
  ## Dataset Description
55
 
56
- - **Homepage:** https://github.com/UCSD-AI4H/Medical-Dialogue-System
57
- - **Repository:** Hosted on [this link](https://drive.google.com/drive/folders/1r09_i8nJ9c1nliXVGXwSqRYqklcHd9e2) for Chinese and [this link](https://drive.google.com/drive/folders/1g29ssimdZ6JzTST6Y8g6h-ogUNReBtJD) for English.
58
- - **Paper:** Details about the dataset can be found in [this arxiv papaer](https://arxiv.org/abs/2004.03329)
59
- - **Leaderboard:**
60
- - **Point of Contact:**
61
 
62
  ### Dataset Summary
63
 
@@ -79,7 +79,16 @@ Monolingual. The datasets are in English (EN) and Chinese (ZH)
79
  ## Dataset Structure
80
 
81
  ### Data Instances
82
- #### For English:
83
 
84
  Each consultation consists of the following:
85
  - ID
@@ -89,7 +98,7 @@ Each consultation consists of the below:
89
 
90
  The dataset is built from [icliniq.com](https://www.icliniq.com/), [healthcaremagic.com](https://www.healthcaremagic.com/), [healthtap.com](https://www.healthtap.com/) and all copyrights of the data belong to these websites.
91
 
92
- #### For Chinese:
93
 
94
  Each consultation consists of the following:
95
  - ID
@@ -113,6 +122,26 @@ One example for chinese is
113
  }
114
  ```
115
116
 
117
  ### Data Fields
118
 
@@ -128,13 +157,26 @@ These are arranged as below in the prepared dataset. Each item will be represent
128
  - "dialogue_url": string - url of the conversation
129
  - "dialogue_turns": datasets.Sequence - sequence of dialogue turns between the patient and the doctor. Each turn consists of a "speaker" ClassLabel(names=["病人", "医生"]) and an "utterance" (string). (ClassLabel(names=["Patient", "Doctor"]) for English.)
130
131
 
132
  ### Data Splits
133
 
134
- There are no data splits on the original data. The "train" split for each language contains:
135
  - en: 229674 examples
136
  - zh: 1921127 examples
137
138
  ## Dataset Creation
139
 
140
  ### Curation Rationale
@@ -187,15 +229,17 @@ Medical dialogue systems are promising in assisting in telemedicine to increase
187
 
188
  ### Licensing Information
189
 
190
- [More Information Needed]
191
 
192
  ### Citation Information
 
193
  @article{chen2020meddiag,
194
  title={MedDialog: a large-scale medical dialogue dataset},
195
  author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},
196
  journal={arXiv preprint arXiv:2004.03329},
197
  year={2020}
198
  }
 
199
 
200
  ### Contributions
201
 
14
  multilinguality:
15
  - monolingual
16
  size_categories:
17
+ - 1M<n<10M
18
  source_datasets:
19
  - original
20
  task_categories:
25
  pretty_name: MedDialog
26
  ---
27
 
28
+ # Dataset Card for MedDialog
29
 
30
  ## Table of Contents
31
  - [Dataset Description](#dataset-description)
53
 
54
  ## Dataset Description
55
 
56
+ [//]: # (- **Homepage:** )
57
+ - **Repository:** https://github.com/UCSD-AI4H/Medical-Dialogue-System
58
+ - **Paper:** [MedDialog: Two Large-scale Medical Dialogue Datasets](https://arxiv.org/abs/2004.03329)
59
+ [//]: # (- **Leaderboard:** )
60
+ [//]: # (- **Point of Contact:** )
61
 
62
  ### Dataset Summary
63
 
79
  ## Dataset Structure
80
 
81
  ### Data Instances
82
+
83
+ There are 4 configurations:
84
+ - Raw data:
85
+ - en
86
+ - zh
87
+ - Processed data:
88
+ - processed.en
89
+ - processed.zh
90
+
91
+ #### en
92
 
93
  Each consultation consists of the following:
94
  - ID
98
 
99
  The dataset is built from [icliniq.com](https://www.icliniq.com/), [healthcaremagic.com](https://www.healthcaremagic.com/), [healthtap.com](https://www.healthtap.com/) and all copyrights of the data belong to these websites.
100
 
101
+ #### zh
102
 
103
  Each consultation consists of the following:
104
  - ID
122
  }
123
  ```
124
 
125
+ #### processed.en
126
+ ```
127
+ {
128
+ 'description': 'throat a bit sore and want to get a good imune booster, especially in light of the virus. please advise. have not been in contact with nyone with the virus.',
129
+ 'utterances': [
130
+ 'patient: throat a bit sore and want to get a good imune booster, especially in light of the virus. please advise. have not been in contact with nyone with the virus.',
131
+ "doctor: during this pandemic. throat pain can be from a strep throat infection (antibiotics needed), a cold or influenza or other virus, or from some other cause such as allergies or irritants. usually, a person sees the doctor (call first) if the sore throat is bothersome, recurrent, or doesn't go away quickly. covid-19 infections tend to have cough, whereas strep throat usually lacks cough but has more throat pain. (3/21/20)"
132
+ ]
133
+ }
134
+ ```
135
+
136
+ #### processed.zh
137
+ ```
138
+ {
139
+ 'utterances': [
140
+ '病人:强制性脊柱炎,晚上睡觉翻身时腰骶骨区域疼痛,其他身体任何部位均不疼痛。',
141
+ '医生:应该没有问题,但最好把图像上传看看。'
142
+ ]
143
+ }
144
+ ```
145
 
146
  ### Data Fields
147
 
157
  - "dialogue_url": string - url of the conversation
158
  - "dialogue_turns": datasets.Sequence - sequence of dialogue turns between the patient and the doctor. Each turn consists of a "speaker" ClassLabel(names=["病人", "医生"]) and an "utterance" (string). (ClassLabel(names=["Patient", "Doctor"]) for English.)
159
 
160
+ #### processed.en
161
+ - `description` (str): Description of the dialog.
162
+ - `utterances` (list of str): Dialog utterances between patient and doctor.
163
+
164
+ #### processed.zh
165
+ - `utterances` (list of str): Dialog utterances between patient and doctor.
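
A minimal plain-Python sketch of the two processed record shapes listed above (the field values here are hypothetical placeholders, not real dataset entries):

```python
# Hypothetical records mirroring the processed.en and processed.zh schemas.
processed_en_record = {
    "description": "throat a bit sore, please advise",  # str
    "utterances": [
        "patient: throat a bit sore, please advise",
        "doctor: throat pain can have several causes",
    ],  # list of str, alternating patient/doctor
}

processed_zh_record = {
    "utterances": [
        "病人:晚上睡觉翻身时腰骶骨区域疼痛。",
        "医生:最好把图像上传看看。",
    ],  # list of str; no "description" field in the Chinese config
}
```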
166
 
167
  ### Data Splits
168
 
169
+ There are no data splits on the original raw data. The "train" split for each language contains:
170
  - en: 229674 examples
171
  - zh: 1921127 examples
172
 
173
+ For processed configurations, data is split into train, validation and test, with the following number of examples:
174
+
175
+ | | train | validation | test |
176
+ |--------------|--------:|-----------:|-------:|
177
+ | processed.en | 482 | 60 | 61 |
178
+ | processed.zh | 2725989 | 340748 | 340754 |
179
+
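
Taking the counts directly from the table above, a quick sanity check of the per-configuration totals:

```python
# Split sizes exactly as reported in the table above.
splits = {
    "processed.en": {"train": 482, "validation": 60, "test": 61},
    "processed.zh": {"train": 2_725_989, "validation": 340_748, "test": 340_754},
}

# Total number of examples per processed configuration.
totals = {config: sum(sizes.values()) for config, sizes in splits.items()}
```

This gives 603 examples in total for processed.en and 3,407,491 for processed.zh.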
180
  ## Dataset Creation
181
 
182
  ### Curation Rationale
229
 
230
  ### Licensing Information
231
 
232
+ Unknown.
233
 
234
  ### Citation Information
235
+ ```
236
  @article{chen2020meddiag,
237
  title={MedDialog: a large-scale medical dialogue dataset},
238
  author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},
239
  journal={arXiv preprint arXiv:2004.03329},
240
  year={2020}
241
  }
242
+ ```
243
 
244
  ### Contributions
245
 
dataset_infos.json CHANGED
@@ -1 +1 @@
1
- {"en": {"description": "The MedDialog dataset (English) contains conversations (in English) between doctors and patients.It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. The raw dialogues are from healthcaremagic.com and icliniq.com.\nAll copyrights of the data belong to healthcaremagic.com and icliniq.com.\n", "citation": "@article{chen2020meddiag,\n title={MedDialog: a large-scale medical dialogue dataset},\n author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},\n journal={arXiv preprint arXiv:2004.03329},\n year={2020}\n}\n", "homepage": "https://github.com/UCSD-AI4H/Medical-Dialogue-System", "license": "", "features": {"file_name": {"dtype": "string", "id": null, "_type": "Value"}, "dialogue_id": {"dtype": "int32", "id": null, "_type": "Value"}, "dialogue_url": {"dtype": "string", "id": null, "_type": "Value"}, "dialogue_turns": {"feature": {"speaker": {"num_classes": 2, "names": ["Patient", "Doctor"], "names_file": null, "id": null, "_type": "ClassLabel"}, "utterance": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "medical_dialog", "config_name": "en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 290274759, "num_examples": 229674, "dataset_name": "medical_dialog"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 290274759, "size_in_bytes": 290274759}, "zh": {"description": "The MedDialog dataset (English) contains conversations (in English) between doctors and patients.It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. 
The raw dialogues are from healthcaremagic.com and icliniq.com.\nAll copyrights of the data belong to healthcaremagic.com and icliniq.com.\n", "citation": "@article{chen2020meddiag,\n title={MedDialog: a large-scale medical dialogue dataset},\n author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},\n journal={arXiv preprint arXiv:2004.03329},\n year={2020}\n}\n", "homepage": "https://github.com/UCSD-AI4H/Medical-Dialogue-System", "license": "", "features": {"file_name": {"dtype": "string", "id": null, "_type": "Value"}, "dialogue_id": {"dtype": "int32", "id": null, "_type": "Value"}, "dialogue_url": {"dtype": "string", "id": null, "_type": "Value"}, "dialogue_turns": {"feature": {"speaker": {"num_classes": 2, "names": ["\u75c5\u4eba", "\u533b\u751f"], "names_file": null, "id": null, "_type": "ClassLabel"}, "utterance": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "medical_dialog", "config_name": "zh", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1092063621, "num_examples": 1921127, "dataset_name": "medical_dialog"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 1092063621, "size_in_bytes": 1092063621}}
1
+ {"en": {"description": "The MedDialog dataset (English) contains conversations (in English) between doctors and patients.It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. The raw dialogues are from healthcaremagic.com and icliniq.com.\nAll copyrights of the data belong to healthcaremagic.com and icliniq.com.\n", "citation": "@article{chen2020meddiag,\n title={MedDialog: a large-scale medical dialogue dataset},\n author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},\n journal={arXiv preprint arXiv:2004.03329},\n year={2020}\n}\n", "homepage": "https://github.com/UCSD-AI4H/Medical-Dialogue-System", "license": "", "features": {"file_name": {"dtype": "string", "id": null, "_type": "Value"}, "dialogue_id": {"dtype": "int32", "id": null, "_type": "Value"}, "dialogue_url": {"dtype": "string", "id": null, "_type": "Value"}, "dialogue_turns": {"feature": {"speaker": {"num_classes": 2, "names": ["Patient", "Doctor"], "id": null, "_type": "ClassLabel"}, "utterance": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "medical_dialog", "config_name": "en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 290274759, "num_examples": 229674, "dataset_name": "medical_dialog"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 290274759, "size_in_bytes": 290274759}, "zh": {"description": "The MedDialog dataset (English) contains conversations (in English) between doctors and patients.It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. 
The raw dialogues are from healthcaremagic.com and icliniq.com.\nAll copyrights of the data belong to healthcaremagic.com and icliniq.com.\n", "citation": "@article{chen2020meddiag,\n title={MedDialog: a large-scale medical dialogue dataset},\n author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},\n journal={arXiv preprint arXiv:2004.03329},\n year={2020}\n}\n", "homepage": "https://github.com/UCSD-AI4H/Medical-Dialogue-System", "license": "", "features": {"file_name": {"dtype": "string", "id": null, "_type": "Value"}, "dialogue_id": {"dtype": "int32", "id": null, "_type": "Value"}, "dialogue_url": {"dtype": "string", "id": null, "_type": "Value"}, "dialogue_turns": {"feature": {"speaker": {"num_classes": 2, "names": ["\u75c5\u4eba", "\u533b\u751f"], "id": null, "_type": "ClassLabel"}, "utterance": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "medical_dialog", "config_name": "zh", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1092063621, "num_examples": 1921127, "dataset_name": "medical_dialog"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 1092063621, "size_in_bytes": 1092063621}, "processed.en": {"description": "The MedDialog dataset (English) contains conversations (in English) between doctors and patients.It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. 
The raw dialogues are from healthcaremagic.com and icliniq.com.\nAll copyrights of the data belong to healthcaremagic.com and icliniq.com.\n", "citation": "@article{chen2020meddiag,\n title={MedDialog: a large-scale medical dialogue dataset},\n author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},\n journal={arXiv preprint arXiv:2004.03329},\n year={2020}\n}\n", "homepage": "https://github.com/UCSD-AI4H/Medical-Dialogue-System", "license": "Copyright", "features": {"description": {"dtype": "string", "id": null, "_type": "Value"}, "utterances": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "medical_dialog", "config_name": "processed.en", "version": {"version_str": "2.0.0", "description": null, "major": 2, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 370745, "num_examples": 482, "dataset_name": "medical_dialog"}, "validation": {"name": "validation", "num_bytes": 52145, "num_examples": 60, "dataset_name": "medical_dialog"}, "test": {"name": "test", "num_bytes": 46514, "num_examples": 61, "dataset_name": "medical_dialog"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1ria4E6IdTIPsikL4Glm3uy1tFKJKw0W8": {"num_bytes": 414490, "checksum": "568a9c6c670502eec3319c78e9d12c0aebb883c0d1e45095b5dd5f99d8b6b874"}, "https://drive.google.com/uc?export=download&id=1KAZneuwdfEVQQM6euCX4pMDP-9DQpiB5": {"num_bytes": 57706, "checksum": "a5cd29f17fcfedf01af41410e12e47474ba1176f376136e18fc0446b7e2f52b2"}, "https://drive.google.com/uc?export=download&id=10izqL71kcgnteYsf87Vh6j_mZ8sZM2Rc": {"num_bytes": 52018, "checksum": "316e5b3eb03ec7210b0d84414df0e84a42b396205d72a2b5fdba533fd19a5ebd"}}, "download_size": 524214, 
"post_processing_size": null, "dataset_size": 469404, "size_in_bytes": 993618}, "processed.zh": {"description": "The MedDialog dataset (English) contains conversations (in English) between doctors and patients.It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. The raw dialogues are from healthcaremagic.com and icliniq.com.\nAll copyrights of the data belong to healthcaremagic.com and icliniq.com.\n", "citation": "@article{chen2020meddiag,\n title={MedDialog: a large-scale medical dialogue dataset},\n author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao},\n journal={arXiv preprint arXiv:2004.03329},\n year={2020}\n}\n", "homepage": "https://github.com/UCSD-AI4H/Medical-Dialogue-System", "license": "Copyright", "features": {"utterances": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "medical_dialog", "config_name": "processed.zh", "version": {"version_str": "2.0.0", "description": null, "major": 2, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1571262099, "num_examples": 2725989, "dataset_name": "medical_dialog"}, "validation": {"name": "validation", "num_bytes": 197117565, "num_examples": 340748, "dataset_name": "medical_dialog"}, "test": {"name": "test", "num_bytes": 196526738, "num_examples": 340754, "dataset_name": "medical_dialog"}}, "download_checksums": {"https://drive.google.com/uc?export=download&id=1AaDJoHaiHAwEZwtskRH8oL1UP4FRgmgx": {"num_bytes": 1665206303, "checksum": "fd34385487755d95783cf834921bff14ceb74d9a244962577140c9e291dce4e9"}, "https://drive.google.com/uc?export=download&id=1TvfZCmQqP1kURIfEinOcj5VOPelTuGwI": {"num_bytes": 208871784, "checksum": 
"ed6b04ff4d62a4fa5b5b85327d692302b3369c0d28e9da887c12ec78ea778ce4"}, "https://drive.google.com/uc?export=download&id=1pmmG95Yl6mMXRXDDSRb9-bYTxOE7ank5": {"num_bytes": 208276068, "checksum": "b1118b614f866089a1daf18107a72dd5ba77c50a1e9ca145491ddcef89d797b7"}}, "download_size": 2082354155, "post_processing_size": null, "dataset_size": 1964906402, "size_in_bytes": 4047260557}}
dummy/en/{1.0.0 → 2.0.0}/dummy_data.zip RENAMED
File without changes
dummy/zh/{1.0.0 → 2.0.0}/dummy_data.zip RENAMED
File without changes
medical_dialog.py CHANGED
@@ -15,6 +15,7 @@
15
 
16
 
17
  import copy
 
18
  import os
19
  import re
20
 
@@ -41,20 +42,48 @@ All copyrights of the data belong to healthcaremagic.com and icliniq.com.
41
 
42
  _HOMEPAGE = "https://github.com/UCSD-AI4H/Medical-Dialogue-System"
43
 
44
- _LICENSE = ""
45
 
46
 
47
  class MedicalDialog(datasets.GeneratorBasedBuilder):
48
- VERSION = datasets.Version("1.0.0")
49
 
50
  BUILDER_CONFIGS = [
51
- datasets.BuilderConfig(name="en", description="The dataset of medical dialogs in English.", version=VERSION),
52
- datasets.BuilderConfig(name="zh", description="The dataset of medical dialogs in Chinese.", version=VERSION),
53
  ]
54
 
55
  @property
56
  def manual_download_instructions(self):
57
- return """\
58
  \n For English:\nYou need to go to https://drive.google.com/drive/folders/1g29ssimdZ6JzTST6Y8g6h-ogUNReBtJD?usp=sharing,\
59
  and manually download the dataset from Google Drive. Once it is completed,
60
  a file named Medical-Dialogue-Dataset-English-<timestamp-info>.zip will appear in your Downloads folder(
@@ -73,6 +102,7 @@ class MedicalDialog(datasets.GeneratorBasedBuilder):
73
  - A caution while downloading from drive. It is better to download single files, since creating a zip might not include files <500 MB. This has been observed multiple times.
74
  - After downloading the files and adding them to the appropriate folder, the path of the folder can be given as input to the data_dir path.
75
  """
 
76
 
77
  def _info(self):
78
  if self.config.name == "zh":
@@ -89,8 +119,7 @@ class MedicalDialog(datasets.GeneratorBasedBuilder):
89
  ),
90
  }
91
  )
92
-
93
- if self.config.name == "en":
94
  features = datasets.Features(
95
  {
96
  "file_name": datasets.Value("string"),
@@ -104,35 +133,48 @@ class MedicalDialog(datasets.GeneratorBasedBuilder):
104
  ),
105
  }
106
  )
107
-
108
  return datasets.DatasetInfo(
109
- # This is the description that will appear on the datasets page.
110
  description=_DESCRIPTION,
111
  features=features,
112
- supervised_keys=None,
113
- # Homepage of the dataset for documentation
114
  homepage=_HOMEPAGE,
115
- # License for the dataset if available
116
  license=_LICENSE,
117
- # Citation for the dataset
118
  citation=_CITATION,
119
  )
120
 
121
  def _split_generators(self, dl_manager):
122
  """Returns SplitGenerators."""
123
- path_to_manual_file = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
124
- if not os.path.exists(path_to_manual_file):
125
- raise FileNotFoundError(
126
- f"{path_to_manual_file} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('medical_dialog', data_dir=...)`. Manual download instructions: {self.manual_download_instructions})"
127
- )
128
-
129
- filepaths = [
130
- os.path.join(path_to_manual_file, txt_file_name)
131
- for txt_file_name in sorted(os.listdir(path_to_manual_file))
132
- if txt_file_name.endswith("txt")
133
- ]
134
-
135
- return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": filepaths})]
136
 
137
  def _generate_examples(self, filepaths):
138
  """Yields examples. Iterates over each file and creates the corresponding features.
@@ -141,132 +183,152 @@ class MedicalDialog(datasets.GeneratorBasedBuilder):
141
  - The code makes some assumptions about the structure of the raw .txt files.
142
  - There are some checks to separate different id's. This should not cause further issues when more txt files are added.
143
  """
144
- data_lang = self.config.name
145
- id_ = -1
146
- for filepath in filepaths:
147
- with open(filepath, encoding="utf-8") as f_in:
148
- # Parameters to just "sectionize" the raw data
149
- last_part = ""
150
- last_dialog = {}
151
- last_list = []
152
- last_user = ""
153
- check_list = []
154
-
155
- # These flags are present to have a single function address both chinese and english data
156
- # English data is a little hahazard (i.e. the sentences spans multiple different lines),
157
- # Chinese is compact with one line for doctor and patient.
158
- conv_flag = False
159
- des_flag = False
160
-
161
- while True:
162
- line = f_in.readline()
163
- if not line:
164
- break
165
-
166
- # Extracting the dialog id
167
- if line[:2] == "id": # Hardcode alert!
168
- # Handling ID references that may come in the description
169
- # These were observed in the Chinese dataset and were not
170
- # followed by numbers
171
- try:
172
- dialogue_id = int(re.findall(r"\d+", line)[0])
173
- except IndexError:
174
- continue
175
-
176
- # Extracting the url
177
- if line[:4] == "http": # Hardcode alert!
178
- dialogue_url = line.rstrip()
179
-
180
- # Extracting the patient info from description.
181
- if line[:11] == "Description": # Hardcode alert!
182
- last_part = "description"
183
- last_dialog = {}
184
- last_list = []
185
- last_user = ""
186
- last_conv = {"speaker": "", "utterance": ""}
187
- while True:
188
- line = f_in.readline()
189
- if (not line) or (line in ["\n", "\n\r"]):
190
- break
191
  else:
192
- if data_lang == "zh": # Condition in chinese
193
- if line[:5] == "病情描述:": # Hardcode alert!
194
- last_user = "病人"
195
- sen = f_in.readline().rstrip()
196
- des_flag = True
197
-
198
- if data_lang == "en":
199
- last_user = "Patient"
200
- sen = line.rstrip()
201
- des_flag = True
202
-
203
- if des_flag:
204
- if sen == "":
205
- continue
206
- if sen in check_list:
207
- last_conv["speaker"] = ""
208
- last_conv["utterance"] = ""
209
- else:
210
- last_conv["speaker"] = last_user
211
- last_conv["utterance"] = sen
212
- check_list.append(sen)
213
- des_flag = False
214
- break
215
- # Extracting the conversation info from dialogue.
216
- elif line[:8] == "Dialogue": # Hardcode alert!
217
- if last_part == "description" and len(last_conv["utterance"]) > 0:
218
- last_part = "dialogue"
219
- if data_lang == "zh":
220
- last_user = "病人"
221
-
222
- if data_lang == "en":
223
- last_user = "Patient"
224
-
 
225
  while True:
226
  line = f_in.readline()
227
  if (not line) or (line in ["\n", "\n\r"]):
228
- conv_flag = False
229
- last_user = ""
230
- last_list.append(copy.deepcopy(last_conv))
231
- # To ensure close of conversation, only even number of sentences
232
- # are extracted
233
- last_turn = len(last_list)
234
- if int(last_turn / 2) > 0:
235
- temp = int(last_turn / 2)
236
- id_ += 1
237
- last_dialog["file_name"] = filepath
238
- last_dialog["dialogue_id"] = dialogue_id
239
- last_dialog["dialogue_url"] = dialogue_url
240
- last_dialog["dialogue_turns"] = last_list[: temp * 2]
241
- yield id_, last_dialog
242
  break
243
 
244
  if data_lang == "zh":
245
- if line[:3] == "病人:" or line[:3] == "医生:": # Hardcode alert!
246
- user = line[:2] # Hardcode alert!
247
- line = f_in.readline()
248
- conv_flag = True
249
 
250
- # The elif block is to ensure that multi-line sentences are captured.
251
- # This has been observed only in english.
252
  if data_lang == "en":
253
- if line.strip() == "Patient:" or line.strip() == "Doctor:": # Hardcode alert!
254
- user = line.replace(":", "").rstrip()
255
- line = f_in.readline()
256
- conv_flag = True
257
- elif line[:2] != "id": # Hardcode alert!
258
- conv_flag = True
259
-
260
- # Continues till the next ID is parsed
261
- if conv_flag:
262
- sen = line.rstrip()
263
- if sen == "":
264
- continue
265
-
266
- if user == last_user:
267
- last_conv["utterance"] = last_conv["utterance"] + sen
268
- else:
269
- last_user = user
270
  last_list.append(copy.deepcopy(last_conv))
271
- last_conv["utterance"] = sen
272
- last_conv["speaker"] = user
15
 
16
 
17
  import copy
18
+ import json
19
  import os
20
  import re
21
 
42
 
43
  _HOMEPAGE = "https://github.com/UCSD-AI4H/Medical-Dialogue-System"
44
 
45
+ _LICENSE = "Unknown"
46
+
47
+ # URLS of processed data
48
+ _URLS = {
49
+ "en": {
50
+ "train": "https://drive.google.com/uc?export=download&id=1ria4E6IdTIPsikL4Glm3uy1tFKJKw0W8",
51
+ "validation": "https://drive.google.com/uc?export=download&id=1KAZneuwdfEVQQM6euCX4pMDP-9DQpiB5",
52
+ "test": "https://drive.google.com/uc?export=download&id=10izqL71kcgnteYsf87Vh6j_mZ8sZM2Rc",
53
+ },
54
+ "zh": {
55
+ "train": "https://drive.google.com/uc?export=download&id=1AaDJoHaiHAwEZwtskRH8oL1UP4FRgmgx",
56
+ "validation": "https://drive.google.com/uc?export=download&id=1TvfZCmQqP1kURIfEinOcj5VOPelTuGwI",
57
+ "test": "https://drive.google.com/uc?export=download&id=1pmmG95Yl6mMXRXDDSRb9-bYTxOE7ank5",
58
+ },
59
+ }
60
 
61
 
62
  class MedicalDialog(datasets.GeneratorBasedBuilder):
63
+ VERSION = datasets.Version("2.0.0")
64
 
65
  BUILDER_CONFIGS = [
66
+ datasets.BuilderConfig(
67
+ name="en", description="The raw dataset of medical dialogs in English.", version=VERSION
68
+ ),
69
+ datasets.BuilderConfig(
70
+ name="zh", description="The raw dataset of medical dialogs in Chinese.", version=VERSION
71
+ ),
72
+ datasets.BuilderConfig(
73
+ name="processed.en", description="The processed dataset of medical dialogs in English.", version=VERSION
74
+ ),
75
+ datasets.BuilderConfig(
76
+ name="processed.zh", description="The processed dataset of medical dialogs in Chinese.", version=VERSION
77
+ ),
78
  ]
79
 
80
  @property
81
  def manual_download_instructions(self):
82
+ *processed, _ = self.config.name.split(".")
83
+ return (
84
+ None
85
+ if processed
86
+ else """\
87
  \n For English:\nYou need to go to https://drive.google.com/drive/folders/1g29ssimdZ6JzTST6Y8g6h-ogUNReBtJD?usp=sharing,\
88
  and manually download the dataset from Google Drive. Once it is completed,
89
  a file named Medical-Dialogue-Dataset-English-<timestamp-info>.zip will appear in your Downloads folder(
102
  - A caution while downloading from drive. It is better to download single files, since creating a zip might not include files <500 MB. This has been observed multiple times.
103
  - After downloading the files and adding them to the appropriate folder, the path of the folder can be given as input to the data_dir path.
104
  """
105
+ )
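
The `*processed, _ = self.config.name.split(".")` pattern used here is how the script tells processed configs from raw ones: the starred target collects zero or more leading name components, so it is empty (falsy) for `en`/`zh` and non-empty for `processed.en`/`processed.zh`. A small standalone sketch of the idiom (the helper name is illustrative, not part of the script):

```python
# Star-unpacking: the starred name collects everything before the last "." component.
def is_processed(config_name: str) -> bool:
    *processed, lang = config_name.split(".")
    # processed == [] for "en"/"zh"; ["processed"] for "processed.en"/"processed.zh"
    return bool(processed)
```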
106
 
107
  def _info(self):
108
  if self.config.name == "zh":
119
  ),
120
  }
121
  )
122
+ elif self.config.name == "en":
 
123
  features = datasets.Features(
124
  {
125
  "file_name": datasets.Value("string"),
133
  ),
134
  }
135
  )
136
+ elif self.config.name == "processed.en":
137
+ features = datasets.Features(
138
+ {
139
+ "description": datasets.Value("string"),
140
+ "utterances": datasets.Sequence(datasets.Value("string")),
141
+ }
142
+ )
143
+ elif self.config.name == "processed.zh":
144
+ features = datasets.Features(
145
+ {
146
+ "utterances": datasets.Sequence(datasets.Value("string")),
147
+ }
148
+ )
149
  return datasets.DatasetInfo(
 
150
  description=_DESCRIPTION,
151
  features=features,
 
 
152
  homepage=_HOMEPAGE,
 
153
  license=_LICENSE,
 
154
  citation=_CITATION,
155
  )
156
 
157
  def _split_generators(self, dl_manager):
158
  """Returns SplitGenerators."""
159
+ *processed, lang = self.config.name.split(".")
160
+ if processed:
161
+ data_dir = dl_manager.download(_URLS[lang])
162
+ splits = [datasets.Split.TRAIN, datasets.Split.VALIDATION, datasets.Split.TEST]
163
+ return [datasets.SplitGenerator(name=split, gen_kwargs={"filepaths": data_dir[split]}) for split in splits]
164
+ else:
165
+ path_to_manual_file = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
166
+ if not os.path.exists(path_to_manual_file):
167
+ raise FileNotFoundError(
168
+ f"{path_to_manual_file} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('medical_dialog', data_dir=...)`. Manual download instructions: {self.manual_download_instructions})"
169
+ )
170
+
171
+ filepaths = [
172
+ os.path.join(path_to_manual_file, txt_file_name)
173
+ for txt_file_name in sorted(os.listdir(path_to_manual_file))
174
+ if txt_file_name.endswith("txt")
175
+ ]
176
+
177
+ return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": filepaths})]
178
 
179
  def _generate_examples(self, filepaths):
180
  """Yields examples. Iterates over each file and creates the corresponding features.
183
  - The code makes some assumptions about the structure of the raw .txt files.
184
  - There are some checks to separate different id's. This should not cause further issues when more txt files are added.
185
  """
186
+        *processed, data_lang = self.config.name.split(".")
+        if processed:
+            with open(filepaths, encoding="utf-8") as f:
+                if self.config.name == "processed.en":
+                    data = json.load(f)
+                    for idx, item in enumerate(data):
+                        yield idx, item
+                elif self.config.name == "processed.zh":
+                    idx = 0
+                    array = ""
+                    for line in f:
+                        if line[0] not in ["[", "]"]:
+                            if line != " ],\n":
+                                array += line
                             else:
+                                array += "]"
+                                item = json.loads(array)
+                                yield idx, {"utterances": item}
+                                idx += 1
+                                array = ""
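The `processed.zh` branch above streams one JSON array at a time instead of loading the whole file: lines are buffered until the indented `],` that closes an inner array appears, at which point the comma is replaced with `]` so the buffer decodes as valid JSON. A self-contained sketch of the same accumulation idea, run over an in-memory sample (the sample's formatting is an assumption chosen to match the `" ],\n"` condition in the loop, not the released file verbatim):

```python
import io
import json

def iter_inner_arrays(f):
    # Mirrors the accumulation loop above: skip the outer "[" / "]" lines,
    # buffer each inner array's lines, and decode the buffer as soon as
    # its closing " ],\n" line appears.
    buf = ""
    for line in f:
        if line[0] in ("[", "]"):
            continue
        if line != " ],\n":
            buf += line
        else:
            buf += "]"  # turn the trailing "]," into "]" so the buffer is valid JSON
            yield json.loads(buf)
            buf = ""

sample = '[\n [\n  "hello",\n  "how can I help"\n ],\n [\n  "hi"\n ],\n]\n'
for utterances in iter_inner_arrays(io.StringIO(sample)):
    print(utterances)
# ['hello', 'how can I help']
# ['hi']
```

The benefit over `json.load` is that only one conversation's worth of text is held in memory at a time, which matters for the large Chinese dump.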
+        else:
+            id_ = -1
+            for filepath in filepaths:
+                with open(filepath, encoding="utf-8") as f_in:
+                    # Parameters to just "sectionize" the raw data
+                    last_part = ""
+                    last_dialog = {}
+                    last_list = []
+                    last_user = ""
+                    check_list = []
+
+                    # These flags let a single function handle both the Chinese and English data:
+                    # the English data is a little haphazard (sentences span multiple lines),
+                    # while the Chinese data is compact, with one line per doctor or patient turn.
+                    conv_flag = False
+                    des_flag = False
+
+                    while True:
+                        line = f_in.readline()
+                        if not line:
+                            break
+
+                        # Extracting the dialog id
+                        if line[:2] == "id":  # Hardcode alert!
+                            # Handling ID references that may come in the description.
+                            # These were observed in the Chinese dataset and were not
+                            # followed by numbers.
+                            try:
+                                dialogue_id = int(re.findall(r"\d+", line)[0])
+                            except IndexError:
+                                continue
+
+                        # Extracting the url
+                        if line[:4] == "http":  # Hardcode alert!
+                            dialogue_url = line.rstrip()
+
+                        # Extracting the patient info from the description.
+                        if line[:11] == "Description":  # Hardcode alert!
+                            last_part = "description"
+                            last_dialog = {}
+                            last_list = []
+                            last_user = ""
+                            last_conv = {"speaker": "", "utterance": ""}
                             while True:
                                 line = f_in.readline()
                                 if (not line) or (line in ["\n", "\n\r"]):
                                     break
+                                else:
+                                    if data_lang == "zh":  # Condition for Chinese
+                                        if line[:5] == "病情描述:":  # Hardcode alert!
+                                            last_user = "病人"
+                                            sen = f_in.readline().rstrip()
+                                            des_flag = True
+
+                                    if data_lang == "en":
+                                        last_user = "Patient"
+                                        sen = line.rstrip()
+                                        des_flag = True
+
+                                    if des_flag:
+                                        if sen == "":
+                                            continue
+                                        if sen in check_list:
+                                            last_conv["speaker"] = ""
+                                            last_conv["utterance"] = ""
+                                        else:
+                                            last_conv["speaker"] = last_user
+                                            last_conv["utterance"] = sen
+                                            check_list.append(sen)
+                                        des_flag = False
+                                        break
+                        # Extracting the conversation info from the dialogue.
+                        elif line[:8] == "Dialogue":  # Hardcode alert!
+                            if last_part == "description" and len(last_conv["utterance"]) > 0:
+                                last_part = "dialogue"
                                 if data_lang == "zh":
+                                    last_user = "病人"

                                 if data_lang == "en":
+                                    last_user = "Patient"
+
+                                while True:
+                                    line = f_in.readline()
+                                    if (not line) or (line in ["\n", "\n\r"]):
+                                        conv_flag = False
+                                        last_user = ""
                                         last_list.append(copy.deepcopy(last_conv))
+                                        # To ensure the conversation closes cleanly, only an even
+                                        # number of sentences is extracted.
+                                        last_turn = len(last_list)
+                                        if int(last_turn / 2) > 0:
+                                            temp = int(last_turn / 2)
+                                            id_ += 1
+                                            last_dialog["file_name"] = filepath
+                                            last_dialog["dialogue_id"] = dialogue_id
+                                            last_dialog["dialogue_url"] = dialogue_url
+                                            last_dialog["dialogue_turns"] = last_list[: temp * 2]
+                                            yield id_, last_dialog
+                                        break
+
+                                    if data_lang == "zh":
+                                        if line[:3] == "病人:" or line[:3] == "医生:":  # Hardcode alert!
+                                            user = line[:2]  # Hardcode alert!
+                                            line = f_in.readline()
+                                            conv_flag = True
+
+                                    # The elif block ensures that multi-line sentences are captured;
+                                    # this has been observed only in the English data.
+                                    if data_lang == "en":
+                                        if line.strip() == "Patient:" or line.strip() == "Doctor:":  # Hardcode alert!
+                                            user = line.replace(":", "").rstrip()
+                                            line = f_in.readline()
+                                            conv_flag = True
+                                        elif line[:2] != "id":  # Hardcode alert!
+                                            conv_flag = True
+
+                                    # Continues till the next ID is parsed
+                                    if conv_flag:
+                                        sen = line.rstrip()
+                                        if sen == "":
+                                            continue
+
+                                        if user == last_user:
+                                            last_conv["utterance"] = last_conv["utterance"] + sen
+                                        else:
+                                            last_user = user
+                                            last_list.append(copy.deepcopy(last_conv))
+                                            last_conv["utterance"] = sen
+                                            last_conv["speaker"] = user
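When a dialogue block ends, the parser keeps only an even number of turns (`last_list[: temp * 2]`) so every yielded conversation ends with a complete patient/doctor exchange, and blocks with fewer than two turns are dropped entirely. The truncation can be illustrated in isolation (the helper name and turn data below are hypothetical, for illustration only):

```python
def truncate_to_even_turns(turns):
    # Keep only complete patient/doctor pairs, mirroring
    # `last_list[: temp * 2]` with `temp = int(last_turn / 2)` above.
    pairs = len(turns) // 2
    return turns[: pairs * 2] if pairs > 0 else []

turns = [
    {"speaker": "Patient", "utterance": "I have a headache."},
    {"speaker": "Doctor", "utterance": "How long has it lasted?"},
    {"speaker": "Patient", "utterance": "Two days."},
]
print(len(truncate_to_even_turns(turns)))  # 2: the dangling third turn is dropped
```

A dialogue with a single turn yields nothing, which matches the `if int(last_turn / 2) > 0` guard before `yield` in the loop above.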