ticoAg committed
Commit 0d0c10b
1 Parent(s): 6ead0ac

Upload 7 files
.gitattributes CHANGED
@@ -26,7 +26,6 @@
  *.safetensors filter=lfs diff=lfs merge=lfs -text
  saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
  *.tflite filter=lfs diff=lfs merge=lfs -text
  *.tgz filter=lfs diff=lfs merge=lfs -text
  *.wasm filter=lfs diff=lfs merge=lfs -text
@@ -53,3 +52,17 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
  *.webp filter=lfs diff=lfs merge=lfs -text
+ pretrain/test.json filter=lfs diff=lfs merge=lfs -text
+ pretrain/train.json filter=lfs diff=lfs merge=lfs -text
+ pretrain/validation.json filter=lfs diff=lfs merge=lfs -text
+ */*.json filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
+ pretrain/test_encyclopedia.json filter=lfs diff=lfs merge=lfs -text
+ pretrain/valid_encyclopedia.json filter=lfs diff=lfs merge=lfs -text
+ pretrain/train_encyclopedia.json filter=lfs diff=lfs merge=lfs -text
+ finetune/test_en_1.json filter=lfs diff=lfs merge=lfs -text
+ finetune/test_zh_0.json filter=lfs diff=lfs merge=lfs -text
+ finetune/train_en_1.json filter=lfs diff=lfs merge=lfs -text
+ finetune/train_zh_0.json filter=lfs diff=lfs merge=lfs -text
+ finetune/valid_en_1.json filter=lfs diff=lfs merge=lfs -text
+ finetune/valid_zh_0.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,157 @@
  ---
  license: apache-2.0
+ language:
+ - zh
+ - en
+ tags:
+ - text-generation
+ pretty_name: medical
+ task_categories:
+ - text-generation
+ size_categories:
+ - 1M<n<10M
  ---
+
+ # Dataset Card for medical
+ A Chinese medical dataset (中文医疗数据集).
+
+ - LLM Supervised Finetuning repository: https://github.com/shibing624/textgen
+ - MedicalGPT repository: https://github.com/shibing624/MedicalGPT
+
+ ## Dataset Description
+
+ medical is a Chinese medical dataset that can be used for training large language models in the medical domain.
+
+ ```
+ tree medical
+ |-- finetune                 # supervised fine-tuning data, usable for SFT and RLHF
+ |   |-- test_en_1.json
+ |   |-- test_zh_0.json
+ |   |-- train_en_1.json
+ |   |-- train_zh_0.json
+ |   |-- valid_en_1.json
+ |   `-- valid_zh_0.json
+ |-- medical.py               # loader script for the hf dataset viewer
+ |-- pretrain                 # continued pretraining data
+ |   |-- medical_book_zh.json
+ |   |-- test_encyclopedia.json
+ |   |-- train_encyclopedia.json
+ |   `-- valid_encyclopedia.json
+ |-- README.md
+ `-- reward                   # reward model data
+     |-- test.json
+     |-- train.json
+     `-- valid.json
+ ```
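The tree above maps onto the three loader configs declared in medical.py. A small sketch of which record fields each config yields (field names as documented in this card; the commented `load_dataset` call is standard Hugging Face usage and needs network access):

```python
# Which record fields each loader config of shibing624/medical yields
# (per this dataset card).
CONFIG_FIELDS = {
    "pretrain": ["text"],
    "finetune": ["instruction", "input", "output"],
    "reward": ["question", "response_chosen", "response_rejected"],
}

# Typical usage (requires network access to the Hugging Face Hub):
#   from datasets import load_dataset
#   ds = load_dataset("shibing624/medical", "finetune")

for name, fields in CONFIG_FIELDS.items():
    print(name, fields)
```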
+
+ ### Original Dataset Summary
+
+ #### pretrain
+ - train_encyclopedia.json: 360k rows, from the medical encyclopedia QA dataset [FreedomIntelligence/huatuo_encyclopedia_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa); questions and answers were concatenated into a single fluent `text` field, used during pretraining to inject medical knowledge.
+ - medical_book_zh.json: 8,475 rows of text from medical textbooks, source: https://github.com/jind11/MedQA, original dataset: [google drive](https://drive.google.com/u/0/uc?export=download&confirm=t&id=1ImYUSLk9JbgHXOemfvyiDiirluZHPeQw); the only processing applied was splitting long paragraphs into chunks of at most 2,048 characters.
+
+ #### finetune
+ - train_zh_0.json: 1.95 million rows in total, merged from three sources: 1) 790k doctor-patient QA pairs covering six departments from the Chinese medical dialogue dataset [Toyhom/Chinese-medical-dialogue-data](https://github.com/Toyhom/Chinese-medical-dialogue-data); 2) 360k rows from the online medical encyclopedia huatuo_encyclopedia_qa; 3) 790k rows from the medical knowledge graph huatuo_knowledge_graph_qa.
+ - train_en_1.json: 110k rows from the English medical consultation dialogues [Kent0n-Li/ChatDoctor](https://github.com/Kent0n-Li/ChatDoctor), merging the HealthCareMagic-100k and GenMedGPT-5k datasets.
+
+ #### reward
+ - train.json: 4,000 rows; the questions are 4,000 randomly sampled queries from [Toyhom/Chinese-medical-dialogue-data](https://github.com/Toyhom/Chinese-medical-dialogue-data), `response_chosen` is the doctor's reply from that dataset, and `response_rejected` is the reply generated by the BenTsao (本草) model [SCIR-HI/Huatuo-Llama-Med-Chinese](https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese).
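The chunking step described for medical_book_zh.json can be sketched as follows (a minimal sketch; the repo's exact boundary handling, e.g. sentence-aware splitting, is not specified):

```python
def split_paragraph(text: str, max_len: int = 2048) -> list:
    """Split a long paragraph into chunks of at most max_len characters."""
    return [text[i:i + max_len] for i in range(0, len(text), max_len)]

# A 5,000-character paragraph becomes chunks of 2048, 2048 and 904 characters.
chunks = split_paragraph("药" * 5000)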
+
+ ### Supported Tasks and Leaderboards
+ Chinese medical dialogue models (中文医疗对话模型)
+
+ The dataset is designed for training and fine-tuning language models on medical tasks.
+
+ ### Languages
+
+ The data are mostly in Chinese; the finetune split also contains English data (the `*_en_1.json` files).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example of "train" looks as follows:
+
+ head pretrain/train_encyclopedia.json
+ ```json
+ {"text": "怀孕后嘴巴很淡怎么办?有孕妇在怀孕之后,发现自己嘴巴比较淡,出现这种情况的原因其实也非常的复杂,首先和妊娠反应有直接的关系,这是一种正常的情况,另外有些孕妇平常不注意口腔卫生,舌苔比较厚或者自己有了一些消化系统方面的疾病,这就要求人们必须要及时的进行处理。女性在怀孕之后,身体就会出现一些明显的变化,首先人们月经会停止,另外也会有恶心、呕吐等一些妊娠反应,不过这些都是正常的。有些孕妇发现自己在怀孕之后,口味发生了很大的变化,嘴巴变得非常的淡。其实这也和激素变化有直接的关系,可能是妊娠反应所致,在怀孕期间,因为受到体内激素水平的变化,所以就会有肠胃系统的改变,人们可能会出现食欲不振,消化不良等症状表现,也有一些孕妇会发现自己嘴巴没有味道,会有口苦的症状,而这也是正常的孕期反应,人们在平常要多喝一些水,多吃一些清淡营养的食物慢慢就会改善。也有可能是舌苔过厚所致,孕妇嘴巴里面没有味道,很有可能是舌苔比较重、舌苔过厚导致的,这样就会影响到味蕾对味道的敏感度,不仅嘴巴里面没有味道,甚至有些孕妇在说话的时候也会有口臭,这就要求人们在每天早上漱口的时候,必须要用牙刷刷舌苔开始,不要清理的特别深,以免人们会有呕吐,慢慢习惯之后再往深一点的清洗,一般2到3天就会得到改善。嘴巴感到比较淡,其实也和脾胃虚寒有直接的关系,消化系统疾病,内分泌疾病,营养不良等,但有可能导致舌头部位因为味蕾的敏感度下降,产生口淡之感,患者会有食欲不振的表现,发现病症及时就诊治疗。"}
+ ```
+
+ head finetune/train_zh_0.json
+ ```json
+ {"instruction": "曲匹地尔片的用法用量", "input": "", "output": "注意:同种药品可由于不同的包装规格有不同的用法或用量。本文只供参考。如果不确定,请参看药品随带的说明书或向医生询问。口服。一次50~100mg(1-2片),3次/日,或遵医嘱。"}
+ ```
+
+ head finetune/train_en_1.json
+ ```json
+ {"instruction": "If you are a doctor, please answer the medical questions based on the patient's description.", "input": "Doctor, I have been experiencing sudden and frequent panic attacks. I don't know what to do.", "output": "Well, based on what you're telling me, it sounds like you may be suffering from panic disorder. The best course of action is to start with psychotherapy and mental health counseling. Additionally, we should conduct an electrocardiogram to make sure that there are no physical issues causing your panic attacks. We will also need to perform a depression screen and a toxicology screen to rule out any other underlying causes. Finally, I would recommend a comprehensive psychological and psychiatric evaluation and therapy to help manage your symptoms."}
+ ```
+
+ head reward/train.json
+ ```json
+ {"question": "孩子感统失调,学习不行怎么办?,孩子说话晚,走路晚,现在小学二年级,学习跟不上,理解力差,运动协调性差,家里很着急,不知怎么办。", "response_chosen": "病情分析:你好!孩子说话晚,走路也晚,很可能是大脑本身发育不好引起的发育迟缓。而五岁时所致的智力检查为临界范围,那就是说孩子的智商是有问题的,也应考虑与大脑发育不好有关。指导意见:人的大脑在头一年发育最快,可塑性最强,在头三年可塑性还是可以的,超过三岁再进行训练,效果就不怎么好了。建议再给孩子做一做智力测试,如果孩子的智商还是在临界范围,那就要考虑让孩子去特殊学校进行康复训练,而不是继续在普通小学就读,否则对孩子来说,就是强人所难了。希望自己的孩子能聪明,这是每个家长都会有的心愿,但如果孩子自身的条件就是不能跟上同龄孩子,那家长也要面对这个事实的,对吗?医生询问:", "response_rejected": "建议家长先带孩子去正规医院做全面检查以确定病因和病情严重程度;同时可以进行物理治疗、康复训练等辅助治疗方法。"}
+ ```
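All of the files sampled above are JSON Lines: one JSON object per line. That means records can be parsed one at a time rather than loading an entire file into memory, as a stdlib-only sketch shows:

```python
import io
import json

# Stand-in for one of the dataset's JSONL files (in-memory for the example).
sample_file = io.StringIO(
    '{"instruction": "q1", "input": "", "output": "a1"}\n'
    '{"instruction": "q2", "input": "", "output": "a2"}\n'
)

# One json.loads per line -- no whole-file json.load.
records = [json.loads(line) for line in sample_file]
```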
+
+ ### Data Fields
+
+ #### Pretraining dataset: pretrain
+ Fields:
+ - text: the text passage
+
+ #### Instruction-tuning dataset: finetune
+ Fields:
+ - instruction: the instruction
+ - input: the question (may be empty)
+ - output: the answer
+
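The three finetune fields are typically joined into a single training string for SFT; a sketch, where the "### Question/Answer" template is illustrative and not mandated by the dataset:

```python
def build_prompt(example: dict) -> str:
    # Instruction plus optional input form the question; output is the target.
    question = example["instruction"]
    if example["input"]:
        question += "\n" + example["input"]
    return "### Question:\n" + question + "\n\n### Answer:\n" + example["output"]

text = build_prompt({"instruction": "曲匹地尔片的用法用量", "input": "", "output": "口服。"})
```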
+ #### Reward-model dataset: reward
+ Fields:
+ - question: the question
+ - response_chosen: the preferred, higher-quality answer
+ - response_rejected: the rejected, lower-quality answer
+
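Each reward record provides one preference pair per question. A sketch of turning it into (chosen, rejected) texts for pairwise reward-model training; the question/answer joining format is an assumption:

```python
def to_pair(example: dict) -> tuple:
    # Build two candidate texts that differ only in the answer, so the
    # reward model scores the answers against the same question.
    question = example["question"]
    return (question + "\n" + example["response_chosen"],
            question + "\n" + example["response_rejected"])

chosen, rejected = to_pair({
    "question": "q", "response_chosen": "good", "response_rejected": "bad",
})
```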
+ ### Data Splits
+
+ ```
+ > wc -l medical/*/*
+      500 medical/finetune/test_en_1.json
+      500 medical/finetune/test_zh_0.json
+   116617 medical/finetune/train_en_1.json
+  1949972 medical/finetune/train_zh_0.json
+      500 medical/finetune/valid_en_1.json
+      500 medical/finetune/valid_zh_0.json
+     8475 medical/pretrain/medical_book_zh.json
+      500 medical/pretrain/test_encyclopedia.json
+   361420 medical/pretrain/train_encyclopedia.json
+      500 medical/pretrain/valid_encyclopedia.json
+      100 medical/reward/test.json
+     3800 medical/reward/train.json
+      100 medical/reward/valid.json
+  2443484 total
+ ```
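The per-file line counts reported by `wc -l` can be cross-checked against the grand total:

```python
# Line counts per file, copied from the `wc -l` output above.
line_counts = {
    "finetune/test_en_1.json": 500,
    "finetune/test_zh_0.json": 500,
    "finetune/train_en_1.json": 116617,
    "finetune/train_zh_0.json": 1949972,
    "finetune/valid_en_1.json": 500,
    "finetune/valid_zh_0.json": 500,
    "pretrain/medical_book_zh.json": 8475,
    "pretrain/test_encyclopedia.json": 500,
    "pretrain/train_encyclopedia.json": 361420,
    "pretrain/valid_encyclopedia.json": 500,
    "reward/test.json": 100,
    "reward/train.json": 3800,
    "reward/valid.json": 100,
}
total = sum(line_counts.values())
print(total)  # 2443484, matching the reported total
```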
+
+ ### Licensing Information
+
+ The dataset is available under the Apache 2.0 license.
+
+
+ ### Citation Information
+
+ - https://github.com/Toyhom/Chinese-medical-dialogue-data
+ - https://github.com/FreedomIntelligence/Huatuo-26M/blob/main/README_zh-CN.md
+ - https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa
+ - https://huggingface.co/datasets/FreedomIntelligence/huatuo_knowledge_graph_qa
+ - https://github.com/Kent0n-Li/ChatDoctor
+
+ A few high-quality reward-model datasets for reference:
+ - https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise
+ - https://huggingface.co/datasets/sunzeyeah/chinese_chatgpt_corpus
+ - https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12
+ - https://huggingface.co/datasets/Dahoas/rm-static
+
+ ### Contributions
+
+ [shibing624](https://github.com/shibing624) curated and uploaded this dataset.
medical.py ADDED
@@ -0,0 +1,173 @@
+ # -*- coding: utf-8 -*-
+ """
+ @author:XuMing(xuming624@qq.com)
+ @description:
+
+ Natural Language Generation Chinese Corpus.(medical)
+ """
+
+ import os
+ import json
+ import datasets
+
+ _DESCRIPTION = """纯文本数据,中文医疗数据集,包含预训练数据的百科数据,指令微调数据和奖励模型数据。"""
+ _HOMEPAGE = "https://github.com/shibing624/MedicalGPT"
+ _CITATION = ""
+ _LICENSE = ""
+ _BASE_URL = "https://huggingface.co/datasets/shibing624/medical/resolve/main/"
+ # file url: https://huggingface.co/datasets/shibing624/medical/resolve/main/finetune/test_zh_0.json
+
+
+ class NewDataset(datasets.GeneratorBasedBuilder):
+     """Medical Chinese Version"""
+
+     VERSION = datasets.Version("1.0.1")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="pretrain", version=VERSION, description="pretrain data"),
+         datasets.BuilderConfig(name="finetune", version=VERSION, description="finetune data"),
+         datasets.BuilderConfig(name="reward", version=VERSION, description="reward data"),
+     ]
+
+     def _info(self):
+         if self.config.name == "pretrain":
+             features = datasets.Features(
+                 {
+                     "text": datasets.Value("string")
+                 }
+             )
+         elif self.config.name == 'finetune':
+             features = datasets.Features(
+                 {
+                     "instruction": datasets.Value("string"),
+                     "input": datasets.Value("string"),
+                     "output": datasets.Value("string")
+                 }
+             )
+         elif self.config.name == 'reward':
+             features = datasets.Features(
+                 {
+                     "question": datasets.Value("string"),
+                     "response_chosen": datasets.Value("string"),
+                     "response_rejected": datasets.Value("string")
+                 }
+             )
+
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,  # defined above because they differ between the three configurations
+             # If there's a common (input, target) tuple from the features, uncomment supervised_keys line below and
+             # specify them. They'll be used if as_supervised=True in builder.as_dataset.
+             # supervised_keys=("sentence", "label"),
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_url = _BASE_URL + self.config.name
+
+         if self.config.name == 'pretrain':
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "filepath": dl_manager.download_and_extract(f"{data_url}/train_encyclopedia.json"),
+                         "split": "train"
+                     },
+                 ),
+                 datasets.SplitGenerator(
+                     name=datasets.Split.VALIDATION,
+                     gen_kwargs={
+                         "filepath": dl_manager.download_and_extract(f"{data_url}/valid_encyclopedia.json"),
+                         "split": "dev"
+                     },
+                 ),
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TEST,
+                     gen_kwargs={
+                         "filepath": dl_manager.download_and_extract(f"{data_url}/test_encyclopedia.json"),
+                         "split": "test"
+                     },
+                 ),
+             ]
+         elif self.config.name == 'finetune':
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "filepath": dl_manager.download_and_extract([f"{data_url}/train_zh_0.json", f"{data_url}/train_en_1.json"]),
+                         "split": "train"
+                     },
+                 ),
+                 datasets.SplitGenerator(
+                     name=datasets.Split.VALIDATION,
+                     gen_kwargs={
+                         "filepath": dl_manager.download_and_extract([f"{data_url}/valid_zh_0.json", f"{data_url}/valid_en_1.json"]),
+                         "split": "dev"
+                     },
+                 ),
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TEST,
+                     gen_kwargs={
+                         "filepath": dl_manager.download_and_extract([f"{data_url}/test_zh_0.json", f"{data_url}/test_en_1.json"]),
+                         "split": "test"
+                     },
+                 ),
+             ]
+         elif self.config.name == 'reward':
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "filepath": dl_manager.download_and_extract(f"{data_url}/train.json"),
+                         "split": "train"
+                     },
+                 ),
+                 datasets.SplitGenerator(
+                     name=datasets.Split.VALIDATION,
+                     gen_kwargs={
+                         "filepath": dl_manager.download_and_extract(f"{data_url}/valid.json"),
+                         "split": "dev"
+                     },
+                 ),
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TEST,
+                     gen_kwargs={
+                         "filepath": dl_manager.download_and_extract(f"{data_url}/test.json"),
+                         "split": "test"
+                     },
+                 ),
+             ]
+
+     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     def _generate_examples(self, filepath, split):
+         id = 0
+         if isinstance(filepath, str):
+             filepath = [filepath]
+         for file in filepath:
+             with open(file, encoding="utf-8") as f:
+                 for key, row in enumerate(f):
+                     data = json.loads(row)
+                     if self.config.name == "pretrain":
+                         yield id, {
+                             "text": data["text"]
+                         }
+                     elif self.config.name == 'finetune':
+                         yield id, {
+                             "instruction": data["instruction"],
+                             "input": data["input"],
+                             "output": data["output"]
+                         }
+                     elif self.config.name == 'reward':
+                         yield id, {
+                             "question": data["question"],
+                             "response_chosen": data["response_chosen"],
+                             "response_rejected": data["response_rejected"]
+                         }
+                     id += 1
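`_generate_examples` streams each JSONL file and yields `(id, record)` pairs with a single running id shared across files. The same logic, isolated with in-memory files so it runs without the repo:

```python
import io
import json

def generate_examples(files):
    # Mirrors medical.py's _generate_examples: one JSON object per line,
    # one id counter across all input files.
    idx = 0
    for f in files:
        for line in f:
            yield idx, json.loads(line)
            idx += 1

out = list(generate_examples([
    io.StringIO('{"text": "a"}\n'),
    io.StringIO('{"text": "b"}\n'),
]))
```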
pretrain/medical_book_zh.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:850c474cc721b1c2f042149823c91731aefd1d90007ffe931452aba400c511a3
+ size 40157289
pretrain/test_encyclopedia.json ADDED
The diff for this file is too large to render. See raw diff
 
pretrain/train_encyclopedia.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8bc26de65bd4abde415e50627799a152474902aa39b46da5eb3f3296713eedc2
+ size 591029894
pretrain/valid_encyclopedia.json ADDED
The diff for this file is too large to render. See raw diff