Henrychur committed
Commit 14df380
1 Parent(s): 1894fa5

Update README.md

Files changed (1):
  1. README.md +93 -1
README.md CHANGED

---
license: cc-by-nc-sa-4.0
language:
- en
- zh
- ja
- fr
- ru
- es
tags:
- medical
size_categories:
- 10B<n<100B
---
# MMedC
[💻Github Repo](https://github.com/MAGIC-AI4Med/MMedLM) [🖨️arXiv Paper](https://arxiv.org/abs/2402.13963)

The official pre-training dataset for "Towards Building Multilingual Language Model for Medicine".

## Introduction
This repo contains MMedC, a multilingual medical corpus with 25.5 billion tokens. The table below breaks the corpus down by language and data source; all counts are in billions of tokens.

| Language | Family | Filtering Content | Textbooks | Websites | Small-scale Dataset | Total |
|-----------|---------------|-------------------|-----------|----------|---------------------|-------|
| English | Indo-European | 6.56 | 4.00 | 0.00 | 0.00 | 10.56 |
| Spanish | Indo-European | 3.98 | 0.31 | 0.05 | 0.02 | 4.35 |
| French | Indo-European | 1.90 | 0.02 | 0.00 | 0.17 | 2.10 |
| Russian | Indo-European | 1.29 | 0.40 | 0.00 | 0.00 | 1.69 |
| Chinese | Sino-Tibetan | 3.34 | 1.21 | 0.00 | 0.19 | 4.74 |
| Japanese | Japonic | 1.93 | 0.00 | 0.10 | 0.01 | 2.05 |

- The English textbooks are not included in this repo due to copyright issues. For this 4.00B-token portion of the English corpus, please refer to [PMC-LLaMA](https://github.com/chaoyi-wu/PMC-LLaMA).

You can download the MMedC.zip file to access all the data. The data are saved as txt files, and the zip archive contains four folders corresponding to the four types of data sources: filtering content, medical websites, medical textbooks, and small-scale datasets. Please refer to our paper for details.
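
If you prefer to fetch the archive programmatically, a minimal sketch along the following lines should work; it assumes the archive is published as MMedC.zip at the root of this dataset repo (Henrychur/MMedC) and uses the huggingface_hub client. Adjust the extraction path to your setup.

```python
import zipfile

from huggingface_hub import hf_hub_download

# Assumed location: MMedC.zip at the root of the Henrychur/MMedC dataset repo.
zip_path = hf_hub_download(
    repo_id="Henrychur/MMedC",
    filename="MMedC.zip",
    repo_type="dataset",
)

# Extract into a local folder; this becomes the PATH/TO/MMEDC used below.
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall("MMedC")
```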

You can use the following snippet to collect the paths of all txt files in the directory. Afterward, you can read these txt files and customize subsequent operations.
```python
import os

txt_root = "PATH/TO/MMEDC"  # root directory of the extracted MMedC data

txt_paths = []
for root, dirs, files in os.walk(txt_root):
    # Skip the 'cultural_filtered_data_used' folder and collect every txt file.
    if 'cultural_filtered_data_used' not in root:
        for file in files:
            if file.endswith('.txt'):
                txt_paths.append(os.path.join(root, file))
```
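
Once the paths are collected, how you read the files is up to your own pipeline; as one illustrative example (not part of the official code), you could stream the documents lazily so the full 25.5B-token corpus never has to sit in memory at once:

```python
# Lazily yield (path, text) pairs from the collected txt files.
# UTF-8 with errors="ignore" is an assumption; adjust to your needs.
def iter_documents(paths):
    for path in paths:
        with open(path, "r", encoding="utf-8", errors="ignore") as f:
            yield path, f.read()

# Peek at the first few documents.
for path, text in iter_documents(txt_paths[:3]):
    print(path, len(text))
```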

Our [GitHub](https://github.com/MAGIC-AI4Med/MMedLM) provides a data collection pipeline as well as our data preprocessing code.

## News
[2024.2.21] Our pre-print paper is released on arXiv. Dive into our findings [here](https://arxiv.org/abs/2402.13963).

[2024.2.20] We release [MMedLM](https://huggingface.co/Henrychur/MMedLM) and [MMedLM 2](https://huggingface.co/Henrychur/MMedLM2). With auto-regressive continued training on MMedC, these models achieve superior performance compared to all other open-source models, even rivaling GPT-4 on MMedBench.

[2024.2.20] We release [MMedC](https://huggingface.co/datasets/Henrychur/MMedC), a multilingual medical corpus containing 25.5B tokens.

[2024.2.20] We release [MMedBench](https://huggingface.co/datasets/Henrychur/MMedBench), a new multilingual medical multiple-choice question-answering benchmark with rationales. Check out the leaderboard [here](https://henrychur.github.io/MultilingualMedQA/).

## Evaluation on MMedBench
The further-pretrained MMedLM 2 demonstrates strong performance in the medical domain across different languages.

| Method | Size | Year | MMedC | MMedBench | English | Chinese | Japanese | French | Russian | Spanish | Avg. |
|------------------|------|---------|-----------|-----------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|
| GPT-3.5 | - | 2022.12 | &#10007; | &#10007; | 56.88 | 52.29 | 34.63 | 32.48 | 66.36 | 66.06 | 51.47 |
| GPT-4 | - | 2023.3 | &#10007; | &#10007; | 78.00 | 75.07 | 72.91 | 56.59 | 83.62 | 85.67 | 74.27 |
| Gemini-1.0 pro | - | 2024.1 | &#10007; | &#10007; | 53.73 | 60.19 | 44.22 | 29.90 | 73.44 | 69.69 | 55.20 |
| BLOOMZ | 7B | 2023.5 | &#10007; | trainset | 43.28 | 58.06 | 32.66 | 26.37 | 62.89 | 47.34 | 45.10 |
| InternLM | 7B | 2023.7 | &#10007; | trainset | 44.07 | 64.62 | 37.19 | 24.92 | 58.20 | 44.97 | 45.67 |
| Llama 2 | 7B | 2023.7 | &#10007; | trainset | 43.36 | 50.29 | 25.13 | 20.90 | 66.80 | 47.10 | 42.26 |
| MedAlpaca | 7B | 2023.3 | &#10007; | trainset | 46.74 | 44.80 | 29.64 | 21.06 | 59.38 | 45.00 | 41.11 |
| ChatDoctor | 7B | 2023.4 | &#10007; | trainset | 43.52 | 43.26 | 25.63 | 18.81 | 62.50 | 43.44 | 39.53 |
| PMC-LLaMA | 7B | 2023.4 | &#10007; | trainset | 47.53 | 42.44 | 24.12 | 20.74 | 62.11 | 43.29 | 40.04 |
| Mistral | 7B | 2023.10 | &#10007; | trainset | 61.74 | 71.10 | 44.72 | 48.71 | 74.22 | 63.86 | 60.73 |
| InternLM 2 | 7B | 2024.2 | &#10007; | trainset | 57.27 | 77.55 | 47.74 | 41.00 | 68.36 | 59.59 | 58.59 |
| MMedLM (Ours) | 7B | - | &#10003; | trainset | 49.88 | 70.49 | 46.23 | 36.66 | 72.27 | 54.52 | 55.01 |
| MMedLM 2 (Ours) | 7B | - | &#10003; | trainset | 61.74 | 80.01 | 61.81 | 52.09 | 80.47 | 67.65 | 67.30 |
- GPT and Gemini models are evaluated in the zero-shot setting through their APIs.
- Open-source models are first fine-tuned on the MMedBench trainset before evaluation.

## Contact
If you have any questions, please feel free to contact qiupengcheng@pjlab.org.cn.

## Citation
```
@misc{qiu2024building,
      title={Towards Building Multilingual Language Model for Medicine},
      author={Pengcheng Qiu and Chaoyi Wu and Xiaoman Zhang and Weixiong Lin and Haicheng Wang and Ya Zhang and Yanfeng Wang and Weidi Xie},
      year={2024},
      eprint={2402.13963},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```