---
license: llama3
datasets:
- Henrychur/MMedC
- axiong/pmc_llama_instructions
language:
- en
- zh
- ja
- fr
- ru
- es
tags:
- medical
---
# MMedLM
[💻GitHub Repo](https://github.com/MAGIC-AI4Med/MMedLM)   [🖨️arXiv Paper](https://arxiv.org/abs/2402.13963)

The official model weights for "Towards Building Multilingual Language Model for Medicine".


## Introduction
This repo contains MMed-Llama 3-8B-EnIns, which is based on MMed-Llama 3-8B and further fine-tuned on the **English instruction fine-tuning dataset** from PMC-LLaMA. We did this to allow a fair comparison with existing models on commonly used English benchmarks.
Note that MMed-Llama 3-8B-EnIns has only been trained on pmc_llama_instructions, an English medical SFT dataset, so its ability to respond to multilingual input is still limited.

The model can be loaded as follows:
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("Henrychur/MMed-Llama-3-8B-EnIns")
# Half precision (float16) keeps the 8B model's memory footprint manageable.
model = AutoModelForCausalLM.from_pretrained("Henrychur/MMed-Llama-3-8B-EnIns", torch_dtype=torch.float16)
```

- The inference format is the same as Llama 3; a detailed official usage example is coming soon.
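
In the meantime, here is a minimal inference sketch. It assumes the tokenizer ships with the standard Llama 3 chat template (an assumption, since the official inference example is not yet released); the example question and generation settings are illustrative only.

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Henrychur/MMed-Llama-3-8B-EnIns")
model = AutoModelForCausalLM.from_pretrained(
    "Henrychur/MMed-Llama-3-8B-EnIns", torch_dtype=torch.float16, device_map="auto"
)

# Assumption: the tokenizer carries the standard Llama 3 chat template.
messages = [{"role": "user", "content": "What are the first-line treatments for type 2 diabetes?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Illustrative generation settings; adjust for your use case.
output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```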


## News
[2024.2.21] Our pre-print paper is released on arXiv. Dive into our findings [here](https://arxiv.org/abs/2402.13963).

[2024.2.20] We release [MMedLM](https://huggingface.co/Henrychur/MMedLM) and [MMedLM 2](https://huggingface.co/Henrychur/MMedLM2). Through auto-regressive continued training on MMedC, these models achieve superior performance compared to all other open-source models, even rivaling GPT-4 on MMedBench.

[2024.2.20] We release [MMedC](https://huggingface.co/datasets/Henrychur/MMedC), a multilingual medical corpus containing 25.5B tokens.

[2024.2.20] We release [MMedBench](https://huggingface.co/datasets/Henrychur/MMedBench), a new multilingual medical multiple-choice question-answering benchmark with rationales. Check out the leaderboard [here](https://henrychur.github.io/MultilingualMedQA/).

## Evaluation on Commonly Used English Benchmarks
The further pretrained MMed-Llama 3 also demonstrates strong performance in the medical domain across a range of commonly used English benchmarks.

| Method              | Size | Year    | MedQA    | MedMCQA  | PubMedQA | MMLU_CK  | MMLU_MG  | MMLU_AN  | MMLU_PM  | MMLU_CB  | MMLU_CM  | Avg.      |
| ------------------- | ---- | ------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | --------- |
| MedAlpaca           | 7B   | 2023.3  | 41.7     | 37.5     | 72.8     | 57.4     | 69.0     | 57.0     | 67.3     | 65.3     | 54.3     | 58.03     |
| PMC-LLaMA           | 13B  | 2023.9  | 56.4     | 56.0     | 77.9     | -        | -        | -        | -        | -        | -        | -         |
| MEDITRON            | 7B   | 2023.11 | 57.2     | 59.2     | 74.4     | 64.6     | 59.9     | 49.3     | 55.4     | 53.8     | 44.8     | 57.62     |
| Mistral             | 7B   | 2023.12 | 50.8     | 48.2     | 75.4     | 68.7     | 71.0     | 55.6     | 68.4     | 68.1     | 59.5     | 62.97     |
| Gemma               | 7B   | 2024.2  | 47.2     | 49.0     | 76.2     | 69.8     | 70.0     | 59.3     | 66.2     | **79.9** | 60.1     | 64.19     |
| BioMistral          | 7B   | 2024.2  | 50.6     | 48.1     | 77.5     | 59.9     | 64.0     | 56.5     | 60.4     | 59.0     | 54.7     | 58.97     |
| Llama 3             | 8B   | 2024.4  | 60.9     | 50.7     | 73.0     | **72.1** | 76.0     | 63.0     | 77.2     | **79.9** | 64.2     | 68.56     |
| MMed-Llama 3 (Ours) | 8B   | -       | **65.4** | **63.5** | **80.1** | 71.3     | **85.0** | **69.6** | **77.6** | 74.3     | **66.5** | **72.59** |
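
For reference, the sketch below shows one way a MedQA-style multiple-choice question could be posed to the model. The prompt format and example question are hypothetical; the exact evaluation protocol used in the paper may differ.

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Henrychur/MMed-Llama-3-8B-EnIns")
model = AutoModelForCausalLM.from_pretrained(
    "Henrychur/MMed-Llama-3-8B-EnIns", torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical multiple-choice prompt; the paper's exact evaluation protocol may differ.
question = (
    "A 45-year-old man presents with crushing chest pain radiating to the "
    "left arm. What is the most likely diagnosis?"
)
options = {
    "A": "Pulmonary embolism",
    "B": "Myocardial infarction",
    "C": "Aortic dissection",
    "D": "Pericarditis",
}
prompt = question + "\n" + "\n".join(f"{k}. {v}" for k, v in options.items()) + "\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Greedy decoding with a short budget, since only an option letter is expected.
output = model.generate(**inputs, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```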



## Contact
If you have any questions, please feel free to contact qiupengcheng@pjlab.org.cn.

## Citation
```
@misc{qiu2024building,
      title={Towards Building Multilingual Language Model for Medicine}, 
      author={Pengcheng Qiu and Chaoyi Wu and Xiaoman Zhang and Weixiong Lin and Haicheng Wang and Ya Zhang and Yanfeng Wang and Weidi Xie},
      year={2024},
      eprint={2402.13963},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```