---
license: apache-2.0
datasets:
- raidium/ECNQA_generated_questions
library_name: transformers
tags:
- medical
---

# Model Card for Raidium MQG model

The model is introduced in the paper "Efficient Medical Question Answering with Knowledge-Augmented Question Generation".

Paper: [https://arxiv.org/abs/2405.14654](https://arxiv.org/abs/2405.14654)

MQG is a transformer language model pre-trained on a corpus of medical textbooks and on medical questions generated by GPT-4. The weights are initialized with
[BioMedLM](https://huggingface.co/stanford-crfm/BioMedLM), then further pre-trained on those datasets.

The questions were generated from prompts containing medical data from the textbooks.
They are available here: [ECNQA_generated_questions](https://huggingface.co/datasets/raidium/ECNQA_generated_questions).
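
A quick way to inspect the generated questions, as a minimal sketch using the Hugging Face `datasets` library (split and column names are whatever the dataset defines):

```python
# Hedged example: load and inspect the GPT-4-generated questions.
from datasets import load_dataset

questions = load_dataset("raidium/ECNQA_generated_questions")
print(questions)  # prints the available splits, columns, and sizes
```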

MQG is designed to be fine-tuned for Medical Question Answering tasks.

## Model Details

### Model Description

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cdea59a9be5c195561c2b8/tMb8cNuV6ZYnjrnUC1Tg2.png)

In the expanding field of language model applications, medical knowledge representation remains a significant challenge due to the specialized nature of the domain.
Large language models, such as GPT-4, obtain reasonable scores on medical question answering tasks, but smaller models are far behind.
In this work, we introduce a method to improve the proficiency of a small language model in the medical domain by employing a two-fold approach.
We first fine-tune the model on a corpus of medical textbooks.
Then, we use GPT-4 to generate questions similar to the downstream task, prompted with textbook knowledge, and use them to fine-tune the model.
Additionally, we introduce ECN-QA, a novel medical question answering dataset containing "progressive questions" composed of related sequential questions.
We show the benefits of our training strategy on this dataset.
The study's findings highlight the potential of small language models in the medical domain when appropriately fine-tuned.

- **Developed by:** Raidium
- **Model type:** Transformer
- **License:** Apache 2.0
- **Fine-tuned from model:** [BioMedLM](https://huggingface.co/stanford-crfm/BioMedLM)

### Model Sources

- **Repository:** [https://github.com/raidium-med/MQG](https://github.com/raidium-med/MQG)
- **Paper:** [https://arxiv.org/abs/2405.14654](https://arxiv.org/abs/2405.14654)

## Uses

### Direct Use

MQG is trained with next-token prediction on generated questions.
Therefore, it can be used out-of-the-box to generate potential answers for medical question answering tasks.
However, the generated answers may contain errors, so it is advised to fine-tune the model on your dataset and to use the model to rank candidate answers.
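
For illustration, a minimal out-of-the-box generation sketch with the standard `transformers` API (the checkpoint id `raidium/MQG` and the prompt are assumptions; substitute the actual repository name):

```python
# Minimal generation sketch; the checkpoint id is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("raidium/MQG")
model = AutoModelForCausalLM.from_pretrained("raidium/MQG")

prompt = "Question: What is the first-line treatment for uncomplicated hypertension?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```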

### Downstream Use

MQG can be fine-tuned for Medical Question Answering tasks.
For multiple-choice questions, a classification head should be appended to the model to score and rank the proposed answers.
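
A hedged sketch of that setup, using a sequence-classification head to score each (question, proposition) pair; the checkpoint id and the single-logit scoring scheme are illustrative, not necessarily the paper's exact head:

```python
# Illustrative ranking setup: one forward pass per candidate answer,
# a single regression-style logit as the score, highest logit wins.
# The fresh head is randomly initialized and must be fine-tuned before use.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("raidium/MQG")
model = AutoModelForSequenceClassification.from_pretrained("raidium/MQG", num_labels=1)
model.config.pad_token_id = tokenizer.eos_token_id  # GPT-2-style models define no pad token

question = "Which of the following drugs is a beta-blocker?"
propositions = ["Atenolol", "Amoxicillin", "Atorvastatin", "Aspirin", "Amlodipine"]

with torch.no_grad():
    scores = [model(**tokenizer(question, p, return_tensors="pt")).logits.item()
              for p in propositions]
print(propositions[scores.index(max(scores))])
```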

### Out-of-Scope Use

This model should not be used for tasks outside the medical domain.

## Bias, Risks, and Limitations

There is no guarantee that the model answers medical questions correctly. It should only be used for academic purposes, and not in clinical care.

## Training Details

### Training Data

The model is trained on a corpus of medical textbooks and further pre-trained on generated questions: [ECNQA_generated_questions](https://huggingface.co/datasets/raidium/ECNQA_generated_questions).

### Training Procedure

MQG is trained with next-token prediction on both datasets.
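
Concretely, the objective is the standard causal language-modeling loss; in `transformers` this amounts to passing the input ids as the labels (checkpoint id illustrative):

```python
# Next-token prediction: the model predicts token t+1 from tokens 1..t.
# transformers computes the shifted cross-entropy when labels == input_ids.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("raidium/MQG")  # illustrative id
model = AutoModelForCausalLM.from_pretrained("raidium/MQG")

batch = tokenizer("A passage from a medical textbook or a generated question.",
                  return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()  # an optimizer step would follow in a real training loop
```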

#### Training Hyperparameters

- **Training regime:** fp16 mixed-precision training

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

We tested the model on ECN-QA, a medical question answering dataset based on the French medical residency examination.
It is composed of "single" and "progressive" questions (i.e. series of multiple related questions).
It is a multiple-choice question dataset, containing 5 propositions for each question.

#### Metrics

We use accuracy to evaluate the model on medical question answering.
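
In the simplest view, accuracy is the fraction of questions whose top-ranked proposition matches the ground truth (the paper may aggregate progressive questions differently; this snippet is only the generic definition):

```python
# Accuracy over multiple-choice predictions.
def accuracy(predicted: list[int], gold: list[int]) -> float:
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

print(accuracy([0, 2, 1, 4], [0, 2, 3, 4]))  # 0.75
```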

### Results

See the paper: [https://arxiv.org/abs/2405.14654](https://arxiv.org/abs/2405.14654)

### Model Architecture and Objective

The model is based on BioMedLM's architecture, which is a modified GPT-2 architecture.

### Compute Infrastructure

#### Hardware

The model was trained on the Jean-Zay supercomputer, on multiple nodes with 4 A100 GPUs each.

#### Software

PyTorch, DeepSpeed

## Citation

**BibTeX:**
```bibtex
@article{khlaut2024efficient,
  title={Efficient Medical Question Answering with Knowledge-Augmented Question Generation},
  author={Khlaut, Julien and Dancette, Corentin and Ferreres, Elodie and Bennani, Alaedine and H{\'e}rent, Paul and Manceron, Pierre},
  journal={Clinical NLP Workshop, NAACL 2024},
  year={2024}
}
```

## Model Card Contact

julien.khlaut at raidium.fr