---
license: apache-2.0
datasets:
- raidium/ECNQA_generated_questions
library_name: transformers
tags:
- medical
---

# Model Card for Raidium MQG model

The model is introduced in the paper "Efficient Medical Question Answering with Knowledge-Augmented Question Generation".

Paper: [https://arxiv.org/abs/2405.14654](https://arxiv.org/abs/2405.14654)

MQG is a transformer language model pre-trained on a series of medical textbooks and on medical questions generated by GPT-4. The weights are initialized with
[BioMedLM](https://huggingface.co/stanford-crfm/BioMedLM), then further pre-trained on those datasets.

The questions have been generated from prompts containing medical data from the textbooks.
They are available here: [ECNQA_generated_questions](https://huggingface.co/datasets/raidium/ECNQA_generated_questions).

MQG is designed to be fine-tuned for Medical Question Answering tasks.

## Model Details

### Model Description

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cdea59a9be5c195561c2b8/tMb8cNuV6ZYnjrnUC1Tg2.png)

In the expanding field of language model applications, medical knowledge representation remains a significant challenge due to the specialized nature of the domain.
Large language models, such as GPT-4, obtain reasonable scores on medical question answering tasks, but smaller models are far behind.
In this work, we introduce a method to improve the proficiency of a small language model in the medical domain by employing a two-fold approach.
We first fine-tune the model on a corpus of medical textbooks. Then, we use GPT-4 to generate questions similar to the downstream task, prompted with textbook knowledge, and use them to fine-tune the model.
We show the benefits of our training strategy on a medical question answering dataset.
The study's findings highlight the potential of small language models in the medical domain when appropriately fine-tuned.

- **Developed by:** Raidium
- **Model type:** Transformer
- **License:** Apache 2.0
- **Finetuned from model:** [BioMedLM](https://huggingface.co/stanford-crfm/BioMedLM)

### Model Sources

- **Repository:** [https://github.com/raidium-med/MQG](https://github.com/raidium-med/MQG)
- **Paper:** [https://arxiv.org/abs/2405.14654](https://arxiv.org/abs/2405.14654)

## Uses

### Direct Use

MQG is trained using next-token prediction on the generated questions.
Therefore, it can be used out-of-the-box to generate potential answers for medical question answering tasks.
However, the generated questions might contain some errors, so it is advised to fine-tune the model on your own dataset and use the model to rank the potential answers.

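For example, a minimal generation sketch with the `transformers` library. The repository id below is a placeholder assumption, not necessarily this model's actual Hub id; replace it accordingly.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "raidium/MQG"  # placeholder repo id; replace with the actual model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt the model with a medical question and let it complete the answer
prompt = "Question: What is the first-line treatment for uncomplicated hypertension?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
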
### Downstream Use

MQG can be fine-tuned for Medical Question Answering tasks.
For multiple-choice questions, a classification head should be appended on top of the model to rank the different proposed answers, as sketched below.

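One possible way to do this, sketched with a single-logit `AutoModelForSequenceClassification` head that scores each question/proposition pair: the head is randomly initialized and must be fine-tuned, the repository id is again a placeholder, and the exact head used in the paper may differ.

```python
# Sketch: score each proposed answer with a single-logit classification head.
# This is one possible setup, not necessarily the paper's exact head.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "raidium/MQG"  # placeholder repo id; replace with the actual model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style tokenizers have no pad token

model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)
model.config.pad_token_id = tokenizer.pad_token_id

question = "Which vitamin deficiency causes scurvy?"
propositions = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D", "Vitamin K"]

# Encode each question/proposition pair and compute one score per proposition
inputs = tokenizer(
    [f"{question} {p}" for p in propositions],
    return_tensors="pt", padding=True, truncation=True,
)
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

for proposition, score in zip(propositions, scores.tolist()):
    print(f"{score:+.3f}  {proposition}")
```
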
### Out-of-Scope Use

This model should not be used on datasets outside the medical domain.

## Bias, Risks, and Limitations

There is no guarantee that the model answers medical questions correctly. It should only be used for academic purposes, and not in clinical care.

## Training Details

### Training Data

The model is trained on a corpus of medical textbooks, and further pre-trained on generated questions: [ECNQA_generated_questions](https://huggingface.co/datasets/raidium/ECNQA_generated_questions).

### Training Procedure

MQG is trained using next-token prediction on both datasets.

#### Training Hyperparameters

- **Training regime:** fp16 mixed-precision training.

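For illustration only, a minimal continued pre-training sketch with the `transformers` `Trainer` in fp16. The dataset column name and all hyperparameters below are assumptions, not the configuration reported in the paper.

```python
# Illustrative continued pre-training sketch (next-token prediction, fp16).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM, AutoTokenizer,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)

model_id = "stanford-crfm/BioMedLM"  # starting checkpoint, per the model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Column name "question" and the split are assumptions about the dataset schema
dataset = load_dataset("raidium/ECNQA_generated_questions", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["question"], truncation=True, max_length=512),
    batched=True, remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="mqg-pretraining",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        fp16=True,  # fp16 mixed-precision training, as stated above
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
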
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

We tested the model on a medical question answering dataset, ECN-QA, based on the French medical residency examination.
It is composed of "single" and "progressive" questions (i.e., a series of multiple related questions).
It is a multiple-choice question dataset, containing 5 propositions for each question.

#### Metrics

We use accuracy to evaluate the model on Medical Question Answering.

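As a toy illustration, accuracy over multiple-choice predictions, where each prediction is the index of the chosen proposition; the values below are made up.

```python
# Hypothetical illustration of the accuracy metric over multiple-choice predictions
predictions = [2, 0, 4, 1]  # model-chosen proposition index per question (example values)
references  = [2, 0, 3, 1]  # ground-truth proposition index per question (example values)
accuracy = sum(p == r for p, r in zip(predictions, references)) / len(references)
print(f"Accuracy: {accuracy:.2f}")  # 0.75
```
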
### Results

See paper: [https://arxiv.org/abs/2405.14654](https://arxiv.org/abs/2405.14654)

### Model Architecture and Objective

The model is based on BioMedLM's architecture, which is a modified GPT-2 architecture.

### Compute Infrastructure

#### Hardware

The model was trained on the Jean-Zay supercomputer, on multiple nodes with 4 A100 GPUs.

#### Software

PyTorch, DeepSpeed

## Citation

**BibTeX:**
```bibtex
@article{khlaut2024efficient,
  title={Efficient Medical Question Answering with Knowledge-Augmented Question Generation},
  author={Khlaut, Julien and Dancette, Corentin and Ferreres, Elodie and Bennani, Alaedine and H{\'e}rent, Paul and Manceron, Pierre},
  journal={Clinical NLP Workshop, NAACL 2024},
  year={2024}
}
```

## Model Card Contact

julien.khlaut at raidium.fr