---

<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/fJIOPJnY6Ff6fUiSIuMEt.png)

</a>
</p>

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/KGmRE5w2sepNtwsEu8t7K.jpeg)

Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model

🧠 **Advanced Training Techniques**: OpenBioLLM-70B builds upon the powerful foundation of the [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) model. It incorporates the same dataset and fine-tuning recipe as the Starling model, along with a custom, diverse medical instruction dataset and a novel merge method. Key components of the training pipeline include:

<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>

- **Reward Model**: [Nexusflow/Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B)
- **Policy Optimization**: [Fine-Tuning Language Models from Human Preferences (PPO)](https://arxiv.org/abs/1909.08593)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (we plan to release a sample training dataset in our upcoming paper; please stay updated)

This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.
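
For readers curious what the PPO stage looks like in practice, here is a minimal, illustrative sketch using the open-source `trl` library (classic `PPOTrainer` API, trl ≤ 0.11). The base model and the constant reward below are placeholders; this is not our actual training pipeline, in which rewards come from a reward model such as Starling-RM-34B.

```python
# Illustrative PPO sketch with trl's classic PPOTrainer API (trl <= 0.11).
# "gpt2" and the constant reward are placeholders, not the OpenBioLLM setup.
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

config = PPOConfig(model_name="gpt2", learning_rate=1.41e-5)  # placeholder base model
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token

ppo_trainer = PPOTrainer(config=config, model=model, tokenizer=tokenizer)

# One toy PPO step: generate a response to a query, then reward it.
query_tensor = tokenizer.encode("Patient presents with chest pain and", return_tensors="pt")[0]
response_tensor = ppo_trainer.generate([query_tensor], return_prompt=False, max_new_tokens=32)[0]

# In the real pipeline this scalar would come from the reward model scoring
# the response; here it is a dummy value.
reward = [torch.tensor(1.0)]
stats = ppo_trainer.step([query_tensor], [response_tensor], reward)
```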

⚙️ **Release Details**:

- **Model Size**: 70 billion parameters
- **Quantization**: Optimized quantized versions available [here](https://huggingface.co/aaditya/OpenBioLLM-70B-GGUF) (see the loading sketch after this list)
- **Language(s) (NLP)**: English (en)
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License**: Meta-Llama License
- **Fine-tuned from models**: [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) & [Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B)
- **Resources for more information**:
  - Paper: Coming soon

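To run the quantized GGUF build on more modest hardware, a sketch with the `llama-cpp-python` package follows. The `.gguf` filename pattern and generation settings are assumptions, so check the actual file list in the GGUF repository first.

```python
# Hypothetical sketch: running a quantized GGUF build with llama-cpp-python.
# The filename pattern below is an assumption -- check the actual file names
# in the aaditya/OpenBioLLM-70B-GGUF repository before running.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="aaditya/OpenBioLLM-70B-GGUF",
    filename="*Q4_K_M.gguf",   # assumed quantization level / file name
    n_ctx=4096,                # context window
    n_gpu_layers=-1,           # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are OpenBioLLM, a helpful biomedical assistant."},
        {"role": "user", "content": "List common symptoms of iron-deficiency anemia."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```
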
The model can be fine-tuned for more specialized tasks and datasets as needed.

OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.

We are excited to share OpenBioLLM-70B with researchers and developers around the world.


### Use with transformers

**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise, performance will degrade. The model's output can be verbose in rare cases; consider setting temperature = 0 to make this less likely.**

See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-70B"

# Load the model in bfloat16 and shard it across available GPUs.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert and experienced professional from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. You are willing to help answer the user's query with an explanation. In your explanation, leverage your deep medical expertise, such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"},
]

# Apply the Llama-3 instruct chat template exactly as provided.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on either the EOS token or Llama-3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

# Greedy decoding (do_sample=False) implements the temperature = 0
# recommendation above; temperature=0.0 with do_sample=True would raise an error.
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,
)
print(outputs[0]["generated_text"][len(prompt):])
```

## **Training procedure**

### **Training hyperparameters**

<details>
<summary>Click to see details</summary>

- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 8
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>

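As a rough guide for replication, the hyperparameters above map onto `transformers.TrainingArguments` approximately as follows. Our run used Axolotl, so treat this as a sketch rather than the exact config; `output_dir` is a placeholder.

```python
# Approximate mapping of the listed hyperparameters onto TrainingArguments.
# A sketch for replication, not the authors' exact Axolotl config.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="openbiollm-70b-finetune",  # placeholder
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    per_device_train_batch_size=12,  # assumed per-device; the list above does not say per-device vs. global
    per_device_eval_batch_size=8,
    num_train_epochs=4,
    optim="adamw_bnb_8bit",  # 8-bit AdamW from bitsandbytes
    bf16=True,               # H100s support bfloat16
)
```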

### **PEFT hyperparameters**

<details>
<summary>Click to see details</summary>

- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
  - q_proj
  - v_proj
  - k_proj
  - o_proj
  - gate_proj
  - down_proj
  - up_proj
</details>

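The same adapter settings expressed as a `peft` `LoraConfig`, as a sketch for anyone replicating the QLoRA setup:

```python
# The QLoRA adapter settings above, expressed as a peft LoraConfig.
# A sketch for replication; the authors' exact Axolotl config may differ.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "v_proj", "k_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```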


### **Training results**

### **Framework versions**

- Transformers 4.39.3
- PyTorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- LM Evaluation Harness (for evaluation; see the sketch below)

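A sketch of scoring the model with the harness's Python API is below; the task names are assumptions and vary across harness versions, so list the available tasks before running.

```python
# Hypothetical sketch of evaluating the model with the LM Evaluation Harness
# Python API (lm-eval >= 0.4). The task names below are assumptions -- check
# the available tasks in your harness version first.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=aaditya/OpenBioLLM-Llama3-70B,dtype=bfloat16",
    tasks=["pubmedqa", "medmcqa", "medqa_4options"],
    num_fewshot=0,  # the card reports zero-shot numbers
)
print(results["results"])
```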

# Benchmark Results

🔥 OpenBioLLM-70B demonstrates superior performance compared to larger and similarly sized models, including GPT-4, Gemini, Meditron-70B, Med-PaLM-1, and Med-PaLM-2, across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06% despite a significantly smaller parameter count. The model's strong performance on domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.

🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. All results are reported in the zero-shot setting, except for Med-PaLM-1 and Med-PaLM-2, for which we use the 5-shot accuracy from their papers, since zero-shot numbers are not provided.

| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.06** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| GPT-4 | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 79.29 |
| Med-PaLM-1 (5-shot) | 77 | 70 | 65.2 | 83.8 | 87.5 | 69.9 | 60.3 | 79 | 56.5 | 72.13 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |

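The headline 86.06% figure for OpenBioLLM-70B is the unweighted mean of the nine scores in its row, which is easy to verify:

```python
# Quick sanity check: the 86.06 average for OpenBioLLM-70B is the plain
# mean of its nine benchmark scores from the table above.
scores = [92.93, 93.197, 83.904, 93.75, 93.827, 85.749, 78.162, 78.97, 74.014]
print(round(sum(scores) / len(scores), 2))  # 86.06
```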

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/abzbJTv0L_TKhCP-CRgWK.png)

## Detailed Medical Subject-wise Accuracy

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/UXF-V0col0Z0sS6BGPBkE.png)

# Use Cases & Examples

🚨 **The results below are from the quantized version of OpenBioLLM-70B.**

# Summarize Clinical Notes

OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.


![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/xdwdBgOxNi_TfML0hKlI8.png)

# Answer Medical Questions

OpenBioLLM-70B can provide answers to a wide range of medical questions.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/zO95GlwOQEZqCKQF69mE6.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/OKBczKw7gWeW5xsuDpc27.png)

<details>
<summary>Click to see details</summary>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/eJGHT5khppYvJb8fQ-YW4.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Cnbwrqa_-ORHRuNRC2P6Y.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/J9DhdcvukAc9mnnW9fj2C.png)

</details>

# Clinical Entity Recognition

OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research. A prompt sketch follows below.

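Below is an illustrative prompt for entity extraction, reusing the `pipeline` object from the "Use with transformers" snippet above; the requested JSON schema is our own convention, not a built-in output format.

```python
# Illustrative entity-extraction prompt, reusing the `pipeline` object from
# the "Use with transformers" snippet above. The JSON schema requested here
# is our own convention, not an official model API.
note = "Pt c/o chest pain radiating to left arm; started aspirin 81 mg daily."

messages = [
    {"role": "system", "content": "You are OpenBioLLM, a clinical NLP assistant."},
    {"role": "user", "content": (
        "Extract all diseases/symptoms, medications (with dose), and procedures "
        "from the following note. Respond as JSON with keys "
        "'findings', 'medications', 'procedures'.\n\n" + note
    )},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"][len(prompt):])
```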

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_69BW4k9LVABFwtxixL45.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/DKy5wYCoPhoPPUc1-x8_J.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/7WD9zCCBZT4-4XlfnIQjl.png)

# Biomarker Extraction

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/ZttoM4AiteT7gFYVhjIpN.png)

# Classification

OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Bf5MW1d75qT-1F_TR_hC0.png)

# De-Identification

OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/hKX4kzm--Tw5bj6K78msy.png)

**Advisory Notice!**

While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.

Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.

Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:

- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards

Always consult a qualified healthcare provider for personal medical needs.


# Citation

If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:

```
@misc{OpenBioLLMs,
  author = {Ankit Pal and Malaikannan Sankarasubbu},
  title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```

The accompanying paper is currently in progress and will be released soon.

<div align="center">
<h2> 💌 Contact </h2>
</div>

We look forward to hearing from you and collaborating on this exciting project!

**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially one that fits my skill set in Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI.

# References

We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!

**Result sources:**

- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023)