Commit 7e46469 by HuggingSara (parent: b9efbfe): Update README.md

tags:
- medical
---

## Model Card for BiMediX-Eng

### Model Details
- **Name:** BiMediX
- **Version:** 1.0
- **Type:** Medical Mixture of Experts Large Language Model (LLM)
- **Languages:** English
- **Model Architecture:** [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Training Data:** BiMed1.3M-English, the English portion of the BiMed1.3M dataset of diverse medical interactions.
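
The card names the base model but does not spell out the MoE layout. If you want to inspect it, the sketch below reads the relevant fields from the published config; it assumes the checkpoint keeps the standard MixtralConfig field names.

```python
from transformers import AutoConfig

# Peek at the MoE layout inherited from the Mixtral-8x7B base
# (field names follow MixtralConfig and are assumed to be present in this checkpoint).
config = AutoConfig.from_pretrained("BiMediX/BiMediX-Eng")
print(config.num_local_experts)    # experts per MoE layer (8 for Mixtral-8x7B)
print(config.num_experts_per_tok)  # experts routed per token (2 for Mixtral-8x7B)
print(config.num_hidden_layers)    # number of transformer blocks
```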

### Intended Use
- **Primary Use:** Medical interactions in English.
- **Capabilities:** Multiple-choice question answering (MCQA), closed question answering, and multi-turn medical chat.

## Getting Started

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BiMediX/BiMediX-Eng"

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a medical query and generate a response.
text = "Hello BiMediX! I've been experiencing increased tiredness in the past week."
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
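
The example above sends raw text to the model. For the chat capability listed under Intended Use, the tokenizer should carry the chat template inherited from the Mixtral-8x7B-Instruct base; the sketch below assumes that template is present (the exact prompt format used during fine-tuning is not documented in this card), so verify the formatting before relying on it.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BiMediX/BiMediX-Eng"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat prompt with the tokenizer's chat template (assumed to follow Mixtral-Instruct).
messages = [
    {"role": "user", "content": "I've been experiencing increased tiredness in the past week. What could cause this?"}
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(input_ids, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```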

### Training Procedure
- **Dataset:** BiMed1.3M-English, a corpus of healthcare-specialized English instruction tokens.
- **QLoRA Adaptation:** Applies a parameter-efficient low-rank adaptation technique (QLoRA), injecting learnable low-rank adapter weights into the experts and the routing network, so that only about 4% of the original parameters are trained (see the sketch after this list).
- **Training Resources:** The model was trained on approximately 288 million tokens from the BiMed1.3M-English corpus.
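
The training code is not part of this card; the sketch below shows how such a QLoRA setup is commonly expressed with `peft` and `bitsandbytes`, assuming Mixtral's module names for the expert projections (`w1`, `w2`, `w3`) and the router (`gate`). The rank, alpha, and dropout values are illustrative placeholders, not the values used to train BiMediX.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA-style setup: quantize the frozen base model to 4-bit (NF4)...
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

# ...then attach low-rank adapters to the expert FFN projections (w1/w2/w3)
# and the MoE routing layer (gate). r / lora_alpha / lora_dropout are illustrative.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["w1", "w2", "w3", "gate"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only a small fraction of the parameters is trainable
```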

### Model Performance
- **Benchmarks:** BiMediX achieves the best average score among the baselines below, evaluated on six MMLU medical subsets (Clinical Knowledge, College Biology, College Medicine, Medical Genetics, Professional Medicine, Anatomy) plus MedMCQA, MedQA, and PubMedQA. The gains are attributed to the BiMed1.3M-English instruction data and the training procedure described above.

| **Model**              | **CKG**  | **CBio** | **CMed** | **MedGen** | **ProMed** | **Ana**  | **MedMCQA** | **MedQA** | **PubmedQA** | **AVG**  |
|------------------------|----------|----------|----------|------------|------------|----------|-------------|-----------|--------------|----------|
| Med42-70B              | 75.9     | 84.0     | 69.9     | 83.0       | 78.7       | 64.4     | 61.9        | 61.3      | 77.2         | 72.9     |
| Clinical Camel-70B     | 69.8     | 79.2     | 67.0     | 69.0       | 71.3       | 62.2     | 47.0        | 53.4      | 74.3         | 65.9     |
| Meditron-70B           | 72.3     | 82.5     | 62.8     | 77.8       | 77.9       | 62.7     | **65.1**    | 60.7      | 80.0         | 71.3     |
| **BiMediX**            | **78.9** | **86.1** | **68.2** | **85.0**   | **80.5**   | **74.1** | 62.7        | **62.8**  | **80.2**     | **75.4** |
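
The AVG column is the unweighted mean of the nine benchmark columns; a quick check against the BiMediX row:

```python
# AVG is the unweighted mean of the nine benchmark scores.
bimedix_scores = [78.9, 86.1, 68.2, 85.0, 80.5, 74.1, 62.7, 62.8, 80.2]
print(round(sum(bimedix_scores) / len(bimedix_scores), 1))  # 75.4
```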

### Limitations
- The model may hallucinate and can produce toxic or stereotyped content.
- Medical diagnoses and recommendations from the model require review by qualified medical professionals.

### Safety and Ethical Considerations
- **Usage:** Research purposes only.

### Accessibility
- **Availability:** [BiMediX GitHub Repository](https://github.com/mbzuai-oryx/BiMediX).

### Authors
Sara Pieri, Sahal Shaji Mullappilly, Fahad Shahbaz Khan, Rao Muhammad Anwer, Salman Khan, Timothy Baldwin, Hisham Cholakkal

**Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI)**