---
license: cc-by-nc-sa-4.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- medical
---
## Model Card for BiMediX-Eng

### Model Details
- **Name:** BiMediX
- **Version:** 1.0
- **Type:** Medical Mixture of Experts Large Language Model (LLM), the English variant of BiMediX
- **Languages:** English
- **Model Architecture:** [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Training Data:** BiMed1.3M-English, an English-language dataset of diverse medical interactions.

### Intended Use
- **Primary Use:** Medical interactions in English.
- **Capabilities:** Multiple-choice question answering (MCQA), closed QA, and multi-turn chat.

## Getting Started

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BiMediX/BiMediX-Eng"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Mixtral-8x7B-sized weights: consider passing device_map="auto" (requires
# `accelerate`) and a half-precision torch_dtype to fit the model on your GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Hello BiMediX! I've been experiencing increased tiredness in the past week."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
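
For the chat capability, a minimal multi-turn sketch, assuming the tokenizer ships Mixtral's `[INST]` chat template (this card does not confirm the prompt format):

```python
# Assumption: the tokenizer provides Mixtral's instruction chat template.
chat = [
    {"role": "user", "content": "I've been experiencing increased tiredness in the past week."},
]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```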

### Training Procedure
- **Dataset:** BiMed1.3M-English, a corpus of healthcare-specialized English tokens.
- **QLoRA Adaptation:** Fine-tunes the base model with QLoRA: the pretrained weights stay frozen while learnable low-rank adapter weights are added to the experts and the routing network, so only about 4% of the original parameters are trained (see the sketch after this list).
- **Training Resources:** The model was trained on approximately 288 million tokens from the BiMed1.3M-English corpus.
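
A minimal sketch of such an adapter setup using the Hugging Face `peft` and `bitsandbytes` libraries. The rank, alpha, and 4-bit quantization settings are illustrative assumptions, not the authors' exact configuration; the targeted modules follow the description above (Mixtral's expert projections `w1`/`w2`/`w3` and its routing `gate`):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA: load the frozen base model in 4-bit (assumed settings).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

# Low-rank adapters on the expert projections (w1/w2/w3) and the
# routing network (gate); r and lora_alpha are illustrative values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["w1", "w2", "w3", "gate"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only a few percent of parameters train
```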

### Model Performance
- **Benchmarks:** Outperforms the baseline medical LLMs listed below on most medical benchmarks, averaging 75.4% accuracy across nine tasks. The authors attribute this to the comprehensive BiMed1.3M-English training data and the parameter-efficient adaptation described above.


| **Model**                         | **CKG** | **CBio** | **CMed** | **MedGen** | **ProMed** | **Ana** | **MedMCQA** | **MedQA** | **PubmedQA** | **AVG** |
|-----------------------|------------|-----------|-----------|-------------|-------------|---------|-------------|-----------|--------------|---------|
| PMC-LLaMA-13B         | 63.0       | 59.7      | 52.6      | 70.0        | 64.3        | 61.5    | 50.5        | 47.2      | 75.6         | 60.5    |
| Med42-70B             | 75.9       | 84.0      | **69.9**  | 83.0        | 78.7        | 64.4    | 61.9        | 61.3      | 77.2         | 72.9    |
| Clinical Camel-70B    | 69.8       | 79.2      | 67.0      | 69.0        | 71.3        | 62.2    | 47.0        | 53.4      | 74.3         | 65.9    |
| Meditron-70B          | 72.3       | 82.5      | 62.8      | 77.8        | 77.9        | 62.7    | **65.1**    | 60.7      | 80.0         | 71.3    |
| **BiMediX**           | **78.9**   | **86.1**  | 68.2      | **85.0**    | **80.5**    | **74.1**| 62.7        | **62.8**  | **80.2**     | **75.4** |

*Accuracy (%); best result per column in bold. CKG, CBio, CMed, MedGen, ProMed, and Ana are the MMLU Clinical Knowledge, College Biology, College Medicine, Medical Genetics, Professional Medicine, and Anatomy subsets.*

### Safety and Ethical Considerations
- **Potential issues:** The model may hallucinate, generate toxic content, or reflect stereotypes present in its training data.
- **Usage:** Research purposes only; not for clinical decision-making.

### Accessibility
- **Availability:** [BiMediX GitHub Repository](https://github.com/mbzuai-oryx/BiMediX)
- **Paper:** [arXiv:2402.13253](https://arxiv.org/abs/2402.13253)

### Authors
Sara Pieri, Sahal Shaji Mullappilly, Fahad Shahbaz Khan, Rao Muhammad Anwer, Salman Khan, Timothy Baldwin, Hisham Cholakkal  
**Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI)**