---
language:
- en
- ar
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- medical
license: cc-by-nc-sa-4.0
---
## Model Card for BiMediX-Bilingual

### Model Details
- **Name:** BiMediX
- **Version:** 1.0
- **Type:** Bilingual Medical Mixture of Experts Large Language Model (LLM)
- **Languages:** English, Arabic
- **Model Architecture:** [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Training Data:** BiMed1.3M, a bilingual dataset with diverse medical interactions.

### Intended Use
- **Primary Use:** Medical interactions in both English and Arabic.
- **Capabilities:** Multiple-choice question answering (MCQA), closed question answering, and open-ended medical chat.

## Getting Started

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BiMediX/BiMediX-Bi"

# Load the tokenizer and model weights from the Hub. The underlying
# Mixtral-8x7B MoE is large, so device_map="auto" (requires the
# `accelerate` package) shards it across the available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Encode a prompt and generate a response.
text = "Hello BiMediX! I've been experiencing increased tiredness in the past week."
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
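
Mixtral-Instruct-based checkpoints are usually prompted through their chat template rather than with raw text. A minimal sketch using `tokenizer.apply_chat_template`, assuming this tokenizer ships a Mixtral-style chat template (which this card does not explicitly confirm):

```python
# Sketch only: assumes the tokenizer provides a chat template, which this
# model card does not explicitly confirm. Reuses `model` and `tokenizer`
# from the snippet above.
messages = [
    {"role": "user",
     "content": "I've been experiencing increased tiredness in the past week. What could be the cause?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```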

### Training Procedure
- **Dataset:** BiMed1.3M, comprising 632 million healthcare-specialized tokens.
- **QLoRA Adaptation:** Uses a parameter-efficient low-rank adaptation technique that injects learnable low-rank adapter weights into the experts and the routing network, so that only about 4% of the original parameters are trained (see the sketch after this list).
- **Training Resources:** The model was trained on approximately 632 million tokens from the Arabic-English corpus, of which 288 million tokens are English-only.

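A minimal, hypothetical sketch of such a QLoRA setup using `peft` and `bitsandbytes`; the rank, hyperparameters, and target-module names (`w1`/`w2`/`w3` for Mixtral's expert projections, `gate` for the router) are illustrative assumptions, not the published BiMediX training configuration:

```python
# Hypothetical QLoRA sketch, not the authors' exact recipe: a frozen 4-bit
# base model with low-rank adapters on the expert projections and the router.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: quantized, frozen base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,      # illustrative hyperparameters
    target_modules=["w1", "w2", "w3", "gate"],   # experts + routing network
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # a small percentage of the full model
```
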
### Model Performance
- **Benchmarks:** Outperforms the Mixtral-8x7B base model and Jais-30B on medical benchmarks (accuracy, %).

| **Model**               | **CKG**  | **CBio** | **CMed** | **MedGen** | **ProMed** | **Ana**  | **MedMCQA** | **MedQA** | **PubMedQA** | **AVG**  |
|-------------------------|----------|----------|----------|------------|------------|----------|-------------|-----------|--------------|----------|
| Jais-30B                | 57.4     | 55.2     | 46.2     | 55.0       | 46.0       | 48.9     | 40.2        | 31.0      | 75.5         | 50.6     |
| Mixtral-8x7B            | 59.1     | 57.6     | 52.6     | 59.5       | 53.3       | 54.4     | 43.2        | 40.6      | 74.7         | 55.0     |
| **BiMediX (Bilingual)** | **70.6** | **72.2** | **59.3** | **74.0**   | **64.2**   | **59.6** | **55.8**    | **54.0**  | **78.6**     | **65.4** |

The first six columns are MMLU medical subsets: Clinical Knowledge (CKG), College Biology (CBio), College Medicine (CMed), Medical Genetics (MedGen), Professional Medicine (ProMed), and Anatomy (Ana).
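
MedMCQA/MedQA-style scores are multiple-choice accuracies. For context, a common way to score such items is to compare the model's next-token probabilities for the option letters; a hypothetical sketch (the exact evaluation harness is not described on this card), reusing `model` and `tokenizer` from the Getting Started snippet:

```python
# Hypothetical MCQA scoring sketch: picks the option whose letter the model
# assigns the highest probability after the question. A common evaluation
# recipe, not necessarily the harness behind the table above.
import torch

def score_mcqa(question: str, options: list[str]) -> int:
    letters = "ABCD"
    prompt = question + "\n" + "\n".join(
        f"{letters[i]}. {opt}" for i, opt in enumerate(options)
    ) + "\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token distribution
    option_ids = [
        tokenizer.encode(" " + l, add_special_tokens=False)[-1]
        for l in letters[:len(options)]
    ]
    return int(torch.argmax(logits[option_ids]))  # index of the best option

best = score_mcqa(
    "Which vitamin deficiency causes scurvy?",
    ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"],
)
print("Predicted option:", "ABCD"[best])
```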

### Safety and Ethical Considerations
- **Potential Issues:** The model may produce hallucinations, toxic content, and stereotypes.
- **Usage:** Intended for research purposes only.

### Accessibility
- **Availability:** [BiMediX GitHub Repository](https://github.com/mbzuai-oryx/BiMediX)
- **Paper:** [arXiv:2402.13253](https://arxiv.org/abs/2402.13253)

### Authors
Sara Pieri, Sahal Shaji Mullappilly, Fahad Shahbaz Khan, Rao Muhammad Anwer, Salman Khan, Timothy Baldwin, Hisham Cholakkal

**Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI)**