---
license: apache-2.0
datasets:
- Laurent1/MedQuad-MedicalQnADataset_128tokens_max
library_name: adapter-transformers
tags:
- medical
---
# Model Card for mpt-7b-instruct2-QLoRa-medical-QA

![image/gif](https://cdn-uploads.huggingface.co/production/uploads/6489e1e3eb763749c663f40c/PUBFPpFxsrWRlkYzh7lwX.gif)

<font color="FF0000" size="5"> <b>
This is a question-answering (QA) model for medical questions.<br /> </b></font>
<br><b>Foundation Model : https://huggingface.co/ibm/mpt-7b-instruct2 <br />
Dataset : https://huggingface.co/datasets/Laurent1/MedQuad-MedicalQnADataset_128tokens_max <br /></b>
The model has been fine-tuned with 2 x GPU T4 (RAM : 2 x 14.8GB) + CPU (RAM : 29GB). <br />


## <b>Model Details</b>

The model is based on the foundation model ibm/mpt-7b-instruct2 (Apache 2.0 license).<br />
It has been fine-tuned with the TRL Supervised Fine-tuning Trainer (SFTTrainer) and PEFT QLoRA; a setup sketch is given below.<br />
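
A minimal, illustrative QLoRA setup along these lines is shown here; the rank, alpha, dropout and target modules are assumptions, not the exact values used (see the hyperparameter screenshot further down).

```python
# Minimal QLoRA setup sketch (not the exact training script): load the
# foundation model in 4-bit and attach LoRA adapters.
# r / lora_alpha / lora_dropout / target_modules are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model_id = "ibm/mpt-7b-instruct2"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # T4 GPUs: fp16 compute
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",          # spread the model over the 2 x T4 GPUs + CPU
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["Wqkv"],    # MPT attention projection (assumption)
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```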

### Libraries
<ul>
<li>bitsandbytes</li>
<li>einops</li>
<li>peft</li>
<li>trl</li>
<li>datasets</li>
<li>transformers</li>
<li>torch</li>
</ul>
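
They can typically be installed with `pip install bitsandbytes einops peft trl datasets transformers torch`; the exact versions used are those of the training notebook below and are not pinned here.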


### Notebook used for training

https://colab.research.google.com/drive/14nxSP5UuJcnIJtEERyk5nehBL3W03FR3?hl=fr

Further improvements could be achieved by increasing the number of training steps and using the full dataset. <br />

### Direct Use


![image/png](https://cdn-uploads.huggingface.co/production/uploads/6489e1e3eb763749c663f40c/b1Vboznz82PwtN4rLNqGC.png)
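
A minimal inference sketch along the same lines; the adapter repository id and the prompt layout below are assumptions for illustration.

```python
# Inference sketch: load the 4-bit foundation model, apply the trained
# adapter, and generate an answer to a medical question.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "ibm/mpt-7b-instruct2"
adapter_id = "Laurent1/mpt-7b-instruct2-QLoRa-medical-QA"  # this repository (assumed id)

tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

question = "What are the treatments for glaucoma?"
prompt = f"Below is a medical question.\n\n### Question:\n{question}\n\n### Answer:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```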


## <b>Bias, Risks, and Limitations</b>

To reduce training duration, the model was trained on only the first 5100 rows of the dataset.<br />

<font color="FF0000">
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.<br />
Generation of plausible yet incorrect factual information, termed hallucination, is an unsolved issue in large language models.<br />
</font>

## <b>Training Details</b>

<ul>
<li>per_device_train_batch_size = 1</li>
<li>gradient_accumulation_steps = 16</li>
<li>num_train_epochs = 5</li>
<li>2 x GPU T4 (RAM : 14.8GB) + CPU (RAM : 29GB)</li>
</ul>
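
The sketch below shows how these settings might map onto TRL's SFTTrainer, continuing from the QLoRA setup above (`model`, `tokenizer`); the learning rate, sequence length and text column name are assumptions.

```python
# Training loop sketch, continuing from the QLoRA setup above.
# Values not listed in this section (learning_rate, max_seq_length,
# dataset_text_field) are assumptions.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Only the first 5100 rows were used (see Bias, Risks, and Limitations).
train_dataset = load_dataset(
    "Laurent1/MedQuad-MedicalQnADataset_128tokens_max", split="train"
).select(range(5100))

training_args = TrainingArguments(
    output_dir="mpt-7b-instruct2-QLoRa-medical-QA",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=5,
    learning_rate=2e-4,          # assumption
    fp16=True,                   # T4 GPUs
    logging_steps=10,
    save_strategy="epoch",
)

trainer = SFTTrainer(
    model=model,                 # 4-bit model with LoRA adapters, from the sketch above
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    dataset_text_field="text",   # assumed name of the formatted prompt column
    max_seq_length=256,          # assumption; entries are capped at 128 tokens
)
trainer.train()
```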

### Training Data

https://huggingface.co/datasets/Laurent1/MedQuad-MedicalQnADataset_128tokens_max
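
The data can be inspected locally with the datasets library; the column names and row count are printed rather than assumed.

```python
# Quick inspection of the training data.
from datasets import load_dataset

ds = load_dataset("Laurent1/MedQuad-MedicalQnADataset_128tokens_max", split="train")
print(ds)      # number of rows and column names
print(ds[0])   # a single formatted question/answer record
```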

#### Training Hyperparameters


![image/png](https://cdn-uploads.huggingface.co/production/uploads/6489e1e3eb763749c663f40c/C6XTGVrn4D1Sj2kc9Dq2O.png)

#### Times

Training duration : 6287.4 s (about 1 h 45 min)


![image/png](https://cdn-uploads.huggingface.co/production/uploads/6489e1e3eb763749c663f40c/WTQ6v-ruMLF7IevXZDham.png)