# MedGPT-Llama3.1-8B-v.1

- This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B), trained on a dataset created by [Valerio Job](https://huggingface.co/valeriojob) in collaboration with general practitioners (GPs) and based on real medical data.
- Version 1 (v.1) is the very first version of MedGPT; its training dataset was deliberately kept simple and small, with only 60 examples.
- This repo includes the model in 16-bit format as well as its LoRA adapters. A separate repo, [valeriojob/MedGPT-Llama3.1-8B-BA-v.1-GGUF](https://huggingface.co/valeriojob/MedGPT-Llama3.1-8B-BA-v.1-GGUF), provides quantized versions of this model in GGUF format.
- This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

## Model description

This model acts as a supplementary assistant to GPs, helping them with medical and administrative tasks.

## Intended uses & limitations

The fine-tuned model should not be used in production! It was created as an initial prototype in the context of a bachelor's thesis.

## Training and evaluation data

The dataset (train and test) used for fine-tuning this model can be found here: [datasets/valeriojob/BA-v.1](https://huggingface.co/datasets/valeriojob/BA-v.1)
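
The dataset can be pulled directly with the `datasets` library; a minimal sketch (the split names are an assumption based on the train/test description above):

```python
# Loading the dataset; the "train" and "test" split names are assumptions
# based on the description above.
from datasets import load_dataset

ds = load_dataset("valeriojob/BA-v.1")
print(ds)              # shows the available splits and example counts
print(ds["train"][0])  # inspect a single training example
```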

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- per_device_train_batch_size = 2
- gradient_accumulation_steps = 4
- warmup_steps = 5
- max_steps = 60
- learning_rate = 2e-4
- fp16 = not is_bfloat16_supported()
- bf16 = is_bfloat16_supported()
- logging_steps = 1
- optim = "adamw_8bit"
- weight_decay = 0.01
- lr_scheduler_type = "linear"
- seed = 3407
- output_dir = "outputs"
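
These values map directly onto a TRL `SFTTrainer` run. The sketch below shows how they might fit together with Unsloth, assuming Unsloth's `FastLanguageModel` API and the TRL version used in Unsloth's notebooks; the sequence length, LoRA settings, and dataset text field are assumptions not stated in this card.

```python
# A sketch of the training setup, assuming Unsloth's FastLanguageModel API and
# TRL's SFTTrainer. Sequence length, LoRA settings, and the dataset text field
# are assumptions; the hyperparameters mirror the list above.
from unsloth import FastLanguageModel, is_bfloat16_supported
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,    # assumption
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                 # LoRA rank: assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,        # assumption
)

dataset = load_dataset("valeriojob/BA-v.1", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # placeholder field name
    max_seq_length=2048,
    args=TrainingArguments(     # hyperparameters from the list above
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=5,
        max_steps=60,
        learning_rate=2e-4,
        fp16=not is_bfloat16_supported(),
        bf16=is_bfloat16_supported(),
        logging_steps=1,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        output_dir="outputs",
    ),
)
trainer.train()
```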

### Training results

| Training Loss | Step |
|:-------------:|:----:|
| 1.793200      | 1    |
| 1.635900      | 2    |
| 1.493000      | 3    |
| 1.227600      | 5    |
| 0.640500      | 10   |
| 0.438300      | 15   |
| 0.370200      | 20   |
| 0.205100      | 30   |
| 0.094900      | 40   |
| 0.068500      | 50   |
| 0.059400      | 60   |

## Licenses

- **License:** apache-2.0