Update README.md
README.md
CHANGED
```diff
@@ -3,16 +3,16 @@ library_name: peft
 tags:
 - code
 - instruct
--
+- mistral
 datasets:
 - cognitivecomputations/dolphin-coder
-base_model:
+base_model: mistralai/Mistral-7B-v0.1
 license: apache-2.0
 ---
 
 ### Finetuning Overview:
 
-**Model Used:**
+**Model Used:** mistralai/Mistral-7B-v0.1
 
 **Dataset:** cognitivecomputations/dolphin-coder
 
@@ -25,21 +25,22 @@ license: apache-2.0
 With the utilization of [MonsterAPI](https://monsterapi.ai)'s [no-code LLM finetuner](https://monsterapi.ai/finetuning), this finetuning:
 
 - Was achieved with great cost-effectiveness.
-- Completed in a total duration of
-- Costed `$
+- Completed in a total duration of 7hrs 36min for 0.1 epochs using an A6000 48GB GPU.
+- Costed `$15.2` for the entire run
 
 #### Hyperparameters & Additional Details:
 
-- **Epochs:** 1
-- **
-- **Model Path:**
+- **Epochs:** 0.1
+- **Cost for full run:** $15.2
+- **Model Path:** mistralai/Mistral-7B-v0.1
 - **Learning Rate:** 0.0002
 - **Data Split:** 100% train
 - **Gradient Accumulation Steps:** 128
 - **lora r:** 32
 - **lora alpha:** 64
 
-
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6313732454e6e5d9f0f797cd/0O1VKp3SJNfrhTd5earci.png)
 
 ---
 license: apache-2.0
```
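For readers mapping the card's hyperparameters onto code: a minimal sketch of the equivalent `peft`/`transformers` configuration, under stated assumptions. `target_modules` and `lora_dropout` are assumptions (the card does not say which modules MonsterAPI's finetuner adapts), and `output_dir` is a placeholder; the remaining values mirror the card.

```python
# Sketch of the configuration implied by the card's hyperparameters.
# Lines marked "assumed" or "placeholder" are NOT in the card.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=32,                                  # lora r: 32
    lora_alpha=64,                         # lora alpha: 64
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    lora_dropout=0.05,                     # assumed
    bias="none",
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="dolphin-coder-mistral-lora",  # placeholder
    num_train_epochs=0.1,                  # Epochs: 0.1
    learning_rate=2e-4,                    # Learning Rate: 0.0002
    gradient_accumulation_steps=128,       # Gradient Accumulation Steps: 128
)
```

With a per-device batch size of b, 128 accumulation steps yield an effective batch of 128·b, which is how a single A6000 48GB can train with a large effective batch at modest memory cost.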
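Since the card's `library_name` is `peft`, the adapter should load with the standard base-model-plus-adapter pattern. A minimal sketch; `ADAPTER_ID` is a placeholder for this repo's id, which the diff does not show:

```python
# Minimal inference sketch: attach this LoRA adapter to its base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "mistralai/Mistral-7B-v0.1"  # base_model from the card
ADAPTER_ID = "<this-repo-id>"          # placeholder: substitute the adapter repo's id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
model = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(model, ADAPTER_ID)  # loads the LoRA weights on top

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```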