Zangs3011 committed on
Commit a2d4dd9
1 Parent(s): 94f00b0

Update README.md

Files changed (1)
  1. README.md +10 -9
README.md CHANGED
@@ -3,16 +3,16 @@ library_name: peft
 tags:
 - code
 - instruct
-- code-llama
+- mistral
 datasets:
 - cognitivecomputations/dolphin-coder
-base_model: codellama/CodeLlama-7b-hf
+base_model: mistralai/Mistral-7B-v0.1
 license: apache-2.0
 ---
 
 ### Finetuning Overview:
 
-**Model Used:** codellama/CodeLlama-7b-hf
+**Model Used:** mistralai/Mistral-7B-v0.1
 
 **Dataset:** cognitivecomputations/dolphin-coder
 
@@ -25,21 +25,22 @@ license: apache-2.0
 With the utilization of [MonsterAPI](https://monsterapi.ai)'s [no-code LLM finetuner](https://monsterapi.ai/finetuning), this finetuning:
 
 - Was achieved with great cost-effectiveness.
-- Completed in a total duration of 15hr 31mins for 1 epochs using an A6000 48GB GPU.
-- Costed `$31.31` for the entire 1 epoch.
+- Completed in a total duration of 7hrs 36min for 0.1 epochs using an A6000 48GB GPU.
+- Costed `$15.2` for the entire run
 
 #### Hyperparameters & Additional Details:
 
-- **Epochs:** 1
-- **Total Finetuning Cost:** $31.31
-- **Model Path:** codellama/CodeLlama-7b-hf
+- **Epochs:** 0.1
+- **Cost for full run:** $15.2
+- **Model Path:** mistralai/Mistral-7B-v0.1
 - **Learning Rate:** 0.0002
 - **Data Split:** 100% train
 - **Gradient Accumulation Steps:** 128
 - **lora r:** 32
 - **lora alpha:** 64
 
-![Train Loss](https://cdn-uploads.huggingface.co/production/uploads/63ba46aa0a9866b28cb19a14/aNujXePogMlJZmoi1Bq56.png)
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6313732454e6e5d9f0f797cd/0O1VKp3SJNfrhTd5earci.png)
 
 ---
 license: apache-2.0
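
For reference, the hyperparameters listed in the updated card map onto a standard `peft`/`transformers` setup roughly as in the sketch below. This is a minimal reconstruction, not the commit's actual training script: MonsterAPI's finetuner is no-code, so the `output_dir`, the adapter repo id, and the inference step are illustrative assumptions rather than values taken from the diff.

```python
# Hedged sketch reconstructing the card's listed hyperparameters.
# Only r, lora_alpha, learning rate, gradient accumulation, epochs, and the
# base model come from the README; everything else is an assumption.
from peft import LoraConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

lora_config = LoraConfig(
    r=32,                   # "lora r" from the card
    lora_alpha=64,          # "lora alpha" from the card
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="dolphin-coder-lora",  # assumed; not stated in the card
    learning_rate=2e-4,               # "Learning Rate: 0.0002"
    gradient_accumulation_steps=128,  # from the card
    num_train_epochs=0.1,             # partial epoch, per the updated card
)

# Loading the resulting adapter for inference; the adapter repo id below is
# hypothetical, since the commit does not name the repository it lives in.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "Zangs3011/mistral-7B-v0.1-dolphin-coder")
```

Recent `peft` releases supply default `target_modules` (`q_proj`, `v_proj`) for Mistral-family models, which is why the config above can omit them; set `target_modules` explicitly if reproducing on an older `peft` version.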