jlpan committed on
Commit
e11d55a
1 Parent(s): f338e35

update model card README.md

Files changed (1):
  1. README.md +8 -16
README.md CHANGED
@@ -6,7 +6,6 @@ tags:
 model-index:
 - name: starcoder-c2py-snippet1
   results: []
-library_name: peft
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [bigcode/starcoder](https://huggingface.co/bigcode/starcoder) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2014
+- Loss: 0.2601
 
 ## Model description
 
@@ -43,29 +42,22 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 10
-- training_steps: 100
+- lr_scheduler_warmup_steps: 5
+- training_steps: 50
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 8.2651 | 0.1 | 10 | 4.2201 |
-| 0.9761 | 0.2 | 20 | 0.5205 |
-| 0.3183 | 0.3 | 30 | 0.2766 |
-| 0.1887 | 1.04 | 40 | 0.2384 |
-| 0.1867 | 1.14 | 50 | 0.2171 |
-| 0.1732 | 1.24 | 60 | 0.2072 |
-| 0.156 | 1.34 | 70 | 0.2034 |
-| 0.1415 | 2.08 | 80 | 0.2022 |
-| 0.1614 | 2.17 | 90 | 0.2016 |
-| 0.1568 | 2.27 | 100 | 0.2014 |
+| 7.249 | 0.2 | 10 | 2.0348 |
+| 0.6338 | 0.4 | 20 | 0.5047 |
+| 0.3306 | 0.6 | 30 | 0.3064 |
+| 0.2144 | 1.07 | 40 | 0.2655 |
+| 0.2195 | 1.27 | 50 | 0.2601 |
 
 
 ### Framework versions
 
-- PEFT 0.5.0.dev0
-- PEFT 0.5.0.dev0
 - Transformers 4.32.0.dev0
 - Pytorch 2.0.1+cu117
 - Datasets 2.12.0
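The updated hyperparameters (lr_scheduler_type: cosine, lr_scheduler_warmup_steps: 5, training_steps: 50) imply a learning-rate curve like the sketch below: linear warmup over the first 5 steps, then cosine decay to zero by step 50. Note `base_lr` is an assumed placeholder, since the card's actual learning_rate is not shown in this diff hunk.

```python
import math

def lr_at_step(step, base_lr=5e-5, warmup_steps=5, training_steps=50):
    """Cosine schedule with linear warmup, mirroring the card's scheduler
    settings. base_lr is a placeholder value, not taken from the card."""
    if step < warmup_steps:
        # Linear warmup: 0 -> base_lr over the first warmup_steps steps.
        return base_lr * step / max(1, warmup_steps)
    # Cosine decay: base_lr -> 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, training_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

Halving training_steps from 100 to 50 (and warmup from 10 to 5) compresses this whole curve, which is consistent with the shorter results table above.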