jjovalle99 committed on
Commit ede022b
1 Parent(s): 1924896

gemma7b-ft-lora-sql-v2adapters

Files changed (3):
  1. README.md +31 -15
  2. adapter_config.json +6 -6
  3. training_args.bin +1 -1
README.md CHANGED
@@ -20,7 +20,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the generator dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9175
+- Loss: 0.4155
 
 ## Model description
 
@@ -39,29 +39,45 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0002
-- train_batch_size: 4
+- learning_rate: 0.0003
+- train_batch_size: 8
 - eval_batch_size: 8
 - seed: 1399
-- gradient_accumulation_steps: 8
+- gradient_accumulation_steps: 4
 - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: constant
-- lr_scheduler_warmup_steps: 10
-- training_steps: 100
+- lr_scheduler_type: cosine
+- lr_scheduler_warmup_steps: 100
+- training_steps: 500
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 20.0926       | 0.95  | 5    | 18.5680         |
-| 16.2448       | 1.9   | 10   | 10.6329         |
-| 5.1408        | 2.86  | 15   | 1.3605          |
-| 1.1787        | 3.81  | 20   | 0.9861          |
-| 0.9186        | 4.76  | 25   | 0.8888          |
-| 0.7896        | 5.71  | 30   | 0.8632          |
-| 0.6787        | 6.67  | 35   | 0.8657          |
-| 0.5448        | 7.62  | 40   | 0.9175          |
+| 16.1657       | 0.06  | 20   | 13.6485         |
+| 7.8281        | 0.13  | 40   | 0.7808          |
+| 0.6243        | 0.19  | 60   | 0.5270          |
+| 0.5179        | 0.25  | 80   | 0.4859          |
+| 0.4908        | 0.32  | 100  | 0.4754          |
+| 0.4752        | 0.38  | 120  | 0.4600          |
+| 0.4877        | 0.45  | 140  | 0.4584          |
+| 0.4626        | 0.51  | 160  | 0.4560          |
+| 0.4569        | 0.57  | 180  | 0.4428          |
+| 0.4504        | 0.64  | 200  | 0.4354          |
+| 0.4432        | 0.7   | 220  | 0.4348          |
+| 0.4395        | 0.76  | 240  | 0.4317          |
+| 0.4338        | 0.83  | 260  | 0.4256          |
+| 0.4308        | 0.89  | 280  | 0.4260          |
+| 0.4283        | 0.95  | 300  | 0.4210          |
+| 0.4146        | 1.02  | 320  | 0.4225          |
+| 0.3848        | 1.08  | 340  | 0.4186          |
+| 0.3812        | 1.14  | 360  | 0.4185          |
+| 0.38          | 1.21  | 380  | 0.4200          |
+| 0.3795        | 1.27  | 400  | 0.4171          |
+| 0.3766        | 1.34  | 420  | 0.4174          |
+| 0.3772        | 1.4   | 440  | 0.4136          |
+| 0.3777        | 1.46  | 460  | 0.4148          |
+| 0.379         | 1.53  | 480  | 0.4155          |
 
 
 ### Framework versions
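For reference, the updated hyperparameters above map onto a Hugging Face `TrainingArguments` setup roughly like the sketch below. This is a reconstruction rather than the actual training script from this commit; `output_dir` is a placeholder, and the single-device assumption behind the batch-size arithmetic is mine.

```python
# Sketch of a TrainingArguments setup matching the updated model card
# (assumes a single GPU, so 8 * 4 = 32 matches total_train_batch_size).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gemma7b-ft-lora-sql-v2",  # placeholder name
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=500,
    seed=1399,
)
```

Note that `total_train_batch_size` is derived rather than set directly: `per_device_train_batch_size * gradient_accumulation_steps * num_devices`, i.e. 8 * 4 * 1 = 32 here, the same effective batch as the previous run's 4 * 8.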
adapter_config.json CHANGED
@@ -10,22 +10,22 @@
   "layers_to_transform": null,
   "loftq_config": {},
   "lora_alpha": 32,
-  "lora_dropout": 0.1,
+  "lora_dropout": 0.05,
   "megatron_config": null,
   "megatron_core": "megatron.core",
   "modules_to_save": null,
   "peft_type": "LORA",
-  "r": 8,
+  "r": 16,
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
+    "k_proj",
     "v_proj",
+    "down_proj",
+    "q_proj",
     "o_proj",
     "up_proj",
-    "gate_proj",
-    "k_proj",
-    "q_proj",
-    "down_proj"
+    "gate_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e7ab74726f0a87bc9b9a37cc68bc41fc4d2d4fc803dcc9ba3d76745392bc951a
+oid sha256:f90e03f124eede9ebe92ac2a7cb4cd187068b1c799e0422942648c31ca3de583
 size 4920
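For anyone pulling this commit, the adapter loads with the usual PEFT pattern; a minimal sketch, where the adapter repo id is an assumption inferred from the commit message, not stated in the diff:

```python
# Sketch: load the fine-tuned LoRA adapter on top of the base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
# Repo id below is assumed from the commit context.
model = PeftModel.from_pretrained(base, "jjovalle99/gemma7b-ft-lora-sql-v2")
```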