yunaseo committed
Commit c12296c
1 Parent(s): d888923

yunaseo/google_gemma_lora_emotion_detection

Files changed (4)
  1. README.md +35 -57
  2. adapter_config.json +35 -0
  3. adapter_model.safetensors +3 -0
  4. training_args.bin +1 -1
README.md CHANGED
@@ -1,8 +1,9 @@
  ---
  license: gemma
- base_model: google/gemma-1.1-2b-it
+ library_name: peft
  tags:
  - generated_from_trainer
+ base_model: google/gemma-1.1-2b-it
  metrics:
  - accuracy
  model-index:
@@ -17,10 +18,10 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.8963
- - F1 Micro: 0.6927
- - F1 Macro: 0.5713
- - Accuracy: 0.2537
+ - Loss: 0.4792
+ - F1 Micro: 0.6970
+ - F1 Macro: 0.6089
+ - Accuracy: 0.2104
 
  ## Model description
 
@@ -40,11 +41,11 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 0.0001
- - train_batch_size: 8
- - eval_batch_size: 8
+ - train_batch_size: 16
+ - eval_batch_size: 16
  - seed: 42
  - gradient_accumulation_steps: 4
- - total_train_batch_size: 32
+ - total_train_batch_size: 64
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - num_epochs: 5
@@ -53,59 +54,36 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Accuracy |
  |:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:--------:|
- | 0.7843 | 0.1035 | 20 | 0.6332 | 0.6164 | 0.4369 | 0.1301 |
- | 0.5717 | 0.2070 | 40 | 0.5575 | 0.6468 | 0.5359 | 0.1793 |
- | 0.5341 | 0.3105 | 60 | 0.5292 | 0.6788 | 0.5562 | 0.2006 |
- | 0.5054 | 0.4140 | 80 | 0.5143 | 0.6830 | 0.5716 | 0.2045 |
- | 0.4748 | 0.5175 | 100 | 0.5039 | 0.6875 | 0.5797 | 0.1754 |
- | 0.5144 | 0.6210 | 120 | 0.5028 | 0.6804 | 0.5988 | 0.1631 |
- | 0.5055 | 0.7245 | 140 | 0.5101 | 0.6823 | 0.5728 | 0.2039 |
- | 0.5124 | 0.8279 | 160 | 0.4851 | 0.6854 | 0.5947 | 0.1793 |
- | 0.488 | 0.9314 | 180 | 0.4906 | 0.6777 | 0.5947 | 0.1638 |
- | 0.4867 | 1.0349 | 200 | 0.4970 | 0.6845 | 0.6033 | 0.2227 |
- | 0.3367 | 1.1384 | 220 | 0.5478 | 0.6977 | 0.5848 | 0.2188 |
- | 0.3342 | 1.2419 | 240 | 0.5531 | 0.6860 | 0.5898 | 0.2110 |
- | 0.3161 | 1.3454 | 260 | 0.5754 | 0.6719 | 0.5768 | 0.1955 |
- | 0.3312 | 1.4489 | 280 | 0.5335 | 0.6840 | 0.5906 | 0.1961 |
- | 0.3633 | 1.5524 | 300 | 0.5255 | 0.6799 | 0.5940 | 0.1883 |
- | 0.3199 | 1.6559 | 320 | 0.5461 | 0.6722 | 0.5868 | 0.1922 |
- | 0.3385 | 1.7594 | 340 | 0.5417 | 0.6888 | 0.5795 | 0.2149 |
- | 0.3292 | 1.8629 | 360 | 0.5324 | 0.6883 | 0.5969 | 0.1981 |
- | 0.3347 | 1.9664 | 380 | 0.5274 | 0.6890 | 0.5881 | 0.2006 |
- | 0.2122 | 2.0699 | 400 | 0.6957 | 0.6755 | 0.5671 | 0.2350 |
- | 0.1289 | 2.1734 | 420 | 0.6570 | 0.6814 | 0.5825 | 0.1974 |
- | 0.1505 | 2.2768 | 440 | 0.6495 | 0.6854 | 0.5857 | 0.2117 |
- | 0.1345 | 2.3803 | 460 | 0.7193 | 0.6813 | 0.5681 | 0.2045 |
- | 0.1438 | 2.4838 | 480 | 0.7042 | 0.6782 | 0.5649 | 0.2065 |
- | 0.14 | 2.5873 | 500 | 0.6777 | 0.6855 | 0.5826 | 0.2104 |
- | 0.146 | 2.6908 | 520 | 0.6699 | 0.6837 | 0.5840 | 0.2129 |
- | 0.138 | 2.7943 | 540 | 0.6954 | 0.6884 | 0.5820 | 0.2369 |
- | 0.1302 | 2.8978 | 560 | 0.7090 | 0.6828 | 0.5777 | 0.2220 |
- | 0.1324 | 3.0013 | 580 | 0.7075 | 0.6845 | 0.5818 | 0.2259 |
- | 0.0472 | 3.1048 | 600 | 0.8346 | 0.6867 | 0.5575 | 0.2414 |
- | 0.0544 | 3.2083 | 620 | 0.7725 | 0.6785 | 0.5706 | 0.2207 |
- | 0.0483 | 3.3118 | 640 | 0.8136 | 0.6865 | 0.5659 | 0.2291 |
- | 0.0465 | 3.4153 | 660 | 0.8333 | 0.6797 | 0.5613 | 0.2278 |
- | 0.0511 | 3.5188 | 680 | 0.8234 | 0.6852 | 0.5641 | 0.2265 |
- | 0.0511 | 3.6223 | 700 | 0.8298 | 0.6905 | 0.5712 | 0.2401 |
- | 0.0406 | 3.7257 | 720 | 0.8292 | 0.6886 | 0.5721 | 0.2421 |
- | 0.0565 | 3.8292 | 740 | 0.8266 | 0.6927 | 0.5721 | 0.2408 |
- | 0.0554 | 3.9327 | 760 | 0.7764 | 0.6887 | 0.5765 | 0.2350 |
- | 0.0319 | 4.0362 | 780 | 0.8450 | 0.6825 | 0.5650 | 0.2388 |
- | 0.0161 | 4.1397 | 800 | 0.8948 | 0.6892 | 0.5648 | 0.2524 |
- | 0.0174 | 4.2432 | 820 | 0.9146 | 0.6910 | 0.5659 | 0.2570 |
- | 0.0168 | 4.3467 | 840 | 0.9068 | 0.6874 | 0.5657 | 0.2414 |
- | 0.0184 | 4.4502 | 860 | 0.9225 | 0.6872 | 0.5615 | 0.2531 |
- | 0.0123 | 4.5537 | 880 | 0.9062 | 0.6882 | 0.5639 | 0.2511 |
- | 0.0149 | 4.6572 | 900 | 0.9087 | 0.6889 | 0.5660 | 0.2492 |
- | 0.0199 | 4.7607 | 920 | 0.8948 | 0.6917 | 0.5722 | 0.2472 |
- | 0.0144 | 4.8642 | 940 | 0.8944 | 0.6929 | 0.5724 | 0.2518 |
- | 0.015 | 4.9677 | 960 | 0.8963 | 0.6925 | 0.5709 | 0.2531 |
+ | 0.7081 | 0.2067 | 20 | 0.6048 | 0.6244 | 0.5113 | 0.1528 |
+ | 0.5228 | 0.4134 | 40 | 0.5096 | 0.6713 | 0.5815 | 0.1883 |
+ | 0.5048 | 0.6202 | 60 | 0.4928 | 0.7002 | 0.5865 | 0.2155 |
+ | 0.5129 | 0.8269 | 80 | 0.4792 | 0.6970 | 0.6089 | 0.2104 |
+ | 0.4842 | 1.0336 | 100 | 0.4801 | 0.6972 | 0.6023 | 0.2369 |
+ | 0.3372 | 1.2403 | 120 | 0.5545 | 0.6687 | 0.5877 | 0.1761 |
+ | 0.3302 | 1.4470 | 140 | 0.5374 | 0.6895 | 0.6020 | 0.2019 |
+ | 0.3342 | 1.6537 | 160 | 0.5330 | 0.6860 | 0.5993 | 0.2117 |
+ | 0.3392 | 1.8605 | 180 | 0.5190 | 0.6894 | 0.5913 | 0.2006 |
+ | 0.2844 | 2.0672 | 200 | 0.5853 | 0.6891 | 0.5819 | 0.2369 |
+ | 0.1458 | 2.2739 | 220 | 0.7038 | 0.6743 | 0.5749 | 0.2097 |
+ | 0.1508 | 2.4806 | 240 | 0.6808 | 0.6802 | 0.5834 | 0.1994 |
+ | 0.1481 | 2.6873 | 260 | 0.7026 | 0.6773 | 0.5721 | 0.2 |
+ | 0.1378 | 2.8941 | 280 | 0.7336 | 0.6790 | 0.5768 | 0.2162 |
+ | 0.0961 | 3.1008 | 300 | 0.8397 | 0.6709 | 0.5465 | 0.2272 |
+ | 0.0552 | 3.3075 | 320 | 0.8260 | 0.6743 | 0.5654 | 0.2168 |
+ | 0.0509 | 3.5142 | 340 | 0.8692 | 0.6777 | 0.5666 | 0.2233 |
+ | 0.0489 | 3.7209 | 360 | 0.8505 | 0.6874 | 0.5722 | 0.2388 |
+ | 0.0526 | 3.9276 | 380 | 0.8269 | 0.6842 | 0.5778 | 0.2233 |
+ | 0.0278 | 4.1344 | 400 | 0.9280 | 0.6813 | 0.5557 | 0.2414 |
+ | 0.0187 | 4.3411 | 420 | 0.9390 | 0.6829 | 0.5588 | 0.2382 |
+ | 0.0169 | 4.5478 | 440 | 0.9510 | 0.6834 | 0.5612 | 0.2485 |
+ | 0.0158 | 4.7545 | 460 | 0.9325 | 0.6819 | 0.5612 | 0.2427 |
+ | 0.0161 | 4.9612 | 480 | 0.9311 | 0.6822 | 0.5634 | 0.2440 |
 
 
  ### Framework versions
 
+ - PEFT 0.10.0
  - Transformers 4.40.2
  - Pytorch 2.2.1+cu121
  - Datasets 2.19.1
- - Tokenizers 0.19.1
+ - Tokenizers 0.19.1
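For reference, the updated hyperparameters above map one-to-one onto a `transformers.TrainingArguments` object; the sketch below is a reconstruction under that assumption (Transformers 4.40.2). The `output_dir` and the 20-step eval cadence are illustrative guesses, not values recorded in this commit; note the effective batch size of 16 × 4 = 64 from gradient accumulation.

```python
# Sketch only: TrainingArguments matching the hyperparameters listed above.
# output_dir and the eval cadence are assumptions, not stored in this commit.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma_lora_emotion_detection",  # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # effective train batch: 16 * 4 = 64
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,     # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,  # and epsilon=1e-08
    evaluation_strategy="steps",
    eval_steps=20,      # inferred from the 20-step eval rows in the table
)
```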
adapter_config.json ADDED
@@ -0,0 +1,35 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "google/gemma-1.1-2b-it",
+   "bias": "lora_only",
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 256,
+   "lora_dropout": 0.01,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 128,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "gate_proj",
+     "q_proj",
+     "v_proj",
+     "down_proj",
+     "up_proj",
+     "score",
+     "o_proj",
+     "k_proj"
+   ],
+   "task_type": "SEQ_CLS",
+   "use_dora": false,
+   "use_rslora": false
+ }
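This config corresponds directly to a `peft.LoraConfig`. A hedged sketch of rebuilding it in code follows (PEFT 0.10.0); the `num_labels` and `problem_type` values are assumptions, since the emotion label set is not part of this commit.

```python
# Sketch: the LoraConfig implied by adapter_config.json (PEFT 0.10.0).
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.01,
    bias="lora_only",            # train only the LoRA layers' biases
    task_type=TaskType.SEQ_CLS,
    target_modules=[
        "gate_proj", "q_proj", "v_proj", "down_proj",
        "up_proj", "score", "o_proj", "k_proj",
    ],
)

base = AutoModelForSequenceClassification.from_pretrained(
    "google/gemma-1.1-2b-it",
    num_labels=28,  # assumption: set to the actual emotion label count
    problem_type="multi_label_classification",  # assumption, consistent with F1 micro/macro
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```

Note that `"score"` (the classification head) appears in `target_modules`, so the head is LoRA-adapted rather than saved in full via `modules_to_save`.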
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1cdd6c5dd3983ce0e322336bfe21c89e94579dfb5ac094a05a5c4ef0a43d33d9
+ size 630860528
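Like most large binaries on the Hub, `adapter_model.safetensors` is stored as a Git LFS pointer: the repository tracks only the `oid sha256:...` and `size` fields, while the ~631 MB payload lives in LFS storage. A minimal sketch for checking a downloaded copy against the pointer (local file path assumed):

```python
# Sketch: verify a downloaded LFS object against the pointer's sha256 oid.
import hashlib

EXPECTED_OID = "1cdd6c5dd3983ce0e322336bfe21c89e94579dfb5ac094a05a5c4ef0a43d33d9"

digest = hashlib.sha256()
with open("adapter_model.safetensors", "rb") as f:    # assumed local path
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        digest.update(chunk)

assert digest.hexdigest() == EXPECTED_OID, "checksum mismatch"
```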
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0d88d070736fcf2cc8a2ba8aa754a00ced2262d0ed4a7816ab632d728fac8986
+ oid sha256:ea96437ebc9da0288a42bc399bddcb8a110fe9cddc3026a9816daa08f38219c4
  size 5112
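`training_args.bin` is the `TrainingArguments` object serialized by the `Trainer`; only its sha256 changed in this commit (same 5112-byte size). If you want to inspect it, a sketch (assumes a local download and PyTorch 2.2.x, where `torch.load` unpickles arbitrary objects by default):

```python
# Sketch: inspect the serialized TrainingArguments (PyTorch 2.2.x).
import torch

args = torch.load("training_args.bin")  # assumed local path
print(args.learning_rate, args.per_device_train_batch_size)
```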