sam2ai committed
Commit eae1fa6
1 Parent(s): aebd2d6

End of training

Files changed (2)
  1. README.md +181 -0
  2. adapter_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,181 @@
+ ---
+ license: other
+ library_name: peft
+ tags:
+ - axolotl
+ - generated_from_trainer
+ base_model: google/gemma-2b
+ model-index:
+ - name: gemma_odia_2b
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.4.0`
+ ```yaml
+ # use google/gemma-7b if you have access
+ base_model: google/gemma-2b
+ model_type: AutoModelForCausalLM
+ tokenizer_type: AutoTokenizer
+
+ load_in_8bit: false
+ load_in_4bit: true
+ strict: false
+
+ # huggingface repo
+ datasets:
+   - path: OdiaGenAIdata/culturax-odia
+     type: completion
+ val_set_size: 0.1
+ output_dir: ./gemma-odia-2b-pretrain
+ hub_model_id: sam2ai/gemma_odia_2b
+
+ adapter: qlora
+ lora_r: 32
+ lora_alpha: 16
+ lora_dropout: 0.05
+ lora_target_linear: true
+
+ sequence_len: 4096
+ sample_packing: true
+ pad_to_sequence_len: true
+
+ wandb_project: gemma-completion-2b-odia
+ wandb_entity:
+ wandb_watch:
+ wandb_name:
+ wandb_log_model:
+
+
+ gradient_accumulation_steps: 3
+ micro_batch_size: 2
+ num_epochs: 10
+ optimizer: adamw_bnb_8bit
+ lr_scheduler: cosine
+ learning_rate: 0.0002
+
+ train_on_inputs: false
+ group_by_length: false
+ bf16: auto
+ fp16:
+ tf32: false
+
+ gradient_checkpointing: true
+ early_stopping_patience:
+ resume_from_checkpoint:
+ local_rank:
+ logging_steps: 1
+ xformers_attention:
+ flash_attention: false
+
+ warmup_ratio: 0.1
+ evals_per_epoch: 4
+ eval_table_size:
+ eval_max_new_tokens: 128
+ saves_per_epoch: 1
+ debug:
+ deepspeed:
+ weight_decay: 0.0
+ fsdp:
+ fsdp_config:
+ special_tokens:
+
+ ```
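+
+ A config like this is typically launched with the axolotl CLI, for example `accelerate launch -m axolotl.cli.train config.yml`; the exact invocation depends on the local axolotl and accelerate setup.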
+
+ </details><br>
+
+ # gemma_odia_2b
+
+ This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the OdiaGenAIdata/culturax-odia dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 13.3986
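+
+ A minimal usage sketch, assuming `transformers`, `peft`, and `bitsandbytes` are installed; the prompt, dtype, and generation settings below are illustrative and not part of the training setup:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+ from peft import PeftModel
+
+ base_id = "google/gemma-2b"          # may require accepting the Gemma license on the Hub
+ adapter_id = "sam2ai/gemma_odia_2b"  # hub_model_id from the axolotl config above
+
+ tokenizer = AutoTokenizer.from_pretrained(base_id)
+ base = AutoModelForCausalLM.from_pretrained(
+     base_id,
+     quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # matches load_in_4bit: true
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+ model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
+
+ prompt = "ଓଡ଼ିଶା ଏକ"  # Odia text, since the adapter was trained on Odia completion data
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ output = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```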
+
+ ## Model description
+
+ gemma_odia_2b is a QLoRA adapter (LoRA r=32, alpha=16, dropout 0.05, all linear layers targeted) for google/gemma-2b, trained with axolotl for Odia-language text completion.
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ Training used the OdiaGenAIdata/culturax-odia dataset in completion format, with 10% of the data held out for evaluation (`val_set_size: 0.1`).
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 3
+ - total_train_batch_size: 48 (see note below)
+ - total_eval_batch_size: 16
+ - optimizer: 8-bit AdamW (`adamw_bnb_8bit`) with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 87
+ - num_epochs: 10
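+
+ The batch-size totals follow from the values above: total_train_batch_size = train_batch_size × gradient_accumulation_steps × num_devices = 2 × 3 × 8 = 48, and total_eval_batch_size = eval_batch_size × num_devices = 2 × 8 = 16.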
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:-----:|:---------------:|
+ | 48.3127 | 0.0 | 1 | 48.2905 |
+ | 21.4891 | 0.25 | 449 | 21.4957 |
+ | 25.8116 | 0.5 | 898 | 26.0510 |
+ | 25.3858 | 0.75 | 1347 | 25.6013 |
+ | 16.9215 | 1.0 | 1796 | 16.9936 |
+ | 16.7894 | 1.24 | 2245 | 16.7975 |
+ | 16.8564 | 1.49 | 2694 | 17.0068 |
+ | 16.8912 | 1.74 | 3143 | 17.0482 |
+ | 16.9407 | 1.99 | 3592 | 17.0556 |
+ | 16.7487 | 2.22 | 4041 | 16.8123 |
+ | 17.7797 | 2.47 | 4490 | 18.1220 |
+ | 14.0039 | 2.72 | 4939 | 14.0630 |
+ | 14.7386 | 2.97 | 5388 | 14.7828 |
+ | 14.9965 | 3.21 | 5837 | 15.2212 |
+ | 15.1822 | 3.46 | 6286 | 15.6448 |
+ | 14.1876 | 3.71 | 6735 | 14.5398 |
+ | 16.6416 | 3.96 | 7184 | 16.9006 |
+ | 17.0568 | 4.19 | 7633 | 17.1808 |
+ | 17.4472 | 4.44 | 8082 | 17.5766 |
+ | 17.4219 | 4.69 | 8531 | 17.5393 |
+ | 17.3064 | 4.94 | 8980 | 17.5467 |
+ | 17.2741 | 5.18 | 9429 | 17.5657 |
+ | 16.9905 | 5.43 | 9878 | 17.3912 |
+ | 16.642 | 5.68 | 10327 | 17.1920 |
+ | 16.6345 | 5.93 | 10776 | 17.1085 |
+ | 15.5702 | 6.16 | 11225 | 16.0494 |
+ | 15.3421 | 6.41 | 11674 | 15.9889 |
+ | 13.1025 | 6.66 | 12123 | 13.1419 |
+ | 13.1904 | 6.91 | 12572 | 13.2151 |
+ | 13.261 | 7.15 | 13021 | 13.3119 |
+ | 13.2333 | 7.4 | 13470 | 13.3195 |
+ | 13.2705 | 7.65 | 13919 | 13.3380 |
+ | 13.3417 | 7.9 | 14368 | 13.3804 |
+ | 13.3553 | 8.13 | 14817 | 13.3902 |
+ | 13.4078 | 8.38 | 15266 | 13.4614 |
+ | 13.394 | 8.63 | 15715 | 13.4338 |
+ | 13.3754 | 8.88 | 16164 | 13.4149 |
+ | 13.3487 | 9.12 | 16613 | 13.4044 |
+ | 13.3807 | 9.37 | 17062 | 13.3903 |
+ | 13.3766 | 9.62 | 17511 | 13.3986 |
+
+
+ ### Framework versions
+
+ - PEFT 0.9.0
+ - Transformers 4.40.0.dev0
+ - Pytorch 2.4.0.dev20240326+rocm6.0
+ - Datasets 2.18.0
+ - Tokenizers 0.15.0
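+
+ The accompanying `adapter_model.bin` (about 157 MB, stored via Git LFS) contains only the LoRA adapter weights; the gemma-2b base weights are not included and are loaded separately, as in the sketch above.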
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8cc168f0cc1d8d3d613068c4efd1deb34bb6f8bca2213abc9976d2c0e5721024
+ size 156984186