ahmedsamirio committed on
Commit 7a20afa
1 Parent(s): 0b3e3e8

End of training

Files changed (2):
  1. README.md +155 -0
  2. adapter_model.bin +3 -0
README.md ADDED
---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model-index:
- name: alpaca-cleaned-tiny-llama
  results: []
---

13
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
14
+ should probably proofread and complete it, then remove this comment. -->
15
+
16
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
17
+ <details><summary>See axolotl config</summary>
18
+
19
+ axolotl version: `0.4.1`
```yaml
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer

load_in_8bit: true
load_in_4bit: false
strict: false

datasets:
  # - path: mhenrichsen/alpaca_2k_test
  - path: yahma/alpaca-cleaned
    type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/alpaca-cleaned-tiny-llama
hub_model_id: ahmedsamirio/alpaca-cleaned-tiny-llama

sequence_len: 4096
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true

adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: alpaca-tiny-llama
wandb_entity: ahmedsamirio
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:

```

</details><br>

# alpaca-cleaned-tiny-llama

This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1115

## Model description

This is a LoRA adapter for [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T), trained with axolotl `0.4.1` on the cleaned Alpaca instruction dataset. The adapter targets all linear layers with rank 32, alpha 16, and dropout 0.05; the base model was loaded in 8-bit during training. The adapter must be applied on top of the base model for inference.
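
The adapter settings above correspond roughly to the following `peft.LoraConfig`. The `target_modules` list is an assumption about what `lora_target_linear: true` resolves to on a Llama-style model, so treat this as a sketch rather than the exact configuration used:

```python
# Sketch: an approximate PEFT LoraConfig matching the adapter settings in the
# axolotl config above. The target_modules list is an assumption for what
# `lora_target_linear: true` selects on a Llama architecture.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```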

## Intended uses & limitations

The adapter is intended for instruction-following text generation with the TinyLlama base model, using the Alpaca prompt format it was trained on. No evaluation beyond the validation loss reported below is available.
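
A minimal loading and generation sketch, assuming the adapter is published under the `hub_model_id` from the config (`ahmedsamirio/alpaca-cleaned-tiny-llama`) and using the standard Alpaca instruction template:

```python
# Sketch: load the base model, apply this LoRA adapter with PEFT, and generate.
# The adapter repo id is taken from hub_model_id in the axolotl config above;
# the prompt follows the standard Alpaca template implied by `type: alpaca`.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
adapter_id = "ahmedsamirio/alpaca-cleaned-tiny-llama"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA adapter is in one sentence.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```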

## Training and evaluation data

The model was fine-tuned on [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) in the `alpaca` prompt format, with 5% of the data held out as the evaluation set (`val_set_size: 0.05` in the config above).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (adamw_bnb_8bit) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
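
For reference, these settings map approximately onto `transformers.TrainingArguments` as sketched below; axolotl constructs the actual arguments internally, so this is an approximation rather than the exact training call:

```python
# Sketch: an approximate TrainingArguments equivalent of the hyperparameters
# listed above. bf16 is assumed from `bf16: auto`; axolotl's internal setup
# may differ in details.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs/alpaca-cleaned-tiny-llama",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,   # effective batch size 2 * 4 = 8 on one GPU
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    optim="adamw_bnb_8bit",
    weight_decay=0.0,
    seed=42,
    bf16=True,
    gradient_checkpointing=True,
    logging_steps=1,
)
```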

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3435        | 0.0029 | 1    | 1.4128          |
| 1.1926        | 0.2498 | 85   | 1.1723          |
| 1.1275        | 0.4996 | 170  | 1.1518          |
| 1.1153        | 0.7494 | 255  | 1.1410          |
| 1.1289        | 0.9993 | 340  | 1.1312          |
| 1.1267        | 1.2278 | 425  | 1.1276          |
| 1.1053        | 1.4776 | 510  | 1.1220          |
| 1.1261        | 1.7274 | 595  | 1.1172          |
| 1.0991        | 1.9772 | 680  | 1.1144          |
| 1.0295        | 2.2057 | 765  | 1.1157          |
| 1.086         | 2.4555 | 850  | 1.1131          |
| 1.029         | 2.7054 | 935  | 1.1114          |
| 1.019         | 2.9552 | 1020 | 1.1108          |
| 1.0158        | 3.1830 | 1105 | 1.1113          |
| 1.0297        | 3.4328 | 1190 | 1.1123          |
| 1.0571        | 3.6826 | 1275 | 1.1116          |
| 1.0306        | 3.9324 | 1360 | 1.1115          |


### Framework versions

- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
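
The adapter can also be merged into the base weights for standalone use via PEFT's `merge_and_unload`; a minimal sketch (the output directory name is illustrative):

```python
# Sketch: merge the LoRA adapter into the base model and save a standalone copy.
# merge_and_unload() folds the low-rank updates into the base weights.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "ahmedsamirio/alpaca-cleaned-tiny-llama")

merged = model.merge_and_unload()
merged.save_pretrained("alpaca-cleaned-tiny-llama-merged")       # hypothetical output dir
AutoTokenizer.from_pretrained(base_id).save_pretrained("alpaca-cleaned-tiny-llama-merged")
```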
adapter_model.bin ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:eb1af93775600686d7059836a75c7fd62648b2f44ed5afe1abe131e001661d0f
size 101036698