sahanes committed on
Commit e5433b8
1 Parent(s): ab65331

End of training

Files changed (2)
  1. README.md +154 -3
  2. adapter_model.bin +3 -0
README.md CHANGED
@@ -1,3 +1,154 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ library_name: peft
+ tags:
+ - axolotl
+ - generated_from_trainer
+ base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
+ model-index:
+ - name: TinyLlamaB_alpaca_2k
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.4.0`
+ ```yaml
+ base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
+ model_type: LlamaForCausalLM
+ tokenizer_type: LlamaTokenizer
+
+ load_in_8bit: true
+ load_in_4bit: false
+ strict: false
+
+ datasets:
+   - path: mhenrichsen/alpaca_2k_test
+     type: alpaca
+ dataset_prepared_path:
+ val_set_size: 0.05
+ output_dir: ./outputs/simple-lora-out/
+ hub_model_id: sahanes/TinyLlamaB_alpaca_2k
+
+ sequence_len: 4096
+ sample_packing: true
+ eval_sample_packing: false
+ pad_to_sequence_len: true
+
+ adapter: lora
+ lora_model_dir:
+ lora_r: 32
+ lora_alpha: 16
+ lora_dropout: 0.05
+ lora_target_linear: true
+ lora_fan_in_fan_out:
+
+ wandb_project:
+ wandb_entity:
+ wandb_watch:
+ wandb_name:
+ wandb_log_model:
+
+ gradient_accumulation_steps: 4
+ micro_batch_size: 2
+ num_epochs: 4
+ optimizer: adamw_bnb_8bit
+ lr_scheduler: cosine
+ learning_rate: 0.0002
+
+ train_on_inputs: false
+ group_by_length: false
+ bf16: auto
+ fp16:
+ tf32: false
+
+ gradient_checkpointing: true
+ early_stopping_patience:
+ resume_from_checkpoint:
+ local_rank:
+ logging_steps: 1
+ xformers_attention:
+ flash_attention: true
+
+ warmup_steps: 10
+ evals_per_epoch: 4
+ saves_per_epoch: 1
+ debug:
+ deepspeed:
+ weight_decay: 0.0
+ fsdp:
+ fsdp_config:
+ special_tokens:
+
+ ```
+
+ </details><br>
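
For reference, the LoRA settings above correspond roughly to the following PEFT configuration. This is a minimal sketch: the `target_modules` list is an assumption, since `lora_target_linear: true` simply tells axolotl to target every linear projection layer of the Llama architecture.

```python
# Approximate PEFT equivalent of the LoRA block in the axolotl config above.
# `target_modules` is an assumption: `lora_target_linear: true` targets all
# linear projections, which for Llama-style models are the modules listed here.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,               # lora_r
    lora_alpha=16,      # lora_alpha
    lora_dropout=0.05,  # lora_dropout
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```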
+
+ # TinyLlamaB_alpaca_2k
+
+ This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the mhenrichsen/alpaca_2k_test dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.2126
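
A minimal usage sketch for loading the adapter on top of the base model with Transformers and PEFT is shown below; the Alpaca-style prompt template is an assumption based on the `type: alpaca` dataset setting in the config.

```python
# Minimal sketch: apply this LoRA adapter to the TinyLlama base model and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, "sahanes/TinyLlamaB_alpaca_2k")

# Assumed Alpaca-style prompt, matching the `type: alpaca` dataset format.
prompt = "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```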
+
+ ## Model description
+
+ A LoRA (PEFT) adapter for TinyLlama-1.1B-intermediate-step-1431k-3T, trained with axolotl on Alpaca-format instruction data.
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ Trained on mhenrichsen/alpaca_2k_test (Alpaca-format instructions), with 5% of the examples held out as the evaluation set (`val_set_size: 0.05` in the config above).
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 10
+ - num_epochs: 4
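
The total train batch size of 8 follows from micro_batch_size × gradient_accumulation_steps × num_devices = 2 × 4 × 1, i.e. a single-device run with gradient accumulation.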
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 1.4615 | 0.08 | 1 | 1.4899 |
+ | 1.3843 | 0.24 | 3 | 1.4852 |
+ | 1.367 | 0.48 | 6 | 1.4392 |
+ | 1.2688 | 0.72 | 9 | 1.3396 |
+ | 1.2259 | 0.96 | 12 | 1.2948 |
+ | 1.2523 | 1.16 | 15 | 1.2808 |
+ | 1.2271 | 1.40 | 18 | 1.2543 |
+ | 1.1347 | 1.64 | 21 | 1.2350 |
+ | 1.2699 | 1.88 | 24 | 1.2299 |
+ | 1.1476 | 2.08 | 27 | 1.2232 |
+ | 1.1524 | 2.32 | 30 | 1.2192 |
+ | 1.1944 | 2.56 | 33 | 1.2198 |
+ | 1.1114 | 2.80 | 36 | 1.2165 |
+ | 1.151 | 3.04 | 39 | 1.2135 |
+ | 1.1887 | 3.24 | 42 | 1.2125 |
+ | 1.101 | 3.48 | 45 | 1.2122 |
+ | 1.1879 | 3.72 | 48 | 1.2126 |
+
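
For context, the final validation loss of 1.2126 corresponds to a perplexity of roughly exp(1.2126) ≈ 3.4 on the 5% held-out split of mhenrichsen/alpaca_2k_test.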
+
+ ### Framework versions
+
+ - PEFT 0.10.0
+ - Transformers 4.40.2
+ - PyTorch 2.1.2+cu118
+ - Datasets 2.19.1
+ - Tokenizers 0.19.1
adapter_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b774497ab17dee0158fea5982160c0faf4423e04cbd389d0c8fc10d0a0bb0839
+ size 101036698
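
This file is stored as a Git LFS pointer; the uploaded artifact is about 101 MB (101,036,698 bytes) and contains only the LoRA adapter weights, not the full 1.1B-parameter base model.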