Kareem Amr committed on
Commit
9cf35a4
1 Parent(s): 426489e

End of training

Files changed (2)
  1. README.md +26 -25
  2. adapter_model.bin +1 -1
README.md CHANGED

````diff
@@ -2,10 +2,11 @@
 license: apache-2.0
 library_name: peft
 tags:
+- axolotl
 - generated_from_trainer
 base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
 model-index:
-- name: outputs/lora-out
+- name: tinyllama-1.1B_alpaca_2k_lora
   results: []
 ---
 
@@ -17,13 +18,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 axolotl version: `0.4.0`
 ```yaml
-# # Upload the final model to Huggingface
-# hub_model_id: kareemamrr/tinyllama-1.1B_alpaca_2k_lora
+# Upload the final model to Huggingface
+hub_model_id: kareemamrr/tinyllama-1.1B_alpaca_2k_lora
 
-# # Store the training logs in weights and biases
-# wandb_entity: kamr54
-# wandb_project: tinyllama-1.1B_alpaca_2k_lora
-# wandb_name: lora-run
+# Store the training logs in weights and biases
+wandb_entity: kamr54
+wandb_project: tinyllama-1.1B_alpaca_2k_peft
+wandb_name: lora-run
 
 base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
 model_type: LlamaForCausalLM
@@ -88,11 +89,11 @@ special_tokens:
 
 </details><br>
 
-# outputs/lora-out
+# tinyllama-1.1B_alpaca_2k_lora
 
 This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.2118
+- Loss: 1.2127
 
 ## Model description
 
@@ -127,22 +128,22 @@ The following hyperparameters were used during training:
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
 | 1.4615 | 0.08 | 1 | 1.4899 |
-| 1.385 | 0.24 | 3 | 1.4883 |
-| 1.3675 | 0.48 | 6 | 1.4370 |
-| 1.2691 | 0.72 | 9 | 1.3388 |
-| 1.2268 | 0.96 | 12 | 1.2973 |
-| 1.2526 | 1.16 | 15 | 1.2808 |
-| 1.2261 | 1.4 | 18 | 1.2527 |
-| 1.135 | 1.6400 | 21 | 1.2343 |
-| 1.2694 | 1.88 | 24 | 1.2301 |
-| 1.149 | 2.08 | 27 | 1.2242 |
-| 1.1515 | 2.32 | 30 | 1.2208 |
-| 1.195 | 2.56 | 33 | 1.2196 |
-| 1.1129 | 2.8 | 36 | 1.2151 |
-| 1.1518 | 3.04 | 39 | 1.2133 |
-| 1.1887 | 3.24 | 42 | 1.2115 |
-| 1.1002 | 3.48 | 45 | 1.2104 |
-| 1.189 | 3.7200 | 48 | 1.2118 |
+| 1.3847 | 0.24 | 3 | 1.4865 |
+| 1.3673 | 0.48 | 6 | 1.4376 |
+| 1.2673 | 0.72 | 9 | 1.3401 |
+| 1.2257 | 0.96 | 12 | 1.2967 |
+| 1.2511 | 1.16 | 15 | 1.2835 |
+| 1.2267 | 1.4 | 18 | 1.2501 |
+| 1.1348 | 1.6400 | 21 | 1.2330 |
+| 1.2699 | 1.88 | 24 | 1.2276 |
+| 1.1486 | 2.08 | 27 | 1.2258 |
+| 1.1515 | 2.32 | 30 | 1.2224 |
+| 1.1949 | 2.56 | 33 | 1.2175 |
+| 1.1127 | 2.8 | 36 | 1.2158 |
+| 1.1506 | 3.04 | 39 | 1.2126 |
+| 1.1886 | 3.24 | 42 | 1.2110 |
+| 1.1002 | 3.48 | 45 | 1.2106 |
+| 1.1894 | 3.7200 | 48 | 1.2127 |
 
 
 ### Framework versions
````
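For reference, the upload and logging keys that this commit uncomments resolve to the following fragment of the new axolotl config (values taken directly from the diff above; note the W&B project is `tinyllama-1.1B_alpaca_2k_peft` while the Hub repo is `..._lora`):

```yaml
# Upload the final adapter to the Hugging Face Hub
hub_model_id: kareemamrr/tinyllama-1.1B_alpaca_2k_lora

# Store the training logs in Weights & Biases
wandb_entity: kamr54
wandb_project: tinyllama-1.1B_alpaca_2k_peft
wandb_name: lora-run
```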
adapter_model.bin CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3ac6f031d86dd4b2f7ae3ed25c52bca152199f29e8c2321b131646bda2f22802
+oid sha256:4b4c262a5d4b19857dc9a167ad62c6edc069c36aa01804e19b7e0c13a86a295b
 size 101036698
```
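The `adapter_model.bin` change above only swaps the Git LFS pointer, not the file contents in the repo itself: the pointer is a three-line `key value` text file recording the object's sha256 oid and byte size, while the actual weights live in LFS storage. A minimal stdlib-only sketch of parsing that format (the pointer text is copied from the new side of the diff):

```python
# Parse a Git LFS pointer file, the three-line format shown in the diff above.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:4b4c262a5d4b19857dc9a167ad62c6edc069c36aa01804e19b7e0c13a86a295b
size 101036698
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = parse_lfs_pointer(POINTER)
print(pointer["oid"])   # sha256:4b4c262a5d4b19857dc9a167ad62c6edc069c36aa01804e19b7e0c13a86a295b
print(pointer["size"])  # 101036698
```

Since the `size` field is unchanged (101036698 bytes) while the oid differs, the two commits hold different adapter weights of identical byte length, as expected when only trained parameters change.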