alexander-hm committed
Commit 9442188
1 Parent(s): 7b13bab

End of training

Files changed (7)
  1. README.md +70 -0
  2. all_results.json +12 -0
  3. completed +0 -0
  4. eval_results.json +7 -0
  5. metrics.json +1 -0
  6. train_results.json +8 -0
  7. trainer_state.json +0 -0
README.md ADDED
@@ -0,0 +1,70 @@
+ ---
+ base_model: huggyllama/llama-13b
+ library_name: peft
+ license: other
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: llama-13b_oasst1_l0.0002_32-32
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # llama-13b_oasst1_l0.0002_32-32
+
+ This model is a fine-tuned version of [huggyllama/llama-13b](https://huggingface.co/huggyllama/llama-13b) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.3866
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 0
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: constant
+ - lr_scheduler_warmup_ratio: 0.03
+ - training_steps: 0
+
+ ### Training results
+
+ | Training Loss | Epoch  | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 1.4264        | 0.0018 | 1    | 1.6140          |
+ | 1.4292        | 0.3392 | 187  | 1.2391          |
+ | 1.0776        | 0.6783 | 374  | 1.2320          |
+ | 1.3037        | 1.0175 | 561  | 1.2323          |
+ | 1.0895        | 1.3566 | 748  | 1.2525          |
+ | 1.1146        | 1.6958 | 935  | 1.2393          |
+ | 0.7616        | 2.0349 | 1122 | 1.2815          |
+ | 0.9368        | 2.3741 | 1309 | 1.3351          |
+ | 0.7076        | 2.7132 | 1496 | 1.3530          |
+
+
+ ### Framework versions
+
+ - PEFT 0.12.1.dev0
+ - Transformers 4.45.0.dev0
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.19.0
+ - Tokenizers 0.19.1
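
The hyperparameters in the card translate roughly to the Trainer/PEFT configuration sketched below. This is a hypothetical reconstruction, not the author's script: the LoRA rank and alpha of 32/32 are guessed from the "32-32" suffix in the run name, and the optimizer string is an assumption (the listed Adam betas and epsilon match the `adamw_torch` defaults).

```python
# Hypothetical reconstruction of the run's configuration from the card above.
# Assumptions: r=32 / lora_alpha=32 (read off the "32-32" run-name suffix),
# optim="adamw_torch" (the listed betas/epsilon are its defaults).
from transformers import TrainingArguments
from peft import LoraConfig

training_args = TrainingArguments(
    output_dir="llama-13b_oasst1_l0.0002_32-32",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=0,
    gradient_accumulation_steps=16,  # 1 sample/step x 16 = total batch size 16
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    optim="adamw_torch",
)

lora_config = LoraConfig(
    r=32,           # assumed from run name
    lora_alpha=32,  # assumed from run name
    task_type="CAUSAL_LM",
)
```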
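The auto-generated card also lacks a usage snippet. A minimal inference sketch follows; the adapter repo id `alexander-hm/llama-13b_oasst1_l0.0002_32-32` is inferred from the committer and run name, and the `### Human:`/`### Assistant:` prompt format is the common oasst1 convention, so treat both as assumptions.

```python
# Minimal sketch: load the base model, attach this LoRA adapter, generate.
# The adapter id below is an assumption inferred from the committer/run name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "huggyllama/llama-13b"
adapter_id = "alexander-hm/llama-13b_oasst1_l0.0002_32-32"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights
model.eval()

prompt = "### Human: Summarize what a LoRA adapter is.### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```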
all_results.json ADDED
@@ -0,0 +1,12 @@
+ {
+     "epoch": 2.997959646338699,
+     "eval_loss": 1.3866204023361206,
+     "eval_runtime": 303.7579,
+     "eval_samples_per_second": 3.292,
+     "eval_steps_per_second": 3.292,
+     "total_flos": 7.23522717038592e+17,
+     "train_loss": 1.0721389233020313,
+     "train_runtime": 79251.021,
+     "train_samples_per_second": 0.334,
+     "train_steps_per_second": 0.021
+ }
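
As a rough sanity check, the reported throughput implies the dataset sizes below; the per-second rates in the file are rounded, so these are approximations rather than exact counts.

```python
# Rough dataset-size check from the reported throughput (rates are rounded,
# so these are approximations, not exact counts).
train_total = 79251.021 * 0.334          # ~26,470 samples seen over training
per_epoch   = train_total / 2.997959646  # ~8,830 training examples per epoch
eval_total  = 303.7579 * 3.292           # ~1,000 evaluation examples
print(round(train_total), round(per_epoch), round(eval_total))
```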
completed ADDED
File without changes
eval_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "epoch": 2.997959646338699,
+     "eval_loss": 1.3866204023361206,
+     "eval_runtime": 303.7579,
+     "eval_samples_per_second": 3.292,
+     "eval_steps_per_second": 3.292
+ }
metrics.json ADDED
@@ -0,0 +1 @@
+ {"run_name": "huggyllama/llama-13b_oasst1_l0.0002_32,32", "train_runtime": 79251.021, "train_samples_per_second": 0.334, "train_steps_per_second": 0.021, "total_flos": 7.23522717038592e+17, "train_loss": 1.0721389233020313, "epoch": 2.997959646338699, "eval_loss": 1.3866204023361206, "eval_runtime": 303.7579, "eval_samples_per_second": 3.292, "eval_steps_per_second": 3.292}
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 2.997959646338699,
+     "total_flos": 7.23522717038592e+17,
+     "train_loss": 1.0721389233020313,
+     "train_runtime": 79251.021,
+     "train_samples_per_second": 0.334,
+     "train_steps_per_second": 0.021
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff