lapp0 committed on
Commit 26f1024 · verified · 1 Parent(s): c82d3f2

End of training
README.md ADDED
@@ -0,0 +1,96 @@
---
base_model: gpt2
library_name: Distily
license: mit
tags:
- generated_from_trainer
model-index:
- name: distily_bench_gpt2_activation_loss
  results: []
---

# distily_bench_gpt2_activation_loss

This student model is distilled from the teacher model [gpt2](https://huggingface.co/gpt2) using the dataset (unspecified).

The [Distily](https://github.com/lapp0/distily) library was used for this distillation.

It achieves the following results on the evaluation set:
- eval_enwikippl: 216.1260
- eval_frwikippl: 1157.4716
- eval_zhwikippl: 725.3146
- eval_loss: 1.2660
- eval_runtime: 34.0138
- eval_samples_per_second: 58.8
- eval_steps_per_second: 7.35
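The `enwikippl`, `frwikippl`, and `zhwikippl` figures are presumably perplexities on English, French, and Chinese Wikipedia samples. As a minimal sketch, not Distily's evaluation code, a comparable perplexity can be computed as the exponential of the mean token cross-entropy; the repo id below is an assumption inferred from this card's name:

```python
# Minimal sketch, not Distily's evaluation code. The repo id is an
# assumption inferred from this card's name; replace with the actual one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lapp0/distily_bench_gpt2_activation_loss"  # assumed
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
model.eval()

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # Passing labels makes the causal LM return the mean token cross-entropy.
    loss = model(**inputs, labels=inputs["input_ids"]).loss
print(f"perplexity: {torch.exp(loss).item():.2f}")
```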
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
-->
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- distillation_objective: MultiObjective(logits_weight=1, logits_loss_fn=(fn:kl_divergence_loss()), activations_weight=0, activations_loss_fn=(fn:soft_mse_loss()), attentions_weight=0, attentions_loss_fn=(fn:soft_mse_loss())) (see the sketch after this list)
- train_embeddings: True
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1.0
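With `activations_weight` and `attentions_weight` both 0, the objective above reduces to a KL-divergence loss between student and teacher logits. The following is an illustrative sketch of that reduced objective, written from scratch rather than taken from Distily's implementation; only the function names mirror the configuration string:

```python
# Illustrative sketch only -- not Distily's implementation. With
# activations_weight=0 and attentions_weight=0, only the logits term remains.
import torch.nn.functional as F

def kl_divergence_loss(student_logits, teacher_logits, temperature=1.0):
    # KL(teacher || student) over the vocabulary, averaged across the batch.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2

def multi_objective_loss(student_logits, teacher_logits,
                         logits_weight=1.0, activations_weight=0.0,
                         attentions_weight=0.0):
    loss = logits_weight * kl_divergence_loss(student_logits, teacher_logits)
    # Activation/attention soft-MSE terms are omitted: their weights are 0.
    return loss
```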
### Resource Usage
Peak GPU Memory: 7.9371 GB
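One plausible way such a figure is measured (an assumption; Distily may do this differently) is PyTorch's CUDA allocator statistics:

```python
import torch

# High-water mark of GPU memory allocated by this process, in GB.
peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak GPU Memory: {peak_gb:.4f} GB")
```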
### Eval-Phase Metrics
| step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | zhwikippl |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **teacher eval** | | 30.2086 | 57.2728 | | | | | 18.1784 |
| 0 | 0 | 58863.3555 | 58419.2617 | 5.9616 | 34.0389 | 58.756 | 7.345 | 60583.1562 |
| 1000 | 0.0404 | 726.5175 | 4290.4102 | 1.9634 | 33.8546 | 59.076 | 7.385 | 10492.5918 |
| 2000 | 0.0808 | 522.3779 | 3038.8101 | 1.7799 | 34.2286 | 58.431 | 7.304 | 1967.7870 |
| 3000 | 0.1212 | 436.0217 | 2697.4614 | 1.6705 | 33.9025 | 58.993 | 7.374 | 1091.1235 |
| 4000 | 0.1616 | 370.4137 | 2260.9023 | 1.5809 | 33.9602 | 58.892 | 7.362 | 907.7313 |
| 5000 | 0.2020 | 327.6950 | 1962.1614 | 1.5021 | 33.7774 | 59.211 | 7.401 | 865.3570 |
| 6000 | 0.2424 | 285.5019 | 1729.7585 | 1.4289 | 33.8547 | 59.076 | 7.385 | 883.8088 |
| 7000 | 0.2828 | 258.4085 | 1491.2831 | 1.3662 | 33.7766 | 59.213 | 7.402 | 626.0604 |
| 8000 | 0.3232 | 233.4870 | 1228.4403 | 1.3132 | 33.8614 | 59.064 | 7.383 | 850.6914 |
| 9000 | 0.3636 | 216.1260 | 1157.4716 | 1.2660 | 34.0138 | 58.8 | 7.35 | 725.3146 |
| 10000 | 0.4040 | 202.6199 | 1178.7196 | 1.2261 | 33.8348 | 59.111 | 7.389 | 783.9336 |
| 11000 | 0.4444 | 186.9287 | 1086.7654 | 1.1812 | 33.885 | 59.023 | 7.378 | 767.5665 |
| 12000 | 0.4848 | 168.7043 | 1002.8327 | 1.1325 | 33.9839 | 58.851 | 7.356 | 807.3055 |
| 13000 | 0.5253 | 157.9653 | 885.8019 | 1.0944 | 33.8811 | 59.03 | 7.379 | 597.1538 |
| 14000 | 0.5657 | 150.5168 | 838.8755 | 1.0732 | 33.9477 | 58.914 | 7.364 | 586.4849 |
| 15000 | 0.6061 | 145.0886 | 758.6350 | 1.0454 | 33.9544 | 58.902 | 7.363 | 1364.0833 |
| 16000 | 0.6465 | 140.9902 | 774.3610 | 1.0288 | 33.8344 | 59.111 | 7.389 | 882.1583 |
| 17000 | 0.6869 | 135.8739 | 742.5475 | 1.0145 | 33.7421 | 59.273 | 7.409 | 747.0387 |
| 18000 | 0.7273 | 132.0769 | 685.4412 | 0.9989 | 33.917 | 58.967 | 7.371 | 1488.5706 |
| 19000 | 0.7677 | 131.6060 | 731.5826 | 0.9887 | 33.9182 | 58.965 | 7.371 | 618.0857 |
| 20000 | 0.8081 | 128.5655 | 678.8043 | 0.9780 | 33.9033 | 58.991 | 7.374 | 587.1904 |
| 21000 | 0.8485 | 126.3780 | 655.1987 | 0.9683 | 33.9405 | 58.927 | 7.366 | 467.9376 |
| 22000 | 0.8889 | 125.1474 | 660.4861 | 0.9594 | 33.8307 | 59.118 | 7.39 | 436.3720 |
| 23000 | 0.9293 | 123.5350 | 660.9055 | 0.9523 | 33.9643 | 58.885 | 7.361 | 394.6270 |
| 24000 | 0.9697 | 123.2763 | 653.1232 | 0.9485 | 33.8028 | 59.167 | 7.396 | 397.2708 |
| 24750 | 1.0 | 122.3797 | 623.2513 | 0.9423 | 33.9114 | 58.977 | 7.372 | 412.1859 |
### Framework versions
- Distily 0.2.0
- Transformers 4.44.0
- Pytorch 2.3.0
- Datasets 2.20.0
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 50256,
  "eos_token_id": 50256,
  "transformers_version": "4.44.0"
}
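Both token ids are 50256, GPT-2's `<|endoftext|>` token. As a small sketch (the repo id is again an assumption inferred from this card's name), the file can be inspected with `transformers`:

```python
from transformers import GenerationConfig

# Assumed repo id; replace with the actual one.
gen_config = GenerationConfig.from_pretrained("lapp0/distily_bench_gpt2_activation_loss")
print(gen_config.bos_token_id, gen_config.eos_token_id)  # 50256 50256
```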
logs/distillation_objective=MultiObjective(logits_weight_1__logits_loss_fn_(fn_kl_divergence_loss())__activations_weight_0__activations_loss_fn_(fn_soft_mse_loss())__attentions_weight_0__attentions_loss_fn/events.out.tfevents.1723582039.93d6cbb3ad53 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1f32346227dc28a76b5048d7c966ee047106262bea4b507d683351dc0306324e
size 253