lapp0 committed
Commit b2c2f51
1 Parent(s): 32d2b5d

End of training
README.md ADDED
@@ -0,0 +1,96 @@
+ ---
+ base_model: gpt2
+ library_name: Distily
+ license: mit
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: distily_bench_gpt2_simple_objectives
+   results: []
+ ---
+
+ # distily_bench_gpt2_simple_objectives
+
+ This student model was distilled from the teacher model [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
+
+ The [Distily](https://github.com/lapp0/distily) library was used for this distillation.
+
+ It achieves the following results on the evaluation set:
+ - eval_enwikippl: 213.1260
+ - eval_frwikippl: 1238.3538
+ - eval_zhwikippl: 689.7033
+ - eval_loss: 1.2684
+ - eval_runtime: 33.9389
+ - eval_samples_per_second: 58.929
+ - eval_steps_per_second: 7.366
+
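+ The `eval_*wikippl` values read as the student's perplexity on English, French, and Chinese Wikipedia text, while `eval_loss` is the distillation objective itself. A minimal sketch of how such a corpus perplexity is typically computed with `transformers` (illustrative only, not part of the generated card; the input text is a stand-in and Distily's exact evaluation may differ):
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("gpt2")
+ model = AutoModelForCausalLM.from_pretrained("gpt2")
+
+ enc = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
+ with torch.no_grad():
+     # Passing labels makes the model return the mean token-level cross-entropy.
+     loss = model(**enc, labels=enc["input_ids"]).loss
+ print(torch.exp(loss))  # perplexity = exp(mean cross-entropy)
+ ```
+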
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+ -->
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (the logits objective is sketched after this list):
+ - distillation_objective: MultiObjective(logits_weight=1, logits_loss_fn=(fn:kl_divergence_loss()), activations_weight=0, activations_loss_fn=(fn:mse_loss()), attentions_weight=0, attentions_loss_fn=(fn:mse_loss()))
+ - train_embeddings: True
+ - learning_rate: 4e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: constant
+ - num_epochs: 1.0
+
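+ With activations_weight and attentions_weight both set to 0, the MultiObjective above reduces to a pure logits-matching KL term. A minimal PyTorch sketch of such a loss (illustrative only; Distily's actual `kl_divergence_loss` may add temperature scaling or masking):
+
+ ```python
+ import torch.nn.functional as F
+
+ def kl_divergence_loss(student_logits, teacher_logits):
+     # KL(teacher || student) over the vocabulary, averaged per batch element.
+     student_log_probs = F.log_softmax(student_logits, dim=-1)
+     teacher_probs = F.softmax(teacher_logits, dim=-1)
+     return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
+ ```
+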
+ ### Resource Usage
+ Peak GPU Memory: 7.9371 GB
+
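+ A peak-memory figure like this is typically read from PyTorch's CUDA allocator stats; a sketch (illustrative only, requires a CUDA device, and Distily's exact accounting may differ):
+
+ ```python
+ import torch
+
+ torch.cuda.reset_peak_memory_stats()
+ # ... run training ...
+ peak_gb = torch.cuda.max_memory_allocated() / 1024**3
+ print(f"Peak GPU Memory: {peak_gb:.4f} GB")
+ ```
+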
+ ### Eval-Phase Metrics
+ | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | zhwikippl |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | **teacher eval** | | 30.2086 | 57.2728 | | | | | 18.1784 |
+ | 0 | 0 | 57983.2695 | 56826.7539 | 5.9504 | 33.9223 | 58.958 | 7.37 | 51544.0508 |
+ | 1000 | 0.0404 | 716.3218 | 4663.2852 | 1.9522 | 34.1014 | 58.649 | 7.331 | 17271.0391 |
+ | 2000 | 0.0808 | 512.1357 | 3224.2202 | 1.7690 | 34.1187 | 58.619 | 7.327 | 2109.2849 |
+ | 3000 | 0.1212 | 418.9938 | 2658.5667 | 1.6652 | 34.1292 | 58.601 | 7.325 | 1129.3704 |
+ | 4000 | 0.1616 | 367.4342 | 2491.9417 | 1.5763 | 34.0919 | 58.665 | 7.333 | 798.7274 |
+ | 5000 | 0.2020 | 317.3523 | 1897.4025 | 1.4963 | 33.965 | 58.884 | 7.361 | 962.9218 |
+ | 6000 | 0.2424 | 282.9857 | 1585.8464 | 1.4222 | 33.9768 | 58.864 | 7.358 | 852.0554 |
+ | 7000 | 0.2828 | 251.4994 | 1421.8730 | 1.3623 | 33.9388 | 58.93 | 7.366 | 753.7527 |
+ | 8000 | 0.3232 | 229.7460 | 1314.6521 | 1.3137 | 34.0289 | 58.773 | 7.347 | 729.5888 |
+ | 9000 | 0.3636 | 213.1260 | 1238.3538 | 1.2684 | 33.9389 | 58.929 | 7.366 | 689.7033 |
+ | 10000 | 0.4040 | 197.5243 | 1147.7201 | 1.2172 | 34.1028 | 58.646 | 7.331 | 761.6445 |
+ | 11000 | 0.4444 | 178.5023 | 1065.9717 | 1.1681 | 34.111 | 58.632 | 7.329 | 697.0179 |
+ | 12000 | 0.4848 | 164.3850 | 941.9713 | 1.1267 | 34.1042 | 58.644 | 7.33 | 722.8970 |
+ | 13000 | 0.5253 | 157.2920 | 871.0618 | 1.0965 | 34.1353 | 58.59 | 7.324 | 484.9227 |
+ | 14000 | 0.5657 | 150.8093 | 806.3426 | 1.0674 | 34.0619 | 58.717 | 7.34 | 539.5954 |
+ | 15000 | 0.6061 | 143.2526 | 816.5259 | 1.0499 | 34.2668 | 58.366 | 7.296 | 509.8925 |
+ | 16000 | 0.6465 | 139.8671 | 715.0598 | 1.0314 | 34.0375 | 58.759 | 7.345 | 426.2927 |
+ | 17000 | 0.6869 | 134.8648 | 739.3088 | 1.0151 | 34.0663 | 58.709 | 7.339 | 458.1682 |
+ | 18000 | 0.7273 | 132.5907 | 675.8909 | 1.0007 | 33.9807 | 58.857 | 7.357 | 348.7257 |
+ | 19000 | 0.7677 | 129.5074 | 665.1128 | 0.9937 | 34.017 | 58.794 | 7.349 | 350.5464 |
+ | 20000 | 0.8081 | 127.9778 | 683.8963 | 0.9837 | 33.9292 | 58.946 | 7.368 | 395.9997 |
+ | 21000 | 0.8485 | 125.7319 | 659.5090 | 0.9754 | 33.985 | 58.849 | 7.356 | 518.3367 |
+ | 22000 | 0.8889 | 124.8950 | 691.0702 | 0.9696 | 34.2015 | 58.477 | 7.31 | 610.1314 |
+ | 23000 | 0.9293 | 123.7751 | 644.4776 | 0.9625 | 34.1656 | 58.538 | 7.317 | 321.7459 |
+ | 24000 | 0.9697 | 122.1613 | 658.5797 | 0.9586 | 33.975 | 58.867 | 7.358 | 353.6970 |
+ | 24750 | 1.0 | 119.9802 | 652.2029 | 0.9537 | 34.2146 | 58.455 | 7.307 | 339.4447 |
+
+ ### Framework versions
+ - Distily 0.2.0
+ - Transformers 4.44.0
+ - Pytorch 2.3.0
+ - Datasets 2.20.0
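+
+ A minimal usage sketch for the distilled student (not part of the generated card; the repo id below is assumed from the model name and may differ):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Assumed Hub path; substitute the actual repo id of this model.
+ model = AutoModelForCausalLM.from_pretrained("lapp0/distily_bench_gpt2_simple_objectives")
+ tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumes the student shares gpt2's tokenizer
+
+ inputs = tokenizer("Knowledge distillation compresses", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=20)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```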
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 50256,
+   "eos_token_id": 50256,
+   "transformers_version": "4.44.0"
+ }
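Token id 50256 is GPT-2's `<|endoftext|>`, used here as both BOS and EOS. `generate()` picks this file up automatically; a sketch of reading it directly (the local path is a placeholder):

```python
from transformers import GenerationConfig

# Placeholder path: point at a local clone of this repo.
gen_config = GenerationConfig.from_pretrained("./distily_bench_gpt2_simple_objectives")
print(gen_config.eos_token_id)  # 50256, GPT-2's <|endoftext|>
```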
logs/distillation_objective=MultiObjective(logits_weight_1__logits_loss_fn_(fn_kl_divergence_loss())__activations_weight_0__activations_loss_fn_(fn_mse_loss())__attentions_weight_0__attentions_loss_fn_(fn_/events.out.tfevents.1723445806.93d6cbb3ad53 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d768a3452a949a211552018f671822fa05c8aa1e34b6cd23c5f3939980282661
+ size 253