lapp0 committed
Commit 1be3a21
1 Parent(s): e9286b4

End of training

README.md ADDED
@@ -0,0 +1,96 @@
---
base_model: gpt2
library_name: Distily
license: mit
tags:
- generated_from_trainer
model-index:
- name: distily_bench_gpt2_simple_objectives2
  results: []
---

# distily_bench_gpt2_simple_objectives2

This student model is distilled from the teacher model [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.

The [Distily](https://github.com/lapp0/distily) library was used for this distillation.
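
A minimal sketch of loading the student with `transformers`, assuming the checkpoint is hosted under the hypothetical repo id `lapp0/distily_bench_gpt2_simple_objectives2` (inferred from the model name above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id, inferred from the model name in this card.
repo_id = "lapp0/distily_bench_gpt2_simple_objectives2"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Knowledge distillation is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```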

It achieves the following results on the evaluation set (see the perplexity sketch after this list):
- eval_enwikippl: 218.2680
- eval_frwikippl: 1270.0094
- eval_zhwikippl: 675.2138
- eval_loss: 1.2941
- eval_runtime: 34.9208
- eval_samples_per_second: 57.272
- eval_steps_per_second: 7.159
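
The `enwikippl`, `frwikippl`, and `zhwikippl` figures read as perplexities on English, French, and Chinese Wikipedia text (an inference from the metric names). Distily's exact evaluation code isn't shown here, but a standard causal-LM perplexity looks like this sketch:

```python
import torch

def perplexity(model, tokenizer, text: str) -> float:
    # Standard causal-LM perplexity: exp of the mean token-level
    # negative log-likelihood; Distily's reduction details may differ.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()
```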

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
-->

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- distillation_objective: MultiObjective(logits_weight=1, logits_loss_fn=(fn:kl_divergence_loss()), activations_weight=0, activations_loss_fn=(fn:soft_mse_loss()), attentions_weight=0, attentions_loss_fn=(fn:soft_mse_loss())) (sketched after this list)
- train_embeddings: True
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1.0
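
With `activations_weight=0` and `attentions_weight=0`, only the logits term contributes. A sketch of a KL-divergence logits loss of the kind named above; the temperature argument and T² scaling are assumptions, and Distily's actual implementation may differ:

```python
import torch.nn.functional as F

def kl_divergence_logits_loss(student_logits, teacher_logits, temperature=1.0):
    # KL(teacher || student) over the vocabulary, averaged per token;
    # the temperature and the T^2 scaling are assumptions, not Distily's code.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
```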

### Resource Usage
Peak GPU Memory: 7.9371 GB
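
One common way to obtain such a figure, assuming a single CUDA device (Distily may measure it differently):

```python
import torch

torch.cuda.reset_peak_memory_stats()
# ... run training here ...
peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak GPU Memory: {peak_gb:.4f} GB")
```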

### Eval-Phase Metrics
| step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | zhwikippl |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **teacher eval** | | 30.2086 | 57.2728 | | | | | 18.1784 |
| 0 | 0 | 55232.0742 | 57228.8242 | 5.9266 | 34.236 | 58.418 | 7.302 | 60115.7617 |
| 1000 | 0.0404 | 739.0357 | 4655.4023 | 2.0050 | 33.9843 | 58.851 | 7.356 | 16372.7070 |
| 2000 | 0.0808 | 532.8664 | 3281.5566 | 1.8149 | 34.1449 | 58.574 | 7.322 | 2016.1978 |
| 3000 | 0.1212 | 440.3088 | 2713.1023 | 1.7089 | 34.0465 | 58.743 | 7.343 | 1149.1477 |
| 4000 | 0.1616 | 383.6159 | 2350.9609 | 1.6207 | 34.0823 | 58.682 | 7.335 | 1149.4546 |
| 5000 | 0.2020 | 338.9797 | 1973.5388 | 1.5384 | 34.0344 | 58.764 | 7.345 | 964.7234 |
| 6000 | 0.2424 | 294.0521 | 1662.3237 | 1.4628 | 34.0352 | 58.763 | 7.345 | 726.0898 |
| 7000 | 0.2828 | 259.6556 | 1392.8989 | 1.3990 | 34.6694 | 57.688 | 7.211 | 985.8208 |
| 8000 | 0.3232 | 237.2520 | 1358.5637 | 1.3424 | 34.7511 | 57.552 | 7.194 | 601.8773 |
| 9000 | 0.3636 | 218.2680 | 1270.0094 | 1.2941 | 34.9208 | 57.272 | 7.159 | 675.2138 |
| 10000 | 0.4040 | 199.4044 | 1168.2943 | 1.2452 | 34.5786 | 57.839 | 7.23 | 576.9305 |
| 11000 | 0.4444 | 183.3348 | 1062.2205 | 1.1946 | 34.6379 | 57.74 | 7.218 | 719.8147 |
| 12000 | 0.4848 | 171.0789 | 952.9263 | 1.1565 | 34.6369 | 57.742 | 7.218 | 629.7498 |
| 13000 | 0.5253 | 159.3822 | 874.8159 | 1.1182 | 34.6139 | 57.78 | 7.223 | 845.4815 |
| 14000 | 0.5657 | 152.7777 | 857.2919 | 1.0932 | 33.9072 | 58.984 | 7.373 | 728.5178 |
| 15000 | 0.6061 | 145.9134 | 775.5083 | 1.0677 | 33.9606 | 58.892 | 7.361 | 552.7966 |
| 16000 | 0.6465 | 139.7585 | 770.7659 | 1.0513 | 33.9337 | 58.938 | 7.367 | 511.8709 |
| 17000 | 0.6869 | 136.6782 | 720.5764 | 1.0339 | 33.9201 | 58.962 | 7.37 | 610.5389 |
| 18000 | 0.7273 | 132.8999 | 709.8857 | 1.0204 | 33.9379 | 58.931 | 7.366 | 323.7285 |
| 19000 | 0.7677 | 130.9128 | 720.5255 | 1.0129 | 33.9104 | 58.979 | 7.372 | 413.1778 |
| 20000 | 0.8081 | 131.1570 | 715.4633 | 1.0027 | 33.9308 | 58.943 | 7.368 | 440.1761 |
| 21000 | 0.8485 | 125.3809 | 670.0075 | 0.9936 | 33.8473 | 59.089 | 7.386 | 480.4752 |
| 22000 | 0.8889 | 126.8006 | 634.7371 | 0.9833 | 33.8416 | 59.099 | 7.387 | 304.7259 |
| 23000 | 0.9293 | 124.9435 | 607.9310 | 0.9792 | 33.8527 | 59.079 | 7.385 | 334.6737 |
| 24000 | 0.9697 | 121.6785 | 594.7874 | 0.9726 | 33.9592 | 58.894 | 7.362 | 365.3625 |
| 24750 | 1.0 | 122.7032 | 590.0676 | 0.9684 | 33.9498 | 58.91 | 7.364 | 327.5989 |

### Framework versions
- Distily 0.2.0
- Transformers 4.44.0
- Pytorch 2.3.0
- Datasets 2.20.0

generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 50256,
  "eos_token_id": 50256,
  "transformers_version": "4.44.0"
}
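
`generate()` picks these defaults up automatically; they can also be inspected directly (repo id assumed as above):

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("lapp0/distily_bench_gpt2_simple_objectives2")
print(gen_cfg.bos_token_id, gen_cfg.eos_token_id)  # 50256 50256: GPT-2's <|endoftext|>
```
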
logs/distillation_objective=MultiObjective(logits_weight_1__logits_loss_fn_(fn_kl_divergence_loss())__activations_weight_0__activations_loss_fn_(fn_soft_mse_loss())__attentions_weight_0__attentions_loss_fn/events.out.tfevents.1723476868.93d6cbb3ad53 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d891d384d284aac7cab80925dc0781170703f1ac9e1df38e3f8f949dd8da34d1
size 253