lapp0 committed
Commit f80052b
1 Parent(s): 6a4fa98

Training in progress, step 61875

README.md CHANGED
@@ -1,9 +1,7 @@
  ---
- base_model: gpt2
- datasets:
- - wikimedia/wikipedia
- library_name: Distily
+ library_name: transformers
  license: mit
+ base_model: gpt2
  tags:
  - bitnet
  - 1.58b
@@ -13,147 +11,75 @@ model-index:
  results: []
  ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->

- # Summary
+ # distily_multi_experiment

- Distilled with [Distily](https://github.com/lapp0/distily) library
- using teacher model [gpt2](https://huggingface.co/gpt2)
- on dataset [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia).
+ This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 11.8595

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment.
+ ## Model description

- # Model description
+ More information needed
+
+ ## Intended uses & limitations

  More information needed

- # Intended uses & limitations
+ ## Training and evaluation data

  More information needed
- -->
-
- # Model Architecture:
- - **Architecture**: `GPT2LMHeadModel`
- - **Total Parameters**: 124,439,808
- - **Data Type (dtype)**: torch.bfloat16
- - **Model Size**: 0.24 GB
-
-
- # Evaluation Metrics Comparison
-
- | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
- | **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
- | 0 | 0 | 2473901162496.0 | 170424302305280.0 | 25.7744 | 25.131 | 99.479 | 12.455 | 4060086272.0 | 71468255805440.0 |
- | 2500 | 0.0404 | 960.0 | 8064.0 | 6.1231 | 25.2285 | 99.094 | 12.407 | 652.0 | 6816.0 |
- | 5000 | 0.0808 | 380.0 | 1896.0 | 5.0307 | 25.2563 | 98.985 | 12.393 | 270.0 | 286.0 |
- | 7500 | 0.1212 | 230.0 | 824.0 | 4.5129 | 25.128 | 99.491 | 12.456 | 202.0 | 174.0 |
- | 10000 | 0.1616 | 171.0 | 628.0 | 4.2261 | 25.2755 | 98.91 | 12.384 | 151.0 | 173.0 |
- | 12500 | 0.2020 | 126.5 | 482.0 | 3.8533 | 25.2021 | 99.198 | 12.42 | 106.0 | 156.0 |
- | 15000 | 0.2424 | 109.5 | 430.0 | 3.6650 | 25.2487 | 99.015 | 12.397 | 88.0 | 155.0 |
- | 17500 | 0.2828 | 93.0 | 350.0 | 3.5198 | 25.1751 | 99.305 | 12.433 | 73.5 | 119.0 |
- | 20000 | 0.3232 | 77.5 | 282.0 | 3.3352 | 25.2573 | 98.981 | 12.392 | 63.25 | 135.0 |
- | 22500 | 0.3636 | 66.5 | 213.0 | 3.1511 | 25.1782 | 99.292 | 12.431 | 50.75 | 80.0 |
- | 25000 | 0.4040 | 63.25 | 197.0 | 3.0803 | 25.2258 | 99.105 | 12.408 | 44.5 | 80.5 |
- | 27500 | 0.4444 | 58.5 | 212.0 | 3.0299 | 25.2357 | 99.066 | 12.403 | 41.75 | 68.5 |
- | 30000 | 0.4848 | 58.5 | 202.0 | 3.0169 | 25.2481 | 99.017 | 12.397 | 43.25 | 91.5 |
- | 32500 | 0.5253 | 58.75 | 173.0 | 3.0014 | 25.2575 | 98.981 | 12.392 | 41.5 | 62.75 |
- | 35000 | 0.5657 | 57.25 | 164.0 | 2.9385 | 25.2523 | 99.001 | 12.395 | 38.0 | 49.0 |
- | 37500 | 0.6061 | 57.0 | 157.0 | 2.9163 | 25.1539 | 99.388 | 12.443 | 39.25 | 61.75 |
- | 40000 | 0.6465 | 54.75 | 172.0 | 2.8984 | 25.2388 | 99.054 | 12.402 | 35.0 | 67.5 |
- | 42500 | 0.6869 | 53.0 | 151.0 | 2.8789 | 25.2418 | 99.042 | 12.4 | 35.25 | 49.75 |
- | 45000 | 0.7273 | 49.5 | 134.0 | 2.7753 | 25.2511 | 99.005 | 12.395 | 30.25 | 42.25 |
- | 47500 | 0.7677 | 50.0 | 124.0 | 2.7506 | 25.2475 | 99.02 | 12.397 | 29.5 | 38.75 |
- | 50000 | 0.8081 | 49.0 | 124.5 | 2.7361 | 25.2146 | 99.149 | 12.413 | 28.75 | 38.25 |
- | 52500 | 0.8485 | 48.25 | 120.0 | 2.7262 | 25.1855 | 99.264 | 12.428 | 29.125 | 35.0 |
- | 55000 | 0.8889 | 47.75 | 117.0 | 2.7099 | 25.2332 | 99.076 | 12.404 | 28.25 | 33.0 |
- | 57500 | 0.9293 | 47.25 | 117.5 | 2.7045 | 25.2693 | 98.934 | 12.387 | 28.0 | 32.5 |
- | 60000 | 0.9697 | 47.25 | 116.5 | 2.7013 | 25.2549 | 98.991 | 12.394 | 27.875 | 32.25 |
- | 61875 | 1.0 | 47.25 | 116.5 | 2.7009 | 25.2212 | 99.123 | 12.41 | 28.0 | 32.25 |
-
- # Resource Usage Comparison
-
- - VRAM Use: 7.7830 GB
-
- # Distillation (Teacher -> Student) Architecture Difference:
-
- - **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- - **Total Parameters**: 124,439,808 -> 124,439,808
- - **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- - **Model Size**: 0.24 GB -> 0.24 GB
-
- <details>
- <summary>Module Diff Details</summary>
-
- ```diff
-
- ```
-
- </details>
- <br/>
-
- # Train Dataset
- Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
-
- - Num Samples: `247,500`
- - Subset: `20231101.en`
- - Split: `train`
-
-
- # Training Objective
-
- ```
- DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=cos, layer_mapper=layer-2))
- ```
-
- # Hyperparameters
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
  The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 4
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.5
+ - num_epochs: 1.0
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:-----:|:---------------:|
+ | No log | 0 | 0 | 45.5392 |
+ | 19.25 | 0.0404 | 2500 | 20.5160 |
+ | 17.0 | 0.0808 | 5000 | 18.1646 |
+ | 16.375 | 0.1212 | 7500 | 16.8100 |
+ | 18.5 | 0.1616 | 10000 | 15.9662 |
+ | 18.125 | 0.2020 | 12500 | 14.8913 |
+ | 16.125 | 0.2424 | 15000 | 14.2909 |
+ | 13.875 | 0.2828 | 17500 | 13.9054 |
+ | 12.5625 | 0.3232 | 20000 | 13.4260 |
+ | 13.8125 | 0.3636 | 22500 | 12.9026 |
+ | 14.5625 | 0.4040 | 25000 | 12.6783 |
+ | 15.1875 | 0.4444 | 27500 | 12.5651 |
+ | 13.4375 | 0.4848 | 30000 | 12.5742 |
+ | 6.8125 | 0.5253 | 32500 | 12.5106 |
+ | 12.0 | 0.5657 | 35000 | 12.3849 |
+ | 13.9375 | 0.6061 | 37500 | 12.3297 |
+ | 5.375 | 0.6465 | 40000 | 12.2764 |
+ | 20.625 | 0.6869 | 42500 | 12.2612 |
+ | 10.0 | 0.7273 | 45000 | 12.0058 |
+ | 18.75 | 0.7677 | 47500 | 11.9614 |
+ | 10.0625 | 0.8081 | 50000 | 11.9339 |
+ | 16.0 | 0.8485 | 52500 | 11.9123 |
+ | 18.625 | 0.8889 | 55000 | 11.8770 |
+ | 15.875 | 0.9293 | 57500 | 11.8680 |
+ | 11.25 | 0.9697 | 60000 | 11.8611 |
+
+
+ ### Framework versions

- <details>
- <summary>Expand</summary>
-
- - learning_rate: `0.0001`
- - train_batch_size: `4`
- - eval_batch_size: `8`
- - seed: `42`
- - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- - lr_scheduler_type: `linear`
- - lr_scheduler_warmup_ratio: `0.5`
- - num_epochs: `1.0`
- - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=cos, layer_mapper=layer-2))`
- - train_embeddings: `True`
- - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f14d416e830>`
- - student_model_name_or_path: `None`
- - student_config_name_or_path: `None`
- - student_model_config: `None`
- - reinitialize_weights: `None`
- - copy_teacher_modules: `[('lm_head', False)]`
- - student_model_as_bitnet: `True`
- - student_model_compile: `False`
- - dropout: `None`
- - teacher_model_name_or_path: `gpt2`
- - teacher_load_in_8bit: `False`
- - teacher_load_in_4bit: `False`
- - teacher_model_compile: `False`
- - dataset_uri: `wikimedia/wikipedia`
- - dataset_subset: `20231101.en`
- - dataset_split: `train`
- - dataset_column_name: `text`
- - dataset_sample_size: `250000`
- - dataset_test_size: `0.01`
- - gradient_accumulation_steps: `1`
- - weight_decay: `0.0`
- - max_grad_norm: `1.0`
- - warmup_ratio: `0.5`
- - warmup_steps: `0`
- - gradient_checkpointing: `True`
-
- </details>
- <br/>
-
-
- # Framework Versions
- - Distily 0.2.0
  - Transformers 4.44.1
  - Pytorch 2.5.0.dev20240821+cu121
  - Datasets 2.21.0
+ - Tokenizers 0.19.1
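
The substantive change above is that the hand-written Distily card (architecture stats, perplexity evaluations, distillation objective, full hyperparameter list) is replaced by the Trainer's auto-generated card. The removed `DistillationObjective` line still documents how this checkpoint was trained: a KL-divergence loss on student/teacher logits (weight 1) combined with a cosine loss on attention maps (weight 5) through a `layer-2` layer mapper. Below is a minimal PyTorch sketch of that combination, written from the card's description rather than from Distily's source; the one-to-one layer pairing is an assumption, since the sketch does not reproduce the `layer-2` mapping.

```python
# Illustrative sketch of the removed card's distillation objective,
# assuming outputs from model(input_ids, output_attentions=True).
# Not Distily's actual implementation.
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, attn_weight=5.0):
    # KL divergence between student and teacher token distributions (weight 1).
    student_logp = F.log_softmax(student_out.logits, dim=-1)
    teacher_p = F.softmax(teacher_out.logits, dim=-1)
    logits_loss = F.kl_div(student_logp, teacher_p, reduction="batchmean")

    # Cosine distance between flattened attention maps, averaged over layers
    # (weight 5). Layers are paired one-to-one here as a simplification; the
    # actual run used a `layer-2` mapper whose exact pairing is not reproduced.
    attn_loss = 0.0
    for s_attn, t_attn in zip(student_out.attentions, teacher_out.attentions):
        sims = F.cosine_similarity(
            s_attn.flatten(start_dim=1), t_attn.flatten(start_dim=1), dim=-1
        )
        attn_loss += (1.0 - sims).mean()
    attn_loss /= len(student_out.attentions)

    return logits_loss + attn_weight * attn_loss
```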
logs/attn_loss_fn=cos, attn_weight=25.0, layer_mapper=layer-2, projector=linear/events.out.tfevents.1724395244.e3f806ea38c9 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a453f903c27cbe44de1139bc2db780af9d67c938e7420c82c0ce4271108b2e5e
+ size 588
logs/attn_loss_fn=cos, attn_weight=25.0, layer_mapper=layer-2, projector=linear/events.out.tfevents.1724395600.e3f806ea38c9 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f60468b8dd9872824ef1447d057b6e2134db5fbec40f3041c9ce997f563d817
+ size 29632525
logs/attn_loss_fn=kl, attn_weight=5, layer_mapper=all, projector=linear/events.out.tfevents.1724395297.e3f806ea38c9 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:06a03b87a7c91e41f0b989c8f11b45ba086cfee92594031ea1a704ec5b9d152f
+ size 5292
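
The three files added under `logs/` are TensorBoard event logs, one per run, with each run's hyperparameters (`attn_loss_fn`, `attn_weight`, `layer_mapper`, `projector`) encoded in the directory name. A sketch for inspecting one with TensorBoard's `EventAccumulator` follows; the run directory is copied from the first entry above, so adjust the path to whichever run you pulled.

```python
# Sketch: list the scalar series recorded in one of the added tfevents logs.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

run_dir = "logs/attn_loss_fn=cos, attn_weight=25.0, layer_mapper=layer-2, projector=linear"
ea = EventAccumulator(run_dir)
ea.Reload()  # parse all event files found in the run directory
for tag in ea.Tags()["scalars"]:
    events = ea.Scalars(tag)  # records with wall_time, step, value
    print(f"{tag}: {len(events)} points, last value {events[-1].value:.4f}")
```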
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b1e3f1d9530f5ab44df7ced276f9aad18a5896067fa83aebc347729d0fbd1d5b
+ oid sha256:1d756009f29a25fc553904bc796d1b9584b75675b011c01743e8f336f6a071c2
  size 248894656
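
The retrained weights stay in the standard safetensors layout for a 0.24 GB `GPT2LMHeadModel` (the file size is unchanged; only the `oid` hash moves), so the checkpoint should load through the usual `transformers` API. A minimal sketch, where the repo id is a hypothetical inferred from the card title:

```python
# Loading sketch; "lapp0/distily_multi_experiment" is an assumed repo id --
# substitute the actual Hub path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "lapp0/distily_multi_experiment"  # hypothetical
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)

inputs = tokenizer("Distillation compresses a model by", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```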
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:154c9ae48a6d8c0a8eabb43967035cb981a54c02d7049008d68d63ac6b4d652b
- size 5368
+ oid sha256:f08a147b8abe3158989303fccd09071bf93778b68c01c0b859d98137de13ece5
+ size 1017899144
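
Each `version`/`oid`/`size` triple in these diffs is a Git LFS pointer file, not the binary itself: per the LFS spec referenced on the first line, `oid` is the SHA-256 of the stored object and `size` is its length in bytes. A small sketch for checking a downloaded artifact against its pointer:

```python
# Verify a downloaded LFS object: compare size in bytes, then SHA-256
# of the file contents against the pointer's oid.
import hashlib
import os

def verify_lfs_object(path: str, expected_oid: str, expected_size: int) -> bool:
    if os.path.getsize(path) != expected_size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_oid

# Pointer values taken from the training_args.bin diff above.
print(verify_lfs_object(
    "training_args.bin",
    "f08a147b8abe3158989303fccd09071bf93778b68c01c0b859d98137de13ece5",
    1017899144,
))
```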