Training in progress, step 61875

Files changed:
- README.md +34 -34
- config.json +1 -1
- generation_config.json +1 -1
- logs/attn_loss_fn=cos, attn_weight=25.0, layer_mapper=last_k_2, projector=orthogonal/completed.flag +0 -0
- logs/attn_loss_fn=cos, attn_weight=5, layer_mapper=last, projector=orthogonal/events.out.tfevents.1724382405.f383272e719b +3 -0
- model.safetensors +1 -1
- training_args.bin +1 -1
README.md
CHANGED
@@ -44,42 +44,42 @@ More information needed
 | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
 | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
 | **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
-| 0 | 0 | 2473901162496.0 | 170424302305280.0 | …
-| 2500 | 0.0404 | …
-| 5000 | 0.0808 | …
-| 7500 | 0.1212 | …
-| 10000 | 0.1616 | …
-| 12500 | 0.2020 | …
-| 15000 | 0.2424 | …
-| 17500 | 0.2828 | …
-| 20000 | 0.3232 | …
-| 22500 | 0.3636 | 66.…
-| 25000 | 0.4040 | 63.…
-| 27500 | 0.4444 | …
-| 30000 | 0.4848 | …
-| 32500 | 0.5253 | 58.…
-| 35000 | 0.5657 | …
-| 37500 | 0.6061 | …
-| 40000 | 0.6465 | …
-| 42500 | 0.6869 | …
-| 45000 | 0.7273 | …
-| 47500 | 0.7677 | 50.…
-| 50000 | 0.8081 | …
-| 52500 | 0.8485 | …
-| 55000 | 0.8889 | …
-| 57500 | 0.9293 | …
-| 60000 | 0.9697 | …
-| 61875 | 1.0 | …
+| 0 | 0 | 2473901162496.0 | 170424302305280.0 | 45.7764 | 30.2746 | 82.577 | 10.339 | 4060086272.0 | 71468255805440.0 |
+| 2500 | 0.0404 | 2040.0 | 20608.0 | 20.8472 | 30.2064 | 82.764 | 10.362 | 1472.0 | 60160.0 |
+| 5000 | 0.0808 | 488.0 | 3120.0 | 18.4130 | 30.6595 | 81.541 | 10.209 | 338.0 | 1112.0 |
+| 7500 | 0.1212 | 276.0 | 1296.0 | 17.0012 | 30.289 | 82.538 | 10.334 | 255.0 | 249.0 |
+| 10000 | 0.1616 | 202.0 | 756.0 | 16.2078 | 30.2709 | 82.588 | 10.34 | 188.0 | 304.0 |
+| 12500 | 0.2020 | 145.0 | 540.0 | 15.0996 | 30.2185 | 82.731 | 10.358 | 131.0 | 176.0 |
+| 15000 | 0.2424 | 123.5 | 490.0 | 14.5283 | 30.2533 | 82.636 | 10.346 | 93.0 | 146.0 |
+| 17500 | 0.2828 | 95.0 | 376.0 | 14.1520 | 30.2403 | 82.671 | 10.35 | 75.5 | 137.0 |
+| 20000 | 0.3232 | 79.0 | 306.0 | 13.6446 | 30.1204 | 83.0 | 10.392 | 63.25 | 130.0 |
+| 22500 | 0.3636 | 66.0 | 219.0 | 13.1452 | 30.158 | 82.897 | 10.379 | 50.0 | 80.5 |
+| 25000 | 0.4040 | 63.0 | 200.0 | 12.9619 | 30.1269 | 82.982 | 10.389 | 43.75 | 77.5 |
+| 27500 | 0.4444 | 59.0 | 197.0 | 12.8388 | 30.3214 | 82.45 | 10.323 | 40.5 | 73.5 |
+| 30000 | 0.4848 | 59.5 | 204.0 | 12.8191 | 30.3164 | 82.464 | 10.324 | 40.5 | 70.5 |
+| 32500 | 0.5253 | 58.25 | 176.0 | 12.7778 | 30.2231 | 82.718 | 10.356 | 38.75 | 61.75 |
+| 35000 | 0.5657 | 58.25 | 169.0 | 12.6562 | 30.35 | 82.372 | 10.313 | 36.5 | 45.5 |
+| 37500 | 0.6061 | 56.75 | 158.0 | 12.6014 | 30.3685 | 82.322 | 10.307 | 37.0 | 50.5 |
+| 40000 | 0.6465 | 55.0 | 156.0 | 12.5674 | 30.3598 | 82.346 | 10.31 | 33.75 | 59.5 |
+| 42500 | 0.6869 | 54.5 | 147.0 | 12.5141 | 30.3209 | 82.451 | 10.323 | 34.25 | 52.5 |
+| 45000 | 0.7273 | 50.75 | 135.0 | 12.2860 | 30.244 | 82.661 | 10.349 | 29.5 | 41.75 |
+| 47500 | 0.7677 | 50.5 | 127.0 | 12.2408 | 30.3366 | 82.409 | 10.318 | 28.875 | 35.0 |
+| 50000 | 0.8081 | 50.25 | 125.5 | 12.2160 | 30.2563 | 82.627 | 10.345 | 28.625 | 39.0 |
+| 52500 | 0.8485 | 49.25 | 123.0 | 12.1936 | 30.2253 | 82.712 | 10.356 | 28.5 | 35.5 |
+| 55000 | 0.8889 | 49.25 | 121.0 | 12.1620 | 30.1898 | 82.81 | 10.368 | 27.875 | 35.0 |
+| 57500 | 0.9293 | 48.75 | 120.0 | 12.1488 | 30.2559 | 82.628 | 10.345 | 27.75 | 33.5 |
+| 60000 | 0.9697 | 48.75 | 119.5 | 12.1404 | 30.1517 | 82.914 | 10.381 | 27.625 | 33.25 |
+| 61875 | 1.0 | 48.75 | 120.0 | 12.1402 | 30.2129 | 82.746 | 10.36 | 27.625 | 33.5 |

 # Resource Usage Comparison

-- VRAM Use: 7.…
+- VRAM Use: 7.7831 GB

-…
+`# Distillation (Teacher -> Student) Architecture Difference:

 - **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
 - **Total Parameters**: 124,439,808 -> 124,439,808
-- **Data Type (dtype)**: …
+- **Data Type (dtype)**: 124439808 -> torch.bfloat16
 - **Model Size**: 0.24 GB -> 0.24 GB

 <details>
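The `*ppl` columns in the table above are perplexities on held-out sets (English and French Wikipedia, TinyStories, Chinese Wikipedia). Perplexity is conventionally the exponential of the mean per-token negative log-likelihood, which is why the `loss` and perplexity columns fall together. A minimal sketch of that conversion, with illustrative NLL values (the diff does not show exactly how Distily computes its eval metrics):

```python
import torch

# Perplexity = exp(mean negative log-likelihood per token).
# `nll` is an illustrative stand-in for per-token eval losses.
nll = torch.tensor([3.88, 3.85, 3.92])

ppl = torch.exp(nll.mean())
print(f"{ppl.item():.1f}")  # ~48.6, the scale of the late-training enwikippl values
```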
@@ -103,7 +103,7 @@ Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset
 # Training Objective

 ```
-DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=…
+DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))
 ```

 # Hyperparameters
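The objective added in the hunk above combines two terms: a KL-divergence loss between teacher and student logits (weight 1) and a cosine loss between attention maps (weight 25.0) under a `layer-2` mapper. Below is a minimal sketch of what such an objective computes, assuming Hugging Face-style model outputs with `logits` and `attentions`; the function name and the reading of `layer-2` as "match the last two layers" are assumptions, not Distily's actual implementation:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, attn_weight=25.0):
    # Logits component (weight=1, loss_fn=kl): KL divergence between
    # the teacher's and student's next-token distributions.
    s_logp = F.log_softmax(student_out.logits, dim=-1)
    t_prob = F.softmax(teacher_out.logits, dim=-1)
    logits_loss = F.kl_div(s_logp, t_prob, reduction="batchmean")

    # Attention component (weight=25.0, loss_fn=cos): cosine distance
    # between attention maps; "layer-2" is read here as pairing the
    # last two layers of each model (an assumption).
    pairs = list(zip(student_out.attentions[-2:], teacher_out.attentions[-2:]))
    attn_loss = 0.0
    for s_attn, t_attn in pairs:
        cos = F.cosine_similarity(s_attn.flatten(1), t_attn.flatten(1), dim=-1)
        attn_loss = attn_loss + (1.0 - cos).mean()
    attn_loss = attn_loss / len(pairs)

    return logits_loss + attn_weight * attn_loss
```

Both models must be run with `output_attentions=True` for the attention term to be available.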
@@ -120,9 +120,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: `linear`
 - lr_scheduler_warmup_ratio: `0.5`
 - num_epochs: `1.0`
-- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=…
+- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))`
 - train_embeddings: `True`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at …
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7fbd2823f2b0>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `None`
 - student_model_config: `None`
@@ -154,6 +154,6 @@ The following hyperparameters were used during training:

 # Framework Versions
 - Distily 0.2.0
-- Transformers 4.44.…
-- Pytorch 2.…
+- Transformers 4.44.0
+- Pytorch 2.3.0
 - Datasets 2.21.0
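With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.5`, the learning rate ramps up over the first half of the 61875-step run and decays linearly to zero over the second half, which is consistent with the eval metrics still improving late in training. A sketch of an equivalent schedule using transformers' `get_linear_schedule_with_warmup` (the base learning rate is not shown in these hunks, so the value here is illustrative); transformers implements this as a `torch.optim.lr_scheduler.LambdaLR`, matching the `lr_scheduler` field above:

```python
import torch
from transformers import get_linear_schedule_with_warmup

total_steps = 61875                    # final step in the training log above
warmup_steps = int(0.5 * total_steps)  # lr_scheduler_warmup_ratio: 0.5

# Illustrative optimizer and base LR; the actual values are not in this diff.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=1e-4)

# Linear warmup to the base LR, then linear decay to zero.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
)
```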
config.json
CHANGED
@@ -33,7 +33,7 @@
     }
   },
   "torch_dtype": "bfloat16",
-  "transformers_version": "4.44.…
+  "transformers_version": "4.44.0",
   "use_cache": true,
   "vocab_size": 50257
 }
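config.json pins `torch_dtype` to `bfloat16`, matching the dtype row in the README's architecture comparison. A sketch of loading the checkpoint in that dtype (the local path is a placeholder, not a real repo id):

```python
import torch
from transformers import AutoModelForCausalLM

# "./checkpoint-61875" is a placeholder path for this commit's files.
model = AutoModelForCausalLM.from_pretrained(
    "./checkpoint-61875",
    torch_dtype=torch.bfloat16,  # matches "torch_dtype": "bfloat16" in config.json
)
print(model.num_parameters())  # 124,439,808 per the README
```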
generation_config.json
CHANGED
@@ -2,5 +2,5 @@
   "_from_model_config": true,
   "bos_token_id": 50256,
   "eos_token_id": 50256,
-  "transformers_version": "4.44.…
+  "transformers_version": "4.44.0"
 }
logs/attn_loss_fn=cos, attn_weight=25.0, layer_mapper=last_k_2, projector=orthogonal/completed.flag
ADDED
File without changes

logs/attn_loss_fn=cos, attn_weight=5, layer_mapper=last, projector=orthogonal/events.out.tfevents.1724382405.f383272e719b
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:def1926856c65a4d22407bbad2ec49686f4e49aec7643a8445809c56bc7d1697
+size 29632523
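The binary artifacts in this commit (the TensorBoard event file above, plus model.safetensors and training_args.bin below) are stored as Git LFS pointers: three text lines giving the spec version, a sha256 content id, and the size of the real blob in bytes. A minimal sketch that parses such a pointer; `parse_lfs_pointer` is a hypothetical helper, not part of any Git tooling:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its version, oid, and size fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],  # LFS spec URL
        "oid_algo": algo,              # hash algorithm, e.g. "sha256"
        "oid": digest,                 # content hash of the real blob
        "size": int(fields["size"]),   # size of the real file in bytes
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:def1926856c65a4d22407bbad2ec49686f4e49aec7643a8445809c56bc7d1697
size 29632523"""

print(parse_lfs_pointer(pointer)["size"])  # 29632523 (~28 MB of TensorBoard events)
```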
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:…
+oid sha256:9de163c64545e506410b51e708d3647559bf19be3b3dd504e77fb8c3b3d3051a
 size 248894656
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:…
+oid sha256:3e24bd773ab39cd8217ed631d6db55bc6c7a82ad2f423d1b2d0445e7de80f459
 size 1017899144