lapp0 committed
Commit bb0ba38
1 parent: ee7a091

Training in progress, step 49500

README.md CHANGED
@@ -44,7 +44,7 @@ More information needed

  # Resource Usage Comparison

- - VRAM Use: 7.4167 GB
+ - VRAM Use: 8.0694 GB

  # Distillation (Teacher -> Student) Architecture Difference:

@@ -95,7 +95,7 @@ The following hyperparameters were used during training:
  <summary>Expand</summary>

  - learning_rate: `0.0002`
- - train_batch_size: `4`
+ - train_batch_size: `8`
  - eval_batch_size: `8`
  - seed: `42`
  - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
@@ -103,7 +103,7 @@ The following hyperparameters were used during training:
  - num_epochs: `1.0`
  - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm, projector=mlp))`
  - train_embeddings: `True`
- - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f367c1b3760>`
+ - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f0d5c670a30>`
  - student_model_name_or_path: `None`
  - student_config_name_or_path: `distilbert/distilgpt2`
  - student_model_config: `None`
@@ -135,4 +135,4 @@ The following hyperparameters were used during training:
  - Distily 0.4.1
  - Transformers 4.44.2
  - Pytorch 2.4.0+cu121
- - Datasets 2.18.0
+ - Datasets 2.21.0
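The `distillation_objective` recorded above combines a KL divergence on the logits (weight 1) with a raw MSE on attention maps (weight 5), where student attentions are layer-mapped onto the teacher, layer-normalized, and passed through an MLP projector. Below is a minimal sketch of such a combined loss, assuming Hugging Face model outputs with `output_attentions=True`; it is illustrative, not Distily's actual code, and the exact semantics of `layer_mapper=layer-2` are Distily-specific (here assumed to be an every-other-layer mapping for a 6-layer student against a 12-layer teacher):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, norms, projectors):
    """Sketch: KL on logits (weight 1) + raw MSE on attentions (weight 5)."""
    # Logits component: KL between student and teacher token distributions.
    kl = F.kl_div(
        F.log_softmax(student_out.logits, dim=-1),
        F.softmax(teacher_out.logits, dim=-1),
        reduction="batchmean",
    )

    # Attention component: map each student layer i to an assumed teacher
    # layer 2*i, then layernorm + MLP-project before the MSE comparison.
    mse = torch.tensor(0.0, device=student_out.logits.device)
    for i, s_attn in enumerate(student_out.attentions):
        t_attn = teacher_out.attentions[2 * i]   # assumed layer mapping
        s = projectors[i](norms[i](s_attn))      # layernorm + MLP projector
        mse = mse + F.mse_loss(s, t_attn)

    return 1.0 * kl + 5.0 * mse
```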
logs/attn_norm=None, attn_projector=mlp, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=8, warmup_ratio=0/events.out.tfevents.1725144071.23668649e3db ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:80ad4841877cf29dcec7743fd156fc56ded3241f3a230f276c1f68f9b20f35a6
+ size 23671640
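The three lines above are a Git LFS pointer: the repository tracks only the spec version, the SHA-256 of the real blob, and its byte size, while the binary itself lives in LFS storage. A small standard-library sketch (hypothetical helper, Python 3.9+) to check that a downloaded blob matches its pointer:

```python
import hashlib

def verify_lfs_pointer(pointer_path: str, blob_path: str) -> bool:
    """Return True if blob_path matches the oid/size in the LFS pointer."""
    with open(pointer_path) as f:
        fields = dict(line.strip().split(" ", 1) for line in f if " " in line)
    expected_oid = fields["oid"].removeprefix("sha256:")
    expected_size = int(fields["size"])

    digest, size = hashlib.sha256(), 0
    with open(blob_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
            size += len(chunk)
    return digest.hexdigest() == expected_oid and size == expected_size
```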
logs/attn_norm=layernorm, attn_projector=mlp, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=8, warmup_ratio=0/completed.flag ADDED
File without changes (empty flag file)
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:daa4c03545a07e34f5a5cd138937ed1c732263297f0117ded641fde7d6e35cdd
+ oid sha256:802a3510b17c0c240080a8a448009d29aae816fc3e3f307166ccde01ee573840
  size 163832792
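Only the `oid` changes for `model.safetensors`; the size stays identical because the student architecture is unchanged and only the weight values were updated. After `git lfs pull`, the checkpoint can be inspected with the safetensors API; a brief sketch:

```python
from safetensors.torch import load_file

# Load the tensor dict directly from the checkpoint file.
state_dict = load_file("model.safetensors")
total = sum(t.numel() for t in state_dict.values())
print(f"{len(state_dict)} tensors, {total:,} parameters")
```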
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1f72897658a2d59d35033740339bc55c2e098eb2f7e2201ecf7ffd5189d161cb
+ oid sha256:dc5c5a3852182f814286138c0bd60f6ffae90db702d5b862ca585f4ec248ed52
  size 5560
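`training_args.bin` is the `TrainingArguments` object that the `transformers` Trainer pickles alongside checkpoints via `torch.save`; its hash changes here because the batch size did. A hedged sketch to confirm the new value:

```python
import torch

# Trainer saves TrainingArguments with torch.save; weights_only=False is
# needed on recent PyTorch to unpickle an arbitrary Python object.
args = torch.load("training_args.bin", weights_only=False)
print(args.per_device_train_batch_size)  # expected: 8 after this commit
```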