lapp0 committed
Commit b10d829 · verified · 1 Parent(s): d342a8f

Training in progress, step 50000

README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 base_model: gpt2
 datasets:
-- distily/c4_multilingual_1M
+- distily/synth_gpt2_t1_seq_1M
 library_name: Distily
 license: creativeml-openrail-m
 tags:
@@ -18,7 +18,7 @@ model-index:
 
 Distilled with [Distily](https://github.com/lapp0/distily) library
 using teacher model [gpt2](https://huggingface.co/gpt2)
-on dataset [distily/c4_multilingual_1M](https://huggingface.co/datasets/distily/c4_multilingual_1M).
+on dataset [distily/synth_gpt2_t1_seq_1M](https://huggingface.co/datasets/distily/synth_gpt2_t1_seq_1M).
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment.
@@ -79,12 +79,12 @@ GPT2LMHeadModel(
 # Resource Usage
 
 - Max Train VRAM Use: 15.7135 GB
-- Available VRAM: 23.6497 GB
+- Available VRAM: 23.4329 GB
 - GPUs:
   - 1x NVIDIA GeForce RTX 4090
-- CPUs: 28
-- CPU Memory: 62.6429 GB
-- CPU Memory Bandwidth: 700 GB/s
+- CPUs: 64
+- CPU Memory: 251.7299 GB
+- CPU Memory Bandwidth: 1600 GB/s
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -115,7 +115,7 @@ GPT2LMHeadModel(
 <br/>
 
 # Train Dataset
-Trained on 448,494,678 tokens from the [distily/c4_multilingual_1M](https://huggingface.co/datasets/distily/c4_multilingual_1M) dataset.
+Trained on 681,027,436 tokens from the [distily/synth_gpt2_t1_seq_1M](https://huggingface.co/datasets/distily/synth_gpt2_t1_seq_1M) dataset.
 
 - Num Samples: `998,000`
 - Subset: `None`
@@ -172,7 +172,7 @@ The following hyperparameters were used during training:
   projector='orthogonal'
 )
 )`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f5a6ef93eb0>`
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7a7afdb74e80>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
@@ -182,13 +182,13 @@ The following hyperparameters were used during training:
 - teacher_model_name_or_path: `gpt2`
 - teacher_load_in_8bit: `False`
 - teacher_load_in_4bit: `False`
-- dataset_uri: `distily/c4_multilingual_1M`
+- dataset_uri: `distily/synth_gpt2_t1_seq_1M`
 - dataset_subset: `None`
 - dataset_split: `train`
 - dataset_column_name: `text`
 - dataset_sample_size: `1000000`
 - dataset_test_size: `0.002`
-- dataset_shuffle: `False`
+- dataset_shuffle: `True`
 - dataset_shuffle_seed: `42`
 - gradient_accumulation_steps: `1`
 - weight_decay: `0.0`
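The dataset hyperparameters above fully determine the split: 1,000,000 sampled rows with a 0.002 test fraction leave the 998,000 train samples reported under "Num Samples". A minimal sketch of that pipeline with the Hugging Face `datasets` library follows; it is a guess at Distily's behavior (including the order of sampling vs. shuffling), not its actual loading code:

```python
from datasets import load_dataset

# Hypothetical reconstruction of the data pipeline implied by the
# hyperparameters above; names in comments map to the config keys.
ds = load_dataset("distily/synth_gpt2_t1_seq_1M", split="train")  # dataset_uri, dataset_split
ds = ds.select(range(1_000_000))                                  # dataset_sample_size
ds = ds.shuffle(seed=42)                                          # dataset_shuffle, dataset_shuffle_seed
splits = ds.train_test_split(test_size=0.002)                     # dataset_test_size

print(len(splits["train"]))           # 998000, matching "Num Samples" above
texts = splits["train"]["text"]       # dataset_column_name
```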
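The `lr_scheduler` entry above is recorded as a raw object repr rather than a config. Per the log directory name below (`lr_scheduler_type=constant`), `transformers` builds a constant schedule as a `LambdaLR`, which matches the repr. A minimal sketch of that construction; the model and optimizer here are stand-ins, not the training setup:

```python
import torch
from transformers import get_scheduler

# Stand-in model/optimizer; only the scheduler construction is the point.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

scheduler = get_scheduler("constant", optimizer=optimizer)
print(type(scheduler))  # <class 'torch.optim.lr_scheduler.LambdaLR'>
```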
logs/dataset_shuffle=True, dataset_split=train, dataset_subset=None, dataset_uri=distily_filtered_redpajama_multilingual, lr_scheduler_kwargs=None, lr_scheduler_type=constant/events.out.tfevents.1725781824.b57c76173204 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c94b1a0d5f44bc2cfd1573bcc7cd2aa2f55ab658c8774fecc3290237b32014f7
-size 1436324
+oid sha256:00d05dccf8f5ba5aa4048965f2f09377dc241267ab4585f663bd9016764c0e4b
+size 1596619
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:be71e4e82c538387bc593ea6422ca0f5865ebe8510d14f4faf36079bc6878314
+oid sha256:a1a292cdeff20913a9e61397f7e618e7519c47cb9bf60bbc8759459422b0bfaa
 size 163832792
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:389128090549d767e1ec04c936f05094ed97b08c4bc5a59e39a861b902387d21
-size 5496
+oid sha256:df27e837bc207577c331df3b67b7c8b3301c875f04f12f54132150840661b885
+size 5560
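The three binary files above are stored through Git LFS, so each diff touches only a pointer file recording the blob's sha256 `oid` and byte `size`. A hedged sketch of checking a downloaded blob against its pointer, using the `model.safetensors` values above (the file path assumes a local download):

```python
import hashlib

def verify_lfs_pointer(path: str, expected_oid: str, expected_size: int) -> bool:
    """Check a local file against the oid/size recorded in an LFS pointer."""
    digest, size = hashlib.sha256(), 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
            size += len(chunk)
    return digest.hexdigest() == expected_oid and size == expected_size

print(verify_lfs_pointer(
    "model.safetensors",  # assumed local path
    "a1a292cdeff20913a9e61397f7e618e7519c47cb9bf60bbc8759459422b0bfaa",
    163832792,
))
```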