---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_multi_experiment
  results: []
---

# Summary

Distilled with the [Distily](https://github.com/lapp0/distily) library using teacher model [gpt2](https://huggingface.co/gpt2) on the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

# Model Architecture:
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB

# Evaluation Metrics Comparison

| step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
| 0 | 0 | 2473901162496.0 | 170424302305280.0 | 45.7764 | 25.211 | 99.163 | 12.415 | 4060086272.0 | 71468255805440.0 |
| 2500 | 0.0404 | 1728.0 | 17536.0 | 20.6058 | 25.224 | 99.112 | 12.409 | 1136.0 | 44800.0 |
| 5000 | 0.0808 | 486.0 | 3120.0 | 18.4108 | 25.2493 | 99.013 | 12.396 | 338.0 | 1112.0 |
| 7500 | 0.1212 | 276.0 | 1296.0 | 17.0012 | 25.1973 | 99.217 | 12.422 | 255.0 | 247.0 |
| 10000 | 0.1616 | 202.0 | 756.0 | 16.2093 | 25.283 | 98.881 | 12.38 | 187.0 | 294.0 |
| 12500 | 0.2020 | 145.0 | 540.0 | 15.1000 | 25.2581 | 98.978 | 12.392 | 131.0 | 175.0 |
| 15000 | 0.2424 | 124.0 | 486.0 | 14.5265 | 25.2532 | 98.997 | 12.394 | 93.5 | 146.0 |
| 17500 | 0.2828 | 95.0 | 374.0 | 14.1524 | 25.2791 | 98.896 | 12.382 | 75.5 | 135.0 |
| 20000 | 0.3232 | 78.5 | 302.0 | 13.6402 | 25.2899 | 98.854 | 12.377 | 63.5 | 162.0 |
| 22500 | 0.3636 | 67.0 | 219.0 | 13.1496 | 25.2955 | 98.832 | 12.374 | 49.75 | 85.5 |
| 25000 | 0.4040 | 63.75 | 204.0 | 12.9650 | 25.2258 | 99.105 | 12.408 | 44.25 | 80.0 |
| 27500 | 0.4444 | 59.75 | 202.0 | 12.8388 | 25.2851 | 98.873 | 12.379 | 39.75 | 78.5 |
| 30000 | 0.4848 | 59.0 | 192.0 | 12.8199 | 25.3036 | 98.8 | 12.37 | 41.0 | 58.75 |
| 32500 | 0.5253 | 58.25 | 177.0 | 12.7685 | 25.2394 | 99.051 | 12.401 | 38.0 | 67.5 |
| 35000 | 0.5657 | 57.25 | 172.0 | 12.6534 | 25.2884 | 98.859 | 12.377 | 35.75 | 45.5 |
| 37500 | 0.6061 | 56.5 | 157.0 | 12.6015 | 25.2802 | 98.892 | 12.381 | 36.75 | 49.0 |
| 40000 | 0.6465 | 55.5 | 156.0 | 12.5839 | 25.2894 | 98.856 | 12.377 | 33.75 | 63.25 |
| 42500 | 0.6869 | 55.0 | 148.0 | 12.5142 | 25.2104 | 99.166 | 12.416 | 35.0 | 43.0 |
| 45000 | 0.7273 | 51.25 | 135.0 | 12.2868 | 25.2856 | 98.87 | 12.379 | 29.625 | 46.5 |
| 47500 | 0.7677 | 50.75 | 125.0 | 12.2409 | 25.2644 | 98.954 | 12.389 | 29.0 | 37.25 |
| 50000 | 0.8081 | 50.75 | 124.5 | 12.2177 | 25.286 | 98.869 | 12.378 | 28.875 | 39.0 |
| 52500 | 0.8485 | 49.5 | 121.5 | 12.1931 | 25.2638 | 98.956 | 12.389 | 28.5 | 35.25 |
| 55000 | 0.8889 | 49.5 | 120.5 | 12.1621 | 25.2759 | 98.908 | 12.383 | 28.0 | 35.75 |
| 57500 | 0.9293 | 49.0 | 120.0 | 12.1492 | 25.2276 | 99.098 | 12.407 | 27.875 | 34.25 |
| 60000 | 0.9697 | 49.0 | 119.5 | 12.1404 | 25.2436 | 99.035 | 12.399 | 27.75 | 34.0 |
| 61875 | 1.0 | 48.75 | 119.5 | 12.1397 | 25.124 | 99.507 | 12.458 | 27.875 | 34.25 |

# Resource Usage Comparison

- VRAM Use: 7.7830 GB

# Distillation (Teacher -> Student) Architecture Difference:

- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB
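Since teacher and student share the same architecture, the student loads like any GPT-2 checkpoint. A minimal usage sketch follows; the repo id is a placeholder for wherever this card is actually published:

```python
# Minimal usage sketch; the repo id below is a placeholder, not a confirmed path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<your-namespace>/distily_multi_experiment"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The card reports torch.bfloat16 weights, so load in that dtype.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Wikipedia is a", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
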
Module Diff Details (empty, as expected given the identical teacher and student architectures above):

```diff
```

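The `*ppl` columns in the evaluation table above are perplexities on the corresponding evaluation sets (lower is better). As a rough illustration of how such a figure is computed for a causal LM (this is not Distily's exact evaluation code), perplexity is the exponential of the mean token-level cross-entropy:

```python
# Illustrative perplexity computation; not Distily's exact evaluation code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # the teacher; the student is evaluated the same way
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

text = "Wikipedia is a free online encyclopedia."
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model return the mean next-token cross-entropy.
    loss = model(**enc, labels=enc["input_ids"]).loss

print(f"perplexity: {torch.exp(loss).item():.2f}")
```
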
# Train Dataset

Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.

- Num Samples: `247,500`
- Subset: `20231101.en`
- Split: `train`

# Training Objective

```
DistillationObjective(
    logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl),
    attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2)
)
```

This objective combines a KL-divergence loss over the teacher's and student's output logits (weight 1) with a cosine loss over attention maps paired by the `layer-2` layer mapper (weight 25.0); a sketch of the computation follows the hyperparameter list below.

# Hyperparameters

The following hyperparameters were used during training:
- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `linear`
- lr_scheduler_warmup_ratio: `0.5`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25.0, loss_fn=cos, layer_mapper=layer-2))`
- train_embeddings: `True`
- lr_scheduler: ``
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True`
- student_model_compile: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- teacher_model_compile: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `250000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.5`
- warmup_steps: `0`
- gradient_checkpointing: `True`
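As promised above, here is a hedged sketch of what the `distillation_objective` computes. Function names and the direction of the `layer-2` attention pairing are assumptions for illustration; this is not Distily's actual API.

```python
# Hedged sketch of the objective: KL on logits (weight 1) plus a cosine loss
# on attention maps (weight 25.0). Names are illustrative, not Distily's API.
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, attn_weight=25.0):
    # Logits component: KL(teacher || student) over the vocabulary.
    s_logp = F.log_softmax(student_out.logits, dim=-1)
    t_prob = F.softmax(teacher_out.logits, dim=-1)
    logits_loss = F.kl_div(s_logp, t_prob, reduction="batchmean")

    # Attention component: cosine distance between paired attention maps.
    # The exact pairing behind the `layer-2` mapper is an assumption here;
    # this version matches student layer i to teacher layer i + 2.
    pairs = list(zip(student_out.attentions, teacher_out.attentions[2:]))
    attn_loss = sum(
        1.0 - F.cosine_similarity(s.flatten(1), t.flatten(1), dim=-1).mean()
        for s, t in pairs
    ) / len(pairs)

    return logits_loss + attn_weight * attn_loss
```

Both forward passes would need `output_attentions=True` for the `attentions` fields to be populated.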

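Because `student_model_as_bitnet` is `True` (hence the `bitnet` and `1.58b` tags), the student's linear weights are trained in a BitNet-style ternary regime. A rough sketch of the absmean weight quantization described in the BitNet b1.58 paper, independent of Distily's implementation:

```python
# Absmean ternary quantization as in BitNet b1.58; illustrative only.
import torch

def absmean_ternary(w: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Scale by the mean absolute weight, then round and clip to {-1, 0, +1}.
    gamma = w.abs().mean()
    w_q = (w / (gamma + eps)).round().clamp(-1, 1)
    # Dequantize so downstream activations keep their scale. During training,
    # a straight-through estimator is typical: w + (w_q * gamma - w).detach()
    return w_q * gamma
```
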
# Framework Versions
- Distily 0.2.0
- Transformers 4.44.1
- PyTorch 2.5.0.dev20240821+cu121
- Datasets 2.21.0