Summary
Distilled with the Distily library, using HuggingFaceTB/SmolLM-135M as the teacher model and the wikimedia/wikipedia dataset as training data.
Model Architecture:
- Architecture: LlamaForCausalLM
- Total Parameters: 81,413,568
- Data Type (dtype): torch.float32
- Model Size: 0.30 GB
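The student loads like any other causal-LM checkpoint through the standard transformers API. A minimal sketch, assuming the repository id shown for this model (distily/distily_seq_len_batch_size) and that the tokenizer is shared with the SmolLM-135M teacher:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "distily/distily_seq_len_batch_size"          # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(repo_id)      # tokenizer shared with the teacher (assumption)
model = AutoModelForCausalLM.from_pretrained(repo_id)   # loads the 15-layer LlamaForCausalLM student

inputs = tokenizer("Knowledge distillation compresses a model by", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```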
Student Model Details
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(49152, 576)
(layers): ModuleList(
(0-14): 15 x LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=576, out_features=576, bias=False)
(k_proj): Linear(in_features=576, out_features=192, bias=False)
(v_proj): Linear(in_features=576, out_features=192, bias=False)
(o_proj): Linear(in_features=576, out_features=576, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LigerSwiGLUMLP(
(gate_proj): Linear(in_features=576, out_features=1536, bias=False)
(up_proj): Linear(in_features=576, out_features=1536, bias=False)
(down_proj): Linear(in_features=1536, out_features=576, bias=False)
)
(input_layernorm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
(post_attention_layernorm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
)
)
(norm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
(rotary_emb): LlamaRotaryEmbedding()
)
(lm_head): Linear(in_features=576, out_features=49152, bias=False)
)
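The student_model_config entry under Hyperparameters ({'num_hidden_layers': 15}) suggests the student is simply the teacher architecture with half the decoder layers. A sketch of building such a config, illustrative only and not Distily's internal code:

```python
from transformers import AutoConfig, LlamaForCausalLM

config = AutoConfig.from_pretrained("HuggingFaceTB/SmolLM-135M")
config.num_hidden_layers = 15           # halve the teacher's 30 decoder layers
student = LlamaForCausalLM(config)      # freshly initialized student

# With tied input/output embeddings (as in SmolLM-135M) this comes out to
# the 81,413,568 parameters reported above.
print(sum(p.numel() for p in student.parameters()))
```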
Resource Usage
- Max Train VRAM Use: 19.6182 GB
- Available VRAM: 23.4329 GB
- GPUs:
  - 1x NVIDIA GeForce RTX 4090
- CPUs: 64
- CPU Memory: 251.7299 GB
- CPU Memory Bandwidth: 1600 GB/s
Distillation (Teacher -> Student) Architecture Difference:
- Architecture: LlamaForCausalLM -> LlamaForCausalLM
- Total Parameters: 134,515,008 -> 81,413,568
- Data Type (dtype): torch.float32 -> torch.float32
- Model Size: 0.25 GB -> 0.30 GB
Module Diff Details
--- teacher model modules
+++ student model modules
@@ -2,7 +2,7 @@
(model): LlamaModel(
(embed_tokens): Embedding(49152, 576)
(layers): ModuleList(
- (0-29): 30 x LlamaDecoderLayer(
+ (0-14): 15 x LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=576, out_features=576, bias=False)
(k_proj): Linear(in_features=576, out_features=192, bias=False)
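The diff above can be reproduced, and the 134,515,008 -> 81,413,568 parameter reduction verified, by diffing the printed module trees of the two models. A sketch, assuming the student repository id from this page:

```python
import difflib
from transformers import AutoModelForCausalLM

teacher = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M")
student = AutoModelForCausalLM.from_pretrained("distily/distily_seq_len_batch_size")  # assumed repo id

print(sum(p.numel() for p in teacher.parameters()),
      "->",
      sum(p.numel() for p in student.parameters()))

diff = difflib.unified_diff(
    repr(teacher).splitlines(),
    repr(student).splitlines(),
    fromfile="teacher model modules",
    tofile="student model modules",
    lineterm="",
)
print("\n".join(diff))
```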
Train Dataset
Trained on 553,266,374 tokens from the wikimedia/wikipedia dataset.
- Num Samples: 998,000
- Subset: 20231101.en
- Split: train
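The dataset settings above (1,000,000 sampled rows with 0.2% held out for evaluation, leaving ~998,000 training samples) correspond to a straightforward datasets pipeline. A sketch that mirrors those settings rather than the exact Distily data code:

```python
from datasets import load_dataset

dataset = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
dataset = dataset.select(range(1_000_000))                    # dataset_sample_size; no shuffle (dataset_shuffle=False)
splits = dataset.train_test_split(test_size=0.002, seed=42)   # dataset_test_size / dataset_shuffle_seed
train_ds, eval_ds = splits["train"], splits["test"]           # ~998,000 / ~2,000 rows
```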
Training Objective
DistillationObjective(
logits_loss_component=LossComponent(
weight=1,
loss_fn='kl'
),
hs_loss_component=LossComponent(
weight=0
),
attn_loss_component=LossComponent(
weight=0
)
)
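With only the logits component active (weight=1, loss_fn='kl') and the hidden-state and attention components weighted 0, the objective reduces to a KL divergence between the teacher's and student's next-token distributions. A minimal sketch of such a loss, assuming a plain forward KL over the vocabulary; Distily's exact implementation (e.g. temperature scaling or masking) may differ:

```python
import torch
import torch.nn.functional as F

def logits_kl_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor) -> torch.Tensor:
    """KL(teacher || student), averaged over the batch.

    Both tensors have shape (batch, seq_len, vocab_size).
    """
    student_logprobs = F.log_softmax(student_logits, dim=-1)
    teacher_logprobs = F.log_softmax(teacher_logits, dim=-1)
    return F.kl_div(student_logprobs, teacher_logprobs,
                    log_target=True, reduction="batchmean")
```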
Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- num_epochs: 1.0
- distillation_objective: DistillationObjective( logits_loss_component=LossComponent( weight=1, loss_fn='kl' ), hs_loss_component=LossComponent( weight=0 ), attn_loss_component=LossComponent( weight=0 ) )
- lr_scheduler: <torch.optim.lr_scheduler.LambdaLR object at 0x76ca190e3fd0>
- student_model_name_or_path: None
- student_config_name_or_path: None
- student_model_config: {'num_hidden_layers': 15}
- reinitialize_weights: None
- copy_teacher_modules: [('lm_head', False)]
- student_model_as_bitnet: False
- student_use_liger_kernel: True
- teacher_model_name_or_path: HuggingFaceTB/SmolLM-135M
- teacher_load_in_8bit: False
- teacher_load_in_4bit: False
- dataset_uri: wikimedia/wikipedia
- dataset_subset: 20231101.en
- dataset_split: train
- dataset_column_name: text
- dataset_sample_size: 1000000
- dataset_max_seq_length: 1024
- dataset_test_size: 0.002
- dataset_shuffle: False
- dataset_shuffle_seed: 42
- dataset_trust_remote_code: False
- gradient_accumulation_steps: 1
- weight_decay: 0.0
- max_grad_norm: 1.0
- warmup_ratio: 0.0
- warmup_steps: 0
- gradient_checkpointing: True
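These values map onto the standard transformers TrainingArguments fields. A sketch of the equivalent configuration (the output_dir name is assumed; this is not the actual Distily invocation):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distily_seq_len_batch_size",   # assumed output directory
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="polynomial",
    num_train_epochs=1.0,
    gradient_accumulation_steps=1,
    weight_decay=0.0,
    max_grad_norm=1.0,
    warmup_ratio=0.0,
    warmup_steps=0,
    gradient_checkpointing=True,
)
```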
Framework Versions
- Distily 0.5.0
- Transformers 4.45.0.dev0
- Pytorch 2.5.0.dev20240910+cu121
- Datasets 2.21.0