---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual_babylm_aann_low_variability_noun
metrics:
- accuracy
model-index:
- name: >-
    smolm-autoreg-bpe-counterfactual_babylm_aann_low_variability_noun_1024-1e-3
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: kanishka/counterfactual_babylm_aann_low_variability_noun
      type: kanishka/counterfactual_babylm_aann_low_variability_noun
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.40927603188483547
---
# smolm-autoreg-bpe-counterfactual_babylm_aann_low_variability_noun_1024-1e-3
This model was trained from scratch on the kanishka/counterfactual_babylm_aann_low_variability_noun dataset. It achieves the following results on the evaluation set:
- Loss: 3.4134
- Accuracy: 0.4093
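
For reference, an evaluation loss of 3.4134 (cross-entropy in nats) corresponds to a perplexity of exp(3.4134) ≈ 30.4.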
## Model description
More information needed
## Intended uses & limitations
More information needed
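
Pending fuller documentation, here is a minimal sketch of loading the model for text generation with `transformers`. The repository id below is an assumption (same `kanishka` namespace as the training dataset); adjust it to the actual hub path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id -- same namespace as the training dataset.
model_id = "kanishka/smolm-autoreg-bpe-counterfactual_babylm_aann_low_variability_noun_1024-1e-3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation from a seed prompt.
inputs = tokenizer("The children saw a", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```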
## Training and evaluation data
More information needed
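
The training corpus is the public dataset named in the metadata above. A minimal sketch for inspecting it with the `datasets` library (split names assumed):

```python
from datasets import load_dataset

# Load the counterfactual BabyLM variant used for training.
ds = load_dataset("kanishka/counterfactual_babylm_aann_low_variability_noun")
print(ds)              # available splits and sizes
print(ds["train"][0])  # first example, assuming a "train" split exists
```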
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
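
As a hedged sketch of how these settings map onto `transformers.TrainingArguments` (the output directory is hypothetical, and the model/data wiring is omitted):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smolm-autoreg-bpe-counterfactual_babylm_aann_low_variability_noun_1024-1e-3",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=1024,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=32000,
    num_train_epochs=20.0,
    fp16=True,  # "Native AMP" mixed precision; fp16 (rather than bf16) is assumed
)
```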
### Training results
| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.6017        | 1.0   | 18595  | 3.7832          | 0.3588   |
| 3.3832        | 2.0   | 37190  | 3.5725          | 0.3809   |
| 3.2557        | 3.0   | 55785  | 3.4670          | 0.3932   |
| 3.1779        | 4.0   | 74380  | 3.4315          | 0.3979   |
| 3.1207        | 5.0   | 92975  | 3.3998          | 0.4016   |
| 3.0759        | 6.0   | 111570 | 3.3830          | 0.4037   |
| 3.0401        | 7.0   | 130165 | 3.3819          | 0.4054   |
| 3.0134        | 8.0   | 148760 | 3.3636          | 0.4073   |
| 2.9862        | 9.0   | 167355 | 3.3830          | 0.4070   |
| 2.9548        | 10.0  | 185950 | 3.3661          | 0.4078   |
| 2.9335        | 11.0  | 204545 | 3.3690          | 0.4085   |
| 2.9121        | 12.0  | 223140 | 3.3669          | 0.4088   |
| 2.8942        | 13.0  | 241735 | 3.3727          | 0.4092   |
| 2.8708        | 14.0  | 260330 | 3.3823          | 0.4091   |
| 2.8487        | 15.0  | 278925 | 3.3783          | 0.4094   |
| 2.8298        | 16.0  | 297520 | 3.3950          | 0.4091   |
| 2.8116        | 17.0  | 316115 | 3.3998          | 0.4095   |
| 2.7953        | 18.0  | 334710 | 3.4066          | 0.4092   |
| 2.775         | 19.0  | 353305 | 3.4064          | 0.4094   |
| 2.759         | 20.0  | 371900 | 3.4134          | 0.4093   |
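
Validation loss is lowest at epoch 8 (3.3636) and drifts upward over the remaining epochs while accuracy plateaus around 0.409, suggesting mild overfitting late in training.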
### Framework versions
- Transformers 4.38.0
- Pytorch 2.3.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2