
smolm-autoreg-bpe-counterfactual-babylm-only_measure_nps_as_singular_removal-1e-3

This model was trained from scratch on the kanishka/counterfactual-babylm-only_measure_nps_as_singular_removal dataset. It achieves the following results on the evaluation set:

  • Loss: 3.3908
  • Accuracy: 0.4133
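
A minimal usage sketch (not part of the original card), assuming the checkpoint loads with the standard AutoTokenizer / AutoModelForCausalLM API:

```python
# Hedged example: load the checkpoint from the Hub and generate a continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kanishka/smolm-autoreg-bpe-counterfactual-babylm-only_measure_nps_as_singular_removal-1e-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The children", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```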

Model description

An autoregressive language model with a BPE tokenizer, trained from scratch on the counterfactual BabyLM corpus named above. The released checkpoint has 97.8M parameters, stored as F32 safetensors.

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 32000
  • num_epochs: 20.0
  • mixed_precision_training: Native AMP
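
A hedged sketch of how these settings map onto transformers TrainingArguments (argument names per Transformers 4.37; the author's actual training script is not shown in the card):

```python
from transformers import TrainingArguments

# Assumption: single-GPU training, so per-device batch sizes equal the listed ones.
training_args = TrainingArguments(
    output_dir="smolm-autoreg-bpe-counterfactual-babylm-only_measure_nps_as_singular_removal-1e-3",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,               # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=32000,
    num_train_epochs=20.0,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # assumption: matches the per-epoch results table below
)
```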

Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.5989        | 1.0   | 18600  | 3.7674          | 0.3583   |
| 3.3772        | 2.0   | 37200  | 3.5890          | 0.3797   |
| 3.2519        | 3.0   | 55800  | 3.4638          | 0.3934   |
| 3.1739        | 4.0   | 74400  | 3.4033          | 0.3988   |
| 3.1143        | 5.0   | 93000  | 3.3692          | 0.4037   |
| 3.0706        | 6.0   | 111600 | 3.3750          | 0.4049   |
| 3.0371        | 7.0   | 130200 | 3.3584          | 0.4069   |
| 3.0052        | 8.0   | 148800 | 3.3419          | 0.4092   |
| 2.9778        | 9.0   | 167400 | 3.3557          | 0.4092   |
| 2.9507        | 10.0  | 186000 | 3.3506          | 0.4108   |
| 2.9315        | 11.0  | 204600 | 3.3575          | 0.4108   |
| 2.9052        | 12.0  | 223200 | 3.3518          | 0.4115   |
| 2.8856        | 13.0  | 241800 | 3.3580          | 0.4114   |
| 2.8675        | 14.0  | 260400 | 3.3460          | 0.4129   |
| 2.8470        | 15.0  | 279000 | 3.3571          | 0.4130   |
| 2.8246        | 16.0  | 297600 | 3.3696          | 0.4134   |
| 2.8069        | 17.0  | 316200 | 3.3648          | 0.4141   |
| 2.7821        | 18.0  | 334800 | 3.3727          | 0.4136   |
| 2.7721        | 19.0  | 353400 | 3.3847          | 0.4136   |
| 2.7454        | 20.0  | 372000 | 3.3908          | 0.4133   |

Framework versions

  • Transformers 4.37.2
  • PyTorch 2.1.0+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.1

Dataset used to train kanishka/smolm-autoreg-bpe-counterfactual-babylm-only_measure_nps_as_singular_removal-1e-3

  • kanishka/counterfactual-babylm-only_measure_nps_as_singular_removal

Evaluation results

  • Accuracy on kanishka/counterfactual-babylm-only_measure_nps_as_singular_removal: 0.413 (self-reported)
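
A hedged sketch (not the author's evaluation code) of how the reported loss and next-token accuracy could be recomputed; the split name "validation" and the text column "text" are assumptions about the dataset layout:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kanishka/smolm-autoreg-bpe-counterfactual-babylm-only_measure_nps_as_singular_removal-1e-3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

ds = load_dataset(
    "kanishka/counterfactual-babylm-only_measure_nps_as_singular_removal",
    split="validation",  # assumption: the eval split is named "validation"
)

losses, correct, total = [], 0, 0
with torch.no_grad():
    for record in ds.select(range(100)):  # small sample, for illustration only
        enc = tokenizer(record["text"], return_tensors="pt", truncation=True, max_length=512)
        out = model(**enc, labels=enc["input_ids"])  # causal-LM loss; labels shifted internally
        losses.append(out.loss.item())
        # Next-token accuracy: compare each position's argmax to the following token.
        preds = out.logits[:, :-1].argmax(-1)
        labels = enc["input_ids"][:, 1:]
        correct += (preds == labels).sum().item()
        total += labels.numel()

print(f"mean loss ≈ {sum(losses) / len(losses):.4f}, accuracy ≈ {correct / total:.4f}")
```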