---
tags:
  - generated_from_trainer
datasets:
  - kanishka/counterfactual-babylm-only_other_det_removal
metrics:
  - accuracy
model-index:
  - name: smolm-autoreg-bpe-counterfactual-babylm-only_other_det_removal-1e-3
    results:
      - task:
          name: Causal Language Modeling
          type: text-generation
        dataset:
          name: kanishka/counterfactual-babylm-only_other_det_removal
          type: kanishka/counterfactual-babylm-only_other_det_removal
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.4116416836738053
---

# smolm-autoreg-bpe-counterfactual-babylm-only_other_det_removal-1e-3

This model was trained from scratch on the [kanishka/counterfactual-babylm-only_other_det_removal](https://huggingface.co/datasets/kanishka/counterfactual-babylm-only_other_det_removal) dataset. It achieves the following results on the evaluation set:

- Loss: 3.4193
- Accuracy: 0.4116
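
A minimal usage sketch with the standard `transformers` causal-LM API is shown below. The repository id is an assumption pieced together from the model name and the author's namespace; adjust it to wherever the checkpoint is actually hosted.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id (author namespace + model name); adjust if the checkpoint lives elsewhere.
repo_id = "kanishka/smolm-autoreg-bpe-counterfactual-babylm-only_other_det_removal-1e-3"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Greedy-decode a short continuation as a smoke test.
inputs = tokenizer("The child saw a", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```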

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
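
The settings above map onto `transformers.TrainingArguments` roughly as in the sketch below. This is not the author's training script: the output directory is a placeholder, dataset preparation and the `Trainer` call are omitted, and the logged `train_batch_size` is assumed to be the per-device batch size.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smolm-output",       # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=32,  # assumes train_batch_size above is per device
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=32000,
    num_train_epochs=20.0,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```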

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.6043        | 1.0   | 18597  | 3.7893          | 0.3595   |
| 3.3863        | 2.0   | 37194  | 3.5796          | 0.3811   |
| 3.2568        | 3.0   | 55791  | 3.4811          | 0.3933   |
| 3.1802        | 4.0   | 74388  | 3.4316          | 0.3992   |
| 3.1237        | 5.0   | 92985  | 3.3913          | 0.4033   |
| 3.0797        | 6.0   | 111582 | 3.4136          | 0.4042   |
| 3.0447        | 7.0   | 130179 | 3.3948          | 0.4058   |
| 3.0084        | 8.0   | 148776 | 3.3772          | 0.4079   |
| 2.985         | 9.0   | 167373 | 3.3589          | 0.4101   |
| 2.9555        | 10.0  | 185970 | 3.3777          | 0.4096   |
| 2.9324        | 11.0  | 204567 | 3.3606          | 0.4110   |
| 2.9092        | 12.0  | 223164 | 3.3722          | 0.4112   |
| 2.89          | 13.0  | 241761 | 3.3737          | 0.4114   |
| 2.8651        | 14.0  | 260358 | 3.3934          | 0.4110   |
| 2.8499        | 15.0  | 278955 | 3.3911          | 0.4116   |
| 2.8292        | 16.0  | 297552 | 3.3942          | 0.4114   |
| 2.8105        | 17.0  | 316149 | 3.4117          | 0.4113   |
| 2.7877        | 18.0  | 334746 | 3.4073          | 0.4116   |
| 2.773         | 19.0  | 353343 | 3.4169          | 0.4115   |
| 2.7535        | 20.0  | 371940 | 3.4193          | 0.4116   |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1