---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new-3e-4
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new
type: kanishka/counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new
metrics:
- name: Accuracy
type: accuracy
value: 0.4091656007481136
---
# smolm-autoreg-bpe-counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new-3e-4
This model was trained from scratch on the kanishka/counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4250
- Accuracy: 0.4092
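For intuition, a cross-entropy loss in nats maps to perplexity via exp(loss), so the final evaluation loss above corresponds to a perplexity of roughly 30.7:

```python
import math

eval_loss = 3.4250  # final validation loss reported above
perplexity = math.exp(eval_loss)  # cross-entropy (nats) -> perplexity
print(f"perplexity ~ {perplexity:.1f}")  # ~30.7
```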
## Model description
As the name indicates, this is a small autoregressive (causal) language model with a byte-pair-encoding (BPE) tokenizer, trained from scratch on a counterfactual BabyLM corpus. Architectural details are not documented in this card.
## Intended uses & limitations
More information needed
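No usage guidance is documented. As a minimal sketch, the checkpoint should load with the standard `transformers` causal-LM classes; the Hub repository id below is an assumption inferred from the model name in this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub id, inferred from the model name above.
repo_id = "kanishka/smolm-autoreg-bpe-counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new-3e-4"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("The children saw a", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```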
## Training and evaluation data
The model was trained and evaluated on the kanishka/counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new dataset (listed in the metadata above); beyond what the name conveys (a counterfactual BabyLM variant manipulating indefinite articles with plural nouns), its construction is not documented in this card.
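The corpus can be inspected with the `datasets` library (version 2.16.1 per the framework list below); a sketch, with split names and columns being whatever the Hub repo defines:

```python
from datasets import load_dataset

# Dataset id taken from the metadata above; available splits may differ.
ds = load_dataset("kanishka/counterfactual_babylm_aann_indef_articles_with_pl_nouns_removal_new")
print(ds)  # inspect the available splits and columns
```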
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
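For reference, a hedged `TrainingArguments` sketch mirroring the values above. The output path is a placeholder, whether the batch sizes are per-device or global is not stated, and the Adam settings shown are the library defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./output",            # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=32,   # card says "train_batch_size: 32"
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,                   # library defaults, as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=32000,
    num_train_epochs=20.0,
    fp16=True,                        # "Native AMP" mixed precision
)
```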
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.7386 | 1.0 | 18600 | 3.9390 | 0.3446 |
| 3.4323 | 2.0 | 37200 | 3.6439 | 0.3747 |
| 3.2906 | 3.0 | 55800 | 3.5132 | 0.3878 |
| 3.2008 | 4.0 | 74400 | 3.4662 | 0.3952 |
| 3.1424 | 5.0 | 93000 | 3.4252 | 0.3988 |
| 3.0983 | 6.0 | 111600 | 3.4146 | 0.4023 |
| 3.061 | 7.0 | 130200 | 3.3961 | 0.4039 |
| 3.0241 | 8.0 | 148800 | 3.3675 | 0.4061 |
| 2.9955 | 9.0 | 167400 | 3.3690 | 0.4071 |
| 2.971 | 10.0 | 186000 | 3.3668 | 0.4077 |
| 2.9425 | 11.0 | 204600 | 3.3717 | 0.4083 |
| 2.9175 | 12.0 | 223200 | 3.3836 | 0.4085 |
| 2.8993 | 13.0 | 241800 | 3.3685 | 0.4096 |
| 2.8802 | 14.0 | 260400 | 3.3869 | 0.4094 |
| 2.8591 | 15.0 | 279000 | 3.3903 | 0.4093 |
| 2.8397 | 16.0 | 297600 | 3.3899 | 0.4099 |
| 2.8158 | 17.0 | 316200 | 3.3992 | 0.4095 |
| 2.7994 | 18.0 | 334800 | 3.4129 | 0.4090 |
| 2.7773 | 19.0 | 353400 | 3.4211 | 0.4092 |
| 2.7599 | 20.0 | 372000 | 3.4250 | 0.4092 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.3.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2