|
---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual-babylm-pipps-random_removal
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual-babylm-pipps-random_removal-1e-3
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: kanishka/counterfactual-babylm-pipps-random_removal
      type: kanishka/counterfactual-babylm-pipps-random_removal
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4119714215135951
---
|
|
|
|
|
|
# smolm-autoreg-bpe-counterfactual-babylm-pipps-random_removal-1e-3 |
|
|
|
This model was trained from scratch on the kanishka/counterfactual-babylm-pipps-random_removal dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3829
- Accuracy: 0.4120
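
Assuming the loss is the usual mean cross-entropy in nats per token (as reported by the Hugging Face `Trainer`), it corresponds to a perplexity of roughly 29.5:

```python
import math

# Evaluation loss reported above (mean cross-entropy, nats/token)
eval_loss = 3.3829

# Perplexity is the exponential of the cross-entropy
perplexity = math.exp(eval_loss)
print(round(perplexity, 1))  # ~29.5
```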
|
|
|
## Model description

A small autoregressive (decoder-only) language model with a BPE tokenizer, trained from scratch on a counterfactually manipulated BabyLM corpus. The name encodes the setup: "smolm-autoreg-bpe" describes the architecture and tokenizer, "counterfactual-babylm-pipps-random_removal" the training corpus, and "1e-3" the peak learning rate. See the dataset card for details of the corpus manipulation.
|
|
|
## Intended uses & limitations

No intended uses have been documented by the authors. Models trained on counterfactual BabyLM corpora are typically research artifacts for probing what language models learn under controlled manipulations of the training data; this model is small and trained on a modest corpus, so it should not be expected to behave like a general-purpose language model.
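
As a minimal usage sketch, the model can presumably be loaded with the standard `transformers` API; the repository id below is an assumption inferred from the dataset's namespace and the model name, so adjust it if the checkpoint is hosted elsewhere:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id (dataset namespace + model name)
model_id = "kanishka/smolm-autoreg-bpe-counterfactual-babylm-pipps-random_removal-1e-3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The children played in the", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```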
|
|
|
## Training and evaluation data

The model was trained and evaluated on kanishka/counterfactual-babylm-pipps-random_removal, a counterfactual variant of the BabyLM corpus; see the dataset card for how the manipulation was constructed. No further details were documented here.
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
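
The exact training invocation is not recorded in this card; a minimal sketch of how these settings might map onto `transformers.TrainingArguments` follows (the output directory and any options not listed above are placeholders):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smolm-autoreg-bpe-counterfactual-babylm-pipps-random_removal-1e-3",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,             # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=32000,
    num_train_epochs=20.0,
    fp16=True,                  # "Native AMP" mixed-precision training
)
```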
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.6058        | 1.0   | 18592  | 3.8079          | 0.3582   |
| 3.3918        | 2.0   | 37184  | 3.5864          | 0.3803   |
| 3.264         | 3.0   | 55776  | 3.4837          | 0.3930   |
| 3.1794        | 4.0   | 74368  | 3.4301          | 0.3984   |
| 3.1239        | 5.0   | 92960  | 3.3843          | 0.4023   |
| 3.0814        | 6.0   | 111552 | 3.3626          | 0.4045   |
| 3.0416        | 7.0   | 130144 | 3.3471          | 0.4076   |
| 3.0128        | 8.0   | 148736 | 3.3522          | 0.4079   |
| 2.9879        | 9.0   | 167328 | 3.3497          | 0.4087   |
| 2.9616        | 10.0  | 185920 | 3.3193          | 0.4123   |
| 2.941         | 11.0  | 204512 | 3.3381          | 0.4113   |
| 2.9156        | 12.0  | 223104 | 3.3479          | 0.4114   |
| 2.8946        | 13.0  | 241696 | 3.3280          | 0.4130   |
| 2.8744        | 14.0  | 260288 | 3.3445          | 0.4123   |
| 2.8532        | 15.0  | 278880 | 3.3571          | 0.4119   |
| 2.831         | 16.0  | 297472 | 3.3629          | 0.4122   |
| 2.8168        | 17.0  | 316064 | 3.3629          | 0.4121   |
| 2.7943        | 18.0  | 334656 | 3.3743          | 0.4119   |
| 2.7777        | 19.0  | 353248 | 3.3781          | 0.4121   |
| 2.7631        | 20.0  | 371840 | 3.3829          | 0.4120   |
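
Note that validation loss reaches its minimum (3.3193) at epoch 10 and drifts upward thereafter while training loss continues to fall, a pattern consistent with mild overfitting in the later epochs; the results reported at the top of this card correspond to the epoch-20 checkpoint.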
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
|