---
base_model: models/smolm-mlm/config.json
tags:
- generated_from_trainer
datasets:
- AO-CHILDES
metrics:
- accuracy
widget:
- text: Do you like your <mask> ?
- text: Look here . What is that ?
- text: Where is <mask> ?
model-index:
- name: smolm-mlm-bpe-unmask-seed_111
  results: []
pipeline_tag: fill-mask
---

# smolm-mlm-bpe-unmask-seed_111

This model is a fine-tuned version of [models/smolm-mlm/config.json](https://huggingface.co/models/smolm-mlm/config.json) on 5M words of American-English child-directed input.
It achieves the following results on the evaluation set:
- Loss: 2.6956
- Accuracy: 0.4492

## Model description

A masked language model (MLM) with a BPE tokenizer; the `seed_111` suffix reflects the training seed listed below. Further architectural details are not documented in this card.

## Intended uses & limitations

The model targets the fill-mask task (see the widget examples above and the usage sketch at the end of this card); intended uses and limitations beyond that are not documented here.

## Training and evaluation data

The model was trained and evaluated on AO-CHILDES, comprising 5M words of American-English child-directed input.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch at the end of this card):
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 512
- seed: 111
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 10.0

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.5129        | 1.0   | 11938  | 3.4627          | 0.3523   |
| 3.319         | 2.0   | 23876  | 3.3322          | 0.3641   |
| 3.1577        | 3.0   | 35814  | 3.1841          | 0.3810   |
| 3.0357        | 4.0   | 47752  | 3.0588          | 0.3982   |
| 2.9606        | 5.0   | 59690  | 2.9535          | 0.4109   |
| 2.87          | 6.0   | 71628  | 2.8745          | 0.4221   |
| 2.7817        | 7.0   | 83566  | 2.8351          | 0.4284   |
| 2.7388        | 8.0   | 95504  | 2.7536          | 0.4417   |
| 2.6618        | 9.0   | 107442 | 2.7308          | 0.4424   |
| 2.6258        | 10.0  | 119380 | 2.6880          | 0.4522   |

### Framework versions

- Transformers 4.32.1
- PyTorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
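
## How to use

A minimal fill-mask sketch using the `transformers` pipeline. The repository id below is an assumption based on this card's model name; substitute the full `org/name` path of the hosted checkpoint. The example also assumes the tokenizer uses `<mask>` as its mask token (RoBERTa-style BPE); check `tokenizer.mask_token` if unsure.

```python
from transformers import pipeline

# NOTE: the repo id is assumed from this card's model name;
# replace it with the actual "org/name" path on the Hub.
unmasker = pipeline("fill-mask", model="smolm-mlm-bpe-unmask-seed_111")

# Query with one of the widget examples above; the input keeps the
# tokenized spacing style used in this card ("your <mask> ?").
for prediction in unmasker("Do you like your <mask> ?"):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```

Each prediction is a dict with the filled-in `token_str`, its probability `score`, and the completed `sequence`.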
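
## Reproducing the training setup

The hyperparameters listed above map directly onto `transformers.TrainingArguments`. This is a sketch only: it assumes a single device with no gradient accumulation (so `train_batch_size` equals `per_device_train_batch_size`), the output directory is a placeholder, and any arguments not listed in this card keep their defaults.

```python
from transformers import TrainingArguments

# Sketch: mirrors only the hyperparameters stated in this card.
training_args = TrainingArguments(
    output_dir="smolm-mlm-bpe-unmask-seed_111",  # placeholder path
    learning_rate=5e-4,
    per_device_train_batch_size=64,   # assumes one device, no grad accumulation
    per_device_eval_batch_size=512,
    seed=111,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=24000,
    num_train_epochs=10.0,
)
```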