# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set:
- Loss: 0.1762
- Accuracy: 0.9432
## Model description
More information needed
## Intended uses & limitations
More information needed
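Pending a fuller description, here is a minimal inference sketch, assuming the standard `transformers` text-classification pipeline (the example query below is hypothetical):

```python
# Minimal inference sketch using the standard transformers pipeline API.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="igory1999/distilbert-base-uncased-distilled-clinc",
)

# Hypothetical CLINC150-style banking query.
query = "transfer $100 from my checking to my savings account"
print(classifier(query))
# Output shape: [{'label': '<intent>', 'score': <float>}]
```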
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
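A minimal sketch of how these values map onto `transformers.TrainingArguments`; the output directory name is an assumption, `evaluation_strategy="epoch"` is inferred from the per-epoch results table below, and the listed Adam betas/epsilon and linear schedule are the `Trainer` defaults:

```python
# Sketch only: maps the hyperparameters listed above onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-distilled-clinc",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",  # Trainer default, shown for completeness
    num_train_epochs=10,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch table
)
```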
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5323        | 1.0   | 318  | 0.9964          | 0.7194   |
| 0.77          | 2.0   | 636  | 0.4772          | 0.8594   |
| 0.4012        | 3.0   | 954  | 0.2790          | 0.9177   |
| 0.2565        | 4.0   | 1272 | 0.2186          | 0.9352   |
| 0.204         | 5.0   | 1590 | 0.1970          | 0.9371   |
| 0.1817        | 6.0   | 1908 | 0.1879          | 0.9426   |
| 0.1704        | 7.0   | 2226 | 0.1822          | 0.9445   |
| 0.1633        | 8.0   | 2544 | 0.1788          | 0.9439   |
| 0.1597        | 9.0   | 2862 | 0.1771          | 0.9439   |
| 0.1576        | 10.0  | 3180 | 0.1762          | 0.9432   |
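The final row matches the reported evaluation numbers (loss 0.1762, accuracy 0.9432). A sketch of how one might reproduce them, assuming the `plus` configuration of `clinc_oos` and its validation split (the card does not state which configuration was used):

```python
# Evaluation sketch: recomputes accuracy on the clinc_oos validation split.
import evaluate
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
)

model_id = "igory1999/distilbert-base-uncased-distilled-clinc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

dataset = load_dataset("clinc_oos", "plus")  # config name is an assumption
encoded = dataset["validation"].map(
    lambda batch: tokenizer(batch["text"], truncation=True), batched=True
)
encoded = encoded.rename_column("intent", "labels")

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1),
                            references=labels)

trainer = Trainer(model=model, tokenizer=tokenizer,
                  compute_metrics=compute_metrics)
print(trainer.evaluate(encoded))
```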
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0