# distilbert-base-uncased-english-cefr-lexical-evaluation-dt-v1
This model is a fine-tuned version of distilbert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the metrics):
- Loss: 0.5309
- Accuracy: 0.8716
- F1: 0.8713
- Precision: 0.8714
- Recall: 0.8716
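
A minimal inference sketch, assuming the checkpoint is loaded straight from the Hub with the `transformers` pipeline API. The example sentence is illustrative, and the label mapping (presumably CEFR levels) is not documented in this card:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub.
classifier = pipeline(
    "text-classification",
    model="hafidikhsan/distilbert-base-uncased-english-cefr-lexical-evaluation-dt-v1",
)

# Illustrative input; the model presumably assigns a CEFR level to English
# text, but the exact label set is not documented in this card.
print(classifier("The ubiquitous proliferation of neologisms complicates lexicography."))
```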
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
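
A minimal sketch of these settings as Hugging Face `TrainingArguments`. The output directory and the per-epoch evaluation strategy are assumptions; Adam with the listed betas and epsilon is the `Trainer` default optimizer:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-cefr-lexical-dt-v1",  # assumption: illustrative path
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default.
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumption: metrics are reported once per epoch
)
```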
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.6225        | 1.0   | 3403  | 0.6408          | 0.7578   | 0.7538 | 0.7826    | 0.7578 |
| 0.3645        | 2.0   | 6806  | 0.4180          | 0.8597   | 0.8573 | 0.8554    | 0.8597 |
| 0.2349        | 3.0   | 10209 | 0.4452          | 0.8631   | 0.8621 | 0.8637    | 0.8631 |
| 0.1269        | 4.0   | 13612 | 0.5257          | 0.8694   | 0.8690 | 0.8690    | 0.8694 |
| 0.0605        | 5.0   | 17015 | 0.6865          | 0.8671   | 0.8668 | 0.8669    | 0.8671 |
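
Recall equals accuracy in every row above, which is consistent with weighted-average metrics. A sketch of a `compute_metrics` function that would produce such numbers; the weighted averaging mode is an inference, not documented in this card:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Compute accuracy and weighted precision/recall/F1.

    Weighted averaging is an assumption; it is consistent with recall
    equalling accuracy in every row of the results table above.
    """
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```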
### Framework versions
- Transformers 4.31.0
- PyTorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3