---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: V4-bert-text-classification-model
  results: []
---
# V4-bert-text-classification-model

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1669
- Accuracy: 0.9625
- F1: 0.8267
- Precision: 0.8224
- Recall: 0.8323
## Model description
More information needed
## Intended uses & limitations
More information needed
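
No usage example is included in the card. As a minimal inference sketch, assuming the checkpoint is a standard `BertForSequenceClassification` head saved by the Trainer (the model path and input text below are placeholders):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Placeholder path: point this at the exported checkpoint directory or Hub repo id.
model_path = "V4-bert-text-classification-model"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
model.eval()

# Tokenize one example and run a forward pass without tracking gradients.
inputs = tokenizer("Example text to classify", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its label name via the config's id2label mapping.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```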
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
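
As a rough illustration, these settings correspond to a `TrainingArguments` configuration along the lines of the sketch below; the output directory, evaluation cadence, and logging details are assumptions, since the training script itself is not part of the card.

```python
from transformers import TrainingArguments

# Sketch only: values mirror the hyperparameter list above; everything else is assumed.
training_args = TrainingArguments(
    output_dir="V4-bert-text-classification-model",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=4,
    fp16=True,                      # "Native AMP" mixed-precision training
    evaluation_strategy="steps",    # assumed: the results table logs validation every 50 steps
    eval_steps=50,
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer's optimizer defaults,
# so they do not need to be set explicitly.
```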
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.5437 | 0.11 | 50 | 1.6872 | 0.2896 | 0.0712 | 0.1912 | 0.1327 |
| 0.7301 | 0.22 | 100 | 0.7109 | 0.8078 | 0.4981 | 0.4962 | 0.5091 |
| 0.3015 | 0.33 | 150 | 0.4554 | 0.8988 | 0.6675 | 0.6601 | 0.6756 |
| 0.2653 | 0.44 | 200 | 0.5038 | 0.8682 | 0.6422 | 0.6256 | 0.6615 |
| 0.1756 | 0.55 | 250 | 0.4243 | 0.9106 | 0.6947 | 0.8014 | 0.7020 |
| 0.2054 | 0.66 | 300 | 0.3561 | 0.9087 | 0.6890 | 0.8099 | 0.6808 |
| 0.1555 | 0.76 | 350 | 0.2747 | 0.9166 | 0.7225 | 0.8152 | 0.7111 |
| 0.1215 | 0.87 | 400 | 0.1767 | 0.9625 | 0.8282 | 0.8258 | 0.8313 |
| 0.1221 | 0.98 | 450 | 0.3365 | 0.9243 | 0.7956 | 0.7887 | 0.8103 |
| 0.1382 | 1.09 | 500 | 0.2132 | 0.9554 | 0.8235 | 0.8221 | 0.8263 |
| 0.0829 | 1.2 | 550 | 0.2316 | 0.9524 | 0.8216 | 0.8178 | 0.8279 |
| 0.0978 | 1.31 | 600 | 0.1630 | 0.9680 | 0.8356 | 0.8353 | 0.8363 |
| 0.0876 | 1.42 | 650 | 0.1157 | 0.9718 | 0.8363 | 0.8359 | 0.8370 |
| 0.0818 | 1.53 | 700 | 0.1504 | 0.9669 | 0.8338 | 0.8389 | 0.8289 |
| 0.0642 | 1.64 | 750 | 0.1519 | 0.9713 | 0.8364 | 0.8384 | 0.8347 |
| 0.0984 | 1.75 | 800 | 0.1716 | 0.9666 | 0.8334 | 0.8360 | 0.8310 |
| 0.0724 | 1.86 | 850 | 0.1554 | 0.9694 | 0.8358 | 0.8366 | 0.8353 |
| 0.0546 | 1.97 | 900 | 0.1799 | 0.9642 | 0.8329 | 0.8339 | 0.8328 |
| 0.0559 | 2.07 | 950 | 0.1864 | 0.9642 | 0.8323 | 0.8294 | 0.8363 |
| 0.0396 | 2.18 | 1000 | 0.2251 | 0.9601 | 0.8286 | 0.8262 | 0.8327 |
| 0.0509 | 2.29 | 1050 | 0.1233 | 0.9721 | 0.8341 | 0.8358 | 0.8325 |
| 0.0528 | 2.4 | 1100 | 0.1674 | 0.9669 | 0.8345 | 0.8336 | 0.8360 |
| 0.0269 | 2.51 | 1150 | 0.1662 | 0.9686 | 0.8365 | 0.8350 | 0.8384 |
| 0.009 | 2.62 | 1200 | 0.1835 | 0.9661 | 0.8341 | 0.8310 | 0.8378 |
| 0.0193 | 2.73 | 1250 | 0.1949 | 0.9666 | 0.8339 | 0.8342 | 0.8340 |
| 0.0502 | 2.84 | 1300 | 0.1532 | 0.9694 | 0.8327 | 0.8305 | 0.8351 |
| 0.027 | 2.95 | 1350 | 0.1821 | 0.9680 | 0.8355 | 0.8351 | 0.8365 |
| 0.0271 | 3.06 | 1400 | 0.2110 | 0.9344 | 0.7545 | 0.8226 | 0.7387 |
| 0.0149 | 3.17 | 1450 | 0.2127 | 0.9631 | 0.8336 | 0.8337 | 0.8345 |
| 0.018 | 3.28 | 1500 | 0.1662 | 0.9710 | 0.8366 | 0.8347 | 0.8388 |
| 0.0067 | 3.38 | 1550 | 0.1927 | 0.9669 | 0.8340 | 0.8309 | 0.8378 |
| 0.0102 | 3.49 | 1600 | 0.1735 | 0.9705 | 0.8363 | 0.8333 | 0.8398 |
| 0.0035 | 3.6 | 1650 | 0.1687 | 0.9705 | 0.8356 | 0.8350 | 0.8366 |
| 0.0014 | 3.71 | 1700 | 0.1689 | 0.9713 | 0.8363 | 0.8359 | 0.8370 |
| 0.0147 | 3.82 | 1750 | 0.1648 | 0.9710 | 0.8361 | 0.8355 | 0.8369 |
| 0.0085 | 3.93 | 1800 | 0.1667 | 0.9716 | 0.8364 | 0.8358 | 0.8371 |
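
The metric computation is not included in the card. A plausible `compute_metrics` callback that would report accuracy, F1, precision, and recall as above is sketched below; the use of scikit-learn and macro averaging is an assumption, not something the card documents.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Assumed Trainer metric callback; the macro averaging mode is a guess."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, predictions, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```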
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
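
A quick way to confirm a local environment matches these versions (assuming all four packages are installed from PyPI):

```python
import transformers, torch, datasets, tokenizers

# Print the installed versions to compare against the list above.
for name, module in [
    ("Transformers", transformers),
    ("PyTorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```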