# NER-ExtractTotal
This model is a fine-tuned version of distilbert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set (a brief inference sketch follows the metrics):
- Loss: 0.1108
- Precision: 0.8496
- Recall: 0.9132
- F1: 0.8802
- Accuracy: 0.9735
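The checkpoint can be tried out with the standard `transformers` token-classification pipeline. A minimal sketch, assuming the repository id matches the model name shown here (the actual Hub path, e.g. `user/NER-ExtractTotal`, may differ):

```python
from transformers import pipeline

# "NER-ExtractTotal" is the model name on this card; substitute the full
# Hub repository id if it includes a namespace (e.g. "user/NER-ExtractTotal").
ner = pipeline(
    "token-classification",
    model="NER-ExtractTotal",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

# Example sentence is illustrative only; the card does not document the label set.
print(ner("Invoice total: 1,250.00 EUR, due 15 March 2024."))
```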
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the setup sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
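A minimal sketch of how these hyperparameters map onto `TrainingArguments` and `Trainer`. The label count, output directory, and the `train_ds`/`eval_ds` datasets are placeholders, since the card does not document the data or tag set:

```python
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          TrainingArguments, Trainer,
                          DataCollatorForTokenClassification)

num_labels = 3  # placeholder: set to the size of the actual NER tag set

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=num_labels)

# Values below mirror the hyperparameters listed above; Adam with
# betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
args = TrainingArguments(
    output_dir="ner-extracttotal",      # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=20,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,             # placeholder: tokenized, label-aligned train split
    eval_dataset=eval_ds,               # placeholder: tokenized, label-aligned eval split
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```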
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 18   | 0.4964          | 0.0351    | 0.0329 | 0.0340 | 0.8212   |
| No log        | 2.0   | 36   | 0.3155          | 0.1957    | 0.2425 | 0.2166 | 0.8686   |
| No log        | 3.0   | 54   | 0.2135          | 0.5064    | 0.5898 | 0.5450 | 0.9184   |
| No log        | 4.0   | 72   | 0.1629          | 0.6288    | 0.7964 | 0.7028 | 0.9364   |
| No log        | 5.0   | 90   | 0.1106          | 0.7813    | 0.8024 | 0.7917 | 0.9647   |
| No log        | 6.0   | 108  | 0.1117          | 0.8038    | 0.8832 | 0.8417 | 0.9671   |
| No log        | 7.0   | 126  | 0.1023          | 0.8270    | 0.9162 | 0.8693 | 0.9707   |
| No log        | 8.0   | 144  | 0.1080          | 0.8370    | 0.9072 | 0.8707 | 0.9703   |
| No log        | 9.0   | 162  | 0.0961          | 0.8455    | 0.9012 | 0.8725 | 0.9728   |
| No log        | 10.0  | 180  | 0.0902          | 0.8504    | 0.9192 | 0.8835 | 0.9753   |
| No log        | 11.0  | 198  | 0.1092          | 0.8407    | 0.9162 | 0.8768 | 0.9721   |
| No log        | 12.0  | 216  | 0.0871          | 0.8571    | 0.9162 | 0.8857 | 0.9760   |
| No log        | 13.0  | 234  | 0.1081          | 0.8515    | 0.9102 | 0.8799 | 0.9739   |
| No log        | 14.0  | 252  | 0.1142          | 0.8547    | 0.9162 | 0.8844 | 0.9742   |
| No log        | 15.0  | 270  | 0.1079          | 0.8520    | 0.9132 | 0.8815 | 0.9739   |
| No log        | 16.0  | 288  | 0.1065          | 0.8511    | 0.9072 | 0.8783 | 0.9739   |
| No log        | 17.0  | 306  | 0.1097          | 0.8515    | 0.9102 | 0.8799 | 0.9742   |
| No log        | 18.0  | 324  | 0.1098          | 0.8492    | 0.9102 | 0.8786 | 0.9739   |
| No log        | 19.0  | 342  | 0.1109          | 0.8496    | 0.9132 | 0.8802 | 0.9735   |
| No log        | 20.0  | 360  | 0.1108          | 0.8496    | 0.9132 | 0.8802 | 0.9735   |
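The entity-level precision, recall, and F1 in this kind of trainer card are typically computed with the `seqeval` metric; the card does not confirm the exact setup, so the sketch below is an assumption, and the `B-TOTAL`/`I-TOTAL` tags are illustrative only:

```python
import evaluate

seqeval = evaluate.load("seqeval")

# Illustrative IOB2 tag sequences -- not taken from this model's data.
predictions = [["O", "B-TOTAL", "I-TOTAL", "O"]]
references = [["O", "B-TOTAL", "I-TOTAL", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_precision"], results["overall_recall"],
      results["overall_f1"], results["overall_accuracy"])
```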
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.12.0
- Tokenizers 0.15.1