# task-t1
This model is a fine-tuned version of bert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set (an inference sketch follows the list):
- Loss: 0.4146
- F1: 0.7293
- Chronic Disease F1: 0.7306
- Chronic Disease Num: 2537
- Cancer F1: 0.7151
- Cancer Num: 880
- Allergy F1: 0.6551
- Allergy Num: 219
- Treatment F1: 0.7365
- Treatment Num: 3197
- Other F1: 0.0
- Other Num: 0 (no "Other" entities in the evaluation set, so its F1 is trivially zero)
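The per-entity scores above indicate a token-classification (NER) model over clinical entity types. A minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub; the repo id `your-username/task-t1` is a placeholder:

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual checkpoint location.
ner = pipeline(
    "token-classification",
    model="your-username/task-t1",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

text = "Patient with type 2 diabetes was started on metformin; penicillin allergy noted."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```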
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
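A minimal reproduction sketch under stated assumptions: the dataset, label list, and preprocessing are not described in this card, so the label names below are inferred from the reported per-entity metrics and purely illustrative. The Adam settings listed above match the `TrainingArguments` defaults.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical IOB2 label set inferred from the per-entity metrics above.
labels = [
    "O",
    "B-CHRONIC_DISEASE", "I-CHRONIC_DISEASE",
    "B-CANCER", "I-CANCER",
    "B-ALLERGY", "I-ALLERGY",
    "B-TREATMENT", "I-TREATMENT",
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

args = TrainingArguments(
    output_dir="task-t1",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    eval_strategy="steps",  # the results table below evaluates every 100 steps
    eval_steps=100,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults, so no override needed.
)

# The train/eval datasets are not documented in this card:
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```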
### Training results
Training Loss | Epoch | Step | Validation Loss | F1 | Chronic Disease F1 | Chronic Disease Num | Cancer F1 | Cancer Num | Allergy F1 | Allergy Num | Treatment F1 | Treatment Num | Other F1 | Other Num |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1.0109 | 0.2717 | 100 | 0.6744 | 0.4452 | 0.4017 | 2537 | 0.0448 | 880 | 0.0 | 219 | 0.5504 | 3197 | 0 | 0 |
0.5833 | 0.5435 | 200 | 0.4954 | 0.6268 | 0.6392 | 2537 | 0.5937 | 880 | 0.0 | 219 | 0.6459 | 3197 | 0 | 0 |
0.4668 | 0.8152 | 300 | 0.4519 | 0.6782 | 0.6951 | 2537 | 0.6396 | 880 | 0.0359 | 219 | 0.6962 | 3197 | 0 | 0 |
0.4275 | 1.0870 | 400 | 0.4314 | 0.7046 | 0.7102 | 2537 | 0.6883 | 880 | 0.5127 | 219 | 0.7138 | 3197 | 0 | 0 |
0.3483 | 1.3587 | 500 | 0.4282 | 0.7181 | 0.7212 | 2537 | 0.7078 | 880 | 0.6469 | 219 | 0.7226 | 3197 | 0 | 0 |
0.3334 | 1.6304 | 600 | 0.4126 | 0.7293 | 0.7313 | 2537 | 0.7170 | 880 | 0.6683 | 219 | 0.7349 | 3197 | 0 | 0 |
0.3249 | 1.9022 | 700 | 0.4146 | 0.7293 | 0.7306 | 2537 | 0.7151 | 880 | 0.6551 | 219 | 0.7365 | 3197 | 0 | 0 |
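The evaluation code is not given in this card, but entity-level F1 with per-type support (the "Num" columns) is what `seqeval` reports for token-classification tasks; a toy sketch, assuming IOB2 tags:

```python
from seqeval.metrics import classification_report, f1_score

# Toy gold and predicted tag sequences in IOB2 format.
y_true = [["B-TREATMENT", "I-TREATMENT", "O", "B-ALLERGY"]]
y_pred = [["B-TREATMENT", "I-TREATMENT", "O", "O"]]

print(f1_score(y_true, y_pred))               # micro-averaged entity-level F1
print(classification_report(y_true, y_pred))  # per-type F1 and support ("Num")
```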
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1