---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner-cadec-active
  results: []
---
# distilbert-base-uncased-finetuned-ner-cadec-active

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.4418
- Precision: 0.4111
- Recall: 0.3715
- F1: 0.3903
- Accuracy: 0.8709
- Adr Precision: 0.3331
- Adr Recall: 0.3514
- Adr F1: 0.3420
- Disease Precision: 0.0
- Disease Recall: 0.0
- Disease F1: 0.0
- Drug Precision: 0.7366
- Drug Recall: 0.6395
- Drug F1: 0.6846
- Finding Precision: 0.0
- Finding Recall: 0.0
- Finding F1: 0.0
- Symptom Precision: 0.0
- Symptom Recall: 0.0
- Symptom F1: 0.0
- B-adr Precision: 0.6633
- B-adr Recall: 0.3935
- B-adr F1: 0.4939
- B-disease Precision: 0.0
- B-disease Recall: 0.0
- B-disease F1: 0.0
- B-drug Precision: 0.9637
- B-drug Recall: 0.6427
- B-drug F1: 0.7711
- B-finding Precision: 0.0
- B-finding Recall: 0.0
- B-finding F1: 0.0
- B-symptom Precision: 0.0
- B-symptom Recall: 0.0
- B-symptom F1: 0.0
- I-adr Precision: 0.2541
- I-adr Recall: 0.2887
- I-adr F1: 0.2703
- I-disease Precision: 0.0
- I-disease Recall: 0.0
- I-disease F1: 0.0
- I-drug Precision: 0.8042
- I-drug Recall: 0.6895
- I-drug F1: 0.7425
- I-finding Precision: 0.0
- I-finding Recall: 0.0
- I-finding F1: 0.0
- I-symptom Precision: 0.0
- I-symptom Recall: 0.0
- I-symptom F1: 0.0
- Macro Avg F1: 0.2278
- Weighted Avg F1: 0.4320
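The macro-average F1 above is the unweighted mean of the ten per-class (B-/I-) F1 scores; as a sanity check, it can be reproduced directly from the numbers in this list. A minimal sketch in Python (the weighted average of 0.4320 would additionally need per-class support counts, which this card does not list):

```python
# Per-class F1 scores from the evaluation results above (B-/I- tags for
# adr, disease, drug, finding, symptom). Several classes were never
# predicted correctly, so their F1 is 0.0.
per_class_f1 = {
    "B-adr": 0.4939, "B-disease": 0.0, "B-drug": 0.7711,
    "B-finding": 0.0, "B-symptom": 0.0,
    "I-adr": 0.2703, "I-disease": 0.0, "I-drug": 0.7425,
    "I-finding": 0.0, "I-symptom": 0.0,
}

# Macro average: unweighted mean over all classes.
macro_f1 = sum(per_class_f1.values()) / len(per_class_f1)
print(round(macro_f1, 4))  # 0.2278, matching the reported Macro Avg F1
```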
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
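With `lr_scheduler_type: linear`, the learning rate decays linearly from 2e-05 to 0 over the 160 total optimization steps (16 steps per epoch × 10 epochs, matching the Step column in the results table). A minimal sketch of that schedule, assuming zero warmup steps (the warmup count is not recorded in this card):

```python
BASE_LR = 2e-05      # learning_rate
TOTAL_STEPS = 160    # 16 steps/epoch * 10 epochs (see the Step column)
WARMUP_STEPS = 0     # assumption: no warmup phase

def linear_lr(step: int) -> float:
    """Learning rate at a given optimizer step under a linear schedule."""
    if step < WARMUP_STEPS:
        # Ramp up linearly during warmup (unused here with WARMUP_STEPS = 0).
        return BASE_LR * step / max(1, WARMUP_STEPS)
    # Decay linearly from BASE_LR down to 0 at TOTAL_STEPS.
    remaining = max(0, TOTAL_STEPS - step)
    return BASE_LR * remaining / max(1, TOTAL_STEPS - WARMUP_STEPS)

print(linear_lr(0))    # 2e-05 at the start of training
print(linear_lr(80))   # 1e-05 halfway through (end of epoch 5)
print(linear_lr(160))  # 0.0 at the final step
```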
### Training results
Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Adr Precision | Adr Recall | Adr F1 | Disease Precision | Disease Recall | Disease F1 | Drug Precision | Drug Recall | Drug F1 | Finding Precision | Finding Recall | Finding F1 | Symptom Precision | Symptom Recall | Symptom F1 | B-adr Precision | B-adr Recall | B-adr F1 | B-disease Precision | B-disease Recall | B-disease F1 | B-drug Precision | B-drug Recall | B-drug F1 | B-finding Precision | B-finding Recall | B-finding F1 | B-symptom Precision | B-symptom Recall | B-symptom F1 | I-adr Precision | I-adr Recall | I-adr F1 | I-disease Precision | I-disease Recall | I-disease F1 | I-drug Precision | I-drug Recall | I-drug F1 | I-finding Precision | I-finding Recall | I-finding F1 | I-symptom Precision | I-symptom Recall | I-symptom F1 | Macro Avg F1 | Weighted Avg F1 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
No log | 1.0 | 16 | 0.9134 | 0.0 | 0.0 | 0.0 | 0.7726 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
No log | 2.0 | 32 | 0.6436 | 0.1614 | 0.0574 | 0.0847 | 0.8059 | 0.1615 | 0.0830 | 0.1097 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0009 | 0.0018 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0879 | 0.0551 | 0.0678 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0070 | 0.0215 |
No log | 3.0 | 48 | 0.5510 | 0.2477 | 0.1653 | 0.1982 | 0.8339 | 0.2492 | 0.2005 | 0.2222 | 0.0 | 0.0 | 0.0 | 0.2403 | 0.1327 | 0.1710 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6667 | 0.0088 | 0.0174 | 0.0 | 0.0 | 0.0 | 0.9919 | 0.5485 | 0.7064 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0677 | 0.0663 | 0.0670 | 0.0 | 0.0 | 0.0 | 1.0 | 0.1336 | 0.2357 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1026 | 0.1321 |
No log | 4.0 | 64 | 0.5042 | 0.3921 | 0.2704 | 0.3201 | 0.8458 | 0.2796 | 0.2332 | 0.2543 | 0.0 | 0.0 | 0.0 | 0.9634 | 0.5435 | 0.6950 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7672 | 0.0245 | 0.0475 | 0.0 | 0.0 | 0.0 | 0.9873 | 0.5575 | 0.7126 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0692 | 0.0701 | 0.0696 | 0.0 | 0.0 | 0.0 | 0.9838 | 0.5469 | 0.7030 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1533 | 0.1963 |
No log | 5.0 | 80 | 0.4672 | 0.3841 | 0.2960 | 0.3344 | 0.8510 | 0.2863 | 0.2668 | 0.2762 | 0.0 | 0.0 | 0.0 | 0.8818 | 0.5552 | 0.6813 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7578 | 0.0931 | 0.1659 | 0.0 | 0.0 | 0.0 | 0.9802 | 0.5763 | 0.7258 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0978 | 0.1099 | 0.1035 | 0.0 | 0.0 | 0.0 | 0.9165 | 0.5650 | 0.6991 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1694 | 0.2509 |
No log | 6.0 | 96 | 0.4543 | 0.3814 | 0.3116 | 0.3430 | 0.8580 | 0.2925 | 0.2833 | 0.2878 | 0.0 | 0.0 | 0.0 | 0.7839 | 0.5758 | 0.6639 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7212 | 0.1810 | 0.2894 | 0.0 | 0.0 | 0.0 | 0.9721 | 0.5934 | 0.7369 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1459 | 0.1679 | 0.1562 | 0.0 | 0.0 | 0.0 | 0.8661 | 0.6245 | 0.7257 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1908 | 0.3164 |
No log | 7.0 | 112 | 0.4443 | 0.3829 | 0.3356 | 0.3577 | 0.8623 | 0.3027 | 0.3084 | 0.3055 | 0.0 | 0.0 | 0.0 | 0.7103 | 0.6090 | 0.6557 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6896 | 0.2571 | 0.3745 | 0.0 | 0.0 | 0.0 | 0.9639 | 0.6239 | 0.7575 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1725 | 0.2027 | 0.1863 | 0.0 | 0.0 | 0.0 | 0.8104 | 0.6868 | 0.7435 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2062 | 0.3610 |
No log | 8.0 | 128 | 0.4512 | 0.4119 | 0.3448 | 0.3754 | 0.8680 | 0.3295 | 0.3219 | 0.3257 | 0.0 | 0.0 | 0.0 | 0.7559 | 0.6081 | 0.6740 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6898 | 0.3345 | 0.4505 | 0.0 | 0.0 | 0.0 | 0.9652 | 0.6221 | 0.7566 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2328 | 0.2530 | 0.2425 | 0.0 | 0.0 | 0.0 | 0.8540 | 0.6760 | 0.7547 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2204 | 0.4072 |
No log | 9.0 | 144 | 0.4413 | 0.4046 | 0.3637 | 0.3831 | 0.8697 | 0.3250 | 0.3420 | 0.3333 | 0.0 | 0.0 | 0.0 | 0.7416 | 0.6332 | 0.6831 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6717 | 0.3703 | 0.4774 | 0.0 | 0.0 | 0.0 | 0.9648 | 0.6391 | 0.7689 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2378 | 0.2744 | 0.2548 | 0.0 | 0.0 | 0.0 | 0.8135 | 0.6850 | 0.7438 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2245 | 0.4210 |
No log | 10.0 | 160 | 0.4418 | 0.4111 | 0.3715 | 0.3903 | 0.8709 | 0.3331 | 0.3514 | 0.3420 | 0.0 | 0.0 | 0.0 | 0.7366 | 0.6395 | 0.6846 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6633 | 0.3935 | 0.4939 | 0.0 | 0.0 | 0.0 | 0.9637 | 0.6427 | 0.7711 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2541 | 0.2887 | 0.2703 | 0.0 | 0.0 | 0.0 | 0.8042 | 0.6895 | 0.7425 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2278 | 0.4320 |
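The entity-level scores above (Adr, Drug, …) are stricter than the token-level B-/I- scores: a predicted entity only counts as correct if both its span and its type match the gold annotation exactly, which is why Adr F1 (0.3420) sits well below B-adr F1 (0.4939). A minimal sketch of this seqeval-style span matching, using made-up tag sequences (not data from this model):

```python
def extract_spans(tags):
    """Collect (type, start, end) entity spans from a BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" flushes the last span
        if tag == "O" or tag.startswith("B-") or (tag.startswith("I-") and tag[2:] != etype):
            if start is not None:          # close the span that just ended
                spans.append((etype, start, i))
                start, etype = None, None
        if tag.startswith("B-"):           # open a new span
            start, etype = i, tag[2:]
    return spans

# Hypothetical example: the predicted span boundary is off by one token,
# so the entity is wrong even though most entity tokens are individually right.
gold = ["B-adr", "I-adr", "I-adr", "O"]
pred = ["B-adr", "I-adr", "O",     "O"]
print(extract_spans(gold))  # [('adr', 0, 3)]
print(extract_spans(pred))  # [('adr', 0, 2)]
print(set(extract_spans(gold)) & set(extract_spans(pred)))  # set(): no entity-level match
```

Under strict matching this prediction contributes zero entity-level true positives, while the token-level B-adr and I-adr tallies still credit the correctly tagged tokens.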
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0