---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-cadec
  results: []
---

# bert-finetuned-ner-cadec

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the CADEC dataset (CSIRO Adverse Drug Event Corpus).
It achieves the following results on the evaluation set:
- Loss: 0.2334
- Precision: 0.6055
- Recall: 0.6988
- F1: 0.6488
- Accuracy: 0.9250
- Adr Precision: 0.5685
- Adr Recall: 0.6992
- Adr F1: 0.6271
- Disease Precision: 0.25
- Disease Recall: 0.125
- Disease F1: 0.1667
- Drug Precision: 0.8371
- Drug Recall: 0.9069
- Drug F1: 0.8706
- Finding Precision: 0.2439
- Finding Recall: 0.3448
- Finding F1: 0.2857
- Symptom Precision: 0.5
- Symptom Recall: 0.0870
- Symptom F1: 0.1481
- B-adr Precision: 0.7596
- B-adr Recall: 0.8357
- B-adr F1: 0.7958
- B-disease Precision: 0.6
- B-disease Recall: 0.1875
- B-disease F1: 0.2857
- B-drug Precision: 0.9423
- B-drug Recall: 0.9655
- B-drug F1: 0.9538
- B-finding Precision: 0.5789
- B-finding Recall: 0.3793
- B-finding F1: 0.4583
- B-symptom Precision: 0.5
- B-symptom Recall: 0.0870
- B-symptom F1: 0.1481
- I-adr Precision: 0.5699
- I-adr Recall: 0.6782
- I-adr F1: 0.6194
- I-disease Precision: 0.3333
- I-disease Recall: 0.1379
- I-disease F1: 0.1951
- I-drug Precision: 0.8611
- I-drug Recall: 0.9118
- I-drug F1: 0.8857
- I-finding Precision: 0.3125
- I-finding Recall: 0.3704
- I-finding F1: 0.3390
- I-symptom Precision: 0.0
- I-symptom Recall: 0.0
- I-symptom F1: 0.0
- Macro Avg F1: 0.4681
- Weighted Avg F1: 0.7238
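
The per-entity scores above are entity-level metrics of the kind produced by `seqeval` over BIO tag sequences (an entity counts as correct only if both its span and its type match). A minimal sketch of how such scores can be computed, using toy tag sequences rather than this model's actual predictions:

```python
from seqeval.metrics import classification_report, f1_score

# Toy example: each inner list holds one sentence's BIO tags.
y_true = [["B-ADR", "I-ADR", "O", "B-Drug"], ["O", "B-Symptom", "O"]]
y_pred = [["B-ADR", "I-ADR", "O", "B-Drug"], ["O", "O", "O"]]

print(f1_score(y_true, y_pred))               # overall entity-level F1
print(classification_report(y_true, y_pred))  # per-entity precision/recall/F1
```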

## Model description

This is a `bert-base-cased` token-classification model fine-tuned to recognize five entity types in patient-authored medical text: adverse drug reactions (ADR), diseases, drugs, findings, and symptoms. Labels use the BIO scheme (e.g. `B-adr`, `I-adr`), as reflected in the per-tag metrics above.

## Intended uses & limitations

The model is intended for extracting drug, ADR, disease, finding, and symptom mentions from informal, patient-generated text such as medication forum posts. Performance is uneven across entity types: drug mentions are recognized reliably (F1 0.87), while disease (F1 0.17) and symptom (F1 0.15) mentions are frequently missed, so downstream use should not depend on those classes. A usage sketch follows below.
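
A minimal inference sketch, assuming the model is published under the repository id `bert-finetuned-ner-cadec` (substitute the actual hub path):

```python
from transformers import pipeline

# Hypothetical repository id -- replace with this model's actual hub path.
ner = pipeline(
    "token-classification",
    model="bert-finetuned-ner-cadec",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entities
)

for entity in ner("Lipitor gave me severe muscle pain in both legs."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```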

## Training and evaluation data

The model was trained on CADEC, a corpus of patient-reported medication experiences annotated with ADR, Disease, Drug, Finding, and Symptom spans. The exact train/validation split was not recorded by the Trainer; the training log below shows 127 optimization steps per epoch at batch size 8, which implies roughly 1,000 training examples.
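
Fine-tuning BERT for NER requires aligning word-level BIO labels with BERT's subword tokens. The card does not record the preprocessing, but a sketch of the standard alignment step looks like this:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize_and_align(words, word_labels):
    """Tokenize pre-split words; label only the first sub-token of each word.

    Special tokens and continuation sub-tokens get -100 so the loss ignores them.
    """
    enc = tokenizer(words, is_split_into_words=True, truncation=True)
    labels, previous = [], None
    for word_id in enc.word_ids():
        if word_id is None or word_id == previous:
            labels.append(-100)  # special token or continuation word piece
        else:
            labels.append(word_labels[word_id])
        previous = word_id
    enc["labels"] = labels
    return enc
```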

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
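
These settings correspond roughly to the following `TrainingArguments`; this is a reconstruction from the values above, not the original training script (the Adam betas and epsilon listed are the Trainer defaults and need not be set explicitly):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-ner-cadec",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    lr_scheduler_type="linear",   # linear decay, per the log
    seed=42,
    evaluation_strategy="epoch",  # assumption: metrics are reported once per epoch
)
```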

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy | Adr Precision | Adr Recall | Adr F1 | Disease Precision | Disease Recall | Disease F1 | Drug Precision | Drug Recall | Drug F1 | Finding Precision | Finding Recall | Finding F1 | Symptom Precision | Symptom Recall | Symptom F1 | B-adr Precision | B-adr Recall | B-adr F1 | B-disease Precision | B-disease Recall | B-disease F1 | B-drug Precision | B-drug Recall | B-drug F1 | B-finding Precision | B-finding Recall | B-finding F1 | B-symptom Precision | B-symptom Recall | B-symptom F1 | I-adr Precision | I-adr Recall | I-adr F1 | I-disease Precision | I-disease Recall | I-disease F1 | I-drug Precision | I-drug Recall | I-drug F1 | I-finding Precision | I-finding Recall | I-finding F1 | I-symptom Precision | I-symptom Recall | I-symptom F1 | Macro Avg F1 | Weighted Avg F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|:------:|:-----------------:|:--------------:|:----------:|:--------------:|:-----------:|:-------:|:-----------------:|:--------------:|:----------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------:|:---------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:---------------:|:------------:|:--------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------:|:---------:|:-------------------:|:----------------:|:------------:|:-------------------:|:----------------:|:------------:|:------------:|:---------------:|
| No log        | 1.0   | 127  | 0.2633          | 0.5612    | 0.6348 | 0.5958 | 0.9139   | 0.5047        | 0.6436     | 0.5658 | 0.0               | 0.0            | 0.0        | 0.8148         | 0.8627      | 0.8381  | 0.0714            | 0.0345         | 0.0465     | 0.0               | 0.0            | 0.0        | 0.7530          | 0.7778       | 0.7652   | 0.0                 | 0.0              | 0.0          | 0.9154           | 0.9064        | 0.9109    | 1.0                 | 0.0690           | 0.1290       | 0.0                 | 0.0              | 0.0          | 0.4993          | 0.6362       | 0.5595   | 0.0                 | 0.0              | 0.0          | 0.8775           | 0.8775        | 0.8775    | 0.3077              | 0.1481           | 0.2          | 0.0                 | 0.0              | 0.0          | 0.3442       | 0.6698          |
| No log        | 2.0   | 254  | 0.2358          | 0.6       | 0.6863 | 0.6402 | 0.9240   | 0.5595        | 0.6857     | 0.6162 | 0.2222            | 0.125          | 0.16       | 0.8296         | 0.9069      | 0.8665  | 0.2647            | 0.3103         | 0.2857     | 0.0               | 0.0            | 0.0        | 0.7649          | 0.8247       | 0.7937   | 0.8333              | 0.1562           | 0.2632       | 0.9327           | 0.9557        | 0.9440    | 0.7222              | 0.4483           | 0.5532       | 0.0                 | 0.0              | 0.0          | 0.5646          | 0.6709       | 0.6132   | 0.2222              | 0.1379           | 0.1702       | 0.8664           | 0.9216        | 0.8931    | 0.28                | 0.2593           | 0.2692       | 0.0                 | 0.0              | 0.0          | 0.4500       | 0.7185          |
| No log        | 3.0   | 381  | 0.2334          | 0.6055    | 0.6988 | 0.6488 | 0.9250   | 0.5685        | 0.6992     | 0.6271 | 0.25              | 0.125          | 0.1667     | 0.8371         | 0.9069      | 0.8706  | 0.2439            | 0.3448         | 0.2857     | 0.5               | 0.0870         | 0.1481     | 0.7596          | 0.8357       | 0.7958   | 0.6                 | 0.1875           | 0.2857       | 0.9423           | 0.9655        | 0.9538    | 0.5789              | 0.3793           | 0.4583       | 0.5                 | 0.0870           | 0.1481       | 0.5699          | 0.6782       | 0.6194   | 0.3333              | 0.1379           | 0.1951       | 0.8611           | 0.9118        | 0.8857    | 0.3125              | 0.3704           | 0.3390       | 0.0                 | 0.0              | 0.0          | 0.4681       | 0.7238          |


### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0