pierreguillou committed
Commit cfbac95
1 Parent(s): aca0f80

Update README.md

Files changed (1)
  1. README.md +49 -43
README.md CHANGED
@@ -22,28 +22,35 @@ model-index:
  metrics:
  - name: F1
  type: f1
- value: 0.8716487228203504
  - name: Precision
  type: precision
- value: 0.8559286898839138
  - name: Recall
  type: recall
- value: 0.8879569892473118
  - name: Accuracy
  type: accuracy
- value: 0.9755893153732458
  - name: Loss
  type: loss
- value: 0.1133928969502449
  widget:
  - text: "Acrescento que não há de se falar em violação do artigo 114, § 3º, da Constituição Federal, posto que referido dispositivo revela-se impertinente, tratando da possibilidade de ajuizamento de dissídio coletivo pelo Ministério Público do Trabalho nos casos de greve em atividade essencial."
  ---

  ## (BERT base) NER model in the legal domain in Portuguese (LeNER-Br)

- **ner-bert-base-portuguese-cased-lenerbr** is a NER model (token classification) in the legal domain in Portuguese that was finetuned on 16/12/2021 in Google Colab from the model [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the dataset [LeNER_br](https://huggingface.co/datasets/lener_br) by using a NER objective.

- Note: due to the small size of BERTimbau base and finetuning dataset, the model overfitted before to reach the end of training. Here are the overall final metrics on the validation dataset (*note: see the paragraph "Validation metrics by Named Entity" to get detailed metrics*):
  - **f1**: 0.8716487228203504
  - **precision**: 0.8559286898839138
  - **recall**: 0.8879569892473118
@@ -102,53 +109,52 @@ ner(input_text)
  ````
  Num examples = 7828
  Num Epochs = 3
- Instantaneous batch size per device = 8
  Total train batch size (w. parallel, distributed & accumulation) = 8
- Gradient Accumulation steps = 1
- Total optimization steps = 2937
-
- Step Training Loss Validation Loss Precision Recall F1 Accuracy
- 290 0.315100 0.141881 0.764542 0.709462 0.735973 0.960550
- 580 0.089100 0.137700 0.729155 0.810538 0.767695 0.959940
- 870 0.071700 0.122069 0.780277 0.872903 0.823995 0.967955
- 1160 0.047500 0.125950 0.800312 0.881720 0.839046 0.968367
- 1450 0.034900 0.129228 0.763666 0.910323 0.830570 0.969068
- 1740 0.036100 0.113393 0.855929 0.887957 0.871649 0.975589
- 2030 0.037800 0.121275 0.817230 0.889462 0.851818 0.970393
- 2320 0.018700 0.115745 0.836066 0.877419 0.856243 0.973136
- 2610 0.017100 0.118826 0.822488 0.888817 0.854367 0.973471
  ````

  ### Validation metrics by Named Entity
  ````
  Num examples = 1177

- {'JURISPRUDENCIA': {'f1': 0.6641509433962263,
  'number': 657,
- 'precision': 0.6586826347305389,
- 'recall': 0.669710806697108},
- 'LEGISLACAO': {'f1': 0.8489082969432314,
  'number': 571,
- 'precision': 0.8466898954703833,
- 'recall': 0.851138353765324},
- 'LOCAL': {'f1': 0.8066037735849058,
  'number': 194,
- 'precision': 0.7434782608695653,
- 'recall': 0.8814432989690721},
- 'ORGANIZACAO': {'f1': 0.8540462427745664,
  'number': 1340,
- 'precision': 0.8277310924369747,
- 'recall': 0.8820895522388059},
- 'PESSOA': {'f1': 0.9845722300140253,
  'number': 1072,
- 'precision': 0.9868791002811621,
- 'recall': 0.9822761194029851},
- 'TEMPO': {'f1': 0.9527794381350867,
  'number': 816,
- 'precision': 0.9299883313885647,
- 'recall': 0.9767156862745098},
- 'overall_accuracy': 0.9755893153732458,
- 'overall_f1': 0.8716487228203504,
- 'overall_precision': 0.8559286898839138,
- 'overall_recall': 0.8879569892473118}
  ````
 
  metrics:
  - name: F1
  type: f1
+ value: 0.8733423827921062
  - name: Precision
  type: precision
+ value: 0.8487923685812868
  - name: Recall
  type: recall
+ value: 0.8993548387096775
  - name: Accuracy
  type: accuracy
+ value: 0.9759397808828684
  - name: Loss
  type: loss
+ value: 0.10249536484479904
  widget:
  - text: "Acrescento que não há de se falar em violação do artigo 114, § 3º, da Constituição Federal, posto que referido dispositivo revela-se impertinente, tratando da possibilidade de ajuizamento de dissídio coletivo pelo Ministério Público do Trabalho nos casos de greve em atividade essencial."
  ---

  ## (BERT base) NER model in the legal domain in Portuguese (LeNER-Br)

+ **ner-bert-base-portuguese-cased-lenerbr** is a NER model (token classification) in the legal domain in Portuguese that was finetuned on 20/12/2021 in Google Colab from the model [pierreguillou/bert-base-cased-pt-lenerbr](https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr) on the dataset [LeNER_br](https://huggingface.co/datasets/lener_br) using a NER objective.

+ Due to the small size of BERTimbau base and of the finetuning dataset, the model overfitted before reaching the end of training. Here are the overall final metrics on the validation dataset (*note: see the paragraph "Validation metrics by Named Entity" for detailed metrics*):
+ - **f1**: 0.8733423827921062
+ - **precision**: 0.8487923685812868
+ - **recall**: 0.8993548387096775
+ - **accuracy**: 0.9759397808828684
+ - **loss**: 0.10249536484479904
+
+ **Note**: the model [pierreguillou/bert-base-cased-pt-lenerbr](https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr) is a language model created by finetuning [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the dataset [LeNER-Br language modeling](https://huggingface.co/datasets/pierreguillou/lener_br_finetuning_language_model) using a masked language modeling (MLM) objective. Specializing the language model in this way before finetuning it on the NER task slightly improved the model quality. For comparison, here are the results of the NER model finetuned directly from [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) (a non-specialized language model):
  - **f1**: 0.8716487228203504
  - **precision**: 0.8559286898839138
  - **recall**: 0.8879569892473118
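A minimal usage sketch for the model described above, assuming the Hub repository id is `pierreguillou/ner-bert-base-portuguese-cased-lenerbr` (taken from the card title, not stated in this diff) and using the `transformers` token-classification pipeline, as in the card's `ner(input_text)` example:

````python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Assumed Hub id, inferred from the card title.
model_name = "pierreguillou/ner-bert-base-portuguese-cased-lenerbr"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# "simple" aggregation merges sub-word tokens into whole entity spans.
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

input_text = (
    "Acrescento que não há de se falar em violação do artigo 114, § 3º, "
    "da Constituição Federal."
)
print(ner(input_text))  # list of dicts: entity_group, score, word, start, end
````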
 
  ````
  Num examples = 7828
  Num Epochs = 3
+ Instantaneous batch size per device = 4
  Total train batch size (w. parallel, distributed & accumulation) = 8
+ Gradient Accumulation steps = 2
+ Total optimization steps = 2934
+
+ Step Training Loss Validation Loss Precision Recall F1 Accuracy
+ 290 0.314600 0.163042 0.735828 0.697849 0.716336 0.949198
+ 580 0.086900 0.123495 0.779540 0.824301 0.801296 0.965807
+ 870 0.072800 0.106785 0.798481 0.858925 0.827600 0.968626
+ 1160 0.046300 0.109921 0.824576 0.877419 0.850177 0.973243
+ 1450 0.036600 0.102495 0.848792 0.899355 0.873342 0.975940
+ 1740 0.033400 0.121514 0.821681 0.899785 0.858961 0.967071
+ 2030 0.034700 0.115568 0.846849 0.887097 0.866506 0.970607
+ 2320 0.018000 0.108600 0.840258 0.895914 0.867194 0.973730

  ````
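A sketch of a `Trainer` configuration consistent with the hyperparameters logged above (3 epochs, per-device batch size 4, gradient accumulation 2, i.e. an effective batch size of 8, evaluated every 290 steps). The actual Colab training script is not part of this diff, so names such as `output_dir` are illustrative assumptions:

````python
from transformers import TrainingArguments

# Illustrative values mirroring the training log above; not the original script.
training_args = TrainingArguments(
    output_dir="ner-bert-base-portuguese-cased-lenerbr",  # assumed output folder
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,   # 4 x 2 = effective batch size of 8
    evaluation_strategy="steps",
    eval_steps=290,                  # matches the evaluation interval in the log
    save_strategy="steps",
    save_steps=290,
    logging_steps=290,
    load_best_model_at_end=True,
    metric_for_best_model="f1",      # the best checkpoint above is at step 1450
)
````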

  ### Validation metrics by Named Entity
  ````
  Num examples = 1177

+ {'JURISPRUDENCIA': {'f1': 0.7069834413246942,
  'number': 657,
+ 'precision': 0.6707650273224044,
+ 'recall': 0.7473363774733638},
+ 'LEGISLACAO': {'f1': 0.8256227758007118,
  'number': 571,
+ 'precision': 0.8390596745027125,
+ 'recall': 0.8126094570928196},
+ 'LOCAL': {'f1': 0.7688564476885645,
  'number': 194,
+ 'precision': 0.728110599078341,
+ 'recall': 0.8144329896907216},
+ 'ORGANIZACAO': {'f1': 0.8548387096774193,
  'number': 1340,
+ 'precision': 0.8062169312169312,
+ 'recall': 0.9097014925373135},
+ 'PESSOA': {'f1': 0.9826697892271662,
  'number': 1072,
+ 'precision': 0.9868297271872061,
+ 'recall': 0.9785447761194029},
+ 'TEMPO': {'f1': 0.9615846338535414,
  'number': 816,
+ 'precision': 0.9423529411764706,
+ 'recall': 0.9816176470588235},
+ 'overall_accuracy': 0.9759397808828684,
+ 'overall_f1': 0.8733423827921062,
+ 'overall_precision': 0.8487923685812868,
+ 'overall_recall': 0.8993548387096775}
  ````
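The per-entity dictionaries above follow the output format of the `seqeval` metric. A small self-contained sketch of how such a report can be computed; the two toy IOB2 sequences below are illustrative and are not taken from the LeNER-Br validation set:

````python
import evaluate  # pip install evaluate seqeval

seqeval = evaluate.load("seqeval")

# Toy IOB2-tagged sequences using LeNER-Br entity types (illustrative only).
references  = [["B-PESSOA", "I-PESSOA", "O", "B-LOCAL", "O"]]
predictions = [["B-PESSOA", "I-PESSOA", "O", "B-ORGANIZACAO", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results)
# -> per-entity dicts ('PESSOA', 'LOCAL', ...) with precision/recall/f1/number,
#    plus overall_precision, overall_recall, overall_f1, overall_accuracy
````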