pierreguillou committed · Commit 97b4886 · Parent(s): ea04839 · Update README.md

README.md CHANGED
@@ -51,6 +51,8 @@ Due to the small size of BERTimbau base and finetuning dataset, the model overfi
 - **recall**: 0.8993548387096775
 - **accuracy**: 0.9759397808828684
 - **loss**: 0.10249536484479904
+
+Check as well the [large version of this model](https://huggingface.co/pierreguillou/ner-bert-large-cased-pt-lenerbr) with an f1 of 0.908.
 
 **Note**: the model [pierreguillou/bert-base-cased-pt-lenerbr](https://huggingface.co/pierreguillou/bert-base-cased-pt-lenerbr) is a language model created by finetuning [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the dataset [LeNER-Br language modeling](https://huggingface.co/datasets/pierreguillou/lener_br_finetuning_language_model) with a MASK objective. This specialization of the language model before finetuning on the NER task improved the model quality a bit. As evidence, here are the results of the NER model finetuned from [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) (a non-specialized language model):
 - **f1**: 0.8716487228203504
@@ -63,6 +65,8 @@ Due to the small size of BERTimbau base and finetuning dataset, the model overfi
 
 You can test this model in the widget of this page.
 
+Use as well the [NER App](https://huggingface.co/spaces/pierreguillou/ner-bert-pt-lenerbr), which allows comparing the two BERT models (base and large) finetuned on the NER task with the legal LeNER-Br dataset.
+
 ## Using the model for inference in production
 ````
 # install pytorch: check https://pytorch.org/
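As a supplementary sketch for the inference section (not from the original card): a NER model trained on LeNER-Br predicts one BIO tag per token, and in production these tags are typically merged into entity spans. The helper name `group_entities` and the sample sentence below are illustrative assumptions, though `ORGANIZACAO` is a real LeNER-Br entity class.

```python
def group_entities(tagged):
    """Group (token, BIO-tag) pairs into (entity_type, text) spans."""
    spans = []
    current = None  # pending span: (entity_type, [tokens])
    for token, tag in tagged:
        if tag.startswith("B-"):
            # a B- tag always opens a new span, closing any pending one
            if current:
                spans.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            # an I- tag of the same type extends the pending span
            current[1].append(token)
        else:
            # O tag (or an inconsistent I- tag) closes the pending span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(tokens)) for etype, tokens in spans]

print(group_entities([
    ("Supremo", "B-ORGANIZACAO"), ("Tribunal", "I-ORGANIZACAO"),
    ("Federal", "I-ORGANIZACAO"), ("decidiu", "O"),
]))
# → [('ORGANIZACAO', 'Supremo Tribunal Federal')]
```

With the real model, the transformers token-classification pipeline performs a similar merging via its `aggregation_strategy` parameter.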