---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5091743119266054
- name: Recall
type: recall
value: 0.3086190917516219
- name: F1
type: f1
value: 0.38430467397576457
- name: Accuracy
type: accuracy
value: 0.935251256281407
---
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the [wnut_17](https://huggingface.co/datasets/wnut_17) dataset.
It achieves the following results on the evaluation set (the wnut_17 test split, per the model-index metadata above):
- Loss: 0.4256
- Precision: 0.5092
- Recall: 0.3086
- F1: 0.3843
- Accuracy: 0.9353

Per-entity (span-level) scores:

| Entity | Precision | Recall | F1 |
|:-------------:|:---------:|:------:|:------:|
| corporation | 0.2188 | 0.2121 | 0.2154 |
| creative-work | 0.3768 | 0.1831 | 0.2464 |
| group | 0.3594 | 0.1394 | 0.2009 |
| location | 0.5439 | 0.4133 | 0.4697 |
| person | 0.7538 | 0.4569 | 0.5689 |
| product | 0.1446 | 0.0945 | 0.1143 |

Per-tag (token-level, IOB2) scores:

| Tag | Precision | Recall | F1 |
|:---------------:|:---------:|:------:|:------:|
| B-corporation | 0.3333 | 0.2424 | 0.2807 |
| B-creative-work | 0.8158 | 0.2183 | 0.3444 |
| B-group | 0.4906 | 0.1576 | 0.2385 |
| B-location | 0.6606 | 0.4800 | 0.5560 |
| B-person | 0.8423 | 0.4732 | 0.6060 |
| B-product | 0.4792 | 0.1811 | 0.2629 |
| I-corporation | 0.3404 | 0.2909 | 0.3137 |
| I-creative-work | 0.4559 | 0.2403 | 0.3147 |
| I-group | 0.3333 | 0.1150 | 0.1711 |
| I-location | 0.5849 | 0.3263 | 0.4189 |
| I-person | 0.7375 | 0.3512 | 0.4758 |
| I-product | 0.2206 | 0.1240 | 0.1587 |
## Model description
This is a named-entity-recognition model: [bert-base-cased](https://huggingface.co/bert-base-cased) with a token-classification head, fine-tuned on [wnut_17](https://huggingface.co/datasets/wnut_17). It tags tokens with IOB2 labels covering six entity types: corporation, creative-work, group, location, person, and product.
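A minimal usage sketch with the `pipeline` API (the Hub repo id below is an assumption; substitute the actual model path):

```python
# Minimal inference sketch; the repo id is assumed, swap in a local path
# or the actual Hub id if different.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="csNoHug/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge B-/I- word pieces into entity spans
)

print(ner("Apple is opening a new office in Paris."))
# e.g. [{'entity_group': 'corporation', 'word': 'Apple', ...},
#       {'entity_group': 'location', 'word': 'Paris', ...}]
```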
## Intended uses & limitations
The model is intended for NER on noisy, user-generated text: wnut_17 is drawn from sources such as Twitter, Reddit, YouTube, and StackExchange, with an emphasis on unusual, previously unseen entities. Note the low overall recall (0.31), which is typical of this benchmark: the model misses many entities, so it should not be relied on for exhaustive entity extraction without further tuning.
## Training and evaluation data
Training used the wnut_17 train split (3,394 examples, consistent with the 425 optimizer steps per epoch at batch size 8), and evaluation during training was run on the test split (1,287 examples), as recorded in the model-index metadata.
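For reference, a short sketch of inspecting the dataset with the `datasets` library pinned under "Framework versions" below:

```python
# Inspect the wnut_17 splits and the IOB2 tag set this model predicts.
from datasets import load_dataset

raw = load_dataset("wnut_17")
print(raw)  # DatasetDict with train/validation/test splits

label_names = raw["train"].features["ner_tags"].feature.names
print(label_names)
# ['O', 'B-corporation', 'I-corporation', 'B-creative-work', ...]
```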
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
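For reproducibility, a sketch of a matching `Trainer` setup; data preprocessing and `compute_metrics` are omitted, and `evaluation_strategy="epoch"` is an assumption inferred from the per-epoch results below:

```python
# Sketch of a Trainer setup matching the hyperparameters above.
# Adam betas/epsilon and the linear scheduler are library defaults, so they
# need no explicit arguments; num_labels=13 covers "O" plus B-/I- tags for
# the six wnut_17 entity types.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=13
)

args = TrainingArguments(
    output_dir="bert-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    seed=42,
    evaluation_strategy="epoch",  # assumed: metrics below are logged per epoch
)

# trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
#                   train_dataset=..., eval_dataset=..., compute_metrics=...)
# trainer.train()
```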
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Corporation Precision | Corporation Recall | Corporation F1 | Creative-work Precision | Creative-work Recall | Creative-work F1 | Group Precision | Group Recall | Group F1 | Location Precision | Location Recall | Location F1 | Person Precision | Person Recall | Person F1 | Product Precision | Product Recall | Product F1 | B-corporation Precision | B-corporation Recall | B-corporation F1 | B-creative-work Precision | B-creative-work Recall | B-creative-work F1 | B-group Precision | B-group Recall | B-group F1 | B-location Precision | B-location Recall | B-location F1 | B-person Precision | B-person Recall | B-person F1 | B-product Precision | B-product Recall | B-product F1 | I-corporation Precision | I-corporation Recall | I-corporation F1 | I-creative-work Precision | I-creative-work Recall | I-creative-work F1 | I-group Precision | I-group Recall | I-group F1 | I-location Precision | I-location Recall | I-location F1 | I-person Precision | I-person Recall | I-person F1 | I-product Precision | I-product Recall | I-product F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:---------------------:|:------------------:|:--------------:|:-----------------------:|:--------------------:|:----------------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|:----------------:|:-------------:|:---------:|:-----------------:|:--------------:|:----------:|:-----------------------:|:--------------------:|:----------------:|:-------------------------:|:----------------------:|:------------------:|:-----------------:|:--------------:|:----------:|:--------------------:|:-----------------:|:-------------:|:------------------:|:---------------:|:-----------:|:-------------------:|:----------------:|:------------:|:-----------------------:|:--------------------:|:----------------:|:-------------------------:|:----------------------:|:------------------:|:-----------------:|:--------------:|:----------:|:--------------------:|:-----------------:|:-------------:|:------------------:|:---------------:|:-----------:|:-------------------:|:----------------:|:------------:|
| No log | 1.0 | 425 | 0.3964 | 0.4911 | 0.2558 | 0.3364 | 0.9301 | 0.1579 | 0.1364 | 0.1463 | 0.0 | 0.0 | 0.0 | 0.2157 | 0.0667 | 0.1019 | 0.4490 | 0.44 | 0.4444 | 0.7412 | 0.4406 | 0.5526 | 0.0233 | 0.0079 | 0.0118 | 0.2558 | 0.1667 | 0.2018 | 0.0 | 0.0 | 0.0 | 0.52 | 0.0788 | 0.1368 | 0.5703 | 0.4867 | 0.5252 | 0.9019 | 0.4499 | 0.6003 | 0.25 | 0.0079 | 0.0153 | 0.3571 | 0.1818 | 0.2410 | 0.5556 | 0.0388 | 0.0725 | 0.1471 | 0.0442 | 0.0680 | 0.4198 | 0.3579 | 0.3864 | 0.7152 | 0.3512 | 0.4711 | 0.2143 | 0.0744 | 0.1104 |
| 0.2016 | 2.0 | 850 | 0.4337 | 0.5407 | 0.2706 | 0.3607 | 0.9327 | 0.1897 | 0.1667 | 0.1774 | 0.3488 | 0.1056 | 0.1622 | 0.3077 | 0.0727 | 0.1176 | 0.5327 | 0.38 | 0.4436 | 0.7837 | 0.4476 | 0.5697 | 0.1042 | 0.0394 | 0.0571 | 0.2979 | 0.2121 | 0.2478 | 0.9048 | 0.1338 | 0.2331 | 0.48 | 0.0727 | 0.1263 | 0.6915 | 0.4333 | 0.5328 | 0.8855 | 0.4685 | 0.6128 | 0.6875 | 0.0866 | 0.1538 | 0.3243 | 0.2182 | 0.2609 | 0.5 | 0.1628 | 0.2456 | 0.25 | 0.0619 | 0.0993 | 0.5660 | 0.3158 | 0.4054 | 0.76 | 0.3393 | 0.4691 | 0.2093 | 0.0744 | 0.1098 |
| 0.0823 | 3.0 | 1275 | 0.4256 | 0.5092 | 0.3086 | 0.3843 | 0.9353 | 0.2188 | 0.2121 | 0.2154 | 0.3768 | 0.1831 | 0.2464 | 0.3594 | 0.1394 | 0.2009 | 0.5439 | 0.4133 | 0.4697 | 0.7538 | 0.4569 | 0.5689 | 0.1446 | 0.0945 | 0.1143 | 0.3333 | 0.2424 | 0.2807 | 0.8158 | 0.2183 | 0.3444 | 0.4906 | 0.1576 | 0.2385 | 0.6606 | 0.48 | 0.5560 | 0.8423 | 0.4732 | 0.6060 | 0.4792 | 0.1811 | 0.2629 | 0.3404 | 0.2909 | 0.3137 | 0.4559 | 0.2403 | 0.3147 | 0.3333 | 0.1150 | 0.1711 | 0.5849 | 0.3263 | 0.4189 | 0.7375 | 0.3512 | 0.4758 | 0.2206 | 0.1240 | 0.1587 |
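The overall and per-entity numbers are span-level scores of the kind the `seqeval` package computes from IOB2 tag sequences (the B-/I- columns are a token-level, per-tag breakdown). A minimal sketch of the span-level computation:

```python
# Minimal sketch: entity-level P/R/F1 over IOB2 tag sequences with seqeval.
from seqeval.metrics import classification_report, f1_score

y_true = [["B-person", "I-person", "O", "B-location", "O"]]
y_pred = [["B-person", "I-person", "O", "O", "O"]]

print(f1_score(y_true, y_pred))               # 0.67: 1 of 2 gold entities found
print(classification_report(y_true, y_pred))  # per-entity-type breakdown
```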
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1