---
language: en
license: mit
datasets:
- conll2003
model-index:
- name: 51la5/bert-large-NER
  results:
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: conll2003
      type: conll2003
      config: conll2003
      split: train
    metrics:
    - type: accuracy
      value: 0.92134118380748
      name: Accuracy
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDQ0YzhiNTZiOGU0MzRmZWI3MTI5MzczZDJhNDFhOTIzM2ZmZTc1YTU1OWM4ZTNjMzA2MGExMWY3NWEzYjE1MyIsInZlcnNpb24iOjF9.ur67pAdmsKDTi3TqGKAtTTw1Sublxzlaod9yC5dn4S-ZUPITGwY1RQkGhHoBu-v5ROMKik2sTVtuunjxiF4LBQ
    - type: precision
      value: 0.934078568374172
      name: Precision
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzlhYjI5ZjI5YzA3NTE3OWU2MWE3MzJjMjZlMmY1ZWZjOWFlNGIzOWRkNmRiZGE2MjI5ZjgwY2YxMGIzZTk0NCIsInZlcnNpb24iOjF9.Fqm95frBpQykWOSuOLWyLIPdhpY4Cdcquoy4NIdYVNRkk8HvTi_QhlJk04O1714G2933BrEAR6burehhtGDLAw
    - type: recall
      value: 0.9339203317092059
      name: Recall
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzcwOWYzMTlmYTBiN2VmMTQ1NzhlZDM0Y2E2YWVjYjk3MzRmZWVjYmQ2NGZhMjc2YTU5MjgzZDE1NzgzY2E2NyIsInZlcnNpb24iOjF9.KR6BAQ-xu1j9-H5mPppDURgnT2x8bkyUtzOGXUumnDUYgFNn036MYipEoO8RLNFDsyCXMnSysqfTAvKdt2LsCg
    - type: auc
      value: NaN
      name: AUC
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmRkZmEyN2ZlODQyZWE3ODMxMWZiMjlkNjFlNGU4YmIxMGQzYmYwZTQzYmViZDY2NGUwZDc2ZTE1N2U0MGIzMyIsInZlcnNpb24iOjF9.QwirjnylT22eYV0xoI664lijKlSsHa7zreMv7cqyPutlMFWv02g3RQG0hnlaTl_7EkoxhgFuPfLjYCiEOx4iAQ
    - type: f1
      value: 0.9339994433396395
      name: F1
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTc2MDdkYmE4M2Q2Y2NhMjNjYTcwYWRhNjYyZmIxYzJkZjI5OTZiNWY4ZTBlYmFkOTM2ZjViYzE3M2FlM2EyNCIsInZlcnNpb24iOjF9.NwLQXRAdZgx4Z5hEo_nOs2yFS80T3oOwvqRmlA9hbumQnW7rah74fuw2bCaCh_rOW4XsmtvJvjVdMC_tAE0dCA
    - type: loss
      value: 0.41425618529319763
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTUwZjAyMjRiODg2MGVjOGQzOGU5MzZiMjU1Mzg4M2NmZjdjMzAwYTU3NWM3MTYxZmI4YTZiZjhjNjlmYWVmZiIsInZlcnNpb24iOjF9.3y-7ARyHBZd1jIelu5qU2CrntQysLIVyzy50NDHTJP2v5FWO8S9bJIeUVFXwS7v6QArWmbRIlXjTo72_zPupBw
---
# bert-large-NER

## Model description

**bert-large-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: locations (LOC), organizations (ORG), persons (PER), and miscellaneous (MISC).

Specifically, this model is a *bert-large-cased* model that was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset. 

If you'd like to use a smaller BERT model fine-tuned on the same dataset, a [**bert-base-NER**](https://huggingface.co/dslim/bert-base-NER/) version is also available. 


## Intended uses & limitations

#### How to use

You can use this model with the Transformers *pipeline* for NER.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("dslim/bert-large-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-large-NER")

# Build a token-classification pipeline from the fine-tuned model
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Wolfgang and I live in Berlin"

ner_results = nlp(example)
print(ner_results)
```
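
The pipeline returns one dict per tagged (sub)word, with the predicted tag, a confidence score, and character offsets; for the example above it should tag `Wolfgang` as a person (`B-PER`) and `Berlin` as a location (`B-LOC`).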

#### Limitations and bias

This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities, and post-processing of the results may be necessary to handle those cases.
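
One straightforward way to post-process subword predictions is the pipeline's built-in aggregation, which merges subword pieces into whole-word entity spans. A minimal sketch (`aggregation_strategy` is a standard Transformers pipeline argument, not something specific to this model; the example sentence and printed shape are illustrative):

```python
from transformers import pipeline

# "simple" merges consecutive subword tokens belonging to the same entity group
nlp = pipeline("ner", model="dslim/bert-large-NER", aggregation_strategy="simple")

print(nlp("Angela Merkel visited the European Parliament in Brussels"))
# Expect grouped spans such as {'entity_group': 'PER', 'word': 'Angela Merkel', ...}
```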

## Training data

This model was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.

The training dataset distinguishes between the beginning and the continuation of an entity, so that if there are back-to-back entities of the same type, the model can output where the second entity begins (a span-decoding sketch follows the table). As in the dataset, each token is classified as one of the following classes:

Abbreviation|Description
-|-
O|Outside of a named entity
B-MISC|Beginning of a miscellaneous entity right after another miscellaneous entity
I-MISC|Miscellaneous entity
B-PER|Beginning of a person’s name right after another person’s name
I-PER|Person’s name
B-ORG|Beginning of an organization right after another organization
I-ORG|Organization
B-LOC|Beginning of a location right after another location
I-LOC|Location
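
To make the tag scheme concrete, here is a minimal, hypothetical sketch (the helper `bio_to_spans` is illustrative, not part of this model or of Transformers) that groups such a tag sequence into entity spans:

```python
def bio_to_spans(tokens, tags):
    """Group BIO-tagged tokens into (entity_type, text) spans."""
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-") or (tag.startswith("I-") and
                                    (current is None or current[0] != tag[2:])):
            # B- always opens a new span; a stray I- of a new type opens one too
            if current:
                spans.append((current[0], " ".join(current[1])))
            current = (tag[2:], [token])
        elif tag.startswith("I-"):
            current[1].append(token)
        else:  # "O" closes any open span
            if current:
                spans.append((current[0], " ".join(current[1])))
            current = None
    if current:
        spans.append((current[0], " ".join(current[1])))
    return spans

tokens = ["EU", "rejects", "German", "call", "to", "boycott", "British", "lamb"]
tags   = ["B-ORG", "O", "B-MISC", "O", "O", "O", "B-MISC", "O"]
print(bio_to_spans(tokens, tags))
# [('ORG', 'EU'), ('MISC', 'German'), ('MISC', 'British')]
```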


### CoNLL-2003 English Dataset Statistics
This dataset was derived from the Reuters corpus, a collection of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.
#### # of training examples per entity type
Dataset|LOC|MISC|ORG|PER
-|-|-|-|-
Train|7140|3438|6321|6600
Dev|1837|922|1341|1842
Test|1668|702|1661|1617
#### # of articles/sentences/tokens per dataset
Dataset |Articles |Sentences |Tokens
-|-|-|-
Train |946 |14,987 |203,621
Dev |216 |3,466 |51,362
Test |231 |3,684 |46,435

## Training procedure

This model was trained on a single NVIDIA V100 GPU, using the recommended hyperparameters from the [original BERT paper](https://arxiv.org/pdf/1810.04805), which trained and evaluated the model on the CoNLL-2003 NER task.
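
For reference, a fine-tuning run of this kind can be sketched with the Transformers `Trainer` as follows. This is a minimal illustration under assumptions, not the exact script used for this model; the learning rate, batch size, and epoch count are picked from the ranges recommended in the BERT paper.

```python
from datasets import load_dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

dataset = load_dataset("conll2003")
label_list = dataset["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("bert-large-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-large-cased", num_labels=len(label_list))

def tokenize_and_align(examples):
    # Re-tokenize the pre-split words into subwords and align the word-level tags
    tokenized = tokenizer(examples["tokens"], truncation=True,
                          is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        labels, prev = [], None
        for wid in word_ids:
            # Label only the first subword of each word; mask the rest with -100
            labels.append(-100 if wid is None or wid == prev else tags[wid])
            prev = wid
        all_labels.append(labels)
    tokenized["labels"] = all_labels
    return tokenized

tokenized_ds = dataset.map(tokenize_and_align, batched=True)

args = TrainingArguments("bert-large-ner", learning_rate=3e-5,
                         per_device_train_batch_size=16, num_train_epochs=3)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized_ds["train"],
                  eval_dataset=tokenized_ds["validation"],
                  data_collator=DataCollatorForTokenClassification(tokenizer))
trainer.train()
```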

## Eval results
Metric|Dev|Test
-|-|-
f1 |95.7 |91.7
precision |95.3 |91.2
recall |96.1 |92.3

The test metrics are a little lower than the official Google BERT results, which encoded document context and experimented with CRF. More on replicating the original results [here](https://github.com/google-research/bert/issues/223).
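
Entity-level precision, recall, and F1 of this kind are conventionally computed with an implementation of the CoNLL `conlleval` scoring, e.g. the `seqeval` package. A small sketch with toy label sequences (not this model's actual predictions):

```python
from seqeval.metrics import classification_report, f1_score

# Gold and predicted BIO tag sequences, one list per sentence
y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["B-ORG", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "O"]]

print(f1_score(y_true, y_pred))          # entity-level F1 over all spans
print(classification_report(y_true, y_pred))
```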

### BibTeX entry and citation info

```
@article{DBLP:journals/corr/abs-1810-04805,
  author    = {Jacob Devlin and
               Ming{-}Wei Chang and
               Kenton Lee and
               Kristina Toutanova},
  title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
               Understanding},
  journal   = {CoRR},
  volume    = {abs/1810.04805},
  year      = {2018},
  url       = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint    = {1810.04805},
  timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
    title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
    author = "Tjong Kim Sang, Erik F.  and
      De Meulder, Fien",
    booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
    year = "2003",
    url = "https://www.aclweb.org/anthology/W03-0419",
    pages = "142--147",
}
```