Davlan committed
Commit 927df9c
1 Parent(s): c3eca74

add bert ner hrl

README.md ADDED
@@ -0,0 +1,65 @@
+ ---
+ language:
+ - ar
+ - de
+ - en
+ - es
+ - fr
+ - it
+ - lv
+ - nl
+ - pt
+ - zh
+ - multilingual
+ ---
+ # bert-base-multilingual-cased-ner-hrl
+ ## Model description
+ **bert-base-multilingual-cased-ner-hrl** is a **Named Entity Recognition** model for 10 high-resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese and Chinese) based on a fine-tuned mBERT base model. It has been trained to recognize three types of entities: location (LOC), organization (ORG), and person (PER).
+ Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on an aggregation of NER datasets from 10 high-resourced languages.
+ ## Intended uses & limitations
+ #### How to use
+ You can use this model with the Transformers *pipeline* for NER.
+ ```python
+ from transformers import AutoTokenizer, AutoModelForTokenClassification
+ from transformers import pipeline
+ 
+ # Load the fine-tuned tokenizer and token-classification model from the Hub
+ tokenizer = AutoTokenizer.from_pretrained("Davlan/bert-base-multilingual-cased-ner-hrl")
+ model = AutoModelForTokenClassification.from_pretrained("Davlan/bert-base-multilingual-cased-ner-hrl")
+ nlp = pipeline("ner", model=model, tokenizer=tokenizer)
+ example = "Nader Jokhadar had given Syria the lead with a well-struck header in the seventh minute."
+ ner_results = nlp(example)
+ print(ner_results)
+ ```
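+ 
+ By default the pipeline returns one prediction per sub-word token. To merge sub-words into whole entity spans, you can pass an aggregation strategy (a small sketch; `aggregation_strategy` is the parameter name in recent Transformers releases, older versions used `grouped_entities=True`):
+ ```python
+ from transformers import pipeline
+ 
+ # "simple" groups consecutive tokens with the same entity label into one span
+ nlp = pipeline("ner", model="Davlan/bert-base-multilingual-cased-ner-hrl",
+                aggregation_strategy="simple")
+ print(nlp("Nader Jokhadar had given Syria the lead."))
+ # e.g. [{'entity_group': 'PER', 'word': 'Nader Jokhadar', ...},
+ #       {'entity_group': 'LOC', 'word': 'Syria', ...}]
+ ```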
+ #### Limitations and bias
+ This model is limited by its training dataset of entity-annotated news articles from a specific span of time. It may not generalize well to all use cases in other domains.
+ ## Training data
+ The training data for the 10 languages come from the following sources:
+ 
+ Language | Dataset
+ ---|---
+ Arabic | [ANERcorp](https://github.com/EmnamoR/Arabic-named-entity-recognition)
+ German | [CoNLL 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
+ English | [CoNLL 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
+ Spanish | [CoNLL 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
+ French | [Europeana Newspapers](https://github.com/EuropeanaNewspapers/ner-corpora/tree/master/enp_FR.bnf.bio)
+ Italian | [Italian I-CAB](https://ontotext.fbk.eu/icab.html)
+ Latvian | [Latvian NER](https://github.com/LUMII-AILab/FullStack/tree/master/NamedEntities)
+ Dutch | [CoNLL 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
+ Portuguese | [Paramopama + Second HAREM](https://github.com/davidsbatista/NER-datasets/tree/master/Portuguese)
+ Chinese | [MSRA](https://huggingface.co/datasets/msra_ner)
+ 
+ The training dataset distinguishes between the beginning and the continuation of an entity, so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token is classified as one of the following classes:
+ 
+ Abbreviation | Description
+ ---|---
+ O | Outside of a named entity
+ B-PER | Beginning of a person's name right after another person's name
+ I-PER | Person's name
+ B-ORG | Beginning of an organisation right after another organisation
+ I-ORG | Organisation
+ B-LOC | Beginning of a location right after another location
+ I-LOC | Location
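+ 
+ To make the B-/I- scheme concrete, the following is a minimal, illustrative sketch (not part of the model's code) of how such a tag sequence decodes into entity spans; the Transformers pipeline performs this grouping for you:
+ ```python
+ def decode_tags(tokens, tags):
+     """Group (token, tag) pairs into (entity_type, text) spans under the B-/I- scheme."""
+     entities, current = [], None
+     for token, tag in zip(tokens, tags):
+         if tag == "O":
+             if current:
+                 entities.append(current)
+             current = None
+             continue
+         prefix, etype = tag.split("-", 1)
+         # B- starts a new entity even right after another entity of the same
+         # type; I- continues an open entity of the same type, else opens one.
+         if prefix == "B" or current is None or current[0] != etype:
+             if current:
+                 entities.append(current)
+             current = (etype, token)
+         else:
+             current = (etype, current[1] + " " + token)
+     if current:
+         entities.append(current)
+     return entities
+ 
+ tokens = ["Syria", "France", "played", "in", "Paris"]
+ tags   = ["I-LOC", "B-LOC", "O", "O", "I-LOC"]
+ print(decode_tags(tokens, tags))
+ # [('LOC', 'Syria'), ('LOC', 'France'), ('LOC', 'Paris')] - the B-LOC keeps
+ # the back-to-back "Syria France" from collapsing into a single location
+ ```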
+ ## Training procedure
+ This model was trained on an NVIDIA V100 GPU with the recommended hyperparameters from the Hugging Face code.
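+ 
+ The training script itself is not part of this commit; as a rough orientation only, a fine-tune in this spirit would look like the sketch below. Every concrete value (batch size, epochs, dataset) is a placeholder assumption, not the author's recorded configuration:
+ ```python
+ from transformers import (AutoModelForTokenClassification, AutoTokenizer,
+                           Trainer, TrainingArguments)
+ 
+ # 9 labels (O, B/I-DATE, B/I-PER, B/I-ORG, B/I-LOC), matching config.json below
+ model = AutoModelForTokenClassification.from_pretrained(
+     "bert-base-multilingual-cased", num_labels=9)
+ tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
+ 
+ args = TrainingArguments(
+     output_dir="bert-ner-hrl",
+     learning_rate=5e-5,              # Transformers default
+     per_device_train_batch_size=32,  # placeholder
+     num_train_epochs=3,              # placeholder
+ )
+ # train_dataset: the aggregated, tokenized NER data from the table above
+ # trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
+ # trainer.train()
+ ```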
config.json ADDED
@@ -0,0 +1,50 @@
+ {
+   "_name_or_path": "bert-base-multilingual-cased",
+   "architectures": [
+     "BertForTokenClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "directionality": "bidi",
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "O",
+     "1": "B-DATE",
+     "2": "I-DATE",
+     "3": "B-PER",
+     "4": "I-PER",
+     "5": "B-ORG",
+     "6": "I-ORG",
+     "7": "B-LOC",
+     "8": "I-LOC"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "B-DATE": 1,
+     "B-LOC": 7,
+     "B-ORG": 5,
+     "B-PER": 3,
+     "I-DATE": 2,
+     "I-LOC": 8,
+     "I-ORG": 6,
+     "I-PER": 4,
+     "O": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "pooler_fc_size": 768,
+   "pooler_num_attention_heads": 12,
+   "pooler_num_fc_layers": 3,
+   "pooler_size_per_head": 128,
+   "pooler_type": "first_token_transform",
+   "position_embedding_type": "absolute",
+   "type_vocab_size": 2,
+   "vocab_size": 119547
+ }
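
The id2label/label2id maps above are what the pipeline uses to turn the classification head's nine output indices into tag strings; note that they also include B-DATE/I-DATE, which the README's class table does not list. A quick way to inspect them (a small sketch using the standard AutoConfig API):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Davlan/bert-base-multilingual-cased-ner-hrl")
print(config.num_labels)   # 9
print(config.id2label[3])  # 'B-PER'
```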
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c707863b713df859962ba50dcd834ab1b5bd459e7cc184e3aab62f2d34fc764
+ size 709167607
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "name_or_path": "bert-base-multilingual-cased"}
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:22d7b45befcaae3b668f7a2bc0a9e2d77c4e5a9f7d09e99db695b6cb6edcca81
+ size 1519
vocab.txt ADDED
The diff for this file is too large to render. See raw diff