asahi417 committed on
Commit 75c8d1e · 1 Parent(s): 8dfb4d4

model update

README.md ADDED
@@ -0,0 +1,140 @@
+ ---
+ datasets:
+ - mit_restaurant
+ metrics:
+ - f1
+ - precision
+ - recall
+ model-index:
+ - name: tner/roberta-large-mit-restaurant
+   results:
+   - task:
+       name: Token Classification
+       type: token-classification
+     dataset:
+       name: mit_restaurant
+       type: mit_restaurant
+       args: mit_restaurant
+     metrics:
+     - name: F1
+       type: f1
+       value: 0.8164676304211189
+     - name: Precision
+       type: precision
+       value: 0.8085901027077498
+     - name: Recall
+       type: recall
+       value: 0.8245001586797842
+     - name: F1 (macro)
+       type: f1_macro
+       value: 0.8081522050756316
+     - name: Precision (macro)
+       type: precision_macro
+       value: 0.7974927131040113
+     - name: Recall (macro)
+       type: recall_macro
+       value: 0.8199029986502094
+     - name: F1 (entity span)
+       type: f1_entity_span
+       value: 0.8557510999371464
+     - name: Precision (entity span)
+       type: precision_entity_span
+       value: 0.8474945533769063
+     - name: Recall (entity span)
+       type: recall_entity_span
+       value: 0.8641701047286575
+
+ pipeline_tag: token-classification
+ widget:
+ - text: "Jacob Collier is a Grammy Award-winning artist from England."
+   example_title: "NER Example 1"
+ ---
+ # tner/roberta-large-mit-restaurant
+
+ This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the
+ [tner/mit_restaurant](https://huggingface.co/datasets/tner/mit_restaurant) dataset.
+ Model fine-tuning was done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
+ for more details). It achieves the following results on the test set:
+ - F1 (micro): 0.8164676304211189
+ - Precision (micro): 0.8085901027077498
+ - Recall (micro): 0.8245001586797842
+ - F1 (macro): 0.8081522050756316
+ - Precision (macro): 0.7974927131040113
+ - Recall (macro): 0.8199029986502094
+
+ The per-entity breakdown of the F1 scores on the test set is below:
+ - amenity: 0.7140221402214022
+ - cuisine: 0.8558052434456929
+ - dish: 0.829103214890017
+ - location: 0.8611793611793611
+ - money: 0.8579710144927537
+ - rating: 0.8
+ - restaurant: 0.8713375796178344
+ - time: 0.6757990867579908
+
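+ The macro F1 above is simply the unweighted mean of these eight per-entity scores, which can be verified directly (a quick arithmetic check, not T-NER code):
+ ```python
+ # Per-entity F1 scores from this card; their mean reproduces the macro F1.
+ scores = [0.7140221402214022, 0.8558052434456929, 0.829103214890017,
+           0.8611793611793611, 0.8579710144927537, 0.8,
+           0.8713375796178344, 0.6757990867579908]
+ print(sum(scores) / len(scores))  # 0.8081522050756316 (macro F1)
+ ```
+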
+ For F1 scores, the confidence intervals are obtained by bootstrap as below:
+ - F1 (micro):
+     - 90%: [0.8050039870241192, 0.8289531287254172]
+     - 95%: [0.8030897272187587, 0.8312785732455824]
+ - F1 (macro):
+     - 90%: [0.7954595245799596, 0.8219360781988571]
+     - 95%: [0.792555816856374, 0.825200956567577]
+
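+ As an illustration of how such percentile intervals can be computed (a minimal sketch, not the exact T-NER evaluation code; `metric_fn` and the sentence-level resampling granularity are assumptions), the test set is resampled with replacement and the metric recomputed on each resample:
+ ```python
+ import random
+
+ def bootstrap_ci(metric_fn, samples, n_boot=1000, alpha=0.05, seed=42):
+     """Percentile-bootstrap confidence interval for a corpus-level metric such as micro F1."""
+     rng = random.Random(seed)
+     stats = sorted(
+         metric_fn([samples[rng.randrange(len(samples))] for _ in samples])
+         for _ in range(n_boot)
+     )
+     lower = stats[int(n_boot * alpha / 2)]            # e.g. the 2.5th percentile
+     upper = stats[int(n_boot * (1 - alpha / 2)) - 1]  # e.g. the 97.5th percentile
+     return lower, upper
+ ```
+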
+ Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-mit-restaurant/raw/main/eval/metric.json)
+ and [metric file of entity span](https://huggingface.co/tner/roberta-large-mit-restaurant/raw/main/eval/metric_span.json).
+
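+ To inspect these numbers programmatically, the metric file can be downloaded with `huggingface_hub` (a short sketch; the keys follow the JSON in `eval/metric.json`, added later in this commit):
+ ```python
+ import json
+ from huggingface_hub import hf_hub_download
+
+ # Fetch eval/metric.json from this model repository.
+ path = hf_hub_download(repo_id="tner/roberta-large-mit-restaurant", filename="eval/metric.json")
+ with open(path) as f:
+     metric = json.load(f)
+ print(metric["micro/f1"], metric["macro/f1"])
+ print(metric["per_entity_metric"]["time"]["f1"])  # weakest entity type on this card
+ ```
+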
+ ### Usage
+ This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip:
+ ```shell
+ pip install tner
+ ```
+ and load the model as below.
+ ```python
+ from tner import TransformersNER
+ model = TransformersNER("tner/roberta-large-mit-restaurant")
+ model.predict(["Jacob Collier is a Grammy Award-winning English artist from London"])
+ ```
+ The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
+
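+ For completeness, a minimal transformers-only sketch is below (an illustration, not the recommended path: predictions here bypass the CRF layer, so outputs can differ from `tner`'s; `aggregation_strategy` is a standard pipeline option, not anything model-specific):
+ ```python
+ from transformers import pipeline
+
+ ner = pipeline(
+     "token-classification",
+     model="tner/roberta-large-mit-restaurant",
+     aggregation_strategy="simple",  # merge sub-word tokens into entity spans
+ )
+ print(ner("a moderately priced sushi place that is open until 10 pm"))
+ ```
+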
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - dataset: ['tner/mit_restaurant']
+ - dataset_split: train
+ - dataset_name: None
+ - local_dataset: None
+ - model: roberta-large
+ - crf: True
+ - max_length: 128
+ - epoch: 15
+ - batch_size: 64
+ - lr: 1e-05
+ - random_seed: 42
+ - gradient_accumulation_steps: 1
+ - weight_decay: None
+ - lr_warmup_step_ratio: 0.1
+ - max_grad_norm: 10.0
+
+ The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-mit-restaurant/raw/main/trainer_config.json).
+
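+ As a rough sketch of how such a run can be reproduced with T-NER's hyper-parameter search (the argument names follow the `GridSearcher` usage example in the T-NER repository README and are assumptions here; consult the repository for the exact interface):
+ ```python
+ # Sketch only: searched parameters are passed as lists; fixed values mirror this card.
+ from tner import GridSearcher
+
+ searcher = GridSearcher(
+     checkpoint_dir="./ckpt_tner",
+     dataset="tner/mit_restaurant",
+     model="roberta-large",
+     epoch=15,
+     batch_size=64,
+     crf=[True],
+     lr=[1e-5],
+     lr_warmup_step_ratio=[0.1],
+     max_grad_norm=[10.0],
+ )
+ searcher.train()
+ ```
+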
+ ### Reference
+ If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
+
+ ```bibtex
+ @inproceedings{ushio-camacho-collados-2021-ner,
+     title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
+     author = "Ushio, Asahi  and
+       Camacho-Collados, Jose",
+     booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
+     month = apr,
+     year = "2021",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.eacl-demos.7",
+     doi = "10.18653/v1/2021.eacl-demos.7",
+     pages = "53--62",
+     abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
+ }
+ ```
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "tner_ckpt/mit_restaurant_roberta_large/best_model",
+ "_name_or_path": "tner_ckpt/mit_restaurant_roberta_large/model_rcsnba/epoch_5",
  "architectures": [
    "RobertaForTokenClassification"
  ],
eval/metric.json ADDED
@@ -0,0 +1 @@
+ {"micro/f1": 0.8164676304211189, "micro/f1_ci": {"90": [0.8050039870241192, 0.8289531287254172], "95": [0.8030897272187587, 0.8312785732455824]}, "micro/recall": 0.8245001586797842, "micro/precision": 0.8085901027077498, "macro/f1": 0.8081522050756316, "macro/f1_ci": {"90": [0.7954595245799596, 0.8219360781988571], "95": [0.792555816856374, 0.825200956567577]}, "macro/recall": 0.8199029986502094, "macro/precision": 0.7974927131040113, "per_entity_metric": {"amenity": {"f1": 0.7140221402214022, "f1_ci": {"90": [0.683327540549487, 0.7436187631209326], "95": [0.678020677628301, 0.7517245237181946]}, "precision": 0.7023593466424682, "recall": 0.726078799249531}, "cuisine": {"f1": 0.8558052434456929, "f1_ci": {"90": [0.833801564945227, 0.8766126228464227], "95": [0.8292487217750814, 0.8803160836742926]}, "precision": 0.8526119402985075, "recall": 0.8590225563909775}, "dish": {"f1": 0.829103214890017, "f1_ci": {"90": [0.7969955469192779, 0.8597312956236187], "95": [0.7924528301886793, 0.8644379746637839]}, "precision": 0.8085808580858086, "recall": 0.8506944444444444}, "location": {"f1": 0.8611793611793611, "f1_ci": {"90": [0.8402656883936412, 0.8813415532273775], "95": [0.8362810514435675, 0.8845460980496779]}, "precision": 0.8590686274509803, "recall": 0.8633004926108374}, "money": {"f1": 0.8579710144927537, "f1_ci": {"90": [0.8159509202453988, 0.8963682024533951], "95": [0.8072020389249304, 0.904497826286624]}, "precision": 0.8505747126436781, "recall": 0.8654970760233918}, "rating": {"f1": 0.8, "f1_ci": {"90": [0.7592525006758584, 0.8345533282112174], "95": [0.7518762102471958, 0.8421052631578947]}, "precision": 0.7589285714285714, "recall": 0.845771144278607}, "restaurant": {"f1": 0.8713375796178344, "f1_ci": {"90": [0.8471992536498038, 0.896299785653756], "95": [0.8420912467587077, 0.9023500668180059]}, "precision": 0.8929503916449086, "recall": 0.8507462686567164}, "time": {"f1": 0.6757990867579908, "f1_ci": {"90": [0.621840639870406, 0.7241566310120426], "95": [0.6140051214994007, 0.7334963325183375]}, "precision": 0.6548672566371682, "recall": 0.6981132075471698}}}
eval/metric_span.json ADDED
@@ -0,0 +1 @@
+ {"micro/f1": 0.8557510999371464, "micro/f1_ci": {"90": [0.8457549886322284, 0.866532074562069], "95": [0.8439094859106241, 0.8689014756604962]}, "micro/recall": 0.8641701047286575, "micro/precision": 0.8474945533769063, "macro/f1": 0.8557510999371464, "macro/f1_ci": {"90": [0.8457549886322284, 0.866532074562069], "95": [0.8439094859106241, 0.8689014756604962]}, "macro/recall": 0.8641701047286575, "macro/precision": 0.8474945533769063}
eval/prediction.validation.json ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0d2e091e9d78302ff4904bd9cdfd878b10ac33a35ff737f6cd634790afd7b6c1
- size 1417441393
+ oid sha256:58e1e876ca140f5c57210bac516075cd00f5f1a761973316b0dfaef37cf54fd6
+ size 1417446833
tokenizer_config.json CHANGED
@@ -6,7 +6,7 @@
  "errors": "replace",
  "mask_token": "<mask>",
  "model_max_length": 512,
- "name_or_path": "tner_ckpt/mit_restaurant_roberta_large/best_model",
+ "name_or_path": "tner_ckpt/mit_restaurant_roberta_large/model_rcsnba/epoch_5",
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "special_tokens_map_file": "tner_ckpt/mit_restaurant_roberta_large/model_rcsnba/epoch_5/special_tokens_map.json",
trainer_config.json ADDED
@@ -0,0 +1 @@
+ {"dataset": ["tner/mit_restaurant"], "dataset_split": "train", "dataset_name": null, "local_dataset": null, "model": "roberta-large", "crf": true, "max_length": 128, "epoch": 15, "batch_size": 64, "lr": 1e-05, "random_seed": 42, "gradient_accumulation_steps": 1, "weight_decay": null, "lr_warmup_step_ratio": 0.1, "max_grad_norm": 10.0}