Commit 4ec2d4b by asahi417 (1 parent: 034a49a)

model update
README.md ADDED
---
datasets:
- tner/tweetner7
metrics:
- f1
- precision
- recall
model-index:
- name: cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: tner/tweetner7
      type: tner/tweetner7
      args: tner/tweetner7
    metrics:
    - name: F1
      type: f1
      value: 0.6419150543257219
    - name: Precision
      type: precision
      value: 0.6451010159990658
    - name: Recall
      type: recall
      value: 0.6387604070305273
    - name: F1 (macro)
      type: f1_macro
      value: 0.5829431071584856
    - name: Precision (macro)
      type: precision_macro
      value: 0.5886989381701707
    - name: Recall (macro)
      type: recall_macro
      value: 0.5796110916728531
    - name: F1 (entity span)
      type: f1_entity_span
      value: 0.7753631609529343
    - name: Precision (entity span)
      type: precision_entity_span
      value: 0.7791661800770758
    - name: Recall (entity span)
      type: recall_entity_span
      value: 0.7715970856944605

pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
  example_title: "NER Example 1"
---
# cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2022-154m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m) on the
[tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.6419150543257219
- Precision (micro): 0.6451010159990658
- Recall (micro): 0.6387604070305273
- F1 (macro): 0.5829431071584856
- Precision (macro): 0.5886989381701707
- Recall (macro): 0.5796110916728531

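As a quick sanity check, the micro F1 above is the harmonic mean of the micro precision and recall, which can be verified directly:

```python
# Micro F1 is the harmonic mean of micro precision and micro recall.
precision = 0.6451010159990658
recall = 0.6387604070305273
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # 0.6419150543257219 -> matches F1 (micro) above
```
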
The per-entity breakdown of the F1 score on the test set is below:
- corporation: 0.5127020785219399
- event: 0.43384759233286585
- group: 0.6000666000666002
- location: 0.6535326086956522
- person: 0.8390577234310376
- product: 0.6386386386386387
- work_of_art: 0.40275650842266464

+ For F1 scores, the confidence interval is obtained by bootstrap as below:
75
+ - F1 (micro):
76
+
77
+ - F1 (macro):
78
+
79
+
80
+ Full evaluation can be found at [metric file of NER](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020/raw/main/eval/metric.json)
81
+ and [metric file of entity span](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020/raw/main/eval/metric_span.json).
82
+
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip,
```shell
pip install tner
```
and load the model as below.
```python
from tner import TransformersNER
model = TransformersNER("cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.

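For reference, a minimal sketch of loading the checkpoint directly with transformers is shown below; since this bypasses the CRF layer, the decoded labels may differ from tner's output:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_name = "cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Loads only the RobertaForTokenClassification head; the CRF transition
# parameters trained by tner are ignored.
model = AutoModelForTokenClassification.from_pretrained(model_name)
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Jacob Collier is a Grammy awarded English artist from London"))
```
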
### Training hyperparameters

The following hyperparameters were used during training:
- dataset: ['tner/tweetner7']
- dataset_split: train_2020
- dataset_name: None
- local_dataset: None
- model: cardiffnlp/twitter-roberta-base-2022-154m
- crf: True
- max_length: 128
- epoch: 30
- batch_size: 32
- lr: 0.0001
- random_seed: 42
- gradient_accumulation_steps: 1
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.3
- max_grad_norm: 10

The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/cardiffnlp/twitter-roberta-base-2022-154m-tweetner7-2020/raw/main/trainer_config.json).

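As a rough sketch, a fine-tuning run with this configuration could be reproduced through T-NER's `GridSearcher`; the argument names below are assumed to mirror the trainer_config.json keys and may differ across tner versions:

```python
from tner import GridSearcher

# Hypothetical reproduction of the configuration above (argument names assumed).
searcher = GridSearcher(
    checkpoint_dir="./ckpt_tweetner7_2020",  # assumed output directory
    dataset="tner/tweetner7",
    dataset_split="train_2020",  # assumed name of the split argument
    model="cardiffnlp/twitter-roberta-base-2022-154m",
    epoch=30,
    batch_size=32,
    max_length=128,
    # Search-space arguments take lists; single values pin each setting.
    crf=[True],
    lr=[1e-4],
    weight_decay=[1e-7],
    random_seed=[42],
    gradient_accumulation_steps=[1],
    lr_warmup_step_ratio=[0.3],
    max_grad_norm=[10],
)
searcher.train()
```
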
### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "cner_output/model/baseline/t_roberta_base_2022/best_model",
+  "_name_or_path": "cardiffnlp/twitter-roberta-base-2022-154m",
   "architectures": [
     "RobertaForTokenClassification"
   ],
eval/metric.json ADDED
{"micro/f1": 0.6419150543257219, "micro/f1_ci": {}, "micro/recall": 0.6387604070305273, "micro/precision": 0.6451010159990658, "macro/f1": 0.5829431071584856, "macro/f1_ci": {}, "macro/recall": 0.5796110916728531, "macro/precision": 0.5886989381701707, "per_entity_metric": {"corporation": {"f1": 0.5127020785219399, "f1_ci": {}, "precision": 0.5336538461538461, "recall": 0.49333333333333335}, "event": {"f1": 0.43384759233286585, "f1_ci": {}, "precision": 0.4461538461538462, "recall": 0.4222020018198362}, "group": {"f1": 0.6000666000666002, "f1_ci": {}, "precision": 0.6067340067340067, "recall": 0.5935441370223979}, "location": {"f1": 0.6535326086956522, "f1_ci": {}, "precision": 0.6362433862433863, "recall": 0.6717877094972067}, "person": {"f1": 0.8390577234310376, "f1_ci": {}, "precision": 0.8188838188838189, "recall": 0.8602507374631269}, "product": {"f1": 0.6386386386386387, "f1_ci": {}, "precision": 0.621832358674464, "recall": 0.6563786008230452}, "work_of_art": {"f1": 0.40275650842266464, "f1_ci": {}, "precision": 0.4573913043478261, "recall": 0.359781121751026}}}
eval/metric.test_2020.json ADDED
{"micro/f1": 0.638988177069013, "micro/f1_ci": {}, "micro/recall": 0.6030098598858329, "micro/precision": 0.6795321637426901, "macro/f1": 0.5967724585385162, "macro/f1_ci": {}, "macro/recall": 0.5601050750558658, "macro/precision": 0.6414056489059751, "per_entity_metric": {"corporation": {"f1": 0.5411764705882353, "f1_ci": {}, "precision": 0.6174496644295302, "recall": 0.4816753926701571}, "event": {"f1": 0.47265625000000006, "f1_ci": {}, "precision": 0.4898785425101215, "recall": 0.45660377358490567}, "group": {"f1": 0.5499999999999999, "f1_ci": {}, "precision": 0.6184738955823293, "recall": 0.49517684887459806}, "location": {"f1": 0.6394984326018808, "f1_ci": {}, "precision": 0.6623376623376623, "recall": 0.6181818181818182}, "person": {"f1": 0.810580204778157, "f1_ci": {}, "precision": 0.8246527777777778, "recall": 0.7969798657718121}, "product": {"f1": 0.6682577565632458, "f1_ci": {}, "precision": 0.7035175879396985, "recall": 0.6363636363636364}, "work_of_art": {"f1": 0.49523809523809514, "f1_ci": {}, "precision": 0.5735294117647058, "recall": 0.43575418994413406}}}
eval/metric_span.json ADDED
{"micro/f1": 0.7753631609529343, "micro/f1_ci": {}, "micro/recall": 0.7715970856944605, "micro/precision": 0.7791661800770758, "macro/f1": 0.7753631609529343, "macro/f1_ci": {}, "macro/recall": 0.7715970856944605, "macro/precision": 0.7791661800770758}
eval/metric_span.test_2020.json ADDED
{"micro/f1": 0.7462194116029693, "micro/f1_ci": {}, "micro/recall": 0.7042034250129735, "micro/precision": 0.7935672514619883, "macro/f1": 0.7462194116029693, "macro/f1_ci": {}, "macro/recall": 0.7042034250129735, "macro/precision": 0.7935672514619883}
eval/prediction.validation_2020.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
@@ -6,7 +6,7 @@
   "errors": "replace",
   "mask_token": "<mask>",
   "model_max_length": 512,
-  "name_or_path": "cner_output/model/baseline/t_roberta_base_2022/best_model",
+  "name_or_path": "cardiffnlp/twitter-roberta-base-2022-154m",
   "pad_token": "<pad>",
   "sep_token": "</s>",
   "special_tokens_map_file": null,
trainer_config.json ADDED
{"dataset": ["tner/tweetner7"], "dataset_split": "train_2020", "dataset_name": null, "local_dataset": null, "model": "cardiffnlp/twitter-roberta-base-2022-154m", "crf": true, "max_length": 128, "epoch": 30, "batch_size": 32, "lr": 0.0001, "random_seed": 42, "gradient_accumulation_steps": 1, "weight_decay": 1e-07, "lr_warmup_step_ratio": 0.3, "max_grad_norm": 10}