asahi417 committed
Commit f07c818
1 Parent(s): e8f6f76

model update

README.md ADDED
@@ -0,0 +1,136 @@
---
datasets:
- fin
metrics:
- f1
- precision
- recall
model-index:
- name: tner/deberta-v3-large-fin
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: fin
      type: fin
      args: fin
    metrics:
    - name: F1
      type: f1
      value: 0.6430868167202574
    - name: Precision
      type: precision
      value: 0.6578947368421053
    - name: Recall
      type: recall
      value: 0.6289308176100629
    - name: F1 (macro)
      type: f1_macro
      value: 0.37234464254803534
    - name: Precision (macro)
      type: precision_macro
      value: 0.3758815642868512
    - name: Recall (macro)
      type: recall_macro
      value: 0.3836106023606024
    - name: F1 (entity span)
      type: f1_entity_span
      value: 0.6883116883116883
    - name: Precision (entity span)
      type: precision_entity_span
      value: 0.7043189368770764
    - name: Recall (entity span)
      type: recall_entity_span
      value: 0.6730158730158731

pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
  example_title: "NER Example 1"
---
# tner/deberta-v3-large-fin

This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the
[tner/fin](https://huggingface.co/datasets/tner/fin) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.6430868167202574
- Precision (micro): 0.6578947368421053
- Recall (micro): 0.6289308176100629
- F1 (macro): 0.37234464254803534
- Precision (macro): 0.3758815642868512
- Recall (macro): 0.3836106023606024

The per-entity breakdown of the F1 score on the test set is below:
- LOC: nan
- MISC: nan
- ORG: nan
- PER: nan

For F1 scores, the confidence intervals are obtained by bootstrap as below:
- F1 (micro):
    - 90%: [0.5722111059165758, 0.7112704135498799]
    - 95%: [0.557944362785127, 0.725353903079494]
- F1 (macro):
    - 90%: [0.321037444212583, 0.4174222520031422]
    - 95%: [0.3126661472561014, 0.4276527473028317]

Full evaluation can be found at the [metric file of NER](https://huggingface.co/tner/deberta-v3-large-fin/raw/main/eval/metric.json)
and the [metric file of entity span](https://huggingface.co/tner/deberta-v3-large-fin/raw/main/eval/metric_span.json).
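
Both metric files are plain JSON, so the numbers above can be pulled programmatically. Below is a minimal sketch, assuming only the `requests` package and the raw file URLs linked above:
```python
# Fetch the published evaluation metrics from the raw metric files.
import requests

base = "https://huggingface.co/tner/deberta-v3-large-fin/raw/main/eval"
metric = requests.get(f"{base}/metric.json").json()
metric_span = requests.get(f"{base}/metric_span.json").json()

print(metric["micro/f1"])           # 0.6430868167202574
print(metric["micro/f1_ci"]["95"])  # 95% bootstrap confidence interval
print(metric_span["micro/f1"])      # 0.6883116883116883 (entity span level)
```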

### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip:
```shell
pip install tner
```
and activate the model as below.
```python
from tner import TransformersNER
model = TransformersNER("tner/deberta-v3-large-fin")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
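
For reference, here is a minimal sketch of that direct transformers route, using the standard `pipeline` API (illustration only, since the label decoding differs from T-NER's):
```python
# Token-classification pipeline over the same checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tner/deberta-v3-large-fin",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Jacob Collier is a Grammy awarded English artist from London"))
```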

### Training hyperparameters

The following hyperparameters were used during training:
- dataset: ['tner/fin']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: microsoft/deberta-v3-large
- crf: False
- max_length: 128
- epoch: 17
- batch_size: 16
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 4
- weight_decay: 1e-07
- lr_warmup_step_ratio: 0.1
- max_grad_norm: 10.0

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/tner/deberta-v3-large-fin/raw/main/trainer_config.json).
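
Since the configuration ships as `trainer_config.json` in this repository, it can also be loaded programmatically. A small sketch, assuming the `huggingface_hub` package is installed; note that the effective batch size is batch_size × gradient_accumulation_steps = 16 × 4 = 64:
```python
# Download and parse the training configuration from the model repository.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="tner/deberta-v3-large-fin", filename="trainer_config.json")
with open(path) as f:
    config = json.load(f)

print(config["lr"])  # 1e-05
print(config["batch_size"] * config["gradient_accumulation_steps"])  # effective batch size: 64
```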

### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "tner_ckpt/fin_deberta_v3_large/best_model",
+  "_name_or_path": "tner_ckpt/fin_deberta_v3_large/model_ulfllg/epoch_16",
   "architectures": [
     "DebertaV2ForTokenClassification"
   ],
eval/metric.json ADDED
@@ -0,0 +1 @@
{"micro/f1": 0.6430868167202574, "micro/f1_ci": {"90": [0.5722111059165758, 0.7112704135498799], "95": [0.557944362785127, 0.725353903079494]}, "micro/recall": 0.6289308176100629, "micro/precision": 0.6578947368421053, "macro/f1": 0.37234464254803534, "macro/f1_ci": {"90": [0.321037444212583, 0.4174222520031422], "95": [0.3126661472561014, 0.4276527473028317]}, "macro/recall": 0.3836106023606024, "macro/precision": 0.3758815642868512, "per_entity_metric": {"LOC": {"f1": NaN, "f1_ci": {"90": [NaN, NaN], "95": [NaN, NaN]}, "precision": 0.0, "recall": 0.0}, "MISC": {"f1": NaN, "f1_ci": {"90": [NaN, NaN], "95": [NaN, NaN]}, "precision": 0.0, "recall": 0.0}, "ORG": {"f1": NaN, "f1_ci": {"90": [NaN, NaN], "95": [NaN, NaN]}, "precision": 0.0, "recall": 0.0}, "PER": {"f1": NaN, "f1_ci": {"90": [NaN, NaN], "95": [NaN, NaN]}, "precision": 0.0, "recall": 0.0}}}
eval/metric_span.json ADDED
@@ -0,0 +1 @@
{"micro/f1": 0.6883116883116883, "micro/f1_ci": {"90": [0.6137984272716044, 0.757765305655086], "95": [0.604156373368873, 0.7718631178707224]}, "micro/recall": 0.6730158730158731, "micro/precision": 0.7043189368770764, "macro/f1": 0.6883116883116883, "macro/f1_ci": {"90": [0.6137984272716044, 0.757765305655086], "95": [0.604156373368873, 0.7718631178707224]}, "macro/recall": 0.6730158730158731, "macro/precision": 0.7043189368770764}
eval/prediction.validation.json ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:011ac87e23c7729d64bcba4abeb513607010d1f887bac0ae014e29bd6374edf7
-size 1736217519
+oid sha256:b043f9a9734e2e12de53c5dbdeb48ff85def0ef8c78bea2f8bc73eabbcbd2198
+size 1736223023
tokenizer_config.json CHANGED
@@ -4,7 +4,7 @@
   "do_lower_case": false,
   "eos_token": "[SEP]",
   "mask_token": "[MASK]",
-  "name_or_path": "tner_ckpt/fin_deberta_v3_large/best_model",
+  "name_or_path": "tner_ckpt/fin_deberta_v3_large/model_ulfllg/epoch_16",
   "pad_token": "[PAD]",
   "sep_token": "[SEP]",
   "sp_model_kwargs": {},
trainer_config.json ADDED
@@ -0,0 +1 @@
{"dataset": ["tner/fin"], "dataset_split": "train", "dataset_name": null, "local_dataset": null, "model": "microsoft/deberta-v3-large", "crf": false, "max_length": 128, "epoch": 17, "batch_size": 16, "lr": 1e-05, "random_seed": 42, "gradient_accumulation_steps": 4, "weight_decay": 1e-07, "lr_warmup_step_ratio": 0.1, "max_grad_norm": 10.0}