asahi417 committed
Commit aa571ea
1 Parent(s): fa71810

model update

README.md ADDED
@@ -0,0 +1,135 @@
+ ---
+ datasets:
+ - tner/ttc
+ metrics:
+ - f1
+ - precision
+ - recall
+ model-index:
+ - name: tner/deberta-v3-large-ttc
+   results:
+   - task:
+       name: Token Classification
+       type: token-classification
+     dataset:
+       name: tner/ttc
+       type: tner/ttc
+       args: tner/ttc
+     metrics:
+     - name: F1
+       type: f1
+       value: 0.8266925817946227
+     - name: Precision
+       type: precision
+       value: 0.8264248704663213
+     - name: Recall
+       type: recall
+       value: 0.8269604666234608
+     - name: F1 (macro)
+       type: f1_macro
+       value: 0.8267742072572187
+     - name: Precision (macro)
+       type: precision_macro
+       value: 0.8278533291801137
+     - name: Recall (macro)
+       type: recall_macro
+       value: 0.8257668793195109
+     - name: F1 (entity span)
+       type: f1_entity_span
+       value: 0.8713961775186264
+     - name: Precision (entity span)
+       type: precision_entity_span
+       value: 0.8711139896373057
+     - name: Recall (entity span)
+       type: recall_entity_span
+       value: 0.8716785482825664
+
+ pipeline_tag: token-classification
+ widget:
+ - text: "Jacob Collier is a Grammy awarded artist from England."
+   example_title: "NER Example 1"
+ ---
+ # tner/deberta-v3-large-ttc
+
+ This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the
+ [tner/ttc](https://huggingface.co/datasets/tner/ttc) dataset.
+ Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
+ for more details). It achieves the following results on the test set:
+ - F1 (micro): 0.8266925817946227
+ - Precision (micro): 0.8264248704663213
+ - Recall (micro): 0.8269604666234608
+ - F1 (macro): 0.8267742072572187
+ - Precision (macro): 0.8278533291801137
+ - Recall (macro): 0.8257668793195109
+
+ The per-entity breakdown of the F1 scores on the test set is below:
+ - location: 0.7862266857962696
+ - organization: 0.7770320656226697
+ - person: 0.9170638703527169
+
+ For the F1 scores, the confidence intervals are obtained by bootstrap as below (a minimal sketch of the procedure follows the list):
+ - F1 (micro):
+     - 90%: [0.8124223893760291, 0.8416139230675236]
+     - 95%: [0.8098712905029445, 0.8440240645643514]
+ - F1 (macro):
+     - 90%: [0.8124381534173242, 0.8415633373861329]
+     - 95%: [0.8091911900905925, 0.845009292494298]
+
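+ As a rough illustration of the bootstrap, the sketch below computes percentile intervals by resampling sentences with replacement. The names (`bootstrap_f1_ci`, `f1_fn`) are illustrative only, not T-NER's internal API.
+ ```python
+ import numpy as np
+
+ def bootstrap_f1_ci(gold, pred, f1_fn, n_resamples=1000, seed=42):
+     """Percentile-bootstrap confidence intervals for a corpus-level F1.
+
+     gold / pred are parallel lists of per-sentence annotations;
+     f1_fn computes F1 over such a pair of lists.
+     """
+     rng = np.random.default_rng(seed)
+     n = len(gold)
+     scores = []
+     for _ in range(n_resamples):
+         idx = rng.integers(0, n, size=n)  # resample sentences with replacement
+         scores.append(f1_fn([gold[i] for i in idx], [pred[i] for i in idx]))
+     lo90, hi90 = np.percentile(scores, [5, 95])      # 90% interval
+     lo95, hi95 = np.percentile(scores, [2.5, 97.5])  # 95% interval
+     return {"90": [lo90, hi90], "95": [lo95, hi95]}
+ ```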
+ The full evaluation can be found in the [metric file for NER](https://huggingface.co/tner/deberta-v3-large-ttc/raw/main/eval/metric.json)
+ and the [metric file for entity spans](https://huggingface.co/tner/deberta-v3-large-ttc/raw/main/eval/metric_span.json).
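+ For a quick look at these files, the snippet below fetches and parses the NER metric file with the standard library; the keys shown match the JSON added in this commit.
+ ```python
+ import json
+ import urllib.request
+
+ url = "https://huggingface.co/tner/deberta-v3-large-ttc/raw/main/eval/metric.json"
+ with urllib.request.urlopen(url) as f:  # fetch the evaluation metrics
+     metric = json.load(f)
+ print(metric["micro/f1"])                           # 0.8266925817946227
+ print(metric["per_entity_metric"]["person"]["f1"])  # 0.9170638703527169
+ ```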
+
+ ### Usage
+ This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip,
+ ```shell
+ pip install tner
+ ```
+ and load the model as below.
+ ```python
+ from tner import TransformersNER
+ model = TransformersNER("tner/deberta-v3-large-ttc")
+ model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
+ ```
+ The model can also be used through the transformers library, but this is not recommended, as the CRF layer is not supported at the moment; a minimal sketch is shown below.
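+ The sketch uses the standard transformers token-classification pipeline; note that it bypasses the CRF decoding, so the predicted spans may differ from tner's output.
+ ```python
+ from transformers import pipeline
+
+ # Plain transformers inference: no CRF decoding on top of the tag logits.
+ ner = pipeline(
+     "token-classification",
+     model="tner/deberta-v3-large-ttc",
+     aggregation_strategy="simple",  # merge sub-word tokens into entity spans
+ )
+ print(ner("Jacob Collier is a Grammy awarded English artist from London"))
+ ```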
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - dataset: ['tner/ttc']
+ - dataset_split: train
+ - dataset_name: None
+ - local_dataset: None
+ - model: microsoft/deberta-v3-large
+ - crf: True
+ - max_length: 128
+ - epoch: 15
+ - batch_size: 16
+ - lr: 1e-05
+ - random_seed: 42
+ - gradient_accumulation_steps: 4
+ - weight_decay: 1e-07
+ - lr_warmup_step_ratio: 0.1
+ - max_grad_norm: 10.0
+
+ The full configuration can be found in the [fine-tuning parameter file](https://huggingface.co/tner/deberta-v3-large-ttc/raw/main/trainer_config.json), and a sketch of launching training with these values is below.
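+ The sketch is modeled on T-NER's `GridSearcher` interface from its README; the argument names (e.g. `checkpoint_dir`) should be treated as assumptions rather than a pinned API, and list-valued arguments define the search space (here fixed to the values above).
+ ```python
+ from tner import GridSearcher
+
+ # Hyper-parameter search collapsed to the single configuration listed above.
+ searcher = GridSearcher(
+     checkpoint_dir="./ckpt_ttc",  # hypothetical output directory
+     dataset="tner/ttc",
+     model="microsoft/deberta-v3-large",
+     epoch=15,
+     batch_size=16,
+     crf=[True],
+     lr=[1e-5],
+     random_seed=[42],
+     gradient_accumulation_steps=[4],
+     weight_decay=[1e-7],
+     lr_warmup_step_ratio=[0.1],
+     max_grad_norm=[10.0],
+ )
+ searcher.train()
+ ```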
+
+ ### Reference
+ If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
+
+ ```
+ @inproceedings{ushio-camacho-collados-2021-ner,
+     title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
+     author = "Ushio, Asahi  and
+       Camacho-Collados, Jose",
+     booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
+     month = apr,
+     year = "2021",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.eacl-demos.7",
+     doi = "10.18653/v1/2021.eacl-demos.7",
+     pages = "53--62",
+     abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
+ }
+ ```
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "tner_ckpt/ttc_deberta_v3_large/best_model",
+ "_name_or_path": "microsoft/deberta-v3-large",
  "architectures": [
    "DebertaV2ForTokenClassification"
  ],
eval/metric.json ADDED
@@ -0,0 +1 @@
+ {"micro/f1": 0.8266925817946227, "micro/f1_ci": {"90": [0.8124223893760291, 0.8416139230675236], "95": [0.8098712905029445, 0.8440240645643514]}, "micro/recall": 0.8269604666234608, "micro/precision": 0.8264248704663213, "macro/f1": 0.8267742072572187, "macro/f1_ci": {"90": [0.8124381534173242, 0.8415633373861329], "95": [0.8091911900905925, 0.845009292494298]}, "macro/recall": 0.8257668793195109, "macro/precision": 0.8278533291801137, "per_entity_metric": {"location": {"f1": 0.7862266857962696, "f1_ci": {"90": [0.7537985559174355, 0.8180537895612209], "95": [0.747742347424999, 0.8240248027848741]}, "precision": 0.7896253602305475, "recall": 0.7828571428571428}, "organization": {"f1": 0.7770320656226697, "f1_ci": {"90": [0.754573031135531, 0.7996870719423494], "95": [0.7482457440846576, 0.8046029849290348]}, "precision": 0.7707100591715976, "recall": 0.7834586466165413}, "person": {"f1": 0.9170638703527169, "f1_ci": {"90": [0.8987457734702097, 0.9367336323051129], "95": [0.8954628546358585, 0.940273320608835]}, "precision": 0.9232245681381958, "recall": 0.9109848484848485}}}
eval/metric_span.json ADDED
@@ -0,0 +1 @@
+ {"micro/f1": 0.8713961775186264, "micro/f1_ci": {"90": [0.8592197178260848, 0.8844864406930985], "95": [0.8564147820469139, 0.8869424677617551]}, "micro/recall": 0.8716785482825664, "micro/precision": 0.8711139896373057, "macro/f1": 0.8713961775186264, "macro/f1_ci": {"90": [0.8592197178260848, 0.8844864406930985], "95": [0.8564147820469139, 0.8869424677617551]}, "macro/recall": 0.8716785482825664, "macro/precision": 0.8711139896373057}
eval/prediction.validation.json ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:90c2023335908809242c4b9207c65063d8f9cceca272df44d6876de33fb697d4
- size 1736209327
+ oid sha256:a34eb4892bb85383e6dc32136557291dacfd2866545b271e324547a950b08a21
+ size 1736214831
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
@@ -4,7 +4,7 @@
  "do_lower_case": false,
  "eos_token": "[SEP]",
  "mask_token": "[MASK]",
- "name_or_path": "tner_ckpt/ttc_deberta_v3_large/best_model",
+ "name_or_path": "microsoft/deberta-v3-large",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "sp_model_kwargs": {},
trainer_config.json ADDED
@@ -0,0 +1 @@
+ {"dataset": ["tner/ttc"], "dataset_split": "train", "dataset_name": null, "local_dataset": null, "model": "microsoft/deberta-v3-large", "crf": true, "max_length": 128, "epoch": 15, "batch_size": 16, "lr": 1e-05, "random_seed": 42, "gradient_accumulation_steps": 4, "weight_decay": 1e-07, "lr_warmup_step_ratio": 0.1, "max_grad_norm": 10.0}