---
license: cc-by-4.0
language:
- en
datasets:
- CoNLL2003/AIDA
- Wikipedia
- sshavara/AIDA_testc
tags:
- SpEL
- Entity Linking
- Structured Prediction
widget:
- text: "Leicestershire beat Somerset by an innings and 39 runs in two days."
---

## SpEL (Structured prediction for Entity Linking)
This SpEL model is finetuned on English Wikipedia as well as the training portion of CoNLL2003/AIDA. 
It was introduced in the paper [SpEL: Structured Prediction for Entity Linking (EMNLP 2023)](https://arxiv.org/abs/2310.14684). 
The code and data are available in [this repository](https://github.com/shavarani/SpEL).

### Usage
The following snippet demonstrates how SpEL can be used to generate subword-level, word-level, and phrase-level annotations for a sentence.

```python
# download SpEL from https://github.com/shavarani/SpEL
from transformers import AutoTokenizer
from spel.model import SpELAnnotator, dl_sa
from spel.configuration import device
from spel.utils import get_subword_to_word_mapping
from spel.span_annotation import WordAnnotation, PhraseAnnotation
finetuned_after_step = 4  # which finetuning-step checkpoint to load (see the SpEL repository)
sentence = "Grace Kelly by Mika reached the top of the UK Singles Chart in 2007."
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# ############################################# LOAD SpEL #############################################################
spel = SpELAnnotator()
spel.init_model_from_scratch(device=device)
if finetuned_after_step == 3:
    spel.shrink_classification_head_to_aida(device)
spel.load_checkpoint(None, device=device, load_from_torch_hub=True, finetuned_after_step=finetuned_after_step)
# ############################################# RUN SpEL ##############################################################
inputs = tokenizer(sentence, return_tensors="pt")
token_offsets = list(zip(inputs.encodings[0].tokens, inputs.encodings[0].offsets))
subword_annotations = spel.annotate_subword_ids(inputs.input_ids, k_for_top_k_to_keep=10, token_offsets=token_offsets)
# #################################### CREATE WORD-LEVEL ANNOTATIONS ##################################################
# drop the special-token positions before mapping subwords to words
tokens_offsets = token_offsets[1:-1]
subword_annotations = subword_annotations[1:]
for sa in subword_annotations:
    sa.idx2tag = dl_sa.mentions_itos
word_annotations = [WordAnnotation(subword_annotations[m[0]:m[1]], tokens_offsets[m[0]:m[1]])
                    for m in get_subword_to_word_mapping(inputs.tokens(), sentence)]
# ################################## CREATE PHRASE-LEVEL ANNOTATIONS ##################################################
phrase_annotations = []
for w in word_annotations:
    if not w.annotations:
        continue
    if phrase_annotations and phrase_annotations[-1].resolved_annotation == w.resolved_annotation:
        phrase_annotations[-1].add(w)
    else:
        phrase_annotations.append(PhraseAnnotation(w))
# ################################## PRINT OUT THE CREATED ANNOTATIONS ################################################
for phrase_annotation in phrase_annotations:
    print(dl_sa.mentions_itos[phrase_annotation.resolved_annotation])
```
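The phrase-level loop above merges consecutive word annotations that resolve to the same entity. Isolated from SpEL's `WordAnnotation`/`PhraseAnnotation` classes (which carry more state in the real code), the merging idea can be sketched with plain tuples; this is an illustrative simplification, not SpEL's actual API:

```python
def merge_words_to_phrases(word_labels):
    """Merge consecutive words whose resolved entity label is identical.

    word_labels: list of (word, label) pairs, where label is None for words
    with no entity annotation. Returns a list of (phrase, label) pairs.
    """
    phrases = []
    for word, label in word_labels:
        if label is None:
            continue  # skip unannotated words, like `if not w.annotations`
        if phrases and phrases[-1][1] == label:
            # same entity as the previous phrase: extend it
            phrases[-1] = (phrases[-1][0] + " " + word, label)
        else:
            phrases.append((word, label))
    return phrases

print(merge_words_to_phrases([
    ("Grace", "Grace_Kelly_(song)"), ("Kelly", "Grace_Kelly_(song)"),
    ("by", None), ("Mika", "Mika_(singer)"),
]))
```

For the example sentence, this groups "Grace" and "Kelly" into one phrase linked to the song entity, while "by" (unannotated) is skipped.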

## Evaluation Results
Entity Linking evaluation results of *SpEL* compared to those reported in the literature on the AIDA test sets:

| Approach                                                        | EL Micro-F1<br/>test-a | EL Micro-F1<br/>test-b |            #params<br/>on GPU            | speed<br/>sec/doc |
|-----------------------------------------------------------------|:----------------------:|:----------------------:|:----------------------------------------:|:-----------------:|
| Hoffart et al. (2011)                                           |          72.4          |          72.8          |                    -                     |         -         |
| Kolitsas et al. (2018)                                          |          89.4          |          82.4          |                  330.7M                  |       0.097       |
| Broscheit (2019)                                                |          86.0          |          79.3          |                  495.1M                  |       0.613       |
| Peters et al. (2019)                                            |          82.1          |          73.1          |                    -                     |         -         |
| Martins et al. (2019)                                           |          85.2          |          81.9          |                    -                     |         -         |
| van Hulst et al. (2020)                                         |          83.3          |          82.4          |                  19.0M                   |       0.337       |
| Févry et al. (2020)                                             |          79.7          |          76.7          |                    -                     |         -         |
| Poerner et al. (2020)                                           |          90.8          |          85.0          |                  131.1M                  |         -         |
| Kannan Ravi et al. (2021)                                       |           -            |          83.1          |                    -                     |         -         |
| De Cao et al. (2021b)                                           |           -            |          83.7          |                  406.3M                  |      40.969       |
| De Cao et al. (2021a)<br/>(no mention-specific candidate set)   |          61.9          |          49.4          |                  124.8M                  |       0.268       |
| De Cao et al. (2021a)<br/>(using PPRforNED candidate set)       |          90.1          |          85.5          |                  124.8M                  |       0.194       |
| Mrini et al. (2022)                                             |           -            |          85.7          |  (train) 811.5M<br/>(test) 406.2M        |         -         |
| Zhang et al. (2022)                                             |           -            |          85.8          |                 1004.3M                  |         -         |
| Feng et al. (2022)                                              |           -            |          86.3          |                  157.3M                  |         -         |
| <hr/>                                                           |         <hr/>          |         <hr/>          |                  <hr/>                   |       <hr/>       |
| **SpEL-base** (no mention-specific candidate set)               |          91.3          |          85.5          |                  128.9M                  |       0.084       |
| **SpEL-base** (KB+Yago candidate set)                           |          90.6          |          85.7          |                  128.9M                  |       0.158       |
| **SpEL-base** (PPRforNED candidate set)<br/>(context-agnostic)  |          91.7          |          86.8          |                  128.9M                  |       0.153       |
| **SpEL-base** (PPRforNED candidate set)<br/>(context-aware)     |          92.7          |          88.1          |                  128.9M                  |       0.156       |
| **SpEL-large** (no mention-specific candidate set)              |          91.6          |          85.8          |                  361.1M                  |       0.273       |
| **SpEL-large** (KB+Yago candidate set)                          |          90.8          |          85.7          |                  361.1M                  |       0.267       |
| **SpEL-large** (PPRforNED candidate set)<br/>(context-agnostic) |          92.0          |          87.3          |                  361.1M                  |       0.268       |
| **SpEL-large** (PPRforNED candidate set)<br/>(context-aware)    |          92.9          |          88.6          |                  361.1M                  |       0.267       |
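The EL Micro-F1 scores in the table aggregate true positives, false positives, and false negatives over all mentions in all documents before computing precision and recall. A minimal sketch of this metric over sets of (start, end, entity) predictions follows; it is an illustration of micro-averaged F1 in general, not the official evaluation script:

```python
def micro_f1(gold_docs, pred_docs):
    """Micro-averaged F1 over entity mentions.

    gold_docs, pred_docs: parallel lists, one set of
    (span_start, span_end, entity_id) triples per document.
    """
    tp = sum(len(g & p) for g, p in zip(gold_docs, pred_docs))
    fp = sum(len(p - g) for g, p in zip(gold_docs, pred_docs))
    fn = sum(len(g - p) for g, p in zip(gold_docs, pred_docs))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

A prediction counts as correct only when both the mention span and the linked entity match the gold annotation exactly.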

----

## Citation
If you use the SpEL finetuned models or data, please cite our paper:

```
@inproceedings{shavarani2023spel,
  title={Sp{EL}: Structured Prediction for Entity Linking},
  author={Shavarani, Hassan S.  and  Sarkar, Anoop},
  booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing},
  year={2023},
  url={https://arxiv.org/abs/2310.14684}
}
```