---

language: 
  - en
tags:
- fast
- coreference-resolution
license: mit
datasets:
- multi_news
- ontonotes
metrics:
- CoNLL
task_categories:
- coreference-resolution
model-index:
- name: biu-nlp/f-coref
  results:
  - task:
      type: coreference-resolution
      name: coreference-resolution
    dataset:
      name: ontonotes
      type: coreference
    metrics:
    - name: Avg. F1
      type: CoNLL
      value: 78.5

---

## F-Coref: Fast, Accurate and Easy to Use Coreference Resolution

[F-Coref](https://arxiv.org/abs/2209.04280) processes 2.8K OntoNotes documents in 25 seconds on a V100 GPU, compared to 6 minutes for the [LingMess](https://arxiv.org/abs/2205.12644) model and 12 minutes for the popular AllenNLP coreference model, with only a modest drop in accuracy.
This speed is achieved by distilling a compact model from the LingMess model, combined with an efficient batching implementation using a technique we call leftovers batching.
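To give a rough intuition for the leftovers batching idea, here is a toy sketch (an illustrative simplification, not the actual F-Coref implementation): documents are split into fixed-length segments, so full segments can be batched with no padding, while the shorter remainder segments ("leftovers") from different documents are grouped together and padded only to the longest leftover in their batch.

```python
def leftovers_batching(docs, seg_len=512, batch_size=4):
    """Toy sketch: split each document's token list into fixed-size
    segments, and batch the short 'leftover' tails separately so that
    padding within a batch is minimized."""
    full_segments, leftovers = [], []
    for doc_id, tokens in docs:
        for start in range(0, len(tokens), seg_len):
            seg = tokens[start:start + seg_len]
            (full_segments if len(seg) == seg_len else leftovers).append((doc_id, seg))
    # Full segments all share length seg_len, so their batches need no padding.
    batches = [full_segments[i:i + batch_size]
               for i in range(0, len(full_segments), batch_size)]
    # Sort leftovers by length so each batch pads only to a similar maximum.
    leftovers.sort(key=lambda item: len(item[1]), reverse=True)
    batches += [leftovers[i:i + batch_size]
                for i in range(0, len(leftovers), batch_size)]
    return batches

docs = [("a", list(range(1100))), ("b", list(range(700))), ("c", list(range(300)))]
batches = leftovers_batching(docs, seg_len=512, batch_size=4)
```

In this example the three full 512-token segments form one padding-free batch, and the three tails (of lengths 76, 188, and 300) form a second, lightly padded batch.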

Please check the [official repository](https://github.com/shon-otmazgin/fastcoref) for more details and updates.
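The distillation side can be illustrated with the standard temperature-scaled knowledge-distillation objective (a generic sketch of the technique, not F-Coref's actual training code; the function names here are illustrative):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of logits; a higher
    # temperature produces a softer (more uniform) distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student
    # distributions: the compact student is trained to match the
    # larger teacher's output distribution.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

A student that reproduces the teacher's logits incurs zero loss; any divergence yields a positive penalty that gradients push down during training.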

#### Experiments

| Model                 | Runtime | Memory  |
|-----------------------|---------|---------|
| [Joshi et al. (2020)](https://arxiv.org/abs/1907.10529)    | 12:06 | 27.4 |
| [Otmazgin et al. (2022)](https://arxiv.org/abs/2205.12644) | 06:43 | 4.6 |
|      + Batching                                            | 06:00 | 6.6 |
| [Kirstain et al. (2021)](https://arxiv.org/abs/2101.00434) | 04:37 | 4.4 |
| [Dobrovolskii (2021)](https://arxiv.org/abs/2109.04127)    | 03:49 | 3.5 |
| [F-Coref](https://arxiv.org/abs/2209.04280)                | 00:45 | 3.3 |
|      + Batching                                            | 00:35 | 4.5 |
|           + Leftovers batching                             | 00:25 | 4.0 |

Inference time (min:sec) and memory (GiB) for each model on 2.8K documents, averaged over 3 runs. Hardware: NVIDIA Tesla V100 SXM2.

### Citation

```bibtex
@inproceedings{Otmazgin2022FcorefFA,
  title={F-coref: Fast, Accurate and Easy to Use Coreference Resolution},
  author={Shon Otmazgin and Arie Cattan and Yoav Goldberg},
  booktitle={AACL},
  year={2022}
}
```
[F-coref: Fast, Accurate and Easy to Use Coreference Resolution](https://aclanthology.org/2022.aacl-demo.6) (Otmazgin et al., AACL-IJCNLP 2022)