## Introduction
This is a zero-shot relation extractor based on the paper [Exploring the zero-shot limit of FewRel](https://www.aclweb.org/anthology/2020.coling-main.124).

## Installation
```bash
$ pip install zero-shot-re
```

## Run the Extractor
```python
from transformers import AutoModel, AutoTokenizer
from zero_shot_re import RelationExtractor

model = AutoModel.from_pretrained("fractalego/fewrel-zero-shot")
tokenizer = AutoTokenizer.from_pretrained("fractalego/fewrel-zero-shot")

relations = ['noble title', 'founding date', 'occupation of a person']
extractor = RelationExtractor(model, tokenizer, relations)
ranked_rels = extractor.rank(text='John Smith received an OBE', head='John Smith', tail='OBE')
print(ranked_rels)
```
with the following results:
```python
[('noble title', 0.9690611883997917),
 ('occupation of a person', 0.0012609362602233887),
 ('founding date', 0.00024014711380004883)]
```
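Since `rank` scores every candidate relation (ordered as in the output above, highest first), the top prediction for a sentence is simply the first entry. The sketch below assumes the same model and relation set as the snippet above and shows one way to apply the extractor to several (text, head, tail) triples; the second sentence, about "Acme Corp", is a hypothetical example added purely for illustration.

```python
from transformers import AutoModel, AutoTokenizer
from zero_shot_re import RelationExtractor

# Same setup as in the snippet above.
model = AutoModel.from_pretrained("fractalego/fewrel-zero-shot")
tokenizer = AutoTokenizer.from_pretrained("fractalego/fewrel-zero-shot")
relations = ['noble title', 'founding date', 'occupation of a person']
extractor = RelationExtractor(model, tokenizer, relations)

# Rank the candidate relations for each (text, head, tail) triple and keep
# only the top-scoring one. The second sentence is a made-up example.
examples = [
    ('John Smith received an OBE', 'John Smith', 'OBE'),
    ('Acme Corp was founded in 1947', 'Acme Corp', '1947'),
]
for text, head, tail in examples:
    ranked = extractor.rank(text=text, head=head, tail=tail)
    top_relation, top_score = ranked[0]  # list is ordered by score, highest first
    print(f'{text} -> {top_relation} ({top_score:.3f})')
```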
## Accuracy
The results reported in the paper are:

| Model                   | 0-shot 5-way | 0-shot 10-way |
|-------------------------|--------------|---------------|
| (1) DistilBERT          | 70.1±0.5     | 55.9±0.6      |
| (2) BERT Large          | 80.8±0.4     | 69.6±0.5      |
| (3) DistilBERT + SQuAD  | 81.3±0.4     | 70.0±0.2      |
| (4) BERT Large + SQuAD  | 86.0±0.6     | 76.2±0.4      |

This version uses the (4) BERT Large + SQuAD model.

## Cite as
```bibtex
@inproceedings{cetoli-2020-exploring,
    title = "Exploring the zero-shot limit of {F}ew{R}el",
    author = "Cetoli, Alberto",
    booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
    month = dec,
    year = "2020",
    address = "Barcelona, Spain (Online)",
    publisher = "International Committee on Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.coling-main.124",
    doi = "10.18653/v1/2020.coling-main.124",
    pages = "1447--1451",
    abstract = "This paper proposes a general purpose relation extractor that uses Wikidata descriptions to represent the relation{'}s surface form. The results are tested on the FewRel 1.0 dataset, which provides an excellent framework for training and evaluating the proposed zero-shot learning system in English. This relation extractor architecture exploits the implicit knowledge of a language model through a question-answering approach.",
}
```