---
language: en
thumbnail: https://github.com/studio-ousia/luke/raw/master/resources/luke_logo.png
tags:
- luke
- named entity recognition
- entity typing
- relation classification
- question answering
license: apache-2.0
---

## LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention

**LUKE** (**L**anguage **U**nderstanding with **K**nowledge-based
**E**mbeddings) is a new pre-trained contextualized representation of words and
entities based on the transformer. LUKE treats words and entities in a given
text as independent tokens and outputs contextualized representations of them.
It adopts an entity-aware self-attention mechanism, an extension of the
transformer's self-attention mechanism that considers the types of tokens
(words or entities) when computing attention scores.

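As a rough illustration of that mechanism, here is a minimal single-head sketch
in PyTorch (a hypothetical helper, not the official implementation): the key and
value projections are shared across token types, while the query projection is
selected from four matrices according to the types of the attending and
attended-to tokens, following the paper's Q_w2w, Q_w2e, Q_e2w, and Q_e2e.

```python
import torch

def entity_aware_attention(x, is_entity, W_q, W_k, W_v):
    """Single-head sketch of entity-aware self-attention.

    x: (n, d) token vectors; is_entity: length-n booleans;
    W_q: dict mapping a (query_type, key_type) pair ("w" or "e")
    to a (d, d) query matrix; W_k, W_v: shared (d, d) projections.
    """
    n, d = x.shape
    keys, values = x @ W_k, x @ W_v
    t = ["e" if e else "w" for e in is_entity]
    scores = torch.empty(n, n)
    for i in range(n):
        for j in range(n):
            # The query matrix depends on the types (word/entity) of
            # the attending token i and the attended-to token j.
            q_i = x[i] @ W_q[(t[i], t[j])]
            scores[i, j] = (q_i @ keys[j]) / d ** 0.5
    return torch.softmax(scores, dim=-1) @ values
```
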
LUKE achieves state-of-the-art results on five popular NLP benchmarks:
**[SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/)** (extractive
question answering),
**[CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/)** (named entity
recognition), **[ReCoRD](https://sheng-z.github.io/ReCoRD-explorer/)**
(cloze-style question answering),
**[TACRED](https://nlp.stanford.edu/projects/tacred/)** (relation
classification), and
**[Open Entity](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html)**
(entity typing).

Please check the [official repository](https://github.com/studio-ousia/luke) for
more details and updates.

This is the LUKE base model, with 12 hidden layers and a hidden size of 768. The
model has 253M parameters in total and was trained on the December 2018 version
of Wikipedia.

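As a quick-start sketch, the model can be used through the LUKE integration in
[Transformers](https://github.com/huggingface/transformers) (`LukeTokenizer` and
`LukeModel`). The checkpoint identifier `studio-ousia/luke-base` below is an
assumption based on the official repository name; adjust it to this model's
actual Hub identifier if it differs.

```python
# Minimal usage sketch, assuming a transformers release that ships the LUKE
# integration and that this checkpoint is published on the Hub as
# "studio-ousia/luke-base" (an assumption; adjust if needed).
from transformers import LukeModel, LukeTokenizer

tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
model = LukeModel.from_pretrained("studio-ousia/luke-base")

text = "Beyoncé lives in Los Angeles."
# Character-level span of the entity mention "Beyoncé" in `text`.
entity_spans = [(0, 7)]

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)

word_states = outputs.last_hidden_state            # contextualized word tokens
entity_states = outputs.entity_last_hidden_state   # contextualized entity tokens
```
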
### Experimental results

The experimental results are as follows:

| Task                           | Dataset                                                                       | Metric | LUKE-large        | LUKE-base | Previous SOTA                                                              |
| ------------------------------ | ----------------------------------------------------------------------------- | ------ | ----------------- | --------- | -------------------------------------------------------------------------- |
| Extractive Question Answering  | [SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/)                     | EM/F1  | **90.2**/**95.4** | 86.1/92.3 | 89.9/95.1 ([Yang et al., 2019](https://arxiv.org/abs/1906.08237))         |
| Named Entity Recognition       | [CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/)                  | F1     | **94.3**          | 93.3      | 93.5 ([Baevski et al., 2019](https://arxiv.org/abs/1903.07785))           |
| Cloze-style Question Answering | [ReCoRD](https://sheng-z.github.io/ReCoRD-explorer/)                          | EM/F1  | **90.6**/**91.2** | -         | 83.1/83.7 ([Li et al., 2019](https://www.aclweb.org/anthology/D19-6011/)) |
| Relation Classification        | [TACRED](https://nlp.stanford.edu/projects/tacred/)                           | F1     | **72.7**          | -         | 72.0 ([Wang et al., 2020](https://arxiv.org/abs/2002.01808))              |
| Fine-grained Entity Typing     | [Open Entity](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html) | F1     | **78.2**          | -         | 77.6 ([Wang et al., 2020](https://arxiv.org/abs/2002.01808))              |

### Citation

If you find LUKE useful for your work, please cite the following paper:

```latex
@inproceedings{yamada2020luke,
  title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention},
  author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto},
  booktitle={EMNLP},
  year={2020}
}
```