Columns: id (stringlengths 32–33), x (stringlengths 41–1.75k), y (stringlengths 4–39)
e7c947a02bb0e81d6b6b4b9da74024_5
These works implicitly define what good gender debiasing is: according to <cite>Bolukbasi et al. (2016b)</cite>, there is no gender bias if each non-explicitly gendered word in the vocabulary is equidistant from both elements of all explicitly gendered pairs.
background
e7c947a02bb0e81d6b6b4b9da74024_6
We refer to the word embeddings of the previous works as HARD-DEBIASED<cite> (Bolukbasi et al., 2016b)</cite> and GN-GLOVE (gender-neutral GloVe).
similarities
e7c947a02bb0e81d6b6b4b9da74024_7
Unless otherwise specified, we follow <cite>Bolukbasi et al. (2016b)</cite> and use a reduced version of the vocabulary for both word embeddings: we take the most frequent 50,000 words and phrases and remove words with upper-case letters, digits, or punctuation, and words longer than 20 characters.
extends differences
e7c947a02bb0e81d6b6b4b9da74024_8
Male- and female-biased words cluster together. We take the most biased words in the vocabulary according to the original bias (500 male-biased and 500 female-biased), and cluster them. We use the embeddings provided by <cite>Bolukbasi et al. (2016b)</cite> at https://github.com/tolga-b/debiaswe and by Zhao et al. (2018) at https://github.com/uclanlp/gn_glove.
similarities uses
e7c947a02bb0e81d6b6b4b9da74024_9
Professions: We consider the list of professions used in <cite>Bolukbasi et al. (2016b)</cite> and Zhao et al. (2018) in light of the neighbours-based bias definition.
uses similarities
e7f972baa73e7ababa28eded3adad9_0
Numerous initiatives, such as the Digital Corpus of Sanskrit, GRETIL, The Sanskrit Library, and others from the Sanskrit Linguistic and Computational Linguistic community, are fine examples of such efforts (Goyal et al., 2012; <cite>Krishna et al., 2017)</cite>.
background
e7f972baa73e7ababa28eded3adad9_1
Our approach will help scale the segmentation process, in contrast to the knowledge-intensive processes that pose challenges in current systems <cite>(Krishna et al., 2017)</cite>.
extends
e7f972baa73e7ababa28eded3adad9_2
To further catalyse research in word segmentation for Sanskrit, <cite>Krishna et al. (2017)</cite> released a dataset for the word segmentation task. <cite>The work</cite> releases a dataset of 119,000 sentences in Sanskrit along with the lexical and morphological analysis from a shallow parser. <cite>The work</cite> emphasises the need to predict not just the inflected word form but also the associated morphological information of the word.
background
e7f972baa73e7ababa28eded3adad9_3
The additional information will be beneficial in further processing of Sanskrit texts, such as dependency parsing or summarisation <cite>(Krishna et al., 2017)</cite>. So far, no system successfully predicts the morphological information of the words in addition to the final word form.
future_work
e7f972baa73e7ababa28eded3adad9_4
In our case we use 105,000 parallel strings from the Digital Corpus of Sanskrit as released in <cite>Krishna et al. (2017)</cite> .
uses
e7f972baa73e7ababa28eded3adad9_5
We used a dataset of 107,000 sentences from the <cite>Sanskrit Word Segmentation Dataset</cite> <cite>(Krishna et al., 2017)</cite> .
uses
e7f972baa73e7ababa28eded3adad9_6
The systems by Krishna et al. (2016) and <cite>Krishna et al. (2017)</cite> assume that the parser by Goyal et al. (2012) identifies all the possible candidate chunks.
background
e7f972baa73e7ababa28eded3adad9_7
The systems by Krishna et al. (2016) and <cite>Krishna et al. (2017)</cite> assume that the parser by Goyal et al. (2012) identifies all the possible candidate chunks. Our proposed model is built with precisely one purpose in mind, which is to predict the final word-forms in a given sequence.
differences
e7f972baa73e7ababa28eded3adad9_8
<cite>Krishna et al. (2017)</cite> states that it is desirable to predict the morphological information of a word along with the final word-form, as the information will be helpful in further processing of Sanskrit.
background
e7f972baa73e7ababa28eded3adad9_9
<cite>Krishna et al. (2017)</cite> states that it is desirable to predict the morphological information of a word along with the final word-form, as the information will be helpful in further processing of Sanskrit. The segmentation task is seen as a means and not an end in itself. Here, we overlook this aspect and see the segmentation task as an end in itself.
differences
e7f972baa73e7ababa28eded3adad9_10
Given the importance of morphological segmentation in morphologically rich languages such as Hebrew and Arabic (Seeker and Çetinoğlu, 2015), the same applies to the morphologically rich Sanskrit as well <cite>(Krishna et al., 2017)</cite>.
background
e7f972baa73e7ababa28eded3adad9_11
Given the importance of morphological segmentation in morphologically rich languages such as Hebrew and Arabic (Seeker and Çetinoğlu, 2015), the same applies to the morphologically rich Sanskrit as well <cite>(Krishna et al., 2017)</cite>. But we leave this for future work.
future_work
e831e058f208542af16c1ea236d2c9_0
These steps result in a 43.98% relative error reduction in F-score over an earlier best result in edited detection when punctuation is included in both training and testing data<cite> [Charniak and Johnson 2001]</cite>, and a 20.44% relative error reduction in F-score over the latest best result where punctuation is excluded from the training and testing data [Johnson and Charniak 2004].
differences
e831e058f208542af16c1ea236d2c9_1
Because of the availability of the Switchboard corpus [Godfrey et al. 1992] and other conversational telephone speech (CTS) corpora, there has been an increasing interest in improving the performance of identifying the edited regions for parsing disfluent sentences<cite> [Charniak and Johnson 2001</cite>, Johnson and Charniak 2004, Liu et al. 2005].
background
e831e058f208542af16c1ea236d2c9_2
These steps result in a significant improvement in F-score over the earlier best result reported in<cite> [Charniak and Johnson 2001]</cite> , where punctuation is included in both the training and testing data of the Switchboard corpus, and a significant error reduction in F-score over the latest best result [Johnson and Charniak 2004] , where punctuation is ignored in both the training and testing data of the Switchboard corpus.
differences
e831e058f208542af16c1ea236d2c9_3
We include the distributions with punctuation to match the baseline system reported in<cite> [Charniak and Johnson 2001]</cite>, where punctuation is included to identify the edited regions.
motivation similarities
e831e058f208542af16c1ea236d2c9_4
We take as our baseline system the work by<cite> [Charniak and Johnson 2001]</cite> .
uses
e831e058f208542af16c1ea236d2c9_6
We re-implement the boosting algorithm reported by<cite> [Charniak and Johnson 2001]</cite>.
uses
e831e058f208542af16c1ea236d2c9_7
In<cite> [Charniak and Johnson 2001]</cite> , identifying edited regions is considered as a classification problem, where each word is classified either as edited or normal.
background
e831e058f208542af16c1ea236d2c9_8
We relax the definition for rough copy, because more than 94% of all edits have both reparandum and repair, while the rough copy defined in<cite> [Charniak and Johnson 2001]</cite> only covers 77.66% of such instances.
differences
e831e058f208542af16c1ea236d2c9_9
Since the original code from<cite> [Charniak and Johnson 2001]</cite> is not available, we conducted our first experiment to replicate the result of their baseline system described in section 3.
motivation
e831e058f208542af16c1ea236d2c9_10
We used exactly the same training and testing data from the Switchboard corpus as in<cite> [Charniak and Johnson 2001]</cite>.
uses similarities
e831e058f208542af16c1ea236d2c9_11
These results are comparable with the results from <cite>[Charniak & Johnson 2001]</cite>, i.e., 95.2%, 67.8%, and 79.2% for precision, recall, and F-score, respectively.
similarities
e834dadbcf08cf14e476b5f5cbf79e_0
In particular, the memory network (Chien and Lin, 2018), neural variational learning (<cite>Serban et al., 2017</cite>; Chung et al., 2015), neural discrete representation (Jang et al., 2016; Maddison et al., 2016; van den Oord et al., 2017), recurrent ladder network (Rasmus et al., 2015; Prémont-Schwarz et al., 2017; Sønderby et al., 2016), stochastic neural network (Fraccaro et al., 2016; Goyal et al., 2017; Shabanian et al., 2017), Markov recurrent neural network (Venkatraman et al., 2017; Kuo and Chien, 2018), sequence GAN (Yu et al., 2017) and reinforcement learning (Tegho et al., 2017) are introduced in various deep models which open a window to more practical tasks, e.g. reading comprehension, sentence generation, dialogue systems, question answering and machine translation.
background
e92c6b44f4482ca868221bff551d67_0
Briefly, our method consists in augmenting a state-of-the-art statistical parser <cite>(Henderson, 2003)</cite> , whose architecture and properties make it particularly adaptive to new tasks.
extends
e92c6b44f4482ca868221bff551d67_1
Our approach maintains state-of-the-art results in parsing, while also reaching state-of-the-art results in function labelling, by suitably extending a Simple Synchrony Network (SSN) parser <cite>(Henderson, 2003)</cite> into a single integrated system.
extends
e92c6b44f4482ca868221bff551d67_2
We use a family of statistical parsers, the Simple Synchrony Network (SSN) parsers <cite>(Henderson, 2003)</cite> , which crucially do not make any explicit independence assumptions, and learn to smooth across rare feature combinations.
uses
e92c6b44f4482ca868221bff551d67_3
SSN parsers, on the other hand, do not state any explicit independence assumptions: they induce a finite history representation of an unbounded sequence of moves, so that the representation of a move i − 1 is included in the inputs to the representation of the next move i, as explained in more detail in <cite>(Henderson, 2003)</cite>.
background
e92c6b44f4482ca868221bff551d67_4
H03 indicates the model illustrated in <cite>(Henderson, 2003)</cite> .
uses
e92c6b44f4482ca868221bff551d67_5
All our models, as well as the parser described in <cite>(Henderson, 2003)</cite> , are run only once.
similarities
e92c6b44f4482ca868221bff551d67_6
<cite>(Henderson, 2003)</cite> tested the effect of larger input vocabulary on SSN performance by changing the frequency cut-off that selects the input tag-word pairs.
background
e92c6b44f4482ca868221bff551d67_7
Second, this interpretation of the results is confirmed by comparing different ways of enlarging the vocabulary size input to the SSN. <cite>(Henderson, 2003)</cite> tested the effect of larger input vocabulary on SSN performance by changing the frequency cut-off that selects the input tag-word pairs.
uses
e9404db1fbda5dd8c55a40711d06ec_0
The method is designed for intrinsic evaluation and extends the approach proposed in (<cite>Schnabel et al., 2015</cite>) .
extends
e9404db1fbda5dd8c55a40711d06ec_1
In (<cite>Schnabel et al., 2015</cite>), crowdsourcing-based evaluation was proposed for synonyms or a word relatedness task where six word embedding techniques were evaluated.
background
e9404db1fbda5dd8c55a40711d06ec_2
The <cite>crowdsourcing-based intrinsic evaluation</cite>, which tests embeddings for semantic relationships between words, focuses on a direct comparison of word embeddings with respect to individual queries.
background
e9404db1fbda5dd8c55a40711d06ec_3
Although <cite>the method</cite> is promising for evaluating different word embeddings, <cite>it</cite> has some shortcomings.
motivation
e9404db1fbda5dd8c55a40711d06ec_4
Specifically, <cite>it</cite> does not explicitly consider word context.
motivation
e9404db1fbda5dd8c55a40711d06ec_5
As <cite>the approach</cite> relies on human interpretation of words, it is important to take into account how humans interpret or understand the meaning of a word.
background
e9404db1fbda5dd8c55a40711d06ec_6
Thus, if <cite>the approach</cite> is based only on the word without its context, it will be difficult for humans to understand the meaning of a particular word, and it could result in word sense ambiguity (WSA).
motivation
e9404db1fbda5dd8c55a40711d06ec_7
In this paper, we show the consequences of the lack of word context in (<cite>Schnabel et al., 2015</cite>), and we discuss how to address the resulting challenge.
uses motivation
e9404db1fbda5dd8c55a40711d06ec_8
<cite>The method</cite> in (<cite>Schnabel et al., 2015</cite>) started by creating a query inventory which is a pre-selected set of query terms and semantically related target words.
background
e9404db1fbda5dd8c55a40711d06ec_9
Although the experiments in (<cite>Schnabel et al., 2015</cite>) incorporated participants with adequate knowledge of English, the ambiguity is inherent in the language.
background
e9404db1fbda5dd8c55a40711d06ec_10
Also, the evaluated word embedding techniques in (<cite>Schnabel et al., 2015</cite>), except TSCCA (Dhillon et al., 2015), generate one vector for each word, and that makes comparisons between two related words from two embedding techniques difficult.
background
e9404db1fbda5dd8c55a40711d06ec_11
Before we introduce our extensions in the next section, we investigate how (<cite>Schnabel et al., 2015</cite>) accommodates word sense ambiguity.
uses
e9404db1fbda5dd8c55a40711d06ec_12
To achieve such an evaluation, we have first extended the work of (<cite>Schnabel et al., 2015</cite>) to include sentential context to avoid word sense ambiguity faced by a human tester.
extends
e9404db1fbda5dd8c55a40711d06ec_13
We then extended <cite>the method</cite> further so that it is more suitable to evaluate embedding techniques designed for polysemous words with regard to their ability to embed diverse senses.
extends
e9404db1fbda5dd8c55a40711d06ec_14
Our chief idea is to extend the work of (<cite>Schnabel et al., 2015</cite>) by adding a context sentence for each query term.
extends
e9404db1fbda5dd8c55a40711d06ec_15
In fact, (<cite>Schnabel et al., 2015</cite>) have already considered 'I don't know the meaning of one (or several) of the words'; however, when the context is in place, there may be a situation when none of the embeddings make a good match for the query term, and in that case 'None of the above' is more appropriate.
motivation background
e9404db1fbda5dd8c55a40711d06ec_16
Note that this is not needed in (<cite>Schnabel et al., 2015</cite>) where query words are not annotated.
background
e9404db1fbda5dd8c55a40711d06ec_17
At the end of Sec. 2.2, we explained how word sense ambiguity is accommodated in (<cite>Schnabel et al., 2015</cite>) .
background
e9404db1fbda5dd8c55a40711d06ec_18
We argued that <cite>their</cite> evaluation was in expectation with respect to subjective preferences of the Turkers.
uses background
e9404db1fbda5dd8c55a40711d06ec_19
In this paper, a crowdsourcing-based word embedding evaluation technique of (<cite>Schnabel et al., 2015</cite>) was extended to provide data-driven treatment of word sense ambiguity.
extends
e9404db1fbda5dd8c55a40711d06ec_20
The method of (<cite>Schnabel et al., 2015</cite>) relies on the user's subjective and knowledge-dependent ability to select 'preferred' meanings, whereas our method deals with this problem by selecting explicit contexts for words.
differences
e9779b09826d709f8851550d958df7_0
These corpora, as well as deep learning models, lead to contributions in multilingual language grounding and learning of shared and multimodal representations with neural networks [4, 7, <cite>8,</cite> 9, 10, 11, 12, 13] .
background
e9779b09826d709f8851550d958df7_1
This preliminary result, in line with previous findings of<cite> [8]</cite> , confirms that neural speech-image models can capture a cross-lingual semantic signal, a first step in the perspective of learning speech-to-speech translation systems without text supervision.
similarities
e9779b09826d709f8851550d958df7_2
We have seen in the previous section that attention focuses on nouns, and Table 2 suggests that these nouns correspond to the main concept of the paired image. To confirm this trend, we experiment on a cross-lingual speech-to-speech retrieval task using images as pivots. This possibility was introduced in<cite> [8]</cite>, but required jointly or alternately training two speech encoders within the same architecture and a parallel bilingual speech dataset, while we experiment with separately trained models for both languages.
extends differences
e9779b09826d709f8851550d958df7_3
In<cite> [8]</cite> , a parallel corpus was needed as the loss functions adopted try to minimise either the distance between captions in two languages or the distance between captions in two languages and the associated image as pivot.
background
e9779b09826d709f8851550d958df7_4
We evaluated our approach on 1k captions of our test corpus to be comparable with<cite> [8]</cite> .
similarities uses
e9779b09826d709f8851550d958df7_5
For comparison, we report<cite> [8]</cite> 's results on English to Hindi (HI) and Hindi to English speech-to-speech retrieval.
similarities
e9779b09826d709f8851550d958df7_6
When a spoken caption in the source language is paired with image I, we assess the ability of our approach to rank the matching spoken caption in language tgt paired with image I in the top 1, 5, and 10 results and give its median rank r. We report our results in Table 4, as well as results from<cite> [8]</cite>, who performed speech-to-speech retrieval using crowd-sourced spoken captions in English and Hindi.
similarities
e9779b09826d709f8851550d958df7_7
Nevertheless, it is also important to mention that<cite> [8]</cite> experimented on real speech with multiple speakers while we used synthetic speech with only one voice.
differences
e9779b09826d709f8851550d958df7_8
ces so that there would be only one target caption for each query in order to compare our results with<cite> [8]</cite> .
future_work
e99193f62a8f3a9e46dee3cadd786f_0
Our dataset is a gold standard corpus of 1557 single-and multi-word disorder annotations <cite>(Ogren et al., 2008)</cite> .
uses
e99baf9c4b8650f29f410501c5165b_0
Some recent research on image captioning takes inspiration from neural machine translation (NMT) systems [15] [16] [17] <cite>[18]</cite> that successfully use sequence-to-sequence learning for translation.
background
e99baf9c4b8650f29f410501c5165b_1
To overcome these limitations for both machine translation and image captioning, some new models were proposed using the attention mechanism [3, 16,<cite> 18]</cite>.
motivation
e99baf9c4b8650f29f410501c5165b_2
Promising results have been published since attention was introduced in [16] and later refined in <cite>[18]</cite>.
background
e99baf9c4b8650f29f410501c5165b_3
We use the Luong style of attention <cite>[18]</cite>, a refined version of the attention mechanism; to the best of our knowledge, there has not been any published work reporting the performance of an image captioning model built following only the encoder-decoder pipeline with Luong-style attention.
extends differences
e99baf9c4b8650f29f410501c5165b_4
Inspired by the use of attention in sequence-to-sequence learning for machine translation [16,<cite> 18]</cite> , visual attention has been proved to be a very effective way of improving image captioning.
motivation
e99baf9c4b8650f29f410501c5165b_5
In our work, we model the distribution p(w^i_t | X^i, w^i_{1:t−1}; θ) with an LSTM cell wrapped with the Luong-style attention mechanism <cite>[18]</cite>.
similarities
e99baf9c4b8650f29f410501c5165b_6
In our model, we use the general form described in <cite>[18]</cite>.
extends differences
e99baf9c4b8650f29f410501c5165b_7
In this paper, we use an LSTM cell wrapped with the attention mechanism described in <cite>[18]</cite> to form R. LSTM [24] is a powerful form of recurrent neural network that is widely used now because of its ability to deal with issues like vanishing and exploding gradients.
extends differences
e9f7d339ccda101000b53d89da4e49_0
The previous method for AMR parsing takes a two-step approach: first identifying distinct concepts (nodes) in the AMR graph, then defining the dependency relations between those concepts<cite> (Flanigan et al., 2014)</cite>. (Table 1, statistics of the extracted NP data: Train 3504, Dev 463, Test 398.)
background
e9f7d339ccda101000b53d89da4e49_1
We obtain this alignment by using the rule-based alignment tool by <cite>Flanigan et al. (2014)</cite> .
uses
e9f7d339ccda101000b53d89da4e49_2
We adopt the method proposed by <cite>Flanigan et al. (2014)</cite> as our baseline, which is a two-step pipeline of a concept identification step and a relation identification step<cite> (Flanigan et al., 2014)</cite>.
uses
e9f7d339ccda101000b53d89da4e49_3
We use the implementation 2 of<cite> (Flanigan et al., 2014)</cite> as our baseline.
uses
e9f7d339ccda101000b53d89da4e49_4
The method by <cite>Flanigan et al. (2014)</cite> can only generate the concepts that appear in the training data. On the other hand, our method can generate concepts that do not appear in the training data using the concept generation rules LEMMA, DICT_PRED, and DICT_NOUN in Table 3.
differences
eab79e8aa2cbe6f3aeef0018129208_0
We propose solutions to enhance the Inside-Outside Recursive Neural Network (IORNN) reranker of<cite> Le and Zuidema (2014)</cite> .
uses
eab79e8aa2cbe6f3aeef0018129208_1
We propose solutions to enhance the Inside-Outside Recursive Neural Network (IORNN) reranker of<cite> Le and Zuidema (2014)</cite> . Replacing the original softmax function with a hierarchical softmax using a binary tree constructed by combining output of the Brown clustering algorithm and frequency-based Huffman codes, we significantly reduce the reranker's computational complexity.
extends
eab79e8aa2cbe6f3aeef0018129208_2
For dependency parsing, the inside-outside recursive neural net (IORNN) reranker proposed by<cite> Le and Zuidema (2014)</cite> is among the top systems, alongside Chen and Manning (2014)'s extremely fast transition-based parser employing a traditional feed-forward neural network.
background
eab79e8aa2cbe6f3aeef0018129208_3
We focus on how to enhance the IORNN reranker of<cite> Le and Zuidema (2014)</cite> by both reducing its computational complexity and increasing its accuracy.
extends
eab79e8aa2cbe6f3aeef0018129208_4
Second, by comparing a count-based model with their neural-net-based model on perplexity,<cite> Le and Zuidema (2014)</cite> suggested that predicting with neural nets is an effective solution to the problem of data sparsity.
background
eab79e8aa2cbe6f3aeef0018129208_5
We focus on how to enhance the IORNN reranker of<cite> Le and Zuidema (2014)</cite> by both reducing its computational complexity and increasing its accuracy.
uses
eab79e8aa2cbe6f3aeef0018129208_6
We firstly introduce the IORNN reranker <cite>(Le and Zuidema, 2014)</cite> .
uses
eab79e8aa2cbe6f3aeef0018129208_7
<cite>(Le and Zuidema, 2014)</cite>, where r = 0 if y is the first dependent of h; otherwise, it is the set of y's sisters generated before.
uses background
eab79e8aa2cbe6f3aeef0018129208_9
Solutions to enhance the IORNN reranker of<cite> Le and Zuidema (2014)</cite> were proposed. We showed that, by replacing the original softmax function with a hierarchical softmax, the reranker's computational complexity significantly decreases.
extends
eb5ef34dd9c3845cd27c33242d5316_0
One recent notable work (<cite>Ganea and Hofmann 2017</cite>) instead pioneers to rely on pre-trained entity embeddings, learnable context representation and differentiable joint inference stage to learn basic features and their combinations from scratch.
background
eb5ef34dd9c3845cd27c33242d5316_1
One recent notable work (<cite>Ganea and Hofmann 2017</cite>) instead pioneers to rely on pre-trained entity embeddings, learnable context representation and a differentiable joint inference stage to learn basic features and their combinations from scratch. <cite>Such model design</cite> allows to learn useful regularities in an end-to-end fashion and eliminates the need for extensive feature engineering. <cite>It</cite> also substantially outperforms previous approaches. (Figure example: "In Milwaukee, Marc Newfield homered off Jose Parra leading off the bottom of the 12th as the Brewers rallied for a 5-4 victory over the Minnesota Twins.")
background
eb5ef34dd9c3845cd27c33242d5316_2
Figure 1: One error case on the AIDA-CoNLL development set for the full model of <cite>Ganea and Hofmann (2017)</cite>.
background
eb5ef34dd9c3845cd27c33242d5316_3
Such state-of-the-art entity linking models <cite>(Ganea and Hofmann 2017</cite>; Le and Titov 2018) employ attention-based bag-of-words context model and pre-trained entity embeddings bootstrapped from word embeddings to assess topic level context compatibility.
background
eb5ef34dd9c3845cd27c33242d5316_4
Such state-of-the-art entity linking models <cite>(Ganea and Hofmann 2017</cite>; Le and Titov 2018) employ an attention-based bag-of-words context model and pre-trained entity embeddings bootstrapped from word embeddings to assess topic-level context compatibility. However, the latent entity type information in the immediate context of the mention is neglected. We suspect this may sometimes cause the models to link mentions to incorrect entities with incorrect type. To verify this, we conducted an error analysis of the well-known <cite>DeepED</cite> model <cite>(Ganea and Hofmann 2017)</cite> on the development set of AIDA-CoNLL (Hoffart et al. 2011), and found that more than half of <cite>their</cite> error cases fall into the category of type errors where the predicted entity's type is different from the golden entity's type, although some predictive contextual cue for them can be found in <cite>their</cite> local context.
motivation
eb5ef34dd9c3845cd27c33242d5316_5
To verify this, we conducted an error analysis of the well-known <cite>DeepED</cite> model <cite>(Ganea and Hofmann 2017)</cite> on the development set of AIDA-CoNLL (Hoffart et al. 2011), and found that more than half of <cite>their</cite> error cases fall into the category of type errors where the predicted entity's type is different from the golden entity's type, although some predictive contextual cue for them can be found in <cite>their</cite> local context. As shown in Fig. 1, the full model of <cite>Ganea and Hofmann (2017)</cite> incorrectly links the mention "Milwaukee" to the entity MILWAUKEE BREWERS.
uses
eb5ef34dd9c3845cd27c33242d5316_6
The reason why the local context model of <cite>Ganea and Hofmann (2017)</cite> could not capture such an apparent cue is twofold.
background
eb5ef34dd9c3845cd27c33242d5316_7
On the other hand, the pre-trained entity embedding of <cite>Ganea and Hofmann (2017)</cite> is not very sensitive to entity types.
background
eb5ef34dd9c3845cd27c33242d5316_8
So it is natural for the model of <cite>Ganea and Hofmann (2017)</cite> to make type errors when it is trained to fit such entity embeddings.
background
eb5ef34dd9c3845cd27c33242d5316_9
What's more, we integrate a BERT-based entity similarity feature into the local model of <cite>Ganea and Hofmann (2017)</cite> to better capture entity type information.
uses