{
"paper_id": "D19-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:59:08.043978Z"
},
"title": "Game Theory Meets Embeddings: a Unified Framework for Word Sense Disambiguation",
"authors": [
{
"first": "Rocco",
"middle": [],
"last": "Tripodi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Venice",
"location": {}
},
"email": "rocco.tripodi@unive.it"
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sapienza University of Rome",
"location": {}
},
"email": "navigli@di.uniroma1.it"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Game-theoretic models, thanks to their intrinsic ability to exploit contextual information, have shown to be particularly suited for the Word Sense Disambiguation task. They represent ambiguous words as the players of a non-cooperative game and their senses as the strategies that the players can select in order to play the games. The interaction among the players is modeled with a weighted graph and the payoff as an embedding similarity function, which the players try to maximize. The impact of the word and sense embedding representations in the framework was tested and analyzed extensively: experiments on standard benchmarks show state-of-art performances and different tests hint at the usefulness of using disambiguation to obtain contextualized word representations.",
"pdf_parse": {
"paper_id": "D19-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "Game-theoretic models, thanks to their intrinsic ability to exploit contextual information, have shown to be particularly suited for the Word Sense Disambiguation task. They represent ambiguous words as the players of a non-cooperative game and their senses as the strategies that the players can select in order to play the games. The interaction among the players is modeled with a weighted graph and the payoff as an embedding similarity function, which the players try to maximize. The impact of the word and sense embedding representations in the framework was tested and analyzed extensively: experiments on standard benchmarks show state-of-art performances and different tests hint at the usefulness of using disambiguation to obtain contextualized word representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word Sense Disambiguation (WSD), the task of linking the appropriate meaning from a sense inventory to words in a text, is an open problem in Natural Language Processing (NLP). It is particularly challenging because it deals with the semantics of words and, by their very nature, words are ambiguous and can be used with different meanings in different situations. Among the key tasks aimed at enabling Natural Language Understanding (Navigli, 2018) , WSD provides a basic, solid contribution since it is able to identify the intended meaning of the words in a sentence (Kim et al., 2010) .",
"cite_spans": [
{
"start": 434,
"end": 449,
"text": "(Navigli, 2018)",
"ref_id": "BIBREF37"
},
{
"start": 570,
"end": 588,
"text": "(Kim et al., 2010)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "WSD can be seen as a classification task in which words are the objects to be classified and senses are the classes into which the objects have to be classified (Navigli, 2009) ; therefore it is possible to use supervised learning techniques to solve the WSD problem. One drawback with this idea is that it requires large amounts of data that are difficult to obtain. Furthermore, in the WSD context, the production of annotated data is even more complicated and excessively timeconsuming compared to other tasks. This arises because of the variability in lexical use. Furthermore, the number of different meanings to be considered in a WSD task is in the order of thousands, whereas classical classification tasks in machine learning have considerably fewer classes.",
"cite_spans": [
{
"start": 161,
"end": 176,
"text": "(Navigli, 2009)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We decided to adopt a semi-supervised approach to overcome the knowledge acquisition bottleneck and innovate the strand of research introduced by Tripodi and Pelillo (2017) . These researchers developed a flexible game-theoretic WSD model that exploits word and sense similarity information. This combination of features allows the textual coherence to be maintained: in fact, in this model the disambiguation process is relational, and the sense assigned to a word must always be compatible with the senses of the words in the same text. It can be seen as a constraint satisfaction model which aims to find the best configuration of senses for the words in context. This is possible because the payoff function of the games is modeled in a way in which, when a game is played between two players, they are emboldened to select the senses that have the highest compatibility with the senses that the co-player is choosing. Another appealing feature of this model is that it offers the possibility to configure many components of the system: it is possible to use any word and sense representation; also, one can model the interactions of the players in different ways by exploiting word similarity information, the syntactic structure of the sentence and the importance provided by specific relations. Furthermore, it is possible to use different priors on the sense distributions and to use different game dynamics to find the equilibrium state of the model. Traditional WSD methods have only some of these properties.",
"cite_spans": [
{
"start": 146,
"end": 172,
"text": "Tripodi and Pelillo (2017)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main difference between our model and the model proposed by Tripodi and Pelillo (2017) is that they did not use state-of-the-art models for word and sense representations. They used word co-occurrence measures for word similarity and tfidf vectors for sense similarity, resulting in sparse graphs in which nodes can be disjoint or some semantic area is not covered. Instead, we are advocating the use of dense vectors, which provide a completely different perspective not only in terms of representation but also in terms of dynamics. Each player is involved in many more games and this affects the computation of the payoffs and the convergence of the system. The interaction among the players is defined in a different way and the priors are modeled with a more realistic distribution to avoid the skewness typical of word sense distributions. Furthermore, our model is evaluated on recent standard benchmarks, facilitating comparison with other models.",
"cite_spans": [
{
"start": 64,
"end": 90,
"text": "Tripodi and Pelillo (2017)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this paper are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. the release of a general framework for WSD; 2. the evaluation of different word and sense embeddings;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. state-of-the-art performances on standard benchmarks (in different cases performing better than recent supervised models); 4. the use of disambiguated sense vectors to obtain contextualized word representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "WSD approaches can be divided into two main categories: supervised, which require human intervention in the creation of sense-annotated datasets, and the so-called knowledge-based approaches (Navigli, 2009) , which require the construction of a task-independent lexical-semantic knowledge resource, but which, once that work is available, use models that are completely autonomous.",
"cite_spans": [
{
"start": 191,
"end": 206,
"text": "(Navigli, 2009)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "2"
},
{
"text": "As regards supervised systems, a popular system is It makes sense (Zhong and Ng, 2010) , a model which takes advantage of standard WSD features such as POS-tags, word co-occurrences, and collocations and creates individual support vector machine classifiers for each ambiguous word. Newer supervised models exploit deep neural networks and especially long short-term memory (LSTM) networks, a type of recurrent neural network particularly suitable for handling arbitrary-length sequences. Yuan et al. (2016) proposed a deep neural model trained with large amounts of data obtained in a semi-supervised fashion. This model was re-implemented by Le et al. (2018) , reaching comparable results with a smaller training corpus. Raganato et al. (2017) introduced two approaches for neural WSD using models developed for machine translation and substituting translated words with sense-annotated ones. A recent work that combines labeled data and knowledge-based information has been proposed by Luo et al. (2018) . Uslu et al. (2018) proposed fastSense, a model inspired by fastText which -rather than predicting context words -predicts word senses.",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "(Zhong and Ng, 2010)",
"ref_id": "BIBREF57"
},
{
"start": 489,
"end": 507,
"text": "Yuan et al. (2016)",
"ref_id": "BIBREF56"
},
{
"start": 644,
"end": 660,
"text": "Le et al. (2018)",
"ref_id": "BIBREF24"
},
{
"start": 723,
"end": 745,
"text": "Raganato et al. (2017)",
"ref_id": "BIBREF49"
},
{
"start": 989,
"end": 1006,
"text": "Luo et al. (2018)",
"ref_id": "BIBREF26"
},
{
"start": 1009,
"end": 1027,
"text": "Uslu et al. (2018)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "2"
},
{
"text": "Knowledge-based models, instead, exploit the structural properties of a lexical-semantic knowledge base, and typically use the relational information between concepts in the semantic graph together with the lexical information contained therein (Navigli and Lapata, 2010) . A popular algorithm used to select the sense of each word in this graph is PageRank (Page et al., 1999 ) that performs random walks over the network to identify the most important nodes (Haveliwala, 2002; Mihalcea et al., 2004; De Cao et al., 2010 ). An extension of these models was proposed by Agirre et al. (2014) in which the Personalized PageRank algorithm is applied. Another knowledge-based approach is Babelfy (Moro et al., 2014) , which defines a semantic signature for a given context and compares it with all the candidate senses in order to perform the disambiguation task. Chaplot and Salakhutdinov (2018) proposed a method that uses the whole document as the context for the words to be disambiguated, exploiting topical information (Ferret and Grau, 2002) . It models word senses using a variant of the Latent Dirichlet Allocation framework (Blei et al., 2003) , in which the topic distributions of the words are replaced with sense distributions modeled with a logistic normal distribution according to the frequencies obtained from WordNet.",
"cite_spans": [
{
"start": 245,
"end": 271,
"text": "(Navigli and Lapata, 2010)",
"ref_id": "BIBREF38"
},
{
"start": 358,
"end": 376,
"text": "(Page et al., 1999",
"ref_id": "BIBREF41"
},
{
"start": 460,
"end": 478,
"text": "(Haveliwala, 2002;",
"ref_id": "BIBREF19"
},
{
"start": 479,
"end": 501,
"text": "Mihalcea et al., 2004;",
"ref_id": "BIBREF30"
},
{
"start": 502,
"end": 521,
"text": "De Cao et al., 2010",
"ref_id": "BIBREF13"
},
{
"start": 570,
"end": 590,
"text": "Agirre et al. (2014)",
"ref_id": "BIBREF1"
},
{
"start": 692,
"end": 711,
"text": "(Moro et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 1021,
"end": 1044,
"text": "(Ferret and Grau, 2002)",
"ref_id": "BIBREF17"
},
{
"start": 1130,
"end": 1149,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Sense Disambiguation",
"sec_num": "2"
},
{
"text": "A good machine-interpretable representation of lexical features is fundamental for every NLP system. A system's performance, however, depends on the quality of the input representations. Furthermore, the inclusion of semantic features, in addition to lexical ones, has been proven effective in many NLP approaches (Li and Jurafsky, 2015) .",
"cite_spans": [
{
"start": 314,
"end": 337,
"text": "(Li and Jurafsky, 2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word and Sense Embeddings",
"sec_num": "3"
},
{
"text": "Word embeddings, the current paradigm for lexical representation of words, were popularized with word2vec (Mikolov et al., 2013) . The main idea is to exploit a neural language model which learns to predict a word occurrence given its surroundings. Another well-known word embedding model was presented by Pennington et al. (2014) , which shares the idea of word2vec, but with the difference that it uses explicit latent representations obtained from statistical calculation on word co-occurrences. However, all word embedding models share a common issue: they cannot capture polysemy since they conflate the various word senses into a single vector representation. Several efforts have been presented so far to deal with this problem. SensEmbed (Iacobacci et al., 2015) uses a knowledge-based disambiguation system to build a sense-annotated corpus that, in its turn, is used to train a vector space model for word senses with word2vec. AutoExtend (Rothe and Sch\u00fctze, 2015) , instead, is initialized with a set of pretrained word embeddings, and induces sense and synset vectors in the same semantic space using an autoencoder. The vectors are induced by constraining their representation given the assumption that synsets are sums of their lexemes. Camacho-Collados et al. (2015) presented NASARI, an approach that learns sense vectors by exploiting the hyperlink structure of the English Wikipedia, linking their representations to the semantic network of BabelNet (Navigli and Ponzetto, 2012) . More recent works, such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) , are based on language models learned using complex neural network architectures. The advantage of these models is that they can produce different representations of words according to the context in which they appear.",
"cite_spans": [
{
"start": 106,
"end": 128,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF31"
},
{
"start": 306,
"end": 330,
"text": "Pennington et al. (2014)",
"ref_id": "BIBREF42"
},
{
"start": 746,
"end": 770,
"text": "(Iacobacci et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 949,
"end": 974,
"text": "(Rothe and Sch\u00fctze, 2015)",
"ref_id": "BIBREF50"
},
{
"start": 1251,
"end": 1281,
"text": "Camacho-Collados et al. (2015)",
"ref_id": "BIBREF7"
},
{
"start": 1468,
"end": 1496,
"text": "(Navigli and Ponzetto, 2012)",
"ref_id": "BIBREF39"
},
{
"start": 1531,
"end": 1552,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF43"
},
{
"start": 1562,
"end": 1583,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word and Sense Embeddings",
"sec_num": "3"
},
{
"text": "In this work we take a different approach to WSD by employing a model based on game theory (GT). This discipline was introduced by Neuman and Morgenstern (1944) in order to develop a mathematical framework able to model the essentials of decision making in interactive situations. In its normal-form representation (Weibull, 1997) , it consists of a finite set of players N = (1, .., n), a finite set of pure strategies S i = {1, ..., m i } for each player i \u2208 N , and a payoff (utility) function u i : S \u2192 R, that associates a payoff with each combination of strategies in S = S 1 \u00d7S 2 \u00d7...\u00d7S n . A fundamental assumption in game theory is that each player i tries to maximize the value of u i . Furthermore, in non-cooperative games the players choose their strategies independently, considering what choices other players can make and trying to find the best response to the strategy of the co-players.",
"cite_spans": [
{
"start": 315,
"end": 330,
"text": "(Weibull, 1997)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
{
"text": "A player i, in addition to playing single (pure) strategies from S i , can also use mixed strategies, that are probability distributions over pure strategies. A mixed strategy over S i is defined as a vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
{
"text": "x i = (x 1 , . . . , x m i ), such that x j \u2265 0 and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
{
"text": "x j = 1. Each mixed strategy corresponds to a point in the simplex \u2206 m , whose corners correspond to pure strategies. The intuition is that player i randomises over strategies according to the probabilities in x i . Each mixed strategy profile lives in the mixed strategy space of the game, given by the Cartesian product",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
{
"text": "\u0398 = \u2206 m 1 \u00d7\u2206 m 2 \u00d7 \u2022 \u2022 \u2022 \u00d7 \u2206 mn .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
{
"text": "In a two-player game, a strategy profile can be defined as a pair (x i , x j ). The expected payoff for this strategy profile is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
{
"text": "u(x i , x j ) = x T i \u2022 A ij x j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
{
"text": "where A ij is the m i \u00d7 m j payoff matrix between players i and j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
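{
"text": "To make the notation above concrete, the following minimal NumPy sketch (our own illustrative code, not taken from the authors' implementation) computes the expected payoff of a two-player strategy profile:\n\nimport numpy as np\n\n# player i has m_i = 3 pure strategies, player j has m_j = 2\nA_ij = np.array([[1.0, 0.0],\n                 [0.0, 2.0],\n                 [0.5, 0.5]])    # m_i x m_j payoff matrix\n\nx_i = np.array([0.2, 0.5, 0.3])  # mixed strategy of player i (sums to 1)\nx_j = np.array([0.6, 0.4])       # mixed strategy of player j\n\nu = x_i @ A_ij @ x_j             # u(x_i, x_j) = x_i^T A_ij x_j\nprint(u)                         # 0.67",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},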
{
"text": "In evolutionary game theory (Weibull, 1997), the games are played repeatedly and the players update their mixed strategy distributions over time until no player can improve the payoff obtained with the current mixed strategy. This situation corresponds to the equilibrium of the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
{
"text": "The payoff corresponding to the h-th pure strategy is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u(x h i ) = x h i \u2022 n i j=1 (A ij x j ) h",
"eq_num": "(1)"
}
],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
{
"text": "It is important to note here that the payoff in Equation 1 is additively separable, in fact, the summation is over all the n i players with whom i is playing the games. The average payoff of player i is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u(x i ) = m i h=1 u(x h i )",
"eq_num": "(2)"
}
],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
{
"text": "To find the Nash equilibrium of the game it is common to use the discrete time version of the replicator dynamics equation (Weibull, 1997) for each player i \u2208 N ,",
"cite_spans": [
{
"start": 123,
"end": 138,
"text": "(Weibull, 1997)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
{
"text": "x h i (t + 1) = x h i (t) u(x h i ) u(x i ) \u2200 h \u2208 x i (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
{
"text": "This equation allows better than average strategies to grow at each iteration. It can be considered as an inductive learning process, in which the players learn from past experiences how to play their best strategy. We note that each player optimizes their individual strategy space, but this operation is done according to what other players simultaneously are doing, so the local optimization is the result of a global process. Game-theoretic models are appealing because they are versatile, interpretable and have a solid mathematical foundation. Furthermore, it is always possible to find the Nash equilibrium in non-cooperative games in mixed strategies (Nash, 1951) . In fact, starting from an interior point of \u0398, a point x is a Nash equilibrium only if it is the limit of a trajectory of Equation 3 (Weibull, 1997). ",
"cite_spans": [
{
"start": 659,
"end": 671,
"text": "(Nash, 1951)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},
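{
"text": "A minimal sketch of the discrete-time update of Equation 3 for a single player (our own simplified rendering, assuming the summed payoff vector of Equation 1 has already been computed and is non-negative):\n\nimport numpy as np\n\ndef replicator_step(x, payoff):\n    # payoff[h] plays the role of sum_j (A_ij x_j)_h in Equation 1\n    u_h = x * payoff        # payoff mass u(x^h) on each pure strategy h\n    return u_h / u_h.sum()  # normalised update: x(t+1) stays on the simplex\n\nx = np.array([0.25, 0.25, 0.5])\npayoff = np.array([0.9, 0.3, 0.6])\nfor _ in range(100):\n    x = replicator_step(x, payoff)  # better-than-average strategies grow\nprint(x)  # mass concentrates on the highest-payoff strategy\n\nWith a fixed payoff vector the iteration converges to the best pure strategy; in the games described below the payoffs change at every step, because the co-players update their strategies too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Game Theory and Game Dynamics",
"sec_num": "4"
},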
{
"text": "The model used in this paper, Word Sense Disambiguation Games (WSDG), was introduced by Tripodi and Pelillo (2017) . It is based on graphtheoretic principles to model the geometry of the data and on game theory to model the learning algorithm which disambiguates the words in a text. It represents the words as the players of a noncooperative game and their senses as the strategy that the players can select in order to play the games. The players are arranged in a graph whose edges determine the interactions and carry word similarity information. The payoff matrix is en-coded as a sense similarity function. The players play the games repeatedly and -at each iteration -update their strategy preferences according to what strategy has been effective in previous games. These preferences, as introduced previously, are encoded as a probability distribution over strategies (senses).",
"cite_spans": [
{
"start": 88,
"end": 114,
"text": "Tripodi and Pelillo (2017)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},
{
"text": "Formally, for a text T we select its content words W = (1, . . . , n) as the players of the game I = (1, . . . , n). For each word we use a knowledge base to determine its possible senses. Each sense is represented as a strategy that the player can select from the set S i = {1, ..., m i }, where m i is the number of senses of word w i . The set of all different senses in the text, C = {1, ..., m}, is the strategy space of the games. The strategy space is modeled, for each player, as a probability distribution, x i , of length m. It takes non-zero values only on the entries corresponding to the elements of S i . It can be initialized with a normal distribution in the case of unsupervised learning or with information obtained from sense-labeled corpora in the case of semi-supervised learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},
{
"text": "The payoff of a game depends on a payoff matrix Z in which the rows are indexed according to the strategies of player i and the columns according to the strategies of player j. Its entries Z r,t are the payoff obtained when player i selects strategy r and player j selects strategy t. It is important to note here that the payoff of a game does not depend on the single strategy taken individually by a player, but always by the combination of two simultaneous actions. In WSD this means that the sense selected by a word influences the choices of the other words in the text and this allows the textual coherence to be maintained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},
{
"text": "The disambiguation games to build a payoff function require: a word similarity matrix A, a sense similarity matrix Z and a sense distribution x i for each player i. A models the players' interactions, so that similar players play together and the more similar they are the more reciprocal influence they have. It can be interpreted as an attention mechanism (Vaswani et al., 2017) since it weights the payoffs. Z is used to create the payoff matrices of the games so that the more similar the senses of the words are the more the corresponding players are encouraged to select them, since they give a high payoff. A and Z are obtained by computing vector representations of word and sense (see Section 3) and then calculating their pairwise sim-ilarity.",
"cite_spans": [
{
"start": 358,
"end": 380,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},
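{
"text": "A minimal sketch of how A and Z can be derived from embedding matrices via pairwise cosine similarity (illustrative code with hypothetical inputs, not the authors' implementation):\n\nimport numpy as np\n\ndef cosine_sim_matrix(E1, E2):\n    # rows of E1 and E2 are embedding vectors\n    E1 = E1 / np.linalg.norm(E1, axis=1, keepdims=True)\n    E2 = E2 / np.linalg.norm(E2, axis=1, keepdims=True)\n    return E1 @ E2.T\n\nW = np.random.randn(5, 300)   # word embeddings of the 5 players (hypothetical)\nS = np.random.randn(12, 400)  # sense embeddings of the 12 candidate senses\n\nA = cosine_sim_matrix(W, W)   # players' interaction graph (word similarity)\nZ = cosine_sim_matrix(S, S)   # payoff matrix between senses (sense similarity)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},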
{
"text": "The strategy space of each player, i, is represented as a column vector of length m. It is initialized with:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x h i = |m i | \u22121 if sense h is in S i , 0 otherwise.",
"eq_num": "(4)"
}
],
"section": "The Model",
"sec_num": "5"
},
{
"text": "This initialization is used in the case of unsupervised WSD, since it does not use information from sense-tagged corpora. If instead this information is available, |m i | \u22121 in Equation 4 is substituted with the frequency of the corresponding sense and then x i is normalized in order to sum up to one. Once these sources of information are computed, the WSDG are run by using the replicator dynamic equation (Taylor and Jonker, 1978) in Equation 3, where the payoff of strategy h for player i is calculated as:",
"cite_spans": [
{
"start": 409,
"end": 434,
"text": "(Taylor and Jonker, 1978)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},
{
"text": "u(x h i ) = x h i \u2022 n i j=1 (A ij Zx j ) h (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},
{
"text": "where n i are the neighbours of player i as in the graph A. The average payoff is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u(x i ) = m i h=1 u(x h i )",
"eq_num": "(6)"
}
],
"section": "The Model",
"sec_num": "5"
},
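{
"text": "Before the games are run, each strategy vector is initialized as in Equation 4, or from corpus frequencies in the semi-supervised case. A minimal sketch (our own rendering; sense_counts is a hypothetical mapping from sense indices to their frequencies in a sense-tagged corpus):\n\nimport numpy as np\n\ndef init_strategy(m, S_i, sense_counts=None):\n    # m: number of senses in the text; S_i: indices of the senses of word i\n    x = np.zeros(m)\n    if sense_counts is None:\n        x[S_i] = 1.0 / len(S_i)                # Equation 4: uniform over S_i\n    else:\n        x[S_i] = [sense_counts[h] for h in S_i]\n        x /= x.sum()                           # normalise to sum to one\n    return x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},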
{
"text": "The complexity of WSDG scales linearly with the number of words to be disambiguated. Differently from other models based on PageRank, it is possible to disambiguate all the words at the same time. As an example, WSDG can disambiguate 200 words (1650 senses) in 7 seconds, on a single CPU core. A generic representation of the model is proposed in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 355,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},
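{
"text": "Putting the pieces together, a compact sketch of the whole dynamics (our own simplified rendering, assuming non-negative entries in A and Z; X stacks the n players' strategy distributions over the m senses of the text):\n\nimport numpy as np\n\ndef wsd_games(A, Z, X, max_iter=100, tol=1e-3):\n    # A: n x n word graph; Z: m x m sense similarity; X: n x m strategies\n    for _ in range(max_iter):\n        payoff = X * (A @ (X @ Z))  # u(x_i^h) of Equation 5, all players at once\n        X_new = payoff / payoff.sum(axis=1, keepdims=True)  # replicator update\n        if np.abs(X_new - X).sum() < tol:  # convergence criterion\n            break\n        X = X_new\n    return X_new.argmax(axis=1)  # sense assigned to each word at equilibrium",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},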
{
"text": "Implementation details The cosine similarity was used as similarity measure for both words and senses. The A matrix was treated as the adjacency matrix of an undirected weighted graph and, to reduce the complexity of the model, the edges with weight lower than 0.1 were removed. The symmetric normalized Laplacian of this graph was calculated as D \u22121/2 AD \u22121/2 , where D is the degree matrix of graph A. Since the algorithm operates on an entire text, local information is added to matrix A. The mean value of the matrix is added to the log(n) cells on the left of the main diagonal. For BERT, this operation was replaced with its attention layer, adding to matrix A the mean attention distribution of all the heads of the last layer. The choice of the last layer is motivated by the fact that it stores semantic information and its attention distributions have high entropy (Clark et al., 2019) . The first singular vector was removed from A in the case of word vectors whose length exceeded 500. This was done to reduce the redundancy of the representations in line with Arora et al. (2017) . The distributions for each x were computed according to SemCor (Miller et al., 1993) and normalized using the softmax function. The replicator dynamics were run until a maximum number of iterations was reached (100) or the difference between two consecutive iterations was below a small threshold (10 \u22123 ), calculated as n i=1",
"cite_spans": [
{
"start": 875,
"end": 895,
"text": "(Clark et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 1073,
"end": 1092,
"text": "Arora et al. (2017)",
"ref_id": "BIBREF2"
},
{
"start": 1151,
"end": 1179,
"text": "SemCor (Miller et al., 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},
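{
"text": "A sketch of the graph preprocessing just described, i.e. edge pruning and the D^{-1/2} A D^{-1/2} normalization (our own rendering):\n\nimport numpy as np\n\ndef normalize_graph(A, min_weight=0.1):\n    A = np.where(A < min_weight, 0.0, A)  # remove edges with weight below 0.1\n    d = A.sum(axis=1)                      # degree of each node\n    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))\n    return A * np.outer(d_inv_sqrt, d_inv_sqrt)  # D^{-1/2} A D^{-1/2}\n\ndef softmax(v):\n    e = np.exp(v - v.max())\n    return e / e.sum()  # used to normalise the SemCor sense counts per word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},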
{
"text": "The code of the model is available at https://github. com/roccotrip/wsd_games_emb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Model",
"sec_num": "5"
},
{
"text": "The evaluation of our model was conducted using the framework proposed by Raganato et al. (2017) . This consists of five datasets which were unified to the same WordNet 3.0 inventory: Senseval-2 (S2), Senseval-3 (S3), SemEval-2007 (SE7), SemEval-2013 (SE13) and SemEval-2015 (SE15). These datasets have in total 26 texts and 10, 619 words to be disambiguated. Our objective was to test our game-theoretic model with different settings and to evaluate its performances. To this end we performed experiments comparing 16 different sets of pretrained word embeddings and 7 sets of sense embeddings.",
"cite_spans": [
{
"start": 74,
"end": 96,
"text": "Raganato et al. (2017)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "Word embeddings As word embedding models we included 4 pre-word2vec models: the hierarchical log-bilinear model (Mnih and Hinton, 2007, HLBL) , a probabilistic linear neural model which aims to predict the embedding of a word given the concatenation of the previous words; CW (Collobert and Weston, 2008) , an embeddings model with a deep unified architecture for multitask NLP; Distributional Memory (Baroni and Lenci, 2010, DM), a semantically enriched countbased model; leskBasile (Basile et al., 2014) , a model based on Latent Semantic Analysis reduced via Singular-Value Decomposition; 3 models obtained with word2vec: GoogleNews, a set of 300-dimensions vectors trained with the Google News dataset; BNC-*, vectors of different dimensions trained on the British National Corpus including POS information during training; and w2vR, trained with word2vec on the 2014 dump of the English Wikipedia, enriched with retrofitting (Faruqui et al., 2015) , a technique to enhance pre-trained embeddings with semantic information. The enrichment was performed using retrofitting's best configuration, based on the Paraphrase Database (Ganitkevitch et al., 2013, PPDB) . We also tested GloVe (Pennington et al., 2014) , trained with the concatenation of the 2014 dump of the English Wikipedia and Gigaword 5, and fastText trained on Wikipedia 2017, UMBC corpus and the statmt.org news dataset. Contextualized word embeddings As contextualized embeddings we used ELMo (Peters et al., 2018) in three different configurations, namely: ELMo-avg, weighted sum of its three layers; ELMo-avg emb, weighted sum of its three layers and the embeddings it produces; and ELMo-emb, the word embeddings produced by the model 1 . We also tested three implementations of BERT (Devlin et al., 2019) : base cased (b-c); large uncased (l-u) and large cased (l-c). They offer pretrained deep bidirectional representations of words 1 TensorFlow models available at https://tfhub. dev/google/elmo/2. in context 2 . We used seven configurations for each model: one for each of the last four layers (numbered from 1 to 4), the sum of these layers, their concatenation and the embedding layer. We fed all these models with the entire texts of the datasets. Since BERT uses WordPiece tokenization, we averaged sub-token embeddings to obtain token-level representations.",
"cite_spans": [
{
"start": 112,
"end": 141,
"text": "(Mnih and Hinton, 2007, HLBL)",
"ref_id": null
},
{
"start": 276,
"end": 304,
"text": "(Collobert and Weston, 2008)",
"ref_id": "BIBREF12"
},
{
"start": 484,
"end": 505,
"text": "(Basile et al., 2014)",
"ref_id": "BIBREF4"
},
{
"start": 930,
"end": 952,
"text": "(Faruqui et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 1131,
"end": 1164,
"text": "(Ganitkevitch et al., 2013, PPDB)",
"ref_id": null
},
{
"start": 1188,
"end": 1213,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF42"
},
{
"start": 1463,
"end": 1484,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF43"
},
{
"start": 1756,
"end": 1777,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
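{
"text": "For instance, the averaging of WordPiece sub-token vectors into token-level vectors can be sketched as follows (hypothetical variable names, not the authors' code):\n\nimport numpy as np\n\ndef merge_subtokens(piece_vectors, word_ids):\n    # piece_vectors: one row per WordPiece; word_ids: token index of each piece\n    tokens = []\n    for w in dict.fromkeys(word_ids):         # unique token ids, in order\n        rows = [v for v, i in zip(piece_vectors, word_ids) if i == w]\n        tokens.append(np.mean(rows, axis=0))  # average the pieces of a token\n    return np.stack(tokens)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},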
{
"text": "We also included three models which were built together with the sense embeddings introduced below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "Sense embeddings As sense embeddings, in addition to the three models introduced in Section 3 (AutoExtend, NASARI and SensEmbed), we included four models: Chen et al. (2014) , a unified model which learns sense vectors by training a sense-annotated corpus disambiguated with a framework based on semantic similarity of Word-Net sense definitions; meanBNC, created using a weighted combination of the words from WordNet glosses, using, as word vectors, the set of BNC-200 mentioned earlier; DeConf (Pilehvar and Collier, 2016), also linked to WordNet, a model where sense vectors are inferred in the same semantic space of pre-trained word embeddings by decomposing the given word representation into its constituent senses; and finally SW2V (Mancini et al., 2017) , a model linked to BabelNet which uses a shallow disambiguation step and which, by extending the word2vec architecture, learns word and sense vectors jointly in the same semantic space as an emerging feature.",
"cite_spans": [
{
"start": 155,
"end": 173,
"text": "Chen et al. (2014)",
"ref_id": "BIBREF10"
},
{
"start": 741,
"end": 763,
"text": "(Mancini et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "Results The results of these models are reported in Figure 2 . One of the most interesting patterns that emerges from the heat map is that there are some combinations of word and sense embeddings that always work better than others. Sense vectors drive the performance of the system, contributing in great part to the accumulation of payoffs during the games. The sense vectors that maintain high performances are SensEmbed, AutoExtended and Chen2014. In particular Chen2014 has high performances with all the word embedding combinations. While these models are specific sense embedding techniques, the construction of BNC-200 follows a very simple method, which in view of these results can be refined using more principled gloss embedding techniques. The performances of Table 1 : Comparison with state-of-the-art algorithms: unsupervised or knowledge-based (unsup.), and supervised (sup.). MFS refers to the MFS heuristic computed on SemCor on each dataset. The results are provided as F1 and the first result of the semi supervised systems with a statistically significant difference from the best of each dataset is marked with * (\u03c7 2 , p < 0.1). \u2020 indicates the same statistics but including also supervised models.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 60,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 773,
"end": 780,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "NASARI are lower compared to lexical vectors: this may be due to our choice to use NASARIembed, whose vectors have low dimensionality. The word vectors that have consistently high performances in association with the three sense vectors mentioned above are BERT, Chen2014, SensEmbed and SW2V. This is not surprising since they are able to produce contextualised word representations, performing, in fact, a preliminary disambiguation of the words. In particular, SW2V is specifically tailored for WSD. ELMo and fast-Text perform slightly worse. The vectors constructed using syntactic information and trained on the BNC corpus have similar performances to the their counterparts trained on larger corpora without the use of syntactic information. If we focus on BERT, we can see that it is able to maintain high performances (F 1 \u2248 67) with all its configurations, except for the embedding layers of all the models (*-emb). The contribution of the sum and concatenation operations is not significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "Comparison We performed a comparison with 3 configurations of our model, one for each of the three best sense vectors: WSDG \u03b1 , obtained using Chen2014 as sense vectors and BERT-l-u-4 as word vectors; WSDG \u03b2 , obtained using SensEmbed as sense vectors and BERT-l-c-4 as word vectors; and WSDG \u03b3 , obtained using AutoExtended as sense vectors and BERT-l-u-3 as word vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "As comparison systems we included three semisupervised approaches mentioned above, Babelfy (Moro et al., 2014) , ppr w2w , the best configuration of UKB (Agirre et al., 2018) , and WSD-TM, introduced by Chaplot and Salakhutdinov (2018) (for this model we did not have the possibility to verify the results since its code is not available). In addition, we also report the performances of relevant supervised models, namely: It Makes Sense (Zhong and Ng, 2010, IMS) , Iacobacci et al. (2016) , Yuan et al. (2016) , Raganato et al. (2017) , and Uslu et al. (2018) .",
"cite_spans": [
{
"start": 91,
"end": 110,
"text": "(Moro et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 153,
"end": 174,
"text": "(Agirre et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 439,
"end": 464,
"text": "(Zhong and Ng, 2010, IMS)",
"ref_id": null
},
{
"start": 467,
"end": 490,
"text": "Iacobacci et al. (2016)",
"ref_id": "BIBREF21"
},
{
"start": 493,
"end": 511,
"text": "Yuan et al. (2016)",
"ref_id": "BIBREF56"
},
{
"start": 514,
"end": 536,
"text": "Raganato et al. (2017)",
"ref_id": "BIBREF49"
},
{
"start": 543,
"end": 561,
"text": "Uslu et al. (2018)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "The results of our evaluation are shown in Table 1 . As we can see our model achieves state-of-theart performances on four datasets and on S13 and S15 it performs better than many supervised systems. In general the gap between supervised and semi-supervised systems is reducing. This encourages new research in this direction. Our model fares particularly well on the disambiguation of nouns and verbs. However, the main gap between our models and supervised systems relies upon the disambiguation of verbs.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 51,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "Polysemy As expected, most of the errors made by WSDG \u03b1 are on highly polysemous words. Figure 3 shows that the number of wrong answers increases as the number of senses grows, and that the number of wrong answers starts to be higher than that of correct answers when the number of senses for a target word is in the range of 10-15 senses. The words with the highest number of errors are polysemous verbs such as: say (34), make (24), find (21), have, (17), take (15), get, (15) and do (13). These are words that in many NLP applications (especially those based on distributional models) are treated as stopwords.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "7"
},
{
"text": "Sense rank Mancini et al. (2017) show that senses which are not the most frequent ones are particularly challenging and most sense-based approaches fail to represent them properly. In Figure 4 we report the results of WSDG \u03b1 divided per sense rank, where it is possible to see how the performances of the system deteriorate as the rank of the correct sense increases. It is interesting to see that, in the first part of the plot, the performances follow a regular pattern that resembles a power-law distribution. This requires further analysis beyond the scope of this work, along the lines of Ferrer-i Cancho and Vitevitch (2018). Priors Corroborating the findings of Pilehvar and Navigli (2014), Postma et al. (2016) conducted a series of experiments to study the effect that the variation of sense distributions in the training set has on the performances of It makes sense (Zhong and Ng, 2010) . Specifically, they increased the volume of training examples (V) by enriching SemCor with senses inferred from BabelNet; increased the number of least frequent senses (LFS) (V+LFS); and overfitted the model constructing a training set proportional to the correct sense distribution of the test set (GOLD,GOLD+LFS). We used the same training sets to compute the priors for our system. The results of this analysis are reported in Table 2 . These experiments show that increasing the num- ber of training examples has a small beneficial effect. Increasing the number of LFS examples leads to worse results because this is a deviation from a real sense distribution. Further, to work with better semantic representations, this operation should also be accompanied by a similar selection on the training set of sense and word embeddings, otherwise LFS remain underrepresented. Finally, mimicking the distribution of the test set is more beneficial for WSDG \u03b1 than for IMS, especially when LFS examples are added, suggesting that semisupervised systems can better adapt to specific domains than supervised systems.",
"cite_spans": [
{
"start": 11,
"end": 32,
"text": "Mancini et al. (2017)",
"ref_id": "BIBREF27"
},
{
"start": 698,
"end": 718,
"text": "Postma et al. (2016)",
"ref_id": "BIBREF48"
},
{
"start": 877,
"end": 897,
"text": "(Zhong and Ng, 2010)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [
{
"start": 184,
"end": 192,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1329,
"end": 1336,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "7"
},
{
"text": "We now present three WSD applications in as many tasks: selection of context-sensitive embeddings; sentence similarity; paraphrases detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploratory study",
"sec_num": "8"
},
{
"text": "We used the Word in Context (WiC) dataset (Pilehvar and Camacho-Collados, 2019) for this task. It contains 7466 sentence pairs in which a target word appears in two different contexts. The task consisted of predicting if a target word has the same sense in the two sentences or not. The aim of this experiment was twofold: we wanted to show the usefulness of contextualized word embeddings obtained from WSD systems and to demonstrate that the model was able to maintain the textual coherence. The experiments on this dataset were conducted on the development set (1400 sentence pairs). The comparison was conducted against state-of-theart models for contextualized word embeddings and sense embeddings: Context2Vec (Melamud et al., 2016) based on a bidirectional LSTM language model; ELMo 1 , the first LSTM hidden state; ELMo 3 , the weighted sum of the 3 LSTM layers; BERT base ; BERT large . The results of these systems were taken from Pilehvar and Camacho-Collados (2019). We note here that all these models, including WSDG \u03b1 , do not use training data. They are based on a simple threshold-based classifier, tuned on the development set (638 sentence pairs). WSDG \u03b1 disambiguates all the words in each pair of sentences separately and, if the cosine similarity among the senses assigned to the target words is below a threshold (0.9), it classifies the pair as different senses, and as the same sense otherwise. As shown in Table 3 the disambiguation step has a big impact on the results.",
"cite_spans": [
{
"start": 716,
"end": 738,
"text": "(Melamud et al., 2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 1431,
"end": 1438,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Context-sensitive embeddings",
"sec_num": null
},
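{
"text": "The threshold-based classifier used here reduces to a single cosine comparison (a sketch; the sense vectors are those assigned to the target word by WSDG_\u03b1 in the two sentences, and 0.9 is the tuned threshold reported above):\n\nimport numpy as np\n\ndef same_sense(sense_vec_1, sense_vec_2, threshold=0.9):\n    cos = sense_vec_1 @ sense_vec_2 / (\n        np.linalg.norm(sense_vec_1) * np.linalg.norm(sense_vec_2))\n    return cos >= threshold  # True: same sense; False: different senses",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-sensitive embeddings",
"sec_num": null
},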
{
"text": "Sentence similarity We used the SICK dataset (Marelli et al., 2014) for this task. It consists of 9841 sentence pairs that had been annotated with relatedness scores on a 5-point rating scale. We used the test split of this dataset that contains 4906 sentence pairs. The aim of this experiment was to test if disambiguated sense vectors can provide a better representation of sentences than word vectors. We used a simple method to test the two representations: it consisted of representing a sentence as the sum of the disambiguated sense vectors in one case and as the sum of word vectors in the other case. Once the sentence representations had been obtained for both methods the cosine similarity was used to measure their relatedness. The results of this experiment are reported in Table 4 as Pearson and Spearman correlation and Mean Squared Error (MSE). We used the \u03b1 configuration of our model with Chen2014 to represent senses and BERT-l-u-4 to represent words. As we can see the simplicity of the method leads to low performances for both representations, but sense vectors correlate better than word vectors.",
"cite_spans": [
{
"start": 45,
"end": 67,
"text": "(Marelli et al., 2014)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 787,
"end": 794,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Context-sensitive embeddings",
"sec_num": null
},
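{
"text": "The sentence-level method reduces to summing vectors and taking a cosine (a sketch; vecs_1 and vecs_2 stand for the lists of disambiguated sense vectors, or of word vectors, of the two sentences):\n\nimport numpy as np\n\ndef sentence_sim(vecs_1, vecs_2):\n    v1, v2 = np.sum(vecs_1, axis=0), np.sum(vecs_2, axis=0)  # bag-of-vectors\n    return v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-sensitive embeddings",
"sec_num": null
},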
{
"text": "Paraphrase detection We used the test set of the Microsoft Research Paraphrase Corpus (Dolan et al., 2004, MRPC) for this task. The corpus contains 1621 sentence pairs that have been annotated with a binary label: 1 if the two sentences constitute a paraphrase and 0 otherwise. In this task we also used the sum of word vectors and the sum of disambiguated sense vectors to represent the sentences, and used part of the training set (10%) in order to tune the threshold parameter below which the sentences are not considered paraphrase. The classification accuracy for the word vectors used by WSDG \u03b1 was 58.1 whereas the disambiguated sense vectors obtained 66.9.",
"cite_spans": [
{
"start": 86,
"end": 112,
"text": "(Dolan et al., 2004, MRPC)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context-sensitive embeddings",
"sec_num": null
},
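{
"text": "Classification then amounts to thresholding the same summed-vector similarity (a sketch reusing sentence_sim from the previous snippet; the threshold value is tuned on 10% of the training set):\n\ndef is_paraphrase(vecs_1, vecs_2, threshold):\n    # threshold tuned on 10% of the MRPC training set\n    return sentence_sim(vecs_1, vecs_2) >= threshold",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-sensitive embeddings",
"sec_num": null
},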
{
"text": "In this work we have presented WSDG, a flexible game-theoretic model for WSD. It combines game dynamics with most successful word and sense embeddings, therefore showing the potential of an effective combination of the two areas of game theory and word sense disambiguation. Our approach achieves state-of-the-art performances on four datasets performing particularly well on the disambiguation of nouns and verbs. Beyond the numerical results, in this paper we have presented a model able to construct and evaluate word and sense representations. This is particularly useful since it can serve as a test bed for new word and sense embeddings. In particular, it will be interesting to test new sense embedding models based on contextual embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "Thanks to the flexibility and scalability of our model, as future work we plan to explore in depth its use in different tasks, such as the creation of sentence (document) embeddings and lexical substitution. In fact, we believe that using disambiguated sense vectors, as shown in the contextsensitive embeddings and paraphrase detection studies, can offer a more accurate representation and improve the quality of downstream applications such as sentiment analysis and text classification (see, e.g., (Pilehvar et al., 2017)), machine translation and topic modelling. Encouraged by the good results achieved in our exploratory studies, we plan to develop a new model for contextualised word embeddings based on a gametheoretic framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
}
],
"back_matter": [
{
"text": "The authors gratefully acknowledge the support of the ODYC-CEUS project No. 732942 (first author) and of the ERC Consolidator Grant MOUSSE No. 726487 (second author) under the European Union's Horizon 2020 research and innovation programme. The experiments have been run on the SCSCF cluster of Ca' Foscari University. The authors thank Ignacio Iacobacci for preliminary work on this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The risk of sub-optimal use of open source NLP software: UKB is inadvertently state-of-the-art in knowledge-based WSD",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Oier",
"middle": [],
"last": "L\u00f3pez De Lacalle",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Workshop for NLP Open Source Software (NLP-OSS)",
"volume": "",
"issue": "",
"pages": "29--33",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2505"
]
},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Oier L\u00f3pez de Lacalle, and Aitor Soroa. 2018. The risk of sub-optimal use of open source NLP software: UKB is inadvertently state-of-the-art in knowledge-based WSD. In Proceedings of Work- shop for NLP Open Source Software (NLP-OSS), pages 29-33, Melbourne, Australia. ACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Random walks for knowledge-based word sense disambiguation",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics",
"volume": "40",
"issue": "1",
"pages": "57--84",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00164"
]
},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Oier L\u00f3pez de Lacalle, and Aitor Soroa. 2014. Random walks for knowledge-based word sense disambiguation. Computational Linguistics, 40(1):57-84.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A simple but tough-to-beat baseline for sentence embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings. In International Conference on Learning Representations.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Distributional memory: A general framework for corpus-based semantics",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "4",
"pages": "673--721",
"other_ids": {
"DOI": [
"10.1162/coli_a_00016"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Alessandro Lenci. 2010. Dis- tributional memory: A general framework for corpus-based semantics. Computational Linguis- tics, 36(4):673-721.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An enhanced Lesk word sense disambiguation algorithm through a distributional semantic model",
"authors": [
{
"first": "Pierpaolo",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Annalina",
"middle": [],
"last": "Caputo",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Semeraro",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1591--1600",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierpaolo Basile, Annalina Caputo, and Giovanni Se- meraro. 2014. An enhanced Lesk word sense dis- ambiguation algorithm through a distributional se- mantic model. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1591-1600, Dublin, Ireland. Dublin City University and ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993-1022.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00051"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "NASARI: a novel approach to a semantically-aware representation of items",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "567--577",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1059"
]
},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. NASARI: a novel ap- proach to a semantically-aware representation of items. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 567-577, Denver, Colorado. ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The origins of zipf's meaning-frequency law",
"authors": [
{
"first": "Ramon",
"middle": [],
"last": "Ferrer-I Cancho",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"S"
],
"last": "Vitevitch",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of the Association for Information Science and Technology",
"volume": "69",
"issue": "11",
"pages": "1369--1379",
"other_ids": {
"DOI": [
"10.1002/asi.24057"
]
},
"num": null,
"urls": [],
"raw_text": "Ramon Ferrer-i Cancho and Michael S. Vitevitch. 2018. The origins of zipf's meaning-frequency law. Journal of the Association for Information Science and Technology, 69(11):1369-1379.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Knowledge-based word sense disambiguation using topic models",
"authors": [
{
"first": "Devendra",
"middle": [],
"last": "Singh Chaplot",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Devendra Singh Chaplot and Ruslan Salakhutdinov. 2018. Knowledge-based word sense disambiguation using topic models. In AAAI Conference on Artifi- cial Intelligence.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A unified model for word sense representation and disambiguation",
"authors": [
{
"first": "Xinxiong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1025--1035",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1110"
]
},
"num": null,
"urls": [],
"raw_text": "Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense represen- tation and disambiguation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1025-1035, Doha, Qatar. ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "What does BERT look at? an analysis of BERT's attention",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "276--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT's attention. In Pro- ceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. ICML",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proc. ICML, pages 160-167.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Robust and efficient page rank for word sense disambiguation",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "De Cao",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Luciani",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Mesiano",
"suffix": ""
},
{
"first": "Riccardo",
"middle": [],
"last": "Rossi",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of TextGraphs-5 -2010 Workshop on Graph-based Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "24--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego De Cao, Roberto Basili, Matteo Luciani, Francesco Mesiano, and Riccardo Rossi. 2010. Ro- bust and efficient page rank for word sense dis- ambiguation. In Proceedings of TextGraphs-5 - 2010 Workshop on Graph-based Methods for Nat- ural Language Processing, pages 24-32, Uppsala, Sweden. ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "350--356",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase cor- pora: Exploiting massively parallel news sources. In COLING 2004: Proceedings of the 20th Inter- national Conference on Computational Linguistics, pages 350-356, Geneva, Switzerland. COLING.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Retrofitting word vectors to semantic lexicons",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Sujay",
"middle": [],
"last": "Kumar Jauhar",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1606--1615",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1184"
]
},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1606-1615, Denver, Colorado. ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A bootstrapping approach for robust topic analysis",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Ferret",
"suffix": ""
},
{
"first": "Brigitte",
"middle": [],
"last": "Grau",
"suffix": ""
}
],
"year": 2002,
"venue": "Nat. Lang. Eng",
"volume": "8",
"issue": "3",
"pages": "209--233",
"other_ids": {
"DOI": [
"10.1017/S1351324902002929"
]
},
"num": null,
"urls": [],
"raw_text": "Olivier Ferret and Brigitte Grau. 2002. A bootstrap- ping approach for robust topic analysis. Nat. Lang. Eng., 8(3):209-233.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "PPDB: The paraphrase database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 758-764, Atlanta, Georgia. ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Topic-sensitive pagerank",
"authors": [
{
"first": "Taher",
"middle": [
"H"
],
"last": "Haveliwala",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 11th International Conference on World Wide Web, WWW '02",
"volume": "",
"issue": "",
"pages": "517--526",
"other_ids": {
"DOI": [
"10.1145/511446.511513"
]
},
"num": null,
"urls": [],
"raw_text": "Taher H. Haveliwala. 2002. Topic-sensitive pagerank. In Proceedings of the 11th International Conference on World Wide Web, WWW '02, pages 517-526, New York, NY, USA. ACM.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "SensEmbed: Learning sense embeddings for word and relational similarity",
"authors": [
{
"first": "Ignacio",
"middle": [],
"last": "Iacobacci",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "95--105",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1010"
]
},
"num": null,
"urls": [],
"raw_text": "Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. SensEmbed: Learning sense embeddings for word and relational similarity. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 95-105, Beijing, China. ACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Embeddings for word sense disambiguation: An evaluation study",
"authors": [
{
"first": "Ignacio",
"middle": [],
"last": "Iacobacci",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "897--907",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1085"
]
},
"num": null,
"urls": [],
"raw_text": "Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Embeddings for word sense disambiguation: An evaluation study. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 897-907, Berlin, Germany. ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "2",
"issue": "",
"pages": "427--431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, pages 427-431, Valencia, Spain. ACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improving the quality of text understanding by delaying ambiguity resolution",
"authors": [
{
"first": "Doo Soon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Ken",
"middle": [],
"last": "Barker",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Porter",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "581--589",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Doo Soon Kim, Ken Barker, and Bruce Porter. 2010. Improving the quality of text understanding by de- laying ambiguity resolution. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 581-589, Beijing, China. Coling 2010 Organizing Committee.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A deep dive into word sense disambiguation with LSTM",
"authors": [
{
"first": "Minh",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Marten",
"middle": [],
"last": "Postma",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Urbani",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "354--365",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh Le, Marten Postma, Jacopo Urbani, and Piek Vossen. 2018. A deep dive into word sense dis- ambiguation with LSTM. In Proceedings of the 27th International Conference on Computational Linguistics, pages 354-365, Santa Fe, New Mexico, USA. ACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Do multi-sense embeddings improve natural language understanding?",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1722--1732",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1200"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Dan Jurafsky. 2015. Do multi-sense em- beddings improve natural language understanding? In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 1722-1732, Lisbon, Portugal. ACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Incorporating glosses into neural word sense disambiguation",
"authors": [
{
"first": "Fuli",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Tianyu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qiaolin",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fuli Luo, Tianyu Liu, Qiaolin Xia, Baobao Chang, and Zhifang Sui. 2018. Incorporating glosses into neural word sense disambiguation. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), Mel- bourne, Australia. ACL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Embedding words and senses together via joint knowledgeenhanced training",
"authors": [
{
"first": "Massimiliano",
"middle": [],
"last": "Mancini",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Iacobacci",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "100--111",
"other_ids": {
"DOI": [
"10.18653/v1/K17-1012"
]
},
"num": null,
"urls": [],
"raw_text": "Massimiliano Mancini, Jose Camacho-Collados, Igna- cio Iacobacci, and Roberto Navigli. 2017. Embed- ding words and senses together via joint knowledge- enhanced training. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 100-111, Vancou- ver, Canada. ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A SICK cure for the evaluation of compositional distributional semantic models",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014)",
"volume": "",
"issue": "",
"pages": "216--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zam- parelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC- 2014), pages 216-223, Reykjavik, Iceland. Euro- pean Languages Resources Association (ELRA).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "context2vec: Learning generic context embedding with bidirectional LSTM",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Melamud",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "51--61",
"other_ids": {
"DOI": [
"10.18653/v1/K16-1006"
]
},
"num": null,
"urls": [],
"raw_text": "Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context em- bedding with bidirectional LSTM. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 51-61, Berlin, Germany. ACL.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "PageRank on semantic networks, with application to word sense disambiguation",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Figa",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1126--1132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea, Paul Tarau, and Elizabeth Figa. 2004. PageRank on semantic networks, with application to word sense disambiguation. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 1126-1132, Geneva, Switzerland. COLING.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for ma- chine translation. CoRR, abs/1309.4168.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A semantic concordance",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Randee",
"middle": [],
"last": "Tengi",
"suffix": ""
},
{
"first": "Ross",
"middle": [
"T"
],
"last": "Bunker",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the workshop on Human Language Technology",
"volume": "",
"issue": "",
"pages": "303--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller, Claudia Leacock, Randee Tengi, and Ross T Bunker. 1993. A semantic concordance. In Proceedings of the workshop on Human Language Technology, pages 303-308. ACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Three new graphical models for statistical language modelling",
"authors": [
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 24th International Conference on Machine Learning, ICML '07",
"volume": "",
"issue": "",
"pages": "641--648",
"other_ids": {
"DOI": [
"10.1145/1273496.1273577"
]
},
"num": null,
"urls": [],
"raw_text": "Andriy Mnih and Geoffrey Hinton. 2007. Three new graphical models for statistical language modelling. In Proceedings of the 24th International Conference on Machine Learning, ICML '07, pages 641-648, New York, NY, USA. ACM.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Entity linking meets word sense disambiguation: a unified approach",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Moro",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Raganato",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "231--244",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00179"
]
},
"num": null,
"urls": [],
"raw_text": "Andrea Moro, Alessandro Raganato, and Roberto Nav- igli. 2014. Entity linking meets word sense disam- biguation: a unified approach. Transactions of the Association for Computational Linguistics, 2:231- 244.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Non-cooperative games",
"authors": [
{
"first": "John",
"middle": [],
"last": "Nash",
"suffix": ""
}
],
"year": 1951,
"venue": "Annals of Mathematics",
"volume": "54",
"issue": "2",
"pages": "286--295",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Nash. 1951. Non-cooperative games. Annals of Mathematics, 54(2):286-295.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Word sense disambiguation: A survey",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM Comput. Surv",
"volume": "41",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/1459352.1459355"
]
},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Comput. Surv., 41(2):10:1-10:69.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Natural Language Understanding: Instructions for (present and future) use",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI 2018)",
"volume": "",
"issue": "",
"pages": "5697--5702",
"other_ids": {
"DOI": [
"10.24963/ijcai.2018/812"
]
},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli. 2018. Natural Language Under- standing: Instructions for (present and future) use. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI 2018), pages 5697-5702.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "An experimental study of graph connectivity for unsupervised word sense disambiguation",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "32",
"issue": "4",
"pages": "678--692",
"other_ids": {
"DOI": [
"10.1109/TPAMI.2009.36"
]
},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Mirella Lapata. 2010. An ex- perimental study of graph connectivity for unsuper- vised word sense disambiguation. IEEE Transac- tions on Pattern Analysis and Machine Intelligence, 32(4):678-692.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2012,
"venue": "Artif. Intell",
"volume": "193",
"issue": "",
"pages": "217--250",
"other_ids": {
"DOI": [
"10.1016/j.artint.2012.07.001"
]
},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual se- mantic network. Artif. Intell., 193:217-250.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Theory of games and economic behavior",
"authors": [
{
"first": "John",
"middle": [],
"last": "von Neumann",
"suffix": ""
},
{
"first": "Oskar",
"middle": [],
"last": "Morgenstern",
"suffix": ""
}
],
"year": 1944,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John von Neuman and Oskar Morgenstern. 1944. The- ory of games and economic behavior. Princeton University Press.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "The pagerank citation ranking: Bringing order to the web. Technical Report 1999-66",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Page",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Brin",
"suffix": ""
},
{
"first": "Rajeev",
"middle": [],
"last": "Motwani",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Winograd",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation rank- ing: Bringing order to the web. Technical Re- port 1999-66, Stanford InfoLab. Previous number = SIDL-WP-1999-0120.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. ACL.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. ACL.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "WiC: the word-in-context dataset for evaluating context-sensitive meaning representations",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1267--1273",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1128"
]
},
"num": null,
"urls": [],
"raw_text": "Mohammad Taher Pilehvar and Jose Camacho- Collados. 2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning represen- tations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 1267-1273, Minneapolis, Minnesota. ACL.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "De-conflated semantic representations",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1680--1690",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1174"
]
},
"num": null,
"urls": [],
"raw_text": "Mohammad Taher Pilehvar and Nigel Collier. 2016. De-conflated semantic representations. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1680-1690, Austin, Texas. ACL.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "A large-scale pseudoword-based evaluation framework for state-of-the-art word sense disambiguation",
"authors": [
{
"first": "Mohammad Taher",
"middle": [],
"last": "Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics",
"volume": "40",
"issue": "4",
"pages": "837--881",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00202"
]
},
"num": null,
"urls": [],
"raw_text": "Mohammad Taher Pilehvar and Roberto Navigli. 2014. A large-scale pseudoword-based evaluation frame- work for state-of-the-art word sense disambiguation. Computational Linguistics, 40(4):837-881.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Towards a Seamless Integration of Word Senses into Downstream NLP Applications",
"authors": [
{
"first": "Mohammed Taher",
"middle": [],
"last": "Pilehvar",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017)",
"volume": "",
"issue": "",
"pages": "1857--1869",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammed Taher Pilehvar, Jos\u00e9 Camacho-Collados, Roberto Navigli, and Nigel Collier. 2017. Towards a Seamless Integration of Word Senses into Down- stream NLP Applications. In Proc. of the 55th An- nual Meeting of the Association for Computational Linguistics (ACL 2017), pages 1857-1869, Vancou- ver, Canada. ACL.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "More is not always better: balancing sense distributions for all-words word sense disambiguation",
"authors": [
{
"first": "Marten",
"middle": [],
"last": "Postma",
"suffix": ""
},
{
"first": "Ruben",
"middle": [
"Izquierdo"
],
"last": "Bevia",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "3496--3506",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marten Postma, Ruben Izquierdo Bevia, and Piek Vossen. 2016. More is not always better: balanc- ing sense distributions for all-words word sense dis- ambiguation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3496-3506, Osaka, Japan. The COLING 2016 Organizing Com- mittee.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Word sense disambiguation: A unified evaluation framework and empirical comparison",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Raganato",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Camacho-Collados",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "99--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Raganato, Jose Camacho-Collados, and Roberto Navigli. 2017. Word sense disambiguation: A unified evaluation framework and empirical com- parison. In Proceedings of the 15th Conference of the European Chapter of the Association for Compu- tational Linguistics: Volume 1, Long Papers, pages 99-110, Valencia, Spain. ACL.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "AutoExtend: Extending word embeddings to embeddings for synsets and lexemes",
"authors": [
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1793--1803",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1173"
]
},
"num": null,
"urls": [],
"raw_text": "Sascha Rothe and Hinrich Sch\u00fctze. 2015. AutoEx- tend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 1793-1803, Beijing, China. ACL.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Evolutionary stable strategies and game dynamics",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Taylor",
"suffix": ""
},
{
"first": "Leo",
"middle": [
"B"
],
"last": "Jonker",
"suffix": ""
}
],
"year": 1978,
"venue": "Mathematical Biosciences",
"volume": "40",
"issue": "1",
"pages": "145--156",
"other_ids": {
"DOI": [
"10.1016/0025-5564(78)90077-9"
]
},
"num": null,
"urls": [],
"raw_text": "Peter D. Taylor and Leo B. Jonker. 1978. Evolutionary stable strategies and game dynamics. Mathematical Biosciences, 40(1):145 -156.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "A gametheoretic approach to word sense disambiguation",
"authors": [
{
"first": "Rocco",
"middle": [],
"last": "Tripodi",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Pelillo",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "1",
"pages": "31--70",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00274"
]
},
"num": null,
"urls": [],
"raw_text": "Rocco Tripodi and Marcello Pelillo. 2017. A game- theoretic approach to word sense disambiguation. Computational Linguistics, 43(1):31-70.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "FastSense: An efficient word sense disambiguation classifier",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Uslu",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Mehler",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Baumartz",
"suffix": ""
},
{
"first": "Wahed",
"middle": [],
"last": "Hemati",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Uslu, Alexander Mehler, Daniel Baumartz, and Wahed Hemati. 2018. FastSense: An efficient word sense disambiguation classifier. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan. ELRA.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran As- sociates, Inc.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Evolutionary game theory",
"authors": [
{
"first": "J\u00f6rgen",
"middle": [
"W"
],
"last": "Weibull",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rgen W. Weibull. 1997. Evolutionary game theory. MIT press.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Semi-supervised word sense disambiguation with neural models",
"authors": [
{
"first": "Dayu",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Doherty",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Altendorf",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1374--1385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dayu Yuan, Julian Richardson, Ryan Doherty, Colin Evans, and Eric Altendorf. 2016. Semi-supervised word sense disambiguation with neural models. In Proceedings of COLING 2016, the 26th Interna- tional Conference on Computational Linguistics: Technical Papers, pages 1374-1385, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "It makes sense: A wide-coverage word sense disambiguation system for free text",
"authors": [
{
"first": "Zhi",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the ACL 2010 System Demonstrations",
"volume": "",
"issue": "",
"pages": "78--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In Proceedings of the ACL 2010 Sys- tem Demonstrations, pages 78-83, Uppsala, Swe- den. ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Generic scheme of the model. \u2022, \u00d7 and \u03c3 refer to elementwise multiplication, matrix multiplication and normalization, respectively."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Performances of the model on the union of all datasets. The results are presented as F1 for all combinations of word and sense embeddings. Word vectors are on the rows and sense vectors on the columns."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Correct and wrong answers given by WSDG \u03b1 grouped by number of sensesFigure 4: Correct and wrong answers given by WSDG \u03b1 per sense rank."
},
"TABREF2": {
"num": null,
"text": "Comparison using different priors.",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF4": {
"num": null,
"text": "Performance on the WiC dataset.",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\">Pearson Spearman MSE</td></tr><tr><td>sense</td><td>46.5</td><td>43.9</td><td>7.9</td></tr><tr><td>word</td><td>39.8</td><td>39.9</td><td>8.6</td></tr></table>"
},
"TABREF5": {
"num": null,
"text": "WSDG \u03b1 results on the SICK dataset.",
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}