{ "paper_id": "D19-1038", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:05:58.013106Z" }, "title": "Neural Cross-Lingual Relation Extraction Based on Bilingual Word Embedding Mapping", "authors": [ { "first": "Jian", "middle": [], "last": "Ni", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Research AI", "location": { "addrLine": "1101 Kitchawan Road", "postCode": "10598", "settlement": "Yorktown Heights", "region": "NY", "country": "USA" } }, "email": "" }, { "first": "Radu", "middle": [], "last": "Florian", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Research AI", "location": { "addrLine": "1101 Kitchawan Road", "postCode": "10598", "settlement": "Yorktown Heights", "region": "NY", "country": "USA" } }, "email": "raduf@us.ibm.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Relation extraction (RE) seeks to detect and classify semantic relationships between entities, which provides useful information for many NLP applications. Since the state-ofthe-art RE models require large amounts of manually annotated data and language-specific resources to achieve high accuracy, it is very challenging to transfer an RE model of a resource-rich language to a resource-poor language. In this paper, we propose a new approach for cross-lingual RE model transfer based on bilingual word embedding mapping. It projects word embeddings from a target language to a source language, so that a welltrained source-language neural network RE model can be directly applied to the target language. Experiment results show that the proposed approach achieves very good performance for a number of target languages on both in-house and open datasets, using a small bilingual dictionary with only 1K word pairs.", "pdf_parse": { "paper_id": "D19-1038", "_pdf_hash": "", "abstract": [ { "text": "Relation extraction (RE) seeks to detect and classify semantic relationships between entities, which provides useful information for many NLP applications. Since the state-ofthe-art RE models require large amounts of manually annotated data and language-specific resources to achieve high accuracy, it is very challenging to transfer an RE model of a resource-rich language to a resource-poor language. In this paper, we propose a new approach for cross-lingual RE model transfer based on bilingual word embedding mapping. It projects word embeddings from a target language to a source language, so that a welltrained source-language neural network RE model can be directly applied to the target language. Experiment results show that the proposed approach achieves very good performance for a number of target languages on both in-house and open datasets, using a small bilingual dictionary with only 1K word pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Relation extraction (RE) is an important information extraction task that seeks to detect and classify semantic relationships between entities like persons, organizations, geo-political entities, locations, and events. It provides useful information for many NLP applications such as knowledge base construction, text mining and question answering. For example, the entity Washington, D.C. 
and the entity United States have a CapitalOf relationship, and extraction of such relationships can help answer questions like \"What is the capital city of the United States?\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Traditional RE models (e.g., Zelenko et al. (2003) ; Kambhatla (2004) ; Li and Ji (2014) ) require careful feature engineering to derive and combine various lexical, syntactic and semantic features. Recently, neural network RE models (e.g., Zeng et al. (2014) ; dos Santos et al. 2015; Miwa and Bansal (2016) ; Nguyen and Grishman (2016)) have become very successful. These models employ a certain level of automatic feature learning by using word embeddings, which significantly simplifies the feature engineering task while considerably improving the accuracy, achieving the state-of-the-art performance for relation extraction.", "cite_spans": [ { "start": 29, "end": 50, "text": "Zelenko et al. (2003)", "ref_id": "BIBREF37" }, { "start": 53, "end": 69, "text": "Kambhatla (2004)", "ref_id": "BIBREF13" }, { "start": 72, "end": 88, "text": "Li and Ji (2014)", "ref_id": "BIBREF17" }, { "start": 241, "end": 259, "text": "Zeng et al. (2014)", "ref_id": "BIBREF38" }, { "start": 286, "end": 308, "text": "Miwa and Bansal (2016)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "All the above RE models are supervised machine learning models that need to be trained with large amounts of manually annotated RE data to achieve high accuracy. However, annotating RE data by human is expensive and timeconsuming, and can be quite difficult for a new language. Moreover, most RE models require language-specific resources such as dependency parsers and part-of-speech (POS) taggers, which also makes it very challenging to transfer an RE model of a resource-rich language to a resourcepoor language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are a few existing weakly supervised cross-lingual RE approaches that require no human annotation in the target languages, e.g., Kim et al. (2010) ; Kim and Lee (2012) ; Faruqui and Kumar (2015) ; Zou et al. (2018) . However, the existing approaches require aligned parallel corpora or machine translation systems, which may not be readily available in practice.", "cite_spans": [ { "start": 135, "end": 152, "text": "Kim et al. (2010)", "ref_id": "BIBREF14" }, { "start": 155, "end": 173, "text": "Kim and Lee (2012)", "ref_id": "BIBREF15" }, { "start": 176, "end": 200, "text": "Faruqui and Kumar (2015)", "ref_id": "BIBREF6" }, { "start": 203, "end": 220, "text": "Zou et al. (2018)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we make the following contributions to cross-lingual RE:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a new approach for direct crosslingual RE model transfer based on bilingual word embedding mapping. 
It projects word embeddings from a target language to a source language (e.g., English), so that a well-trained source-language RE model can be directly applied to the target language, with no manually annotated RE data needed for the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We design a deep neural network architecture for the source-language (English) RE model that uses word embeddings and generic language-independent features as the input. The English RE model achieves thestate-of-the-art performance without using language-specific resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We conduct extensive experiments which show that the proposed approach achieves very good performance (up to 79% of the accuracy of the supervised target-language RE model) for a number of target languages on both in-house and the ACE05 datasets (Walker et al., 2006) , using a small bilingual dictionary with only 1K word pairs. To the best of our knowledge, this is the first work that includes empirical studies for cross-lingual RE on several languages across a variety of language families, without using aligned parallel corpora or machine translation systems.", "cite_spans": [ { "start": 248, "end": 269, "text": "(Walker et al., 2006)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We organize the paper as follows. In Section 2 we provide an overview of our approach. In Section 3 we describe how to build monolingual word embeddings and learn a linear mapping between two languages. In Section 4 we present a neural network architecture for the source-language (English). In Section 5 we evaluate the performance of the proposed approach for a number of target languages. We discuss related work in Section 6 and conclude the paper in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We summarize the main steps of our neural crosslingual RE model transfer approach as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of the Approach", "sec_num": "2" }, { "text": "1. Build word embeddings for the source language and the target language separately using monolingual data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of the Approach", "sec_num": "2" }, { "text": "2. Learn a linear mapping that projects the target-language word embeddings into the source-language embedding space using a small bilingual dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of the Approach", "sec_num": "2" }, { "text": "3. Build a neural network source-language RE model that uses word embeddings and generic language-independent features as the input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of the Approach", "sec_num": "2" }, { "text": "4. For a target-language sentence and any two entities in it, project the word embeddings of the words in the sentence to the sourcelanguage word embeddings using the linear mapping, and then apply the source-language RE model on the projected word embeddings to classify the relationship between the two entities. 
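To make the four steps above concrete, the following is a minimal sketch of how a single target-language sentence would be classified at inference time. It assumes the word embeddings are NumPy vectors, and the helper names (tgt_embeddings, mapping_M, en_re_model and its classify method) are illustrative placeholders rather than part of the paper's implementation.

```python
def cross_lingual_re(tgt_sentence, entity1, entity2,
                     tgt_embeddings,   # step 1: monolingual target-language embeddings (dict of NumPy vectors)
                     mapping_M,        # step 2: d x d linear map from target space to English space
                     en_re_model):     # step 3: neural RE model trained on English data
    # Step 4: project every word of the target-language sentence into the
    # English embedding space, then apply the English model unchanged.
    projected = [mapping_M @ tgt_embeddings[word] for word in tgt_sentence]
    return en_re_model.classify(projected, entity1, entity2)
```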
An example is shown in Figure 1 , where the target language is Portuguese and the source language is English.", "cite_spans": [], "ref_spans": [ { "start": 338, "end": 346, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Overview of the Approach", "sec_num": "2" }, { "text": "We will describe each component of our approach in the subsequent sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of the Approach", "sec_num": "2" }, { "text": "In recent years, vector representations of words, known as word embeddings, become ubiquitous for many NLP applications (Collobert et al., 2011; Mikolov et al., 2013a; Pennington et al., 2014) .", "cite_spans": [ { "start": 120, "end": 144, "text": "(Collobert et al., 2011;", "ref_id": "BIBREF3" }, { "start": 145, "end": 167, "text": "Mikolov et al., 2013a;", "ref_id": "BIBREF20" }, { "start": 168, "end": 192, "text": "Pennington et al., 2014)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Word Embeddings", "sec_num": "3" }, { "text": "A monolingual word embedding model maps words in the vocabulary V of a language to realvalued vectors in R d\u00d71 . The dimension of the vector space d is normally much smaller than the size of the vocabulary V = |V| for efficient representation. It also aims to capture semantic similarities between the words based on their distributional properties in large samples of monolingual data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Word Embeddings", "sec_num": "3" }, { "text": "Cross-lingual word embedding models try to build word embeddings across multiple languages (Upadhyay et al., 2016; Ruder et al., 2017) . One approach builds monolingual word embeddings separately and then maps them to the same vector space using a bilingual dictionary (Mikolov et al., 2013b; Faruqui and Dyer, 2014) . Another approach builds multilingual word embeddings in a shared vector space simultaneously, by generating mixed language corpora using aligned sentences (Luong et al., 2015; .", "cite_spans": [ { "start": 91, "end": 114, "text": "(Upadhyay et al., 2016;", "ref_id": "BIBREF32" }, { "start": 115, "end": 134, "text": "Ruder et al., 2017)", "ref_id": "BIBREF28" }, { "start": 269, "end": 292, "text": "(Mikolov et al., 2013b;", "ref_id": "BIBREF21" }, { "start": 293, "end": 316, "text": "Faruqui and Dyer, 2014)", "ref_id": "BIBREF5" }, { "start": 474, "end": 494, "text": "(Luong et al., 2015;", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Word Embeddings", "sec_num": "3" }, { "text": "In this paper, we adopt the technique in (Mikolov et al., 2013b) because it only requires a small bilingual dictionary of aligned word pairs, and does not require parallel corpora of aligned sentences which could be more difficult to obtain.", "cite_spans": [ { "start": 41, "end": 64, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Word Embeddings", "sec_num": "3" }, { "text": "To build monolingual word embeddings for the source and target languages, we use a variant of the Continuous Bag-of-Words (CBOW) word2vec model (Mikolov et al., 2013a) . The standard CBOW model has two matrices, the input word matrixX \u2208 R d\u00d7V and the output word matrix X \u2208 R d\u00d7V . 
For the ith word w_i in V, let e(w_i) \u2208 R^{V\u00d71} be a one-hot vector with 1 at index i and 0s at other indexes, so that \\tilde{x}_i = \\tilde{X}e(w_i) (the ith column of \\tilde{X}) is the input vector representation of word w_i, and x_i = Xe(w_i) (the ith column of X) is the output vector representation (i.e., word embedding) of word w_i.", "cite_spans": [ { "start": 144, "end": 167, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual Word Embeddings", "sec_num": "3.1" }, { "text": "Given a sequence of training words w_1, w_2, ..., w_N, the CBOW model seeks to predict a target word w_t using a window of 2c context words surrounding w_t, by maximizing the following objective function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Word Embeddings", "sec_num": "3.1" }, { "text": "L = \\frac{1}{N} \\sum_{t=1}^{N} \\log P(w_t | w_{t-c}, ..., w_{t-1}, w_{t+1}, ..., w_{t+c})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Word Embeddings", "sec_num": "3.1" }, { "text": "The conditional probability is calculated using a softmax function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Word Embeddings", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(w_t | w_{t-c}, ..., w_{t+c}) = \\frac{\\exp(x_t^T \\tilde{x}_{c(t)})}{\\sum_{i=1}^{V} \\exp(x_i^T \\tilde{x}_{c(t)})}", "eq_num": "(1)" } ], "section": "Monolingual Word Embeddings", "sec_num": "3.1" }, { "text": "where x_t = Xe(w_t) is the output vector representation of word w_t, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Word Embeddings", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\tilde{x}_{c(t)} = \\sum_{-c \\le j \\le c, j \\ne 0} \\tilde{X} e(w_{t+j})", "eq_num": "(2)" } ], "section": "Monolingual Word Embeddings", "sec_num": "3.1" }, { "text": "is the sum of the input vector representations of the context words. In our variant of the CBOW model, we use a separate input word matrix \\tilde{X}_j for a context word at position j, \u2212c \u2264 j \u2264 c, j \u2260 0. In addition, we employ weights that decay with the distances of the context words to the target word. Under these modifications, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual Word Embeddings", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\tilde{x}^{new}_{c(t)} = \\sum_{-c \\le j \\le c, j \\ne 0} \\frac{1}{|j|} \\tilde{X}_j e(w_{t+j})", "eq_num": "(3)" } ], "section": "Monolingual Word Embeddings", "sec_num": "3.1" }, { "text": "We use the variant to build monolingual word embeddings because experiments on named entity recognition and word similarity tasks showed this variant leads to small improvements over the standard CBOW model (Ni et al., 2017) .", "cite_spans": [ { "start": 207, "end": 224, "text": "(Ni et al., 2017)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual Word Embeddings", "sec_num": "3.1" }, { "text": "Mikolov et al. 
(2013b) observed that word embeddings of different languages often have similar geometric arrangements, and suggested to learn a linear mapping between the vector spaces.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Embedding Mapping", "sec_num": "3.2" }, { "text": "Let D be a bilingual dictionary with aligned word pairs (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Embedding Mapping", "sec_num": "3.2" }, { "text": "w i , v i ) i=1,.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Embedding Mapping", "sec_num": "3.2" }, { "text": "..,D between a source language s and a target language t, where w i is a source-language word and v i is the translation of w i in the target language. Let x i \u2208 R d\u00d71 be the word embedding of the source-language word w i , y i \u2208 R d\u00d71 be the word embedding of the targetlanguage word v i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Embedding Mapping", "sec_num": "3.2" }, { "text": "We find a linear mapping (matrix) M t\u2192s such that M t\u2192s y i approximates x i , by solving the fol-lowing least squares problem using the dictionary as the training set:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Embedding Mapping", "sec_num": "3.2" }, { "text": "M t\u2192s = arg min M\u2208R d\u00d7d D i=1 ||x i \u2212 My i || 2 (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Embedding Mapping", "sec_num": "3.2" }, { "text": "Using M t\u2192s , for any target-language word v with word embedding y, we can project it into the source-language embedding space as M t\u2192s y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bilingual Word Embedding Mapping", "sec_num": "3.2" }, { "text": "To ensure that all the training instances in the dictionary D contribute equally to the optimization objective in (4) and to preserve vector norms after projection, we have tried length normalization and orthogonal transformation for learning the bilingual mapping as in (Xing et al., 2015; Artetxe et al., 2016; Smith et al., 2017) . 
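As a concrete illustration of the optimization in (4), the following NumPy sketch learns the mapping from a list of dictionary pairs, with the orthogonal (Procrustes) variant included for comparison. It assumes the monolingual embeddings are stored in Python dicts of NumPy vectors; the function and variable names are ours, not the authors'.

```python
import numpy as np

def learn_mapping(pairs, src_vecs, tgt_vecs):
    """Solve eq. (4): find M minimizing sum_i ||x_i - M y_i||^2 over dictionary pairs.

    pairs    : list of (source_word, target_word) translations, e.g. 1K entries
    src_vecs : dict of source-language word -> d-dimensional NumPy vector
    tgt_vecs : dict of target-language word -> d-dimensional NumPy vector
    """
    X = np.stack([src_vecs[s] for s, t in pairs])   # D x d source-side embeddings
    Y = np.stack([tgt_vecs[t] for s, t in pairs])   # D x d target-side embeddings
    # Ordinary least squares: find B with Y @ B ~= X, then M = B.T so that M y_i ~= x_i.
    B, _, _, _ = np.linalg.lstsq(Y, X, rcond=None)
    return B.T                                      # M_{t->s}, shape d x d

def learn_orthogonal_mapping(X, Y):
    # Procrustes solution of the orthogonality-constrained problem via SVD,
    # assuming the rows of X and Y are length-normalized dictionary embeddings.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Projecting a target-language embedding y into the source space is then just M @ y.
```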
First, we normalize the source-language and target-language word embeddings to be unit vectors: x = x ||x|| for each source-language word embedding x, and y = y ||y|| for each target-language word embedding y.", "cite_spans": [ { "start": 271, "end": 290, "text": "(Xing et al., 2015;", "ref_id": "BIBREF36" }, { "start": 291, "end": 312, "text": "Artetxe et al., 2016;", "ref_id": "BIBREF0" }, { "start": 313, "end": 332, "text": "Smith et al., 2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Length Normalization and Orthogonal Transformation", "sec_num": "3.2.1" }, { "text": "Next, we add an orthogonality constraint to (4) such that M is an orthogonal matrix, i.e., M T M = I where I denotes the identity matrix:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Length Normalization and Orthogonal Transformation", "sec_num": "3.2.1" }, { "text": "M O t\u2192s = arg min M\u2208R d\u00d7d ,M T M=I D i=1 ||x i \u2212 My i || 2 (5) M O", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Length Normalization and Orthogonal Transformation", "sec_num": "3.2.1" }, { "text": "t\u2192s can be computed using singular-value decomposition (SVD).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Length Normalization and Orthogonal Transformation", "sec_num": "3.2.1" }, { "text": "The mapping learned in (4) or (5) requires a seed dictionary. To relax this requirement, Artetxe et al. 2017proposed a self-learning procedure that can be combined with a dictionary-based mapping technique. Starting with a small seed dictionary, the procedure iteratively 1) learns a mapping using the current dictionary; and 2) computes a new dictionary using the learned mapping. Artetxe et al. (2018) proposed an unsupervised method to learn the bilingual mapping without using a seed dictionary. The method first uses a heuristic to build an initial dictionary that aligns the vocabularies of two languages, and then applies a robust self-learning procedure to iteratively improve the mapping. Another unsuper-vised method based on adversarial training was proposed in Conneau et al. (2018) .", "cite_spans": [ { "start": 382, "end": 403, "text": "Artetxe et al. (2018)", "ref_id": "BIBREF2" }, { "start": 773, "end": 794, "text": "Conneau et al. (2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised and Unsupervised Mappings", "sec_num": "3.2.2" }, { "text": "We compare the performance of different mappings for cross-lingual RE model transfer in Section 5.3.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-Supervised and Unsupervised Mappings", "sec_num": "3.2.2" }, { "text": "For any two entities in a sentence, an RE model determines whether these two entities have a relationship, and if yes, classifies the relationship into one of the pre-defined relation types. We focus on neural network RE models since these models achieve the state-of-the-art performance for relation extraction. Most importantly, neural network RE models use word embeddings as the input, which are amenable to cross-lingual model transfer via cross-lingual word embeddings. In this paper, we use English as the source language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Network RE Models", "sec_num": "4" }, { "text": "Our neural network architecture has four layers. The first layer is the embedding layer which maps input words in a sentence to word embeddings. 
The second layer is a context layer which transforms the word embeddings to context-aware vector representations using a recurrent or convolutional neural network layer. The third layer is a summarization layer which summarizes the vectors in a sentence by grouping and pooling. The final layer is the output layer which returns the classification label for the relation type.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Network RE Models", "sec_num": "4" }, { "text": "For an English sentence with n words s = (w 1 , w 2 , ..., w n ), the embedding layer maps each word w t to a real-valued vector (word embedding) x t \u2208 R d\u00d71 using the English word embedding model (Section 3.1). In addition, for each entity m in the sentence, the embedding layer maps its entity type to a real-valued vector (entity label embedding) l m \u2208 R dm\u00d71 (initialized randomly). In our experiments we use d = 300 and d m = 50.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedding Layer", "sec_num": "4.1" }, { "text": "Given the word embeddings x t 's of the words in the sentence, the context layer tries to build a sentence-context-aware vector representation for each word. We consider two types of neural network layers that aim to achieve this.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Layer", "sec_num": "4.2" }, { "text": "The first type of context layer is based on Long Short-Term Memory (LSTM) type recurrent neural networks (Hochreiter and Schmidhuber, 1997; Graves and Schmidhuber, 2005) . Recurrent neural networks (RNNs) are a class of neural networks that operate on sequential data such as sequences of words. LSTM networks are a type of RNNs that have been invented to better capture long-range dependencies in sequential data.", "cite_spans": [ { "start": 105, "end": 139, "text": "(Hochreiter and Schmidhuber, 1997;", "ref_id": "BIBREF12" }, { "start": 140, "end": 169, "text": "Graves and Schmidhuber, 2005)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Bi-LSTM Context Layer", "sec_num": "4.2.1" }, { "text": "We pass the word embeddings x t 's to a forward and a backward LSTM layer. A forward or backward LSTM layer consists of a set of recurrently connected blocks known as memory blocks. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bi-LSTM Context Layer", "sec_num": "4.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2212 \u2192 i t = \u03c3 \u2212 \u2192 W i x t + \u2212 \u2192 U i \u2212 \u2192 h t\u22121 + \u2212 \u2192 b i \u2212 \u2192 f t = \u03c3 \u2212 \u2192 W f x t + \u2212 \u2192 U f \u2212 \u2192 h t\u22121 + \u2212 \u2192 b f \u2212 \u2192 o t = \u03c3 \u2212 \u2192 W o x t + \u2212 \u2192 U o \u2212 \u2192 h t\u22121 + \u2212 \u2192 b o \u2212 \u2192 c t = \u2212 \u2192 f t \u2212 \u2192 c t\u22121 + \u2212 \u2192 i t tanh \u2212 \u2192 W c x t + \u2212 \u2192 U c \u2212 \u2192 h t\u22121 + \u2212 \u2192 b c \u2212 \u2192 h t = \u2212 \u2192 o t tanh( \u2212 \u2192 c t )", "eq_num": "(6)" } ], "section": "Bi-LSTM Context Layer", "sec_num": "4.2.1" }, { "text": "where \u03c3 is the element-wise sigmoid function and is the element-wise multiplication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bi-LSTM Context Layer", "sec_num": "4.2.1" }, { "text": "The hidden state vector \u2212 \u2192 h t in the forward LSTM layer incorporates information from the left (past) tokens of w t in the sentence. Similarly, we can compute the hidden state vector \u2190 \u2212 h t in the backward LSTM layer, which incorporates information from the right (future) tokens of w t in the sentence. The concatenation of the two vectors", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bi-LSTM Context Layer", "sec_num": "4.2.1" }, { "text": "h t = [ \u2212 \u2192 h t , \u2190 \u2212 h t ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bi-LSTM Context Layer", "sec_num": "4.2.1" }, { "text": "is a good representation of the word w t with both left and right contextual information in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bi-LSTM Context Layer", "sec_num": "4.2.1" }, { "text": "The second type of context layer is based on Convolutional Neural Networks (CNNs) (Zeng et al., 2014; dos Santos et al., 2015) , which applies convolution-like operation on successive windows of size k around each word in the sentence. Let z t = [x t\u2212(k\u22121)/2 , ..., x t+(k\u22121)/2 ] be the concatenation of k word embeddings around w t . The convolutional layer computes a hidden state vector", "cite_spans": [ { "start": 82, "end": 101, "text": "(Zeng et al., 2014;", "ref_id": "BIBREF38" }, { "start": 102, "end": 126, "text": "dos Santos et al., 2015)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "CNN Context Layer", "sec_num": "4.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h t = tanh(Wz t + b)", "eq_num": "(7)" } ], "section": "CNN Context Layer", "sec_num": "4.2.2" }, { "text": "for each word w t , where W is a weight matrix and b is a bias vector, and tanh(\u2022) is the element-wise hyperbolic tangent function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CNN Context Layer", "sec_num": "4.2.2" }, { "text": "After the context layer, the sentence (w 1 , w 2 , ..., w n ) is represented by (h 1 , ...., h n ). Suppose m 1 = (w b 1 , .., w e 1 ) and m 2 = (w b 2 , .., w e 2 ) are two entities in the sentence where m 1 is on the left of m 2 (i.e., e 1 < b 2 ). 
As different sentences and entities may have various lengths, the summarization layer tries to build a fixed-length vector that best summarizes the representations of the sentence and the two entities for relation type classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summarization Layer", "sec_num": "4.3" }, { "text": "We divide the hidden state vectors h t 's into 5 groups:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summarization Layer", "sec_num": "4.3" }, { "text": "\u2022 G 1 = {h 1 , .., h b 1 \u22121 } includes vectors that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summarization Layer", "sec_num": "4.3" }, { "text": "are left to the first entity m 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summarization Layer", "sec_num": "4.3" }, { "text": "\u2022 G 2 = {h b 1 , .., h e 1 } includes vectors that are in the first entity m 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summarization Layer", "sec_num": "4.3" }, { "text": "\u2022 G 3 = {h e 1 +1 , .., h b 2 \u22121 } includes vectors that are between the two entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summarization Layer", "sec_num": "4.3" }, { "text": "\u2022 G 4 = {h b 2 , .., h e 2 } includes vectors that are in the second entity m 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summarization Layer", "sec_num": "4.3" }, { "text": "\u2022 G 5 = {h e 2 +1 , .., h n } includes vectors that are right to the second entity m 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summarization Layer", "sec_num": "4.3" }, { "text": "We perform element-wise max pooling among the vectors in each group:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summarization Layer", "sec_num": "4.3" }, { "text": "h G i (j) = max h\u2208G i h(j), 1 \u2264 j \u2264 d h , 1 \u2264 i \u2264 5 (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summarization Layer", "sec_num": "4.3" }, { "text": "where d h is the dimension of the hidden state vectors. Concatenating the h G i 's we get a fixedlength vector", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summarization Layer", "sec_num": "4.3" }, { "text": "h s = [h G 1 , ..., h G 5 ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summarization Layer", "sec_num": "4.3" }, { "text": "The output layer receives inputs from the previous layers (the summarization vector h s , the entity label embeddings l m 1 and l m 2 for the two entities under consideration) and returns a probability distribution over the relation type labels:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output Layer", "sec_num": "4.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p = softmax W s h s +W m 1 l m 1 +W m 2 l m 2 +b o", "eq_num": "(9)" } ], "section": "Output Layer", "sec_num": "4.4" }, { "text": "Given the word embeddings of a sequence of words in a target language t, (y 1 , ..., y n ), we project them into the English embedding space by applying the linear mapping M t\u2192s learned in Section 3.2: (M t\u2192s y 1 , M t\u2192s y 2 , ..., M t\u2192s y n ). The neural network English RE model is then applied on the projected word embeddings and the entity label embeddings (which are language independent) to perform relationship classification. 
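To illustrate the grouping and pooling just described, here is a small NumPy sketch of the summarization step in eq. (8). Indices are 0-based in the code, and the zero-vector fallback for an empty group (e.g., when the two entities are adjacent or sentence-initial) is our assumption, as the paper does not specify how empty groups are handled.

```python
import numpy as np

def summarize(H, b1, e1, b2, e2):
    """Element-wise max pooling over the 5 groups defined by two entity mentions.

    H        : (n, d_h) array of context-layer hidden vectors h_1..h_n
    b1, e1   : inclusive start/end token indices of the first entity mention
    b2, e2   : inclusive start/end token indices of the second entity mention (e1 < b2)
    Returns the fixed-length summarization vector h_s of size 5 * d_h.
    """
    d_h = H.shape[1]
    groups = [H[:b1],            # G1: left of the first entity
              H[b1:e1 + 1],      # G2: inside the first entity
              H[e1 + 1:b2],      # G3: between the two entities
              H[b2:e2 + 1],      # G4: inside the second entity
              H[e2 + 1:]]        # G5: right of the second entity
    pooled = []
    for G in groups:
        if len(G) == 0:
            pooled.append(np.zeros(d_h))   # assumed handling of an empty group
        else:
            pooled.append(G.max(axis=0))   # element-wise max pooling, eq. (8)
    return np.concatenate(pooled)          # h_s = [h_G1, ..., h_G5]

# The output layer (eq. 9) then computes softmax(Ws @ h_s + Wm1 @ l_m1 + Wm2 @ l_m2 + bo).
```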
Note that our models do not use languagespecific resources such as dependency parsers or POS taggers because these resources might not be readily available for a target language. Also our models do not use precise word position features since word positions in sentences can vary a lot across languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual RE Model Transfer", "sec_num": "4.5" }, { "text": "In this section, we evaluate the performance of the proposed cross-lingual RE approach on both in-house dataset and the ACE (Automatic Content Extraction) 2005 multilingual dataset (Walker et al., 2006) .", "cite_spans": [ { "start": 181, "end": 202, "text": "(Walker et al., 2006)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Our in-house dataset includes manually annotated RE data for 6 languages: English, German, Spanish, Italian, Japanese and Portuguese. It defines 56 entity types (e.g., Person, Organization, Geo-Political Entity, Location, Facility, Time, Event Violence, etc.) and 53 relation types between the entities (e.g., AgentOf, LocatedAt, PartOf, TimeOf, AffectedBy, etc.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "The ACE05 dataset includes manually annotated RE data for 3 languages: English, Arabic and Chinese. It defines 7 entity types (Person, Organization, Geo-Political Entity, Location, Facility, Weapon, Vehicle) and 6 relation types between the entities (Agent-Artifact, General-Affiliation, ORG-Affiliation, Part-Whole, Personal-Social, Physical).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "For both datasets, we create a class label \"O\" to denote that the two entities under consideration do not have a relationship belonging to one of the relation types of interest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "We build 3 neural network English RE models under the architecture described in Section 4:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source (English) RE Model Performance", "sec_num": "5.2" }, { "text": "\u2022 The first neural network RE model does not have a context layer and the word embeddings are directly passed to the summarization layer. We call it Pass-Through for short.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source (English) RE Model Performance", "sec_num": "5.2" }, { "text": "\u2022 The second neural network RE model has a Bi-LSTM context layer. We call it Bi-LSTM for short.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source (English) RE Model Performance", "sec_num": "5.2" }, { "text": "Model F1 FCM (S) (Gormley et al., 2015) 55.06 Hybrid FCM (E) (Gormley et al., 2015) 58.26 BIDIRECT (S) (Nguyen and Grishman, 2016) \u2022 The third neural network model has a CNN context layer with a window size 3. 
We call it CNN for short.", "cite_spans": [ { "start": 17, "end": 39, "text": "(Gormley et al., 2015)", "ref_id": "BIBREF7" }, { "start": 61, "end": 83, "text": "(Gormley et al., 2015)", "ref_id": "BIBREF7" }, { "start": 103, "end": 130, "text": "(Nguyen and Grishman, 2016)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Source (English) RE Model Performance", "sec_num": "5.2" }, { "text": "First we compare our neural network English RE models with the state-of-the-art RE models on the ACE05 English data. The ACE05 English data can be divided to 6 different domains: broadcast conversation (bc), broadcast news (bn), telephone conversation (cts), newswire (nw), usenet (un) and webblogs (wl). We apply the same data split in (Plank and Moschitti, 2013; Gormley et al., 2015; Nguyen and Grishman, 2016) , which uses news (the union of bn and nw) as the training set, a half of bc as the development set and the remaining data as the test set.", "cite_spans": [ { "start": 337, "end": 364, "text": "(Plank and Moschitti, 2013;", "ref_id": "BIBREF27" }, { "start": 365, "end": 386, "text": "Gormley et al., 2015;", "ref_id": "BIBREF7" }, { "start": 387, "end": 413, "text": "Nguyen and Grishman, 2016)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Source (English) RE Model Performance", "sec_num": "5.2" }, { "text": "We learn the model parameters using Adam (Kingma and Ba, 2015). We apply dropout (Srivastava et al., 2014) to the hidden layers to reduce overfitting. The development set is used for tuning the model hyperparameters and for early stopping.", "cite_spans": [ { "start": 81, "end": 106, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Source (English) RE Model Performance", "sec_num": "5.2" }, { "text": "In Table 1 we compare our models with the best models in (Gormley et al., 2015) and (Nguyen and Grishman, 2016) . Our Bi-LSTM model outperforms the best model (single or ensemble) in (Gormley et al., 2015) and the best single model in (Nguyen and Grishman, 2016), without using any language-specific resources such as dependency Figure 2 : Cross-lingual RE performance (F 1 score) vs. dictionary size (number of bilingual word pairs for learning the mapping (4)) under the Bi-LSTM English RE model on the target-language development data.", "cite_spans": [ { "start": 57, "end": 79, "text": "(Gormley et al., 2015)", "ref_id": "BIBREF7" }, { "start": 84, "end": 111, "text": "(Nguyen and Grishman, 2016)", "ref_id": "BIBREF24" }, { "start": 183, "end": 205, "text": "(Gormley et al., 2015)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 329, "end": 337, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Source (English) RE Model Performance", "sec_num": "5.2" }, { "text": "In parsers. While the data split in the previous works was motivated by domain adaptation, the focus of this paper is on cross-lingual model transfer, and hence we apply a random data split as follows. For the source language English and each target language, we randomly select 80% of the data as the training set, 10% as the development set, and keep the remaining 10% as the test set. 
The sizes of the sets are summarized in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 428, "end": 435, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "We report the Precision, Recall and F 1 score of the 3 neural network English RE models in Table 3 . Note that adding an additional context layer with either Bi-LSTM or CNN significantly improves the performance of our English RE model, compared with the simple Pass-Through model. Therefore, we will focus on the Bi-LSTM model and the CNN model in the subsequent experiments.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 99, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "We apply the English RE models to the 7 target languages across a variety of language families.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual RE Performance", "sec_num": "5.3" }, { "text": "The bilingual dictionary includes the most frequent target-language words and their translations in English. To determine how many word pairs are needed to learn an effective bilingual word embedding mapping for cross-lingual RE, we first evaluate the performance (F 1 score) of our cross-lingual RE approach on the target-language development sets with an increasing dictionary size, as plotted in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 399, "end": 407, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Dictionary Size", "sec_num": "5.3.1" }, { "text": "We found that for most target languages, once the dictionary size reaches 1K, further increasing the dictionary size may not improve the transfer performance. Therefore, we select the dictionary size to be 1K.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionary Size", "sec_num": "5.3.1" }, { "text": "We compare the performance of cross-lingual RE model transfer under the following bilingual word embedding mappings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Different Mappings", "sec_num": "5.3.2" }, { "text": "\u2022 Regular-1K: the regular mapping learned in (4) using 1K word pairs; \u2022 Orthogonal-1K: the orthogonal mapping with length normalization learned in (5) using 1K word pairs (in this case we train the English RE models with the normalized English word embeddings);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Different Mappings", "sec_num": "5.3.2" }, { "text": "\u2022 Semi-Supervised-1K: the mapping learned with 1K word pairs and improved by the selflearning method in (Artetxe et al., 2017) ;", "cite_spans": [ { "start": 104, "end": 126, "text": "(Artetxe et al., 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison of Different Mappings", "sec_num": "5.3.2" }, { "text": "\u2022 Unsupervised: the mapping learned by the unsupervised method in (Artetxe et al., 2018) .", "cite_spans": [ { "start": 66, "end": 88, "text": "(Artetxe et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison of Different Mappings", "sec_num": "5.3.2" }, { "text": "The results are summarized in Table 4 . The regular mapping outperforms the orthogonal mapping consistently across the target languages. 
While the orthogonal mapping was shown to work better than the regular mapping for the word translation task (Xing et al., 2015; Artetxe et al., 2016; Smith et al., 2017) , our cross-lingual RE approach directly maps target-language word embeddings to the English embedding space without conducting word translations. Moreover, the orthogonal mapping requires length normalization, but we observed that length normalization adversely affects the performance of the English RE models (about 2.0 F 1 points drop).", "cite_spans": [ { "start": 246, "end": 265, "text": "(Xing et al., 2015;", "ref_id": "BIBREF36" }, { "start": 266, "end": 287, "text": "Artetxe et al., 2016;", "ref_id": "BIBREF0" }, { "start": 288, "end": 307, "text": "Smith et al., 2017)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 30, "end": 37, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Comparison of Different Mappings", "sec_num": "5.3.2" }, { "text": "We apply the vecmap toolkit 1 to obtain the semi-supervised and unsupervised mappings. The unsupervised mapping has the lowest average accuracy over the target languages, but it does not require a seed dictionary. Among all the mappings, the regular mapping achieves the best average accuracy over the target languages using a dictionary with only 1K word pairs, and hence we adopt it for the cross-lingual RE task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison of Different Mappings", "sec_num": "5.3.2" }, { "text": "The cross-lingual RE model transfer results for the in-house test data are summarized in Table 5 and the results for the ACE05 test data are summarized in Table 6 , using the regular mapping learned 1 https://github.com/artetxem/vecmap with a bilingual dictionary of size 1K. In the tables, we also provide the performance of the supervised RE model (Bi-LSTM) for each target language, which is trained with a few hundred thousand tokens of manually annotated RE data in the target-language, and may serve as an upper bound for the cross-lingual model transfer performance.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 96, "text": "Table 5", "ref_id": "TABREF9" }, { "start": 155, "end": 162, "text": "Table 6", "ref_id": "TABREF10" } ], "eq_spans": [], "section": "Performance on Test Data", "sec_num": "5.3.3" }, { "text": "Among the 2 neural network models, the Bi-LSTM model achieves a better cross-lingual RE performance than the CNN model for 6 out of the 7 target languages. In terms of absolute performance, the Bi-LSTM model achieves over 40.0 F 1 scores for German, Spanish, Portuguese and Chinese. In terms of relative performance, it reaches over 75% of the accuracy of the supervised targetlanguage RE model for German, Spanish, Italian and Portuguese. While Japanese and Arabic appear to be more difficult to transfer, it still achieves 55% and 52% of the accuracy of the supervised Japanese and Arabic RE model, respectively, without using any manually annotated RE data in Japanese/Arabic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance on Test Data", "sec_num": "5.3.3" }, { "text": "We apply model ensemble to further improve the accuracy of the Bi-LSTM model. We train 5 Bi-LSTM English RE models initiated with different random seeds, apply the 5 models on the target languages, and combine the outputs by selecting the relation type labels with the highest probabilities among the 5 models. 
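One way to realize the combination rule just described is sketched below: stack the per-model probability outputs and, for each instance, choose the label that receives the highest probability from any of the models. This is our reading of the ensemble rule, written as illustrative NumPy code.

```python
import numpy as np

def ensemble_predict(prob_matrices):
    """Combine per-model probability outputs by selecting, per instance,
    the relation label with the highest probability among all models.

    prob_matrices : list of (num_instances, num_labels) arrays, one per trained model
    Returns an array of predicted label indices, one per instance.
    """
    stacked = np.stack(prob_matrices)      # (num_models, num_instances, num_labels)
    best_per_label = stacked.max(axis=0)   # highest probability any model assigns to each label
    return best_per_label.argmax(axis=1)   # label with the overall highest probability
```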
This Ensemble approach improves the single model by 0.6-1.9 F 1 points, except for Arabic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance on Test Data", "sec_num": "5.3.3" }, { "text": "Since our approach projects the target-language word embeddings to the source-language embedding space preserving the word order, it is expected to work better for a target language that has more similar word order as the source language. This has been verified by our experiments. The source language, English, belongs to the SVO (Subject, Verb, Object) language family where in a sentence the subject comes first, the verb second, and the object third. Spanish, Italian, Portuguese, German (in conventional typology) and Chinese also belong to the SVO language family, and our approach achieves over 70% relative accuracy for these languages. On the other hand, Japanese belongs to the SOV (Subject, Object, Verb) language family and Arabic belongs to the VSO (Verb, Subject, Object) language family, and our approach achieves lower relative accuracy for these two languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3.4" }, { "text": "There are a few weakly supervised cross-lingual RE approaches. Kim et al. (2010) and Kim and Lee (2012) project annotated English RE data to Korean to create weakly labeled training data via aligned parallel corpora. Faruqui and Kumar (2015) translates a target-language sentence into English, performs RE in English, and then projects the relation phrases back to the target-language sentence. Zou et al. (2018) proposes an adversarial feature adaptation approach for cross-lingual relation classification, which uses a machine translation system to translate source-language sentences into target-language sentences. Unlike the existing approaches, our approach does not require aligned parallel corpora or machine translation systems. There are also several multilingual RE approaches, e.g., Verga et al. (2016) ; Min et al. (2017); Lin et al. (2017) , where the focus is to improve monolingual RE by jointly modeling texts in multiple languages.", "cite_spans": [ { "start": 63, "end": 80, "text": "Kim et al. (2010)", "ref_id": "BIBREF14" }, { "start": 85, "end": 103, "text": "Kim and Lee (2012)", "ref_id": "BIBREF15" }, { "start": 217, "end": 241, "text": "Faruqui and Kumar (2015)", "ref_id": "BIBREF6" }, { "start": 395, "end": 412, "text": "Zou et al. (2018)", "ref_id": "BIBREF39" }, { "start": 795, "end": 814, "text": "Verga et al. (2016)", "ref_id": "BIBREF33" }, { "start": 836, "end": 853, "text": "Lin et al. (2017)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Many cross-lingual word embedding models have been developed recently (Upadhyay et al., 2016; Ruder et al., 2017 ). An important application of cross-lingual word embeddings is to enable cross-lingual model transfer. In this paper, we apply the bilingual word embedding mapping technique in (Mikolov et al., 2013b) to cross-lingual RE model transfer. 
Similar approaches have been applied to other NLP tasks such as dependency parsing (Guo et al., 2015) , POS tagging (Gouws and S\u00f8gaard, 2015) and named entity recognition (Ni et al., 2017; Xie et al., 2018) .", "cite_spans": [ { "start": 70, "end": 93, "text": "(Upadhyay et al., 2016;", "ref_id": "BIBREF32" }, { "start": 94, "end": 112, "text": "Ruder et al., 2017", "ref_id": "BIBREF28" }, { "start": 291, "end": 314, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF21" }, { "start": 434, "end": 452, "text": "(Guo et al., 2015)", "ref_id": "BIBREF11" }, { "start": 467, "end": 492, "text": "(Gouws and S\u00f8gaard, 2015)", "ref_id": "BIBREF9" }, { "start": 522, "end": 539, "text": "(Ni et al., 2017;", "ref_id": "BIBREF25" }, { "start": 540, "end": 557, "text": "Xie et al., 2018)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In this paper, we developed a simple yet effective neural cross-lingual RE model transfer approach, which has very low resource requirements (a small bilingual dictionary with 1K word pairs) and can be easily extended to a new language. Extensive experiments for 7 target languages across a variety of language families on both in-house and open datasets show that the proposed approach achieves very good performance (up to 79% of the accuracy of the supervised target-language RE model), which provides a strong baseline for building cross-lingual RE models with minimal resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" } ], "back_matter": [ { "text": "We thank Mo Yu for sharing their ACE05 English data split and the anonymous reviewers for their valuable comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning principled bilingual mappings of word embeddings while preserving monolingual invariance", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2289--2294", "other_ids": { "DOI": [ "10.18653/v1/D16-1250" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word em- beddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 2289-2294, Austin, Texas. Association for Compu- tational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Learning bilingual word embeddings with (almost) no bilingual data", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "451--462", "other_ids": { "DOI": [ "10.18653/v1/P17-1042" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 451-462, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings", "authors": [ { "first": "Mikel", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "Gorka", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "789--798", "other_ids": { "DOI": [ "10.18653/v1/P18-1073" ] }, "num": null, "urls": [], "raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsuper- vised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789-798, Melbourne, Aus- tralia. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Word translation without parallel data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Ludovic", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word translation without parallel data. In Interna- tional Conference on Learning Representations.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Improving vector space word representations using multilingual correlation", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "462--471", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manaal Faruqui and Chris Dyer. 2014. 
Improving vec- tor space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 462-471, Gothenburg, Sweden. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multilingual open relation extraction using cross-lingual projection", "authors": [ { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Shankar", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1351--1356", "other_ids": { "DOI": [ "10.3115/v1/N15-1151" ] }, "num": null, "urls": [], "raw_text": "Manaal Faruqui and Shankar Kumar. 2015. Multi- lingual open relation extraction using cross-lingual projection. In Proceedings of the 2015 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 1351-1356. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improved relation extraction with feature-rich compositional embedding models", "authors": [ { "first": "Matthew", "middle": [ "R" ], "last": "Gormley", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1774--1784", "other_ids": { "DOI": [ "10.18653/v1/D15-1205" ] }, "num": null, "urls": [], "raw_text": "Matthew R. Gormley, Mo Yu, and Mark Dredze. 2015. Improved relation extraction with feature-rich com- positional embedding models. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing, pages 1774-1784, Lis- bon, Portugal. Association for Computational Lin- guistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bilbowa: Fast bilingual distributed representations without word alignments", "authors": [ { "first": "Stephan", "middle": [], "last": "Gouws", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32nd International Conference on Machine Learning", "volume": "", "issue": "", "pages": "748--756", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed repre- sentations without word alignments. In Proceed- ings of the 32nd International Conference on Ma- chine Learning, pages 748-756. JMLR Workshop and Conference Proceedings.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Simple task-specific bilingual word embeddings", "authors": [ { "first": "Stephan", "middle": [], "last": "Gouws", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1386--1390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Gouws and Anders S\u00f8gaard. 2015. 
Sim- ple task-specific bilingual word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1386-1390, Denver, Colorado. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Framewise phoneme classification with bidirectional LSTM and other neural network architectures", "authors": [ { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2005, "venue": "NEURAL NETWORKS", "volume": "18", "issue": "5-6", "pages": "602--610", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Graves and J\u00fcrgen Schmidhuber. 2005. Frame- wise phoneme classification with bidirectional LSTM and other neural network architectures. NEURAL NETWORKS, 18(5-6):602-610.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Cross-lingual dependency parsing based on distributed representations", "authors": [ { "first": "Jiang", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Haifeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "1234--1244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual de- pendency parsing based on distributed representa- tions. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1234-1244, Beijing, China. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": { "DOI": [ "10.1162/neco.1997.9.8.1735" ] }, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations", "authors": [ { "first": "Nanda", "middle": [], "last": "Kambhatla", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the ACL 2004 on Interactive Poster and Demonstration Sessions", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.3115/1219044.1219066" ] }, "num": null, "urls": [], "raw_text": "Nanda Kambhatla. 2004. Combining lexical, syntac- tic, and semantic features with maximum entropy models for extracting relations. In Proceedings of the ACL 2004 on Interactive Poster and Demonstra- tion Sessions, Stroudsburg, PA, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A cross-lingual annotation projection approach for relation detection", "authors": [ { "first": "Seokhwan", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Minwoo", "middle": [], "last": "Jeong", "suffix": "" }, { "first": "Jonghoon", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Gary Geunbae", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10", "volume": "", "issue": "", "pages": "564--571", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seokhwan Kim, Minwoo Jeong, Jonghoon Lee, and Gary Geunbae Lee. 2010. A cross-lingual annota- tion projection approach for relation detection. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10, pages 564-571, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A graph-based cross-lingual projection approach for weakly supervised relation extraction", "authors": [ { "first": "Seokhwan", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Gary Geunbae", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers", "volume": "12", "issue": "", "pages": "48--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seokhwan Kim and Gary Geunbae Lee. 2012. A graph-based cross-lingual projection approach for weakly supervised relation extraction. In Proceed- ings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers -Vol- ume 2, ACL '12, pages 48-53, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 3rd International Conference on Learning Representations (ICLR), ICLR '15", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceed- ings of the 3rd International Conference on Learn- ing Representations (ICLR), ICLR '15.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Incremental joint extraction of entity mentions and relations", "authors": [ { "first": "Qi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "402--412", "other_ids": { "DOI": [ "10.3115/v1/P14-1038" ] }, "num": null, "urls": [], "raw_text": "Qi Li and Heng Ji. 2014. Incremental joint extrac- tion of entity mentions and relations. In Proceed- ings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 402-412. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural relation extraction with multi-lingual attention", "authors": [ { "first": "Yankai", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Zhiyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Maosong", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "34--43", "other_ids": { "DOI": [ "10.18653/v1/P17-1004" ] }, "num": null, "urls": [], "raw_text": "Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2017. Neural relation extraction with multi-lingual atten- tion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 34-43. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bilingual word representations with monolingual quality in mind", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing", "volume": "", "issue": "", "pages": "151--159", "other_ids": { "DOI": [ "10.3115/v1/W15-1521" ] }, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151-159. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. CoRR, abs/1301.3781.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Exploiting similarities among languages for machine translation", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Le", "suffix": "" }, { "first": "", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for machine translation. 
CoRR, abs/1309.4168.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning transferable representation for bilingual relation extraction via convolutional neural networks", "authors": [ { "first": "Zhuolin", "middle": [], "last": "Bonan Min", "suffix": "" }, { "first": "Marjorie", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Freedman", "suffix": "" }, { "first": "", "middle": [], "last": "Weischedel", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "674--684", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonan Min, Zhuolin Jiang, Marjorie Freedman, and Ralph Weischedel. 2017. Learning transferable rep- resentation for bilingual relation extraction via con- volutional neural networks. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 674-684, Taipei, Taiwan. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "End-to-end relation extraction using LSTMs on sequences and tree structures", "authors": [ { "first": "Makoto", "middle": [], "last": "Miwa", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1105--1116", "other_ids": { "DOI": [ "10.18653/v1/P16-1105" ] }, "num": null, "urls": [], "raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end re- lation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1116. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Combining neural networks and log-linear models to improve relation extraction", "authors": [ { "first": "Huu", "middle": [], "last": "Thien", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of IJCAI Workshop on Deep Learning for Artificial Intelligence (DLAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thien Huu Nguyen and Ralph Grishman. 2016. Com- bining neural networks and log-linear models to im- prove relation extraction. In Proceedings of IJCAI Workshop on Deep Learning for Artificial Intelli- gence (DLAI).", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection", "authors": [ { "first": "Jian", "middle": [], "last": "Ni", "suffix": "" }, { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Florian", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1470--1480", "other_ids": { "DOI": [ "10.18653/v1/P17-1135" ] }, "num": null, "urls": [], "raw_text": "Jian Ni, Georgiana Dinu, and Radu Florian. 2017. Weakly supervised cross-lingual named entity recognition via effective annotation and represen- tation projection. 
In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1470- 1480. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/D14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Embedding semantic similarity in tree kernels for domain adaptation of relation extraction", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1498--1507", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara Plank and Alessandro Moschitti. 2013. Em- bedding semantic similarity in tree kernels for do- main adaptation of relation extraction. In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1498-1507, Sofia, Bulgaria. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A survey of cross-lingual embedding models", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2017. A survey of cross-lingual embedding models. CoRR, abs/1706.04902.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Classifying relations by ranking with convolutional neural networks", "authors": [ { "first": "Santos", "middle": [], "last": "Cicero Dos", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "626--634", "other_ids": { "DOI": [ "10.3115/v1/P15-1061" ] }, "num": null, "urls": [], "raw_text": "Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with con- volutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 626-634. 
Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax", "authors": [ { "first": "L", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" }, { "first": "H", "middle": [ "P" ], "last": "David", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Turban", "suffix": "" }, { "first": "Nils", "middle": [ "Y" ], "last": "Hamblin", "suffix": "" }, { "first": "", "middle": [], "last": "Hammerla", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In 5th International Conference on Learn- ing Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Dropout: A simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "Journal of Machine Learning Research", "volume": "15", "issue": "1", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15(1):1929-1958.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Cross-lingual models of word embeddings: An empirical comparison", "authors": [ { "first": "Shyam", "middle": [], "last": "Upadhyay", "suffix": "" }, { "first": "Manaal", "middle": [], "last": "Faruqui", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1661--1670", "other_ids": { "DOI": [ "10.18653/v1/P16-1157" ] }, "num": null, "urls": [], "raw_text": "Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word em- beddings: An empirical comparison. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1661-1670. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Multilingual relation extraction using compositional universal schema", "authors": [ { "first": "Patrick", "middle": [], "last": "Verga", "suffix": "" }, { "first": "David", "middle": [], "last": "Belanger", "suffix": "" }, { "first": "Emma", "middle": [], "last": "Strubell", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "886--896", "other_ids": { "DOI": [ "10.18653/v1/N16-1103" ] }, "num": null, "urls": [], "raw_text": "Patrick Verga, David Belanger, Emma Strubell, Ben- jamin Roth, and Andrew McCallum. 2016. Multi- lingual relation extraction using compositional uni- versal schema. In Proceedings of the 2016 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 886-896. Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "ACE 2005 multilingual training corpus. Philadelphia: Linguistic Data Consortium", "authors": [ { "first": "Christopher", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Strassel", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Medero", "suffix": "" }, { "first": "Kazuaki", "middle": [], "last": "Maeda", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus. Philadelphia: Linguistic Data Con- sortium.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Neural crosslingual named entity recognition with minimal resources", "authors": [ { "first": "Jiateng", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "369--379", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, and Jaime Carbonell. 2018. Neural cross- lingual named entity recognition with minimal re- sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 369-379, Brussels, Belgium. 
Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Normalized word embedding and orthogonal transform for bilingual word translation", "authors": [ { "first": "Chao", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yiye", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1006--1011", "other_ids": { "DOI": [ "10.3115/v1/N15-1104" ] }, "num": null, "urls": [], "raw_text": "Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthog- onal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1006-1011, Denver, Colorado. Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Kernel methods for relation extraction", "authors": [ { "first": "Dmitry", "middle": [], "last": "Zelenko", "suffix": "" }, { "first": "Chinatsu", "middle": [], "last": "Aone", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Richardella", "suffix": "" } ], "year": 2003, "venue": "The Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "1083--1106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation ex- traction. The Journal of Machine Learning Re- search, 3:1083-1106.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Relation classification via convolutional deep neural network", "authors": [ { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Siwei", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Guangyou", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "2335--2344", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via con- volutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335-2344. Dublin City University and As- sociation for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Adversarial feature adaptation for cross-lingual relation classification", "authors": [ { "first": "Bowei", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Zengzhuang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Hong", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "437--448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bowei Zou, Zengzhuang Xu, Yu Hong, and Guodong Zhou. 2018. 
Adversarial feature adaptation for cross-lingual relation classification. In Proceedings of the 27th International Conference on Computa- tional Linguistics, pages 437-448, Santa Fe, New Mexico, USA. Association for Computational Lin- guistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "Neural cross-lingual relation extraction based on bilingual word embedding mapping -target language: Portuguese, source language: English.", "uris": null }, "TABREF0": { "content": "
gates: an input gate \u2212 \u2192 i t , a forget gate \u2212 \u2192 f t and an output gate \u2212 \u2192 o t ( \u2212 \u2192 \u2022 indicates the forward direction), which are updated as follows:
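A standard formulation of these forward-direction updates is sketched below, following Hochreiter and Schmidhuber (1997) and Graves and Schmidhuber (2005); the paper's exact parameterization may differ, and x_t (the input embedding at position t), the weight matrices W, U and the biases b are assumed notation:
\overrightarrow{i}_t = \sigma(W_i x_t + U_i \overrightarrow{h}_{t-1} + b_i)
\overrightarrow{f}_t = \sigma(W_f x_t + U_f \overrightarrow{h}_{t-1} + b_f)
\overrightarrow{o}_t = \sigma(W_o x_t + U_o \overrightarrow{h}_{t-1} + b_o)
\overrightarrow{c}_t = \overrightarrow{f}_t \odot \overrightarrow{c}_{t-1} + \overrightarrow{i}_t \odot \tanh(W_c x_t + U_c \overrightarrow{h}_{t-1} + b_c)
\overrightarrow{h}_t = \overrightarrow{o}_t \odot \tanh(\overrightarrow{c}_t)
Here \sigma is the logistic sigmoid, \odot denotes element-wise multiplication, and the backward-direction gates are defined symmetrically over the reversed sequence.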
", "type_str": "table", "num": null, "html": null, "text": "The memory block at the t-th word in the forward LSTM layer contains a memory cell \u2212 \u2192 c t and three" }, "TABREF2": { "content": "
: Comparison with the state-of-the-art RE models on the ACE05 English data (S: Single Model; E: Ensemble Model).
In-House              Training  Dev  Test
English (Source)      1137      140  140
German (Target)       280       35   35
Spanish (Target)      451       55   55
Italian (Target)      322       40   40
Japanese (Target)     396       50   50
Portuguese (Target)   390       50   50
ACE05                 Training  Dev  Test
English (Source)      479       60   60
Arabic (Target)       323       40   40
Chinese (Target)      507       63   63
", "type_str": "table", "num": null, "html": null, "text": "" }, "TABREF3": { "content": "", "type_str": "table", "num": null, "html": null, "text": "Number of documents in the training/dev/test sets of the in-house and ACE05 datasets." }, "TABREF5": { "content": "
", "type_str": "table", "num": null, "html": null, "text": "Performance of the supervised English RE models on the in-house and ACE05 English test data." }, "TABREF7": { "content": "
", "type_str": "table", "num": null, "html": null, "text": "Comparison of the performance (F 1 score) using different mappings on the target-language development data under the Bi-LSTM model." }, "TABREF8": { "content": "
Model       German            Spanish           Italian           Japanese          Portuguese
            P    R    F1      P    R    F1      P    R    F1      P    R    F1      P    R    F1
Bi-LSTM     39.6 48.9 43.8    54.5 47.6 50.8    41.8 34.2 37.6    33.9 25.1 28.9    52.9 44.5 48.4
CNN         32.5 50.5 39.5    49.3 48.3 48.8    36.6 34.9 35.7    27.3 31.5 29.3    49.0 44.0 46.3
Ensemble    39.6 50.5 44.4    56.9 49.1 52.7    42.6 35.3 38.6    35.3 26.4 30.2    54.9 45.2 49.6
Supervised  59.3 56.4 57.8    68.4 65.4 66.8    51.4 48.3 49.8    52.7 52.0 52.4    64.0 61.3 62.6
", "type_str": "table", "num": null, "html": null, "text": "" }, "TABREF9": { "content": "
Model       Arabic            Chinese
            P    R    F1      P    R    F1
Bi-LSTM     30.3 45.7 36.4    61.7 37.8 46.8
CNN         24.0 39.7 29.9    56.4 33.8 42.3
Ensemble    27.5 48.7 35.2    61.0 40.4 48.6
Supervised  70.0 69.1 69.5    66.9 69.4 68.1
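The P, R and F1 columns are related by the usual harmonic mean, $F_1 = 2PR/(P+R)$; for example, for the supervised Arabic model, $2 \cdot 70.0 \cdot 69.1 / (70.0 + 69.1) \approx 69.5$.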
", "type_str": "table", "num": null, "html": null, "text": "Performance of the cross-lingual RE approach on the in-house target-language test data." }, "TABREF10": { "content": "", "type_str": "table", "num": null, "html": null, "text": "Performance of the cross-lingual RE approach on the ACE05 target-language test data." } } } }