{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:08:03.596406Z" }, "title": "BUCC2020: Bilingual Dictionary Induction using Cross-lingual Embedding", "authors": [ { "first": "Sanjanasri", "middle": [], "last": "Jp", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amrita Vishwa Vidyapeetham", "location": { "postCode": "641112", "settlement": "Coimbatore", "country": "India" } }, "email": "" }, { "first": "Vijay", "middle": [ "Krishna" ], "last": "Menon", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amrita Vishwa Vidyapeetham", "location": { "postCode": "641112", "settlement": "Coimbatore", "country": "India" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents a deep learning system for the BUCC 2020 shared task: Bilingual dictionary induction from comparable corpora. We have submitted two runs for this shared Task, German (de) and English (en) language pair for \"closed track\" and Tamil (ta) and English (en) for the \"open track\". Our core approach focuses on quantifying the semantics of the language pairs, so that semantics of two different language pairs can be compared or transfer learned. With the advent of word embeddings, it is possible to quantify this. In this paper, we propose a deep learning approach which makes use of the supplied training data, to generate cross-lingual embedding. This is later used for inducting bilingual dictionaries from comparable corpora.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper presents a deep learning system for the BUCC 2020 shared task: Bilingual dictionary induction from comparable corpora. We have submitted two runs for this shared Task, German (de) and English (en) language pair for \"closed track\" and Tamil (ta) and English (en) for the \"open track\". Our core approach focuses on quantifying the semantics of the language pairs, so that semantics of two different language pairs can be compared or transfer learned. With the advent of word embeddings, it is possible to quantify this. In this paper, we propose a deep learning approach which makes use of the supplied training data, to generate cross-lingual embedding. This is later used for inducting bilingual dictionaries from comparable corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In machine translation, the extraction of bilingual dictionaries from parallel corpora have been conducted very successfully. Theoretically, it is possible to extract multilingual lexical knowledge from comparable rather than from parallel corpora as the former is more abundant than the latter. To implement any machine learning tasks in Natural Language processing (NLP), it is necessary to quantify the semantics (meaning) of the word in a language. Representation of semantics of a word quantitatively is made possible with the evolution of word embeddings (Mikolov et al., 2013a) ;they are dense distributed vector representations of words. This numerical representation mimics the linguistic phenomena such as lexical, syntactic, morphological and other complex phenomena such as ambiguity, negation, lemmas, inference and so on. 
Contemporary vector training algorithms such as GloVe and Word2Vec (Pennington et al., 2014; Mikolov et al., 2013c) are more accurate in capturing word to word semantics than conventional vector space models such as Latent Semantic Analysis (LSA) (Deerwester et al., 1990) and perform better in almost all downstream tasks in NLP (Treviso et al., 2017; Bansal et al., 2014; Guo et al., 2014) .", "cite_spans": [ { "start": 561, "end": 584, "text": "(Mikolov et al., 2013a)", "ref_id": "BIBREF7" }, { "start": 903, "end": 928, "text": "(Pennington et al., 2014;", "ref_id": "BIBREF10" }, { "start": 929, "end": 951, "text": "Mikolov et al., 2013c)", "ref_id": "BIBREF9" }, { "start": 1083, "end": 1108, "text": "(Deerwester et al., 1990)", "ref_id": "BIBREF4" }, { "start": 1166, "end": 1188, "text": "(Treviso et al., 2017;", "ref_id": "BIBREF12" }, { "start": 1189, "end": 1209, "text": "Bansal et al., 2014;", "ref_id": "BIBREF0" }, { "start": 1210, "end": 1227, "text": "Guo et al., 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper, we train a transfer learning model/Deep Neural Network(DNN) using pre-trained monolingual embeddings of the given bilingual dictionary. Source embedding is given to DNN, so it generates a target embedding. The generated embedding is compared with the original (monolingual) embedding to find the closest embedding. The word corresponding to the closest embedding is identified as the word translation of the given source word. Simply, we perform a reverse look up to identify the correct word translation from the original embedding given the transfer learned embedding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Section 2 describes the systems that are experimented for this task. Section 3 gives the details of the data used for this experimentation. Section 4 gives insight about the computational complexity. Section 5 details the evaluation method carried out to justify the system. Section 6 gives the results of the systems. Section 7 gives some concluding inferences and remarks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The main objective of this work is to develop an efficient and accurate transfer learning method for attaining 'crosslingual' word embeddings without the large monolingual and bilingual corpus. The system was developed in four stages; each improving the accuracy. The test data result submitted is run on the system that gave us the best accuracy. System one derives the translation matrix for the language pair using the standard method (direct linear mapping) (Mikolov et al., 2013b) . Given pairs of word vectors in a source and target language < x i , y i > n i=1 respectively, we calculate the transformation matrix (W ) between the two languages utilizing pseudo inverse X + = (X T X) \u22121 X T , as follows:", "cite_spans": [ { "start": 462, "end": 485, "text": "(Mikolov et al., 2013b)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "XW = Y W = X + Y", "eq_num": "(1)" } ], "section": "System Description", "sec_num": "2." 
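For concreteness, Equation 1 can be realised in a few lines of NumPy. This is a minimal sketch, assuming X and Y are arrays of the aligned source- and target-language seed-dictionary embeddings (one row per dictionary pair); the least-squares solver stands in for the explicit pseudo-inverse, and the function names are illustrative rather than taken from the paper.

```python
# Minimal sketch of system one (direct linear mapping).
# X: (n, d_src) source embeddings, Y: (n, d_tgt) target embeddings,
# aligned row-by-row according to the seed bilingual dictionary.
import numpy as np

def fit_translation_matrix(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    # Solve XW = Y in the least-squares sense, i.e. W = X+ Y = (X^T X)^-1 X^T Y.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def translate(x: np.ndarray, W: np.ndarray) -> np.ndarray:
    # Project a single source-language embedding into the target space.
    return x @ W
```

A query word is then translated by projecting its source embedding with W and searching the target vocabulary for the nearest vector, as described in the reverse look-up of Section 4.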
}, { "text": "System two and three deploy deep learning network to learn the mapping between two different language embeddings. In this method we train a transfer learning model to generate cross-lingual embedding. Our method has obvious advantages over the bilingual embedding (Chandar et al., 2014; Gouws et al., 2015) , because bilingual embeddings might compromise semantics in order to project each language (source and target) into the common vector space; the semantic properties pertaining to the language might be lost as the model considers only the common semantic features between the languages. Our method generates cross-lingual embedding by projecting the vectors of one language into another language space without compromising the actual semantics of both the languages. Also, to train an efficient bilingual embedding, it is necessary to have large bilingual resources. The transfer learning model can generate better cross-lingual embedding when trained with as minimum as 5000 dictionary words. System two is implemented on a Multi Layer Perceptron (MLP) and system three uses Convolutional Neural Networks (CNN). System four is a mere extension of the CNN with a small topical modification. It fine tunes the pre-trained translational model (system 3) using neighbourhood relationships. The systems of each language pair are implemented ", "cite_spans": [ { "start": 264, "end": 286, "text": "(Chandar et al., 2014;", "ref_id": "BIBREF2" }, { "start": 287, "end": 306, "text": "Gouws et al., 2015)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "2." }, { "text": "The multi layer perceptrons (MLP) is a fully connected DNN that holds a special place in NLP for intuitive nonlinear modeling. Our MLP topology possesses three dense layers, that uses Rectified Linear Unit (ReLU) as its activation. The dropout layer that follows immediate to every dense layer avoids overfitting in training. Cosine proximity is used as the loss function and RMSprop as optimizer. Figure 1 depicts the architecture of MLP.", "cite_spans": [], "ref_spans": [ { "start": 398, "end": 406, "text": "Figure 1", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Multi Layer Perceptron", "sec_num": "2.1." }, { "text": "The architecture of CNN has five layers, a CNN layer followed by maxpooling, flatten layer, dropout layer and a dense layer. Rectified Linear Unit (ReLU) is used as activation function in each layer. Again, the cosine proximity and RMSprop is used as loss function and optimizer respectively for training. The CNN architecture is shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 339, "end": 347, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "1D-Convolutional Neural Network", "sec_num": "2.2." }, { "text": "In this architecture of CNN, the translation model is trained on neighbourhood relationship of source language word pairs given the cosine similarity between the correspond- ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuned Convolutional Neural Network (Fine-tuned CNN)", "sec_num": "2.3." }, { "text": "(wv t * i , wv t * j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fine-tuned Convolutional Neural Network (Fine-tuned CNN)", "sec_num": "2.3." }, { "text": ", is passed on to the dot layer, that computes the cosine proximity between the vectors. 
The cosine distance/output of the dot layer is passed on to dropout layer to avoid over fitting and finally passed on to dense layer, where linear activation is used. For back propagation, the cosine distance between the corresponding target language words (w ti , w tj ) for the source language word (w si , w sj ) is given as labels, mean squared error and RM-Sprop is used as a loss and optimizer respectively. Please note, that, model1 and model2 are already trained and back propagating with the cosine similarity of the word pairs helps in better learning of the neighbourhood relations. The topology of this model is shown in Figure 3 3. Data For \"closed track\", German (de) and English (en) language pairs, we used the FastText pre-trained embeddings of Wacky corpora (Conneau et al., 2017) and the given bilingual dictionary for training. For \"open track\", Tamil (ta) and English (en) language pairs, FastText pre-trained embeddings of crawled web corpus (Bojanowski et al., 2017; Pre-trained, 2019 ) and in-house dictionary is used. Details of the dataset used for the tasks is shown in Table 1 4", "cite_spans": [ { "start": 865, "end": 887, "text": "(Conneau et al., 2017)", "ref_id": "BIBREF3" }, { "start": 1053, "end": 1078, "text": "(Bojanowski et al., 2017;", "ref_id": "BIBREF1" }, { "start": 1079, "end": 1096, "text": "Pre-trained, 2019", "ref_id": null } ], "ref_spans": [ { "start": 722, "end": 730, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 1186, "end": 1193, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Fine-tuned Convolutional Neural Network (Fine-tuned CNN)", "sec_num": "2.3." }, { "text": "To induce a word translation for the source word, we perform a reverse look up of the transfer learned target vector with the original target monolingual embedding. Given a set of source and target word < w si , w ti > and their corresponding embeddings (original monolingual embeddings) < wv si , wv ti > and transfer learned target embedding< wv t * i >. For every query source word w si , the correct target word w ti is identified by locating the target embedding wv ti that is the closest neighbour to the transfer learned/projected target word embedding wv t * i , where cosine similarity is computed as a measure between the embedding. However, performing the reverse lookup is computationally intensive. For instance, the embedding size of each test data (German (de) and Tamil(ta)) is \u2208 R 2000\u00d7300 and English pre-trained Wacky and Crawled web corpus is approximately 2 billion words. Henceforth, the size of original embedding is \u2208 R 2E9\u00d7300 . The word vectors are of double data type (8 bytes). The cartesian product of the original embedding and transfer learned test embedding would sum upto size of \u2208 R 4E12\u00d7300 (approximately, four trillion).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ". Computational Complexity", "sec_num": null }, { "text": "Computing such huge dataset takes months for a normal computer system to compute. This complex computation is deployed to the cluster using Apache Spark R Framework. The word pairs are filtered based on cosine similarity. The figure 4 shows the architecture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ". 
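A single-machine sketch of this reverse look-up is given below; it assumes the original target embeddings and the projected test embeddings are held as NumPy arrays, whereas the actual runs distribute the same cosine nearest-neighbour search over an Apache Spark cluster. The batching is only there to keep the similarity matrix in memory.

```python
# Sketch of the reverse look-up: for each transfer-learned (projected) target
# vector, return the target word whose original embedding is closest by cosine
# similarity. target_emb: (V, dim), projected: (Q, dim), target_words: list of V words.
import numpy as np

def reverse_lookup(projected, target_emb, target_words, batch_size=512):
    # L2-normalise once so a dot product equals cosine similarity.
    t = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    translations = []
    for start in range(0, len(projected), batch_size):
        q = projected[start:start + batch_size]
        q = q / np.linalg.norm(q, axis=1, keepdims=True)
        sims = q @ t.T                      # (batch, V) cosine similarities
        translations.extend(target_words[i] for i in sims.argmax(axis=1))
    return translations
```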
Computational Complexity", "sec_num": null }, { "text": "We know that word embeddings translate semantic relationships to spatial distances, in a good word embedding model the semantically related word pairs in a languages are expected to have closer spatial distance (higher similarity score) in their respective embeddings. We use this linguistic aspect to evaluate our cross-lingual word embeddings. Here, we treat the original (monolingually trained) embedding as our ground truth and compare the global neighbourhood behavior of the generated embeddding. Algorithm 1 explains this. The original (monolingual pretrained) and transfer learned embedding are represented as OrigV ec and T ransV ec; N represents the size of the test set. The similarity metric between two words vectors a and b is computed using cosine distance as given in Equation 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Tasks", "sec_num": "5." }, { "text": "cos(a, b) = a T b ||a||.||b||", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Tasks", "sec_num": "5." }, { "text": "(2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Tasks", "sec_num": "5." }, { "text": "The percentage accuracy of the test data on transfer learned model of each language pairs, German-English (de-en) and Tamil-English (ta-en), tested over various systems is shown Table 2 . In fine-tuned CNN network (CNN+NN), the dictionary is inducted by passing test data to model1 and the output of model1 is calculated for percentage accuracy on global neighbourhood. From the results in Table 2 , it is evident that CNN+NN network outperforms the other three models in each language pair. Henceforth, the final result submitted for the shared task is run on CNN+NN network ", "cite_spans": [], "ref_spans": [ { "start": 178, "end": 185, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 390, "end": 397, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "6." }, { "text": "In this paper, we were able to generate bilingual dictionary for language pairs, German-English (de-en) and Tamil-English (ta-en) by using 'cross-lingual' embeddings (vectors in separate space, mapped) that is trained on neighbourhood relationship between source language word pairs. As word embedding has no ground truth to evaluate the crosslingual embedding, we also proposed an evaluation method to validate the model. For 'de-en' and 'ta-en' language pairs, the model is trained with 10095 and 21100 FastText pre-trained monolingual embedding of bilingual words. We started with linear mapping system, as the results were not satisfactory, we moved on to deep learning network. In deep network, CNN gave better accuracy than MLP. Hence, the CNN network was further fine-tuned with a neighbourhood information of source language. This gave the best accuracy among every other systems. Henceforth, test data was run on this system. The core system generates the transfer learned/projected target embedding for the given source embedding. The generated target embedding is compared with the original monolingual target embedding to find the correct target word translation for the source word. To do this reverse lookup process, Apache Spark R Scala language APIs is utilized to manage the computational complexity and speed up.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." 
} ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Tailoring continuous word representations for dependency parsing", "authors": [ { "first": "M", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "K", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "K", "middle": [], "last": "Livescu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "809--815", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bansal, M., Gimpel, K., and Livescu, K. (2014). Tailoring continuous word representations for dependency parsing. In Proceedings of the 52nd Annual Meeting of the Asso- ciation for Computational Linguistics (Volume 2: Short Papers), pages 809-815. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Enriching word vectors with subword information", "authors": [ { "first": "P", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "E", "middle": [], "last": "Grave", "suffix": "" }, { "first": "A", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching word vectors with subword informa- tion. Transactions of the Association for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An autoencoder approach to learning bilingual word representations", "authors": [ { "first": "A", "middle": [ "P S" ], "last": "Chandar", "suffix": "" }, { "first": "S", "middle": [], "last": "Lauly", "suffix": "" }, { "first": "H", "middle": [], "last": "Larochelle", "suffix": "" }, { "first": "M", "middle": [ "M" ], "last": "Khapra", "suffix": "" }, { "first": "B", "middle": [], "last": "Ravindran", "suffix": "" }, { "first": "V", "middle": [ "C" ], "last": "Raykar", "suffix": "" }, { "first": "A", "middle": [], "last": "Saha", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chandar, A. P. S., Lauly, S., Larochelle, H., Khapra, M. M., Ravindran, B., Raykar, V. C., and Saha, A. (2014). An autoencoder approach to learning bilingual word repre- sentations. CoRR, abs/1402.1454.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Word translation without parallel data", "authors": [ { "first": "A", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "G", "middle": [], "last": "Lample", "suffix": "" }, { "first": "M", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "L", "middle": [], "last": "Denoyer", "suffix": "" }, { "first": "H", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Conneau, A., Lample, G., Ranzato, M., Denoyer, L., and J\u00e9gou, H. (2017). Word translation without parallel data. 
CoRR, abs/1710.04087.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Indexing by latent semantic analysis", "authors": [ { "first": "S", "middle": [], "last": "Deerwester", "suffix": "" }, { "first": "S", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "G", "middle": [ "W" ], "last": "Furnas", "suffix": "" }, { "first": "T", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "R", "middle": [], "last": "Harshman", "suffix": "" } ], "year": 1990, "venue": "JOURNAL OF THE AMERICAN SOCI-ETY FOR INFORMATION SCIENCE", "volume": "41", "issue": "6", "pages": "391--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., and Harshman, R. (1990). Indexing by latent se- mantic analysis. JOURNAL OF THE AMERICAN SOCI- ETY FOR INFORMATION SCIENCE, 41(6):391-407.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bilbowa: Fast bilingual distributed representations without word alignments", "authors": [ { "first": "S", "middle": [], "last": "Gouws", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" } ], "year": 2015, "venue": "Workshop and Conference Proceedings", "volume": "37", "issue": "", "pages": "748--756", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gouws, S., Bengio, Y., and Corrado, G. (2015). Bilbowa: Fast bilingual distributed representations without word alignments. In ICML, volume 37 of JMLR Workshop and Conference Proceedings, pages 748-756.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Revisiting embedding features for simple semi-supervised learning", "authors": [ { "first": "J", "middle": [], "last": "Guo", "suffix": "" }, { "first": "W", "middle": [], "last": "Che", "suffix": "" }, { "first": "H", "middle": [], "last": "Wang", "suffix": "" }, { "first": "T", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guo, J., Che, W., Wang, H., and Liu, T. (2014). Revisiting embedding features for simple semi-supervised learning. In EMNLP.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "CoRR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Exploiting similarities among languages for machine translation", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Q", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikolov, T., Le, Q. V., and Sutskever, I. (2013b). Ex- ploiting similarities among languages for machine trans- lation. 
CoRR, abs/1309.4168.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Distributed representations of words and phrases and their compositionality. CoRR", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G., and Dean, J. (2013c). Distributed representations of words and phrases and their compositionality. CoRR, abs/1310.4546.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "J", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "14", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pennington, J., Socher, R., and Manning, C. D. (2014). Glove: Global vectors for word representation. In EMNLP, volume 14, pages 1532-1543.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Evaluating word embeddings for sentence boundary detection in speech transcripts", "authors": [ { "first": "M", "middle": [ "V" ], "last": "Treviso", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Shulby", "suffix": "" }, { "first": "Alu\u00edsio", "middle": [], "last": "", "suffix": "" }, { "first": "S", "middle": [ "M" ], "last": "", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Treviso, M. V., Shulby, C. D., and Alu\u00edsio, S. M. (2017). Evaluating word embeddings for sentence boundary de- tection in speech transcripts. CoRR, abs/1708.04704.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Architecture of CNN for learning the transfer model for cross-lingual embedding as mentioned above, they are further trained over the monolingual embedding of bilingual word pairs of the respective languages." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Architecture of CNN for learning the transfer model based on neighbourhood relations for cross-lingual embedding" }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "Algorithm for computing percentage accuracy for global neighbourhood behaviour of the transfer learned embeddings Input: Input:OrigV ec, T ransV ec Output: Output: Accuracy k, i \u2190 0 for i < N do for j < N do CosOrigV ec[k] = cos(OrigV ec[i], OrigV ec[j]) CosT ransV ec[k] = cos(T ransV ec[i], T ransV ec[j]) k = k + 1 end for end for sum, i \u2190 0 for i < N * N do grad = CosOrigV ec[i] \u2212 CosT ransV ec[i] tmp = grad * grad sum = sum + tmp end for RM SE = sqrt(sum/(N * N )) P erErr = (RM SE/2) * 100 Accuracy = 100 \u2212 P erErr in" }, "FIGREF3": { "uris": null, "num": null, "type_str": "figure", "text": "Block diagram for reverse look up of dictionary using Apache Spark R Framework model." }, "TABREF0": { "html": null, "type_str": "table", "content": "
Language Pairs    Train (# of word pairs)    Test (# of word pairs)
de-en             10095                      6000
ta-en             21100                      1999
ing target language word pairs as labels. The core objective of this network is to fine-tune the previously learned translational model so that it better captures neighbourhood relations. For training, the embeddings of randomly chosen source-language word pairs (wv si , wv sj ) from the dictionary are given as inputs to model1 and model2, which are identical copies of the pre-trained translational model discussed in Section 2.2. The output of model1 and model2, i.e., the pair of transfer-learned/projected target-language word vectors
", "num": null, "text": "Description of Data" }, "TABREF1": { "html": null, "type_str": "table", "content": "
Models            de-en    ta-en
Linear Mapping    73.01    76.05
MLP               80.67    85.52
CNN               85.16    90.33
CNN+NN            89.91    93.65
", "num": null, "text": "Percentage Accuracy of transfer model of various systems" } } } }