{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:08:07.428573Z"
},
"title": "Mining Bilingual Word Pairs from Comparable Corpus using Apache Spark Framework",
"authors": [
{
"first": "Sanjanasri",
"middle": [],
"last": "Jp",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amrita School of Engineering",
"location": {
"postCode": "641112",
"settlement": "Coimbatore",
"country": "India"
}
},
"email": "jpsanjanasri@cb.amrita.edu"
},
{
"first": "Vijay",
"middle": [
"Krishna"
],
"last": "Menon",
"suffix": "",
"affiliation": {},
"email": "vijay.km@gadgeon.com"
},
{
"first": "Soman",
"middle": [],
"last": "Kp",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amrita School of Engineering",
"location": {
"postCode": "641112",
"settlement": "Coimbatore",
"country": "India"
}
},
"email": ""
},
{
"first": "Krzysztof",
"middle": [],
"last": "Wolk",
"suffix": "",
"affiliation": {},
"email": "kwolk@pja.edu.pl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Bilingual dictionaries are essential resources in many areas of natural language processing tasks, but resource-scarce and less popular language pairs rarely have such. Efficient automatic methods for inducting bilingual dictionaries are needed as manual resources and efforts are scarce for low-resourced languages. In this paper, we induce word translations using bilingual embedding. We use the Apache Spark \u00ae framework for parallel computation. Further, to validate the quality of the generated bilingual dictionary, we use it in a phrase-table aided Neural Machine Translation (NMT) system. The system can perform moderately well with a manual bilingual dictionary; we change this into our inducted dictionary. The corresponding translated outputs are compared using the Bilingual Evaluation Understudy (BLEU) and Rank-based Intuitive Bilingual Evaluation Score (RIBES) metrics.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Bilingual dictionaries are essential resources in many areas of natural language processing tasks, but resource-scarce and less popular language pairs rarely have such. Efficient automatic methods for inducting bilingual dictionaries are needed as manual resources and efforts are scarce for low-resourced languages. In this paper, we induce word translations using bilingual embedding. We use the Apache Spark \u00ae framework for parallel computation. Further, to validate the quality of the generated bilingual dictionary, we use it in a phrase-table aided Neural Machine Translation (NMT) system. The system can perform moderately well with a manual bilingual dictionary; we change this into our inducted dictionary. The corresponding translated outputs are compared using the Bilingual Evaluation Understudy (BLEU) and Rank-based Intuitive Bilingual Evaluation Score (RIBES) metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Digitised bilingual dictionaries primarily exist for resource-rich language pairs, such as English-German, English-Chinese, English-Hindi, etc. (Lardilleux et al., 2010) . Such dictionaries are helpful for many natural language processing (NLP) tasks such as Machine Translation (MT) for translating Out-Of-Vocabulary (OOV) words, cross-lingual information retrieval, cross-lingual word embedding and multilingual parts-of-speech tagging (Wo\u0142k, 2019; Ye et al., 2016; Sharma and Mittal, 2018) . Creating a bilingual dictionary requires high-quality parallel corpora and expert linguists, both of which are scarce and costly in resource-poor languages (Hajnicz et al., 2016; Sarma, 2019) .",
"cite_spans": [
{
"start": 144,
"end": 169,
"text": "(Lardilleux et al., 2010)",
"ref_id": "BIBREF14"
},
{
"start": 438,
"end": 450,
"text": "(Wo\u0142k, 2019;",
"ref_id": "BIBREF25"
},
{
"start": 451,
"end": 467,
"text": "Ye et al., 2016;",
"ref_id": "BIBREF28"
},
{
"start": 468,
"end": 492,
"text": "Sharma and Mittal, 2018)",
"ref_id": "BIBREF20"
},
{
"start": 651,
"end": 673,
"text": "(Hajnicz et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 674,
"end": 686,
"text": "Sarma, 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous works focus on methods that were based on pivot languages (Tanaka and Umemura, 1994; Istv\u00e1n and Shoichi, 2009; Wushouer et al., 2015) , aligning words (Daille and Morin, 2008; Tufi\u015f and maria Barbu, 2002) or using dependency relations (Yu and Tsujii, 2009) . The pivot-based dictionary induction is a contemporary method that uses only dictionaries to and from a pivot language (intermediate language) to generate a new dictionary. This method is not very effective for highly ambiguous languages as it yields highly noisy dictionaries because lexicons of a language do not exhibit transitive relationship (Wushouer et al., 2014) . Word alignment systems identify the translation equivalence of lexical units between two sentences that are sentence aligned (Choueka et al., 2000; Och and Ney, 2003) . Depending on the purpose, the system may focus on the specific lexical units, e.g. a single word or collocation (Tiedemann, 2004; Schreiner et al., 2011; Chen et al., 2009) . The dependency relation method is based on the premise that related words in different languages have a similar dependency relationship. These methods require either excellent linguistic knowledge or linguistic resource. The research line has robust outcomes on bilingual lexicon induction with the evolution of word embedding either by independently aligning trained word embedding in two languages or using the bilingual embedding to induce word translation pairs through nearestneighbour or similar retrieval methods. In the BDI task, given a list of 'n' source language words w s 1 , w s 2 , ...w sn , the goal is to determine the most appropriate translationw t i , for each query word w s i . Finding a target language word embedding wv t i is accomplished by computing the nearest neighbour to the source word embedding wv s i in the shared semantic space, where cosine similarity is a measure between the embedding (Artetxe et al., 2019) . However, this creates a phenomenon called hubness. In high-dimensional spaces, some data points, called hubs, are extraordinarily close to many other data points (Huang et al., 2019) ; this results in inappropriate/noisy translation.",
"cite_spans": [
{
"start": 67,
"end": 93,
"text": "(Tanaka and Umemura, 1994;",
"ref_id": "BIBREF22"
},
{
"start": 94,
"end": 119,
"text": "Istv\u00e1n and Shoichi, 2009;",
"ref_id": "BIBREF9"
},
{
"start": 120,
"end": 142,
"text": "Wushouer et al., 2015)",
"ref_id": "BIBREF27"
},
{
"start": 160,
"end": 184,
"text": "(Daille and Morin, 2008;",
"ref_id": "BIBREF4"
},
{
"start": 185,
"end": 213,
"text": "Tufi\u015f and maria Barbu, 2002)",
"ref_id": "BIBREF24"
},
{
"start": 244,
"end": 265,
"text": "(Yu and Tsujii, 2009)",
"ref_id": "BIBREF29"
},
{
"start": 615,
"end": 638,
"text": "(Wushouer et al., 2014)",
"ref_id": "BIBREF26"
},
{
"start": 766,
"end": 788,
"text": "(Choueka et al., 2000;",
"ref_id": "BIBREF3"
},
{
"start": 789,
"end": 807,
"text": "Och and Ney, 2003)",
"ref_id": "BIBREF16"
},
{
"start": 922,
"end": 939,
"text": "(Tiedemann, 2004;",
"ref_id": "BIBREF23"
},
{
"start": 940,
"end": 963,
"text": "Schreiner et al., 2011;",
"ref_id": "BIBREF19"
},
{
"start": 964,
"end": 982,
"text": "Chen et al., 2009)",
"ref_id": "BIBREF2"
},
{
"start": 1908,
"end": 1930,
"text": "(Artetxe et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 2095,
"end": 2115,
"text": "(Huang et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
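The nearest-neighbour retrieval step described above can be illustrated with a small, self-contained sketch. This is not the authors' code: it uses toy 2-dimensional vectors instead of 300-dimensional embeddings, and the words and numbers are hypothetical, chosen only to show the mechanics of cosine-based lookup in a shared space.

```scala
object NearestNeighbourBDI {
  // Cosine similarity between two embedding vectors.
  def cosine(a: Array[Double], b: Array[Double]): Double = {
    val dot  = a.zip(b).map { case (x, y) => x * y }.sum
    val norm = math.sqrt(a.map(x => x * x).sum) * math.sqrt(b.map(x => x * x).sum)
    if (norm == 0.0) 0.0 else dot / norm
  }

  // For a source-word embedding wv_{s_i}, return the target word whose embedding
  // wv_{t_i} is its nearest neighbour (highest cosine) in the shared semantic space.
  def translate(srcVec: Array[Double], tgtVecs: Map[String, Array[Double]]): String =
    tgtVecs.maxBy { case (_, tgtVec) => cosine(srcVec, tgtVec) }._1

  def main(args: Array[String]): Unit = {
    // Tiny toy target-side vectors (2-D instead of 300-D), purely illustrative.
    val tgtVecs = Map(
      "poaka" -> Array(0.9, 0.1),
      "malar" -> Array(0.1, 0.9)
    )
    val srcVec = Array(0.85, 0.2) // toy embedding of the query word "go"
    println(translate(srcVec, tgtVecs)) // prints "poaka"
  }
}
```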
{
"text": "In this paper, a simple cartesian product of the bilingual/cross-lingual word embedding is used and filters the product outcome based on some linguistic regularities and thresholds. The generated (inducted) bilingual dictionary is used as a separate phrase-table in an NMT system. The system produces translations for every word in the text; the translations are validated for quality using the Bilingual Evaluation Understudy (BLEU) and Rank-based Intuitive Bilingual Evaluation Score (RIBES) metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, the terms 'bilingual' and 'crosslingual' for word embedding is used with varying notions. The bilingual embedding maps the source and target language embedding in the shared semantic space. In contrast, the cross-lingual embedding learns a transfer function to translate the embedding from the source language semantic space to target language space; this preserves the more actual semantics pertained to that language (Mikolov et al., 2013) . Visualisation of the embeddings is shown in Figure 1 and Figure 2 . BilBOWA toolkit (Gouws et al., 2015 ) is used to generate bilingual word embedding. The embedding of source and target language are trained jointly so that related words of two languages are closer to each other in the shared space. Therefore, the translational equivalence has higher cosine similarity. The model is trained with minimal parallel corpus and large monolingual corpora. However, the cross-lingual embedding is learned with a very bare minimal resource as small as 5000 source-target word pairs. Global neighbourhood is estimated as cross-lingual entropy. The main advantage of this method over bilingual embedding is that it is possible to generate embedding in the target language semantic space instead of shared space. In shared semantic space, the most semantic information pertained to the language is lost and likely to infer word vectors for related languages.",
"cite_spans": [
{
"start": 434,
"end": 456,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 543,
"end": 562,
"text": "(Gouws et al., 2015",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 503,
"end": 511,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 516,
"end": 524,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Bilingual and Cross-lingual Word embedding",
"sec_num": "2"
},
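The transfer-function idea behind cross-lingual embedding (learning a linear map from the source space to the target space, in the spirit of Mikolov et al., 2013) can be sketched as follows. This is a minimal illustration under assumptions, not the authors' training code: it uses toy 2-dimensional vectors, three hypothetical seed pairs and a plain gradient-descent loop, whereas the setting described above uses 300-dimensional embeddings and about 5000 source-target word pairs.

```scala
object TransferFunctionSketch {
  // Apply a matrix to a vector: (W x).
  def matVec(m: Array[Array[Double]], x: Array[Double]): Array[Double] =
    m.map(row => row.zip(x).map { case (a, b) => a * b }.sum)

  def main(args: Array[String]): Unit = {
    // Toy seed lexicon: (source vector, target vector) pairs consistent with a 90-degree rotation.
    val seed = Seq(
      (Array(1.0, 0.0), Array(0.0, 1.0)),
      (Array(0.0, 1.0), Array(-1.0, 0.0)),
      (Array(1.0, 1.0), Array(-1.0, 1.0))
    )

    var w  = Array(Array(0.0, 0.0), Array(0.0, 0.0)) // transfer matrix W, initialised to zero
    val lr = 0.1                                     // hypothetical learning rate

    // Stochastic gradient descent on the squared error ||W x - y||^2.
    for (_ <- 1 to 500) {
      for ((x, y) <- seed) {
        val err = matVec(w, x).zip(y).map { case (p, t) => p - t } // prediction error
        // Gradient of the squared error w.r.t. W_ij is err_i * x_j.
        w = w.zipWithIndex.map { case (row, i) =>
          row.zipWithIndex.map { case (v, j) => v - lr * err(i) * x(j) }
        }
      }
    }

    // W now approximates the rotation, so (1, 0) maps to approximately (0, 1).
    println(matVec(w, Array(1.0, 0.0)).map(v => f"$v%.2f").mkString(", "))
  }
}
```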
{
"text": "The embedding size of the English word list is \u2208 R 8994\u00d7300 and Tamil is \u2208 R 10097\u00d7300 . Tamil has more number words compared to English because of the inflected forms. The dimension of the Cartesian product of the word pair list (English and Tamil) is 90812418 \u00d7 300; this takes months for a typical computer system to compute. This complex computation is deployed to the cluster using Apache Spark \u00ae Framework (Zaharia et al., 2016) . The word pairs are filtered in two folds, cosine similarity and lemmatization (Kengatharaiyer et al., 2019) , where the root word is extracted from the surface forms. In the case of cross-lingual embedding, cross-lingual entropy is used instead of the cosine similarity measure. Figure 3 shows the architecture. The word embedding of Source and Target Language is mapped to a key-value pair Resilient Distributed Datasets (RDDs), a fundamental data structure of Spark; the word being a key and 300dimensional representation as values. The Cartesian Product of two RDDs (En RDD and Ta RDD) generates the Pair RDD. On the Pair RDD, cosine similarity or cross-lingual entropy is applied to filter top similar words. Filtered RDD is further refined using a lemmatizer to avoid the inflected terms. The resultant RDD is saved as text file; this has the most similar source and target word, a bilingual dictionary.",
"cite_spans": [
{
"start": 412,
"end": 434,
"text": "(Zaharia et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 515,
"end": 544,
"text": "(Kengatharaiyer et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 716,
"end": 724,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3"
},
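A minimal Spark sketch of the pipeline just described (Cartesian product of two key-value RDDs, cosine-similarity filtering, then lemmatization) is given below. It is an illustration under assumptions, not the authors' implementation: the input loading, the 0.98 threshold placement, the identity lemmatize stub and the output path are placeholders.

```scala
import org.apache.spark.sql.SparkSession

object DictionaryInduction {
  // Cosine similarity between two dense vectors.
  def cosine(a: Array[Double], b: Array[Double]): Double = {
    val dot  = a.zip(b).map { case (x, y) => x * y }.sum
    val norm = math.sqrt(a.map(x => x * x).sum) * math.sqrt(b.map(x => x * x).sum)
    if (norm == 0.0) 0.0 else dot / norm
  }

  // Hypothetical lemmatizer stub; a real Tamil lemmatizer would go here.
  def lemmatize(word: String): String = word

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("BilingualDictionaryInduction").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical inputs: (word, 300-dimensional vector) pairs for each language.
    val enVecs: Seq[(String, Array[Double])] = Seq() // load English embeddings here
    val taVecs: Seq[(String, Array[Double])] = Seq() // load Tamil embeddings here

    val enRDD = sc.parallelize(enVecs) // key-value RDD: English word -> vector
    val taRDD = sc.parallelize(taVecs) // key-value RDD: Tamil word -> vector

    // Cartesian product of the two RDDs gives every (English, Tamil) pair.
    val pairRDD = enRDD.cartesian(taRDD)

    // Stage 1: keep only pairs above a similarity threshold (0.98 in the paper).
    val filtered = pairRDD
      .map { case ((enWord, enVec), (taWord, taVec)) => (enWord, taWord, cosine(enVec, taVec)) }
      .filter { case (_, _, sim) => sim >= 0.98 }

    // Stage 2: lemmatize Tamil surface forms to drop inflected variants.
    val dictionary = filtered
      .map { case (en, ta, sim) => (en, lemmatize(ta), sim) }
      .distinct()

    // Save the induced dictionary as tab-separated text.
    dictionary.map { case (en, ta, sim) => s"$en\t$ta\t$sim" }.saveAsTextFile("induced_dictionary")
    spark.stop()
  }
}
```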
{
"text": "The OpenNMT framework (Klein et al., 2017) is used for training an NMT system with the training parameter as shown in Table 1 . The inducted lexi- ",
"cite_spans": [
{
"start": 22,
"end": 42,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3"
},
{
"text": "For the training language model, the monolingual Tamil corpus from the cEnTam dataset (P. et al., 2020) is used. Likewise, for training the machine translation systems, the English-Tamil parallel cor-pus from the cEnTam dataset is utilised. The specifics of the cEnTam corpus used is reported in Table 2 . 5 Results and Discussion Table 3 and 4 show a sample of bilingually similar words above the cosine distance of 0.90 and 0.95. The correct translations are given in bold letters in Table 3 and 4. It can be inferred that the much more words that are not semantically similar (translational equivalent) but related crowds the search space, which might result in noisy word inductions ( into the dictionary) and ambiguity. Hence the search space was shrunk above the cosine distance of 0.98 as shown in Table 5 . It is observed that the inflected forms (surface forms) are closer than the related words in the embedding space to the query word. Unlike English, Tamil has no prepositions. Instead, it has case inflected nouns, for example, the translation of the prepositional phrase \"in minutes\" in English is equivalent to \"Nimidan-GkaLil\", a case inflected noun(NimidanGkaL + il = minutes + in) in Tamil. Likewise, various sandhi inflected form of the noun \"kuzhanthai\" are kuzhanthaip, kuzhanthaith, etc. The chances of getting associated or related words in such a small space is negligible. The inflections are removed, and the root forms are inducted at the second stage of filtering, lemmatizer. The inducted dictionary is added as a lookup table in the NMT system. The accuracy of the translated sentence of the NMT system before and after appending the dictionary as a phrase table is shown in Table 6 . The induced translation is evaluated based on both the Bilingual Evaluation Understudy (BLEU) (Koehn, 2010) and Rank-based Intuitive Bilingual Evaluation Score (RIBES) (Isozaki et al., 2010) metrics. BLEU is the oldest and most adopted metrics to evaluate Mt system. It rewards systems for n-grams that have exact matches in the reference system. The longer n-gram scores account for the fluency of the translation in BLEU metric. In contrast, RIBES is sensitive towards word reordering, works well for language pairs having very different grammar and word order. It uses rank correlation coefficients based on word order to compare hypothesis and reference translations. Although BLEU is a standard metric for the evaluation of MT system, RIBES is better suited for distant language pairs like English and Tamil (Callison-Burch et al., 2006) . Hence, both measures are used for validating the NMT system developed. In the Table 6 , the score is computed by comparing the reference translations with the translations of the NMT system after appending the manual and inducted dictionary (ManDic & IndDic). The ManDic and InDic systems are compared to showcase that the hypothesis translation of InDic is highly correlated with ManDic, though InDic has comparatively better score than ManDic when validated against Reference translation.",
"cite_spans": [
{
"start": 1809,
"end": 1822,
"text": "(Koehn, 2010)",
"ref_id": "BIBREF13"
},
{
"start": 1883,
"end": 1905,
"text": "(Isozaki et al., 2010)",
"ref_id": "BIBREF8"
},
{
"start": 2528,
"end": 2557,
"text": "(Callison-Burch et al., 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 296,
"end": 303,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 331,
"end": 338,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 486,
"end": 493,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 805,
"end": 812,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 1705,
"end": 1712,
"text": "Table 6",
"ref_id": "TABREF5"
},
{
"start": 2638,
"end": 2645,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Corpora Description",
"sec_num": "4"
},
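To make the BLEU discussion concrete, the following sketch computes clipped (modified) n-gram precisions and their geometric mean for a single hypothesis-reference pair. It is a simplified illustration only: the brevity penalty is omitted, the example sentences are made up, and the paper's actual scores come from the standard BLEU and RIBES evaluation tools, not this code.

```scala
object BleuSketch {
  // Count n-grams in a token sequence.
  def ngrams(tokens: Seq[String], n: Int): Map[Seq[String], Int] =
    tokens.sliding(n).toSeq.groupBy(identity).map { case (g, occ) => (g, occ.size) }

  // Modified n-gram precision: clip hypothesis counts by reference counts.
  def precision(hyp: Seq[String], ref: Seq[String], n: Int): Double = {
    val hypCounts = ngrams(hyp, n)
    val refCounts = ngrams(ref, n)
    val matched = hypCounts.map { case (g, c) => math.min(c, refCounts.getOrElse(g, 0)) }.sum
    val total   = hypCounts.values.sum
    if (total == 0) 0.0 else matched.toDouble / total
  }

  def main(args: Array[String]): Unit = {
    val hyp = "the cat sat on the mat".split(" ").toSeq // hypothetical system output
    val ref = "the cat is on the mat".split(" ").toSeq  // hypothetical reference
    // Geometric mean of 1- to 4-gram precisions (brevity penalty omitted).
    val precisions = (1 to 4).map(n => precision(hyp, ref, n))
    val bleu = math.exp(precisions.map(p => math.log(math.max(p, 1e-9))).sum / 4.0)
    println(f"n-gram precisions: ${precisions.mkString(", ")}; BLEU (no BP): $bleu%.3f")
  }
}
```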
{
"text": "In this paper, we generated an English-Tamil bilingual dictionary using both bilingual (vectors in the same space) and cross-lingual (vectors in separate space, mapped) word embedding. In order to validate this induced dictionary, we have employed a table driven Neural Machine Translation (NMT) system. The goal was to measure the quality of the translated output (Tamil as the target language) when the original manual dictionary (ManDic) is replaced with the induced dictionary (InDic). The Baseline NMT system was trained on English-Tamil parallel corpus with over 56000 entries. A testset with 700 aligned sentences was used for validation. The translation quality is measured over the reference translations which are available (aligned Tamil sentences). Eventually, we will have three categories of translated output, namely, Baseline, ManDic and InDic. We compare each of them with the reference translation using the RIBES and BLEU metric (Isozaki et al., 2010; Koehn, 2010) to ascertain their quality. It is important to note that the quality of the translations is not of our interest but the change in performance when using different dictionaries. RIBES is used as the scoring model as it is invariant to word order and morphology (Tan et al., 2015) .",
"cite_spans": [
{
"start": 948,
"end": 970,
"text": "(Isozaki et al., 2010;",
"ref_id": "BIBREF8"
},
{
"start": 971,
"end": 983,
"text": "Koehn, 2010)",
"ref_id": "BIBREF13"
},
{
"start": 1244,
"end": 1262,
"text": "(Tan et al., 2015)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Summary",
"sec_num": "6"
},
{
"text": "Our results suggest that the induced dictionary performs at par or better than the original manual dictionary. This is also due to the fact that the lexicons are rendered in a context-sensitive manner from word embedding. The lookup process is implemented using Apache Spark \u00ae Framework in Scala language. Induction is a simple reverse lookup using the Cartesian product of all bilingual embedding. The size of this Cartesian product matrix is 1 \u00d7 10 7 \u00d7 300 values which makes it highly computational. Apache Spark can run in parallel, hence, accelerate time and optimise memory. In this paper, bilingual embedding generated by Bil-BOWA (Gouws et al., 2015) is mainly used, but this methodology is also tested with cross-lingual embedding and found equally effective (JP et al., 2020 ). The differences between them are: bilingual embeddings are generated from parallel and good quality comparable bilingual corpus, whereas cross-lingual embedding can be learned from minimal bilingual data. Learning such cross-lingual embedding for resource-poor languages can help to generate induced dictionary resources of even unknown words with a fair amount of accuracy.",
"cite_spans": [
{
"start": 638,
"end": 658,
"text": "(Gouws et al., 2015)",
"ref_id": "BIBREF5"
},
{
"start": 768,
"end": 784,
"text": "(JP et al., 2020",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Summary",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bilingual lexicon induction through unsupervised machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5002--5007",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. Bilingual lexicon induction through unsupervised machine translation. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 5002-5007, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Re-evaluating the role of bleu in machine translation research",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2006,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "249--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of bleu in ma- chine translation research. In In EACL, pages 249- 256.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A word alignment model based on multiobjective evolutionary algorithms",
"authors": [
{
"first": "Yidong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Changle",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Qingyang",
"middle": [],
"last": "Hong",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the International Conference",
"volume": "57",
"issue": "",
"pages": "1724--1729",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yidong Chen, Xiaodong Shi, Changle Zhou, and Qingyang Hong. 2009. A word alignment model based on multiobjective evolutionary algo- rithms. Computers & Mathematics with Applica- tions, 57(11):1724 -1729. Proceedings of the In- ternational Conference.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A comprehensive bilingual word alignment system",
"authors": [
{
"first": "Yaacov",
"middle": [],
"last": "Choueka",
"suffix": ""
},
{
"first": "Ehud",
"middle": [
"S"
],
"last": "Conley",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "69--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaacov Choueka, Ehud S. Conley, and Ido Dagan. 2000. A comprehensive bilingual word alignment system, pages 69-96. Springer Netherlands, Dor- drecht.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An effective compositional model for lexical alignment",
"authors": [
{
"first": "B\u00e9atrice",
"middle": [],
"last": "Daille",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Morin",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Third International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B\u00e9atrice Daille and Emmanuel Morin. 2008. An effec- tive compositional model for lexical alignment. In Proceedings of the Third International Joint Confer- ence on Natural Language Processing: Volume-I.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bilbowa: Fast bilingual distributed representations without word alignments",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2015,
"venue": "JMLR Workshop and Conference Proceedings",
"volume": "",
"issue": "",
"pages": "748--756",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed repre- sentations without word alignments. In ICML, vol- ume 37 of JMLR Workshop and Conference Pro- ceedings, pages 748-756.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semantic layer of the valence dictionary of Polish walenty",
"authors": [
{
"first": "El\u017cbieta",
"middle": [],
"last": "Hajnicz",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Andrzejczuk",
"suffix": ""
},
{
"first": "Tomasz",
"middle": [],
"last": "Bartosiak",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "2625--2632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "El\u017cbieta Hajnicz, Anna Andrzejczuk, and Tomasz Bar- tosiak. 2016. Semantic layer of the valence dictio- nary of Polish walenty. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2625-2632, Por- toro\u017e, Slovenia. European Language Resources As- sociation (ELRA).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Hubless nearest neighbor search for bilingual lexicon induction",
"authors": [
{
"first": "Jiaji",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4072--4080",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaji Huang, Qiang Qiu, and Kenneth Church. 2019. Hubless nearest neighbor search for bilingual lexi- con induction. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 4072-4080, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic evaluation of translation quality for distant language pairs",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "944--952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic eval- uation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Process- ing, pages 944-952, Cambridge, MA. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bilingual dictionary generation for low-resourced language pairs",
"authors": [
{
"first": "Varga",
"middle": [],
"last": "Istv\u00e1n",
"suffix": ""
},
{
"first": "Yokoyama",
"middle": [],
"last": "Shoichi",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "862--870",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Varga Istv\u00e1n and Yokoyama Shoichi. 2009. Bilingual dictionary generation for low-resourced language pairs. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 -Volume 2, EMNLP '09, pages 862-870, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BUCC2020: Bilingual dictionary induction using cross-lingual embedding",
"authors": [
{
"first": "J",
"middle": [
"P"
],
"last": "Sanjanasri",
"suffix": ""
},
{
"first": "Vijay",
"middle": [
"Krishna"
],
"last": "Menon",
"suffix": ""
},
{
"first": "K",
"middle": [
"P"
],
"last": "Soman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 13th Workshop on Building and Using Comparable Corpora",
"volume": "",
"issue": "",
"pages": "65--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjanasri JP, Vijay Krishna Menon, and Soman KP. 2020. BUCC2020: Bilingual dictionary induction using cross-lingual embedding. In Proceedings of the 13th Workshop on Building and Using Compara- ble Corpora, pages 65-68, Marseille, France. Euro- pean Language Resources Association.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Thamizhifst: A morphological analyser and generator for tamil verbs",
"authors": [
{
"first": "Sarveswaran",
"middle": [],
"last": "Kengatharaiyer",
"suffix": ""
},
{
"first": "Gihan",
"middle": [],
"last": "Dias",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Butt",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarveswaran Kengatharaiyer, Gihan Dias, and Miriam Butt. 2019. Thamizhifst: A morphological analyser and generator for tamil verbs.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "OpenNMT: Opensource toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2010. Statistical Machine Translation, 1st edition. Cambridge University Press, New York, NY, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bilingual lexicon induction: Effortless evaluation of word alignment tools and production of resources for improbable language pairs",
"authors": [
{
"first": "Adrien",
"middle": [],
"last": "Lardilleux",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Gosme",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Lepage",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adrien Lardilleux, Julien Gosme, and Yves Lepage. 2010. Bilingual lexicon induction: Effortless eval- uation of word alignment tools and production of re- sources for improbable language pairs. In Proceed- ings of the Seventh conference on International Lan- guage Resources and Evaluation (LREC'10), Val- letta, Malta. European Languages Resources Asso- ciation (ELRA).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "2020. centam: Creation and validation of a new english-tamil bilingual corpus",
"authors": [
{
"first": "J",
"middle": [
"P"
],
"last": "Sanjanasri",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Premjith",
"suffix": ""
},
{
"first": "Vijay",
"middle": [
"Krishna"
],
"last": "Menon",
"suffix": ""
},
{
"first": "K",
"middle": [
"P"
],
"last": "Soman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 13th Workshop on Building and Using Comparable Corpora, BUCC@LREC 2020",
"volume": "",
"issue": "",
"pages": "61--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjanasri J. P., B. Premjith, Vijay Krishna Menon, and K. P. Soman. 2020. centam: Creation and val- idation of a new english-tamil bilingual corpus. In Proceedings of the 13th Workshop on Building and Using Comparable Corpora, BUCC@LREC 2020, Marseille, France, May, 2020, pages 61-64. Euro- pean Language Resources Association.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Assamese-english bilingual dictionary. CLARIN-PL digital repository",
"authors": [
{
"first": "Shikhar",
"middle": [
"Kr."
],
"last": "Sarma",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prof. Shikhar Kr. Sarma. 2019. Assamese-english bilingual dictionary. CLARIN-PL digital reposi- tory.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving lexical alignment using hybrid discriminative and postprocessing techniques",
"authors": [
{
"first": "Paulo",
"middle": [],
"last": "Schreiner",
"suffix": ""
},
{
"first": "Aline",
"middle": [],
"last": "Villavicencio",
"suffix": ""
},
{
"first": "Leonardo",
"middle": [],
"last": "Zilio",
"suffix": ""
},
{
"first": "Helena",
"middle": [
"M"
],
"last": "Caseli",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 8th Brazilian Symposium in Information and Human Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paulo Schreiner, Aline Villavicencio, Leonardo Zilio, and Helena M. Caseli. 2011. Improving lexi- cal alignment using hybrid discriminative and post- processing techniques. In Proceedings of the 8th Brazilian Symposium in Information and Human Language Technology.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Cross-Lingual Information Retrieval: A Dictionary-Based Query Translation Approach",
"authors": [
{
"first": "Vijay",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Namita",
"middle": [],
"last": "Mittal",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "611--618",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vijay Sharma and Namita Mittal. 2018. Cross-Lingual Information Retrieval: A Dictionary-Based Query Translation Approach, pages 611-618.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An awkward disparity between bleu / ribes scores and human judgements in machine translation",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Ling Tan",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Dehdari",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Workshop on Asian Translation (WAT-2015)",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Ling Tan, Jonathan Dehdari, and Josef van Genabith. 2015. An awkward disparity between bleu / ribes scores and human judgements in machine transla- tion. In Proceedings of the Workshop on Asian Translation (WAT-2015), pages 74-81. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Construction of a bilingual dictionary intermediated by a third language",
"authors": [
{
"first": "Kumiko",
"middle": [],
"last": "Tanaka",
"suffix": ""
},
{
"first": "Kyoji",
"middle": [],
"last": "Umemura",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "297--303",
"other_ids": {
"DOI": [
"10.3115/991886.991937"
]
},
"num": null,
"urls": [],
"raw_text": "Kumiko Tanaka and Kyoji Umemura. 1994. Construc- tion of a bilingual dictionary intermediated by a third language. In Proceedings of the 15th Conference on Computational Linguistics -Volume 1, COLING '94, pages 297-303, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Word to word alignment strategies",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics, COLING '04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2004. Word to word alignment strate- gies. In Proceedings of the 20th International Con- ference on Computational Linguistics, COLING '04, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Lexical token alignment: experiments, results and applications",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Tufi\u015f",
"suffix": ""
},
{
"first": "Ana",
"middle": [
"Maria"
],
"last": "Barbu",
"suffix": ""
}
],
"year": 2002,
"venue": "Third International Conference on Language Resources and Evaluation (LREC 2002)",
"volume": "",
"issue": "",
"pages": "458--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Tufi\u015f and Ana maria Barbu. 2002. Lexical token alignment: experiments, results and applications. In In Third International Conference on Language Re- sources and Evaluation (LREC 2002), Las Palmas, pages 458-465.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Machine Learning in Translation Corpora Processing",
"authors": [
{
"first": "Krzysztof",
"middle": [],
"last": "Wo\u0142k",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krzysztof Wo\u0142k. 2019. Machine Learning in Transla- tion Corpora Processing.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Pivot-based bilingual dictionary extraction from multiple dictionary resources",
"authors": [
{
"first": "Mairidan",
"middle": [],
"last": "Wushouer",
"suffix": ""
},
{
"first": "Donghui",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Toru",
"middle": [],
"last": "Ishida",
"suffix": ""
},
{
"first": "Katsutoshi",
"middle": [],
"last": "Hirayama",
"suffix": ""
}
],
"year": 2014,
"venue": "PRICAI 2014: Trends in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "221--234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mairidan Wushouer, Donghui Lin, Toru Ishida, and Katsutoshi Hirayama. 2014. Pivot-based bilingual dictionary extraction from multiple dictionary re- sources. In PRICAI 2014: Trends in Artificial Intelli- gence, pages 221-234, Cham. Springer International Publishing.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A constraint approach to pivot-based bilingual dictionary induction",
"authors": [
{
"first": "Mairidan",
"middle": [],
"last": "Wushouer",
"suffix": ""
},
{
"first": "Donghui",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Toru",
"middle": [],
"last": "Ishida",
"suffix": ""
},
{
"first": "Katsutoshi",
"middle": [],
"last": "Hirayama",
"suffix": ""
}
],
"year": 2015,
"venue": "ACM Trans. Asian Low-Resour. Lang. Inf. Process",
"volume": "15",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2723144"
]
},
"num": null,
"urls": [],
"raw_text": "Mairidan Wushouer, Donghui Lin, Toru Ishida, and Katsutoshi Hirayama. 2015. A constraint ap- proach to pivot-based bilingual dictionary induction. ACM Trans. Asian Low-Resour. Lang. Inf. Process., 15(1):4:1-4:26.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Part-of-speech tagging based on dictionary and statistical machine learning",
"authors": [
{
"first": "Zhonglin",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Zhen",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Junfu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hongfeng",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2016,
"venue": "35th Chinese Control Conference (CCC)",
"volume": "",
"issue": "",
"pages": "6993--6998",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhonglin Ye, Zhen Jia, Junfu Huang, and Hongfeng Yin. 2016. Part-of-speech tagging based on dictio- nary and statistical machine learning. In 2016 35th Chinese Control Conference (CCC), pages 6993- 6998.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Extracting bilingual dictionary from comparable corpora with dependency heterogeneity",
"authors": [
{
"first": "Kun",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Junichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers",
"volume": "",
"issue": "",
"pages": "121--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kun Yu and Junichi Tsujii. 2009. Extracting bilin- gual dictionary from comparable corpora with de- pendency heterogeneity. In Proceedings of Human Language Technologies: The 2009 Annual Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics, Companion Vol- ume: Short Papers, pages 121-124, Boulder, Col- orado. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Apache spark: A unified engine for big data processing",
"authors": [
{
"first": "Matei",
"middle": [],
"last": "Zaharia",
"suffix": ""
},
{
"first": "Reynold",
"middle": [
"S"
],
"last": "Xin",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Wendell",
"suffix": ""
},
{
"first": "Tathagata",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Armbrust",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Dave",
"suffix": ""
},
{
"first": "Xiangrui",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Rosen",
"suffix": ""
},
{
"first": "Shivaram",
"middle": [],
"last": "Venkataraman",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Franklin",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Ghodsi",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Gonzalez",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Shenker",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Stoica",
"suffix": ""
}
],
"year": 2016,
"venue": "Commun. ACM",
"volume": "59",
"issue": "11",
"pages": "56--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matei Zaharia, Reynold S. Xin, Patrick Wendell, Tatha- gata Das, Michael Armbrust, Ankur Dave, Xian- grui Meng, Josh Rosen, Shivaram Venkataraman, Michael J. Franklin, Ali Ghodsi, Joseph Gonzalez, Scott Shenker, and Ion Stoica. 2016. Apache spark: A unified engine for big data processing. Commun. ACM, 59(11):56-65.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Visualization of Bilingual Embedding using T-SNE plot",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Apache Spark Implementation for Bilingual Dictionary Induction",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Visualization of Cross-lingual embedding using T-SNE plot cons are used as a phrase-table in NMT for translating Out-Of-Vocabulary (OOV) words. Training is done on Google Colab with GPU at backend.",
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td>Hyper Parameters</td><td>Values</td></tr><tr><td>Layers</td><td>3</td></tr><tr><td>Rnn size</td><td>512</td></tr><tr><td>Embedding size</td><td>512</td></tr><tr><td>Encoder/Decoder Type</td><td>Transformers</td></tr><tr><td colspan=\"2\">Train steps / Validation steps 3000/ 5000</td></tr><tr><td>Positional Encoding</td><td>True</td></tr><tr><td>Heads</td><td>8</td></tr><tr><td>Dropout</td><td>0.3</td></tr><tr><td>Learning rate</td><td>3</td></tr><tr><td>Batch size</td><td>4096</td></tr><tr><td>Optimiser</td><td>ADAM</td></tr></table>",
"html": null,
"text": "Training Parameters for English-Tamil Open-NMT Framework",
"num": null,
"type_str": "table"
},
"TABREF1": {
"content": "<table><tr><td>Corpus Type</td><td>English (No. of sentences)</td><td>Tamil (No. of sentences)</td></tr><tr><td colspan=\"2\">Monolingual 589856</td><td>563568</td></tr><tr><td>Parallel</td><td>56495</td><td>56495</td></tr></table>",
"html": null,
"text": "Specification of cEnTam Corpus",
"num": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td>English</td><td>Tamil</td><td>Cosine Similarity</td></tr><tr><td>go</td><td>avaL</td><td>0.92</td></tr><tr><td>go</td><td>ennai</td><td>0.90</td></tr><tr><td>go</td><td>evvaLavu</td><td>0.90</td></tr><tr><td>go</td><td>anGkae</td><td>0.92</td></tr><tr><td>go</td><td>poaka</td><td>0.92</td></tr><tr><td>go</td><td>enGkae</td><td>0.90</td></tr><tr><td>go</td><td>un</td><td>0.90</td></tr><tr><td>good</td><td>chariyaana</td><td>0.92</td></tr><tr><td>good</td><td>aen</td><td>0.91</td></tr><tr><td>good</td><td>avaL</td><td>0.90</td></tr><tr><td>good</td><td>nanRaaka</td><td>0.94</td></tr><tr><td>good</td><td>evvaLavu</td><td>0.91</td></tr></table>",
"html": null,
"text": "Sample output of bilingual words extracted above cosine similarity (threshold) 0.90",
"num": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td>English</td><td>Tamil</td><td>Cosine Similarity</td></tr><tr><td colspan=\"2\">forests pachumaiyaana</td><td>0.92</td></tr><tr><td>forests</td><td>adarNtha</td><td>0.95</td></tr><tr><td>forests</td><td>kaadukaL</td><td>0.98</td></tr><tr><td>flowers</td><td>malar</td><td>0.95</td></tr><tr><td>flowers</td><td>malarkaL</td><td>0.97</td></tr><tr><td>flowers</td><td>pookkaL</td><td>0.96</td></tr></table>",
"html": null,
"text": "Sample output of bilingual words extracted above cosine similarity 0.95",
"num": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td>English</td><td>Tamil</td><td>Cosine Similarity</td></tr><tr><td>minutes</td><td>NimidanGkaL * *</td><td>0.98</td></tr><tr><td>minutes</td><td>NimidanGkaLil *</td><td>0.99</td></tr><tr><td>minutes</td><td>Nimidaththil *</td><td>0.97</td></tr><tr><td colspan=\"2\">minutes NimidanGkaLaaka *</td><td>0.98</td></tr></table>",
"html": null,
"text": "Sample output of bilingual words extracted above cosine similarity 0.98. The exact translation of the query word is annotated with double raised asterisk * * and their inflected forms are annotated with single raised asterisk * .",
"num": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table><tr><td>NMT System</td><td colspan=\"2\">BLEU RIBES</td></tr><tr><td>Reference-Baseline</td><td>0.31</td><td>0.61</td></tr><tr><td>Reference-ManDic</td><td>0.33</td><td>0.66</td></tr><tr><td>Reference-InDic</td><td>0.34</td><td>0.71</td></tr><tr><td>ManDic-InDic</td><td>0.89</td><td>0.95</td></tr></table>",
"html": null,
"text": "Precision of NMT system",
"num": null,
"type_str": "table"
}
}
}
}