{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:29:17.672902Z"
},
"title": "MUCS@Adap-MT 2020: Low Resource Domain Adaptation for Indic Machine Translation",
"authors": [
{
"first": "Asha",
"middle": [],
"last": "Hegde",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Mangalore University",
"location": {}
},
"email": "hegdekasha@gmail.com"
},
{
"first": "H",
"middle": [
"L"
],
"last": "Shashirekha",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Mangalore University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Machine Translation (MT) is the task of automatically converting the text in source language to text in target language by preserving the meaning. MT task usually require large corpus for training the translation models. Due to scarcity of resources very less attention is given to translating into low resource languages and in particular into Indic languages. In this direction, a shared task called \"Adap-MT 2020: Low Resource Domain Adaptation for Indic Machine Translation\" is organized to illustrate the capability of general domain MT when translating into Indic languages and low resource domain adaptation of MT systems. In this paper, we, team MUCS, describe a simple word extraction based domain adaptation approach applied to English-Hindi MT only. MT in the proposed model is carried out using Open-NMT-a popular Neural Machine Translation tool. A general domain corpus is built effectively combining the available English-Hindi corpora and removing the duplicate sentences. Further, domain specific corpora is updated by extracting the sentences from generic corpus that match with the vocabulary of the domain specific corpus. The proposed model is exhibited satisfactory results for small domain specific AI and CHE corpora in terms of Bilingual Evaluation Understudy (BLEU) score with 1.25 and 2.72 respectively. Further, this methodology is quite generic and can easily be extended to other low resource language pairs as well.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Machine Translation (MT) is the task of automatically converting the text in source language to text in target language by preserving the meaning. MT task usually require large corpus for training the translation models. Due to scarcity of resources very less attention is given to translating into low resource languages and in particular into Indic languages. In this direction, a shared task called \"Adap-MT 2020: Low Resource Domain Adaptation for Indic Machine Translation\" is organized to illustrate the capability of general domain MT when translating into Indic languages and low resource domain adaptation of MT systems. In this paper, we, team MUCS, describe a simple word extraction based domain adaptation approach applied to English-Hindi MT only. MT in the proposed model is carried out using Open-NMT-a popular Neural Machine Translation tool. A general domain corpus is built effectively combining the available English-Hindi corpora and removing the duplicate sentences. Further, domain specific corpora is updated by extracting the sentences from generic corpus that match with the vocabulary of the domain specific corpus. The proposed model is exhibited satisfactory results for small domain specific AI and CHE corpora in terms of Bilingual Evaluation Understudy (BLEU) score with 1.25 and 2.72 respectively. Further, this methodology is quite generic and can easily be extended to other low resource language pairs as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Machine Translation (MT) acts as a bridge for cross-language communication in Natural Language Processing (NLP). It handles perplexity problems between two languages while preserving its meaning. MT was one of the initial tasks taken up by computer scientists and the research in this field is going on for last 50 years. MT task was initially handled with dictionary matching techniques and slowly upgraded to rule-based approaches (Dove et al., 2012) . To resolve knowledge acquisition issues corpus based approaches became popular and bilingual parallel corpora was used to acquire translation knowledge (Britz et al., 2017) . Along with corpus based approaches, hybrid MT approaches also became popular as these approaches promise state-of-the-art result.",
"cite_spans": [
{
"start": 433,
"end": 452,
"text": "(Dove et al., 2012)",
"ref_id": "BIBREF3"
},
{
"start": 607,
"end": 627,
"text": "(Britz et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The recent shift to large-scale analytical techniques has resulted in very significant improvements in the quality of MT. Neural Machine Translation (NMT) -a corpus based approach has gained attention of the MT researchers. NMT is the task of translating text from one natural language (source) to another natural language (target) using most commonly, Recurrent Neural Networks (RNN), specifically the Encoder-Decoder or Sequence-to Sequence models (Sutskever et al., 2014) . Further, unlike conventional translation systems, all parts of the neural translation model are trained jointly (end-to-end) to maximize the translation performance (Bahdanau et al., 2014) . In an NMT system, a bidirectional RNN, known as encoder is used by the Neural Network (NN) to encode a source sentence for a second RNN, known as decoder which is used to predict words in the target language. This encoder-decoder architecture can be designed with multiple layers to increase the efficiency of the system. Now, NMT has become an effective alternative to traditional Phrase-Based Statistical Machine Translation (Patil and Davies, 2014) .",
"cite_spans": [
{
"start": 450,
"end": 474,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF9"
},
{
"start": 642,
"end": 665,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF0"
},
{
"start": 1095,
"end": 1119,
"text": "(Patil and Davies, 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
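For readers unfamiliar with the architecture described above, the following is a minimal Python sketch (assuming PyTorch) of the encoder-decoder idea: a bidirectional GRU encoder summarizes the source sentence into a single state from which a GRU decoder predicts target-language words. This is an illustrative sketch only, not the authors' model; it omits attention, batching details and the training loop, and all class and parameter names are hypothetical.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Bidirectional RNN that encodes a source sentence into one hidden state.
    def __init__(self, src_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(src_vocab, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.reduce = nn.Linear(2 * hid_dim, hid_dim)

    def forward(self, src):  # src: (batch, src_len) of token ids
        outputs, hidden = self.rnn(self.embed(src))
        # Merge the last forward and backward states into one decoder-sized state.
        merged = torch.tanh(self.reduce(torch.cat((hidden[0], hidden[1]), dim=1)))
        return outputs, merged.unsqueeze(0)

class Decoder(nn.Module):
    # RNN that predicts target-language words conditioned on the encoder state.
    def __init__(self, tgt_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(tgt_vocab, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, tgt_in, hidden):  # tgt_in: (batch, tgt_len), teacher forcing
        outputs, hidden = self.rnn(self.embed(tgt_in), hidden)
        return self.out(outputs), hidden  # logits over the target vocabulary

Training end-to-end, as described above, would feed the decoder logits and the gold target tokens to a single cross-entropy loss so that encoder and decoder parameters are optimized jointly.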
{
"text": "In spite of its popularity, NMT faces the following challenges",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges of NMT",
"sec_num": "1.1"
},
{
"text": "\u2022 Normally NMT require a large dataset for training the model and powerful computa-tional resource to build NN with sufficient amount of hidden layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges of NMT",
"sec_num": "1.1"
},
{
"text": "\u2022 NMT is inconsistent in handling rare words. Since these words are sparsely available in the network, learning and inferencing them is not efficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges of NMT",
"sec_num": "1.1"
},
{
"text": "\u2022 Though many experiments are being carried out to handle long sentences, long term dependency issue is still considered as a major problem in NMT (Tien and Minh, 2019).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges of NMT",
"sec_num": "1.1"
},
{
"text": "The main objective of this work is to investigate efficient strategies to perform English to Hindi MT using sufficient amount of general domain corpora and very small domain specific corpora. Rest of the paper is structured as follows: Section 2 gives the brief description about domain adaptation and different approaches to domain adaptation followed by the methodology in Section 3. Experiments and results are given in Section 4 and conclusion in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges of NMT",
"sec_num": "1.1"
},
{
"text": "Dataset plays a crucial role in NN based translation models. Huge amount of quality dataset for training results in good translation performance whereas small dataset results in poor translation performance. Hence, if the dataset is small, effective management of such dataset for NN based translation will be the key for better translation performance. Domain adaptation techniques that transfer existing knowledge to new domains as much as possible is one method in this direction. Domain Adaptation (DA) is a sub-discipline of machine learning in which a model trained on a source distribution is used in the context of a different (but related) target distribution. In simple words, it is the ability to apply an algorithm trained in one domain to a different domain or updating one corpus using another corpus. While the big generic corpus will help to avoid out-of-vocabulary problem and unidiomatic translations, the small specialized corpus will help to capture terminology and vocabulary that is required for the translation (\u0160o\u0161tari\u0107 et al., 2019) . Few effective DA approaches which promise better translation performance are as follows:",
"cite_spans": [
{
"start": 1034,
"end": 1057,
"text": "(\u0160o\u0161tari\u0107 et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation for NMT",
"sec_num": "2"
},
{
"text": "\u2022 Incremental Training and Re-training -In this approach, initially a model is trained on a huge generic corpus and then the same model is re-trained on a small domain specific corpus. This approach has two phases: i) preprocessing and training of huge generic corpus and ii) pre-processing the new domain specific corpus and re-training the base model on the domain specific corpus (Kalimuthu et al., 2019) .",
"cite_spans": [
{
"start": 383,
"end": 407,
"text": "(Kalimuthu et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation for NMT",
"sec_num": "2"
},
{
"text": "\u2022 Ensemble of decoding -In this approach, the base model is trained on generic dataset and the model is re-trained on domain specific dataset. Then instead of combining dataset, both the models are combined during translation (Chu and Wang, 2018) .",
"cite_spans": [
{
"start": 226,
"end": 246,
"text": "(Chu and Wang, 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation for NMT",
"sec_num": "2"
},
{
"text": "\u2022 Combining Training Data -This approach is a simple and effective DA approach compared to all other approaches. In this approach, both the corpora are combined and this new corpus is used for training ie., huge generic corpus is combined with domain specific corpus and then this new corpus is used for training (Chu and Wang, 2018) .",
"cite_spans": [
{
"start": 313,
"end": 333,
"text": "(Chu and Wang, 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation for NMT",
"sec_num": "2"
},
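To make the 'Combining Training Data' approach above concrete, the following minimal Python sketch concatenates a generic parallel corpus with a domain specific one and drops duplicate sentence pairs (the duplicate removal also mirrors the general domain corpus construction mentioned in the abstract). The tab-separated file format and the file names are assumptions for illustration only.

def load_pairs(path):
    # Each line is assumed to hold "source<TAB>target".
    with open(path, encoding="utf-8") as f:
        return [tuple(line.rstrip("\n").split("\t", 1)) for line in f if line.strip()]

def combine_corpora(generic_path, domain_path, out_path):
    seen, combined = set(), []
    # Domain specific pairs first, then the generic ones, skipping duplicates.
    for pair in load_pairs(domain_path) + load_pairs(generic_path):
        if pair not in seen:
            seen.add(pair)
            combined.append(pair)
    with open(out_path, "w", encoding="utf-8") as f:
        for src, tgt in combined:
            f.write(f"{src}\t{tgt}\n")
    return len(combined)

# Example (hypothetical file names):
# combine_corpora("generic.en-hi.tsv", "ai.en-hi.tsv", "combined.en-hi.tsv")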
{
"text": "\u2022 Data Augmentation -In this approach, size of the domain specific dataset is increased using phrase based translation technique. The information related to word alignment is extracted from the corpus and then this information is used to build n-gram model to construct new dataset. Further, duplicates are discarded to avoid redundancy (Xia et al., 2019) . ",
"cite_spans": [
{
"start": 337,
"end": 355,
"text": "(Xia et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation for NMT",
"sec_num": "2"
},
{
"text": "Despite the considerable advances in MT models, translation of low-resource languages is still an unresolved issue and DA approaches are promising considerable performance in this direction. In the proposed work, a DA approach of combining both generic dataset and domain specific dataset based on the vocabulary of domain specific dataset is used to conduct effective training and inference for translation using openNMT-a popular open source tool (Klein et al., 2018) . OpenNMT accepts only primarily cleaned dataset as its input. Therefore, noise such as initial space, end space, blank lines and special characters have been removed from the bilingual parallel corpus. This pre-processing is carried out for both generic corpus and domain specific corpora. Then vocabulary of the domain specific corpora is constructed and sentences that contain any of the words in this vocabulary are extracted from the generic corpus. Finally, these extracted sentences are added to the domain specific corpus and the updated corpus is used to train the translation model. Table 2 illustrates a sample sentence from generic corpus and from domain specific corpus along with their vocabulary. The word 'queen' which is present in domain specific corpus is also present in the generic corpus. Hence, that sentence from the generic corpus will be extracted and added to the domain specific corpus.",
"cite_spans": [
{
"start": 449,
"end": 469,
"text": "(Klein et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 1063,
"end": 1070,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
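The vocabulary-matching step described in the methodology above can be sketched in Python as follows: build the vocabulary of the domain specific source sentences, then keep every generic-corpus pair whose source side shares at least one word with that vocabulary. This is an illustrative reconstruction under simple assumptions (whitespace tokenization, lowercasing), not the authors' exact implementation.

def build_vocabulary(sentences):
    # Whitespace-tokenized, lowercased vocabulary of the domain specific corpus.
    vocab = set()
    for sentence in sentences:
        vocab.update(sentence.lower().split())
    return vocab

def extract_matching_pairs(generic_pairs, domain_vocab):
    # Keep generic sentence pairs whose source side contains any in-domain word.
    selected = []
    for src, tgt in generic_pairs:
        if domain_vocab.intersection(src.lower().split()):
            selected.append((src, tgt))
    return selected

# With the Table 2 example, the domain sentence about the bee hive contributes
# 'queen' (among other words) to the vocabulary, so the generic sentence
# "The Queen said: ..." is selected and appended to the domain specific corpus.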
{
"text": "Dataset and the preparation of dataset for training the translation model play a major role in MT. This data preparation process is carried out at different levels to conduct effective translation. Table 4 . Table 3 shows the details of domain specific dataset after applying DA and details of train and validation dataset used for the experiments are shown in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 205,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 208,
"end": 215,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 361,
"end": 368,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "English to Hindi MT is implemented using open-NMT which is considered as the most sophisticated generalized translation tool that provides easy modifications. As this model requires GPU, we set up this experiment in Google colaboratory. Translation experiments are carried out by continuous tuning of the model to conduct better training. Initially, this model is trained using a huge generic corpus then the same set up is used for domain specific corpus. As the given domain specific corpora are very small to conduct efficient translation, training data of domain specific corpora is combined with generic corpus based on vocabulary of the domain specific corpora and the training is continued with the same set up.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "The proposed model predicts Hindi sentences for the given English test sentences and the sample snapshot of the model is shown in Figure 1 and the performance measure of the proposed model in terms of accuracy and perplexity is shown in Table 5 . Further, the proposed system is evaluated separately using BLEU score (Papineni et al., 2002) for both generic corpus and domain specific corpora. Though there are many challenges with the test dataset, considerable results are obtained for both generic corpus and domain specific corpora.",
"cite_spans": [
{
"start": 317,
"end": 340,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 130,
"end": 138,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 237,
"end": 244,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Result",
"sec_num": "4.1"
},
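BLEU (Papineni et al., 2002), used for the evaluation above, measures n-gram overlap between system output and reference translations. The following minimal Python sketch uses NLTK's corpus-level BLEU; it assumes the nltk package and whitespace-tokenized text, and is only an illustration of how such a score can be computed, not the shared task's official scoring script.

from nltk.translate.bleu_score import corpus_bleu

def bleu_score(hypotheses, references):
    # hypotheses: list of system translations (strings)
    # references: list of single reference translations (strings), one per hypothesis
    hyp_tokens = [h.split() for h in hypotheses]
    ref_tokens = [[r.split()] for r in references]  # one reference list per sentence
    return corpus_bleu(ref_tokens, hyp_tokens) * 100  # scaled to the usual 0-100 range

# Example (toy strings in place of real Hindi output):
# print(bleu_score(["this is a test"], ["this is a test"]))  # prints 100.0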
{
"text": "The results obtained for the given test set with respect to general domain corpus shows 63.43% accuracy with 20.51 perplexity using openNMT model. This model shows considerable accuracy for the generic corpus as it contains lots of challenges related to alignment, mixing of different script, length of the sentences etc. Then, the results obtained for translating the given test set with respect to domain specific AI corpus in the same setup shows 30.63% accuracy with 45.68 perplexity. As this corpus is very small to conduct translation the same is replicated in the result ie., it exhibits poor translation. Then, after applying proposed DA approach the model shows improvement in both accuracy words that improves the translation. Further, the results obtained for translating the given test set with respect to domain specific CHE corpus using openNMT shows 31.57% accuracy with 40.48 perplexity. Then, proposed DA approach is applied and newly constructed corpus is used in the model. It shows improvement in both accuracy and perplexity ie., 42.87% and 29.25 respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result Analysis",
"sec_num": "4.2"
},
{
"text": "In this English-Hindi translation work, a huge generic corpus and small domain specific corpora are used for translation in openNMT. Further, a simple domain adaptation technique is used to tackle translation issues of low-resource languages. As this approach is language independent it can easily be extended to other low-resource languages. Further, these experiments have exhibited satisfactory results for both generic corpus and domain specific corpora. We would like to explore different preprocessing techniques that helps to translate low resource languages efficiently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future work",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Massive exploration of neural machine translation architectures",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Britz",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Goldie",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.03906"
]
},
"num": null,
"urls": [],
"raw_text": "Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc Le. 2017. Massive exploration of neural machine translation architectures. arXiv preprint arXiv:1703.03906.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A survey of domain adaptation for neural machine translation",
"authors": [
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.00258"
]
},
"num": null,
"urls": [],
"raw_text": "Chenhui Chu and Rui Wang. 2018. A survey of domain adaptation for neural machine translation. arXiv preprint arXiv:1806.00258.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "What's your pick: Rbmt, smt or hybrid",
"authors": [
{
"first": "Catherine",
"middle": [],
"last": "Dove",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Loskutova",
"suffix": ""
},
{
"first": "Ruben",
"middle": [],
"last": "De La Fuente",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the tenth conference of the Association for Machine Translation in the Americas (AMTA 2012)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Catherine Dove, Olga Loskutova, and Ruben de la Fuente. 2012. What's your pick: Rbmt, smt or hy- brid. In Proceedings of the tenth conference of the Association for Machine Translation in the Americas (AMTA 2012). San Diego, CA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Incremental domain adaptation for neural machine translation in low-resource settings",
"authors": [
{
"first": "Marimuthu",
"middle": [],
"last": "Kalimuthu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Barz",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Sonntag",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marimuthu Kalimuthu, Michael Barz, and Daniel Son- ntag. 2019. Incremental domain adaptation for neu- ral machine translation in low-resource settings. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 1-10.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Opennmt: Neural machine translation toolkit",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.11462"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Vincent Nguyen, Jean Senellart, and Alexander M Rush. 2018. Opennmt: Neural machine translation toolkit. arXiv preprint arXiv:1805.11462.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Use of google translate in medical communication: evaluation of accuracy",
"authors": [
{
"first": "Sumant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Davies",
"suffix": ""
}
],
"year": 2014,
"venue": "Bmj",
"volume": "349",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumant Patil and Patrick Davies. 2014. Use of google translate in medical communication: evaluation of accuracy. Bmj, 349:g7392.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Domain adaptation for machine translation involving a low-resource language: Google automl vs. from-scratch nmt systems",
"authors": [
{
"first": "Nata\u0161a",
"middle": [],
"last": "Margita\u0161o\u0161tari\u0107",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Pavlovi\u0107",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Boltu\u017ei\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "Translating and the Computer",
"volume": "41",
"issue": "",
"pages": "113--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Margita\u0160o\u0161tari\u0107, Nata\u0161a Pavlovi\u0107, and Filip Boltu\u017ei\u0107. 2019. Domain adaptation for machine translation involving a low-resource language: Google automl vs. from-scratch nmt systems. Translating and the Computer, 41:113-124.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing sys- tems, pages 3104-3112.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Long sentence preprocessing in neural machine translation",
"authors": [],
"year": 2019,
"venue": "2019 IEEE-RIVF International Conference on Computing and Communication Technologies (RIVF)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ha Nguyen Tien and Huyen Nguyen Thi Minh. 2019. Long sentence preprocessing in neural machine translation. In 2019 IEEE-RIVF International Con- ference on Computing and Communication Tech- nologies (RIVF), pages 1-6. IEEE.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Generalized data augmentation for low-resource translation",
"authors": [
{
"first": "Mengzhou",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Kong",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.03785"
]
},
"num": null,
"urls": [],
"raw_text": "Mengzhou Xia, Xiang Kong, Antonios Anastasopou- los, and Graham Neubig. 2019. Generalized data augmentation for low-resource translation. arXiv preprint arXiv:1906.03785.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Predicted English-Hindi sentences using openNMT",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"type_str": "table",
"text": "Details of General domain English-Hindi parallel corpus",
"html": null,
"content": "<table><tr><td>Resource</td><td>No.</td><td>of</td><td>No.</td><td>of</td></tr><tr><td/><td colspan=\"2\">parallel</td><td>words</td><td/></tr><tr><td/><td colspan=\"2\">sentences</td><td/><td/></tr><tr><td colspan=\"3\">IIT Bombay 2,00,000</td><td colspan=\"2\">6,28,56,567</td></tr><tr><td>Bible</td><td>62,073</td><td/><td colspan=\"2\">4,10,589</td></tr><tr><td colspan=\"2\">globalvoices 2,299</td><td/><td colspan=\"2\">1,70,116</td></tr><tr><td colspan=\"2\">CVIT-MKB 5,272</td><td/><td colspan=\"2\">3,54,128</td></tr></table>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"text": "Sample sentences",
"html": null,
"content": "<table><tr><td>corpus</td><td>Sentence</td><td colspan=\"2\">Vocabulary</td></tr><tr><td>Generic corpus</td><td>The Queen said:</td><td colspan=\"2\">queen, said, know,</td></tr><tr><td/><td>Know my nobles</td><td colspan=\"2\">nobles, gracious, let-</td></tr><tr><td/><td>that a gracious letter</td><td colspan=\"2\">ter, delivered</td></tr><tr><td/><td>has been delivered to</td><td/></tr><tr><td/><td>me.</td><td/></tr><tr><td>Domain specific</td><td/><td/></tr><tr><td>corpus</td><td>Example one, in a</td><td colspan=\"2\">example, one, bee,</td></tr><tr><td/><td>bee hive, there are</td><td>hive,</td><td>thousands,</td></tr><tr><td/><td>many thousands of</td><td>workers,</td><td>serve,</td></tr><tr><td/><td>workers bee that all</td><td>queen</td></tr><tr><td/><td>serve one queen bee</td><td/></tr></table>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "Details of domain specific English-Hindi parallel corpora after domain adaptation (for training)",
"html": null,
"content": "<table><tr><td>Corpus</td><td>No.</td><td>of</td><td>No.</td><td>of</td><td>Vocab</td></tr><tr><td>name</td><td colspan=\"2\">parallel</td><td>words</td><td/><td>Size</td></tr><tr><td/><td colspan=\"2\">sentences</td><td/><td/><td/></tr><tr><td>AI</td><td colspan=\"2\">2,28,079</td><td colspan=\"3\">6,66,42,961 98,606</td></tr><tr><td>CHE</td><td colspan=\"2\">2,27,873</td><td colspan=\"3\">6,62,58,875 1,00,006</td></tr><tr><td colspan=\"6\">and perplexity ie., 41.98% and 38.52 respectively.</td></tr><tr><td colspan=\"6\">Because of DA technique used for translation, do-</td></tr><tr><td colspan=\"6\">main specific dataset is increased to capture rare</td></tr></table>"
},
"TABREF4": {
"num": null,
"type_str": "table",
"text": "Details of domain specific English-Hindi parallel corpora before domain adaptation (for training)",
"html": null,
"content": "<table><tr><td>Corpus</td><td>No.</td><td>of</td><td>No.</td><td>of</td></tr><tr><td>name</td><td colspan=\"2\">parallel</td><td>words</td><td/></tr><tr><td/><td colspan=\"2\">sentences</td><td/><td/></tr><tr><td>AI</td><td>4,383</td><td/><td colspan=\"2\">8,05,483</td></tr><tr><td>CHE</td><td>3,567</td><td/><td colspan=\"2\">13,72,980</td></tr></table>"
},
"TABREF5": {
"num": null,
"type_str": "table",
"text": "Performance measurement of the model",
"html": null,
"content": "<table><tr><td>Corpus Name</td><td colspan=\"2\">Accuracy Perplexity</td></tr><tr><td>Generic Corpus</td><td>63.43</td><td>20.51</td></tr><tr><td>AI (Before DA)</td><td>30.63</td><td>45.68</td></tr><tr><td colspan=\"2\">CHE (Before DA) 31.57</td><td>40.48</td></tr><tr><td>AI (After DA)</td><td>41.98</td><td>38.52</td></tr><tr><td>CHE (After DA)</td><td>42.87</td><td>29.25</td></tr></table>"
},
"TABREF6": {
"num": null,
"type_str": "table",
"text": "Details of training and validation sentences used for the model",
"html": null,
"content": "<table><tr><td>Corpus</td><td>No.</td><td>of</td><td>No. of val-</td></tr><tr><td>name</td><td colspan=\"2\">Training</td><td>idation sen-</td></tr><tr><td/><td colspan=\"2\">sentences</td><td>tences</td></tr><tr><td>Generic</td><td colspan=\"2\">2,69,400</td><td>20,244</td></tr><tr><td>AI</td><td colspan=\"2\">2,65,383</td><td>20,400</td></tr><tr><td>CHE</td><td colspan=\"2\">2,46,867</td><td>20,300</td></tr></table>"
}
}
}
}