|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:07:52.352268Z" |
|
}, |
|
"title": "cEnTam: Creation and Validation of a New English-Tamil Bilingual Corpus", |
|
"authors": [ |
|
{ |
|
"first": "Sanjanasri", |
|
"middle": [], |
|
"last": "Jp", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Amrita Vishwa Vidyapeetham", |
|
"location": { |
|
"postCode": "641112", |
|
"settlement": "Coimbatore", |
|
"country": "India" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Vijay", |
|
"middle": [ |
|
"Krishna" |
|
], |
|
"last": "Menon", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Amrita Vishwa Vidyapeetham", |
|
"location": { |
|
"postCode": "641112", |
|
"settlement": "Coimbatore", |
|
"country": "India" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Natural Language Processing (NLP) is the field of artificial intelligence that gives computers the ability to interpret, perceive and extract appropriate information from human languages. Contemporary NLP is predominantly a data-driven process. It employs machine learning and statistical algorithms to learn language structures from textual corpora. While NLP applications in English and certain European languages such as Spanish and German have been tremendous, the same is not true for many Indian languages. There are obvious advantages in creating aligned bilingual and multilingual corpora. Machine translation, cross-lingual information retrieval, content availability and linguistic comparison are a few of the most sought-after applications of such parallel corpora. This paper explains and validates a parallel corpus we created for the English-Tamil bilingual pair.",
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Natural Language Processing (NLP) is the field of artificial intelligence that gives computers the ability to interpret, perceive and extract appropriate information from human languages. Contemporary NLP is predominantly a data-driven process. It employs machine learning and statistical algorithms to learn language structures from textual corpora. While NLP applications in English and certain European languages such as Spanish and German have been tremendous, the same is not true for many Indian languages. There are obvious advantages in creating aligned bilingual and multilingual corpora. Machine translation, cross-lingual information retrieval, content availability and linguistic comparison are a few of the most sought-after applications of such parallel corpora. This paper explains and validates a parallel corpus we created for the English-Tamil bilingual pair.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Accurately analyzing NLP tasks requires a good-quality corpus. However, creating such a corpus is a tedious and laborious task. There are only a few open-source bilingual corpora available for the English-Tamil language pair; the existing ones are listed in Table 1. EnTam (EnTam-v2) (Ramasamy et al., 2014) is an English-Tamil bilingual corpus crawled from publicly available websites, especially from the cinema and general news domains, plus bible data. The authors of that paper note that the corpus is plain raw data and requires some pre-processing before it can be used for any NLP application. OpenSubtitles (Lison and Tiedemann, 2016) is a corpus collected from the OPUS website; it comprises bilingual movie subtitles, which belong to the spoken-language category. Tanzil (Tiedemann, 2012) is a collection of Quran translations compiled by the Tanzil project. The OPUS website (Tiedemann, 2012) also provides a collection of English-Tamil bilingual localization files from open-source software projects such as Ubuntu, KDE4, and GNOME. The QED (QCRI Educational Domain) corpus (Abdelali et al., 2014) is again a data set belonging to the spoken-language category. It includes bilingual subtitles of educational videos and lectures, transcribed and translated using the AMARA web-based platform. The following shortcomings were observed based on the information from these existing bilingual corpora:",
|
"cite_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 332, |
|
"text": "(Ramasamy et al., 2014)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 637, |
|
"end": 664, |
|
"text": "(Lison and Tiedemann, 2016)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 812, |
|
"end": 828, |
|
"text": "(Tiedemann, 2012", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 914, |
|
"end": 931, |
|
"text": "(Tiedemann, 2012)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1096, |
|
"end": 1119, |
|
"text": "(Abdelali et al., 2014)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 282, |
|
"end": 289, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 Tanzil is mostly translated poetry and the Bible is non-contemporary prose. Hence, these cannot be utilised for generic NLP applications; a specific dictionary has to be created.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 EnTam is a raw, unstructured web corpus and contains many noisy tokens such as image hyperlinks and other non-text web content. Heavy pre-processing is required to make it usable. The sentences are aligned merely based on delimiters. The crawled website data is only roughly comparable, which adversely affects bilingual embedding algorithms due to its high noise content.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 OpenSubtitles and QED are corpora belonging to the spoken-language category, which might not help in efficient textual analysis.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 The Tatoeba corpus has a minimal number of parallel sentences. Hence, it cannot be used as standalone data for training machine learning models.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Although these existing corpora for the English-Tamil language pair may still be useful in certain bilingual applications, we believe that they lack features that are strongly desirable for use in a word-embedding context. Therefore, for a justifiable analysis of semantic relatedness between language pairs using word embeddings, a standard corpus has to be developed.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Years back, creating a bilingual corpus was an uphill task in NLP, especially for Indian languages. Today, the Internet breaks the language barrier for both content and access. Many literary works such as novels, short stories and plays are being translated among various languages and made easily accessible, mostly through crowd-sourcing. However, having rich literature in a language does not imply that it is resource-rich, at least in a bilingual context; creating a parallel corpus is still a mammoth effort. The data provided is a collection of sentences taken from textbooks, bilingual novels, story books and bilingual websites covering the tourism, health and news domains. The source data are merely comparable. The sample data is shown in Table 2.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 737, |
|
"end": 744, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The methodology for acquisition of the parallel corpus (cEnTam) from printed books and websites is shown in Fig. 1 and Fig. 2. In the pre-processing phase, the scanned images are cropped, de-skewed, rotated and even re-scanned wherever necessary to remove noise. The cleaned image is converted to text using the Google OCR API, and the text is further cleansed manually. It was necessary to ensure that lines did not blend into each other and that the font did not interfere with character recognition; characters that were not detected properly had to be typed in manually. In the case of website data, selected bilingual/monolingual websites are crawled using the Python library \"Scrapy\" to extract the main text from the web pages. Headlines, hyperlinks, images, author names and publication dates are all ignored. The extracted raw text is cleansed and normalized to remove punctuation, quotations, brackets, currency characters and digits. Since bilingual websites are already parallel, the",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 111, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental design", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Cleansing/normalization", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Crawling websites", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Shorter sentences (fewer than six tokens) are less likely to contain any of the linguistic rule patterns; hence, the sentences vary from six to thirty tokens in length, with a corpus average of fifteen tokens per sentence, including functional words. Please find the specifics about the corpus in Table 3.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 326, |
|
"end": 333, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence aligning", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The bilingual corpora are assessed based on coherence. In a coherent text, there are logical links between the words, sentences, and paragraphs. Coherence can be quantified by measuring similarity between sentences and/or documents. We use a simple cosine similarity measure over appropriate embeddings, called the neighbourhood method. This approach assesses the translation quality of words using the bilingual embeddings trained on the aforementioned corpora: it measures the accuracy of the translation for a given source word, evaluated against a test dictionary (AI, 2020). For computing coherence between sentences, we need pre-trained monolingual embeddings in English and Tamil, obtained separately from each corpus (Table 1). Using MUSE (Conneau et al., 2017), we generate bilingual embeddings for all the words in the vocabulary in an unsupervised manner. We then use these bilingual word embeddings to generate bilingual sentence embeddings, which embed sentences of the source and target language in a shared vector space. The average cosine similarity of the sentences is used as an accuracy metric.",
|
"cite_spans": [ |
|
{ |
|
"start": 772, |
|
"end": 794, |
|
"text": "(Conneau et al., 2017)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 749, |
|
"end": 758, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparative Analysis of corpora", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "This section presents a comparative study of corpora via Neural Machine Translation (NMT), using the corpus created in-house (cEnTam) and EnTam. Translating a large number of sentences is a very complex process, so we chose to do it only on these two main data sets. The quality of translation is directly assessed using BLEU and RIBES scores. A simple NMT architecture, shown in Fig. 3, is used to keep the training easy and fast. The induced translation is evaluated using both the Bilingual Evaluation Understudy (BLEU) (Papineni et al., 2002) and Rank-based Intuitive Bilingual Evaluation Score (RIBES) (Isozaki et al., 2010) metrics. While BLEU is the standard metric for Machine Translation (MT) evaluation, RIBES is best suited for distant language pairs like English and Tamil (Tan et al., 2015). Accuracy can be improved further with an attention mechanism (Bahdanau et al., 2014). This evaluation demonstrates the better coherence of our corpus.",
|
"cite_spans": [ |
|
{ |
|
"start": 535, |
|
"end": 558, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 619, |
|
"end": 641, |
|
"text": "(Isozaki et al., 2010)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 809, |
|
"end": 827, |
|
"text": "(Tan et al., 2015)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 902, |
|
"end": 925, |
|
"text": "(Bahdanau et al., 2014)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 436, |
|
"end": 442, |
|
"text": "Fig. 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Neural Machine Translation", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "The efficacy of the bilingual embeddings trained over the various corpora is assessed using word-level and sentence-level neighbourhoods. This method is inspired by (Mikolov et al., 2013). In this approach, we test whether the bilingual embedding is able to generate an appropriate target word for a given source word within a confined window of top similar words. Table 4 shows the performance on the nearest-neighbourhood word task: we measure the percentage of source words whose target translation appears among the K nearest neighbours. Even for K=1, the value is much higher for our corpus than for the other corpora, which indicates that the parallel sentences in our corpus are more coherent. Table 5 shows the performance on the sentence similarity task for the various corpora. Considering the performance of all the corpora in the aforementioned tasks, cEnTam shows considerably better results, with EnTam the next best. Hence, for the comparative study using NMT, the cEnTam and EnTam corpora were used. The results are shown in Table 6. Both the BLEU and RIBES metrics yield better scores for translations created using the cEnTam corpus than for EnTam. This further demonstrates the quality of cEnTam over EnTam in a real machine translation system.",
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 185, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 369, |
|
"end": 376, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 760, |
|
"end": 767, |
|
"text": "Table 5", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 1103, |
|
"end": 1110, |
|
"text": "Table 6", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "The non-existence of standard bilingual corpora is a major obstruction to effectively utilizing NLP technologies in many languages. Whether for explainable (AI) analysis of semantic relatedness between language pairs or for end-to-end deep learning models, it is necessary to have a standard bilingual corpus. Here, we have demonstrated and implemented a methodology to create bilingual corpora that is comparatively fast and requires less human effort. The corpus created is sentence-aligned; hence it can be used for implementing NLP applications such as machine translation, cross-lingual information retrieval, semantic comparison and bilingual dictionary induction. The corpus was validated using the nearest-neighbourhood approach, sentence similarity, and neural machine translation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7." |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Dr. Rajendran S, Professor & Head (Retd.), Department of Linguistics, Tamil University, Thanjavur, India, currently serving as adjunct professor in the Centre for Computational Engineering and Networking (CEN), Amrita University, India; Dr. A.G. Menon, associate professor (Retd.), Department of Indian Studies and Department of Comparative Linguistics, Leiden University; and Dr. Loganathan Ramaswamy, Machine Learning Engineer at MSD, Prague, Czech Republic, and main author of the EnTam corpus, for their valuable suggestions on creating the bilingual corpus.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The AMARA corpus: Building parallel language resources for the educational domain", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Abdelali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Guzman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Sajjad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Vogel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1856--1862", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abdelali, A., Guzman, F., Sajjad, H., and Vogel, S. (2014). The AMARA corpus: Building parallel language resources for the educational domain. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 1856-1862, Reykjavik, Iceland, May. European Language Resources Association (ELRA).",
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Pretrained vectors fasttext", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "AI, F. (2016). Pretrained vectors fasttext.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate.",
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Word translation without parallel data", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "J\u00e9gou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1710.04087" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Conneau, A., Lample, G., Ranzato, M., Denoyer, L., and J\u00e9gou, H. (2017). Word translation without parallel data. arXiv preprint arXiv:1710.04087.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Bilbowa: Fast bilingual distributed representations without word alignments", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Gouws", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Workshop and Conference Proceedings", |
|
"volume": "37", |
|
"issue": "", |
|
"pages": "748--756", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gouws, S., Bengio, Y., and Corrado, G. (2015). Bilbowa: Fast bilingual distributed representations without word alignments. In ICML, volume 37 of JMLR Workshop and Conference Proceedings, pages 748-756.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Automatic evaluation of translation quality for distant language pairs", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Isozaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hirao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Duh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sudoh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Tsukada", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "944--952", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Isozaki, H., Hirao, T., Duh, K., Sudoh, K., and Tsukada, H. (2010). Automatic evaluation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 944-952, Cambridge, MA, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Lison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "923--929", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lison, P. and Tiedemann, J. (2016). OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 923-929, Portoro\u017e, Slovenia, May. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Exploiting similarities among languages for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikolov, T., Le, Q. V., and Sutskever, I. (2013). Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168.",
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W.-J", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics.",
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "En-Tam: An english-tamil parallel corpus", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Ramasamy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "\u017dabokrtsk\u00fd",
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ramasamy, L., Bojar, O., and \u017dabokrtsk\u00fd, Z. (2014). EnTam: An English-Tamil parallel corpus (EnTam v2.0).",
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.",
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "An awkward disparity between bleu / ribes scores and human judgements in machine translation", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Dehdari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Van Genabith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Workshop on Asian Translation (WAT-2015)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tan, L. L., Dehdari, J., and van Genabith, J. (2015). An awkward disparity between BLEU / RIBES scores and human judgements in machine translation. In Proceedings of the Workshop on Asian Translation (WAT-2015), pages 74-81. Association for Computational Linguistics.",
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Parallel data, tools and interfaces in OPUS", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2214--2218", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tiedemann, J. (2012). Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2214-2218, Istanbul, Turkey, May. European Language Resources Association (ELRA).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Block diagram for creation of parallel corpus (cEnTam) -printed books", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "Block diagram for creation of parallel corpus (cEnTam) -website data. Sentences are aligned based on delimiters; aligned sentences are checked manually for corrections. Lengthy sentences are split into shorter ones to maintain consistency in data.",
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "Neural Machine Translation Deep network used for testing corpora performances.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Details of existing corpora for English-Tamil language pair", |
|
"num": null, |
|
"content": "<table><tr><td>Source</td><td>Domain</td><td>Sentences</td><td>English Tokens</td><td>Tamil Tokens</td></tr><tr><td>EnTam</td><td>Generic (bible, cinema, news)</td><td>169.8k</td><td>3.9M</td><td>2.7M</td></tr><tr><td>Open subtitles</td><td>Movie Subtitles</td><td>32.4k</td><td>0.2M</td><td>0.2M</td></tr><tr><td>OPUS website</td><td>Ubuntu, KDE4, GNOME</td><td>111.1k</td><td>3.2M</td><td>1.0M</td></tr><tr><td>Tatoeba</td><td>Simple Sentences</td><td>0.3k</td><td>2.1k</td><td>1.6k</td></tr><tr><td>Tanzil</td><td>Quran Data</td><td>93.5k</td><td>2.8M</td><td>7.0M</td></tr><tr><td>QED</td><td>Subtitles of Educational Videos</td><td>0.7k</td><td>1.0M</td><td>0.5M</td></tr></table>"
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Sample data for cEnTam", |
|
"num": null, |
|
"content": "<table><tr><td>English</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Details of cEnTam Corpus.", |
|
"num": null, |
|
"content": "<table><tr><td>Corpus Type</td><td>English (#. of sentences)</td><td>Tamil (#. of sentences)</td></tr><tr><td>Monolingual</td><td>457396</td><td>563568</td></tr><tr><td>Bilingual</td><td>56495</td><td>56495</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Accuracy of the nearest-neighbour analysis of the word translation task using various window sizes in different corpora. The value represents the relative frequency of finding the target translation for a source word amongst the paired sentences, expressed as the number of target words per 100 source words.",
|
"num": null, |
|
"content": "<table><tr><td/><td/><td>Window size</td><td/></tr><tr><td>Corpora</td><td colspan=\"3\">(Number of target words / 100 source words)</td></tr><tr><td/><td>K=1</td><td>K=5</td><td>K=10</td></tr><tr><td>EnTam</td><td>11.83</td><td>18.58</td><td>21.7</td></tr><tr><td colspan=\"2\">Open subtitles 11.61</td><td>18.37</td><td>20.53</td></tr><tr><td colspan=\"2\">OPUS website 4.91</td><td>7.06</td><td>7.8</td></tr><tr><td>Tanzil</td><td>0.47</td><td>0.95</td><td>1.05</td></tr><tr><td>QED</td><td>0.06</td><td>0.13</td><td>0.15</td></tr><tr><td>cEnTam</td><td>27.08</td><td>35.15</td><td>39.36</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Average cosine sentence similarity of various corpora. A higher average and a lower deviation of cosine similarity between sentences indicate coherence of the corpus.",
|
"num": null, |
|
"content": "<table><tr><td>Corpora</td><td colspan=\"2\">Avg. Cosine Similarity Std.Dev</td></tr><tr><td>EnTam</td><td>0.12</td><td>0.09</td></tr><tr><td>Open subtitles</td><td>0.06</td><td>0.07</td></tr><tr><td>OPUS website</td><td>0.07</td><td>0.10</td></tr><tr><td>Tanzil</td><td>0.03</td><td>0.13</td></tr><tr><td>QED</td><td>0.04</td><td>0.21</td></tr><tr><td>cEnTam</td><td>0.32</td><td>0.04</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "Results of Neural Machine Translation system performance with EnTam and cEnTam corpora", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"3\">Corpora BLEU RIBES</td></tr><tr><td>EnTam</td><td>0.12</td><td>0.52</td></tr><tr><td>cEnTam</td><td>0.39</td><td>0.74</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |