|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:08:06.181490Z" |
|
}, |
|
"title": "Benchmarking Multidomain English-Indonesian Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Tri", |
|
"middle": [ |
|
"Wahyu" |
|
], |
|
"last": "Guntara", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Indonesia", |
|
"location": { |
|
"addrLine": "Kampus UI Depok", |
|
"country": "Indonesia" |
|
} |
|
}, |
|
"email": "guntara@kata.ai" |
|
}, |
|
{ |
|
"first": "Alham", |
|
"middle": [], |
|
"last": "Fikri", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Edinburgh", |
|
"location": { |
|
"country": "Scotland" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Radityo", |
|
"middle": [ |
|
"Eko" |
|
], |
|
"last": "Prasojo", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Free University of Bolzano", |
|
"location": { |
|
"addrLine": "Piazza Domenicani 3" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In the context of Machine Translation (MT) from-and-to English, Bahasa Indonesia has been considered a low-resource language, and therefore applying Neural Machine Translation (NMT) which typically requires large training dataset proves to be problematic. In this paper, we show otherwise by collecting large, publicly-available datasets from the Web, which we split into several domains: news, religion, general, and conversation, to train and benchmark some variants of transformer-based NMT models across the domains. We show using BLEU that our models perform well across them and perform comparably with Google Translate. Our datasets (with the standard split for training, validation, and testing), code, and models are available on https://github.com/gunnxx/indonesian-mt-data.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In the context of Machine Translation (MT) from-and-to English, Bahasa Indonesia has been considered a low-resource language, and therefore applying Neural Machine Translation (NMT) which typically requires large training dataset proves to be problematic. In this paper, we show otherwise by collecting large, publicly-available datasets from the Web, which we split into several domains: news, religion, general, and conversation, to train and benchmark some variants of transformer-based NMT models across the domains. We show using BLEU that our models perform well across them and perform comparably with Google Translate. Our datasets (with the standard split for training, validation, and testing), code, and models are available on https://github.com/gunnxx/indonesian-mt-data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "With approximately 200 million active speakers, Indonesian (Bahasa Indonesia) is the 10th most spoken language in the world (Eberhard et al., 2019 ). Yet, it is still considered to be one of the under-developed languages. Research in Indonesian Natural Language Processing (NLP) in general has suffered from a lack of open data, standardized benchmark, and reproducible code. Recent work in English-Indonesian (En-Id) machine translation (MT), in particular, has either used (1) closed data (Shahih and Purwarianti, 2016; Octoviani et al., 2019) or (2) open data with unpublished split for training, validation, and testing (Hermanto et al., 2015) . Also, mostly only rule-based approaches or Statistical Machine Translation (SMT) were applied (Shahih and Purwarianti, 2016; Octoviani et al., 2019) , whereas newer techniques such as Neural Machine Translation (NMT) based on the state-of-the-art Transformer architecture (Vaswani et al., 2017) , which has been shown to outperform previous architectures such as the Recurrent Neural Network (RNN) in terms of training time and translation accuracy, has not been utilized. Hermanto et al. (2015) trained an RNN En-Id translation model. However, their model was trained only on a small amount of data with less than 24,000 parallel sentences. Furthermore, all these approaches have been evaluated using different datasets, and so it is unclear how well they perform in comparison to each other. With the rise of the data-hungry NMT, effort such as the OPUS data portal (Tiedemann, 2012) , OpenSubtitles (Lison et al., 2018) , and Wikimatrix (Schwenk et al., 2019) , has been made to publish more and more parallel data, including English-Indonesian to the number of millions of pairs. However, to the best of our knowledge, there has been no published work that utilizes the data for English-Indonesian machine translation. Therefore, in this particular context, it is currently unclear how useful the data is. 
Bahasa Indonesia is a standardized register of Malay and is adopted as the country's national language to unify the archipelago with more than 700 indigenous local languages (Riza, 2008) . Consequently, the daily-spoken col-loquial Indonesian is vastly different from the standardized form due to the influences of the local language and, additionally, some popular foreign languages, such as English or Arabic. This phenomenon affects certain domains, such as the conversational domain where the colloquial Indonesian is typically used more, or the religion domain where Arabic words or phrases are sometimes used \"as is\" instead of being translated. Recent En-Id MT approaches have not yet considered different domains in Bahasa Indonesia (Shahih and Purwarianti, 2016; Octoviani et al., 2019) and instead have focused more on the news domain, which mostly used the standardized Indonesian (Hermanto et al., 2015) . In this work, our goal is to address the above problems by proposing several contributions as follow:", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 146, |
|
"text": "(Eberhard et al., 2019", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 491, |
|
"end": 521, |
|
"text": "(Shahih and Purwarianti, 2016;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 522, |
|
"end": 548, |
|
"text": "Octoviani et al., 2019) or", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 624, |
|
"end": 647, |
|
"text": "(Hermanto et al., 2015)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 744, |
|
"end": 774, |
|
"text": "(Shahih and Purwarianti, 2016;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 775, |
|
"end": 798, |
|
"text": "Octoviani et al., 2019)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 922, |
|
"end": 944, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 1123, |
|
"end": 1145, |
|
"text": "Hermanto et al. (2015)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1518, |
|
"end": 1535, |
|
"text": "(Tiedemann, 2012)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 1552, |
|
"end": 1572, |
|
"text": "(Lison et al., 2018)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1590, |
|
"end": 1612, |
|
"text": "(Schwenk et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 2134, |
|
"end": 2146, |
|
"text": "(Riza, 2008)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 2701, |
|
"end": 2731, |
|
"text": "(Shahih and Purwarianti, 2016;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 2732, |
|
"end": 2755, |
|
"text": "Octoviani et al., 2019)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 2852, |
|
"end": 2875, |
|
"text": "(Hermanto et al., 2015)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "1. We collect scattered English-Indonesian parallel data available on the Web and divide them into several domains: news, religion, general, and conversation. 2. We introduce new datasets for news and conversation domains by aligning parallel articles and video captions. 3. For each domain, we set a standard data split for training, development, and testing. We further analyze the quality and characteristics of each dataset and each domain. 4. We train several transformer-based NMT models. We perform cross-domain testing to gain some insight into model robustness under domain changes. We conduct a manual evaluation of a sample of our data to assess the relative quality of our translation models further. We compare our results with Google Translate as the state-of-the-art translation tool.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The rest of the paper is structured as follow: Section 2 discusses the related work, which consists of parallel corpus collection and some En-Id MT approaches. Section 3 discusses the datasets that we use for training and testing. Section 4 describes the state-of-the-art and baseline MT methods that we use in our benchmark. Section 5 details our experiment settings and results, as well as discusses our findings and insights from the results. Finally, Section 6 concludes the paper and outlines some future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The OPUS data portal (Tiedemann, 2012) With over 9 million pairs, the OpenSubtitles dataset (Lison et al., 2018) represents around 80% of the En-Id sentence pairs in OPUS. The dataset is collected from the opensubtitles website. 2 Sentence pairs are extracted from two subtitles of different languages via time-slot alignment. Sometimes, there are time-slot mismatches because the subtitles are created using different sources of video with different play speeds and cut-off points. To combat the mismatches, two anchor points are selected as references to trim and to \"stretch in/out\" the other timestamps (Tiedemann, 2008) . Although OPUS is an open platform to publish parallel data, some dataset is not integrated in OPUS yet. Wikimatrix (Schwenk et al., 2019) collects 135 millions parallel sentences from Wikipedia across 85 languages. Multilingual sentence alignment of Wikipedia pages is done by leveraging LASER (Artetxe and Schwenk, 2019b), a massively multilingual sentence embeddings of 93 languages trained on a subset of OPUS. Using LASER, each sentence pair x and y of two different languages is scored using a margin formula that is a ratio of their cosine similarity and the average cosine of their k nearest neighbors, as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 38, |
|
"text": "(Tiedemann, 2012)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 92, |
|
"end": 112, |
|
"text": "(Lison et al., 2018)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 607, |
|
"end": 624, |
|
"text": "(Tiedemann, 2008)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 742, |
|
"end": 764, |
|
"text": "(Schwenk et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "margin(x, y) = cos(x, y) z\u2208NN k (x) cos(x, z) 2k + z\u2208NN k (y) cos(y, z) 2k", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "A margin threshold is applied to decide whether x and y are mutual translations or not. It has been shown to be more consistent than the standard cosine similarity in determining correct translation pairs (Artetxe and Schwenk, 2019a) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 233, |
|
"text": "(Artetxe and Schwenk, 2019a)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
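The margin criterion above can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not the WikiMatrix code: the embeddings are toy 2-D unit vectors, and `margin_score`, `nn_x`, and `nn_y` are names introduced here for the example.

```python
import numpy as np

def margin_score(x, y, nn_x, nn_y, k):
    """Margin scoring in the WikiMatrix style (sketch).

    x, y       : unit-normalized sentence embeddings (1-D arrays)
    nn_x, nn_y : 2-D arrays holding the k nearest-neighbor embeddings
                 of x and y in the *other* language
    """
    cos_xy = float(np.dot(x, y))
    # average cosine over the k nearest neighbors of each side
    denom = (np.dot(nn_x, x).sum() / (2 * k)
             + np.dot(nn_y, y).sum() / (2 * k))
    return cos_xy / denom

# Toy 2-D unit vectors; the numbers are illustrative only.
x = np.array([1.0, 0.0])
y = np.array([0.8, 0.6])                   # cos(x, y) = 0.8
nn_x = np.array([[0.6, 0.8], [0.0, 1.0]])  # cos(x, .) = 0.6, 0.0
nn_y = np.array([[0.6, 0.8], [1.0, 0.0]])  # cos(y, .) = 0.96, 0.8
k = 2
score = margin_score(x, y, nn_x, nn_y, k)
# denom = 0.6/4 + 1.76/4 = 0.59, so score = 0.8/0.59, above the 1.03
# threshold the paper uses, so this pair would be accepted.
```

A pair is kept as a mutual translation when its score exceeds the chosen margin threshold (1.03 in the paper's WikiMatrix extraction).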
|
{ |
|
"text": "Using this approach, Wikimatrix obtains at least 1 million En-Id sentences, depending on the threshold used. Nevertheless, the data collected above has not yet been explored to build an English-Indonesian machine translation model. As English-Indonesian parallel data was considered to be low-resourced, attempts on data-driven machine translation are mostly a statistical-and-rule-based hybrid approach. Several examples include a general hybrid MT system where a rule-based morphological analysis is applied to generate an intermediate translation result which is then refined using an SMT model (Yulianti et al., 2011) , a hybrid approach that analyzes Indonesian cliticization (Larasati, 2012a) and utterance disfluency (Shahih and Purwarianti, 2016 ) as a preprocessing step before feeding the training data into an SMT tool. Moving on from SMTs, Octoviani, et al. (2019) developed a neuralnetwork-and-rule-based hybrid approach for phrase-based English-Indonesian Machine Translation. An RNN model is trained to classify the input phrase into a type. Then, a rule-based approach is applied for each phrase type to output the final translation. The approach was evaluated over a dataset of 70 pairs of phrases. Lastly, Hermanto et al.'s work (2015) , which uses RNN, is the only work that we found within the topic of En-Id MT that utilizes NMT. They use the Pan Asia Networking Localization (PANL) dataset 3 , which contains about 24,000 pairs of sentences, as their train and test data. Due to the lack of distributed code from the previous work, we were not able to use them as our baselines. Instead, we use some variants of transformer-based models for our benchmark, which we will explain in details in Section 4.", |
|
"cite_spans": [ |
|
{ |
|
"start": 598, |
|
"end": 621, |
|
"text": "(Yulianti et al., 2011)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 681, |
|
"end": 698, |
|
"text": "(Larasati, 2012a)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 724, |
|
"end": 753, |
|
"text": "(Shahih and Purwarianti, 2016", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 852, |
|
"end": 876, |
|
"text": "Octoviani, et al. (2019)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1224, |
|
"end": 1253, |
|
"text": "Hermanto et al.'s work (2015)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "We collect data from OPUS (Tiedemann, 2012) which contains Open Subtitles (Lison et al., 2018) among other smaller datasets. Tanzil 4 and Bible-Uedin (Christodouloupoulos and Steedman, 2015) stores parallel Quran and Bible translations, respectively, while JW300 (Agi\u0107 and Vuli\u0107, 2019) collects parallel sentences of Jehovah's Witness religious scripture and articles. Tatoeba 5 is a small database of sentences and translations in a general domain. GlobalVoices dataset 6 is a namesake of a multilingual news website, 7 from which its parallel sentences were crawled. Finally, GNOME 8 , Ubuntu 9 , and KDE4 10 datasets contain parallel software strings taken from their respective localization files. We run the WikiMatrix (Schwenk et al., 2019) script to extract 1.8 million En-Id parallel sentences using a margin threshold value of 1.03 to obtain high-quality pairs in maximum number, as suggested in the paper. Other than OPUS and WikiMatrix, we find more, smaller datasets from the Web. The PANL dataset contains around 24,000 pairs of sentences manually aligned from news articles. IDENTIC (Larasati, 2012b ) is a morphologically-enriched multidomain-dataset that combines the PANL dataset, a subset of Open Subtitles, and 164 manually-aligned sentences from BBC news articles. The Desmond86 dataset 11 contains parallel sentences obtained from BBC (news), Our Daily Bread (ODB) 12 (religion), SMERU 13 (research article), and AusAid 14 (humanitarian report). The Web Inventory of Transcribed and Translated Talks (WIT) (Cettolo et al., 2012) 15 released an extra dataset for the 2017 edition of International Workshop on Spoken Language Translation (IWSLT) 16 , which also contains En-Id pairs extracted from TED talk videos. TALPCo contains highquality pairs of short sentences originally translated from Japanese (Nomoto et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 43, |
|
"text": "(Tiedemann, 2012)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 74, |
|
"end": 94, |
|
"text": "(Lison et al., 2018)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 150, |
|
"end": 190, |
|
"text": "(Christodouloupoulos and Steedman, 2015)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 285, |
|
"text": "(Agi\u0107 and Vuli\u0107, 2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 724, |
|
"end": 746, |
|
"text": "(Schwenk et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 1097, |
|
"end": 1113, |
|
"text": "(Larasati, 2012b", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1665, |
|
"end": 1667, |
|
"text": "16", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1823, |
|
"end": 1844, |
|
"text": "(Nomoto et al., 2018)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Existing Datasets", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "3.2.1. Bilingual BBC and BeritaJakarta (Mitra et al., 2017) We use an earlier version of berita2bahasa.com crawler (Mitra, Sujiani and Negara, 2017) to crawl bilingual BBC 17 and bilingual BeritaJakarta 18 to extract parallel En-Id articles. 19 Each news article in the Bilingual BBC dataset is already paired and properly sentence-split. We observe that the translation style in this dataset is mostly one-to-one at the sentence level, meaning that most sentences are already paired. Although this results in less fluent translations in some cases, we have a straightforward sentence alignment with very few manual adjustments needed. On the other hand, the Bilingual BeritaJakarta dataset is not yet aligned on the article-level. The Indonesian corpora contain 4000 timestamped articles, whereas the English contained 3000 articles. As the dataset was collected into a single clean text file, most of the article fingerprints are lost, and therefore using tools which rely on file fingerprints such as Bitextor (Espl\u00e1-Gomis and Forcada, 2009) is not feasible. We employ a timestamp-based alignment algorithm to find article pairs. First, for each language, articles published on the same date are grouped together. Then, two articles are paired following the order of publishing time, i.e., the first published article in Indonesian on a certain day is paired with the first published article in English on the same day, then the second article, then the third, etc. Mispairings are manually checked and fixed based on the titles. Then, we sentence-split the articles using NLTK (Loper and Bird, 2002) . To ensure high-quality 11 https://github.com/desmond86/Indonesian-English-Bilingual-Corpus. Sentence alignment was manually done, which was confirmed by the dataset owner via private messages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 59, |
|
"text": "(Mitra et al., 2017)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 115, |
|
"end": 148, |
|
"text": "(Mitra, Sujiani and Negara, 2017)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 244, |
|
"text": "19", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1581, |
|
"end": 1603, |
|
"text": "(Loper and Bird, 2002)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "New Datasets", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "pairs, sentence alignment is performed manually.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "New Datasets", |
|
"sec_num": "3.2." |
|
}, |
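The timestamp-based article pairing described above can be sketched as follows. `pair_articles` is a hypothetical helper introduced for illustration: it works on (timestamp, title) tuples rather than full articles, and the actual crawler may differ in details.

```python
from collections import defaultdict
from datetime import datetime

def pair_articles(id_articles, en_articles):
    """Pair articles across languages by publishing date and order.

    Each article is a (datetime, title) tuple. Articles published on
    the same date are grouped, sorted by publishing time, and paired
    positionally: first with first, second with second, and so on.
    Mispairings still need a manual check against the titles.
    """
    by_date_id = defaultdict(list)
    by_date_en = defaultdict(list)
    for ts, title in id_articles:
        by_date_id[ts.date()].append((ts, title))
    for ts, title in en_articles:
        by_date_en[ts.date()].append((ts, title))

    pairs = []
    for date in sorted(set(by_date_id) & set(by_date_en)):
        ids = sorted(by_date_id[date])
        ens = sorted(by_date_en[date])
        # unmatched tail articles on either side are left unpaired
        pairs.extend(
            (i_title, e_title)
            for (_, i_title), (_, e_title) in zip(ids, ens)
        )
    return pairs

# Toy example: two articles per language on the same day.
id_arts = [(datetime(2020, 1, 1, 8), "Berita A"),
           (datetime(2020, 1, 1, 10), "Berita B")]
en_arts = [(datetime(2020, 1, 1, 9), "News A"),
           (datetime(2020, 1, 1, 11), "News B")]
pairs = pair_articles(id_arts, en_arts)
```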
|
{ |
|
"text": "Sunan Ibn Majah is a major hadith 20 collection and has been translated into several languages. We crawled http: //carihadis.com/ 21 for the Indonesian translation and https: //www.islamicfinder.org/ 22 for the English one. However, the Indonesian source uses an older version of Ibn Majah, and therefore uses different hadith indexes, which makes an automated alignment problematic. Therefore, we perform manual alignment instead.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ibn Majah Parallel Translation", |
|
"sec_num": "3.2.2." |
|
}, |
|
{ |
|
"text": "We extract YouTube videos whose captions are available in both English and Indonesian from several channels e.g., TED, TEDx, Khan Academy, Kobasolo, Raditya Dika, and Londokampung. Channels selected are based on our manual observation, that is, whether they contain a good portion of videos having both English and Indonesian captions. The Indonesian captions are transcribed directly, whereas the English captions are translated by their fans. A YouTube caption comes in a series of chunks where each chunk contains the text, the start time, and the duration of that particular chunk. The captions are not well-aligned since the length of parallel sentences in Indonesian and English differ, and only a small part of them can fit into the screen. But, unlike Open Subtitles, all pairs of captions on YouTube follow the same video source; thus, no timestamp stretch or cut-off is necessary. Alignment is done using a greedy algorithm. First, chunks without timestamp intersection in the other language are discarded. Then, starting from the first pair of chunks, we compute how much time they overlap with each other. For instance, if an Id chunk starts from 0:00 and ends at 0:03, while an En chunk starts from 0:01 and ends at 0:04, then altogether they span 4 seconds but they occur at the same time for only 2 seconds. We say that they are together 2/4 = 50% of the time. We call this measure as the intersection of union (IoU) ratio. We say that a pair of chunks are aligned if their IoU ratio falls above a certain threshold. If a pair of chunks do not satisfy the threshold, then the next chunk is appended to the shorter one among the pair, until the threshold is reached. We experimented with various threshold values on a small, randomly selected and manually annotated data, and found that 0.8 is a good threshold for aligning the chunks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Youtube Parallel Caption", |
|
"sec_num": "3.2.3." |
|
}, |
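The greedy IoU alignment can be sketched as below. `iou` and `align_chunks` are names introduced for illustration; the actual implementation may differ in details such as how ties and trailing chunks are handled.

```python
def iou(a_start, a_end, b_start, b_end):
    """Intersection-over-union of two time spans, in seconds."""
    inter = max(0.0, min(a_end, b_end) - max(a_start, b_start))
    union = max(a_end, b_end) - min(a_start, b_start)
    return inter / union if union > 0 else 0.0

def align_chunks(en, id_, threshold=0.8):
    """Greedily align caption chunks of two languages.

    en, id_ : lists of (start, end, text) tuples, sorted by start time.
    While the IoU of the two current spans is below the threshold, the
    next chunk is appended to the shorter side; a pair is emitted once
    the threshold is reached.
    """
    pairs = []
    i = j = 0
    while i < len(en) and j < len(id_):
        e_s, e_e, e_t = en[i]
        i_s, i_e, i_t = id_[j]
        while iou(e_s, e_e, i_s, i_e) < threshold:
            # grow the shorter span by appending the next chunk
            if (e_e - e_s) <= (i_e - i_s) and i + 1 < len(en):
                i += 1
                e_e, e_t = en[i][1], e_t + " " + en[i][2]
            elif j + 1 < len(id_):
                j += 1
                i_e, i_t = id_[j][1], i_t + " " + id_[j][2]
            else:
                break
        if iou(e_s, e_e, i_s, i_e) >= threshold:
            pairs.append((e_t, i_t))
        i += 1
        j += 1
    return pairs

# Toy example: two chunk pairs with identical timing align one-to-one.
en = [(0.0, 3.0, "hello"), (3.0, 6.0, "world")]
id_ = [(0.0, 3.0, "halo"), (3.0, 6.0, "dunia")]
pairs = align_chunks(en, id_)
```

The worked example from the text holds here as well: spans 0:00-0:03 and 0:01-0:04 overlap for 2 of their combined 4 seconds, giving an IoU of 0.5, below the 0.8 threshold.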
|
{ |
|
"text": "We analyze the collected datasets for their quality and their domain characteristics. We quantitatively explore the datasets, as shown in Table 2 . We mainly assess their quality based on their sentence lengths, unique tokens, noise, and completeness of sentences. We find that most of them are good quality. However, we find some other to be lacking, and decide to drop them. That is, they are not included in our benchmark. : Exploratory data analysis of all datasets. Abbr. denotes the abbreviation of the corpus names. |X| denotes the unique count of a set X, whereas Y denotes the average of bag of values Y . len ratio denotes the absolute ratio between the sentence length of the two languages, En and Id. The absolute ratio between two arbitrary numbers x, y is max(x/y, y/x).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 138, |
|
"end": 145, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset Analysis", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "Bold items indicate new datasets. datasets that are dropped, \u2202 datasetes that are partially used, and * datasets with known problems but are used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Analysis", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "The Ubuntu and KDE4 datasets are taken from their respective software localization resources, and so we consider them to represent the tech domain. The majority of their \"sentences\" are short, incomplete, and noisy. For example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Analysis", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "\u2022 En: \"%s: access ACL '%s': %s at entry %d\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Analysis", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "\u2022 Id: \"%s: akses ACL '%s': %s at masukan %d\" Therefore, the data as it is right now would not be very useful, and further refinement and filtering are necessary. The GNOME dataset, the third representative of the tech domain, unlike the other two, has higher-quality pairs. However, we could not find any other dataset within the same domain, so we decide to drop the tech domain altogether. 23 The Ibn Majah dataset contains sentences that are too long and need to be split, which is difficult due to inconsistent usage of splitting punctuations (commas, periods, colons, and semicolons) in the corpus. We decide to drop this dataset in our benchmark. The Desmond dataset contains a few numbers of pairs in the domain of Science, which are dropped. Lastly, the IDENTIC dataset has some intersection with the PANL and Open Subtitle datasets. Therefore we only consider the non-intersecting sentences. After filtering out low-quality and redundant data, we combine the datasets falling under the same domain. News domain consists of news articles. Religious domain consists of religious manuscripts or articles. These articles are different from news as they are not in a formal, informative style. Instead, they are written to advocate and inspire religious values, often times citing biblical or quranic anecdotes. Next, we combine all datasets that come from human 23 Experimentally, this is to avoid overfitting our model if it is trained on the tech domain with only one dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 392, |
|
"end": 394, |
|
"text": "23", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Analysis", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "speech (movie, talk, and lecture) into the conversation domain. Lastly, we merge datasets that cover broad topics into the general domain. Then, for each domain, we split it into a train, validation, and test data. The result is shown in Table 3 . Table 3 : Data split and n-gram similarity between validation and training data for each domain.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 245, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 248, |
|
"end": 255, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset Analysis", |
|
"sec_num": "3.3." |
|
}, |
|
{ |
|
"text": "For news and religion domain, we choose an exclusive corpus:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 BBC-BJ for news, and \u2022 Desmond ODB (Our Daily Bread, the religion part of Desmond dataset) for religion, to be our validation and test data because (1) they are manually curated and of high-quality, (2) they are much smaller than the rest of training data and therefore do not sacrifice too much portion of data that could have been for training instead, and (3) they have similar sentence length compared to the training data. There is no such corpus for the conversation domain and the general domain. The datasets in the conversation domain are all automatically aligned and therefore are noisy. For the general domain, both Tatoeba and TALPCo are manually curated, but their sentences (especially Tatoeba) are very short compared to Wikimatrix. Therefore, for these two datasets, we do a random split involving all datasets in the domain for validation and testing, each having 2000 unique pairs not present in the training set. For the general domain, we mix shorter sentences from TALPCo and the longer ones from Wikimatrix as our validation and test data. We observe that Tatoeba has similar types of high-quality sentences like TALPCo has, albeit shorter. Therefore we choose TALPCo to be in the validation and test sets instead, because longer sentences mean more difficult and meaningful evaluation. To see the difference between these two split settings, we compute the rate of phrases (in terms of n-grams) that appear in validation set sentences that also appear in the training set sentences. Figure 1 shows this computation for 3 \u2264 n \u2264 8 for each domain. It shows that domains without an exclusive corpus for the validation set has a higher ngram intersections between the validation set and the training set, which means that a model trained on the domain might be overfitted for the dataset and it might prove difficult to see how such a model generalizes to unseen dataset within the same domain. 
To further emphasize this point, we tried to built another split for the religion domain without the Desmond dataset, that is, the split involves all the other three datasets: Tanzil, Bible, and JW300. The result is that the validation and test sets share significantly more n-grams. We further compute a weighted average of the occurrence ratios across ns, that is", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1509, |
|
"end": 1517, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "nsim(V, T ) = 8 n=3 n \u00d7 100 c(n\u2212gram in V appearing in T ) c(n\u2212gram in V ) b n=a n", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where c is a counting function, V is the validation set, and T is the training set. The results of the weighted average of each domain is shown in Table 3 , where the conversation domain is shown to have the highest nsim(V, T ) of 18.5. In the next subsections, we discuss some special characteristics of each domain.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 154, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3.3.1. News Some sentence pairs in the news domain suffer from the inter-sentence context-preservation issue. For instance, we sometimes find that a single sentence is aligned to two (usually shorter) sentences in the other language in order to capture the whole context of the single sentence. Another observation is the usage of pronouns, which loses context whenever the article is split into sentences and then paired. For example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 En: The firm says the posts will go around ...", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Id: Sony mengatakan PHK karyawan dilakukan ... In this example, \"Sony\" as an entity is described as \"The firm\". Readers should understand the connection if presented with the whole article, but not as independent sentences. Some sentences are appended with extra information to help the readers understand the news better based on their local knowledge. One of the most common examples is a converted currency, as shown in the example below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 \"Kalau jauh misalnya di Indramayu, bisa 2,5 juta -3 juta Rupiah.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 \"If it is far, in Indramayu for instance, it could be around 2,5 -3 million Rupiah ($250 -$300).\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Specifically, in Global Voices, we find translated tweets or Instagram posts, as this news site often include people's reaction on social media in their articles. This part of the text is out-of-domain within the context of news. Furthermore, we find inconsistency in translating or copying the tweet's usernames or tags.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The Tanzil dataset is a Quran translation dataset which has a relatively-imbalanced sentence length between the two languages, evidenced in Table 2 , where an average Indonesian sentence in this dataset is about 50% longer than an average English one. Furthermore, an average pair of sentences in this dataset would, on average, have one of them twice as long as the other. However, we still decide to include the dataset in the domain to avoid overfitting because the remaining datasets are all about Christianity. Another interesting property in the religion domain corpus is the localized names, for example, David to Daud, Mary to Maryam, Gabriel to Jibril, and more. In contrast, entity names are usually kept unchanged in other domains. We also find quite a handful of Indonesian translations of JW300 are missing the end sentence dot (.), even though the end sentence dot is present in their English counterpart. Lastly, we also find some inconsistency in the transliteration, for example praying is sometimes written as \"salat\" or \"shalat\", or repentance as \"tobat\" or \"taubat\".", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 147, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Religion", |
|
"sec_num": "3.3.2." |
|
}, |
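The length imbalance described above can be quantified with a simple pass over the sentence pairs. The helper below is a sketch under assumed tokenized input; the exact statistics in Table 2 may have been computed differently.

```python
def length_ratio_stats(pairs):
    """Given (en_tokens, id_tokens) pairs, return the mean Id/En length
    ratio and the fraction of pairs where one side is at least twice
    as long as the other."""
    ratios = [len(id_s) / len(en_s) for en_s, id_s in pairs]
    imbalanced = sum(
        1 for en_s, id_s in pairs
        if max(len(en_s), len(id_s)) >= 2 * min(len(en_s), len(id_s))
    )
    return sum(ratios) / len(ratios), imbalanced / len(pairs)

pairs = [
    (["a", "b"], ["x", "y", "z"]),       # Id side 1.5x longer
    (["a", "b"], ["x", "y", "z", "w"]),  # Id side 2x longer: imbalanced
]
mean_ratio, frac_imbalanced = length_ratio_stats(pairs)
print(mean_ratio, frac_imbalanced)  # → 1.75 0.5
```

For Tanzil, the mean ratio would come out around 1.5 (Indonesian ~50% longer) with a high fraction of 2x-imbalanced pairs, matching the description above.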
|
{ |
|
"text": "The Tatoeba dataset contains short sentences. However, they contain high-quality full-sentence pairs with precise translation and is widely used in previous work in other languages (Artetxe and Schwenk, 2019b) . Due to its simplicity, we do not use Tatoeba as our test and validation sets. We find that the Wikipedia scraper for Wikimatrix is faulty in some cases, causing some noise coming from unfiltered markup tags.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 209, |
|
"text": "(Artetxe and Schwenk, 2019b)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General", |
|
"sec_num": "3.3.3." |
|
}, |
|
{ |
|
"text": "Our conversational domain corpus is translated from English. Hence the Indonesian sentences are written in formal language. In practice, Indonesian used informal language in speech, most of the time. In addition, we also used informal language in a conversational situation such as in social media or text messages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conversation", |
|
"sec_num": "3.3.4." |
|
}, |
|
{ |
|
"text": "Transformer based model (Vaswani et al., 2017) is the current state-of-the-art for neural machine translation . Therefore we adopt the standard Transformerbase encoder-decoder model as one of our baseline models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 46, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transformer-based Machine Translation", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "Generative pretraining has been proved to be effective in improving sentence encoders on downstream tasks. We use two language modeling objectives, Masked Language Modeling (MLM) to leverage our vastly available monolingual corpora and Translation Language Modeling (TLM) to make the network learns alignment between languages better. (Devlin et al., 2018; Radford et al., 2018; Lample and Conneau, 2019) Although both MLM and TLM objectives can be extended to multiple languages, we only pretrain the base Transformer using Indonesian and English dataset since the network itself will only be used on tasks involving Indonesian and English languages. For the MLM objective, the Indonesian monolingual dataset was collected from Leipzig corpora (Goldhahn et al., 2012) , and the English monolingual dataset was collected from WMT'07 and WMT'08. 24 Both datasets come from the news domain and are truncated at 4.8M sentences because of GPU resource limitation. For the TLM objective, Tatoeba and PANL datasets are used.", |
|
"cite_spans": [ |
|
{ |
|
"start": 335, |
|
"end": 356, |
|
"text": "(Devlin et al., 2018;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 378, |
|
"text": "Radford et al., 2018;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 404, |
|
"text": "Lample and Conneau, 2019)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 745, |
|
"end": 768, |
|
"text": "(Goldhahn et al., 2012)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 845, |
|
"end": 847, |
|
"text": "24", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language-Model Pretraining", |
|
"sec_num": "4.2." |
|
}, |
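The MLM objective described above relies on BERT-style token masking. The sketch below illustrates the standard 80/10/10 masking recipe from Devlin et al. (2018); it is a simplified, assumption-laden illustration, not the XLM toolkit's actual implementation (which operates on subword IDs and batches).

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, vocab, p=0.15, seed=0):
    """BERT-style masking for the MLM objective: each token is selected
    with probability p; a selected token is replaced by [MASK] 80% of the
    time, by a random vocabulary token 10%, and kept unchanged 10%.
    Returns (inputs, labels); labels are None where no prediction is made.
    """
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < p:
            labels.append(tok)  # the model must predict the original token
            r = rng.random()
            if r < 0.8:
                inputs.append(MASK)
            elif r < 0.9:
                inputs.append(rng.choice(vocab))
            else:
                inputs.append(tok)
        else:
            inputs.append(tok)
            labels.append(None)
    return inputs, labels

toks = "the cat sat on the mat".split()
inputs, labels = mask_tokens(toks, vocab=["dog", "ran"], p=0.5)
```

TLM works the same way but masks tokens in a concatenated parallel sentence pair, so the model can use the other language's context to recover the masked tokens.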
|
{ |
|
"text": "Google Translate is arguably one of the best public translation services available. However, benchmarking with Google Translate is tricky: Their model is regularly updated. Hence the result is not reproducible. We also cannot guarantee that our validation or test set is not present in their training data. However, we still argue that comparing our results with theirs is beneficial.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Google Translate", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "We run our Transformer experiment with XLM Toolkit on a single GPU. We use the Transformer base architecture, consisting of 6 encoder and decoder layers with 8 attention heads. The feed-forward unit-size is 2048, and the embedding size is 512. We increase the batch size from the default 32 to 160 to reduce the gradient noise (Wang et al., 2013; Smith et al., 2017) , which shown to improve the model's quality (Ott et al., 2018; Popel and Bojar, 2018; Aji and Heafield, 2019) . We use Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.0001, \u03b2 1 = 0.9, \u03b2 2 = 0.999.", |
|
"cite_spans": [ |
|
{ |
|
"start": 327, |
|
"end": 346, |
|
"text": "(Wang et al., 2013;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 366, |
|
"text": "Smith et al., 2017)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 430, |
|
"text": "(Ott et al., 2018;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 431, |
|
"end": 453, |
|
"text": "Popel and Bojar, 2018;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 454, |
|
"end": 477, |
|
"text": "Aji and Heafield, 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 502, |
|
"end": 523, |
|
"text": "(Kingma and Ba, 2014)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "5.1." |
|
}, |
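To make the optimizer settings above concrete, here is a single Adam update step (Kingma and Ba, 2014) with the paper's hyperparameters as defaults. This is an illustrative scalar sketch; in practice the XLM toolkit applies this update to all model parameters at once.

```python
def adam_step(param, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter.

    m, v are the first- and second-moment estimates; t is the 1-indexed
    step number, used for bias correction of the moving averages.
    """
    m = beta1 * m + (1 - beta1) * grad            # update first moment
    v = beta2 * v + (1 - beta2) * grad * grad     # update second moment
    m_hat = m / (1 - beta1 ** t)                  # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# First step: the bias-corrected update is roughly lr in magnitude.
p, m, v = adam_step(1.0, grad=0.5, m=0.0, v=0.0, t=1)
```

Larger batches (160 instead of 32) lower the variance of `grad`, which is the gradient-noise reduction the setup paragraph refers to.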
|
{ |
|
"text": "We train our language model with the same Toolkit. Performance is measured with a BLEU score (Papineni et al., 2002) by using sacreBLEU script (Post, 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 116, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 155, |
|
"text": "(Post, 2018)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "5.1." |
|
}, |
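For intuition about what the reported scores measure, the sketch below computes a smoothed sentence-level BLEU (Papineni et al., 2002) from scratch. It is illustrative only: the actual scores in this paper come from the sacreBLEU implementation, which handles tokenization, corpus-level aggregation, and smoothing differently.

```python
import math
from collections import Counter

def bleu(hyp, ref, max_n=4):
    """Smoothed BLEU for one tokenized hypothesis/reference pair.

    Combines modified n-gram precisions (n = 1..max_n) via a geometric
    mean, times a brevity penalty for hypotheses shorter than the
    reference. Zero precisions are floored to avoid log(0).
    """
    precisions = []
    for n in range(1, max_n + 1):
        h = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        r = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((h & r).values())          # clipped n-gram matches
        total = max(sum(h.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return 100 * bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

hyp = "the cat sat on the mat".split()
print(bleu(hyp, hyp))  # → 100.0 for a perfect match
```

In the experiments that follow, corpus-level BLEU plays this role across whole test sets, so domain mismatch shows up directly as lower n-gram overlap.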
|
{ |
|
"text": "We first benchmark the significance of language-model pretraining for the Transformer. For this purpose, we train both vanilla Transformer and Transformer with language model pretraining for our news and general domain dataset. From the result shown in Table 4 , we can see that the Transformer with language model pretraining outperforms its vanilla counterpart. We can also see that model trained in general domain outperforms model trained in news domain, therefore suggesting that a standard model with more data is better than a low-resource training with language model pretraining. For the next experiments, we will use a Transformer with a pretrained language model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 260, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model Evaluation", |
|
"sec_num": "5.2." |
|
}, |
|
{ |
|
"text": "We explore the performance when trained across different domains. Our results shown in Table 5 suggest that the model is overfitted towards its specific domain. Model trained with the news domain dataset performed worst due to lack of resource. By combining every dataset, we can see the best performance across every domain. This result is comparable with Google Translate. We picked our best model, which is trained in all training set and evaluate the BLEU on test sets, which can be seen in Table 6 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 94, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 495, |
|
"end": 502, |
|
"text": "Table 6", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Cross Domain Evaluation", |
|
"sec_num": "5.3." |
|
}, |
|
{ |
|
"text": "We do not have an annotated parallel corpus for English-Indonesian. Our corpus, including the valid and test set, are generated from the crawled data. We discussed previously in section 3. that the currently available dataset are not fully parallel. Therefore, measuring the quality with BLEU only might not be representative. For human evaluation, we select random sentences from each domain. We present three translations: Reference, Google Translate, and our output in random order to our human evaluators. We measure the quality in 2 scores:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "5.4." |
|
}, |
|
{ |
|
"text": "\u2022 Fluency (1-5): How fluent the translation is, regardless of the correctness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "5.4." |
|
}, |
|
{ |
|
"text": "\u2022 Adequacy (1-5): How correct is the translation, given the source.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "5.4." |
|
}, |
|
{ |
|
"text": "To ensure reliability of the scores, each and all sentences are assigned to 3 scorers. The final score is the averaged score across three evaluators, as shown in Table 7 . Because we have more than two annotators and the scores are ordinal, we use Spearman's \u03c1 to obtain a moderately-high average agreement between annotators of 0.53 for fluency and 0.56 for adequacy out of 240 sentences. Table 7 : Human evaluation score across different domains.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 169, |
|
"text": "Table 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 397, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "5.4." |
|
}, |
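The average pairwise agreement above can be computed as the mean of Spearman's ρ over all annotator pairs. The self-contained sketch below implements ρ from scratch (in practice scipy.stats.spearmanr would do the same); the example scores are hypothetical, not the study's data.

```python
from itertools import combinations

def rankdata(xs):
    """Ranks starting at 1; tied values share their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman's rho: the Pearson correlation of the rank vectors."""
    ra, rb = rankdata(a), rankdata(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def mean_pairwise_agreement(annotators):
    """Average rho over every pair of annotators' score vectors."""
    rhos = [spearman(a, b) for a, b in combinations(annotators, 2)]
    return sum(rhos) / len(rhos)

scores = [[5, 4, 3, 2], [5, 3, 4, 1], [4, 5, 3, 2]]  # hypothetical annotators
print(mean_pairwise_agreement(scores))
```

Spearman's ρ suits this setting because the 1-5 fluency and adequacy judgments are ordinal: only the ranking of sentences matters, not the absolute scale each annotator uses.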
|
{ |
|
"text": "The reference translation is the most fluent across every domain. This result is expected, as the reference is written by humans. Reference translation's adequacy scored equally on average, compared to the rest. Our reference is crawled; therefore, it contains several issues, as mentioned in section 3.3.. One main problem in reference translation is that they are translated with document level in mind, therefore reducing adequacy as encapsulated sentence-based translation. This is especially true in conversational, where the reference was translated from the whole session (i.e., talk, or vlog). One example can be seen below: Source \"-Nope, they're shutting us down.\" Ref \"-Tidak, misi ditunda.\" Ours \"-Tidak, mereka menutup kita\". Google Translate \"-Tidak, mereka menutup kita.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "5.4." |
|
}, |
|
{ |
|
"text": "The reference is literally translated as \"-No, mission postponed.\", which is not the correct translation of the source. However, the reference is in fact acceptable when given the whole document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "5.4." |
|
}, |
|
{ |
|
"text": "We showed that Bahasa Indonesia has improved from the preconception of being a low-resource language in the context of English MT. We have collected scattered English-Indonesian parallel data and introduced some new parallel datasets through automatic and manual alignments. Our collected datasets numbers in more than 10 million pairs of sentences. We evaluated and categorized those datasets into several domains: news, religion, general, and conversation. We created a standardized split for evaluation to open a pathway for objective evaluation for future En-Id MT research. Our Transformer-based baseline trained with mul-tidomain dataset produces a comparable quality compared to Google Translate and is robust against domain changes. However, we acknowledge that some improvements to our datasetes are necessary. Some important domains like news are still behind in terms of training data, and evidently, its BLEU score is still lacking compared to the general and conversational domain. Furthermore, our manual evaluation has shown that some of our datasets contain noise, especially in the conversation and general domain where the noisy data is still used in validation and testing. In the future, manual data filtering or cleansing on these datasets is important to ensure that we have a standard benchmark that is clean and unbiased.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "http://opus.nlpl.eu/ as of November 2019 2 https://www.opensubtitles.org", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://panl10n.net/english/OutputsIndonesia2.htm 4 http://tanzil.net/trans/ 5 https://tatoeba.org/ 6 http://casmacat.eu/corpus/global-voices.html 7 https://globalvoices.org/ 8 https://www.gnome.org/ 9 https://ubuntu.com/ 10 https://kde.org/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://odb.org/ 13 https://www.smeru.or.id/ 14 defunct and now replaced by the Australian Aid 15 https://wit3.fbk.eu/ 16 http://workshop2017.iwslt.org/ 17 https://www.bbc.com/indonesia/topik/dwibahasa, 2013 18 beritajakarta.id, 2013 19 https://herrysujaini.blogspot.com/2013/04/kumpulan-monokorpus-bahasa-indonesia.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.casmacat.eu/corpus/news-commentary.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Jw300: A wide-coverage parallel corpus for low-resource languages", |
|
"authors": [ |
|
{ |
|
"first": "\u017d", |
|
"middle": [], |
|
"last": "Agi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Vuli\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3204--3210", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Agi\u0107,\u017d. and Vuli\u0107, I. (2019). Jw300: A wide-coverage parallel corpus for low-resource languages. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Making asynchronous stochastic gradient descent work for transformers", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Aji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Heafield", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "80--89", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aji, A. F. and Heafield, K. (2019). Making asynchronous stochastic gradient descent work for transformers. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 80-89, Hong Kong, November. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Margin-based parallel corpus mining with multilingual sentence embeddings", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3197--3203", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Artetxe, M. and Schwenk, H. (2019a). Margin-based par- allel corpus mining with multilingual sentence embed- dings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3197- 3203, Florence, Italy, July. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "597--610", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Artetxe, M. and Schwenk, H. (2019b). Massively multi- lingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Findings of the 2018 conference on machine translation (wmt18)", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Federmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Fishel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Graham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Huck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation (WMT)", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bojar, O., Federmann, C., Fishel, M., Graham, Y., Had- dow, B., Huck, M., Koehn, P., and Monz, C. (2018). Findings of the 2018 conference on machine transla- tion (wmt18). In Proceedings of the Third Conference on Machine Translation (WMT), Volume 2: Shared Task Papers, pages 272-307. Association for Computational Linguistics, 10.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Wit3: Web inventory of transcribed and translated talks", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Cettolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Girardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Conference of European Association for Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "261--268", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cettolo, M., Girardi, C., and Federico, M. (2012). Wit3: Web inventory of transcribed and translated talks. In Conference of European Association for Machine Trans- lation, pages 261-268.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A massively parallel corpus: the bible in 100 languages. Language resources and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Christodouloupoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Steedman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "49", |
|
"issue": "", |
|
"pages": "375--395", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christodouloupoulos, C. and Steedman, M. (2015). A massively parallel corpus: the bible in 100 languages. Language resources and evaluation, 49(2):375-395.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.-W", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional trans- formers for language understanding. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bitextor, a free/open-source software to harvest translation memories from multilingual websites", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Espl\u00e1-Gomis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Forcada", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of MT Summit XII", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Espl\u00e1-Gomis, M. and Forcada, M. L. (2009). Bitextor, a free/open-source software to harvest translation mem- ories from multilingual websites. Proceedings of MT Summit XII, Ottawa, Canada. Association for Machine Translation in the Americas.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Building large monolingual dictionaries at the leipzig corpora collection: From 100 to 200 languages", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Goldhahn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Eckart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Quasthoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "LREC", |
|
"volume": "29", |
|
"issue": "", |
|
"pages": "31--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Goldhahn, D., Eckart, T., and Quasthoff, U. (2012). Build- ing large monolingual dictionaries at the leipzig corpora collection: From 100 to 200 languages. In LREC, vol- ume 29, pages 31-43.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Recurrent neural network language model for englishindonesian machine translation: Experimental study", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Hermanto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Adji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Setiawan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "2015 International Conference on Science in Information Technology (ICSITech)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "132--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hermanto, A., Adji, T. B., and Setiawan, N. A. (2015). Recurrent neural network language model for english- indonesian machine translation: Experimental study. In 2015 International Conference on Science in Informa- tion Technology (ICSITech), pages 132-136. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Crosslingual language model pretraining", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1901.07291" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lample, G. and Conneau, A. (2019). Cross- lingual language model pretraining. arXiv preprint arXiv:1901.07291.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Handling indonesian clitics: A dataset comparison for an indonesian-english statistical machine translation system", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Larasati", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "146--152", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Larasati, S. D. (2012a). Handling indonesian clitics: A dataset comparison for an indonesian-english statistical machine translation system. In Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation, pages 146-152.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Identic corpus: Morphologically enriched indonesian-english parallel corpus", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Larasati", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "902--906", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Larasati, S. D. (2012b). Identic corpus: Morphologically enriched indonesian-english parallel corpus. In LREC, pages 902-906.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Opensubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Lison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kouylekov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lison, P., Tiedemann, J., and Kouylekov, M. (2018). Opensubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In Proceed- ings of the Eleventh International Conference on Lan- guage Resources and Evaluation (LREC 2018).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Nltk: the natural language toolkit", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Loper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Loper, E. and Bird, S. (2002). Nltk: the natural language toolkit. arXiv preprint cs/0205028.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Rancang bangun aplikasi web scraping untuk korpus paralel indonesia-inggris dengan metode html dom", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Mitra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Sujaini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"B P" |
|
], |
|
"last": "Negara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Jurnal Sistem dan Teknologi Informasi (JUSTIN)", |
|
"volume": "5", |
|
"issue": "1", |
|
"pages": "36--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitra, V., Sujaini, H., and Negara, A. B. P. (2017). Ran- cang bangun aplikasi web scraping untuk korpus paralel indonesia-inggris dengan metode html dom. Jurnal Sis- tem dan Teknologi Informasi (JUSTIN), 5(1):36-41.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Tufs asian language parallel corpus (talpco)", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Nomoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Okano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Moeljadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Sawada", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Twenty-Fourth Annual Meeting of the Association for Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "436--439", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nomoto, H., Okano, K., Moeljadi, D., and Sawada, H. (2018). Tufs asian language parallel corpus (talpco). In Proceedings of the Twenty-Fourth Annual Meeting of the Association for Natural Language Processing, pages 436-439.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "English-indonesian phrase translation using recurrent neural network and adj technique", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Octoviani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Fachrurrozi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Yusliani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Febriady", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Firdaus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Journal of Physics: Conference Series", |
|
"volume": "1196", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Octoviani, W., Fachrurrozi, M., Yusliani, N., Febriady, M., and Firdaus, A. (2019). English-indonesian phrase translation using recurrent neural network and adj tech- nique. In Journal of Physics: Conference Series, volume 1196, page 012007. IOP Publishing.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Scaling neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{

"first": "M",

"middle": [],

"last": "Auli",

"suffix": ""

}
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ott, M., Edunov, S., Grangier, D., and Auli, M. (2018). Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W.-J", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311- 318. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Training tips for the transformer model", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Popel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "The Prague Bulletin of Mathematical Linguistics", |
|
"volume": "110", |
|
"issue": "1", |
|
"pages": "43--70", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Popel, M. and Bojar, O. (2018). Training tips for the trans- former model. The Prague Bulletin of Mathematical Lin- guistics, 110(1):43-70.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A call for clarity in reporting BLEU scores", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "186--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Post, M. (2018). A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Ma- chine Translation: Research Papers, pages 186-191,", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Association for Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Brussels", |
|
"middle": [], |
|
"last": "Belgium", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Belgium, Brussels, October. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Improving language understanding by generative pre-training", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Narasimhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Salimans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I. (2018). Improving language un- derstanding by generative pre-training. URL https://s3-us-west-2.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Resources report on languages of indonesia", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Riza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 6th Workshop on Asian Language Resources", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Riza, H. (2008). Resources report on languages of indone- sia. In Proceedings of the 6th Workshop on Asian Lan- guage Resources.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Wikimatrix: Mining 135m parallel sentences in 1620 language pairs from wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Gong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.05791" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schwenk, H., Chaudhary, V., Sun, S., Gong, H., and Guzm\u00e1n, F. (2019). Wikimatrix: Mining 135m parallel sentences in 1620 language pairs from wikipedia. arXiv preprint arXiv:1907.05791.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Utterance disfluency handling in indonesian-english machine translation", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Shahih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Purwarianti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "2016 International Conference On Advanced Informatics: Concepts, Theory And Application (ICAICTA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--5", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shahih, K. M. and Purwarianti, A. (2016). Utter- ance disfluency handling in indonesian-english machine translation. In 2016 International Conference On Ad- vanced Informatics: Concepts, Theory And Application (ICAICTA), pages 1-5. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Don't decay the learning rate, increase the batch size", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P.-J", |
|
"middle": [], |
|
"last": "Kindermans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Ying", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1711.00489" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Smith, S. L., Kindermans, P.-J., Ying, C., and Le, Q. V. (2017). Don't decay the learning rate, increase the batch size. arXiv preprint arXiv:1711.00489.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Synchronizing translated movie subtitles", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tiedemann, J. (2008). Synchronizing translated movie subtitles. In LREC.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Parallel data, tools and interfaces in opus", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Lrec", |
|
"volume": "2012", |
|
"issue": "", |
|
"pages": "2214--2218", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tiedemann, J. (2012). Parallel data, tools and interfaces in opus. In Lrec, volume 2012, pages 2214-2218.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Infor- mation Processing Systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Variance reduction for stochastic gradient optimization", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Smola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Xing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "181--189", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wang, C., Chen, X., Smola, A. J., and Xing, E. P. (2013). Variance reduction for stochastic gradient optimization. In Advances in Neural Information Processing Systems, pages 181-189.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Developing indonesian-english hybrid machine translation system", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Yulianti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Budi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Hidayanto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Manurung", |
|
"suffix": "" |
|
}, |
|
{

"first": "M",

"middle": [],

"last": "Adriani",

"suffix": ""

}
|
], |
|
"year": 2011, |
|
"venue": "2011 International Conference on Advanced Computer Science and Information Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "265--270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yulianti, E., Budi, I., Hidayanto, A. N., Manurung, H. M., and Adriani, M. (2011). Developing indonesian-english hybrid machine translation system. In 2011 Interna- tional Conference on Advanced Computer Science and Information Systems, pages 265-270. IEEE.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "n-gram occurrences ratio between validation and test set across domains for n from 3 to 8.", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "CorpusAbbr. |sent en\u2212id | |tok en | |tok id | len en len id len ratio Domain/Content", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">OpenSubtitles v2018 OpenSub</td><td>9.3M</td><td colspan=\"2\">0.4M 0.5M 7.72 6.41</td><td colspan=\"2\">1.32 Movie</td></tr><tr><td>* Tanzil v1</td><td>Tanzil</td><td>0.4M</td><td colspan=\"4\">24.3K 25.4K 21.47 33.05 2.06 Religion</td></tr><tr><td>JW300 v1</td><td>JW300</td><td>0.6M</td><td colspan=\"4\">87.6K 83.2K 17.44 16.26 1.20 Religion</td></tr><tr><td colspan=\"2\">* Tatoeba v20190709 Tatoeba</td><td>9.9K</td><td colspan=\"2\">5.7K 6.9K 7.63 6.62</td><td colspan=\"2\">1.23 General</td></tr><tr><td>QED v2.0a</td><td>QED</td><td>0.3M</td><td colspan=\"4\">82.8K 85.9K 14.65 12.95 1.33 Talk, Lecture</td></tr><tr><td>GNOME v1</td><td>GNOME</td><td>40.4K</td><td colspan=\"4\">29.9K 30.1K 22.19 19.70 1.22 Tech</td></tr><tr><td>bible-uedin v1</td><td>Bible</td><td>59.4K</td><td colspan=\"4\">17.2K 21.0K 29.49 24.03 1.43 Religion</td></tr><tr><td>Ubuntu v14.10</td><td>Ubuntu</td><td>96.5K</td><td colspan=\"2\">37.9K 44.2K 6.26 6.18</td><td colspan=\"2\">1.25 Tech</td></tr><tr><td colspan=\"2\">GlobalVoices v2017q3 GV</td><td>14.4K</td><td colspan=\"4\">27.5K 27.3K 21.06 18.94 1.21 News</td></tr><tr><td>KDE4 v2</td><td>KDE</td><td>14.8K</td><td colspan=\"2\">9.5K 10.9K 5.72 6.26</td><td colspan=\"2\">1.49 Tech</td></tr><tr><td colspan=\"2\">Wikimatrix (T=1.02) Wiki[x]</td><td>1.8M</td><td>1M</td><td colspan=\"3\">0.9M 22.75 21.06 1.22 General</td></tr><tr><td>\u2202 Desmond86</td><td>Dsm</td><td>40.4K</td><td colspan=\"2\">29.9K 30.1K 22.19 19.7</td><td colspan=\"2\">1.22 News, Religion, Science</td></tr><tr><td>\u2202 IDENTIC v1</td><td>IDENTIC</td><td>27.3K</td><td colspan=\"4\">36K 35.4K 22.96 21.29 1.20 News, Movie</td></tr><tr><td>IWSLT 2017</td><td>IWSLT</td><td>0.1M</td><td colspan=\"4\">48.7K 48.2K 19.67 16.85 1.23 Conversation</td></tr><tr><td>PAN Localization</td><td>PANL</td><td>24K</td><td colspan=\"4\">35K 35.5K 22.96 21.29 1.20 News</td></tr><tr><td>TALPCo</td><td>TALPCo</td><td>1.4K</td><td colspan=\"2\">1.2K 
1.2K 9.08 7.58</td><td colspan=\"2\">1.26 General</td></tr><tr><td colspan=\"2\">BBC-BeritaJakarta BBC-BJ</td><td>3.9K</td><td colspan=\"4\">10.5K 10.1K 20.36 18.36 1.22 News</td></tr><tr><td>Ibn Majah</td><td>IbnMj</td><td>0.8K</td><td colspan=\"2\">3.9K 4.6K 65.41 51.95</td><td>1.4</td><td>Religion</td></tr><tr><td>YouTube v0</td><td>YT</td><td>0.3M</td><td colspan=\"2\">60.4K 63.4K 9.3 7.93</td><td colspan=\"2\">1.28 Talk, Lecture, Movie</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "", |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Performance of different baselines across News (low-resource) and General (high-resource) domain. Model generally performs well when evaluated with in-domain set. It performs poorly otherwise. An exception can be seen in the low-resource news domain. Adding general-domain to the training set improves the performance across different domains. Ultimately, combining all dataset yields the best results.", |
|
"html": null, |
|
"content": "<table><tr><td>Training Data</td><td colspan=\"4\">EN to ID evaluation (valid set)</td><td/><td/><td colspan=\"3\">ID to EN evaluation (valid set)</td><td/></tr><tr><td colspan=\"11\">News Religious Conv General Average News Religious Conv General Average</td></tr><tr><td>Transformer</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>News</td><td>10.2</td><td>6.5</td><td>9.8</td><td>8.2</td><td>8.7</td><td>9.6</td><td>6.3</td><td>12.3</td><td>8.9</td><td>9.3</td></tr><tr><td>General</td><td>18.8</td><td>15.2</td><td>15.8</td><td>26.8</td><td>19.1</td><td>13.1</td><td>10.2</td><td>9.8</td><td>25.3</td><td>15.4</td></tr><tr><td colspan=\"3\">Transformer + Language Pretraining</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>News</td><td>17.4</td><td>11.5</td><td>14.8</td><td>14.8</td><td>14.6</td><td>15.1</td><td>10.6</td><td>19.6</td><td>16.3</td><td>15.4</td></tr><tr><td>General</td><td>20.0</td><td>15.6</td><td>15.3</td><td>27.8</td><td>19.7</td><td>16.6</td><td>13.7</td><td>13.3</td><td>28.8</td><td>18.1</td></tr><tr><td>Table 4: Training Data</td><td colspan=\"4\">EN to ID evaluation (valid set)</td><td/><td/><td colspan=\"3\">ID to EN evaluation (valid set)</td><td/></tr><tr><td colspan=\"11\">News Religious Conv General Average News Religious Conv General Average</td></tr><tr><td>News</td><td>17.4</td><td>11.5</td><td>14.8</td><td>14.8</td><td>14.6</td><td>15.1</td><td>10.6</td><td>19.6</td><td>16.3</td><td>15.4</td></tr><tr><td>Religious</td><td>16.5</td><td>21.5</td><td>15.4</td><td>18.9</td><td>18.1</td><td>15.1</td><td>20.2</td><td>5.6</td><td>19.3</td><td>15.1</td></tr><tr><td>Conv</td><td>18.9</td><td>15.2</td><td>28.0</td><td>21.0</td><td>20.8</td><td>15.5</td><td>16.6</td><td>33.1</td><td>18.8</td><td>21.0</td></tr><tr><td>General</td><td>20.0</td><td>15.6</td><td>15.3</td><td>27.8</td><td>19.7</td><td>16.6</td><td>13.7</td><td>13.3</td><td>28.8</td><td>18.1</td></tr><tr><td>(a) Training Data</td><td colspan=\"4\">EN 
to ID evaluation (valid set)</td><td/><td/><td colspan=\"3\">ID to EN evaluation (valid set)</td><td/></tr><tr><td/><td colspan=\"10\">News Religious Conv General Average News Religious Conv General Average</td></tr><tr><td colspan=\"3\">Transformer + Language Pretraining</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>News + general</td><td>21.9</td><td>17.2</td><td>15.3</td><td>27.0</td><td>20.4</td><td>18.4</td><td>15.4</td><td>14.6</td><td>28.8</td><td>19.3</td></tr><tr><td>Relig.+ general</td><td>24.0</td><td>21.3</td><td>16.9</td><td>27.9</td><td>22.5</td><td>19.9</td><td>22.3</td><td>16.1</td><td>28.5</td><td>21.7</td></tr><tr><td>Conv + general</td><td>21.8</td><td>18.2</td><td>27.7</td><td>27.5</td><td>23.8</td><td>18.2</td><td>18.0</td><td>33.6</td><td>27.9</td><td>24.4</td></tr><tr><td>All</td><td>24.6</td><td>21.6</td><td>27.8</td><td>28.1</td><td>25.5</td><td>20.5</td><td>22.5</td><td>33.3</td><td>27.9</td><td>26.1</td></tr><tr><td>Google Translate</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>-</td><td>25.0</td><td>23.8</td><td>27.0</td><td>26.3</td><td>25.5</td><td>25.0</td><td>29.1</td><td>28.9</td><td>28.8</td><td>28.0</td></tr><tr><td>(b)</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Cross-domain evaluation of Transformer with language pretraining", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"3\">Test Domain EN to ID ID to EN</td></tr><tr><td>News</td><td>24.4</td><td>20.2</td></tr><tr><td>Religious</td><td>21.3</td><td>22.1</td></tr><tr><td>Conversation</td><td>27.3</td><td>32.4</td></tr><tr><td>General</td><td>28.1</td><td>28.9</td></tr><tr><td>Average</td><td>25.3</td><td>25.9</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Evaluation on test set. We compare our model trained with all dataset with Google Translate (GT).", |
|
"html": null, |
|
"content": "<table><tr><td/><td colspan=\"5\">News Relig. Conv General Avg</td></tr><tr><td>Fluency</td><td/><td/><td/><td/><td/></tr><tr><td>Corpus</td><td>4.78</td><td>4.73</td><td>4.63</td><td>4.63</td><td>4.69</td></tr><tr><td>Ours</td><td>4.44</td><td>4.22</td><td>4.62</td><td>4.21</td><td>4.37</td></tr><tr><td>Google</td><td>4.26</td><td>3.85</td><td>4.53</td><td>3.59</td><td>4.06</td></tr><tr><td>Adequecy</td><td/><td/><td/><td/><td/></tr><tr><td>Corpus</td><td>4.34</td><td>4.58</td><td>3.92</td><td>3.92</td><td>4.19</td></tr><tr><td>Ours</td><td>4.05</td><td>4.09</td><td>4.38</td><td>4.1</td><td>4.15</td></tr><tr><td>Google</td><td>4.27</td><td>3.99</td><td>4.6</td><td>3.92</td><td>4.2</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |