{
"paper_id": "I13-1033",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:14:38.863092Z"
},
"title": "Multimodal Comparable Corpora as Resources for Extracting Parallel Data: Parallel Phrases Extraction",
"authors": [
{
"first": "Haithem",
"middle": [],
"last": "Afli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 du Maine",
"location": {
"addrLine": "Avenue Olivier Messiaen",
"postCode": "F-72085",
"settlement": "-LE MANS",
"country": "France"
}
},
"email": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 du Maine",
"location": {
"addrLine": "Avenue Olivier Messiaen",
"postCode": "F-72085",
"settlement": "-LE MANS",
"country": "France"
}
},
"email": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 du Maine",
"location": {
"addrLine": "Avenue Olivier Messiaen",
"postCode": "F-72085",
"settlement": "-LE MANS",
"country": "France"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Discovering parallel data in comparable corpora is a promising approach for overcoming the lack of parallel texts in statistical machine translation and other NLP applications. In this paper we propose an alternative to comparable corpora of texts as resources for extracting parallel data: a multimodal comparable corpus of audio and texts. We present a novel method to detect parallel phrases from such corpora based on splitting comparable sentences into fragments, called phrases. The audio is transcribed by an automatic speech recognition system, split into fragments and translated with a baseline statistical machine translation system. We then use information retrieval in a large text corpus in the target language, split also into fragments, and extract parallel phrases. We compared our method with parallel sentences extraction techniques. We evaluate the quality of the extracted data on an English to French translation task and show significant improvements over a state-ofthe-art baseline.",
"pdf_parse": {
"paper_id": "I13-1033",
"_pdf_hash": "",
"abstract": [
{
"text": "Discovering parallel data in comparable corpora is a promising approach for overcoming the lack of parallel texts in statistical machine translation and other NLP applications. In this paper we propose an alternative to comparable corpora of texts as resources for extracting parallel data: a multimodal comparable corpus of audio and texts. We present a novel method to detect parallel phrases from such corpora based on splitting comparable sentences into fragments, called phrases. The audio is transcribed by an automatic speech recognition system, split into fragments and translated with a baseline statistical machine translation system. We then use information retrieval in a large text corpus in the target language, split also into fragments, and extract parallel phrases. We compared our method with parallel sentences extraction techniques. We evaluate the quality of the extracted data on an English to French translation task and show significant improvements over a state-ofthe-art baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The development of a statistical machine translation (SMT) system requires one or more parallel corpora called bitexts for training the translation model and monolingual data to build the target language model. Unfortunately, parallel texts are a limited resource and they are often not available for some specific domains and language pairs. That is why, recently, there has been a huge interest in the automatic creation of parallel data. Since comparable corpora exist in large quantities and are much more easily available (Munteanu and Marcu, 2005) , the ability to exploit them is highly beneficial in order to overcome the lack of parallel data. The ability to detect these parallel data enables the automatic creation of large parallel corpora.",
"cite_spans": [
{
"start": 527,
"end": 553,
"text": "(Munteanu and Marcu, 2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of existing studies dealing with comparable corpora look for parallel data at the sentence level (Zhao and Vogel, 2002; Utiyama and Isahara, 2003; Munteanu and Marcu, 2005; Abdul-Rauf and Schwenk, 2011) . However, the degree of parallelism can vary considerably, from noisy parallel texts, to quasi parallel texts (Fung and Cheung, 2004) . Corpora from the last category contain none or few good parallel sentence pairs. However, there could have parallel phrases in comparable sentences that can prove to be helpful for SMT (Munteanu and Marcu, 2006) . As an example, consider Figure 1 , which presents two news articles with their video from the English and French editions of the Euronews website 1 . The articles report on the same event with different sentences that contain some parallel translations at the phrase level. These two documents contain in particular no exact sentence pairs, so techniques for extracting parallel sentences will not give good results. We need a method to extract parallel phrases which exist at the sub-sentential level.",
"cite_spans": [
{
"start": 102,
"end": 124,
"text": "(Zhao and Vogel, 2002;",
"ref_id": "BIBREF27"
},
{
"start": 125,
"end": 151,
"text": "Utiyama and Isahara, 2003;",
"ref_id": "BIBREF25"
},
{
"start": 152,
"end": 177,
"text": "Munteanu and Marcu, 2005;",
"ref_id": "BIBREF13"
},
{
"start": 178,
"end": 207,
"text": "Abdul-Rauf and Schwenk, 2011)",
"ref_id": "BIBREF0"
},
{
"start": 319,
"end": 342,
"text": "(Fung and Cheung, 2004)",
"ref_id": "BIBREF3"
},
{
"start": 530,
"end": 556,
"text": "(Munteanu and Marcu, 2006)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 583,
"end": 591,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For some languages, text comparable corpora may not cover all topics in some specific domains and languages. This is because potential sources of comparable corpora are mainly derived from multilingual news reporting agencies like AFP, Xinhua, Al-Jazeera, BBC etc, or multilingual encyclopedias like Wikipedia, Encarta etc. What we need is exploring other sources like audio to generate parallel data for such domains that can improve the performance of an SMT system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a method for detecting and extracting parallel data from multimodal corpora. Our method consists in extracting parallel 2 Extracting parallel data 2.1 Basic Idea Figure 2 shows an example of multimodal comparable data coming from the TED website 2 . We have an audio source of a talk in English and its text translation in French. We think that we can extract parallel data from this corpora, at the sentence and the sub-sentential level. In this work we seek to adapt and to improve machine translation systems that suffer from resource deficiency by automatically extracting parallel data in specific domains. Figure 3 : Principle of the parallel phrase extraction system from multimodal comparable corpora.",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 196,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 638,
"end": 646,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The basic system architecture is described in Figure 3. We can distinguish three steps: automatic speech recognition (ASR), statistical machine translation (SMT) and information retrieval (IR). The ASR system accepts audio data in the source language L1 and generates an automatic transcription. This transcription is then split into phrases and translated by a baseline SMT system into language L2. Then, we use these translations as queries for an IR system to retrieve most similar phrases in the texts in L2, which were previouslt split into phrases. The transcribed phrases in L1 and the IR result in L2 form the final parallel data. We hope that the errors made by the ASR and SMT systems will not impact too severely the extraction process.",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 52,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "2.2"
},
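To make the control flow of Figure 3 concrete, the following minimal Python sketch chains the three stages for phrases that have already been produced by the splitting step. The callables `translate` and `retrieve` are hypothetical stand-ins for the baseline SMT system and the IR engine described in Section 2.3; only the loop structure is meant to be illustrative.

```python
# Minimal sketch of the ASR -> SMT -> IR extraction loop (illustration only,
# not the authors' implementation).

def extract_candidates(transcribed_phrases, translate, retrieve):
    """Pair each transcribed L1 phrase with the best-matching L2 phrase.

    `transcribed_phrases` are L1 phrases obtained by splitting the ASR output;
    `translate` and `retrieve` are hypothetical callables standing in for the
    baseline SMT system and the IR engine over the indexed L2 text.
    """
    candidates = []
    for src_phrase in transcribed_phrases:
        query = translate(src_phrase)    # SMT: L1 phrase -> L2 query
        tgt_phrase = retrieve(query)     # IR: best-matching indexed L2 phrase
        if tgt_phrase is not None:
            # (L1 phrase, retrieved L2 phrase) is only a candidate pair;
            # it is filtered later with TER (Section 2.3).
            candidates.append((src_phrase, tgt_phrase))
    return candidates
```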
{
"text": "Our technique is similar to that of (Munteanu and Marcu, 2006 ), but we bypass the need of the Log-Likelihood-Ratio lexicon by using a baseline SMT system and the TER measure (Snover et al., 2006) for filtering. We also report an extension of the work of (Afli et al., 2012) by splitting transcribed sentences and the text parts of the multimodal corpus into phrases with length between two to ten tokens. We extract from each sentence on the corpus all combinations of two to ten sequential words.",
"cite_spans": [
{
"start": 36,
"end": 61,
"text": "(Munteanu and Marcu, 2006",
"ref_id": "BIBREF14"
},
{
"start": 175,
"end": 196,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF22"
},
{
"start": 255,
"end": 274,
"text": "(Afli et al., 2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "2.2"
},
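A direct way to enumerate these fragments is to slide windows of every length from two to ten tokens over each sentence. The helper below is a minimal sketch of that splitting step; whitespace tokenization is an assumption, since the paper does not specify the tokenizer.

```python
def split_into_phrases(sentence, min_len=2, max_len=10):
    """Return all contiguous token sequences of 2 to 10 tokens from a sentence."""
    tokens = sentence.split()  # assumption: simple whitespace tokenization
    phrases = []
    for n in range(min_len, min(max_len, len(tokens)) + 1):
        for start in range(len(tokens) - n + 1):
            phrases.append(" ".join(tokens[start:start + n]))
    return phrases

# A 5-token sentence yields 4 + 3 + 2 + 1 = 10 phrases of length 2 to 5.
print(len(split_into_phrases("we can extract parallel data")))  # -> 10
```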
{
"text": "Our ASR system is a five-pass system based on the open-source CMU Sphinx toolkit 3 (version 3 and 4), similar to the LIUM'08 French ASR system described in (Del\u00e9glise et al., 2009) . The acoustic models are trained in the same manner, except that a multi-layer perceptron (MLP) is added using the bottle-neck feature extraction as described in (Gr\u00e9zl and Fousek, 2008) . Table 1 : Performance of the ASR system on development and test data.",
"cite_spans": [
{
"start": 156,
"end": 180,
"text": "(Del\u00e9glise et al., 2009)",
"ref_id": "BIBREF2"
},
{
"start": 344,
"end": 368,
"text": "(Gr\u00e9zl and Fousek, 2008)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 371,
"end": 378,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline systems",
"sec_num": "2.3"
},
{
"text": "Our SMT system is a phrase-based system (Koehn et al., 2003) based on the Moses SMT toolkit (Koehn et al., 2007) . The standard fourteen feature functions are used, namely phrase and lexical translation probabilities in both directions, seven features for the lexicalized distortion model, a word and a phrase penalty and a target language model. It is constructed as follows. First, word alignments in both directions are calculated. We used the multi-threaded version of the GIZA++ tool (Gao and Vogel, 2008) . Phrases and lexical reorderings are extracted using the default settings of the Moses toolkit. The parameters of our system were tuned on a development corpus, using the MERT tool (Och, 2003) .",
"cite_spans": [
{
"start": 40,
"end": 60,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF10"
},
{
"start": 92,
"end": 112,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF11"
},
{
"start": 489,
"end": 510,
"text": "(Gao and Vogel, 2008)",
"ref_id": "BIBREF5"
},
{
"start": 693,
"end": 704,
"text": "(Och, 2003)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline systems",
"sec_num": "2.3"
},
{
"text": "We use the Lemur IR toolkit (Ogilvie and Callan, 2001) for the phrases extraction procedure. We first index all the French text (after splitting it into segments) into a database using Indri Index. This feature enable us to index our text documents in such a way we can use the translated phrases as queries to run information retrieval in the database, with the specialized Indri Query Language. By these means we can retrieve the best matching phrases from the French side of the comparable corpus.",
"cite_spans": [
{
"start": 28,
"end": 54,
"text": "(Ogilvie and Callan, 2001)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline systems",
"sec_num": "2.3"
},
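The sketch below illustrates the retrieval step with a simple bag-of-words cosine score over the indexed French phrases. It is a self-contained stand-in for the Lemur/Indri engine used here, not its actual API; the function and variable names are ours.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bags of words (Counter objects)."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def build_index(french_phrases):
    """'Index' the split French text as bags of words (stand-in for Indri)."""
    return [(phrase, Counter(phrase.split())) for phrase in french_phrases]

def retrieve(query, index):
    """Return the indexed French phrase that best matches a translated query."""
    q = Counter(query.split())
    best_phrase, best_score = None, 0.0
    for phrase, bag in index:
        score = cosine(q, bag)
        if score > best_score:
            best_phrase, best_score = phrase, score
    return best_phrase
```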
{
"text": "For each candidate phrases pair, we need to decide whether the two phrases are mutual translations. For this, we calculate the TER between them using the tool described in (Servan and ",
"cite_spans": [
{
"start": 172,
"end": 183,
"text": "(Servan and",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline systems",
"sec_num": "2.3"
},
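The TER computation itself is done with the tercpp tool; purely as an illustration, the sketch below computes a plain word-level edit rate (edit distance divided by reference length, without TER's block-shift operation), which is enough to show how candidate pairs can be scored before threshold filtering.

```python
def word_edit_rate(hypothesis, reference):
    """Word-level edit distance divided by reference length, as a percentage.

    Simplified stand-in for TER (no block shifts), for illustration only.
    """
    hyp, ref = hypothesis.split(), reference.split()
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(hyp)][len(ref)] / max(len(ref), 1)
```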
{
"text": "In our experiments, we compare our phrase extraction method (which we call PhrExtract) with the sentence extraction method (SentExtract) of (Afli et al., 2012) . We use the extracted dataset by both methods as additional SMT training data, and measure the quality of the parallel data by its impact on the performance of the SMT system. Thus, the final extracated parallel data is injected into the baseline system. The various SMT systems are evaluated using the BLEU score (Papineni et al., 2002) . We conducted experiments on an English to French machine translation task. All the text data is automatically split into phrases of two to ten tokens.",
"cite_spans": [
{
"start": 140,
"end": 159,
"text": "(Afli et al., 2012)",
"ref_id": "BIBREF1"
},
{
"start": 475,
"end": 498,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Our multimodal comparable corpus consists of spoken talks in English (audio) and written texts in French. The goal of the TED task is to translate public lectures from English into French. The TED corpus totals about 118 hours of speech. We call the English transcriptions of the audio part TEDasr witch is split into phrases (called TEDasr split). A detailed description of the TED task can be found in (Rousseau et al., 2011) .",
"cite_spans": [
{
"start": 404,
"end": 427,
"text": "(Rousseau et al., 2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data description",
"sec_num": "3.1"
},
{
"text": "The development corpus DevTED consists of 19 talks and represents a total of 4 hours and 13 minutes of speech transcribed at the sentence level. The language model is trained with the SRI LM toolkit (Stolcke, 2002) , on all the available French data without the TED data. The baseline system is trained with version 7 of the News-Commentary (nc7) and Europarl (eparl7) corpus. 5 The indexed data consist of the French text part of the TED corpus which contains translations of the English part of the corpus. We call it TEDbi. It is split into phrases (called TEDbi split). Tables 2 and 3 summarize the characteristics of the different corpora used in our experiments.",
"cite_spans": [
{
"start": 199,
"end": 214,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF23"
},
{
"start": 377,
"end": 378,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data description",
"sec_num": "3.1"
},
{
"text": "We first apply sentence extraction on the TED corpus with a method similar to (Afli et al., 2012) . We then apply phrase extraction on the same data split bitexts # tokens in-domain ? nc7",
"cite_spans": [
{
"start": 78,
"end": 97,
"text": "(Afli et al., 2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "3.2"
},
{
"text": "56.4M no DevTED 36k yes As mentioned in section 2.3, the TER score is used as a metric for filtering the result of IR. We keep only the sentences or phrases which have a TER score below a certain threshold determined empirically. Thus, we filter the selected sentences or phrases in each condition with different TER thresholds ranging from 0 to 100 by steps of 10. The extracted parallel data are added to our generic training data in order to adapt the baseline system. Table 4 presents the BLEU score obtained for these different experimental conditions. Our baseline SMT system, trained with generic bitexts achieves a BLEU score of 22.93. We can see that our new method of phrase extraction significantly improve the baseline system more than sentences extraction method until the TER threshold of 80 is reached: the BLEU score increases from 22.93 to 23.70 with the best system of our proposed method and from 22.93 to 23.40 with the best system using the classical method of sentence extraction.",
"cite_spans": [],
"ref_spans": [
{
"start": 472,
"end": 479,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "3.2"
},
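The threshold sweep described above can be expressed as a simple filter over scored candidate pairs. The sketch below assumes each candidate pair has already been scored (for example with an edit-rate function such as the one sketched in Section 2.3) and keeps, for each threshold from 0 to 100 in steps of 10, the pairs whose score does not exceed it.

```python
def filter_by_threshold(scored_pairs, threshold):
    """Keep (source, target) pairs whose TER-style score is at or below the threshold."""
    return [(src, tgt) for src, tgt, ter in scored_pairs if ter <= threshold]

def sweep(scored_pairs, thresholds=range(0, 101, 10)):
    """One filtered data set per TER threshold, each added to the generic bitexts."""
    return {t: filter_by_threshold(scored_pairs, t) for t in thresholds}
```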
{
"text": "The results show that the choice of the appropriate TER threshold depends on the method. We can see that for PhrExtract the best threshold is 60 when the best one is 80 for SentExtract. This last one is also an important point in the general evaluation of the two methods. In fact, we can see on Figure 4 that from this point our proposed method gives less performing results than SentExtract method.",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 304,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "3.2"
},
{
"text": "This suggest to apply combination of the two methods. This corresponds to injecting the extracted phrases and sentences into the training data. The combination method is called CombExtract. Figure 4 presents the comparison of the different experimental conditions in term of BLEU score for each TER threshold. We can see that except for threshold 30, the curve of the combination follows in general the same trajectory of the curve of PhrExtract. These results show that SentExtract has no big impact in combination with the PhrExtract method and the best threshold when using PhrExtract is at 60. This is because of the big difference on the quantity of data between the two methods as we can see in Table 4 . The benefit of our method is that it can generates more quantities of parallel data than the sentence extraction method for each TER threshold, and this difference of quantities improves results of MT system until the TER threshold of 80 is reached. However, we can see in Table 4 that the quality of only 39.35k (TER 80) extracted by SentExtract can have exactly the same impact of 25.3M extracted by our new technique. That is why we intend to investigate in the filtering module of our system.",
"cite_spans": [],
"ref_spans": [
{
"start": 190,
"end": 198,
"text": "Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 701,
"end": 708,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 984,
"end": 991,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "3.2"
},
{
"text": "Research on exploiting comparable corpora goes back to more than 15 years ago (Fung and Yee, 1998; Koehn and Knight, 2000; Vogel, 2003; Gaussier et al., 2004; Li and Gaussier, 2010) . A lot of studies on data acquisition from comparable corpora for machine translation have been reported (Su and Babych, 2012; Hewavitharana and Vogel, 2011; Riesa and Marcu, 2012) .",
"cite_spans": [
{
"start": 78,
"end": 98,
"text": "(Fung and Yee, 1998;",
"ref_id": "BIBREF4"
},
{
"start": 99,
"end": 122,
"text": "Koehn and Knight, 2000;",
"ref_id": "BIBREF9"
},
{
"start": 123,
"end": 135,
"text": "Vogel, 2003;",
"ref_id": "BIBREF26"
},
{
"start": 136,
"end": 158,
"text": "Gaussier et al., 2004;",
"ref_id": "BIBREF6"
},
{
"start": 159,
"end": 181,
"text": "Li and Gaussier, 2010)",
"ref_id": "BIBREF12"
},
{
"start": 288,
"end": 309,
"text": "(Su and Babych, 2012;",
"ref_id": "BIBREF24"
},
{
"start": 310,
"end": 340,
"text": "Hewavitharana and Vogel, 2011;",
"ref_id": "BIBREF8"
},
{
"start": 341,
"end": 363,
"text": "Riesa and Marcu, 2012)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "To the best of our knowledge (Munteanu and Marcu, 2006) was the first attempt to extract parallel sub-sentential fragments (phrases), from comparable corpora. They used a method based on a Log-Likelihood-Ratio lexicon and a smoothing filter. They showed the effectiveness of their method to improve an SMT system from a collection of a comparable sentences. The weakness of their method is that they filter source and target fragments separately, which cannot guarantee that the extracted fragments are a good translations of each other. (Hewavitharana and Vogel, 2011) show a good result with their method based on on a pairwise correlation calculation which suppose that the source fragment has been detected.",
"cite_spans": [
{
"start": 43,
"end": 55,
"text": "Marcu, 2006)",
"ref_id": "BIBREF14"
},
{
"start": 538,
"end": 569,
"text": "(Hewavitharana and Vogel, 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "The second type of approach in extracting parallel phrases is the alignment-based approach (Quirk et al., 2007; Riesa and Marcu, 2012) . These methods are promising, but since the proposed method in (Quirk et al., 2007) do not improve significantly MT performance and model in (Riesa and Marcu, 2012) is designed for parallel data, it's hard to say that this approach is actually effective for comparable data.",
"cite_spans": [
{
"start": 91,
"end": 111,
"text": "(Quirk et al., 2007;",
"ref_id": "BIBREF18"
},
{
"start": 112,
"end": 134,
"text": "Riesa and Marcu, 2012)",
"ref_id": "BIBREF19"
},
{
"start": 199,
"end": 219,
"text": "(Quirk et al., 2007)",
"ref_id": "BIBREF18"
},
{
"start": 277,
"end": 300,
"text": "(Riesa and Marcu, 2012)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "This work is similar to the work by (Afli et al., 2012) where the extraction is done at the phrase level instead of the sentence level. Our methodology is the first effort aimed at detecting translated phrases on a multimodal corpora.",
"cite_spans": [
{
"start": 36,
"end": 55,
"text": "(Afli et al., 2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Since our method can extract parallel phrases from a multimodal corpus, it greatly expands the range of corpora which can be usefully exploited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "We have presented a fully automatic method for extracting parallel phrases from multimodal comparable corpora, i.e. the source side is available as audio stream and the target side as text. We used a framework to extract parallel data witch combine an automatic speech recognition system, a statistical machine translation system and information retrieval system. We showed by experiments conducted on English-French data, that parallel phrases extracted with this method improves significantly SMT performance. Our approach can be improved in several aspects. The automatic splitting is very simple; more advanced phrases generation might work better, and eliminate redundancy. Trying other method on filtering can also improve the precision of the method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "www.euronews.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Carnegie Mellon University: http://cmusphinx.sourceforge.net/ Schwenk, 2011), 4 i.e. between automatic translation, and the phrases selected by IR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://sourceforge.net/projects/ tercpp/ 5 http://www.statmt.org/europarl/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been partially funded by the French Government under the project DEPART.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Parallel sentence generation from comparable corpora for improved smt",
"authors": [
{
"first": "S",
"middle": [],
"last": "Abdul-Rauf",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Abdul-Rauf and H. Schwenk. 2011. Parallel sen- tence generation from comparable corpora for im- proved smt. Machine Translation.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Parallel texts extraction from multimodal comparable corpora",
"authors": [
{
"first": "H",
"middle": [],
"last": "Afli",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2012,
"venue": "JapTAL",
"volume": "7614",
"issue": "",
"pages": "40--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Afli, L. Barrault, and H. Schwenk. 2012. Paral- lel texts extraction from multimodal comparable cor- pora. In JapTAL, volume 7614 of Lecture Notes in Computer Science, pages 40-51. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Improvements to the LIUM french ASR system based on CMU Sphinx: what helps to significantly reduce the word error rate?",
"authors": [
{
"first": "P",
"middle": [],
"last": "Del\u00e9glise",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Est\u00e8ve",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Meignier",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Merlin",
"suffix": ""
}
],
"year": 2009,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "6--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Del\u00e9glise, Y. Est\u00e8ve, S. Meignier, and T. Merlin. 2009. Improvements to the LIUM french ASR sys- tem based on CMU Sphinx: what helps to signifi- cantly reduce the word error rate? In Interspeech 2009, Brighton (United Kingdom), 6-10 september.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multi-level bootstrapping for extracting parallel sentences from a quasicomparable corpus",
"authors": [
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Cheung",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th international conference on Computational Linguistics, COLING '04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Fung and P. Cheung. 2004. Multi-level bootstrap- ping for extracting parallel sentences from a quasi- comparable corpus. In Proceedings of the 20th in- ternational conference on Computational Linguis- tics, COLING '04.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An ir approach for translating new words from nonparallel, comparable texts",
"authors": [
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "L",
"middle": [
"Y"
],
"last": "Yee",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 17th international conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "414--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Fung and L. Y. Yee. 1998. An ir approach for translating new words from nonparallel, compara- ble texts. In Proceedings of the 17th international conference on Computational linguistics -Volume 1, COLING '98, pages 414-420.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Parallel implementations of word alignment tool",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2008,
"venue": "Software Engineering, Testing, and Quality Assurance for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "49--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Q. Gao and S. Vogel. 2008. Parallel implementa- tions of word alignment tool. In Software Engineer- ing, Testing, and Quality Assurance for Natural Lan- guage Processing, SETQA-NLP '08, pages 49-57.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A geometric view on bilingual lexicon extraction from comparable corpora",
"authors": [
{
"first": "E",
"middle": [],
"last": "Gaussier",
"suffix": ""
},
{
"first": "J.-M",
"middle": [],
"last": "Renders",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Matveeva",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "D\u00e9jean",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, ACL '04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Gaussier, J.-M. Renders, I. Matveeva, C. Goutte, and H. D\u00e9jean. 2004. A geometric view on bilingual lexicon extraction from comparable corpora. In Pro- ceedings of the 42nd Annual Meeting on Association for Computational Linguistics, ACL '04.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Optimizing bottle-neck features for LVCSR",
"authors": [
{
"first": "F",
"middle": [],
"last": "Gr\u00e9zl",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Fousek",
"suffix": ""
}
],
"year": 2008,
"venue": "2008 IEEE International Conference on Acoustics, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "4729--4732",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Gr\u00e9zl and P. Fousek. 2008. Optimizing bottle-neck features for LVCSR. In 2008 IEEE International Conference on Acoustics, Speech, and Signal Pro- cessing, pages 4729-4732. IEEE Signal Processing Society.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Extracting parallel phrases from comparable data",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hewavitharana",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web, BUCC '11",
"volume": "",
"issue": "",
"pages": "61--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Hewavitharana and S. Vogel. 2011. Extracting par- allel phrases from comparable data. In Proceedings of the 4th Workshop on Building and Using Compa- rable Corpora: Comparable Corpora and the Web, BUCC '11, pages 61-68.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Estimating word translation probabilities from unrelated monolingual corpora using the em algorithm",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "711--715",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn and K. Knight. 2000. Estimating word trans- lation probabilities from unrelated monolingual cor- pora using the em algorithm. In Proceedings of the Seventeenth National Conference on Artificial Intel- ligence and Twelfth Conference on Innovative Ap- plications of Artificial Intelligence, pages 711-715. AAAI Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter",
"volume": "1",
"issue": "",
"pages": "48--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, Franz J. Och, and D. Marcu. 2003. Sta- tistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology -Volume 1, NAACL '03, pages 48-54.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Moses: open source toolkit for statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL '07",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interac- tive Poster and Demonstration Sessions, ACL '07, pages 177-180.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving corpus comparability for bilingual lexicon extraction from comparable corpora",
"authors": [
{
"first": "B",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gaussier",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10",
"volume": "",
"issue": "",
"pages": "644--652",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Li and E. Gaussier. 2010. Improving corpus com- parability for bilingual lexicon extraction from com- parable corpora. In Proceedings of the 23rd Inter- national Conference on Computational Linguistics, COLING '10, pages 644-652.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improving Machine Translation Performance by Exploiting Non-Parallel Corpora",
"authors": [
{
"first": "D",
"middle": [
"S"
],
"last": "Munteanu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "4",
"pages": "477--504",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. S. Munteanu and D. Marcu. 2005. Improv- ing Machine Translation Performance by Exploiting Non-Parallel Corpora. Computational Linguistics, 31(4):477-504.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Extracting parallel sub-sentential fragments from non-parallel corpora",
"authors": [
{
"first": "D",
"middle": [
"S"
],
"last": "Munteanu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. S. Munteanu and D. Marcu. 2006. Extracting parallel sub-sentential fragments from non-parallel corpora. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Compu- tational Linguistics, ACL-44, pages 81-88.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz J. Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computa- tional Linguistics -Volume 1, ACL '03, pages 160- 167, Stroudsburg, PA, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Experiments using the lemur toolkit",
"authors": [
{
"first": "P",
"middle": [],
"last": "Ogilvie",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Callan",
"suffix": ""
}
],
"year": 2001,
"venue": "Procedding of the Trenth Text Retrieval Conference (TREC-10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Ogilvie and J. Callan. 2001. Experiments using the lemur toolkit. Procedding of the Trenth Text Re- trieval Conference (TREC-10).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W.-J",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meet- ing on Association for Computational Linguistics, ACL '02, pages 311-318.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Generative models of noisy translations with applications to parallel fragment extraction",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Udupa",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Menezes",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of MT Summit XI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Q. Quirk, R. Udupa, and A. Menezes. 2007. Gener- ative models of noisy translations with applications to parallel fragment extraction. In In Proceedings of MT Summit XI, European Association for Machine Translation.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic parallel fragment extraction from noisy data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Riesa",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT '12",
"volume": "",
"issue": "",
"pages": "538--542",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Riesa and D. Marcu. 2012. Automatic parallel frag- ment extraction from noisy data. In Proceedings of the 2012 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT '12, pages 538-542.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "LIUM's systems for the IWSLT 2011 speech translation tasks",
"authors": [
{
"first": "A",
"middle": [],
"last": "Rousseau",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Del\u00e9glise",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Est\u00e8ve",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Rousseau, F. Bougares, P. Del\u00e9glise, H. Schwenk, and Y. Est\u00e8ve. 2011. LIUM's systems for the IWSLT 2011 speech translation tasks. International Workshop on Spoken Language Translation 2011.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Optimising multiple metrics with mert",
"authors": [
{
"first": "C",
"middle": [],
"last": "Servan",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2011,
"venue": "The Prague Bulletin of Mathematical Linguistics (PBML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Servan and H. Schwenk. 2011. Optimising multiple metrics with mert. The Prague Bulletin of Mathe- matical Linguistics (PBML).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "223--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Snover, B. Dorr, R. Schwartz, M. Micciulla, and J. Makhoul. 2006. A study of translation edit rate with targeted human annotation. Proceedings of As- sociation for Machine Translation in the Americas, pages 223-231.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "SRILM -an extensible language modeling toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "257--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Stolcke. 2002. SRILM -an extensible lan- guage modeling toolkit. In International Confer- ence on Spoken Language Processing, pages 257- 286, November.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Measuring comparability of documents in non-parallel corpora for efficient extraction of (semi-)parallel translation equivalents",
"authors": [
{
"first": "F",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Babych",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Joint Workshop on Exploiting Synergies between Information Retrieval and Machine Translation (ESIRMT) and Hybrid Approaches to Machine Translation (HyTra), EACL 2012",
"volume": "",
"issue": "",
"pages": "10--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Su and B. Babych. 2012. Measuring comparabil- ity of documents in non-parallel corpora for effi- cient extraction of (semi-)parallel translation equiv- alents. In Proceedings of the Joint Workshop on Exploiting Synergies between Information Retrieval and Machine Translation (ESIRMT) and Hybrid Ap- proaches to Machine Translation (HyTra), EACL 2012, pages 10-19. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Reliable measures for aligning japanese-english news articles and sentences",
"authors": [
{
"first": "M",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics -Volume 1, ACL '03",
"volume": "",
"issue": "",
"pages": "72--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Utiyama and H. Isahara. 2003. Reliable measures for aligning japanese-english news articles and sen- tences. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics -Vol- ume 1, ACL '03, pages 72-79.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Using noisy bilingual data for statistical machine translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "175--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Vogel. 2003. Using noisy bilingual data for sta- tistical machine translation. In Proceedings of the tenth conference on European chapter of the Asso- ciation for Computational Linguistics -Volume 2, EACL '03, pages 175-178.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Adaptive parallel sentences mining from web bilingual news collection",
"authors": [
{
"first": "B",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2002 IEEE International Conference on Data Mining, ICDM '02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Zhao and S. Vogel. 2002. Adaptive parallel sen- tences mining from web bilingual news collection. In Proceedings of the 2002 IEEE International Con- ference on Data Mining, ICDM '02, Washington, DC, USA. IEEE Computer Society.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Example of multimodal comparable corpora from the TED website.phrases.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Performance of PhrExtract, SentExtract and their combination in term of BLEU score for each TER threshold.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"content": "<table><tr><td>Corpus</td><td>% WER</td></tr><tr><td colspan=\"2\">Development 19.2</td></tr><tr><td>Test</td><td>17.4</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": ".3 shows the performances of the ASR system on the development and test corpora."
},
"TABREF2": {
"content": "<table><tr><td>Data</td><td colspan=\"2\"># tokens in-domain ?</td></tr><tr><td>TEDasr</td><td>1.8M</td><td>yes</td></tr><tr><td>TEDbi</td><td>1.9M</td><td>yes</td></tr><tr><td>TEDbi split</td><td>80.4M</td><td>yes</td></tr><tr><td colspan=\"2\">TEDasr split 82.7M</td><td>yes</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "MT training and development data."
},
"TABREF3": {
"content": "<table><tr><td>: Comparable data used for the extraction</td></tr><tr><td>experiments.</td></tr><tr><td>as described in 2.2. Then, both methods are com-</td></tr><tr><td>pared.</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": ""
},
"TABREF5": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Number of tokens extracted and BLEU scores on DevTED obtained with PhrExtract and Sen-tExtract methods for each TER threshold."
}
}
}
}