{ "paper_id": "I11-1047", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:30:34.324412Z" }, "title": "Mining Parallel Documents Using Low Bandwidth and High Precision CLIR from the Heterogeneous Web", "authors": [ { "first": "Simon", "middle": [], "last": "Shi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Human Language Technology Center Hong Kong University of Science and Technology (HKUST)", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "", "affiliation": { "laboratory": "", "institution": "Human Language Technology Center Hong Kong University of Science and Technology (HKUST)", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "pascale@ust.hk" }, { "first": "Emmanuel", "middle": [], "last": "Prochasson", "suffix": "", "affiliation": {}, "email": "emmanuel@butter.com.hk" }, { "first": "Chi-Kiu", "middle": [], "last": "Lo", "suffix": "", "affiliation": { "laboratory": "", "institution": "Human Language Technology Center Hong Kong University of Science and Technology (HKUST)", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "" }, { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Human Language Technology Center Hong Kong University of Science and Technology (HKUST)", "location": { "addrLine": "Clear Water Bay", "settlement": "Hong Kong" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a content-based approach to mine parallel resources from the entire web using cross lingual information retrieval (CLIR) with search query relevance score (SQRS). Our method improves mining recall by going beyond URL matching to find parallel documents from non-parallel sites. We introduce SQRS to improve the precision of mining. Our method makes use of search engines to query for target document given each source document and therefore does not require downloading target language documents in batch mode, reducing computational cost on the local machines and bandwidth consumption. We obtained a very high mining precision (88%) on the parallel documents by the pure CLIR approach. After extracting parallel sentences from the mined documents and using them to train an SMT system, we found that the SMT performance, with 29.88 BLEU score, is comparable to that obtained with high quality manually translated parallel sentences with 29.54 BLEU score, illustrating the excellent quality of the mined parallel material.", "pdf_parse": { "paper_id": "I11-1047", "_pdf_hash": "", "abstract": [ { "text": "We propose a content-based approach to mine parallel resources from the entire web using cross lingual information retrieval (CLIR) with search query relevance score (SQRS). Our method improves mining recall by going beyond URL matching to find parallel documents from non-parallel sites. We introduce SQRS to improve the precision of mining. Our method makes use of search engines to query for target document given each source document and therefore does not require downloading target language documents in batch mode, reducing computational cost on the local machines and bandwidth consumption. We obtained a very high mining precision (88%) on the parallel documents by the pure CLIR approach. 
After extracting parallel sentences from the mined documents and using them to train an SMT system, we found that the SMT performance, with 29.88 BLEU score, is comparable to that obtained with high quality manually translated parallel sentences with 29.54 BLEU score, illustrating the excellent quality of the mined parallel material.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Parallel resources such as bilingual lexicons and sentence translations are typically obtained from translated parallel documents. Over the last decade, the web has grown into a heterogeneous archive of trillions of URLs. There is a need to readdress the problem of how to mine parallel documents from the web.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We suggest that parallel documents can be mined with high precision from websites that are not necessarily parallel to each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Parallel resources reside on a diverse range of websites which can be classified into the following categories:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Parallel websites: a single website with structurally aligned bilingual pages. Typically they are websites of institutions, governments and commercial companies (e.g. Financial Times Chinese/English, Wall Street Journal Chinese/English). Structure-based methods were previously proposed to mine parallel documents from these websites: Resnik and Smith (2003) used (1) parent pages containing links to versions of one document in different languages and (2) sibling pages containing links to translations of the current document. They also relied on URLs and anchor text to spot language-specific versions of documents.", "cite_spans": [ { "start": 334, "end": 357, "text": "Resnik and Smith (2003)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A structural alignment model using a DOM tree representation was proposed by Shi et al. (2006) to align parallel documents based on their HTML structure. They identify translationally equivalent texts and hyperlinks between two parallel DOM trees to find parallel documents.", "cite_spans": [ { "start": 69, "end": 86, "text": "Shi et al. (2006)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, the web is a heterogeneous collection of documents that extends far beyond bilingual and comparable pages with obvious structural features, such as similar URLs or common titles. Structural features only work for bilingual websites or document pairs that are already linked by editors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Comparable websites: websites that contain parallel content in different languages without any structural relation between document pairs. Press agencies have independent content management systems and editors for publishing news in different languages. (e.g. Reuters China vs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Quasi-comparable websites: independent websites that contain some translated parallel content. 
They may contain stories, documentation and book chapters in many languages on different websites. (e.g. Forbes, Fortune)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reuters)", "sec_num": null }, { "text": "Instead of structural cues such as URLs, hyperlinks and HTML trees, content-based approaches are applied to find extra parallel resources from comparable and quasi-comparable websites. Nie et al. (1999) proposed to download all source language and target language documents and then perform Cross Language Information Retrieval (Grefenstette, 1998) to extract candidate parallel documents. Munteanu and Marcu (2005, 2006) also focused on mining parallel documents from a downloaded collection of news articles, using time stamp alignment and content matching. More recently, Jiang et al. (2009) proposed an adaptive pattern-based bilingual data mining method to mine bilingual web pages for parallel phrases and terms. Uszkoreit et al. (2010) aligned parallel documents by querying an n-gram index built from translations of multilingual documents. All these approaches require a huge local archive of both source and target documents. This can be very costly when we want to query the entire web.", "cite_spans": [ { "start": 183, "end": 200, "text": "Nie et al. (1999)", "ref_id": "BIBREF9" }, { "start": 326, "end": 346, "text": "(Grefenstette, 1998)", "ref_id": "BIBREF2" }, { "start": 388, "end": 406, "text": "Munteanu and Marcu (2005, 2006)", "ref_id": null }, { "start": 560, "end": 579, "text": "Jiang et al. (2009)", "ref_id": "BIBREF5" }, { "start": 704, "end": 727, "text": "Uszkoreit et al. (2010)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Reuters)", "sec_num": null }, { "text": "Moreover, Uszkoreit et al. (2010) make use of a statistical machine translation (SMT) system to translate all documents into the target language to build a query index. Due to the complexity of machine translation algorithms, it is still resource wasteful to download all target language documents, machine translate them, and then select the desired candidate parallel documents.", "cite_spans": [ { "start": 10, "end": 33, "text": "Uszkoreit et al. (2010)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Reuters)", "sec_num": null }, { "text": "Web content is updated continuously, so the above methods need to keep crawling for all documents in the target language. This is costly in terms of CPU consumption, bandwidth usage and disk storage utilization. This step can be replaced by issuing a few search queries, generated from each source document, to search engine APIs, saving CPU and bandwidth consumption.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reuters)", "sec_num": null }, { "text": "As most research institutions interested in mining parallel documents do not possess a large number of CPUs or storage on the scale of the world's top search companies, it is also desirable that any site can scale the mining speed and volume according to the computing resources available to it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reuters)", "sec_num": null }, { "text": "To this end, we propose a low bandwidth CLIR method that, on the one hand, complements structural matching and, on the other hand, reduces the complexity of content matching. Hong et al. (2010) proposed a mining approach on selected Chinese news articles containing cue phrases. In non-oracle queries, 45% of the parallel or comparable documents were found among top search results. 
This serves as a benchmark for mining precision.", "cite_spans": [ { "start": 168, "end": 186, "text": "Hong et al. (2010)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Reuters)", "sec_num": null }, { "text": "As the parallel resources mined are oftentimes used to improve SMT systems or yield bilingual lexicons, it is desirable that the mining output is of high precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reuters)", "sec_num": null }, { "text": "Our proposed approach ( Figure 1 ) primarily aims to discover parallel documents from all kinds of parallel, comparable or quasi-comparable websites on the World Wide Web. We take advantage of online search engines to find candidate documents, thereby saving bandwidth and computational cost, and dispensing with crawling for and storing all documents in the target language in an archive. The content-based approach queries for the document in the target language using keywords from the document in the source language. In our approach, queries are generated from source documents and expanded dynamically using search result quality as feedback. Neither machine translation of the full text nor downloading of target documents is needed ( Figure 1 . Parallel Document Mining using CLIR with Relevance Feedback).", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 32, "text": "Figure 1", "ref_id": null }, { "start": 665, "end": 673, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Low Bandwidth High Precision Content Based Approach", "sec_num": "2" }, { "text": "We suggest that the query expansion feedback score is the key to improving the precision of the target documents found. If a source document is found to have no translation in the target language, the system simply returns an empty result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Low Bandwidth High Precision Content Based Approach", "sec_num": "2" }, { "text": "We cannot enter documents with thousands of words directly into an online search engine. We need to convert the full text into keywords to perform automated queries. A keyword may exist in multiple articles. However, several keywords can uniquely identify a document if they are grouped together as a keyword set (Jiang et al., 2009) .", "cite_spans": [ { "start": 309, "end": 329, "text": "(Jiang et al., 2009)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Representing Source Document", "sec_num": "2.1" }, { "text": "We then translate each keyword into the target language to form the initial query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing Source Document", "sec_num": "2.1" }, { "text": "There are several reasons why using the translated keyword set directly as the query, as proposed by Hong et al. (2010) , does not always yield the desired target document: 1) a keyword translation might not correspond to the actual words in the target document; 2) certain keywords in the target document might have been removed by content editors; 3) there are errors in keyword translation or selection.", "cite_spans": [ { "start": 97, "end": 115, "text": "Hong et al. (2010)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Representing Source Document", "sec_num": "2.1" }, { "text": "It is essential to select appropriate keywords to find the desired target document in a search engine. 
Two conditions that an appropriate keyword set should satisfy are: (1) it should represent the document exclusively (Jiang et al., 2009) ; and (2) its keywords should have unique or common translations in both languages.", "cite_spans": [ { "start": 221, "end": 241, "text": "(Jiang et al., 2009)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Representing Source Document", "sec_num": "2.1" }, { "text": "We suggest that words with high TF-IDF scores and English words in Chinese text are usually keywords that fulfill both conditions above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing Source Document", "sec_num": "2.1" }, { "text": "To obtain TF-IDF scores that are representative of the keywords in the source document, they are trained on all source documents under the same domain name (e.g. www.ftchinese.com).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "K = K E ∪ K T ; K T : set of words with high TF-IDF score, K E : set of English words in Chinese documents", "sec_num": null }, { "text": "Keywords in K E are more important because most of them are words used in the target document. However, in many cases, there are additional words in K E such that we cannot find any document by directly searching for K E . Our method removes the keywords with the lowest TF-IDF scores from K E until a non-empty result is obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "K = K E ∪ K T ; K T : set of words with high TF-IDF score, K E : set of English words in Chinese documents", "sec_num": null }, { "text": "Search Query Relevance Score (SQRS)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translating Source Documents with", "sec_num": "2.2" }, { "text": "Search engines use multiple criteria, such as keyword significance, domain popularity, date and page rank, to return the most relevant documents that match the query. For mining a translated document pair, we need to somehow overcome the impact of page popularity and rank, and aim for content matching only. Instead of ranking keywords locally and sending a single query, we take the above search engine criteria into account to amend queries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translating Source Documents with", "sec_num": "2.2" }, { "text": "To avoid adding erroneously translated keywords and to further reduce the number of undesirable documents downloaded, we introduce the search query relevance score (SQRS), defined in Equation 1, which describes how good the search result is and how we can refine the query. The score is determined by comparing the query with the highlighted keywords in the search result. Generally, a webpage has a higher SQRS if its summary contains more keywords that match the query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translating Source Documents with", "sec_num": "2.2" }, { "text": "Commercial search engines omit some keywords when there is no document in their index containing all the keywords. In such cases, the rank of documents usually changes significantly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translating Source Documents with", "sec_num": "2.2" }, { "text": "
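Since Equation 1 itself is not reproduced here, the following minimal sketch gives only one plausible reading of the description above; the function name and the keyword-coverage form are our own assumptions, not the exact formula.

def sqrs(query_keywords, highlighted_terms):
    # Plausible sketch only: score a search result snippet by the
    # fraction of query keywords that appear among its highlighted
    # terms. The actual Equation 1 may weight keywords differently,
    # e.g. by their number of occurrences in the snippet text.
    if not query_keywords:
        return 0.0
    hits = sum(1 for k in query_keywords if k in highlighted_terms)
    return hits / len(query_keywords)
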
The following example shows the search results of two search queries ( Figure 2 ) generated from the Chinese version of a news article on My Space launching a new version of its website 1 . \"|\" indicates the separation of keywords. In Query 2, we added fashion, an English translation of one of the Chinese keywords (but the actual English version used hottest). The rank of the search results changed and each summary omitted at least one keyword in the query (Table 2) . This phenomenon suggests that a document containing all the keywords in Query 2 does not exist on the web. The recently added keyword fashion must have been erroneously translated.", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 75, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 409, "end": 418, "text": "(Table 2)", "ref_id": null } ], "eq_spans": [], "section": "Translating Source Documents with", "sec_num": "2.2" }, { "text": "In many similar cases, an erroneously translated keyword can pollute the query quality and decrease the rank of the target document. Parallel document mining therefore cannot rely on the document rank of the search engine. The system must have a mechanism to detect the problem when expanding the query. Otherwise, a batch of irrelevant documents will be downloaded and will need to be filtered out. (1 Source: http://cn.reuters.com/article/CNTechNews/idCNCHINA-3233720101027 on May 10, 2011)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translating Source Documents with", "sec_num": "2.2" }, { "text": "We ran experiments to find the target documents of 112 randomly selected source documents and compared their SQRSs. 81, or 72.3%, of the target documents had the highest SQRS among the URLs in the search results. This implies that SQRS is an effective measure of query formation and keyword translation. Although the query may include multiple translations of a keyword from a bilingual lexicon, the SQRS ensures that there is minimal adverse effect from incorrect translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translating Source Documents with", "sec_num": "2.2" }, { "text": "To improve the precision of the keyword set, we further use SQRS for relevance feedback as shown in Figure 3 (Flowchart of Query Expansion Algorithm). First, we rank the keywords in K T by their TF-IDF scores. Next, the query is expanded under the guidance of SQRS. When a keyword w is added to the current query, we compare the maximum SQRS c among the top n results with the previous highest score SQRS p obtained without w; w is discarded from the keywords if SQRS c < SQRS p . A candidate pair is discarded if it fails the verification process. We propose using both dynamic time warping (DTW) and R 2 regression as in (Cheung and Fung, 2004) on every pair of source and target documents to evaluate their parallelness.", "cite_spans": [ { "start": 565, "end": 588, "text": "(Cheung and Fung, 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Document Verification", "sec_num": "2.4" }, { "text": "DTW alignment is faster than machine translation (MT). We measure the word-level DTW score between the source document and the target document with a local constraint of 5 (Equation 2). Stop words are removed from the English text before DTW processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Time Warping (DTW) Score", "sec_num": "2.4.1" }, { "text": "If there is an entry in the bilingual lexicon for the pair consisting of the i-th Chinese word and the j-th English word, the cost of point (i,j) is 0; otherwise it is 1. 
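A minimal sketch of this word-level DTW scoring follows; it is illustrative only, the helper names are our own, and the bilingual lexicon is assumed to be a set of (Chinese word, English word) pairs.

def dtw_score(zh_words, en_words, bilex, band=5):
    # Illustrative sketch: word-level DTW with a 0/1 bi-lexicon cost
    # and a local band constraint of 5, normalized so that higher
    # scores (closer to 1) indicate more parallel document pairs.
    m, n = len(zh_words), len(en_words)
    if m == 0 or n == 0 or abs(m - n) > band:
        return 0.0
    INF = float('inf')
    cost = [[INF] * (n + 1) for _ in range(m + 1)]
    cost[0][0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if abs(i - j) > band:
                continue  # local constraint: stay near the diagonal
            step = 0.0 if (zh_words[i - 1], en_words[j - 1]) in bilex else 1.0
            cost[i][j] = step + min(cost[i - 1][j - 1],
                                    cost[i - 1][j],
                                    cost[i][j - 1])
    # normalize by the maximum number of steps from (0,0) to (m,n)
    return 1.0 - cost[m][n] / (m + n)
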
The total cost is normalized by the maximum number of steps (moves) from (0,0) to (m,n) to convert the DTW score to a number between 0 and 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Time Warping (DTW) Score", "sec_num": "2.4.1" }, { "text": "Parallel document pairs tend to have a path close to the diagonal line with a high DTW score. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic Time Warping (DTW) Score", "sec_num": "2.4.1" }, { "text": "Parallel documents contain parallel sentences that may have different word orders, especially in the case of English and Chinese, and the DTW score may be affected by such differences. We propose to use R 2 regression as an additional score to measure the deviation of the matching path of shared words in both documents from the diagonal ( Figure 5 ).", "cite_spans": [], "ref_spans": [ { "start": 346, "end": 354, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "R 2 Regression", "sec_num": "2.4.2" }, { "text": "Equation 1 defines SQRS(Q,T) in terms of the number of occurrences of each query keyword in T, where Q is the query, k is a keyword, w is an English word and T is the short text with highlighted keywords in the search result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SQRS(Q,T)=", "sec_num": null }, { "text": "Equation 1. Definition of SQRS. Figure 5 . R 2 of Parallel and Non-Parallel Document Pairs. The R 2 score is normalized by the slope:", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 39, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "SQRS(Q,T)=", "sec_num": null }, { "text": "R 2 score = R 2 / Slope", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SQRS(Q,T)=", "sec_num": null }, { "text": "The DTW score helps filter out non-parallel pairs, and R 2 is introduced as a supplementary feature to improve the precision of the extracted parallel documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining DTW and R 2", "sec_num": "2.4.3" }, { "text": "A comparison of using these measures is shown in Table 5 . ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining DTW and R 2", "sec_num": "2.4.3" }, { "text": "The final step of verification uses structural features of the document pair candidates:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structural Features", "sec_num": "2.4.4" }, { "text": "• Language: the mined document should be in the target language • Absolute size: mined documents should not be too small or too large in file length • Size difference: the source and target documents must have similar sizes • Document type: both documents must be content pages on a website", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structural Features", "sec_num": "2.4.4" }, { "text": "Since search engines rank target documents by various criteria, such as the popularity-based page rank, some legitimate bilingual website documents might not be found by our proposed content-based approach using search engines. We propose to supplement our approach with URL matching patterns when the content-based method has found several pairs of source and target documents under the same hostname.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Find One Get More", "sec_num": "2.5" }, { "text": "Source / # Chinese Docs: ftchinese.com 11,009; cn.wsj.com 3,327; cn.reuters.com 8,570; forbeschina.com 6,281; fortunechina.com 593; Total 29,780. Table 6 . 
Source Documents for Pure CLIR Approach", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 144, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Find One Get More", "sec_num": "2.5" }, { "text": "We examine the pairs found by the content-based method and look for any parallel pairs coming from the same hostname, or whether a pattern can be generalized from these URLs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Find One Get More", "sec_num": "2.5" }, { "text": "We then apply this URL pattern to all Chinese pages under the domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Find One Get More", "sec_num": "2.5" }, { "text": "All pairs found by both methods are subject to the verification process in Section 2.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Find One Get More", "sec_num": "2.5" }, { "text": "We evaluate our approach in two sets of experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "As a baseline of the content-based method, we directly use the English words in the original Chinese document as keywords. Then, we add keywords ranked by TF-IDF to query for the target document, but do not use SQRS to expand the query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "3.1" }, { "text": "Finally, SQRS is used to refine each keyword to get better results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "3.1" }, { "text": "We use both the Google and Bing Search APIs to search for the keyword sets. Results from the different search engines are merged together by URL. For each query, we consider eight URLs, the default number returned by the search engine APIs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "3.1" }, { "text": "We generalize URL patterns (if any) from document pairs when we find some document pairs by the content-based method on parallel websites. With Find One Get More, we extract more parallel webpages that follow those URL patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "3.1" }, { "text": "The source (Chinese) documents in our experiments are news from the following 5 agencies:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Document Extraction Accuracy", "sec_num": "3.2" }, { "text": "Parallel (bilingual) websites:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Document Extraction Accuracy", "sec_num": "3.2" }, { "text": "(1) Financial Times Chinese (ftchinese.com) (2) Wall Street Journal Chinese (cn.wsj.com) Parallel websites contain both Chinese and English documents under the same host, which can be aligned by URL matching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Document Extraction Accuracy", "sec_num": "3.2" }, { "text": "(3) Reuters China (cn.reuters.com) (4) Forbes China (forbeschina.com) (5) Fortune China (fortunechina.com) Documents on quasi-comparable or comparable websites may have target documents either on the corresponding agency's global site (e.g. cn.reuters.com and www.reuters.com) or somewhere else. 
Parallel documents from such websites cannot be found by URL matching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparable/quasi-comparable websites:", "sec_num": null }, { "text": "We applied our content-based approach to the above sites to find target documents and evaluated the mining precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparable/quasi-comparable websites:", "sec_num": null }, { "text": "The percentage of parallel documents that we can successfully find is highly dependent on the type of documents and the search engine index. Calculating recall, on the other hand, is only possible for sites we already know. For comparable or quasi-comparable sites, it is not possible to have the oracle target documents for evaluation because:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparable/quasi-comparable websites:", "sec_num": null }, { "text": "1) some source documents may not have a translation in the target language; 2) target language pages may not be indexed by search engines; 3) manual evaluation of all documents for recall calculation is not feasible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparable/quasi-comparable websites:", "sec_num": null }, { "text": "In the verification process, we discard a document pair if:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparable/quasi-comparable websites:", "sec_num": null }, { "text": "• DTW score < 0.25 (the threshold yielding 88% precision) • R 2 score < 1.0E-5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparable/quasi-comparable websites:", "sec_num": null }, { "text": "• the article size is too small • the sizes of source and target are too different • the URL is the root (/) under the hostname • the text is in the wrong language", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparable/quasi-comparable websites:", "sec_num": null }, { "text": "We manually evaluate the effectiveness of our method on randomly selected document pairs. Only parallel document pairs are considered correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparable/quasi-comparable websites:", "sec_num": null }, { "text": "In order to obtain a sentence alignment for each pair of documents, we first need to extract the proper content of each page and remove the headers and footers, which are of little interest and are unlikely to be parallel anyway.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Sentence Extraction", "sec_num": "3.3" }, { "text": "We first segment the documents into sentences and filter out improper ones, such as English sentences containing Chinese characters, or Chinese sentences containing roman characters only. We then use DTW again to find a continuous path in the documents and extract the longest one. The headers and footers will generally not align and will be discarded; only the chunk of truly alignable content will be preserved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Sentence Extraction", "sec_num": "3.3" }, { "text": "Using this method, we manage to find the beginning and the end of the source and target content and extract it. We then discard pairs of documents whose numbers of extracted sentences are too different. 
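A rough sketch of this extraction and filtering step follows; the helper names, the max_gap tolerance and the length-ratio threshold are hypothetical, not the exact values used in our experiments.

def alignable_window(path, max_gap=2):
    # Sketch: given the DTW path as a list of matched sentence-index
    # pairs (i, j), keep the longest contiguous run; sentences outside
    # it (typically headers and footers) are dropped.
    best, cur = [], []
    for i, j in path:
        if cur and (i - cur[-1][0] > max_gap or j - cur[-1][1] > max_gap):
            cur = []  # the continuous path is broken here
        cur.append((i, j))
        if len(cur) > len(best):
            best = list(cur)
    return best

def keep_pair(src_sents, tgt_sents, ratio=1.5):
    # Sketch: discard document pairs whose extracted sentence counts
    # are too different.
    a, b = len(src_sents), len(tgt_sents)
    return min(a, b) > 0 and max(a, b) <= ratio * min(a, b)
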
Sentence alignment is performed on the remaining documents using the Champollion ToolKit (Ma, 2006) , which is already trained for Chinese-English document pairs.", "cite_spans": [ { "start": 283, "end": 293, "text": "(Ma, 2006)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Parallel Sentence Extraction", "sec_num": "3.3" }, { "text": "Finally, we filter all the sentences using a simple word overlap score. Sentences whose lengths are too different or whose word overlap score is too low are discarded, to ensure high precision at the end. We directly searched for all English keywords in the Chinese documents and found 153 target documents (baseline). Then we searched for translations of the top-ranked TF-IDF keywords (ii). Using SQRS further improved the number of output sentences by 23.56% compared to the baseline ( Table 7) . The precision in the three experiments is the same.", "cite_spans": [], "ref_spans": [ { "start": 452, "end": 460, "text": "Table 7)", "ref_id": null } ], "eq_spans": [], "section": "Parallel Sentence Extraction", "sec_num": "3.3" }, { "text": "Among the 29,680 Chinese documents retrieved from the five news agencies, we obtained 7,253 parallel document pairs with 88% precision by the content-based approach alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Document Extraction Accuracy", "sec_num": "4.2" }, { "text": "In many such cases, the parallel document pairs are on different websites and can be found neither by URL matching nor by content-based methods that use time stamps for matching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parallel Document Extraction Accuracy", "sec_num": "4.2" }, { "text": "With the Find One Get More approach, we increase the output of parallel documents from parallel websites. Table 8 shows that using URL matching can substantially improve the output quantity, compensating for the missing target documents with low page ranks. For parallel bilingual websites, the pure content-based method can find about 1/3 of the target documents compared to the CLIR+URL method. It also shows, however, that our query expansion with relevance feedback approach has higher recall than the 18% produced by the locally ranked keywords in Hong et al. (2010) .", "cite_spans": [ { "start": 538, "end": 556, "text": "Hong et al. (2010)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 106, "end": 113, "text": "Table 8", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Find One Get More", "sec_num": "4.3" }, { "text": "Among the 15,469 Chinese-English document pairs, we extracted 225,374 parallel sentence pairs with a mining precision of over 97%, based on human evaluation of randomly selected sentence pairs. We evaluate the quality of those sentences for training machine translation with the Moses SMT engine. We compare the BLEU score obtained with a manually aligned corpus of 4,097,357 sentence pairs (baseline) against the BLEU score obtained with the same corpus in which 225,374 sentence pairs are replaced by the ones we extracted (CLIR). Results, evaluated on the NIST MT06 evaluation set, are presented in Table 9 : Baseline 29.54 BLEU, CLIR 29.88 BLEU. Table 9 . 
BLEU score obtained for SMT. These results show that our set of sentences, together with a larger parallel corpus, yields results similar to those obtained with manually aligned sentences only.", "cite_spans": [], "ref_spans": [ { "start": 539, "end": 547, "text": "Table 9", "ref_id": null }, { "start": 633, "end": 640, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Parallel Sentence Extraction for SMT", "sec_num": "4.4" }, { "text": "The extracted sentences have also been processed for rare word translation extraction (Prochasson and Fung, 2011) .", "cite_spans": [ { "start": 82, "end": 109, "text": "(Prochasson and Fung, 2011)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Parallel Sentence Extraction for SMT", "sec_num": "4.4" }, { "text": "We carried out our mining experiments on a workstation with 8 state-of-the-art CPU cores. The average time taken for each source document is 30 seconds, bottlenecked only by the usage limits of the search engine APIs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Performance and Scalability", "sec_num": "4.5" }, { "text": "As the TF-IDF scores are pre-trained from the source documents only, and our CLIR approach mines the target document for each source document individually, our system can be easily scaled to run in parallel on multiple servers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Performance and Scalability", "sec_num": "4.5" }, { "text": "In this paper, we have proposed a content-based CLIR approach that searches any part of the Web for parallel documents, without the limitation of URL-matched bilingual websites. Our method transforms an input source document into a target language query set, then makes use of search engine APIs and a proposed query relevance feedback mechanism to find the target language document if it exists on the web. We propose a search query relevance score (SQRS) that checks the precision of the query keywords we use to represent the source document. Our proposed method does not require machine translation, nor does it require downloading all documents in the target language into an archive for document matching, thereby saving computational resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "The query expansion and relevance feedback by SQRS, which measures translation correctness, ensure high precision in the target documents found. Using a verification process, the web documents are further filtered by dynamic time warping and regression scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Experimental results show an 88% mining precision on the parallel documents extracted from parallel, comparable and quasi-comparable websites.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Another experiment, on extracting bilingual sentences from the mined documents, shows that the sentence extraction adds another layer of verification, which further improves the precision from 88% to 97%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "SMT experiments using our mined parallel sentences, together with a larger baseline training set, to train an SMT system show performance comparable to that obtained using manually aligned bilingual sentences. 
Our system is scalable to run on multiple servers simultaneously, and its running time is linear in the number of input source documents. It can also be run continuously to discover and mine newly added web documents that were not there previously. It is also extendable to mine for parallel documents in multiple target languages at the same time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "This project is partially funded by a subcontract from BBN, under the DARPA GALE project.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "On the use of comparable corpora to improve SMT performance", "authors": [ { "first": "Sadaf", "middle": [], "last": "Abdul-Rauf", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL'09)", "volume": "", "issue": "", "pages": "16--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sadaf Abdul-Rauf and Holger Schwenk. 2009. On the use of comparable corpora to improve SMT performance. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL'09), pages 16-23.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Development of a large-scale web crawler and search engine infrastructure", "authors": [ { "first": "Susumu", "middle": [], "last": "Akamine", "suffix": "" }, { "first": "Yoshikiyo", "middle": [], "last": "Kato", "suffix": "" }, { "first": "Daisuke", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "Keiji", "middle": [], "last": "Shinzato", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" }, { "first": "Yutaka", "middle": [], "last": "Kidawara", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 3rd International Universal Communication Symposium (IUCS'09)", "volume": "", "issue": "", "pages": "126--131", "other_ids": {}, "num": null, "urls": [], "raw_text": "Susumu Akamine, Yoshikiyo Kato, Daisuke Kawahara, Keiji Shinzato, Kentaro Inui, Sadao Kurohashi, and Yutaka Kidawara. 2009. Development of a large-scale web crawler and search engine infrastructure. In Proceedings of the 3rd International Universal Communication Symposium (IUCS'09), pages 126-131.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Cross-Language Information Retrieval", "authors": [ { "first": "Gregory", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory Grefenstette. 1998. Cross-Language Information Retrieval. 
Kluwer Academic.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An empirical study on web mining of parallel data", "authors": [ { "first": "Gumwon", "middle": [], "last": "Hong", "suffix": "" }, { "first": "Chi-Ho", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Hae-Chang", "middle": [], "last": "Rim", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "474--482", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gumwon Hong, Chi-Ho Li, Ming Zhou, and Hae-Chang Rim. 2010. An empirical study on web mining of parallel data. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 474-482, Beijing, China, August. Coling 2010 Organizing Committee.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Web corpus mining by instance of wikipedia", "authors": [ { "first": "R\u00fcdiger", "middle": [], "last": "Gleim", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Mehler", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Dehmer", "suffix": "" } ], "year": 2006, "venue": "WAC '06: Proceedings of the 2nd International Workshop on Web as Corpus", "volume": "", "issue": "", "pages": "67--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "R\u00fcdiger Gleim, Alexander Mehler, and Matthias Dehmer. 2006. Web corpus mining by instance of wikipedia. In WAC '06: Proceedings of the 2nd International Workshop on Web as Corpus, pages 67-74, Morristown, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A ranking approach to keyphrase extraction", "authors": [ { "first": "Xin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Yunhua", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, SIGIR'09", "volume": "", "issue": "", "pages": "756--757", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Jiang, Yunhua Hu, and Hang Li. 2009. A ranking approach to keyphrase extraction. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, SIGIR'09, pages 756-757, New York, NY, USA. ACM.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Champollion: A robust parallel text sentence aligner", "authors": [ { "first": "Xiaoyi", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2006, "venue": "Proceedings of LREC-2006", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoyi Ma. 2006. Champollion: A robust parallel text sentence aligner. In Proceedings of LREC-2006.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improving machine translation performance by exploiting non-parallel corpora", "authors": [ { "first": "Dragos", "middle": [ "Stefan" ], "last": "Munteanu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "", "pages": "477--504", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragos Stefan Munteanu and Daniel Marcu. 2005. 
Improving machine translation performance by exploiting non-parallel corpora. Computational Linguistics, 31:477-504, December.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Extracting parallel sub-sentential fragments from nonparallel corpora", "authors": [ { "first": "Dragos", "middle": [ "Stefan" ], "last": "Munteanu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44", "volume": "", "issue": "", "pages": "81--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragos Stefan Munteanu and Daniel Marcu. 2006. Extracting parallel sub-sentential fragments from nonparallel corpora. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 81-88, Morristown, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Cross-language information retrieval based on parallel texts and automatic mining of parallel texts from the web", "authors": [ { "first": "Jian-Yun", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Simard", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Isabelle", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Durand", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian-Yun Nie, Michel Simard, Pierre Isabelle, and Richard Durand. 1999. Cross-language information retrieval based on parallel texts and automatic mining of parallel texts from the web. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 74-81.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Parallel web text mining for cross-language information retrieval", "authors": [ { "first": "Jiang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jian-Yun", "middle": [], "last": "Nie", "suffix": "" } ], "year": 2000, "venue": "Recherche d'Informations Assist\u00e9e par Ordinateur (RIAO)", "volume": "", "issue": "", "pages": "62--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiang Chen and Jian-Yun Nie. 2000. Parallel web text mining for cross-language information retrieval. In Recherche d'Informations Assist\u00e9e par Ordinateur (RIAO), pages 62-77.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The web as a parallel corpus", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "", "pages": "349--380", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik and Noah A. Smith. 2003. The web as a parallel corpus. 
Computational Linguistics, 29:349-380, September.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A dom tree alignment model for mining parallel data from the web", "authors": [ { "first": "Lei", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Cheng", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44", "volume": "", "issue": "", "pages": "489--496", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lei Shi, Cheng Niu, Ming Zhou, and Jianfeng Gao. 2006. A dom tree alignment model for mining parallel data from the web. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 489-496, Morristown, NJ, USA. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Large Scale Parallel Document Mining for Machine Translation", "authors": [ { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Jay", "middle": [], "last": "Ponte", "suffix": "" }, { "first": "Ashok", "middle": [], "last": "Popat", "suffix": "" }, { "first": "Moshe", "middle": [], "last": "Dubiner", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1101--1109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jakob Uszkoreit, Jay Ponte, Ashok Popat, and Moshe Dubiner. 2010. Large Scale Parallel Document Mining for Machine Translation. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1101-1109, Beijing, China, August. Coling 2010 Organizing Committee.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Unsupervised Learning of a Spontaneous and Colloquial Speech Lexicon in Chinese", "authors": [ { "first": "Chi Shun", "middle": [], "last": "Cheung", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2004, "venue": "International Journal of Speech Technology", "volume": "7", "issue": "2", "pages": "173--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cheung Chi Shun and Pascale Fung. 2004. Unsupervised Learning of a Spontaneous and Colloquial Speech Lexicon in Chinese. In International Journal of Speech Technology, Vol. 7, No. 2, pp 173-178, Apr 2004.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Aligning word senses using bilingual corpora", "authors": [ { "first": "Marine", "middle": [], "last": "Carpuat", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" }, { "first": "Grace", "middle": [], "last": "Ngai", "suffix": "" } ], "year": 2006, "venue": "ACM Transactions on Asian Language and Information Processing", "volume": "5", "issue": "2", "pages": "89--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marine Carpuat, Pascale Fung, and Grace Ngai. 2006. Aligning word senses using bilingual corpora. 
ACM Transactions on Asian Language and Information Processing, 5(2):89-120.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Trillions of comparable documents", "authors": [ { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" }, { "first": "Emmanuel", "middle": [], "last": "Prochasson", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Shi", "suffix": "" } ], "year": 2010, "venue": "LREC Workshop on Building and Using Comparable Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascale Fung, Emmanuel Prochasson, and Simon Shi. 2010. Trillions of comparable documents. In LREC Workshop on Building and Using Comparable Corpora, Malta, May 2010.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Rare word translation extraction from aligned comparable documents", "authors": [ { "first": "Emmanuel", "middle": [], "last": "Prochasson", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2011, "venue": "The 49th Annual Meeting of the Association for Computational Linguistics (ACL'11)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emmanuel Prochasson and Pascale Fung. 2011. Rare word translation extraction from aligned comparable documents. In The 49th Annual Meeting of the Association for Computational Linguistics (ACL'11), Portland, USA.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Search Result of Query 1 (Left) and 2 (Right) on Google.com" }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "Flowchart of Query Expansion Algorithm" }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "DTW with a local distance constraint of 5. Figure 4 shows the DTW paths of a parallel document pair and a non-parallel pair. The parallel documents are aligned, and the path with minimum cost runs along the diagonal of the graph." }, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "DTW of Parallel and Non-Parallel Pair" }, "TABREF3": { "html": null, "content": "
DTW # Pairs # Parallel Precision %
>0.45 122 121 99.18
>0.40 224 219 97.77
>0.35 298 288 96.64
>0.30 354 337 95.20
>0.28 389 364 93.57
>0.26 429 389 90.68
>0.25 456 403 88.38
>0.24 488 415 85.04
>0.22 545 417 76.51
>0.20 627 426 67.94
", "type_str": "table", "text": "Relationship between DTW score and precision of candidate pairs. The precision of the output pairs increases as the DTW score threshold is raised.", "num": null }, "TABREF4": { "html": null, "content": "
DTW (>0.22) R 2 (1.0E-5,1) DTW+R 2
# Pairs 545 534 481
# Parallel 417 403 399
Precision % 76.51 75.47 82.95
Table 5. Mining Precision of DTW and R 2
", "type_str": "table", "text": "", "num": null }, "TABREF7": { "html": null, "content": "
Table 8. Output Document Pairs of Sections 4.2 & 4.3
", "type_str": "table", "text": "", "num": null } } } }