|
{ |
|
"paper_id": "O04-2001", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:01:00.995623Z" |
|
}, |
|
"title": "Bilingual Collocation Extraction Based on Syntactic and Statistical Analyses", |
|
"authors": [ |
|
{ |
|
"first": "Chien-Cheng", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Tsing Hua University", |
|
"location": { |
|
"addrLine": "Address: 101, Kuangfu Road", |
|
"settlement": "Hsinchu", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Chang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "National Tsing Hua University", |
|
"location": { |
|
"addrLine": "Address: 101, Kuangfu Road", |
|
"settlement": "Hsinchu", |
|
"country": "Taiwan" |
|
} |
|
}, |
|
"email": "jschang@cs.nthu.edu.tw" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we describe an algorithm that employs syntactic and statistical analysis to extract bilingual collocations from a parallel corpus. Collocations are pervasive in all types of writing and can be found in phrases, chunks, proper names, idioms, and terminology. Therefore, automatic extraction of monolingual and bilingual collocations is important for many applications, including natural language generation, word sense disambiguation, machine translation, lexicography, and cross language information retrieval.", |
|
"pdf_parse": { |
|
"paper_id": "O04-2001", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we describe an algorithm that employs syntactic and statistical analysis to extract bilingual collocations from a parallel corpus. Collocations are pervasive in all types of writing and can be found in phrases, chunks, proper names, idioms, and terminology. Therefore, automatic extraction of monolingual and bilingual collocations is important for many applications, including natural language generation, word sense disambiguation, machine translation, lexicography, and cross language information retrieval.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Collocations can be classified as lexical or grammatical collocations. Lexical collocations exist between content words, while a grammatical collocation exists between a content word and function words or a syntactic structure. In addition, bilingual collocations can be rigid or flexible in both languages. Rigid collocation refers to words in a collocation must appear next to each other, or otherwise (flexible/elastic). We focus in this paper on extracting rigid lexical bilingual collocations. In our method, the preferred syntactic patterns are obtained from idioms and collocations in a machine-readable dictionary. Collocations matching the patterns are extracted from aligned sentences in a parallel corpus. We use a new alignment method based on punctuation statistics for sentence alignment. The punctuation-based approach is found to outperform the length-based approach with precision rates approaching 98%. The obtained collocations are subsequently matched up based on cross-linguistic statistical association. Statistical association between the whole collocations as well as words in collocations is used to link a collocation with its counterpart collocation in the other language. We implemented the proposed method on a very large Chinese-English parallel corpus and obtained satisfactory results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Collocations, like terminology, tends to be lexicalized and to have a somewhat more restricted meaning than the surface forms suggest [Justeson and Katz, 1995] . Collocations are recurrent combinations of words that co-occur more often than they normally would based on chance. The words in a collocation may appear next to each other (rigid collocations) or in other locations (flexible/elastic collocations). On the other hand, collocations can also be classified as lexical or grammatical collocations [Benson, Benson, Ilson, 1986] . Lexical collocations exist between content words, while a grammatical collocation exists between a content word and function words or a syntactic structure. Collocations are pervasive in all types of writing and can be found in phrases, chunks, proper names, idioms, and terminology. Collocations in one language are usually difficult to translate directly into another language word for word; therefore, they present a challenge for machine translation systems and second language learners alike.", |
|
"cite_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 159, |
|
"text": "[Justeson and Katz, 1995]", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 505, |
|
"end": 534, |
|
"text": "[Benson, Benson, Ilson, 1986]", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Automatic extraction of monolingual and bilingual collocations is important for many applications, including natural language generation, word sense disambiguation, machine translation, lexicography, and cross language information retrieval. Hank and Church [1990] pointed out the usefulness of mutual information for identifying monolingual collocations in lexicography. Justeson and Katz [1995] proposed to identify technical terminology based on preferred linguistic patterns and discourse properties of repetition. Among the many general methods presented by Manning and Schutze [1999] , the best results can be achieved through filtering based on both linguistic and statistical constraints. Smadja [1993] presented a method called EXTRACT, based on the mean variance of the distance between two collocates , that is capable of computing elastic collocations. Kupiec [1993] proposed to extract bilingual noun phrases using statistical analysis of the co-occurrence of phrases. Smadja, McKeown, and Hatzivassiloglou [1996] extended the EXTRACT approach to handle bilingual collocation based mainly on the statistical measures of the Dice coefficient. Dunning [1993] pointed out the weakness of mutual information and showed that log likelihood ratios are more effective in identifying monolingual collocations, especially when the occurrence count is very low. Both Smadja and Kupiec used the statistical association between whole collocations in two languages without examining the constituent words. For a collocation and its non-compositional translation equivalent, this approach is reasonable. For instance, with the bilingual collocation (\"\u64e0\u7834\u982d\uff02, \"stop at nothing\uff02) shown in Example 1, it will not be helpful to examine the statistical association between \"stopping\uff02 and \"\u64e0\uff02 [ji, squeeze] (or \"\u7834\uff02 [bo, broken] and \"\u982d\uff02 [tou, head] for that matter). However, for the bilingual collocation (\"\u6e1b\u85aa\uff02, \" pay cut\uff02 ) shown in Example 2, considering the statistical association between \"pay\uff02 and \"\u85aa\uff02 [xin, wage] as well as between \"cut\uff02 and \"\u6e1b\uff02 [jian, reduce] certainly makes sense. Moreover, we have more data with which to make statistical inferences between words than between phrases. Therefore, measuring the statistical association of collocations based on constituent words will help us cope with the data sparseness problem. We will be able to extract bilingual collocations with high reliability even when they appear together in aligned sentences only once or twice.", |
|
"cite_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 264, |
|
"text": "Church [1990]", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 396, |
|
"text": "Justeson and Katz [1995]", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 563, |
|
"end": 589, |
|
"text": "Manning and Schutze [1999]", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 697, |
|
"end": 710, |
|
"text": "Smadja [1993]", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 865, |
|
"end": 878, |
|
"text": "Kupiec [1993]", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 982, |
|
"end": 1026, |
|
"text": "Smadja, McKeown, and Hatzivassiloglou [1996]", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1155, |
|
"end": 1169, |
|
"text": "Dunning [1993]", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1806, |
|
"end": 1818, |
|
"text": "[bo, broken]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1998, |
|
"end": 2009, |
|
"text": "[xin, wage]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "They are stopping at nothing to get their kids into \"star schools\" \u4ed6\u5011\u64e0\u7834\u982d\u4e5f\u8981\u628a\u5b69\u5b50\u9001\u9032\u660e\u661f\u5c0f\u5b78 Source: 1995/02 No Longer Just an Academic Question: Educational Alternatives Come to Taiwan", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example 1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Not only haven't there been layoffs or pay cuts, the year-end bonus and the performance review bonuses will go out as usual .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example 2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since collocations can be rigid or flexible in both languages, there are, in general, three types of bilingual collocation matches. In Example 1, (\"\u64e0\u7834\u982d\uff02,\"stop at nothing\uff02) is a pair of rigid collocation, and (\"\u628a\u2026\u9001\u9032\", \"get \u2026 into\") is a pair of elastic collocation. In Example 3 ,(\"\u8d70\u2026\u7684\uf937\u7dda', \"take the path of\" ) is an example of a pair of elastic and rigid collocations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\uf967\u4f46\uf967\u865e\u88c1\u54e1\u3001\u6e1b\u85aa\uff0c\uf98e\u7d42\u734e\uf90a\u3001\u8003\u7e3e\u734e\uf90a\u9084\u90fd\u7167\u767c\uf967\u8aa4 Source: 1991/01 Filling the Iron Rice Bowl", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Lin Ku-fang, a worker in ethnomusicology, worries too, but his way is not to take the path of revolutionizing Chinese music or making it more \"symphonic\"; rather, he goes directly into the tradition, looking into it for \"good music\" that has lasted undiminished for a hundred generations. In this paper, we describe an algorithm that employs syntactic and statistical analyses to extract rigid lexical bilingual collocations from a parallel corpus. Here, we focus on bilingual collocations, which have some lexical correlation between them and are rigid in both languages. To cope with the data sparseness problem, we use the statistical association between two collocations as well as that between their constituent words. In Section 2, we describe how we obtain the preferred syntactic patterns from collocations and idioms in a machine-readable dictionary. Examples will be given to show how collocations matching the patterns are extracted and aligned for given aligned sentence pairs in a parallel corpus. We implemented the proposed method in an experiment on the Chinese-English parallel corpus of Sinorama Magazine and obtained satisfactory results. We describe the experiments and our evaluation in section 3. The limitations of the study and related issues are taken up in section 4. We conclude and give future directions of research in section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example 3", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this chapter, we will describe how we obtain bilingual collocations by using preferred syntactic patterns and associative information. Consider a pair of aligned sentences in a parallel corpus such as that shown in Example 4 below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extraction of Bilingual Collocations", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The civil service rice bowl, about which people always said \"you can't get filled up, but you won't starve to death either,\" is getting a new look with the economic downturn. Not only haven't there been layoffs or pay cuts, the year-end bonus and the performance review bonuses will go out as usual, drawing people to compete for their own \"iron rice bowl.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example 4", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Source: 1991/01 Filling the Iron Rice Bowl", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u4ee5\u5f80\u4e00\u5411\u88ab\u8a8d\u70ba\u300c\u5403\uf967\u98fd\u3001\u9913\uf967\u6b7b\u300d\u7684\u516c\u5bb6\u98ef\uff0c\u503c\u6b64\u7d93\u6fdf\u666f\u6c23\u4f4e\u8ff7\u4e4b\u969b\uff0c \uf967\u4f46\uf967\u865e\u88c1\u54e1\u3001\u6e1b\u85aa\uff0c\uf98e\u7d42\u734e\uf90a\u3001\u8003\u7e3e\u734e\uf90a\u9084\u90fd\u7167\u767c\uf967\u8aa4\uff0c\u56e0\u800c\u4fc3\u4f7f\uf967\u5c11 \u4eba\u56de\u982d\u7af6\u9010\u9019\u96bb\u300c\u9435\u98ef\u7897\u300d\u3002", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We can extract the following collocations and translation counterparts:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u4ee5\u5f80\u4e00\u5411\u88ab\u8a8d\u70ba\u300c\u5403\uf967\u98fd\u3001\u9913\uf967\u6b7b\u300d\u7684\u516c\u5bb6\u98ef\uff0c\u503c\u6b64\u7d93\u6fdf\u666f\u6c23\u4f4e\u8ff7\u4e4b\u969b\uff0c \uf967\u4f46\uf967\u865e\u88c1\u54e1\u3001\u6e1b\u85aa\uff0c\uf98e\u7d42\u734e\uf90a\u3001\u8003\u7e3e\u734e\uf90a\u9084\u90fd\u7167\u767c\uf967\u8aa4\uff0c\u56e0\u800c\u4fc3\u4f7f\uf967\u5c11 \u4eba\u56de\u982d\u7af6\u9010\u9019\u96bb\u300c\u9435\u98ef\u7897\u300d\u3002", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(civil service rice bowl, \u516c\u5bb6\u98ef) (get filled up, \u5403\u2026\u98fd) (starve to death, \u9913\u2026\u6b7b) (economic downturn, \u7d93\u6fdf\u666f\u6c23\u4f4e\u8ff7) (pay cuts, \u6e1b\u85aa) (year-end bonus, \u5e74\u7d42\u734e\u91d1) (performance review bonuses, \u8003\u7e3e\u734e\u91d1) (iron rice bowl, \u9435\u98ef\u7897)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u4ee5\u5f80\u4e00\u5411\u88ab\u8a8d\u70ba\u300c\u5403\uf967\u98fd\u3001\u9913\uf967\u6b7b\u300d\u7684\u516c\u5bb6\u98ef\uff0c\u503c\u6b64\u7d93\u6fdf\u666f\u6c23\u4f4e\u8ff7\u4e4b\u969b\uff0c \uf967\u4f46\uf967\u865e\u88c1\u54e1\u3001\u6e1b\u85aa\uff0c\uf98e\u7d42\u734e\uf90a\u3001\u8003\u7e3e\u734e\uf90a\u9084\u90fd\u7167\u767c\uf967\u8aa4\uff0c\u56e0\u800c\u4fc3\u4f7f\uf967\u5c11 \u4eba\u56de\u982d\u7af6\u9010\u9019\u96bb\u300c\u9435\u98ef\u7897\u300d\u3002", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In section 2.1, we will first show how that process is carried out for Example 4 using the proposed approach. A formal description of our method will be given in section 2.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u4ee5\u5f80\u4e00\u5411\u88ab\u8a8d\u70ba\u300c\u5403\uf967\u98fd\u3001\u9913\uf967\u6b7b\u300d\u7684\u516c\u5bb6\u98ef\uff0c\u503c\u6b64\u7d93\u6fdf\u666f\u6c23\u4f4e\u8ff7\u4e4b\u969b\uff0c \uf967\u4f46\uf967\u865e\u88c1\u54e1\u3001\u6e1b\u85aa\uff0c\uf98e\u7d42\u734e\uf90a\u3001\u8003\u7e3e\u734e\uf90a\u9084\u90fd\u7167\u767c\uf967\u8aa4\uff0c\u56e0\u800c\u4fc3\u4f7f\uf967\u5c11 \u4eba\u56de\u982d\u7af6\u9010\u9019\u96bb\u300c\u9435\u98ef\u7897\u300d\u3002", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To extract bilingual collocations, we first run part of speech tagger on both sentences. For instance, for Example 4, we get the results of tagging shown in Examples 4A and 4B.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "An Example of Extracting Bilingual Collocations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In the tagged English sentence, we identify phrases that follow a syntactic pattern from a set of training data of collocations. For instance, \"jj nn\" is one of the preferred syntactic structures. Thus, \"civil service,\" \"economic downturn,\" \"own iron\" etc are matched. See Table 1 for more details. For Example 4, the phrases shown in Examples 4C and 4D are considered to be potential candidates for collocations because they match at least two distinct collocations listed in LDOCE:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 273, |
|
"end": 281, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "An Example of Extracting Bilingual Collocations", |
|
"sec_num": "2.1" |
|
}, |
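
{

"text": "The pattern matching step above can be illustrated with a short Python sketch; this is not the authors' implementation. It assumes each tagged sentence is given as a list of (word, pos) pairs, that patterns is the set of preferred POS-tag sequences (for example 'jj nn') retained from the training collocations, and that max_len (here 5) is a hypothetical cap on candidate length.\n\ndef extract_candidates(tagged_sentence, patterns, max_len=5):\n    # Return every word n-gram (2..max_len words) whose POS-tag sequence\n    # is one of the preferred patterns learned from the MRD collocations.\n    candidates = []\n    for i in range(len(tagged_sentence)):\n        for j in range(i + 2, min(i + max_len, len(tagged_sentence)) + 1):\n            span = tagged_sentence[i:j]\n            pos_seq = ' '.join(pos for _, pos in span)\n            if pos_seq in patterns:\n                candidates.append(' '.join(word for word, _ in span))\n    return candidates\n\nFor the tagged sentence in Example 4A, this would return spans such as 'civil service' and 'economic downturn' whenever their tag sequences appear in patterns.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "An Example of Extracting Bilingual Collocations",

"sec_num": "2.1"

},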
|
{ |
|
"text": "Example 4A", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "An Example of Extracting Bilingual Collocations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The/at civil/jj service/nn rice/nn bowl/nn ,/, about/in which/wdt people/nns always/rb said/vbd \"/`` you/ppss can/md 't/* get/vb filled/vbn up/rp ,/, but/cc you/ppss will/md 't/* starve/vb to/in death/nn either/cc ,/rb \"/'' is/bez getting/vbg a/at new/jj look/nn with/in the/at economic/jj downturn/nn ./. Not/nn only/rb have/hv 't/* there/rb been/ben layoffs/nns or/cc pay/vb cuts/nns ,/, the/at year/nn -/in end/nn bonus/nn and/cc the/at performance/nn review/nn bonuses/nn will/md go/vb out/rp as/ql usual/jj ,/, drawing/vbg people/nns to/to compete/vb for/in their/pp$ own/jj \"/`` iron/nn rice/nn bowl/nn ./. \"/''", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "An Example of Extracting Bilingual Collocations", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u4ee5\u5f80/Nd \u4e00\u5411/Dd \u88ab/P02 \u8a8d\u70ba/VE2 \u300c/PU \u5403/VC \uf967/Dc \u98fd/VH \u3001/PU \u9913\uf967\u6b7b/VR \u300d/PU \u7684/D5 \u516c\u5bb6/Nc \u98ef/Na \uff0c/PU \u503c\u6b64/Ne \u7d93\u6fdf/Na \u666f\u6c23/Na \u4f4e\u8ff7/VH \u4e4b\u969b/NG \uff0c/PU \uf967\u4f46/Cb \uf967\u865e/VK \u88c1\u54e1/VC \u3001/PU \u6e1b\u85aa/VB \uff0c/PU \uf98e\u7d42\u734e\uf90a/Na \u3001/PU \u8003\u7e3e/Na \u734e\uf90a/Na \u9084\u90fd/Db \u7167/VC \u767c/VD \uf967\u8aa4/VH \uff0c /PU \u56e0\u800c/Cb \u4fc3\u4f7f/VL \uf967\u5c11/Ne \u4eba/Na \u56de\u982d/VA \u7af6\u9010/VC \u9019/Ne \u96bb/Nf \u300c/PU \u9435\u98ef\u7897/Na \u300d/PU", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example 4B", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\"civil service,\uff02 \uff02rice bowl,\uff02 \uff02iron rice bow,\uff02 \uff02fill up,\uff02 \uff02economic downturn,\uff02 \uff02end bonus,\uff02 \uff02year -end bonus,\uff02 \uff02go out,\uff02 \uff02performance review,\uff02 \uff02performance review bonus,\uff02 \uff02pay cut,\uff02 \uff02starve to death,\uff02 \uff02civil service rice,\uff02 \uff02service rice,\uff02 \uff02service rice bowl,\uff02 \uff02people always,\uff02 \uff02get fill,\uff02 \uff02people to compete,\uff02 \uff02layoff or pay,\uff02 \uff02new look,\uff02 \uff02draw people\uff02", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example 4C", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\"\u5403\uf967\u98fd,\uff02 \"\u9913\uf967\u6b7b,\uff02 \"\u516c\u5bb6\u98ef,\uff02 \"\u7d93\u6fdf\u666f\u6c23,\uff02 \"\u666f\u6c23\u4f4e\u8ff7,\uff02 \"\u7d93 \u6fdf\u666f\u6c23\u4f4e\u8ff7,\uff02 \"\u88c1\u54e1,\uff02 \"\u6e1b\u85aa,\uff02 \"\uf98e\u7d42\u734e\uf90a,\uff02 \"\u8003\u7e3e\u734e\uf90a,\uff02 \"\u7af6 \u9010,\uff02 \uff02\u9435\u98ef\u7897.\uff02", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example 4D", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Although \"new look\" and \"draw people\" are legitimate phrases, they are more like \"free combinations\" than collocations. That is reflected by their low log likelihood ratio values. For this research, we proceed to determine how tightly the two words in overlapping bigrams within a collocation are associated with each other; we calculate the minimum of the log likelihood ratio values for all the bigrams. Then, we filter out the candidates whose POS patterns appear only once or have minimal log likelihood ratios of less than 7.88. See Tables 1 and 2 for more details.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 538, |
|
"end": 553, |
|
"text": "Tables 1 and 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Example 4D", |
|
"sec_num": null |
|
}, |
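
{

"text": "The bigram-based filtering just described can be sketched in Python as follows; this is an illustration only, not the original code. It assumes a function llr(w1, w2) returning the log-likelihood ratio of two adjacent words computed from corpus bigram counts, and a dictionary pattern_count with the training-data frequency of each POS pattern.\n\ndef min_bigram_llr(words, llr):\n    # Minimum association over all adjacent word pairs inside the candidate.\n    return min(llr(w1, w2) for w1, w2 in zip(words, words[1:]))\n\ndef keep_candidate(words, pos_pattern, llr, pattern_count, threshold=7.88):\n    # Drop candidates whose POS pattern occurred only once in the training\n    # data or whose weakest internal bigram falls below the LLR threshold.\n    if pattern_count.get(pos_pattern, 0) <= 1:\n        return False\n    return min_bigram_llr(words, llr) >= threshold\n\nUnder such a filter, free combinations like 'new look' and 'draw people' are discarded as soon as their weakest internal bigram scores below the 7.88 threshold.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Example 4D",

"sec_num": null

},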
|
{ |
|
"text": "In the tagged Chinese sentence, we basically proceed in the same way to identify the candidates of collocations, based on the preferred linguistic patterns of the Chinese translations of collocations in an English-Chinese MRD. However, since there is no space delimiter between words, it is at times difficult to say whether a translation is a multi-word collocation or a single word, in which case it should not be considered as a collocation. For this reason, we take multiword and singleton phrases (with two or more characters) into consideration. For instance, in tagged Example 4, we extract and consider these candidates shown in Tables 1 and 2 as the counterparts of English collocations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example 4D", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Notes that at this point, we have not pinned collocations down but allow overlapping and conflicting candidates such as \"\u7d93\u6fdf\u666f\u6c23,\" \"\u666f\u6c23\u4f4e\u8ff7,\" and \"\u7d93\u6fdf\u666f\u6c23\u4f4e\u8ff7.\" See Tables 3 and 4 for more details. To align collocations in both languages, we employ the Competitive Linking Algorithm proposed by Melamed [1996] to conduct word alignment. Basically, the proposed algorithm CLASS, the Collocation Linking Algorithm based on Syntax and Statistics, is a greedy method that selects collocation pairs. The pair with the highest association value takes precedence over those with lower values. CLASS also imposes a one-to-one constraint on the collocation pairs selected. Therefore, the algorithm at each step considers only pairs with words that haven't been selected previously. However, CLASS differs with CLA(Competitive Linking Algorithm) in that it considers the association between the two candidate collocations based on two measures: the Logarithmic Likelihood Ratio between the two collocations in question as a whole; the translation probability of collocation based on constituent words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 285, |
|
"end": 299, |
|
"text": "Melamed [1996]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 169, |
|
"text": "Tables 3 and 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Example 4D", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the case of Example 4, the CLASS algorithm first calculates the counts of collocation candidates in the English and Chinese parts of the corpus. The collocations are matched up randomly across from English to Chinese. Subsequently, the co-occurrence counts of these candidates matched across from English to Chinese are also tallied. From the monolingual collocation candidate counts and cross language concurrence counts, we produce the LLR values and the collocation translation probability derived from word alignment analysis. Those collocation pairs with zero translation probability are ignored. The lists are sorted in descending order of LLR values, and the pairs with low LLR value are discarded. Again, in the case of Example 4, the greedy selection process of collocation starts with the first entry in the sorted list and proceeds as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example 4D", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. The first, third, and fourth pairs, (\"iron rice bowl,\" \"\u9435\u98ef\u7897\"), (\"year-end bonus,\" \"\uf98e \u7d42\u734e\uf90a\"), and (\"economic downturn,\" \"\u7d93\u6fdf\u666f\u6c23\u4f4e\u8ff7\"), are selected first. Thus, conflicting pairs will be excluded from consideration, including the second pair, fifth pair and so on. 2. The second entry (\"rice bowl,\" \"\u9435\u98ef\u7897\"), fifth entry (\"economic downturn,\" \"\u503c\u6b64\u7d93 \u6fdf\u666f\u6c23\") and so on conflict with the second and third entries that have already been selected. Therefore, CLASS skips over these entries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example 4D", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "3. The entries (\"performance review bonus,\" \"\u8003\u7e3e\u734e\uf90a\"), (\"civil service rice,\" \"\u516c\u5bb6 \u98ef\"), (\"pay cuts,\" \"\u6e1b\u85aa\"), and (\"starve to death,\" \"\u9913\uf967\u6b7b\") are selected next.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example 4D", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "4. CLASS proceeds through the rest of the list and the other list without finding any entries that do not conflict with the seven entries previously selected. 5. The program terminates and outputs a list of seven collocations. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example 4D", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this section, we describe formally how CLASS works. We assume the availability of a parallel corpus and a list of collocations in a bilingual MRD. We also assume that the sentences and words have been aligned in the parallel corpus. We will describe how CLASS extracts bilingual collocations from such a parallel corpus. CLASS carries out a number of preprocessing steps to calculate the following information:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Method", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "1. lists of preferred POS patterns of collocation in both languages;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Method", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "2. collocation candidates matching the preferred POS patterns;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Method", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "3. n-gram statistics for both languages, N = 1, 2;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Method", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "4. log likelihood ratio statistics for two consecutive words in both languages;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Method", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "5. log likelihood ratio statistics for a pair of candidates of bilingual collocations across one language to the other;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Method", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "6. content word alignment based on the Competitive Linking Algorithm [Melamed, 1997.] Figure 1 illustrates how the method works for each aligned sentence pair (C, E) in the corpus. Initially, part of speech taggers process C and E. After that, collocation candidates are extracted based on preferred POS patterns and statistical association between consecutive words in a collocation. The collocation candidates are subsequently matched up from one language to the other. These pairs are sorted according to the log likelihood ratio and collocation translation probability. A greedy selection process goes through the sorted list and selects bilingual collocations subject to one-to-one constraint. The detailed algorithm is given below: Figure 1 . The major components in the proposed CLASS algorithm.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 85, |
|
"text": "[Melamed, 1997.]", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 738, |
|
"end": 746, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Method", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Input: A list of bilingual collocations from a machine-readable dictionary Output:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing: Extracting preferred POS patterns P and Q in both languages", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing: Extracting preferred POS patterns P and Q in both languages", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Perform part of speech tagging for both languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing: Extracting preferred POS patterns P and Q in both languages", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Calculate the number of instances for all POS patterns in both languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Eliminate the POS patterns with instance counts of 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "3.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Extract bilingual collocations from aligned sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Collocation Linking Alignment based on Syntax and Statistics", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(1) A pair of aligned sentences (C, E), C = (C 1 C 2 \u2026 C n ) and E = (E 1 E 2 \u2026 E m ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(2) Preferred POS patterns P and Q in both languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Output: Aligned bilingual collocations in (C, E)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "1. C is segmented and tagged with part of speech information T.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2. E is tagged with part of speech sequences S.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Input:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "2 2 2 1 1 1 2 2 2 1 1 1 ) 1 ( ) 1 ( ) 1 ( ) 1 ( log 2 ) ; ( 2 2 1 1 2 k n k k n k k n k k n k p p p p p p p p y x LLR \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 =", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-likelihood ratio: LLR(x;y)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "k 1 : # of pairs that contain x and y simultaneously. k 2 : # of pairs that contain x but do not contain y. n 1 : # of pairs that contain y n 2 : # of pairs that does not contain y p 1 = k 1 /n 1, p 2 = k 2 /n 2 , p = (k 1 +k 2 )/(n 1 +n 2 ) 3. Match T against P and match S against Q to extract collocation candidates X 1 , X 2 ,....X k in English and Y 1 , Y 2 , ...,Y e in Chinese. 4. Consider each bilingual collocation candidate (X i , Y j ) in turn and calculate the minimal log likelihood ratio LLR between X i and Y j :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-likelihood ratio: LLR(x;y)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "MLLR (D) = ) , ( 1 i i 1 , 1 min + \u2212 = W W LLR n i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-likelihood ratio: LLR(x;y)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "5. Eliminate candidates with LLR that are smaller than a threshold (7.88).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-likelihood ratio: LLR(x;y)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "English collocation candidates to Chinese ones:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Match up all possible links from", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "(D 1 , F 1 ), (D 1 , F 2 ), \u2026 (D i , F j ), \u2026 ( D m , F n ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Match up all possible links from", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "7. Calculate LLR for (D i , F j ) and discard pairs with LLR value that are lower than 7.88.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Match up all possible links from", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": ") | ( 1 ) | ( max j j i e c P k F D P i D c F e", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Collocation translation probability P(x | y)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2208 \u2208 \u2211 = k : number of words in the English collocation F j 8. The only candidate list of bilingual collocations considered is the one with non-zero collocation translation probability P(D i , F j ) values. The list is then sorted based on the LLR values and collocation translation probability. 9. Go down the list and select a bilingual collocation if it does not conflict with a previous selection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Collocation translation probability P(x | y)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "10. Output the bilingual collocation selected in Step 9.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Collocation translation probability P(x | y)", |
|
"sec_num": null |
|
}, |
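
{

"text": "As a concrete illustration of steps 4 through 10, the following Python sketch (a simplified illustration under stated assumptions, not the authors' implementation) puts the pieces together: the Dunning-style log-likelihood ratio computed from the counts k1, n1, k2 and n2 defined above, the collocation translation probability computed from a word-level table trans_prob[e][c] holding P(c|e), and the greedy one-to-one selection over the sorted candidate list. Conflict is simplified here to reusing the same candidate string, whereas the paper also treats overlapping candidates as conflicting.\n\nimport math\n\ndef log_l(p, k, n):\n    # Log of the binomial likelihood p^k * (1-p)^(n-k), clamped away from 0 and 1.\n    eps = 1e-12\n    p = min(max(p, eps), 1 - eps)\n    return k * math.log(p) + (n - k) * math.log(1 - p)\n\ndef llr(k1, n1, k2, n2):\n    # Log-likelihood ratio built from the counts defined above.\n    p1, p2, p = k1 / n1, k2 / n2, (k1 + k2) / (n1 + n2)\n    return 2 * (log_l(p1, k1, n1) + log_l(p2, k2, n2)\n                - log_l(p, k1, n1) - log_l(p, k2, n2))\n\ndef colloc_trans_prob(d_words, f_words, trans_prob):\n    # P(D | F) = (1/k) * sum over e in F of max over c in D of P(c | e).\n    total = sum(max((trans_prob.get(e, {}).get(c, 0.0) for c in d_words), default=0.0)\n                for e in f_words)\n    return total / len(f_words) if f_words else 0.0\n\ndef select_pairs(scored_pairs, threshold=7.88):\n    # scored_pairs: (english_colloc, chinese_colloc, pair_llr, trans_prob) tuples.\n    # Keep pairs with non-zero translation probability and LLR above the threshold,\n    # sort by (LLR, translation probability), then pick greedily one-to-one.\n    kept = [p for p in scored_pairs if p[3] > 0.0 and p[2] >= threshold]\n    kept.sort(key=lambda p: (p[2], p[3]), reverse=True)\n    used_en, used_zh, selected = set(), set(), []\n    for en, zh, _, _ in kept:\n        if en in used_en or zh in used_zh:  # one-to-one constraint\n            continue\n        used_en.add(en)\n        used_zh.add(zh)\n        selected.append((en, zh))\n    return selected",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Collocation translation probability P(x | y)",

"sec_num": null

},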
|
{ |
|
"text": "We have implemented CLASS using the Longman Dictionary of Contemporary English, English-Chinese Edition, and the parallel corpus of Sinorama magazine. The articles from", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Evaluation", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Sinorama covered a wide range of topics, reflecting the personalities, places, and events in Taiwan for the previous three decades. We experimented on articles mainly dating from 1995 to 2002. Sentence and word alignment were carried out first to obtain the Sinorama Parallel Corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Evaluation", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Sentence alignment is a very important aspect of CLASS. It is the basis for good collocation alignment. We use a new alignment method based on punctuation statistics [Yeh & Chang, 2002] . The punctuation-based approach has been found to outperform the length-based approach with precision rates approaching 98%. With the sentence alignment approach, we obtained approximately 50,000 reliably aligned sentences containing 1,756,000 Chinese words (about 2,534,000 Chinese characters) and 2,420,000 English words in total.", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 185, |
|
"text": "[Yeh & Chang, 2002]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Evaluation", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The content words were aligned using the Competitive Linking Algorithm. Alignment of content words resulted in a probabilistic dictionary with 229,000 entries. We evaluated 100 random sentence samples with 926 linking types, and the achieved precision rate was 93.3%. Most of the errors occurred with English words having no counterpart in the corresponding Chinese sentence. Translators do not always translate word for word. For instance, with the word \"water\" in Example 5, it seems that there is no corresponding pattern in the Chinese sentence. Another major cause of errors was collocations that were not translated compositionally. For instance, the word \"State\" in the Example 6 is a part of the collocation \"United States,\" and \"\u7f8e\u570b\" is more highly associated with \"United\" than \"States\"; therefore, due to the one-to-one constraint \"States\" will not be aligned with \"\u7f8e\u570b\". Most often, it will be aligned incorrectly. About 49% of the error links were of this type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Evaluation", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The boat is indeed a vessel from the mainland that illegally entered Taiwan waters. The words were a \"mark\" added by the Taiwan Garrison Command before sending it back.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example 5", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Source: 1990/10 Letters to the Editor", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u7de8\u6309\uff1a\u6b64\u8239\u7684\u78ba\u662f\u5927\uf9d3\u5077\u6e21\uf92d\u53f0\u8239\u96bb\uff0c\u90a3\u516b\u500b\u5b57\u53ea\uf967\u904e\u662f\u8b66\u7e3d\u5728\u9063 \u8fd4\u524d\u7d66\u5b83\u52a0\u7684\u300c\u8a18\u865f\u300d\uff01", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Figures issued by the American Immigration Bureau show that most Chinese immigrants had set off from Kwangtung and Hong Kong, which is why the majority of overseas Chinese in the United States to this day are of Cantonese origin.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example 6", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Source: 1990/09 All Across the World: The Chinese Global Village", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u7531\u7f8e\u570b\u79fb\u6c11\u5c40\u767c\u8868\u7684\uf969\u5b57\uf92d\u770b\uff0c\u4e2d\u570b\u79fb\u6c11\u4ee5\u5f9e\u5ee3\u6771\u3001\u9999\u6e2f\u51fa\u6d77\u8005\u6700\u591a\uff0c\u6545 \u5230\u73fe\u5728\u70ba\u6b62\uff0c\u7f8e\u570b\u83ef\u50d1\u4ecd\u4ee5\u539f\u7c4d\u5ee3\u6771\u8005\u4f54\u5927\u591a\uf969\u3002", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We obtained the word-to-word translation probability from the result of word alignment. The translation probability P(c|e) is calculated as followed: Take \"pay\" as an example. Table 6 shows the various alignment translations for \"pay\" and the translation probability. Before running CLASS, we obtained 10,290 English idioms, collocations, and phrases together with 14,945 Chinese translations in LDOCE. After part of speech taggi ng, we had 1,851 distinct English patterns and 4326 Chinese patterns. To calculate the statistical association within words in a monolingual collocation and across the bilingual collocations, we built N-grams for the Sinorama Parallel Corpus. There were 790,000 Chinese word bigrams and 669,000 distinct English bigrams. CLASS identified around 595,000 Chinese collocation candidates (184,000 distinct types) and 230,000 English collocation candidates (135,000 distinct types) through this process.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 183, |
|
"text": "Table 6", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "\u7531\u7f8e\u570b\u79fb\u6c11\u5c40\u767c\u8868\u7684\uf969\u5b57\uf92d\u770b\uff0c\u4e2d\u570b\u79fb\u6c11\u4ee5\u5f9e\u5ee3\u6771\u3001\u9999\u6e2f\u51fa\u6d77\u8005\u6700\u591a\uff0c\u6545 \u5230\u73fe\u5728\u70ba\u6b62\uff0c\u7f8e\u570b\u83ef\u50d1\u4ecd\u4ee5\u539f\u7c4d\u5ee3\u6771\u8005\u4f54\u5927\u591a\uf969\u3002", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P(c|e) = ) ( ) ,", |
|
"eq_num": "(" |
|
} |
|
], |
|
"section": "\u7531\u7f8e\u570b\u79fb\u6c11\u5c40\u767c\u8868\u7684\uf969\u5b57\uf92d\u770b\uff0c\u4e2d\u570b\u79fb\u6c11\u4ee5\u5f9e\u5ee3\u6771\u3001\u9999\u6e2f\u51fa\u6d77\u8005\u6700\u591a\uff0c\u6545 \u5230\u73fe\u5728\u70ba\u6b62\uff0c\u7f8e\u570b\u83ef\u50d1\u4ecd\u4ee5\u539f\u7c4d\u5ee3\u6771\u8005\u4f54\u5927\u591a\uf969\u3002", |
|
"sec_num": null |
|
}, |
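
{

"text": "One standard way to estimate the word-to-word translation probability from the result of word alignment is by relative frequency over the alignment links; the short Python sketch below is an assumption along those lines rather than a formula quoted from the paper. Here aligned_links is taken to be the list of (chinese_word, english_word) links produced by the Competitive Linking Algorithm over the corpus, and the result has the nested shape trans_prob[e][c] assumed in the earlier sketch.\n\nfrom collections import defaultdict\n\ndef estimate_trans_prob(aligned_links):\n    # P(c | e) = count(c, e) / count(e), estimated over all alignment links.\n    pair_count = defaultdict(int)\n    e_count = defaultdict(int)\n    for c, e in aligned_links:\n        pair_count[(c, e)] += 1\n        e_count[e] += 1\n    trans_prob = defaultdict(dict)\n    for (c, e), n in pair_count.items():\n        trans_prob[e][c] = n / e_count[e]\n    return trans_prob",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments and Evaluation",

"sec_num": "3."

},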
|
{ |
|
"text": "We selected 100 sentences to evaluate the performance. We focused on rigid lexical collocations. The average English sentence had 45.3 words, while the average Chinese sentence had 21.4 words. The two human judges, both master students majoring in Foreign Languages, identified the bilingual collocations in these sentences. We then compared the bilingual collocations produced by CLASS against the answer keys. The evaluation produced an average recall rate = 60.9 % and precision rate = 85.2 % (see Table 7 ). ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 501, |
|
"end": 508, |
|
"text": "Table 7", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "\u7531\u7f8e\u570b\u79fb\u6c11\u5c40\u767c\u8868\u7684\uf969\u5b57\uf92d\u770b\uff0c\u4e2d\u570b\u79fb\u6c11\u4ee5\u5f9e\u5ee3\u6771\u3001\u9999\u6e2f\u51fa\u6d77\u8005\u6700\u591a\uff0c\u6545 \u5230\u73fe\u5728\u70ba\u6b62\uff0c\u7f8e\u570b\u83ef\u50d1\u4ecd\u4ee5\u539f\u7c4d\u5ee3\u6771\u8005\u4f54\u5927\u591a\uf969\u3002", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This paper describes a new approach to the automatic acquisition of bilingual collocations from a parallel corpus. Our method is an extension of Melamed's Competitive Linking Algorithm for word alignment. It combines both linguistic and statistical information and uses it to recognize monolingual and bilingual collocations in a much simpler way than Smadja's work does. Our approach differs from previous work in the following ways:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "1. We use a data-driven approach to extract monolingual collocations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "2. Unlike Smadja and Kupiec, we do not commit to two sets of monolingual collocations. Instead, we consider many overlapping and conflicting candidates and rely on cross linguistic statistics to revolve the issue.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "3. We combine both type of information related to the whole collocation as well as to the constituent words to achieve more reliable probabilistic estimation of aligned collocations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Our approach is limited by its reliance on training data consisting of mostly rigid collocation patterns, and it is not applicable to elastic collocations such as \"jump on \u2026 bandwagon.\" For instance, the program cannot handle the elastic collocation in the following example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Example 7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Taiwan has had the good fortune to jump on this high-profit bandwagon and has been able to snatch a substantial lead over countries like Malaysia and mainland China, which have just started in this industry.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u53f0\u7063\u5e78\u800c\u8d95\u642d\uf9ba\u4e00\u7a0b\u7372\uf9dd\u8c50\u539a\u7684\u9806\u98a8\uf902\uff0c\u53ef\u4ee5\u5c07\u76ee\u524d\u525b\u8981\u8d77\u6b65\u7684\u99ac\uf92d\u897f \u4e9e\u3001\u4e2d\u570b\u5927\uf9d3\u7b49\u570b\u5bb6\u9060\u62cb\u8eab\u5f8c\u3002T", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This limitation can be partially alleviated by matching nonconsecutive word sequences against existing lists of collocations for the two languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source: Sinorama, 1996, Dec Issue Page 22, Stormy Waters for Taiwan\uff07s ICs", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Another limitation has to do with bilingual collocations, which are not literal translations. For instance, \"difficult and intractable\" can not yet be handled by the program, because it is not a word for word translation of \"\u6840\u50b2\uf967\u99b4\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source: Sinorama, 1996, Dec Issue Page 22, Stormy Waters for Taiwan\uff07s ICs", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This saying means that no matter how difficult and intractable a person may seem, there will always be someone else who can cut him down to size.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u610f\u601d\u662f\uf96f\u4e00\u500b\u518d\u600e\u9ebc\u6840\u50b2\uf967\u99b4\u7684\u4eba\uff0c\u90fd\u6703\u6709\u4eba\u6709\u8fa6\u6cd5\u5236\u670d\u4ed6\u3002", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the experiment, we found that this limitation may be partially solved by splitting the candidate list of bilingual collocations into two lists: one (NZ) with non-zero phrase translation probabilistic values and the other (ZE) with zero values. The two lists can then be sorted based on the LLR values. After extracting bilingual collocations from the NZ list, we could continue to go down the ZE list and select bilingual collocations that did not conflict with previously selection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source: 1990/05 A Fierce Horse Ridden by a Fierce Rider", |
|
"sec_num": null |
|
}, |
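
{

"text": "A small Python sketch of the two-list refinement described above (illustrative only, assuming each candidate pair carries its LLR value and phrase translation probability): split the pairs into an NZ and a ZE list, sort each by LLR, and continue the greedy one-to-one selection on ZE after NZ is exhausted.\n\ndef select_with_two_lists(pairs):\n    # pairs: (english_colloc, chinese_colloc, pair_llr, trans_prob) tuples.\n    nz = [p for p in pairs if p[3] > 0.0]\n    ze = [p for p in pairs if p[3] == 0.0]\n    nz.sort(key=lambda p: p[2], reverse=True)\n    ze.sort(key=lambda p: p[2], reverse=True)\n    used_en, used_zh, selected = set(), set(), []\n    for en, zh, _, _ in nz + ze:\n        # ZE pairs must not conflict with collocations already selected from NZ.\n        if en in used_en or zh in used_zh:\n            continue\n        used_en.add(en)\n        used_zh.add(zh)\n        selected.append((en, zh))\n    return selected",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Source: 1990/05 A Fierce Horse Ridden by a Fierce Rider",

"sec_num": null

},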
|
{ |
|
"text": "In the proposed method, we do no take advantage of the correspondence between POS patterns in one language with those in the other. Some linking mistakes seem to be avoidable if POS information is used. For example, the aligned collocation for \"issue/vb visas/nns\" is \"\u7c3d\u8b49/Na\", not \"\u767c/VD \u7c3d\u8b49/Na.\" However, the POS pattern \"vb nn\" appears to be more compatible with \"VD Na\" than with \"Na.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source: 1990/05 A Fierce Horse Ridden by a Fierce Rider", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Example 9", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source: 1990/05 A Fierce Horse Ridden by a Fierce Rider", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The Republic of China broke relations with Australia in 1972, after the country recognized the Chinese Communists, and because of the lack of formal diplomatic relations, Australia felt it could not issue visas on Taiwan. Instead, they were handled through its consulate in Hong Kong and then sent back to Taiwan, the entire process requiring five days to a week to complete.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u4e00\u4e5d\u4e03\u4e8c\uf98e\u6fb3\u6d32\u627f\u8a8d\u4e2d\u5171\uff0c\u4e2d\u83ef\u6c11\u570b\u5373\u65bc\u6b64\u6642\u8207\u6fb3\u65b7\u4ea4\u3002\u56e0\u70ba\u7121\u6b63\u5f0f\u90a6 \u4ea4\uff0c\u6fb3\u6d32\uf967\u80fd\u5728\u53f0\u7063\u767c\u7c3d\u8b49\uff0c\u800c\u7531\u6fb3\u6d32\u99d0\u9999\u6e2f\u7684\u4f7f\u9928\u4ee3\u8fa6\uff0c\u7136\u5f8c\u5c07\u7c3d\u8b49\u9001 \u56de\u53f0\u7063\uff0c\u7c3d\u8b49\u624b\u7e8c\u7d04\u9700\u4e94\u5929\u81f3\u4e00\u5468\u3002", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Source: 1990/04 Visas for Australia to Be Processed in Just 24 Hours A number of mistakes are caused by erroneous word segments in the Chinese tagger. For instance, \"\u5927\u5b78\u53ca\u7814\u7a76\u751f\u51fa\u570b\u671f\u9593\" should be segmented as \" \u5927\u5b78 / \u53ca / \u7814\u7a76\u751f / \u51fa\u570b / \u671f\u9593\" but instead is segmented as \"\u5927\u5b78 / \u53ca / \u7814\u7a76 / \u751f\u51fa / \u570b / \u671f\u9593 / \u7684 / \u5b78\u696d.\" Another major source of segmentation mistakes has to do with proper names and their transliterations. These name entities that are not included in the database are usually segmented into single Chinese characters. For instance, \"...\u4e00\u66f8\u4f5c\u8005\uf9c7\u5b78\u929a\u6307\u51fa...\" is segmented as \" ... / \u4e00 / \u66f8 / \u4f5c\u8005 / \uf9c7 / \u5b78 / \u929a / \u6307\u51fa / ...,\" while \"...\u5728\u5308\u7259\uf9dd\u5730\u5340 \u5efa\u570b\u7684\u99ac\u672d\u723e\u4eba...\" is segmented as \"...\u5728 / \u5308\u7259\uf9dd / \u5730\u5340 / \u5efa\u570b / \u7684 / \u99ac / \u672d / \u723e / \u4eba / ....\" Therefore, handling these name entities in a pre-process should be helpful to avoid segmenting mistakes and alignment difficulties.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u4e00\u4e5d\u4e03\u4e8c\uf98e\u6fb3\u6d32\u627f\u8a8d\u4e2d\u5171\uff0c\u4e2d\u83ef\u6c11\u570b\u5373\u65bc\u6b64\u6642\u8207\u6fb3\u65b7\u4ea4\u3002\u56e0\u70ba\u7121\u6b63\u5f0f\u90a6 \u4ea4\uff0c\u6fb3\u6d32\uf967\u80fd\u5728\u53f0\u7063\u767c\u7c3d\u8b49\uff0c\u800c\u7531\u6fb3\u6d32\u99d0\u9999\u6e2f\u7684\u4f7f\u9928\u4ee3\u8fa6\uff0c\u7136\u5f8c\u5c07\u7c3d\u8b49\u9001 \u56de\u53f0\u7063\uff0c\u7c3d\u8b49\u624b\u7e8c\u7d04\u9700\u4e94\u5929\u81f3\u4e00\u5468\u3002", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we have presented an algorithm that employs syntactic and statistical analyses to extract rigid bilingual collocations from a parallel corpus. Phrases matching the preferred patterns are extracted from aligned sentences in a parallel corpus. These phrases are subsequently matched up based on cross-linguistic statistical association. Statistical association between the whole collocations as well as words in the collocations is used jointly to link a collocation with its counterpart. We implemented the proposed method on a very large Chinese-English parallel corpus and obtained satisfactory results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "A number of interesting future directions suggest themselves. First, it would be interesting to see how effectively we can extend the method to longer and elastic collocations and to grammatical collocations. Second, bilingual collocations that are proper names and transliterations may need additional consideration. Third, it will be interesting to see if the performance can be improved using cross language correspondence between POS patterns.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5." |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The BBI Combinatory Dictionary of English: A Guide to Word Combinations", |
|
"authors": [ |
|
{ |
|
"first": "Morton", |
|
"middle": [], |
|
"last": "Benson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evelyn", |
|
"middle": [], |
|
"last": "Benson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Ilson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "John Benjamins", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benson, Morton., Evelyn Benson, and Robert Ilson.\" The BBI Combinatory Dictionary of English: A Guide to Word Combinations. \" John Benjamins, Amsterdam, Netherlands, 1986.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Looking for needles in a haystack", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Choueka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "RIAO, Conference on User-Oriented Context Based Text and Image Handling", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "609--623", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Choueka, Y. \"Looking for needles in a haystack\", RIAO, Conference on User-Oriented Context Based Text and Image Handling, Cambridge, 1988, pp. 609-623.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Automatic retrieval of frequent idiomatic and collocational expressions in a large corpus", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Choueka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Neuwitz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1983, |
|
"venue": "Journal of the Association for Literary and Linguistic Computing", |
|
"volume": "4", |
|
"issue": "1", |
|
"pages": "34--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Choueka, Y.; Klein, and Neuwitz, E.. \"Automatic retrieval of frequent idiomatic and collocational expressions in a large corpus.\" Journal of the Association for Literary and Linguistic Computing, 4(1), 1983, pp34-38.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Word association norms, mutual information, and lexicography", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Church", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Hanks", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Computational Linguistics", |
|
"volume": "16", |
|
"issue": "1", |
|
"pages": "22--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Church, K. W. and Hanks, P. \"Word association norms, mutual information, and lexicography.\" Computational Linguistics, 16(1) , 1990, pp. 22-29.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Termight: Identifying and translation technical terminology", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Church", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proc. of the 4th Conference on Applied Natural Language Processing (ANLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "34--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dagan, I. and K. Church. \"Termight: Identifying and translation technical terminology\". In Proc. of the 4th Conference on Applied Natural Language Processing (ANLP), 1994, pages 34-40.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Accurate methods for the statistics of surprise and coincidence", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Dunning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "1", |
|
"pages": "61--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dunning, T. \"Accurate methods for the statistics of surprise and coincidence\", Computational Linguistics 19:1, 1993, pp.61-75.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Learning bilingual collocations by word-level sorting", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Haruno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Ikehara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Yamazaki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proc. of the 16th International Conference on Computational Linguistics (COLING '96)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "525--530", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haruno, M., S. Ikehara, and T. Yamazaki. \"Learning bilingual collocations by word-level sorting.\" In Proc. of the 16th International Conference on Computational Linguistics (COLING '96), 1996, pp. 525-530.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Character-based Collocation for Mandarin Chinese", |
|
"authors": [ |
|
{ |
|
"first": "C.-R", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K.-J", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y.-Y.", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proc. of the 38th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "540--543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huang, C.-R., K.-J. Chen, Y.-Y. Yang, \"Character-based Collocation for Mandarin Chinese\", In Proc. of the 38th Annual Meeting of the Association for Computational Linguistics, 2000, pp. 540-543.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Acquiring collocations for lexical choice between near-synonyms", |
|
"authors": [ |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Inkpen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zaiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graeme", |
|
"middle": [], |
|
"last": "Hirst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the Workshop on Unsupervised Lexical Acquisition, 40th Annual Meeting of the Association for Computational Lin-guistics (ACL 2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "67--76", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Inkpen, Diana Zaiu and Hirst, Graeme. \"Acquiring collocations for lexical choice between near-synonyms.\" In Proceedings of the Workshop on Unsupervised Lexical Acquisition, 40th Annual Meeting of the Association for Computational Lin-guistics (ACL 2002), 2002, pp. 67-76.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Technical Terminology: some linguistic properties and an algorithm for identification in text", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Justeson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Slava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Katz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Natural Language Engineering", |
|
"volume": "1", |
|
"issue": "1", |
|
"pages": "9--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Justeson, J.S. and Slava M. Katz. \"Technical Terminology: some linguistic properties and an algorithm for identification in text.\" Natural Language Engineering, 1(1), 1995, pp. 9-27.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "An algorithm for finding noun phrase correspondences in bilingual corpora", |
|
"authors": [ |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Kupiec", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "17--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kupiec, Julian. \"An algorithm for finding noun phrase correspondences in bilingual corpora.\" In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, 1993, pp. 17-22.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Using collocation statistics in information extraction", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proc. of the Seventh Message Understanding Conference (MUC-7)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lin, D. \"Using collocation statistics in information extraction.\" In Proc. of the Seventh Message Understanding Conference (MUC-7), 1998.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Foundations of Statistical Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Schutze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "C", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manning and H. Schutze. \"Foundations of Statistical Natural Language Processing,\" C., MIT Press, 1999.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A Word-to-Word Model of Translational Equivalence", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Melamed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Dan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the 35st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "490--497", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Melamed, I. Dan. \"A Word-to-Word Model of Translational Equivalence.\" In Proceedings of the 35st Annual Meeting of the Association for Computational Linguistics, 1997, pp 490-497.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Retrieving collocations from text: Xtract", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Smadja", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "1", |
|
"pages": "143--177", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Smadja, F. \"Retrieving collocations from text: Xtract.\" Computational Linguistics, 19(1) 1993, pp143-177.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Translating collocations for bilingual lexicons: A statistical approach", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Smadja", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Hatzivassiloglou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Computational Linguistics", |
|
"volume": "22", |
|
"issue": "1", |
|
"pages": "1--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Smadja, F., K.R. McKeown, and V. Hatzivassiloglou. \"Translating collocations for bilingual lexicons: A statistical approach.\" Computational Linguistics, 22(1) ,1996, pp 1-38.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Using Punctuation Marks for Bilingual Sentence Alignment", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Yeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yeh, \"Using Punctuation Marks for Bilingual Sentence Alignment.\" Master thesis, 2003, National Tsing Hua University, Taiwan", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "c) : the number of alignment links between a Chinese word c and an English word e; count(e) : the number of instances of e in alignment links.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td>E-collocation Candidate Pairs</td><td>Part of Speech</td><td>LDOCE example</td><td>Pattern Count</td><td>Min LLR</td></tr><tr><td>civil service</td><td>jj nn</td><td>hard cash</td><td>1562</td><td>496.156856</td></tr><tr><td>rice bowl</td><td>nn nn</td><td>beef steak</td><td>1860</td><td>99.2231161</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"4\">English collocations Chinese collocations LLR Collocation Translation Prob.</td></tr><tr><td>iron rice bowl</td><td>\u9435\u98ef\u7897</td><td>103.3</td><td>0.0202</td></tr><tr><td>rice bowl</td><td>\u9435\u98ef\u7897</td><td>77.74</td><td>0.0384</td></tr><tr><td>year-end bonus</td><td>\uf98e\u7d42\u734e\uf90a</td><td>59.21</td><td>0.0700</td></tr><tr><td>economic downturn</td><td colspan=\"2\">\u7d93\u6fdf \u666f\u6c23 \u4f4e\u8ff7 32.4</td><td>0.9359</td></tr><tr><td>economic downturn</td><td colspan=\"2\">\u503c\u6b64 \u7d93\u6fdf \u666f\u6c23 32.4</td><td>0.4359</td></tr><tr><td>...</td><td>...</td><td>...</td><td>...</td></tr><tr><td>performance review bonus</td><td>\u8003\u7e3e \u734e\uf90a</td><td>30.32</td><td>0.1374</td></tr><tr><td>economic downturn</td><td>\u666f\u6c23 \u4f4e\u8ff7</td><td>29.82</td><td>0.2500</td></tr><tr><td>civil service rice</td><td>\u516c\u5bb6 \u98ef</td><td>29.08</td><td>0.0378</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">Translation Count</td><td>Translation Prob.</td><td>Translation</td><td>Count</td><td>Translation Prob.</td></tr><tr><td>\u4ee3\u50f9</td><td>34</td><td>0.1214</td><td>\u82b1\u9322</td><td>7</td><td>0.025</td></tr><tr><td>\u9322</td><td>31</td><td>0.1107</td><td>\u51fa\u9322</td><td>6</td><td>0.0214</td></tr><tr><td>\u8cbb\u7528</td><td>21</td><td>0.075</td><td>\u79df</td><td>6</td><td>0.0214</td></tr><tr><td>\u4ed8\u8cbb</td><td>16</td><td>0.0571</td><td>\u767c\u7d66</td><td>6</td><td>0.0214</td></tr><tr><td>\uf9b4</td><td>16</td><td>0.0571</td><td>\u4ed8\u51fa</td><td>5</td><td>0.0179</td></tr><tr><td>\u7e73</td><td>16</td><td>0.0571</td><td>\u85aa\u8cc7</td><td>5</td><td>0.0179</td></tr><tr><td>\u652f\u4ed8</td><td>13</td><td>0.0464</td><td>\u4ed8\u9322</td><td>4</td><td>0.0143</td></tr><tr><td>\u7d66</td><td>13</td><td>0.0464</td><td>\u52a0\u85aa</td><td>4</td><td>0.0143</td></tr><tr><td>\u85aa\u6c34</td><td>11</td><td>0.0393</td><td>...</td><td>...</td><td>...</td></tr><tr><td>\u8ca0\u64d4</td><td>9</td><td>0.0321</td><td>\u7a4d\u6b20</td><td>2</td><td>0.0071</td></tr><tr><td>\u8cbb</td><td>9</td><td>0.0321</td><td>\u7e73\u6b3e</td><td>2</td><td>0.0071</td></tr><tr><td>\u7d66\u4ed8</td><td>8</td><td>0.0286</td><td/><td/><td/></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"text": "", |
|
"html": null, |
|
"content": "<table><tr><td># keys</td><td>#answers</td><td>#hits</td><td>#errors</td><td>Recall</td><td>Precision</td></tr><tr><td>382</td><td>273</td><td>233</td><td>40</td><td>60.9%</td><td>85.2%</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |