{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:14:53.188095Z"
},
"title": "AlphaMWE: Construction of Multilingual Parallel Corpora with MWE Annotations",
"authors": [
{
"first": "Han",
"middle": [
"\u03a0"
],
"last": "Lifeng",
"suffix": "",
"affiliation": {
"laboratory": "ADAPT Research Centre \u2126 Insight Centre for Data Analytics",
"institution": "Dublin City University",
"location": {
"settlement": "Dublin",
"country": "Ireland"
}
},
"email": "lifeng.han@adaptcentre.ie"
},
{
"first": "Gareth",
"middle": [
"J F"
],
"last": "Jones",
"suffix": "",
"affiliation": {
"laboratory": "ADAPT Research Centre \u2126 Insight Centre for Data Analytics",
"institution": "Dublin City University",
"location": {
"settlement": "Dublin",
"country": "Ireland"
}
},
"email": "gareth.jones@dcu.ie"
},
{
"first": "Alan",
"middle": [],
"last": "Smeaton",
"suffix": "",
"affiliation": {
"laboratory": "ADAPT Research Centre \u2126 Insight Centre for Data Analytics",
"institution": "Dublin City University",
"location": {
"settlement": "Dublin",
"country": "Ireland"
}
},
"email": "alan.smeaton@dcu.ie"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this work, we present the construction of multilingual parallel corpora with annotation of multiword expressions (MWEs). MWEs include verbal MWEs (vMWEs) defined in the PARSEME shared task that have a verb as the head of the studied terms. The annotated vMWEs are also bilingually and multilingually aligned manually. The languages covered include English, Chinese, Polish, and German. Our original English corpus is taken from the PARSEME shared task in 2018. We performed machine translation of this source corpus followed by human post editing and annotation of target MWEs. Strict quality control was applied for error limitation, i.e., each MT output sentence received first manual post editing and annotation plus second manual quality rechecking. One of our findings during corpora preparation is that accurate translation of MWEs presents challenges to MT systems. To facilitate further MT research, we present a categorisation of the error types encountered by MT systems in performing MWE related translation. To acquire a broader view of MT issues, we selected four popular state-of-the-art MT models for comparisons namely: Microsoft Bing Translator, GoogleMT, Baidu Fanyi and DeepL MT. Because of the noise removal, translation post editing and MWE annotation by human professionals, we believe our AlphaMWE dataset will be an asset for cross-lingual and multilingual research, such as MT and information extraction. Our multilingual corpora are available as open access at github.com/poethan/AlphaMWE.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this work, we present the construction of multilingual parallel corpora with annotation of multiword expressions (MWEs). MWEs include verbal MWEs (vMWEs) defined in the PARSEME shared task that have a verb as the head of the studied terms. The annotated vMWEs are also bilingually and multilingually aligned manually. The languages covered include English, Chinese, Polish, and German. Our original English corpus is taken from the PARSEME shared task in 2018. We performed machine translation of this source corpus followed by human post editing and annotation of target MWEs. Strict quality control was applied for error limitation, i.e., each MT output sentence received first manual post editing and annotation plus second manual quality rechecking. One of our findings during corpora preparation is that accurate translation of MWEs presents challenges to MT systems. To facilitate further MT research, we present a categorisation of the error types encountered by MT systems in performing MWE related translation. To acquire a broader view of MT issues, we selected four popular state-of-the-art MT models for comparisons namely: Microsoft Bing Translator, GoogleMT, Baidu Fanyi and DeepL MT. Because of the noise removal, translation post editing and MWE annotation by human professionals, we believe our AlphaMWE dataset will be an asset for cross-lingual and multilingual research, such as MT and information extraction. Our multilingual corpora are available as open access at github.com/poethan/AlphaMWE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Multiword Expressions (MWEs) have long been of interest to both natural language processing (NLP) researchers and linguists (Sag et al., 2002; Constant et al., 2017; Pulcini, 2020) . The automatic processing of MWEs has posed significant challenges for some fields in computational linguistics (CL), such as word sense disambiguation (WSD), parsing and (automated) translation (Lambert and Banchs, 2005; Bouamor et al., 2012; Skadina, 2016; Li et al., 2019; Han et al., 2020) . This is caused by both the variety and the richness of MWEs as they are used in language.",
"cite_spans": [
{
"start": 124,
"end": 142,
"text": "(Sag et al., 2002;",
"ref_id": "BIBREF15"
},
{
"start": 143,
"end": 165,
"text": "Constant et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 166,
"end": 180,
"text": "Pulcini, 2020)",
"ref_id": "BIBREF12"
},
{
"start": 377,
"end": 403,
"text": "(Lambert and Banchs, 2005;",
"ref_id": "BIBREF9"
},
{
"start": 404,
"end": 425,
"text": "Bouamor et al., 2012;",
"ref_id": "BIBREF2"
},
{
"start": 426,
"end": 440,
"text": "Skadina, 2016;",
"ref_id": "BIBREF18"
},
{
"start": 441,
"end": 457,
"text": "Li et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 458,
"end": 475,
"text": "Han et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Various definitions of MWEs have included both syntactic structure and semantic viewpoints from different researchers covering syntactic anomalies, non-compositionality, nonsubstitutability and ambiguity (Constant et al., 2017) . For instance, Baldwin and Kim (2010) define MWEs as \"lexical items that: (i) can be decomposed into multiple lexemes; and (ii) display lexical, syntactic, semantic, pragmatic and/or statistical idiomaticity\". However, as noted by NLP researchers for example in (Constant et al., 2017) , there are very few bilingual or even multilingual parallel corpora with MWE annotations available for cross-lingual NLP research and for downstream applications such as machine translation (MT) .",
"cite_spans": [
{
"start": 204,
"end": 227,
"text": "(Constant et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 244,
"end": 266,
"text": "Baldwin and Kim (2010)",
"ref_id": "BIBREF0"
},
{
"start": 491,
"end": 514,
"text": "(Constant et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With regard to MWE research, verbal MWEs are a mature category that has received attention from many researchers (Maldonado et al., 2017) . Verbal MWEs have a verb as the head of the studied term and function as verbal phrases, such as \"kick the bucket\", \"cutting capers\" and \"go to one's head\". In this work, we present the construction of a multilingual corpus with vMWEs annotation, including English-Chinese, English-German and English-Polish language pairs. The same source monolingual corpus is in English with its vMWE tags from the shared task affiliated with the SIGLEX-MWE workshop in 2018 Ramisch et al., 2018) . Several state-of-the-art (SOTA) MT models were used to perform an automated translation, and then human post editing and annotation for the target languages was conducted with cross validation to ensure the quality, i.e., with each sentence receiving post-editing and rechecking by at least two people.",
"cite_spans": [
{
"start": 113,
"end": 137,
"text": "(Maldonado et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 600,
"end": 621,
"text": "Ramisch et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to get a deeper insight into the difficulties of processing MWEs we carried out a categorisation of the errors made by MT models when processing MWEs. From this we conclude that current state-of-the-art MT models are far from reaching parity with humans in terms of translation performance, especially on idiomatic MWEs, even for sentence level translation, although researchers sometimes claim otherwise Hassan et al., 2018) .",
"cite_spans": [
{
"start": 414,
"end": 434,
"text": "Hassan et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organised as follows. In the next section we present related work and then detail the corpus preparation stages including selection of MT models. We then look at the various kinds of issues that MT has with MWEs. This analysis, along with the public release of the corpora as a resource to the community, is the main contribution of the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are a number of existing studies which focus on the creation of monolingual corpora with vMWE annotations, such as the PARSEME shared task corpora (Savary et al., 2017; Ramisch et al., 2018) . The 2020 edition of this task covers 14 languages including Chinese, Hindi, and Turkish as non-European languages. Some work from monolingual English corpora includes the MWE aware \"English Dependency Corpus\" from the Linguistic Data Consortium (LDC2017T01) that covers compound words used to train parsing models. Also related to this are English MWEs from \"web reviews data\" by Schneider et al. (2014) that covers noun, verb and preposition super-senses and English verbal MWEs from and Kato et al. (2018) that covers PARSEME shared task defined vMWE categories. However, all these works were performed in monolingual settings, independently by different language speakers without any bilingual alignment. These corpora are helpful for monolingual MWE research such as discovery or identification, however, it would be difficult to use these corpora for bilingual or multilingual research such as MT or cross-lingual information extraction.",
"cite_spans": [
{
"start": 153,
"end": 174,
"text": "(Savary et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 175,
"end": 196,
"text": "Ramisch et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 579,
"end": 602,
"text": "Schneider et al. (2014)",
"ref_id": "BIBREF17"
},
{
"start": 688,
"end": 706,
"text": "Kato et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The work most related to ours is from Vincze (2012) , who created an English-Hungarian parallel corpus with annotations for light verb constructions (LVCs). As many as 703 LVCs for Hungarian and 727 for English were annotated in this work, and a comparison between English and Hungarian data was carried out. However, the work did not cover other types of vMWEs, for instance inherently adpositional verbs, verbal idioms, or verb-particle constructions, and it was not extended to any other language pairs. In our work, we annotate in a multilingual setting including far distance languages such as English, German, Polish and Chinese, in addition to the extension of vMWE categories. In other recent work Han et al. (2020) , we performed an automatic construction of bilingual MWE terms based on a parallel corpus, in this case English-Chinese and English-German. We first conducted automated extraction of monolingual MWEs based on part-of-speech (POS) patterns and then aligned the two side monolingual MWEs into bilingual terms based on statistical lexical translation probability. However, due to the automated procedure, the extracted bilingual \"MWE terms\" contain not only MWEs but also normal phrases. Part of the reason for this is due to the POS pattern design which is a challenging task for each language and needs to be further refined (Skadina, 2016; Rikters and Bojar, 2017; Han et al., 2020) . ",
"cite_spans": [
{
"start": 38,
"end": 51,
"text": "Vincze (2012)",
"ref_id": "BIBREF21"
},
{
"start": 706,
"end": 723,
"text": "Han et al. (2020)",
"ref_id": "BIBREF5"
},
{
"start": 1349,
"end": 1364,
"text": "(Skadina, 2016;",
"ref_id": "BIBREF18"
},
{
"start": 1365,
"end": 1389,
"text": "Rikters and Bojar, 2017;",
"ref_id": "BIBREF14"
},
{
"start": 1390,
"end": 1407,
"text": "Han et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section we describe our corpus preparation method, selection of the MT models used in our investigation, and the resulting open-source corpora AlphaMWE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Work",
"sec_num": "3"
},
{
"text": "To construct a well aligned multilingual parallel corpus, our approach is to take a monolingual corpus from the PARSEME vMWE discovery and identification shared task as our root corpus. Our rationale here is that this shared task is well established and its process of tagging and categorisation is clear. Furthermore, as we plan to extend the MWE categories in the future, we enrich the PARSEME shared task corpus with potential for other downstream research and applications, including bilingual and multilingual NLP models. The English corpus we used from the PARSEME shared task follows the annotation guidelines having a broad range of vMWE categories tagged. These include inherently adpositional verbs, light verb constructions, multi-verb constructions, verbal idioms, and verb-particle constructions. The English corpus contains sentences from several different topics, such as news, literature, and IT documents. For the IT document domain, vMWEs are usually easier or more straightforward to translate, with a high chance of repetition, e.g. \"apply filter\" and \"based on\". For the literature annotations, the vMWEs include richer samples with many idiomatic or metaphor expressions, such as \"cutting capers\" and \"gone slightly to someone's head\" that cause MT issues. Fig. 1 shows our workflow. This first used MT models to perform automated translation for the target language direction, then human post editing of the output hypotheses with annotation of the corresponding target side vMWEs which are aligned with the source English ones. Finally, to avoid human introduced errors, we apply a cross validation strategy, where each sentence receives at least a second person's quality checking after the first post-editing. Tagging errors are more likely to occur if only one human has seen each sentence (we discuss some error samples from English source corpus in later sections).",
"cite_spans": [],
"ref_spans": [
{
"start": 1279,
"end": 1285,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Corpus Preparation",
"sec_num": "3.1"
},
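To make the workflow concrete, here is a minimal Python sketch of the pipeline just described; `mt`, `editor_a` and `editor_b` are hypothetical stand-ins for the MT system and the two human passes, and none of this is part of the released AlphaMWE tooling:

```python
# Minimal sketch of the AlphaMWE construction workflow: MT hypothesis ->
# first-pass human post-editing plus vMWE annotation -> second-pass quality
# recheck by a different person. The callables are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class SentencePair:
    source: str                                    # English sentence
    source_vmwes: list = field(default_factory=list)
    hypothesis: str = ""                           # raw MT output
    target: str = ""                               # post-edited translation
    target_vmwes: list = field(default_factory=list)

def build_corpus(sentences, mt, editor_a, editor_b):
    corpus = []
    for pair in sentences:
        pair.hypothesis = mt(pair.source)          # automated translation
        editor_a(pair)                             # post-edit + annotate vMWEs
        editor_b(pair)                             # second-person quality check
        corpus.append(pair)
    return corpus
```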
{
"text": "We tested a number of example sentences from the English testset to compare state-of-theart MT from Microsoft Bing (Chowdhary and Greenwood, 2017) , GoogleMT (Vaswani et al., 2017) , Baidu Fanyi (Sun et al., 2019) , and DeepL 1 , as in Fig. 2 . We illustrate the comparative performances with two worked example translations. As a first example sentence, GoogleMT and Bing Translator have very similar outputs, where the MT output sentences try to capture and produce as much information as possible, but make the sentences redundant or awkward to read, such as the phrase \"\u9a8c\u8bc1... \u662f\u5426\u9a8c\u8bc1\u4e86 (y\u00e0n zh\u00e8ng ... Sh\u00ec f\u01d2u y\u00e0n zh\u00e8ng le)\" where they use a repeated word \"\u9a8c\u8bc1\" (y\u00e0n zh\u00e8ng, verify). Although the DeepL Translator does not produce a Two sample sentences' MT outputs comparison from head of test file Source # text = SQL Server verifies that the account name and password were validated when the user logged on to the system and grants access to the database, without requiring a separate logon name or password.",
"cite_spans": [
{
"start": 115,
"end": 146,
"text": "(Chowdhary and Greenwood, 2017)",
"ref_id": "BIBREF3"
},
{
"start": 158,
"end": 180,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 195,
"end": 213,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 236,
"end": 242,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "MT Model Selection",
"sec_num": "3.2"
},
{
"text": "[Figure 2: Two sample sentences' MT output comparison from the head of the test file. Example 1 source: \"SQL Server verifies that the account name and password were validated when the user logged on to the system and grants access to the database, without requiring a separate logon name or password.\"; the figure shows the DeepL, Google, Bing and Baidu outputs and the reference translation. Example 2 source: \"See the http://officeupdate.microsoft.com/, Microsoft Developer Network Web site for more information on TSQL.\"]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MT Model Selection",
"sec_num": "3.2"
},
{
"text": "DeepL perfect translation since it drops the source word \"validated\" which should be translated as \"\u6709 \u6548\u6027 (y\u01d2u xi\u00e0o x\u00ecng)\"(as one candidate translation), the overall output is fluent and the source sentence meaning is mostly preserved. Baidu translator yields the worst output in this example. It produces some words that were not in the source sentence (\u6216\u8005, hu\u00f2 zh\u011b, or), loses some important terms'translation from source sentence (\"SQL Server\", the subject of the sentence), and the reordering of the sentence fails resulting in an incorrect meaning (\"\u5728\u6ca1\u6709\u5bc6\u7801\u7684\u60c5\u51b5 \u4e0b, z\u00e0i m\u00e9i y\u01d2u m\u00ec m\u01ce de q\u00edng ku\u00e0ng xi\u00e0\" is moved from the end of the sentence to the front and made as a condition). So, for this case, DeepL performed best. For a second example sentence, GoogleMT confused the original term TSQL as SQL. Bing MT had a similar issue with the last example, i.e. it produced redundant information \"\u6709\u5173 (y\u01d2u gu\u0101n)\" (about/on). In addition it concatenated the website address and normal phrase \"\u4e86\u89e3\u6709 \u5173 (li\u01ceo ji\u011b y\u01d2u gu\u0101n)\" together with a hyperlink. GoogleMT and Bing both translate half of the source term/MWE \"Microsoft Developer Network Web\" as \"Microsoft \u5f00\u53d1\u4eba\u5458\u7f51\u7edc\u7f51\u7ad9\" (k\u0101i f\u0101 r\u00e9n yu\u00e1n w\u01ceng lu\u00f2 w\u01ceng zh\u00e0n) where they kept \"Microsoft\" but translated \"Developer Network Web\". Although this is one reasonable output since Microsoft is a general popular named entity while \"Developer Network Web\" consists of common words, we interpret \"Microsoft Developer Network Web\" as a named entity/MWE in the source sentence that consists of all capitalised words which would be better translated overall as \"\u5fae\u8f6f\u5f00\u53d1\u4eba\u5458\u7f51\u7edc\u7f51\u7ad9 (w\u0113i ru\u01cen k\u0101i f\u0101 r\u00e9n yu\u00e1n w\u01ceng lu\u00f2 w\u01ceng zh\u00e0n)\" or be kept as the original capitalised words as a foreign term in the output, which is how DeepL outputs this expression. However, Baidu totally drops out this MWE translation and another word translation is not accurate, \"more\" into \u8be6\u7ec6 (xi\u00e1ng x\u00ec). Based on these samples, we chose to use DeepL as the provider of our MT hypotheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MT Model Selection",
"sec_num": "3.2"
},
{
"text": "Regarding the size of the corpus, we extracted all 750 English sentences which have vMWE tags included. The target languages covered so far include Chinese, German and Polish with sample sentences in Appendix (Fig. 11 ). There are several situations and decisions that are worth noting: a) when the original English vMWEs are translated into a general phrase in the target language but not choosing sequence of MWEs, we tried to offer two different references, with one of them being revised in a vMWE/MWE presentation in the target; b) when the original English sentence terms were translated into the correct target language but in a different register, e.g. the source language has low register (thx, for instance), we offer two reference sentences, with one of them using the same low register and the other with (formal) full word spelling; c) for the situations where a single English word or normal phrase is translated into a typical vMWE in the target language, or both source and target sentences include vMWEs but the source vMWE was not annotated in the original English corpus, we made some additions to include such vMWE (pairs) into AlphaMWE; d) for some wrong/incorrect annotation in the source English corpus, or some mis-spelling of words, we corrected them in AlphaMWE; e) we chose English as root/source corpus, since the post-editing and annotation of target languages requires the human annotators to be fluent/native in both-side languages, and all editors were fluent in English as well as being native speakers in the specific target languages respectively. We examined the development and test data sets from the annual Workshop of MT (WMT) and also from the NIST MT challenges where they offered approximately 2,000 sentences for development/testing over some years. This means that our bilingual/multilingual corpora with 750 sentences is comparable to such standard shared task usage.",
"cite_spans": [],
"ref_spans": [
{
"start": 209,
"end": 217,
"text": "(Fig. 11",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Result: AlphaMWE",
"sec_num": "3.3"
},
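As a rough illustration of the extraction step, the sketch below pulls out the sentences carrying vMWE tags from a PARSEME-style .cupt file (CoNLL-U plus a final PARSEME:MWE column, where "*" marks untagged tokens); the file name is illustrative only:

```python
# Sketch: select sentences whose PARSEME:MWE column carries a vMWE tag.
# Assumes the standard .cupt layout: tab-separated token lines, "#" comment
# lines, blank lines between sentences, FORM in column 2, MWE tag last.
def sentences_with_vmwes(path):
    selected, tokens, tagged = [], [], False
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                        # sentence boundary
                if tagged:
                    selected.append(" ".join(tokens))
                tokens, tagged = [], False
            elif not line.startswith("#"):
                cols = line.split("\t")
                tokens.append(cols[1])          # FORM column
                if cols[-1] not in ("*", "_"):  # PARSEME:MWE tag present
                    tagged = True
    if tagged:                                  # file may lack a final blank line
        selected.append(" ".join(tokens))
    return selected

# e.g. len(sentences_with_vmwes("en.cupt")) -> ~750 for the corpus used here
```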
{
"text": "We performed an analysis of the behaviour of various MT systems when required to translate MWEs or MWEs related context. Due to space limitations, in this paper we focus on the English\u2192Chinese language pair. We also highlight some issues on English\u2192German and En-glish\u2192Polish in the next section, but leave the detailed analysis of other language pairs for future work. When MT produces incorrect or awkward translations this can fall into many different categories, and from our analysis we classify them as: common sense, super sense, abstract phrase, idiom, metaphor and ambiguity, with ambiguity further sub-divided. These classifications are to be further refined in the future, e.g. the differences between metaphor and idiom are usually fuzzy. We now list each of these with examples to support future MT research on improving the quality of MT when handling MWEs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MT Issues with MWEs",
"sec_num": "4"
},
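One way such a taxonomy might be encoded for annotation or evaluation tooling is sketched below; the enum is our own illustrative construction, not part of the released corpora:

```python
# Illustrative encoding of the MT/MWE error taxonomy described above,
# with ambiguity split into its three sub-classes.
from enum import Enum

class MWEErrorType(Enum):
    COMMON_SENSE = "common sense"
    SUPER_SENSE = "super sense"
    ABSTRACT_PHRASE = "abstract phrase"
    IDIOM = "idiom"
    METAPHOR = "metaphor"
    CONTEXT_UNAWARE_AMBIGUITY = "context-unaware ambiguity"
    SOCIAL_LITERATURE_UNAWARE_AMBIGUITY = "social/literature-unaware ambiguity"
    COHERENCE_UNAWARE_AMBIGUITY = "coherence-unaware ambiguity"

# Example label: ("waved down", "DeepL", MWEErrorType.COMMON_SENSE)
```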
{
"text": "The first error category is the common sense issue. For instance, the sentence in Fig. 3 includes the vMWE \"waved down\" which in general understanding indicates that \"he succeeded in getting the cab\" and not only \"waved his hand\". However, in the translation by DeepL and Bing this vMWE was wrongly translated as \"he waved his hand to the cab\" missing part of the original meaning; the MT output by GoogleMT is also incorrect, saying \"he waves with the cab in hand\";",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 88,
"text": "Fig. 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Common Sense",
"sec_num": "4.1"
},
{
"text": "Source Each time he took a walk, he felt as though he were leaving himself behind, and by giving himself up to the movement of the streets, by reducing himself to a seeing eye, he was able to escape the obligation to think, and this, more than anything else, brought him a measure of peace, a salutatory emptiness within. \u594e\u6069\u66fe\u6709\u4ed6\u7684\u7591\u8651\uff0c\u4f46\u8fd9\u662f\u4ed6\u5f00\u5c55\u2f2f\u5de5\u4f5c\u7684\u6240\u6709\u4f9d\u636e\uff0c\u662f\u4ed6\u901a\u5f80\u73b0\u5728\u7684\u552f\u2f00\u4e00\u6865\u6881\u3002(k\u0101i zh\u01cen g\u014dng zu\u00f2 de su\u01d2 y\u01d2u y\u012b j\u00f9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Common Sense",
"sec_num": "4.1"
},
{
"text": "Figure 5: MT issues with MWEs: abstract phrase the Baidu translation of this sentence is semantically correct that \"he waved and got one cab\" though it does not use a corresponding Chinese side vMWE \" \u62db\u624b\u793a\u505c (zh\u0101o sh\u01d2u sh\u00ec t\u00edng)\" 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Common Sense",
"sec_num": "4.1"
},
{
"text": "For this category of translation issue, it is related to a form of state of mind and we need to make a logical prediction to guess the positiveness or negativeness of some words, in the choice of Chinese characters. As in Fig. 4 , the MT systems each have advantages for different parts of this long sentence. However, none of them is perfect. For instance, for the translation of vMWE \"giving (himself) up (to)\", the DeepL and Baidu outputs give very literal translation by saying \"he gives himself to\", the Bing translator drops the vMWE, while GoogleMT preserves the correct meaning in the translation \"\u6295\u8eab\u4e8e (t\u00f3u sh\u0113n y\u00fa)\" from the reference indicating \"he devoted himself\". However, GoogleMT's output for the phrase \"salutatory emptiness within\" is very poor and makes no sense; the reference is \"the emptiness that he welcomes\" for which Baidu has a closer translation \"\u5185\u5728\u7684\u81f4\u610f\u7684\u7a7a\u865a (n\u00e8i z\u00e0i de zh\u00ec y\u00ec de k\u014dng x\u016b)\". All four MT outputs also use the same Chinese words \"\u7a7a\u865a (k\u014dng x\u016b)\" which is a term with negative meaning, however, the sentence indicates that he is welcoming this emptiness, which should be the corresponding Chinese words \"\u7a7a\u65e0 (k\u014dng w\u00fa)\", an unbiased or positive meaning.",
"cite_spans": [],
"ref_spans": [
{
"start": 222,
"end": 228,
"text": "Fig. 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Super Sense",
"sec_num": "4.2"
},
{
"text": "The abstract phrases can have different exact meanings and we usually need some background information from the sentence or paragraph to select the correct word choices in the target Source I was smoking my pipe quietly by my dismantled steamer, and saw them all cutting capers in the light, with their arms lifted high, when the stout man with mustaches came tearing down to the river, a tin pail in his hand, assured me that everybody was 'behaving splendidly, splendidly, dipped about a quart of water and tore back again. Figure 6 : MT issues with MWEs: idioms language 3 . With the example sentence in Fig. 5 , from the context, we know that \"go on\" in this sentence means \"to work from\" using all the information he had. The phrase \"this was all he had to go on\" is then to be interpreted as \"this is all the information he had to work from\". At the end of the sentence, \"the present\" is the \"present person\" he needs to look for (with the picture of this person's younger age portrait). However, Bing translated it as \"this is (where) he had to go\" which is an incorrect interpretation of \"had to go\"; furthermore, Bing's translation of the second half of the sentence kept the English order, without any reordering between the words, which is grammatically incorrect in Chinese, i.e. \"\u4ed6\u552f\u4e00\u7684\u6865\u6881\u5230\u73b0\u5728 (t\u0101 w\u00e9i y\u012b de qi\u00e1o li\u00e1ng d\u00e0o xi\u00e0n z\u00e0i)\". GoogleMT and Baidu translated it as \"what he need to do\" which is also far from correct, while DeepL successfully translated the part \"his only thing to relying on\" but dropped the phrase \"go on\", i.e., to do what. Abstract Phrase can include Super Sense as its sub-category, however, it does not necessarily relate to a state of mind.",
"cite_spans": [],
"ref_spans": [
{
"start": 526,
"end": 534,
"text": "Figure 6",
"ref_id": null
},
{
"start": 607,
"end": 613,
"text": "Fig. 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Abstract Phrase",
"sec_num": "4.3"
},
{
"text": "The use of idioms often causes wrongly translated sentences, mostly resulting in humorous output due to literal translation. For example, in the sentence in Fig. 6 , the vMWEs \"cutting capers\" and \"tore back\" are never translated correctly at the same time by any of the four MT models we used. The idiom \"cutting capers\" indicates frolic or romp, to \"act in the manner of a young goat clumsily frolicking about\" and here it means \"they are in a happy mood, playful and lively movement\" which should properly be translated as the corresponding Chinese idiom \"\u6b22\u547c\u96c0\u8dc3 (hu\u0101n h\u016b qu\u00e8 yu\u00e8, happily jumping around like sparrows)\". However, all four MT models translated it literally into \"cutting\" actions just with different subjects, i.e., what they cut. The idiom (slang) \"tore back\" means the stout man walked back rapidly, which the Baidu translation gives the closest translation as \"\u5f80\u56de\u8dd1 (w\u01ceng hu\u00ed p\u01ceo, run back)\" but the other three models translated into an action \"tear something (to be broken)\" which is incorrect.",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 163,
"text": "Fig. 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Idioms",
"sec_num": "4.4"
},
{
"text": "The first sentence vMWE \"blown to bits\" in Fig. 7 is a metaphor to indicate \"everything is gone\", instead of the physical \"blowing action\". However, the three MT models DeepL, GoogleMT and Source The what? Auster laughed, and in that laugh everything was suddenly blown to bits. The chair was comfortable, and the beer had gone slightly to his head. Context An old Mormon missionary in Nauvoo once gripped my knee hard as we sat side by side, and he put his arm about me and called me \"Brother.\" We'd only met ten minutes before. He took me to his good bosom. His eyes began to mist. I was a prospect, an exotic prospect in old tennis shoes and a sweatshirt. His heart opened to me. It opened like a cuckoo clock. But it did not \u2026 Figure 8 : MT issues with MWEs: context-unaware ambiguity Baidu translate it as \"exploded into pieces (by bombs)\", while BingMT translates it even more literally into \"blown to (computer) bits\". There is a corresponding Chinese vMWE \"\u5316\u4e3a\u4e4c\u6709 (hu\u00e0 w\u00e9i w\u016b y\u01d2u, vanish into nothing)\" which would be a proper choice for this source vMWE translation. The second sentence vMWE \"gone (slightly) to his head\" is a metaphor to indicate \"got slightly drunk\". However, all four MT models translate it as physically \"beer moved to his head\" but by slightly different means such as flow or flutter. The corresponding translation as a MWE should be \"\u5fae\u5fae\u8ba9\u4ed6\u4e0a\u4e86\u5934 (w\u00e9i w\u00e9i r\u00e0ng t\u0101 sh\u00e0ng le t\u00f3u)\", using the same characters, but the character order here makes so much difference, meaning \"slightly drunk\".",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 49,
"text": "Fig. 7",
"ref_id": null
},
{
"start": 731,
"end": 739,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Metaphor",
"sec_num": "4.5"
},
{
"text": "We encountered different kinds of situation that cause ambiguity in the resulting translation when it meets MWEs or named entities, so we further divide ambiguity into three sub-classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ambiguity",
"sec_num": "4.6"
},
{
"text": "In this case, the context, i.e. the background information, is needed for correct translation of the sentence. For instance, see Fig. 8 . DeepL gives the translation \"it did not give me time though\", while Bing and GoogleMT give the same translation \"it/this did not give me one day's time\" and Baidu outputs a grammatically incorrect sentence. From the pre-context, we understand that it means the speaker \"did not feel that is special to him\" or \"did not have affection of that\" after all the Mormon missionary's effort towards him. Interestingly, there is a popular Chinese idiom (slang) that matches this meaning very well \"\u4e0d\u662f\u6211\u7684\u83dc (b\u00f9 sh\u00ec w\u01d2 Source The moment they know the de-gnoming's going on they storm up to have a look. Then someone says that it can't be long now before the Russians write Arafat off. Figure 9: MT issues with MWEs: social/literature-unaware ambiguity de c\u00e0i, literally not my dish)\". From this point of view, the context based MT model deserves some more attentions, instead of only focusing on sentence level. When we tried to put all background context information as shown in Fig.8 into the four MT models, they produce as the same output for this studied sentence, as for sentence level MT. This indicates that current MT models still focus on sentence-by-sentence translation when meeting paragraphs, instead of using context inference.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 135,
"text": "Fig. 8",
"ref_id": null
},
{
"start": 1106,
"end": 1111,
"text": "Fig.8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Context-Unaware Ambiguity",
"sec_num": "4.6.1"
},
{
"text": "In this case, social knowledge of current affairs from news, or literature knowledge about some newly invented entities / phrases are required in order to get a correct translation output. For instance, Fig. 9 includes two sentences, one from politics and another from literature. In the first sentence, \"de-gnoming\" is a literature word from Harry Potter, invented by its author, to refer to the process of ridding a garden of gnomes, a small magical beast. Without this literature knowledge it is not possible to translate the sentence correctly. For instance, even though this sentence is from a very popular novel that has been translated into most languages, DeepL translated it as \"\u53bb\u6838 (q\u00f9 h\u00e9, de-nuclear)\", Bing translated it as \"\u53bb\u8bfa\u683c\u660e (q\u00f9 nu\u00f2 g\u00e9 m\u00edng, de-nu\u00f2g\u00e9m\u00edng\" where \"nu\u00f2g\u00e9m\u00edng\" is a simulation of the pronunciation of \"gnoming\" in a Chinese way, Baidu translated it as \"\u5fb7\u683c\u8bfa\u660e (d\u00e9 g\u00e9 nu\u00f2 m\u00edng)\" which is the simulation of the pronunciation of the overall term \"de-gnoming\".",
"cite_spans": [],
"ref_spans": [
{
"start": 203,
"end": 209,
"text": "Fig. 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Social/Literature-Unaware Ambiguity",
"sec_num": "4.6.2"
},
{
"text": "In the second sentence, \"write Arafat off\" is to dismiss \"Yasser Arafat\", Chairman of the Palestine Liberation Organization, who is a historical person's name. However, all three models DeepL, Bing, and GoogleMT translated it into \"\u628a/\u5c06\u963f\u62c9\u6cd5\u7279\u6ce8\u9500 (b\u01ce/ji\u0101ng \u0101 l\u0101 f\u01ce t\u00e8 zh\u00f9 xi\u0101o, deregister Arafat)\" which treated \"Arafat\" as a tittle of certain policy/proceeding, not being able to recognize it as a personal named entity, while Baidu made the effort to use the Chinese idiom \"\u4e00\u7b14\u52fe\u9500 (y\u012b b\u01d0 g\u014du xi\u0101o, cancel everything, or never mention historical conflicts)\" for \"write off\" but it is not a correct translation. Interestingly, if we put these two sentences into a web search engine it retrieves the correct web pages as context in the top list of the search result. This may indicate that future MT models could consider to include web search results as part of their knowledge of background for translation purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Social/Literature-Unaware Ambiguity",
"sec_num": "4.6.2"
},
{
"text": "This kind of MWE ambiguity can be solved by the coherence of the sentence itself, for instance, the example in Fig. 10 . The four MT models all translated the vMWE itself \"have an operation\" correctly in meaning preservation by \"\u505a/\u63a5\u53d7/\u52a8\u624b\u672f (zu\u00f2/ji\u0113 sh\u00f2u/d\u00f2ng sh\u01d2u sh\u00f9)\" just with different Chinese word choices. However, none of the MT models translated the \"reason of the operation\", i.e., \"complaint\" correctly. The word complaint has two most commonly Figure 10: MT issues with MWEs: coherence-unaware ambiguity used meanings \"a statement that something is unsatisfactory or unacceptable\" or \"an illness or medical condition\" and all four models chose the first one. According to simple logic of social life, people do not need to \"have an operation\" due to \"a statement\", instead their \"medical condition\" should have been chosen to translate the word \"complaint\". Because of the incorrectly chosen candidate translation of the word \"complaint\", Bing's output even invented a new term in Chinese \"\u6295\u8bc9\u624b\u672f (t\u00f3u s\u00f9 sh\u01d2u sh\u00f9, a surgery of complaint statement kind)\" which makes no sense.",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Fig. 10",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Coherence-Unaware Ambiguity",
"sec_num": "4.6.3"
},
{
"text": "In this paper, we presented the construction of multilingual parallel corpora, AlphaMWE, with vMWEs as pioneer annotations by native speakers of the corresponding languages. We described the procedure of MT model selection, human post editing and annotation, and compared different state-of-the-art MT models and classified the MT errors from vMWEs related sentence/context translations. We characterised the errors into different categories to help MT research to focus on one or more of them to improve the performance of MT. We performed the same process as described here for English\u2192Chinese, English\u2192German and English\u2192Polish and similarly categorised the MT issues when handling MWEs. The En-glish\u2192German issues can be categorized into: (1) there are cases where the corresponding German translation of English MWEs can be one word, which is partially because that German has separable verbs, (2) the automated translation to German is biased towards choosing the polite or formal form of the words, which is generally fine but depends on the context to decide which form is more suitable, and (3) English vMWEs are often not translated as vMWEs to German. In the main, English\u2192Polish MT errors fall into the category of coherence-unaware errors, literal translation errors and context unaware situation errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "We name our process as AlphaMWE to indicate that we will continue to maintain the developed corpora which are publicly available and extend them into other possible language pairs, e.g. Spanish, French and Italian (under-development) . We also plan to extend the annotated MWE genres beyond the vMWEs defined in the PARSEME shared task.",
"cite_spans": [
{
"start": 195,
"end": 233,
"text": "French and Italian (under-development)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "The chair was comfortable, and the beer had gone slightly to his head. I was smoking my pipe quietly by my dismantled steamer, and saw them all cutting capers in the light, with their arms lifted high, when the stout man with mustaches came tearing down to the river, a tin pail in his hand, assured me that everybody was 'behaving splendidly, splendidly, dipped about a quart of water and tore back again. (the italic was not annotated in source English) Krzes\u0142o by\u0142o wygodne, a piwo lekko uderzy\u0142o mu do g\u0142owy. [ sourceVMWE: gone (slightly) to his head] [targetVMWE: (lekko) uderzy\u0142o mu do g\u0142owy] Cicho pali\u0142em swoj\u0105 fajk\u0119 przy zdemontowanym parowcu i widzia\u0142em, jak wszyscy pl\u0105saj\u0105 w \u015bwietle, z podniesionymi wysoko ramionami, gdy twardziel z w\u0105sami przyszed\u0142 szybkim krokiem do rzeki, blaszany wiaderko w d\u0142oni, zapewni\u0142 mnie, \u017ce wszyscy \"zachowuj\u0105 si\u0119 wspaniale, wspaniale, nabra\u0142 oko\u0142o \u0107wiartk\u0119 wody i zawr\u00f3ci\u0142 szybkim krokiem\". [sourceVMWE: cutting capers; tearing down; tore back][targetVMWE: pl\u0105saj\u0105; przyszed\u0142 szybkim krokiem; zawr\u00f3ci\u0142 szybkim krokiem] AlphaMWE corpora examples from multilingual parallel files. \"cutting capers\" was annotated as VID type of MWEs, while \"tearing down\" and \"tore back\" were not annotated in the source English corpus. We added them into AlphaMWE multilingual corpora since they do cause translation errors for most state-of-the-art MT models. The bilingual MWEs are aligned with their appearance order from sentence inside the afterwards attached bracket-pairs. Figure 11 : AlphaMWE corpora samples with two sentences (VLC.cause). However, the phrase is with \"put...on\" instead of \"put...trance\". \"put someone into a trance\" is a phrase to express \"make someone into a half-conscious state\". However, for this sentence, if we check back a bit further of the context, it means \"he put on his cloth in a kind of trance\". The word \"trance\" is affiliated with the phrase \"in a kind of trance\" instead of \"put\".",
"cite_spans": [],
"ref_spans": [
{
"start": 1505,
"end": 1514,
"text": "Figure 11",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Plain English Corpus",
"sec_num": null
},
{
"text": "https://www.deepl.com/en/translator (All testing was performed in 2020/07 from 4 MT models)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We give full sentence pronunciation (in Pinyin) of Chinese characters in this figure, for the following examples, we only present the Chinese Pinyin for MWEs and studied words of the sentences to save space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "it sometimes belongs to the context-unaware ambiguity (CUA) that we will mention later, however, CUA not necessarily means \"abstract phrase\", and usually needs paragraph information, not only sentence level. Furthermore, in some situations, we just don't know how to interpret \"abstract phrase\", i.e. the candidate interpretations are unknown without context, and this is different from ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "a proper translation: Nastroszy\u0142a sobie pi\u00f3ra i rzuci\u0142a mu spojrzenie g\u0142\u0119bokiego obrzydzenia. Also the MT output word for \"Nastroszy\u0142a\" was \"Zdruzgota\u0142a\" which is wrong meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund. The input of Alan Smeaton is part-funded by SFI under grant number SFI/12/RC/2289 (Insight Centre). The authors are very grateful to their colleagues who helped to create the AlphaMWE corpora by post editing and annotation work across all language pairs, to Yi Lu and Dr. Paolo Bolzoni for helping with the experiments, to Lorin Sweeney, Roise McGagh, and Eoin Treacy for discussions about English MWEs and terms, Yandy Wong for discussion of Cantonese examples, and Hailin Hao and Yao Tong for joining the first discussion. The authors also thank the anonymous reviewers for their thorough reviews and insightful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "As shown in the examples (Fig. 11 ) from Chinese, German and Polish, all involved languages are sentence by sentence aligned, including the vMWEs paired with order which are put behind the sentences into the bracket pairs. AlphaMWE also includes statistics of the annotated vMWEs, and a multilingual vMWEs glossary. The AlphaMWE corpora are divided evenly into five portions which were designed in the post-editing and annotation stage. As a result, it is convenient for researchers to use them for testing NLP models, choosing any subset portion or combination, or cross validation usage.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 33,
"text": "(Fig. 11",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix A: AlphaMWE Corpus Presentation Examples.",
"sec_num": null
},
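A rough sketch for reading the bracketed annotations shown in Fig. 11 follows; the regular expression reflects our reading of the samples (semicolon-separated MWEs inside [sourceVMWE: ...][targetVMWE: ...] pairs) and may need adjusting to the exact layout of the released files:

```python
import re

# Parse the "[sourceVMWE: ...] [targetVMWE: ...]" annotations attached after
# each sentence pair, pairing MWEs by their order of appearance. The format
# details are an assumption based on the Fig. 11 samples, not a specification.
VMWE_RE = re.compile(r"\[(source|target)VMWE:\s*([^\]]+)\]")

def parse_vmwe_pairs(annotation_line):
    src, tgt = [], []
    for side, payload in VMWE_RE.findall(annotation_line):
        items = [m.strip() for m in payload.split(";")]
        (src if side == "source" else tgt).extend(items)
    return list(zip(src, tgt))

# parse_vmwe_pairs("[sourceVMWE: cutting capers; tore back]"
#                  "[targetVMWE: plasaja; zawrocil szybkim krokiem]")
# -> [("cutting capers", "plasaja"), ("tore back", "zawrocil szybkim krokiem")]
```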
{
"text": "Some error annotations of vMWEs in source monolingual corpus surly have some impact on the accuracy level of the vMWE discovery and identification shared task, but also affect the bilingual usage of AlphaMWE, so we tried to address all these cases. For instance, the example sentence in Fig. 5 , English corpus annotated wrongly the sequence \"had to go on\" as a verbal idioms (VIDs) which is not accurate. The verb \"had\" here is affiliated with \"all he had\" instead of \"to go on\". So either we shall annotate \"go on\" as vMWE in the sentence or the overall clause \"all he had to go on\" as a studied term.Another example with a different type of vMWE is the sentence \"He put them on in a kind of trance.\" where the source English corpus tagged \"put\" and \"trance\" as Light-verb construction Appendix B: English\u2192German/Polish MT Samples Reflecting Afore Mentioned MWE Related Issues.",
"cite_spans": [],
"ref_spans": [
{
"start": 287,
"end": 293,
"text": "Fig. 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Examples from English Corpus Fixed in AlphaMWE",
"sec_num": null
},
{
"text": "Firstly, for the English vMWE translates into single German word, let's see the vMWE \"woke up\" the sentence \"An old woman with crinkly grey hair woke up at her post outside the lavatory and opened the door, smiling and grasping a filthy cleaning rag.\" has corresponding German aligned word \"erwachte\" with a suitable translation \"Eine alte Frau mit krausem, grauem Haar erwachte auf ihrem Posten vor der Toilette und \u00f6ffnete die T\u00fcr, l\u00e4chelte und griff nach einem schmutzigen Putzlappen.\". This also occurs in English to Chinese translation, such as an English verb+particle MWE getting aligned to one single Chinese character/word. For example, in this sentence \"The fact that my name has been mixed up in this.\", the vMWE (VPC) mixed up gets aligned to single character word \"\u6df7 (h\u00f9n)\" in a suitable translation \"\u4e8b\u5b9e\u4e0a\uff0c\u6211\u7684\u540d\u5b57\u5df2\u7ecf\u88ab\u6df7\u5728\u8fd9\u91cc\u9762\u4e86\u3002 (Sh\u00ec sh\u00ed sh\u00e0ng, w\u01d2 de m\u00edng z\u00ec y\u01d0 j\u012bng b\u00e8i h\u00f9n z\u00e0i zh\u00e8 l\u01d0 mi\u00e0n le)\".Secondly, for the automatic translation to German that is very biased towards choosing the polite or formal form, see the examples such as \"Sie\"instead of the second form singular \"du\"for \"you\", \"auf Basis von\" instead of \"basierend auf\" for \"based on\". To achieve a higher accuracy level of MT, it shall depend on the context of usage to decide which form is more suitable. Thirdly, for the English verbal multiword expressions that are often not translated as verbal multiword expressions to German. This indicates some further work to explore by MT researchers to develop better models to have the machine producing corresponding German existing MWEs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English\u2192German",
"sec_num": null
},
{
"text": "Regarding the MT output issues on English to Polish that fall into coherence-unaware error, for instance, the vMWE \"write off\" in sentence \"Then someone says that it can't be long now before the Russians write Arafat off.\" was translated as \"Wypisz\u0105\" (Potem kto\u015b m\u00f3wi, \u017ce ju\u017c nied\u0142ugo Rosjanie wypisz\u0105 Arafata.) which means \"prescribe\", instead of correct one \"spisz\u0105 na straty (Arafata)\". This error shall be able to avoid by the coherence of the sentence itself in meaning preservation models.For the literal translation, we can see the example vMWE \"gave (him) a look\" in the sentence \"She ruffled her feathers and gave him a look of deep disgust.\" which was literally translated as \"da\u0142a mu spojrzenie\", however, in Polish, people use \"throw a look\" as \"rzuci\u0142a (mu) spojrzenie\" instead of \"gave (da\u0142a, a female form)\" 4 . Another example of literal translation leading to errors is the vMWE \"turn the tables\" from sentence \"Now Iran wants to turn the tables and is inviting cartoonists to do their best by depicting the Holocaust.\" which is translated as \"odwr\u00f3ci\u0107 stoliki (turn tables)\", however, it shall be \"odwr\u00f3ci\u0107 sytuacj\u0119 (turn the situation)\" or \"odwr\u00f3ci\u0107 rol\u0119 (turn role)\" with a proper translation \"Teraz Iran chce odwr\u00f3ci\u0107 sytuacj\u0119 i zach\u0119ca rysownik\u00f3w, by zrobili wszystko, co w ich mocy, przedstawiaj\u0105c Holocaust.\" These two examples present the localization issue in the target language.For the context unaware issue, we can look back to the example sentence \"But it did not give me the time of day.\" from Fig. 8 . It was literally translated word by word into \"Ale nie da\u0142o mi to pory dnia.\" which is in the sense of hour/time. However, it shall be \"Nie s\u0105dz\u0119 aby to by\u0142o co\u015b wyj\u0105tkowo/szczeg\u00f3lnie dla mnie. (I do not think this is special to me.)\" based on the context, or \"Ale to nie moja bajka\" as an idiomatic expression which means \"not my fairy tale\" (indicating not my cup of tea). (Fig.12) Figure 12 : AlphaMWE corpora initial contact list (with alphabetical order)",
"cite_spans": [],
"ref_spans": [
{
"start": 1525,
"end": 1531,
"text": "Fig. 8",
"ref_id": null
},
{
"start": 1909,
"end": 1917,
"text": "(Fig.12)",
"ref_id": null
},
{
"start": 1918,
"end": 1927,
"text": "Figure 12",
"ref_id": null
}
],
"eq_spans": [],
"section": "English\u2192Polish",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multiword expressions",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Su",
"middle": [
"Nam"
],
"last": "Kim",
"suffix": ""
}
],
"year": 2010,
"venue": "Handbook of Natural Language Processing",
"volume": "",
"issue": "",
"pages": "267--292",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin and Su Nam Kim. 2010. Multiword expressions. In Handbook of Natural Language Processing, Second Edition, pages 267-292. Chapman and Hall.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Findings of the 2017 conference on machine translation (WMT17)",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Shujian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Rubino",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "169--214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 conference on machine translation (WMT17). In Proceedings of the Second Conference on Machine Translation, pages 169-214, Copenhagen, Denmark, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Identifying bilingual multi-word expressions for statistical machine translation",
"authors": [
{
"first": "Dhouha",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Nasredine",
"middle": [],
"last": "Semmar",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
}
],
"year": 2012,
"venue": "Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dhouha Bouamor, Nasredine Semmar, and Pierre Zweigenbaum. 2012. Identifying bilingual multi-word expressions for statistical machine translation. In Conference on Language Resources and Evaluation.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Emt: End to end model training for msr machine translation",
"authors": [
{
"first": "Vishal",
"middle": [],
"last": "Chowdhary",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Greenwood",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 1st Workshop on Data Management for End-to-End Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vishal Chowdhary and Scott Greenwood. 2017. Emt: End to end model training for msr machine trans- lation. In Proceedings of the 1st Workshop on Data Management for End-to-End Machine Learning.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Survey: Multiword expression processing: A Survey",
"authors": [
{
"first": "Mathieu",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "G\u00fcl\u015fen",
"middle": [],
"last": "Eryi\u01e7it",
"suffix": ""
},
{
"first": "Johanna",
"middle": [],
"last": "Monti",
"suffix": ""
},
{
"first": "Lonneke",
"middle": [],
"last": "Van Der Plas",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Ramisch",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Rosner",
"suffix": ""
},
{
"first": "Amalia",
"middle": [],
"last": "Todirascu",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "4",
"pages": "837--892",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathieu Constant, G\u00fcl\u015fen Eryi\u01e7it, Johanna Monti, Lonneke van der Plas, Carlos Ramisch, Michael Ros- ner, and Amalia Todirascu. 2017. Survey: Multiword expression processing: A Survey. Computational Linguistics, 43(4):837-892.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "MultiMWE: Building a multi-lingual multiword expression (MWE) parallel corpora",
"authors": [
{
"first": "Lifeng",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Gareth",
"middle": [
"J F"
],
"last": "Jones",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Smeaton",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "2970--2979",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lifeng Han, Gareth J. F. Jones, and Alan Smeaton. 2020. MultiMWE: Building a multi-lingual multi- word expression (MWE) parallel corpora. In Proceedings of The 12th Language Resources and Evalu- ation Conference, pages 2970-2979, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [
"B"
],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho- rat, Fernanda B. Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's multilingual neural machine translation system: Enabling zero-shot translation. CoRR, abs/1611.04558.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Construction of Large-scale English Verbal Multiword Expression Annotated Corpus",
"authors": [
{
"first": "Akihiko",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akihiko Kato, Hiroyuki Shindo, and Yuji Matsumoto. 2018. Construction of Large-scale English Verbal Multiword Expression Annotated Corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May 7-12, 2018.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Data Inferred Multi-word Expressions for Statistical Machine Translation",
"authors": [
{
"first": "Patrik",
"middle": [],
"last": "Lambert",
"suffix": ""
},
{
"first": "Rafael",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Machine Translation Summit X",
"volume": "",
"issue": "",
"pages": "396--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrik Lambert and Rafael E. Banchs. 2005. Data Inferred Multi-word Expressions for Statistical Machine Translation. In Proceedings of Machine Translation Summit X, pages 396-403, Thailand.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Neural name translation improves neural machine translation",
"authors": [
{
"first": "Xiaoqing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jinghui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2019,
"venue": "Machine Translation",
"volume": "",
"issue": "",
"pages": "93--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqing Li, Jinghui Yan, Jiajun Zhang, and Chengqing Zong. 2019. Neural name translation improves neural machine translation. In Jiajun Chen and Jiajun Zhang, editors, Machine Translation, pages 93-100, Singapore. Springer Singapore.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Detection of verbal multi-word expressions via conditional random fields with syntactic dependency features and semantic re-ranking",
"authors": [
{
"first": "Alfredo",
"middle": [],
"last": "Maldonado",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Erwan",
"middle": [],
"last": "Moreau",
"suffix": ""
},
{
"first": "Ashjan",
"middle": [],
"last": "Alsulaimani",
"suffix": ""
},
{
"first": "Koel",
"middle": [],
"last": "Dutta Chowdhury",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "The 13th Workshop on Multiword Expressions @ EACL 2017. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alfredo Maldonado, Lifeng Han, Erwan Moreau, Ashjan Alsulaimani, Koel Dutta Chowdhury, Carl Vogel, and Qun Liu. 2017. Detection of verbal multi-word expressions via conditional random fields with syntactic dependency features and semantic re-ranking. In The 13th Workshop on Multiword Expressions @ EACL 2017. ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "English-derived multi-word and phraseological units across languages in the global anglicism database",
"authors": [
{
"first": "Virginia",
"middle": [],
"last": "Pulcini",
"suffix": ""
}
],
"year": 2020,
"venue": "Textus, English Studies in Italy",
"volume": "1",
"issue": "2020",
"pages": "127--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Virginia Pulcini. 2020. English-derived multi-word and phraseological units across languages in the global anglicism database. Textus, English Studies in Italy, (1/2020):127-143.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Edition 1.1 of the PARSEME shared task on automatic identification of verbal multiword expressions",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Ramisch",
"suffix": ""
},
{
"first": "Silvio",
"middle": [
"Ricardo"
],
"last": "Cordeiro",
"suffix": ""
},
{
"first": "Agata",
"middle": [],
"last": "Savary",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Vincze",
"suffix": ""
},
{
"first": "Verginica",
"middle": [],
"last": "Barbu Mititelu",
"suffix": ""
},
{
"first": "Archna",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "Maja",
"middle": [],
"last": "Buljan",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Candito",
"suffix": ""
},
{
"first": "Polona",
"middle": [],
"last": "Gantar",
"suffix": ""
},
{
"first": "Voula",
"middle": [],
"last": "Giouli",
"suffix": ""
},
{
"first": "Tunga",
"middle": [],
"last": "G\u00fcng\u00f6r",
"suffix": ""
},
{
"first": "Abdelati",
"middle": [],
"last": "Hawwari",
"suffix": ""
},
{
"first": "Uxoa",
"middle": [],
"last": "I\u00f1urrieta",
"suffix": ""
},
{
"first": "Jolanta",
"middle": [],
"last": "Kovalevskait\u0117",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Krek",
"suffix": ""
},
{
"first": "Timm",
"middle": [],
"last": "Lichte",
"suffix": ""
},
{
"first": "Chaya",
"middle": [],
"last": "Liebeskind",
"suffix": ""
},
{
"first": "Johanna",
"middle": [],
"last": "Monti",
"suffix": ""
},
{
"first": "Carla",
"middle": [],
"last": "Parra Escart\u00edn",
"suffix": ""
},
{
"first": "Behrang",
"middle": [],
"last": "QasemiZadeh",
"suffix": ""
},
{
"first": "Renata",
"middle": [],
"last": "Ramisch",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Ivelina",
"middle": [],
"last": "Stoyanova",
"suffix": ""
},
{
"first": "Ashwini",
"middle": [],
"last": "Vaidya",
"suffix": ""
},
{
"first": "Abigail",
"middle": [],
"last": "Walsh",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions",
"volume": "",
"issue": "",
"pages": "222--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos Ramisch, Silvio Ricardo Cordeiro, Agata Savary, Veronika Vincze, Verginica Barbu Mititelu, Archna Bhatia, Maja Buljan, Marie Candito, Polona Gantar, Voula Giouli, Tunga G\u00fcng\u00f6r, Abde- lati Hawwari, Uxoa I\u00f1urrieta, Jolanta Kovalevskait\u0117, Simon Krek, Timm Lichte, Chaya Liebeskind, Johanna Monti, Carla Parra Escart\u00edn, Behrang QasemiZadeh, Renata Ramisch, Nathan Schneider, Ivelina Stoyanova, Ashwini Vaidya, and Abigail Walsh. 2018. Edition 1.1 of the PARSEME shared task on automatic identification of verbal multiword expressions. In Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), pages 222-240, Santa Fe, New Mexico, USA, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Paying Attention to Multi-Word Expressions in Neural Machine Translation",
"authors": [
{
"first": "Mat\u012bss",
"middle": [],
"last": "Rikters",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 16th Machine Translation Summit",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mat\u012bss Rikters and Ond\u0159ej Bojar. 2017. Paying Attention to Multi-Word Expressions in Neural Machine Translation. In Proceedings of the 16th Machine Translation Summit, Nagoya, Japan.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multiword expressions: A pain in the neck for nlp",
"authors": [
{
"first": "Ivan",
"middle": [
"A"
],
"last": "Sag",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan A. Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword expressions: A pain in the neck for nlp. In Alexander Gelbukh, editor, Computational Linguistics and Intelligent Text Processing, pages 1-15, Berlin, Heidelberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The PARSEME shared task on automatic identification of verbal multiword expressions",
"authors": [
{
"first": "Agata",
"middle": [],
"last": "Savary",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Ramisch",
"suffix": ""
},
{
"first": "Silvio",
"middle": [],
"last": "Cordeiro",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Sangati",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Vincze",
"suffix": ""
},
{
"first": "Behrang",
"middle": [],
"last": "Qasem-Izadeh",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Candito",
"suffix": ""
},
{
"first": "Fabienne",
"middle": [],
"last": "Cap",
"suffix": ""
},
{
"first": "Voula",
"middle": [],
"last": "Giouli",
"suffix": ""
},
{
"first": "Ivelina",
"middle": [],
"last": "Stoyanova",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Doucet",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 13th Workshop on Multiword Expressions",
"volume": "",
"issue": "",
"pages": "31--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agata Savary, Carlos Ramisch, Silvio Cordeiro, Federico Sangati, Veronika Vincze, Behrang Qasem- iZadeh, Marie Candito, Fabienne Cap, Voula Giouli, Ivelina Stoyanova, and Antoine Doucet. 2017. The PARSEME shared task on automatic identification of verbal multiword expressions. In Proceed- ings of the 13th Workshop on Multiword Expressions (MWE 2017), pages 31-47, Valencia, Spain.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Comprehensive annotation of multiword expressions in a social web corpus",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Spencer",
"middle": [],
"last": "Onuffer",
"suffix": ""
},
{
"first": "Nora",
"middle": [],
"last": "Kazour",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Danchik",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"T"
],
"last": "Mordowanec",
"suffix": ""
},
{
"first": "Henrietta",
"middle": [],
"last": "Conrad",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014)",
"volume": "",
"issue": "",
"pages": "455--461",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Schneider, Spencer Onuffer, Nora Kazour, Emily Danchik, Michael T. Mordowanec, Henrietta Conrad, and Noah A. Smith. 2014. Comprehensive annotation of multiword expressions in a social web corpus. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), pages 455-461, Reykjavik, Iceland, May. European Languages Resources Association.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multi-word expressions in english-latvian machine translation",
"authors": [
{
"first": "Inguna",
"middle": [],
"last": "Skadina",
"suffix": ""
}
],
"year": 2016,
"venue": "Baltic J. Modern Computing",
"volume": "4",
"issue": "",
"pages": "811--825",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Inguna Skadina. 2016. Multi-word expressions in english-latvian machine translation. Baltic J. Modern Computing, 4:811-825.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Baidu neural machine translation systems for WMT19",
"authors": [
{
"first": "Meng",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Bojian",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "374--381",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meng Sun, Bojian Jiang, Hao Xiong, Zhongjun He, Hua Wu, and Haifeng Wang. 2019. Baidu neu- ral machine translation systems for WMT19. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 374-381, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Conference on Neural Information Processing System",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Conference on Neural Information Processing System, pages 6000-6010.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Light verb constructions in the SzegedParalellFX English-Hungarian parallel corpus",
"authors": [
{
"first": "Veronika",
"middle": [],
"last": "Vincze",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "2381--2388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Veronika Vincze. 2012. Light verb constructions in the SzegedParalellFX English-Hungarian parallel corpus. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2381-2388, Istanbul, Turkey, May. European Language Resources Association.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Constructing an annotated corpus of verbal MWEs for English",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "Walsh",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Geeraert",
"suffix": ""
},
{
"first": "John",
"middle": [
"P"
],
"last": "Mccrae",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Clarissa",
"middle": [],
"last": "Somers",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018)",
"volume": "",
"issue": "",
"pages": "193--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail Walsh, Claire Bonial, Kristina Geeraert, John P. McCrae, Nathan Schneider, and Clarissa Somers. 2018. Constructing an annotated corpus of verbal MWEs for English. In Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG- 2018), pages 193-200, Santa Fe, New Mexico, USA, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Riesa",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Rudnick",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "Oriol Vinyals",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Workflows to prepare AlphaMWE."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Sample comparison of outputs from four MT models."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "MT issues with MWEs: common sense. Pinyin offered by GoogleMT with post-editing."
},
"TABREF1": {
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "SourceAt the corner of 72nd Street and Madison Avenue, he waved down a cab.DeepL\u572872\u8857\u548c\u9ea6\u8fea\u900a\u2f24\u5927\u9053\u7684\u62d0\u2f93\u89d2\u5904\uff0c\u4ed6\u5411\u2f00\u4e00\u8f86\u51fa\u79df\u2ecb\u8f66\u62db\u2f3f\u624b\u3002Z\u00e0i 72 ji\u0113 h\u00e9 m\u00e0i d\u00ed x\u00f9n d\u00e0 d\u00e0o de gu\u01cei ji\u01ceo ch\u00f9, t\u0101 xi\u00e0ng y\u012b li\u00e0ng ch\u016b z\u016b ch\u0113 zh\u0101o sh\u01d2u. Bing \u572872\u8857\u548c\u9ea6\u8fea\u900a\u2f24\u5927\u9053\u7684\u62d0\u2f93\u89d2\u5904\uff0c\u4ed6\u6325\u2f3f\u624b\u2f70\u793a\u610f\u2f00\u4e00\u8f86\u51fa\u79df\u2ecb\u8f66\u3002 z\u00e0i 72 ji\u0113 h\u00e9 m\u00e0i d\u00ed x\u00f9n d\u00e0 d\u00e0o de gu\u01cei ji\u01ceo ch\u00f9 , t\u0101 hu\u012b sh\u01d2u sh\u00ec y\u00ec y\u00ed li\u00e0ng ch\u016b z\u016b ch\u0113. Google \u5728\u7b2c72\u8857\u548c\u9ea6\u8fea\u900a\u2f24\u5927\u8857\u7684\u62d0\u2f93\u89d2\u5904\uff0c\u4ed6\u6325\u821e\u7740\u51fa\u79df\u2ecb\u8f66\u3002 Z\u00e0i d\u00ec 72 ji\u0113 h\u00e9 m\u00e0i d\u00ed x\u00f9n d\u00e0 ji\u0113 de gu\u01cei ji\u01ceo ch\u00f9, t\u0101 hu\u012b w\u01d4 zhe ch\u016b z\u016b ch\u0113. Baidu \u572872\u8857\u548c\u9ea6\u8fea\u900a\u2f24\u5927\u8857\u7684\u62d0\u2f93\u89d2\u5904\uff0c\u4ed6\u6325\u2f3f\u624b\u53eb\u4e86\uf9ba\u2f00\u4e00\u8f86\u51fa\u79df\u2ecb\u8f66\u3002 z\u00e0i 72 ji\u0113 h\u00e9 m\u00e0i d\u00ed x\u00f9n d\u00e0 ji\u0113 de gu\u01cei ji\u01ceo ch\u00f9, t\u0101 hu\u012b sh\u01d2u ji\u00e0o le y\u00ed li\u00e0ng ch\u016b z\u016b ch\u0113. Ref. \u572872\u8857\u548c\u9ea6\u8fea\u900a\u2f24\u5927\u9053\u7684\u62d0\u2f93\u89d2\u5904\uff0c\u4ed6\u62db\u2f3f\u624b\u2f70\u793a\u505c\u4e86\uf9ba\u2f00\u4e00\u8f86\u51fa\u79df\u2ecb\u8f66\u3002 Z\u00e0i 72 ji\u0113 h\u00e9 m\u00e0i d\u00ed x\u00f9n d\u00e0 d\u00e0o de gu\u01cei ji\u01ceo ch\u00f9, t\u0101 zh\u0101o sh\u01d2u sh\u00ec t\u00edng le y\u012b li\u00e0ng ch\u016b z\u016b ch\u0113."
},
"TABREF4": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Google \u4ec0\uf9fd\u4e48\u554a Auster\u7b11\u4e86\uf9ba\u8d77\u6765\uff0c\u5728\u90a3\u7b11\u58f0\u4e2d\uff0c\u2f00\u4e00\u5207\u7a81\u7136\u88ab\u70b8\u788e\u4e86\uf9ba\u3002 (b\u00e8i zh\u00e0 su\u00ec le)</td></tr><tr><td/><td>\u6905\u2f26\u5b50\u5f88\u8212\u670d\uff0c\u5564\u9152\u5fae\u5fae\u98d8\u5230\u4ed6\u7684\u5934\u4e0a\u3002(w\u0113i w\u0113i pi\u0101o d\u00e0o t\u0101 de t\u00f3u sh\u00e0ng)</td></tr><tr><td>Baidu</td><td>\u4ec0\uf9fd\u4e48\uff1f\u5965\u65af\u7279\u7b11\u4e86\uf9ba\uff0c\u5728\u90a3\u7b11\u58f0\u4e2d\uff0c\u2f00\u4e00\u5207\u90fd\u7a81\u7136\u88ab\u70b8\u6210\u788e\u7247\u3002(b\u00e8i zh\u00e0 ch\u00e9ng su\u00ec pi\u00e0n)</td></tr><tr><td/><td>\u6905\u2f26\u5b50\u5f88\u8212\u670d\uff0c\u5564\u9152\u5df2\u7ecf\u7a0d\u7a0d\u6d41\u5230\u4ed6\u7684\u5934\u4e0a\u4e86\uf9ba\u3002(sh\u0101o sh\u0101o li\u00fa d\u00e0o t\u0101 de t\u00f3u sh\u00e0ng le)</td></tr><tr><td>Ref.</td><td>\u90a3\u4e2a\u4ec0\uf9fd\u4e48\uff1f\u5965\u65af\u7279\u7b11\u4e86\uf9ba\uff0c\u5728\u8fd9\u7b11\u58f0\u4e2d\uff0c\u2f00\u4e00\u5207\u7a81\u7136\u5316\u4e3a\u4e4c\u6709\u3002(hu\u00e0 w\u00e9i w\u016b y\u01d2u)</td></tr><tr><td/><td>\u6905\u2f26\u5b50\u5f88\u8212\u670d\uff0c\u5564\u9152\u5df2\u7ecf\u5fae\u5fae\u8ba9\u4ed6\u4e0a\u4e86\uf9ba\u5934\u3002 (w\u0113i w\u0113i r\u00e0ng t\u0101 sh\u00e0ng le t\u00f3u)</td></tr><tr><td/><td>Figure 7: MT issues with MWEs: metaphor</td></tr><tr><td>Source</td><td>But it did not give me the time of day.</td></tr><tr><td>DeepL</td><td>\u4f46\u5b83\u5e76\u6ca1\u6709\u7ed9\u6211\u65f6\u95f4\u3002 (g\u011bi w\u01d2 sh\u00ed ji\u0101n)</td></tr><tr><td>Bing</td><td>\u4f46\u5b83\u6ca1\u6709\u7ed9\u6211\u2f00\u4e00\u5929\u7684\u65f6\u95f4\u3002 (g\u011bi w\u01d2 y\u012b ti\u0101n de sh\u00ed ji\u0101n)</td></tr><tr><td>Google</td><td>\u4f46\u8fd9\u6ca1\u6709\u7ed9\u6211\u2f00\u4e00\u5929\u7684\u65f6\u95f4\u3002 (g\u011bi w\u01d2 y\u012b ti\u0101n de sh\u00ed ji\u0101n)</td></tr><tr><td>Baidu</td><td>\u4f46\u5b83\u6ca1\u6709\u7ed9\u6211\u2f00\u4e00\u5929\u4e2d\u7684\u65f6\u95f4\u3002 (g\u011bi w\u01d2 y\u012b ti\u0101n zh\u014dng de sh\u00ed ji\u0101n)</td></tr><tr><td>Ref.</td><td/></tr></table>",
"type_str": "table",
"num": null,
"text": "DeepL \u90a3\u4e2a\u4ec0\uf9fd\u4e48\uff1f\u5965\u65af\u7279\u7b11\u4e86\uf9ba\uff0c\u5728\u8fd9\u7b11\u58f0\u4e2d\uff0c\u2f00\u4e00\u5207\u7a81\u7136\u88ab\u70b8\u5f97\u7c89\u788e\u3002(b\u00e8i zh\u00e0 d\u00e9 f\u011bn su\u00ec) \u6905\u2f26\u5b50\u5f88\u8212\u670d\uff0c\u5564\u9152\u5df2\u7ecf\u5fae\u5fae\u5230\u4e86\uf9ba\u4ed6\u7684\u5934\u4e0a\u3002 (w\u0113i w\u0113i d\u00e0o le t\u0101 de t\u00f3u sh\u00e0ng)Bing \u4ec0\uf9fd\u4e48\uff1f\u5965\u65af\u7279\u7b11\u4e86\uf9ba\uff0c\u5728\u7b11\uff0c\u2f00\u4e00\u5207\u90fd\u7a81\u7136\u88ab\u5439\u6210\u4f4d\u3002(b\u00e8i chu\u012b ch\u00e9ng w\u00e8i) \u6905\u2f26\u5b50\u5f88\u8212\u670d\uff0c\u5564\u9152\u7a0d\u5fae\u5230\u4ed6\u7684\u5934\u53bb\u4e86\uf9ba\u3002 (sh\u0101o w\u0113i d\u00e0o t\u0101 de t\u00f3u q\u00f9 le)"
},
"TABREF5": {
"html": null,
"content": "<table><tr><td>Bing</td><td>\u5f53\u4ed6\u4eec\u77e5\u9053\u53bb\u8bfa\u683c\u660e\u662f\u600e\u4e48\u56de\u4e8b\uff0c \u4ed6\u4eec\u51b2\u4e86\uf9ba\u8d77\u6765\u770b\u770b\u3002 (q\u00f9 nu\u00f2 g\u00e9 m\u00edng)</td></tr><tr><td/><td>\u7136\u540e\u6709\u2f08\u4eba\u8bf4\uff0c\u73b0\u5728\u4fc4\u7f57\u65af\u2f08\u4eba\u8981\u4e0d\u2ed3\u957f\u4e86\uf9ba\uff0c\u5c31\u628a\u963f\u62c9\u6cd5\u7279\u6ce8\u9500\u4e86\uf9ba\u3002(b\u01ce \u0101 l\u0101 f\u01ce t\u00e8 zh\u00f9 xi\u0101o le)</td></tr><tr><td colspan=\"2\">Google \u5f53\u4ed6\u4eec\u77e5\u9053\u6b63\u5728\u9010\u6e10\u6d88\u5931\u7684\u90a3\u2f00\u4e00\u523b\uff0c\u4ed6\u4eec\u4fbf\uf965\u51b2\u4e0a\u53bb\u770b\u770b\u3002 (zh\u00e8ng z\u00e0i zh\u00fa ji\u00e0n xi\u0101o sh\u012b)</td></tr><tr><td/><td>\u7136\u540e\u6709\u2f08\u4eba\u8bf4\uff0c\u4e0d\u4e45\u4e4b\u540e\u4fc4\u7f57\u65af\u2f08\u4eba\u5c06\u963f\u62c9\u6cd5\u7279\u6ce8\u9500\u3002 (ji\u0101ng \u0101 l\u0101 f\u01ce t\u00e8 zh\u00f9 xi\u0101o)</td></tr><tr><td>Baidu</td><td>\u4ed6\u4eec\u2f00\u4e00\u77e5\u9053\u5fb7\u683c\u8bfa\u660e\u6b63\u5728\u8fdb\u2f8f\u884c\ufa08\uff0c\u5c31\u51b2\u4e0a\u53bb\u770b\u2f00\u4e00\u770b\u3002 (d\u00e9 g\u00e9 nu\u00f2 m\u00edng)</td></tr><tr><td/><td>\u7136\u540e\u6709\u2f08\u4eba\u8bf4\uff0c\u4fc4\u56fd\u2f08\u4eba\u5f88\u5feb\u5c31\u4f1a\u628a\u963f\u62c9\u6cd5\u7279\u2f00\u4e00\u7b14\u52fe\u9500\u4e86\uf9ba\u3002 (b\u01ce \u0101 l\u0101 f\u01ce t\u00e8 y\u012b b\u01d0 g\u014du xi\u0101o le)</td></tr><tr><td>Ref.</td><td>\u2f00\u4e00\u77e5\u9053\u53bb\u5730\u7cbe\u7684\u4e8b\u5728\u8fdb\u2f8f\u884c\ufa08\uff0c\u4ed6\u4eec\u5c31\u51b2\u4e0a\u53bb\u89c2\u770b\u3002 (q\u00f9 d\u00ec j\u012bng)</td></tr><tr><td/><td>\u7136\u540e\u6709\u2f08\u4eba\u8bf4\uff0c\u73b0\u5728\u2f64\u7528\u4e0d\u4e86\uf9ba\u591a\u4e45\uff0c\u4fc4\u7f57\u65af\u2f08\u4eba\u5c31\u4f1a\u628a\u963f\u62c9\u6cd5\u7279\u4e0b\u8ab2 / \u8ba9\u2026\u4e0b\u53f0\u3002 (b\u01ce \u0101 l\u0101 f\u01ce t\u00e8 xi\u00e0 k\u00e8; r\u00e0ng\u2026xi\u00e0 t\u00e1i)</td></tr></table>",
"type_str": "table",
"num": null,
"text": "DeepL \u4ed6\u4eec\u2f00\u4e00\u77e5\u9053\u53bb\u6838\u7684\u4e8b\uff0c\u5c31\u4f1a\u51b2\u4e0a\u53bb\u770b\u2f00\u4e00\u770b\u3002 (q\u00f9 h\u00e9) \u7136\u540e\u6709\u2f08\u4eba\u8bf4\uff0c\u73b0\u5728\u2f64\u7528\u4e0d\u4e86\uf9ba\u591a\u4e45\uff0c\u4fc4\u7f57\u65af\u2f08\u4eba\u5c31\u4f1a\u628a\u963f\u62c9\u6cd5\u7279\u6ce8\u9500\u3002(b\u01ce \u0101 l\u0101 f\u01ce t\u00e8 zh\u00f9 xi\u0101o)"
},
"TABREF6": {
"html": null,
"content": "<table><tr><td>Baidu</td><td>\u4e24\u4e2a\u2f49\u6708\u524d\uff0c\u6211\u56e0\u4e3a\u4e25\u91cd\u7684\u6295\u8bc9\u4e0d\u5f97\u4e0d\u52a8\u2f3f\u624b\u672f\u3002 (t\u00f3u s\u00f9 \u2026 d\u00f2ng sh\u01d2u sh\u00f9)</td></tr><tr><td>Ref.</td><td>\u4e24\u4e2a\u2f49\u6708\u524d\uff0c\u6211\u56e0\u4e3a\u2f00\u4e00\u6b21\u4e25\u91cd\u7684\u75c7\u72b6\u4e0d\u5f97\u4e0d\u505a\u2f3f\u624b\u672f\u3002(zh\u00e8ng zhu\u00e0ng \u2026 zu\u00f2 sh\u01d2u sh\u00f9)</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Source Two months ago I had to have an operation for a serious complaint.DeepL \u4e24\u4e2a\u2f49\u6708\u524d\uff0c\u6211\u56e0\u4e3a\u2f00\u4e00\u6b21\u4e25\u91cd\u7684\u6295\u8bc9\u4e0d\u5f97\u4e0d\u505a\u2f3f\u624b\u672f\u3002(t\u00f3u s\u00f9 \u2026 zu\u00f2 sh\u01d2u sh\u00f9)Bing\u4e24\u4e2a\u2f49\u6708\u524d\uff0c\u6211\u4e0d\u5f97\u4e0d\u505a\u2f00\u4e00\u4e2a\u4e25\u91cd\u7684\u6295\u8bc9\u2f3f\u624b\u672f\u3002 (zu\u00f2 \u2026 t\u00f3u s\u00f9 sh\u01d2u sh\u00f9)Google \u4e24\u4e2a\u2f49\u6708\u524d\uff0c\u6211\u4e0d\u5f97\u4e0d\u63a5\u53d7\u2f00\u4e00\u6b21\u2f3f\u624b\u672f\u4ee5\u5e94\u5bf9\u4e25\u91cd\u7684\u6295\u8bc9\u3002 (ji\u0113 sh\u00f2u y\u012b c\u00ec sh\u01d2u sh\u00f9 \u2026 t\u00f3u s\u00f9)"
},
"TABREF7": {
"html": null,
"content": "<table><tr><td>Target</td></tr><tr><td>Target</td></tr><tr><td>Polish</td></tr><tr><td>Corpus</td></tr></table>",
"type_str": "table",
"num": null,
"text": "English MWEs gone (slightly) to his head, cutting capers, tearing down, tore back Target Chiense Corpus \u6905\u2f26\u5b50\u5f88\u8212\u670d\uff0c\u5564\u9152\u5df2\u7ecf\u5fae\u5fae\u8ba9\u4ed6\u4e0a\u4e86\uf9ba\u5934\u3002[sourceVMWE: gone (slightly) to his head][targetVMWE: (\u5fae\u5fae)\u8ba9\u4ed6\u4e0a \u4e86\uf9ba\u5934] \u6211\u5728\u62c6\u5f00\u7684\u6c7d\u8239\u65c1\u9759\u9759\u5730\u62bd\u7740\u70df\u2f43\u6597\uff0c\u770b\u5230\u4ed6\u4eec\u90fd\u5728\u706f\u5149\u4e0b\u6b22\u547c\u96c0\u8dc3\uff0c\u2fbc\u9ad8\u4e3e\u53cc\u81c2\uff0c\u8fd9\u65f6\uff0c\u90a3\u4e2a\u7559\uf9cd\u7740\u80e1\u2f26\u5b50\u7684\u2f24\u5927\u5757\u5934\uff0c \u2f3f\u624b\u2fa5\u91cc\uf9e9\u62ff\u7740\u2f00\u4e00\u4e2a\u94c1\u2f6a\u76ae\u6876\uff0c\u5feb\u901f\u6765\u5230\u6cb3\u8fb9\uff0c\u5411\u6211\u786e\u4fdd\u2f24\u5927\u5bb6\u90fd\"\u8868\u73b0\u5f97\u5f88\u7cbe\u5f69\uff0c\u5f88\u7cbe\u5f69\"\uff0c\u4ed6\u6d78\u4e86\uf9ba\u2f24\u5927\u7ea6\u2f00\u4e00\u5938\u8131\u7684\u2f54\u6c34\uff0c\u2f1c\u53c8\u5feb \u901f\u56de\u53bb\u4e86\uf9ba\u3002[sourceVMWE: cutting capers; tearing down; tore back][targetVMWE: \u6b22\u547c\u96c0\u8dc3; \u5feb\u901f\u5230; \u5feb\u901f\u56de\u53bb] German Corpus Der Stuhl war bequem, und das Bier war ihm leicht zu Kopf gestiegen. [sourceVMWE: gone (slightly) to his head][targetVMWE: (leicht) zu Kopf gestiegen] Ich rauchte leise meine Pfeife an meinem zerlegten Dampfer und sah, wie sie alle im Licht mit hoch erhobenen Armen Luftspr\u00fcnge machten, als der st\u00e4mmige Mann mit Schnurrbart mit einem Blecheimer in der Hand zum Fluss hinunterkam und mir versicherte, dass sich alle \"pr\u00e4chtig, pr\u00e4chtig benahmen, etwa einen Liter Wasser eintauchte und wieder zur\u00fcckwankte\". [sourceVMWE: cutting capers; tearing down; tore back] [targetVMWE: Luftspr\u00fcnge machten; hinunterkam; zur\u00fcckwankte]"
}
}
}
}