{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:14:52.093521Z" }, "title": "Automatic detection of unexpected/erroneous collocations in learner corpus", "authors": [ { "first": "Jen-Yu", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "jenyuli@gmail.com" }, { "first": "Thomas", "middle": [], "last": "Gaillat", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This research investigates the collocational errors made by English learners in a learner corpus. It focuses on the extraction of unexpected collocations. A system was proposed and implemented with open source toolkit. Firstly, the collocation extraction module was evaluated by a corpus with manually annotated collocations. Secondly, a standard collocation list was collected from a corpus of native speaker. Thirdly, a list of unexpected collocations was generated by extracting candidates from a learner corpus and discarding the standard collocations on the list. The overall performance was evaluated, and possible sources of error were pointed out for future improvement.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This research investigates the collocational errors made by English learners in a learner corpus. It focuses on the extraction of unexpected collocations. A system was proposed and implemented with open source toolkit. Firstly, the collocation extraction module was evaluated by a corpus with manually annotated collocations. Secondly, a standard collocation list was collected from a corpus of native speaker. Thirdly, a list of unexpected collocations was generated by extracting candidates from a learner corpus and discarding the standard collocations on the list. The overall performance was evaluated, and possible sources of error were pointed out for future improvement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Multiword expressions (MWEs) are word combinations which present lexical, syntactic, semantic, pragmatic or statistical idiosyncrasies. The boundary between MWEs and collocations is subtle. In Ramisch et al. (2018) , they defined collocations as combinations of words whose idiosyncrasy is purely statistical and show no substantial semantic idiosyncrasy. In this way they oppose MWEs to collocations. Some researchers (Sag et al., 2002) regard collocations as any statistically significant cooccurrences, which include all kinds of MWEs. Some other researchers (Garcia et al., 2019; Baldwin and Kim, 2010) consider collocations as a subset of MWEs. For Tutin (2013) , collocation is a category of semantic phraseme. As defined by Mel'\u010duk (1998) , a phraseme is a set of phrase which is not free (without freedom of selection of its signified and without freedom of combination of its components). In this sense, the meaning of phraseme is quite similar to MWE. In this research, we considered collocation as a subset of semantic phraseme and a subset of MWEs as well. To constrain the set of collocation candidates, we focus on the Verb-Noun (VN) construction.", "cite_spans": [ { "start": 193, "end": 214, "text": "Ramisch et al. 
(2018)", "ref_id": "BIBREF15" }, { "start": 419, "end": 437, "text": "(Sag et al., 2002)", "ref_id": "BIBREF18" }, { "start": 562, "end": 583, "text": "(Garcia et al., 2019;", "ref_id": "BIBREF6" }, { "start": 584, "end": 606, "text": "Baldwin and Kim, 2010)", "ref_id": "BIBREF0" }, { "start": 654, "end": 666, "text": "Tutin (2013)", "ref_id": "BIBREF23" }, { "start": 731, "end": 745, "text": "Mel'\u010duk (1998)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Second language learners usually have problems with collocations. Some researchers have reported that the errors are related to the learners' L1 (Nesselhauf, 2003; Hong et al., 2011) . The correction of wrong collocations 1 , such as to *create [construct] a taller and safer building, in written essays can help learners increase their competence and thus their proficiency in English writing (Meunier and Granger, 2008) . Therefore, the automatic detection and correction of erroneous collocations would be helpful for learners. Designing such a system would support specific feedback messages that could be employed to guide learners in their meta-cognitive learning processes (Shute 2008 ).", "cite_spans": [ { "start": 145, "end": 163, "text": "(Nesselhauf, 2003;", "ref_id": "BIBREF13" }, { "start": 164, "end": 182, "text": "Hong et al., 2011)", "ref_id": "BIBREF7" }, { "start": 394, "end": 421, "text": "(Meunier and Granger, 2008)", "ref_id": "BIBREF12" }, { "start": 680, "end": 691, "text": "(Shute 2008", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Such a system may be based on two kinds of corpora: a learner corpus which is used to extract known collocational errors, and a reference corpus to extract standard English collocations (Shei and Pain, 2000) . Chang et al. (2008) proposed a method of bilingual collocation extraction from a parallel corpus to provide phrasal translation memory. Their system performance was exceptionally good (preci-sion=0.98, recall=0.91). However, this approach required a bilingual dictionary, a parallel corpus for a specific L1 and English, as well as word-alignment matching of translations.", "cite_spans": [ { "start": 186, "end": 207, "text": "(Shei and Pain, 2000)", "ref_id": "BIBREF20" }, { "start": 210, "end": 229, "text": "Chang et al. (2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper presents a preliminary research on a learner corpus. In the following sections, we will briefly explain the method, present the results, and give some discussions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose a system to extract unexpected collocations in three stages: (a) implementation and evaluation of a collocation extraction module; (b) collection of standard collocations from a native corpus; (c) extraction of wrong collocations from a learner corpus. The main principle is, firstly, to extract all possible collocations in the learner corpus, and then identify standard collocations by the reference (collocations extracted from native corpus); the remainder of the items are considered as wrong collocations. Three evaluation points were made, aiming at the collocation extraction module, the reference of standard collocations, and the extraction of wrong collocations, respectively. 
The system diagram and the three stages are shown in Figure 1. Stage A. Implementation and evaluation of the collocation extraction module: collocations were extracted from the PARSing and Multi-word Expressions (PARSEME 2) corpus (Savary et al., 2015) with the implemented module. The results were saved as the PARSEME List. According to Garcia et al. (2019), light verb constructions (LVCs) can be regarded as collocations in VN form. The manually annotated LVCs were therefore retrieved and saved as the PARSEME LVC List. It serves as the gold standard (i.e. the ground truth) for evaluating the extraction module and for fine-tuning the parameters in the scripts.", "cite_spans": [ { "start": 932, "end": 953, "text": "(Savary et al., 2015)", "ref_id": "BIBREF19" }, { "start": 1040, "end": 1060, "text": "Garcia et al. (2019)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 752, "end": 760, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "Stage B. Collection of standard collocations: to obtain a large list of standard collocations, we used the implemented module to extract collocations from the British National Corpus (BNC 3) (BNC Consortium, 2007), forming the BNC List. The reference list of standard collocations was built by merging the BNC List and the PARSEME LVC List. It was evaluated by manual verification. Errors in the reference list would degrade the credibility of our gold standard and thus might have a negative influence on the overall performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "Stage C. Extraction of wrong collocations: we used the implemented module to extract candidate collocations (the NUCLE List) from the National University of Singapore Corpus of Learner English (NUCLE 4) (Dahlmeier et al., 2013). The sentences manually annotated with erroneous collocations (Wci tag) were also exported, and the VN terms in these sentences were detected and saved in the NUCLE WC List, which was used to evaluate the overall performance of our system. The scripts 5 were written in Python with the Natural Language Toolkit (NLTK) 6 (Bird and Loper, 2004). Five lexical association measures were used in the collocation extraction tasks, namely raw frequency, the t-test, the chi-square test, the log likelihood ratio, and pointwise mutual information. The formulas, as well as an evaluation of 84 measures, can be found in Pecina (2010).", "cite_spans": [ { "start": 213, "end": 237, "text": "(Dahlmeier et al., 2013)", "ref_id": "BIBREF4" }, { "start": 552, "end": 574, "text": "(Bird and Loper, 2004)", "ref_id": "BIBREF1" }, { "start": 839, "end": 852, "text": "Pecina (2010)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "2" }, { "text": "To evaluate the module, we extracted the collocations from PARSEME and compared them with the PARSEME LVC List. The precision, recall, F1 and F0.5 scores were used as accuracy metrics. The best precision is 0.11 for bigram detection with a minimal frequency of 2, using the raw frequency measure and keeping the top 200 collocations. Meanwhile, the best recall is 0.11 when both bigram and trigram detection are used, with a minimal frequency of 2 and the top 300 collocations, using either the log likelihood ratio or the raw frequency measure. 
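The extraction module is described above as a Python/NLTK script relying on five lexical association measures, a window size of four and a minimal frequency filter. The following sketch shows how such a step could be set up with NLTK's collocation finder; the POS-based Verb-Noun filter and the function name are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of a VN bigram extraction step with NLTK (not the authors' exact script).
# Assumes `tokens` is a flat list of word tokens; requires the NLTK tagger model
# (averaged_perceptron_tagger) to be installed.
import nltk
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

def extract_vn_candidates(tokens, window_size=4, min_freq=2, top_n=300):
    tagged = nltk.pos_tag(tokens)                      # (word, POS) pairs
    finder = BigramCollocationFinder.from_words(tagged, window_size=window_size)
    finder.apply_freq_filter(min_freq)
    # Keep only Verb-Noun pairs (illustrative POS filter).
    finder.apply_ngram_filter(
        lambda w1, w2: not (w1[1].startswith("VB") and w2[1].startswith("NN"))
    )
    measures = BigramAssocMeasures()
    # Any of the five measures mentioned above can be plugged in here:
    # measures.raw_freq, measures.student_t, measures.chi_sq,
    # measures.likelihood_ratio, measures.pmi
    return finder.nbest(measures.likelihood_ratio, top_n)
```

Note that scoring is done on (word, POS) pairs here; lemmatizing the retained pairs would be a separate step.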
The best F1 and F0.5 scores are both 0.08 for bigram detection using the raw frequency measure with a minimal frequency of 2 and the top 300 collocations. Pointwise mutual information and chi-square methods do not give good results even when no filters are applied. The results obtained with the t-test are similar to those of the raw frequency method. The window size was set to four. Shorter and longer window lengths were tried but did not give good results, which suggests that the words of a collocation tend to co-occur within a span of four words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the collocation extraction module", "sec_num": "3.1" }, { "text": "For manual verification, 200 candidates were randomly sampled from the BNC List and given to an experienced English teacher. He first validated obvious collocations such as take place. For the candidates he was not sure about, he consulted the Corpus of Contemporary American English (COCA) collocate search tool 7. If he found the candidate in the COCA corpus, it was validated; if not, the candidate was discarded. The final precision is 0.57.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation of the BNC list", "sec_num": "3.2" }, { "text": "Ideally, the union of the BNC List and the PARSEME LVC List (noted as BNC \u22c3 PARSEME LVC) gives the standard collocations and the NUCLE WC List gives the wrong collocations, so there should be no overlap between standard and wrong collocations. However, we found intersections between the NUCLE WC List and the PARSEME LVC List (11 collocations), between the NUCLE WC List and the BNC List (20 collocations), and between all three lists (4 collocations). The total overlap is therefore 27 collocations (20+11-4=27), noted as NUCLE WC \u22c2 (BNC \u22c3 PARSEME LVC); this is about 1.8% of the NUCLE WC List.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intersections between lists", "sec_num": "3.3" }, { "text": "Candidates were extracted from NUCLE and compared with the gold standard, i.e. the NUCLE WC List (1,471 erroneous VN collocations). Various log likelihood ratio thresholds were tested for optimization. Figure 2(a) shows the global view of precision and recall versus different thresholds, and Figure 2(b) gives a zoomed-in view for thresholds from zero to twelve. The highest precision is 0.5 when the threshold is set to 430, where only two candidates are extracted. Precision and recall meet at about 0.04 when the threshold is set to eight, with 1,408 candidates extracted. The maximal recall (0.83) is obtained by extracting all possible candidates (54,471), at which point the precision becomes extremely low (0.02).", "cite_spans": [], "ref_spans": [ { "start": 205, "end": 216, "text": "Figure 2(a)", "ref_id": null }, { "start": 296, "end": 308, "text": "Figure 2(b)", "ref_id": null } ], "eq_spans": [], "section": "Optimization by selecting a threshold of Log Likelihood Ratio", "sec_num": "3.4" }, { "text": "Figure 2. Precision, Recall, F1 and F0.5 scores versus log likelihood ratio. Figures 2(c) and 2(d) show the global and zoomed-in views of the F1 and F0.5 trends. We can see that F0.5 reaches its peak (0.05) when the threshold is set to eight or ten, while F1 fluctuates around 0.04 to 0.05 when the threshold is set lower than eight. 
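The precision, recall, F1 and F0.5 figures reported in this section compare an extracted candidate list against a gold list. Below is a minimal sketch of these metrics, assuming both lists are represented as sets of (verb, noun) pairs; the general F-beta formula covers both F1 (beta=1) and F0.5 (beta=0.5).

```python
# Sketch of the evaluation metrics used above, assuming the extracted
# candidates and the gold standard are sets of (verb, noun) pairs.
def evaluate(extracted, gold, beta=1.0):
    tp = len(extracted & gold)                 # true positives
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        f = 0.0
    else:
        # F-beta: beta < 1 weights precision more heavily (F0.5); beta = 1 gives F1.
        f = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, f

# Example usage for one configuration (names are illustrative):
# p, r, f1 = evaluate(candidates, gold_lvc_list, beta=1.0)
# _, _, f05 = evaluate(candidates, gold_lvc_list, beta=0.5)
```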
Considering all four indices, the optimal threshold can be set to about eight.", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 24, "text": "Figure 2", "ref_id": null }, { "start": 94, "end": 102, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Optimization by selecting a threshold of Log Likelihood Ratio", "sec_num": "3.4" }, { "text": "Although our experimental configuration is capable of extracting wrong collocations from the learner corpus, the overall performance is not satisfactory. Hence, we reviewed the results and point out some possible sources of error for future studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions and conclusion", "sec_num": "4" }, { "text": "First, regarding the PARSEME corpus, the gold standard was built on the basis of the LVC tag, so the verbs of the collocations may be biased. In fact, 44 out of 85 collocations on the list were constructed with only five verbs, namely do, get, give, have, and take. Therefore, the evaluation of the module was also biased. Regarding the BNC List, we reached a precision of 0.57 due to the large size of the corpus (100 million words) and a strict selection (top 10 for each sub-directory of the BNC). However, compared with a previous study (Jian et al., 2004), which extracted 631,638 VN collocations from the BNC, our standard collocation reference list (BNC \u22c3 PARSEME LVC) was much smaller (n=942), which may have a negative influence on the performance. Regarding NUCLE, because Part-Of-Speech (POS) tags and lemmas are not available, we used a POS tagger and a lemmatizer. Yet their performance was not evaluated, so the gold standard NUCLE WC List was not perfectly accurate. As for the whole system, it may be helpful to incorporate a word dependency parser module to identify the object noun that receives the action of the verb.", "cite_spans": [ { "start": 547, "end": 566, "text": "(Jian et al., 2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Discussions and conclusion", "sec_num": "4" }, { "text": "Our approach demonstrates a method for detecting erroneous collocations in learner English. As it relies on the accurate extraction of a reference list, our next step will consist of exploring larger corpora for extraction. Such an extraction module would be of great benefit as part of a Computer Aided Language Learning System dedicated to the analysis of phraseology in learner texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussions and conclusion", "sec_num": "4" }, { "text": "In this research, the terms wrong collocations, erroneous collocations, unexpected collocations, and collocational errors are used interchangeably.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2842 3 https://ota.bodleian.ox.ac.uk/repository/xmlui/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "NUCLE is a collection of 1,414 essays (1.2 million words in total) written by students who are non-native English speakers. 
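The discussion above notes that NUCLE provides neither POS tags nor lemmas, so a POS tagger and a lemmatizer were applied before extracting VN terms. A minimal sketch of such a preprocessing step with standard NLTK components is shown below; the windowing heuristic for pairing a verb with a following noun is an assumption for illustration (as noted above, a dependency parser would identify the object noun more reliably).

```python
# Minimal sketch of the NUCLE preprocessing step mentioned above:
# POS tagging and lemmatization to obtain (verb, noun) lemma pairs.
# Requires the NLTK punkt, tagger and wordnet resources; not the authors' code.
import nltk
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def vn_lemma_pairs(sentence, window=4):
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)
    pairs = []
    for i, (word, tag) in enumerate(tagged):
        if tag.startswith("VB"):
            verb = lemmatizer.lemmatize(word.lower(), pos="v")
            # Look for a noun within a small window after the verb.
            for other, other_tag in tagged[i + 1 : i + 1 + window]:
                if other_tag.startswith("NN"):
                    pairs.append((verb, lemmatizer.lemmatize(other.lower(), pos="n")))
                    break
    return pairs

# Example: vn_lemma_pairs("He made a serious mistake.") -> [('make', 'mistake')]
```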
It is available by submitting a license agreement via https://www.comp.nus.edu.sg/~nlp/corpora.html 5 Source codes are available online: https://github.com/jenyuli/wrong_collocation_extraction 6 https://www.nltk.org/ 7 https://www.english-corpora.org/coca/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Multiword Expressions", "authors": [ { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" } ], "year": 2010, "venue": "Handbook of Natural Language Processing", "volume": "", "issue": "", "pages": "267--292", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Baldwin and Su Nam Kim. 2010. Multiword Expressions. In Nitin Indurkhya and Fred J. Damerau, edi- tors, Handbook of Natural Language Processing, pages 267-292. Chapman and Hall/CRC, Boca Raton, FL, USA, Second edition.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "NLTK: The Natural Language Toolkit", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" } ], "year": 2004, "venue": "The Companion Volume to the Proceedings of 42st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "214--217", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird and Edward Loper. 2004. NLTK: The Natural Language Toolkit. In The Companion Volume to the Proceedings of 42st Annual Meeting of the Association for Computational Linguistics, pages 214-217, Barce- lona, Spain, July. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The British National Corpus. Distributed by Bodleian Libraries", "authors": [ { "first": "", "middle": [], "last": "Bnc Consortium", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "BNC Consortium, 2007, The British National Corpus. Distributed by Bodleian Libraries, University of Oxford.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An automatic collocation writing assistant for Taiwanese EFL learners: A case of corpus-based NLP technology", "authors": [ { "first": "Yu-Chia", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "" }, { "first": "Hao-Jan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Hsien-Chin", "middle": [], "last": "Liou", "suffix": "" } ], "year": 2008, "venue": "Computer Assisted Language Learning", "volume": "21", "issue": "3", "pages": "283--299", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu-Chia Chang, Jason S. Chang, Hao-Jan Chen, and Hsien-Chin Liou. 2008. An automatic collocation writing assistant for Taiwanese EFL learners: A case of corpus-based NLP technology. 
Computer Assisted Language Learning, 21(3):283-299, July.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Building a Large Annotated Corpus of Learner English: The NUS Corpus of Learner English", "authors": [ { "first": "Daniel", "middle": [], "last": "Dahlmeier", "suffix": "" }, { "first": "Siew Mei", "middle": [], "last": "Hwee Tou Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "22--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a Large Annotated Corpus of Learner English: The NUS Corpus of Learner English. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications, pages 22-31, Atlanta, Georgia, June. Association for Computational Lin- guistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Collocations in Corpus-Based Language Learning Research: Identifying, Comparing, and Interpreting the Evidence. Language Learning", "authors": [ { "first": "Dana", "middle": [], "last": "Gablasova", "suffix": "" }, { "first": "Vaclav", "middle": [], "last": "Brezina", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Mcenery", "suffix": "" } ], "year": 2017, "venue": "", "volume": "67", "issue": "", "pages": "155--179", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dana Gablasova, Vaclav Brezina, and Tony McEnery. 2017. Collocations in Corpus-Based Language Learning Research: Identifying, Comparing, and Interpreting the Evidence. Language Learning, 67(S1):155-179.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Pay Attention when you Pay the Bills. A Multilingual Corpus with Dependency-based and Semantic Annotation of Collocations", "authors": [ { "first": "Marcos", "middle": [], "last": "Garcia", "suffix": "" }, { "first": "Marcos", "middle": [ "Garc\u00eda" ], "last": "Salido", "suffix": "" }, { "first": "Susana", "middle": [], "last": "Sotelo", "suffix": "" }, { "first": "Estela", "middle": [], "last": "Mosqueira", "suffix": "" }, { "first": "Margarita", "middle": [], "last": "Alonso-Ramos", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4012--4019", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcos Garcia, Marcos Garc\u00eda Salido, Susana Sotelo, Estela Mosqueira, and Margarita Alonso-Ramos. 2019. Pay Attention when you Pay the Bills. A Multilingual Corpus with Dependency-based and Semantic Annotation of Collocations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4012-4019, Florence, Italy, July. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Collocations in Malaysian English learners' writing: A corpus-based error analysis", "authors": [ { "first": "Ang Leng", "middle": [], "last": "Hong", "suffix": "" }, { "first": "Hajar", "middle": [ "Abdul" ], "last": "Rahim", "suffix": "" }, { "first": "Tan", "middle": [ "Kim" ], "last": "Hua", "suffix": "" }, { "first": "Khazriyati", "middle": [], "last": "Salehuddin", "suffix": "" } ], "year": 2011, "venue": "The Southeast Asian Journal of English Language Studies", "volume": "3", "issue": "", "pages": "31--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ang Leng Hong, Hajar Abdul Rahim, Tan Kim Hua, and Khazriyati Salehuddin. 2011. Collocations in Malaysian English learners' writing: A corpus-based error analysis. 3L: The Southeast Asian Journal of English Language Studies, 17(Special Issue):31-44.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Collocational Translation Memory Extraction Based on Statistical and Linguistic Information", "authors": [ { "first": "Yu-Chia", "middle": [], "last": "Jia-Yan Jian", "suffix": "" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "" }, { "first": "", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 16th Conference on Computational Linguistics and Speech Processing", "volume": "", "issue": "", "pages": "257--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jia-Yan Jian, Yu-Chia Chang, and Jason S. Chang. 2004. Collocational Translation Memory Extraction Based on Statistical and Linguistic Information. In Proceedings of the 16th Conference on Computational Linguistics and Speech Processing, pages 257-264, Taipei, Taiwan, September. The Association for Computational Linguistics and Chinese Language Processing (ACLCLP).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Verb-Noun Collocations in Second Language Writing: A Corpus Analysis of Learners", "authors": [ { "first": "Batia", "middle": [], "last": "Laufer", "suffix": "" }, { "first": "Tina", "middle": [], "last": "Waldman", "suffix": "" } ], "year": 2011, "venue": "", "volume": "61", "issue": "", "pages": "647--672", "other_ids": {}, "num": null, "urls": [], "raw_text": "Batia Laufer and Tina Waldman. 2011. Verb-Noun Collocations in Second Language Writing: A Corpus Analysis of Learners' English. Language Learning, 61(2):647-672.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Collocation Errors. In Automated grammatical error detection for language learners", "authors": [ { "first": "Claudia", "middle": [], "last": "Leacock", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "63--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Claudia Leacock. 2010. Collocation Errors. In Automated grammatical error detection for language learners, pages 63-71. Morgan & Claypool Publishers, California.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Collocations and Lexical Functions", "authors": [ { "first": "Igor", "middle": [], "last": "Mel", "suffix": "" }, { "first": "'", "middle": [], "last": "", "suffix": "" } ], "year": 1998, "venue": "Phraseology: theory, analysis, and applications, Oxford linguistics", "volume": "", "issue": "", "pages": "23--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Igor Mel'\u010duk. 1998. Collocations and Lexical Functions. In Anthony P. 
Cowie, editor, Phraseology: theory, anal- ysis, and applications, Oxford linguistics, pages 23-53. Oxford Univ. Press, Oxford.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Phraseology in foreign language learning and teaching", "authors": [ { "first": "Fanny", "middle": [], "last": "Meunier", "suffix": "" }, { "first": "Sylviane", "middle": [], "last": "Granger", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fanny Meunier and Sylviane Granger, editors. 2008. Phraseology in foreign language learning and teaching. John Benjamins Pub. Co, Amsterdam ; Philadelphia.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The Use of Collocations by Advanced Learners of English and Some Implications for", "authors": [ { "first": "Nadja", "middle": [], "last": "Nesselhauf", "suffix": "" } ], "year": 2003, "venue": "Teaching. Applied Linguistics", "volume": "24", "issue": "2", "pages": "223--242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nadja Nesselhauf. 2003. The Use of Collocations by Advanced Learners of English and Some Implications for Teaching. Applied Linguistics, 24(2):223-242, June.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Lexical association measures and collocation extraction. Language Resources and Evaluation", "authors": [ { "first": "Pavel", "middle": [], "last": "Pecina", "suffix": "" } ], "year": 2010, "venue": "", "volume": "44", "issue": "", "pages": "137--158", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pavel Pecina. 2010. Lexical association measures and collocation extraction. Language Resources and Evaluation, 44(1/2):137-158.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Edition 1.1 of the PARSEME Shared Task on Automatic Identification of Verbal Multiword Expressions", "authors": [ { "first": "Carlos", "middle": [], "last": "Ramisch", "suffix": "" }, { "first": "Silvio", "middle": [], "last": "Cordeiro", "suffix": "" }, { "first": "Agata", "middle": [], "last": "Savary", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Vincze", "suffix": "" }, { "first": "Archna", "middle": [], "last": "Verginica Barbu Mititelu", "suffix": "" }, { "first": "Maja", "middle": [], "last": "Bhatia", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Buljan", "suffix": "" }, { "first": "Polona", "middle": [], "last": "Candito", "suffix": "" }, { "first": "Voula", "middle": [], "last": "Gantar", "suffix": "" }, { "first": "Tunga", "middle": [], "last": "Giouli", "suffix": "" }, { "first": "Abdelati", "middle": [], "last": "G\u00fcng\u00f6r", "suffix": "" }, { "first": "Uxoa", "middle": [], "last": "Hawwari", "suffix": "" }, { "first": "Jolanta", "middle": [], "last": "I\u00f1urrieta", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Kovalevskait\u0117", "suffix": "" }, { "first": "Timm", "middle": [], "last": "Krek", "suffix": "" }, { "first": "Chaya", "middle": [], "last": "Lichte", "suffix": "" }, { "first": "Johanna", "middle": [], "last": "Liebeskind", "suffix": "" }, { "first": "Carla", "middle": [], "last": "Monti", "suffix": "" }, { "first": "", "middle": [], "last": "Parra Escart\u00edn", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions", "volume": "", "issue": "", "pages": "222--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos Ramisch, Silvio Cordeiro, Agata Savary, Veronika 
Vincze, Verginica Barbu Mititelu, Archna Bhatia, Maja Buljan, Marie Candito, Polona Gantar, Voula Giouli, Tunga G\u00fcng\u00f6r, Abdelati Hawwari, Uxoa I\u00f1urrieta, Jolanta Kovalevskait\u0117, Simon Krek, Timm Lichte, Chaya Liebeskind, Johanna Monti, Carla Parra Escart\u00edn, et al. 2018. Edition 1.1 of the PARSEME Shared Task on Automatic Identification of Verbal Multiword Expressions. In Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW- MWE-CxG-2018), pages 222-240, Santa Fe, United States, August. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Progressives, patterns. pedagogy: a corpusdriven approach to English progressive forms, functions, contexts, and didactics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "130--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Progressives, patterns. pedagogy: a corpus- driven approach to English progressive forms, functions, contexts, and didactics, pages 130-135. J. Benjamins Pub. Co, Amsterdam ; Philadelphia.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Multiword Expressions: A Pain in the Neck for NLP", "authors": [ { "first": "Ivan", "middle": [], "last": "Sag", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Bond", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 3rd International Conference on Computational Linguistics and Intelligent Text Processing, number 2276", "volume": "", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword Expressions: A Pain in the Neck for NLP. In Proceedings of the 3rd International Conference on Computational Linguistics and Intelligent Text Processing, number 2276, pages 1-15. 
Springer.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "PARSEME -PARSing and Multiword Expressions within a European multilingual network", "authors": [ { "first": "Agata", "middle": [], "last": "Savary", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Sailer", "suffix": "" }, { "first": "Yannick", "middle": [], "last": "Parmentier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rosner", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "Ros\u00e9n", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Przepi\u00f3rkowski", "suffix": "" }, { "first": "Cvetana", "middle": [], "last": "Krstev", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Vincze", "suffix": "" }, { "first": "Beata", "middle": [], "last": "W\u00f3jtowicz", "suffix": "" }, { "first": "Gyri", "middle": [], "last": "Sm\u00f8rdal Losnegaard", "suffix": "" }, { "first": "Carla", "middle": [], "last": "Parra Escart\u00edn", "suffix": "" }, { "first": "Jakub", "middle": [], "last": "Waszczuk", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Petya", "middle": [], "last": "Osenova", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Sangati", "suffix": "" } ], "year": 2015, "venue": "7th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics (LTC 2015)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agata Savary, Manfred Sailer, Yannick Parmentier, Michael Rosner, Victoria Ros\u00e9n, Adam Przepi\u00f3rkowski, Cvetana Krstev, Veronika Vincze, Beata W\u00f3jtowicz, Gyri Sm\u00f8rdal Losnegaard, Carla Parra Escart\u00edn, Jakub Waszczuk, Mathieu Constant, Petya Osenova, and Federico Sangati. 2015. PARSEME -PARSing and Multi- word Expressions within a European multilingual network. In 7th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics (LTC 2015), Pozna\u0144, Poland, November.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "An ESL Writer's Collocational Aid", "authors": [ { "first": "Chi-Chiang", "middle": [], "last": "Shei", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Pain", "suffix": "" } ], "year": 2000, "venue": "Computer Assisted Language Learning", "volume": "13", "issue": "2", "pages": "167--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chi-Chiang Shei and Helen Pain. 2000. An ESL Writer's Collocational Aid. Computer Assisted Language Learn- ing, 13(2):167-182, April.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Verbal Multiword Expressions: Idiomaticity and flexibility", "authors": [ { "first": "Livnat", "middle": [], "last": "Herzig Sheinfux", "suffix": "" }, { "first": "Tali", "middle": [ "Arad" ], "last": "Greshler", "suffix": "" }, { "first": "Nurit", "middle": [], "last": "Melnik", "suffix": "" }, { "first": "Shuly", "middle": [], "last": "Wintner", "suffix": "" } ], "year": 2019, "venue": "Representation and parsing of multiword expressions: Current trends", "volume": "", "issue": "", "pages": "35--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Livnat Herzig Sheinfux, Tali Arad Greshler, Nurit Melnik, and Shuly Wintner. 2019. Verbal Multiword Expres- sions: Idiomaticity and flexibility. In Yannick Parmentier and Jakub Waszczuk, editors, Representation and parsing of multiword expressions: Current trends, pages 35-68. 
Language Science Press, Berlin.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Focus on Formative Feedback", "authors": [ { "first": "Valerie", "middle": [ "J" ], "last": "Shute", "suffix": "" } ], "year": 2008, "venue": "Review of Educational Research", "volume": "78", "issue": "1", "pages": "153--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valerie J. Shute. 2008. Focus on Formative Feedback. Review of Educational Research 78(1):153-89, March.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Les collocations lexicales : une relation essentiellement binaire d\u00e9finie par la relation pr\u00e9dicatargument", "authors": [ { "first": "Agn\u00e8s", "middle": [], "last": "Tutin", "suffix": "" } ], "year": 2013, "venue": "Langages, n\u00b0", "volume": "189", "issue": "1", "pages": "47--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agn\u00e8s Tutin. 2013. Les collocations lexicales : une relation essentiellement binaire d\u00e9finie par la relation pr\u00e9dicat- argument. Langages, n\u00b0 189(1):47-63, April.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "The system diagram and the three stages.", "uris": null } } } }