{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:15:02.188758Z" }, "title": "ERMI at PARSEME Shared Task 2020: Embedding-Rich Multiword Expression Identification", "authors": [ { "first": "Zeynep", "middle": [], "last": "Yirmibe\u015foglu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bogazi\u00e7i University", "location": { "postCode": "34342", "settlement": "Bebek, Istanbul", "country": "Turkey" } }, "email": "zeynep.yirmibesoglu@boun.edu.tr" }, { "first": "Tunga", "middle": [], "last": "G\u00fcng\u00f6r", "suffix": "", "affiliation": { "laboratory": "", "institution": "Bogazi\u00e7i University", "location": { "postCode": "34342", "settlement": "Bebek, Istanbul", "country": "Turkey" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes the ERMI system submitted to the closed track of the PARSEME shared task 2020 on automatic identification of verbal multiword expressions (VMWEs). ERMI is an embedding-rich bidirectional LSTM-CRF model, which takes into account the embeddings of the word, its POS tag, dependency relation, and its head word. The results are reported for 14 languages, where the system is ranked 1 st in the general cross-lingual ranking of the closed track systems, according to the Unseen MWE-based F 1 .", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper describes the ERMI system submitted to the closed track of the PARSEME shared task 2020 on automatic identification of verbal multiword expressions (VMWEs). ERMI is an embedding-rich bidirectional LSTM-CRF model, which takes into account the embeddings of the word, its POS tag, dependency relation, and its head word. The results are reported for 14 languages, where the system is ranked 1 st in the general cross-lingual ranking of the closed track systems, according to the Unseen MWE-based F 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Multiword expressions (MWEs) are lexical items that consist of multiple lexemes. The challenge of identifying MWEs comes from the fact that their properties cannot directly be deducted from the lexical, syntactic, semantic, pragmatic, and statistical properties of their components (Baldwin and Kim, 2010) . Addressing this challenge, the PARSEME shared task 2020 is a campaign that encourages the development of automatic verbal MWE (VMWE) identification models in a multilingual context. In this third edition of the PARSEME shared task, the focus is on identifying VMWEs that are unseen in training data. For this task, dev, test, train and raw corpora have been provided for 14 languages.", "cite_spans": [ { "start": 282, "end": 305, "text": "(Baldwin and Kim, 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "ERMI (Embedding-Rich Multiword expression Identification) is a multilingual system with a bidirectional LSTM-CRF architecture, which can take as input the embeddings of the word, its POS tag, dependency relation, and its head word. Since the main focus of the shared task is to identify unseen VMWEs, we experiment with how the addition of the head word embedding affects the prediction results for different languages. In addition, we also take advantage of the raw corpora in a semi-supervised teacher-student neural model carrying the same LSTM-CRF architecture for two languages (EL, TR). 
We use no external resources in the training of our system, thus participating in the closed track.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The results for all 14 languages in the closed track have been submitted, where language-specific combinations of the above-mentioned embeddings have been used as input to the system. The system has been ranked 1st in the general cross-lingual ranking of the closed track systems for the Unseen MWE-based F1, and 2nd for the Global MWE-based and Global Token-based F1 metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Named entity recognition (NER) and MWE detection can be considered similar tasks, thus encouraging similar architectures. Neural models have frequently been preferred for NER (Lample et al., 2016; G\u00fcng\u00f6r et al., 2019), and for detecting VMWEs in the previous edition of PARSEME (Ehren et al., 2018; Boros and Burtica, 2018; Berk et al., 2018; Taslimipoor and Rohanian, 2018; Stodden et al., 2018; Zampieri et al., 2018).", "cite_spans": [ { "start": 175, "end": 196, "text": "(Lample et al., 2016;", "ref_id": "BIBREF9" }, { "start": 258, "end": 278, "text": "(Ehren et al., 2018;", "ref_id": "BIBREF6" }, { "start": 279, "end": 303, "text": "Boros and Burtica, 2018;", "ref_id": "BIBREF4" }, { "start": 304, "end": 322, "text": "Berk et al., 2018;", "ref_id": "BIBREF2" }, { "start": 323, "end": 354, "text": "Taslimipoor and Rohanian, 2018;", "ref_id": "BIBREF13" }, { "start": 355, "end": 376, "text": "Stodden et al., 2018;", "ref_id": "BIBREF12" }, { "start": 377, "end": 399, "text": "Zampieri et al., 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "2" }, { "text": "In order to detect VMWEs, we develop a system 1 consisting of three neural network models (two supervised, one semi-supervised), all of which carry the same bidirectional LSTM-CRF architecture, as proposed by Huang et al. (2015) for sequence tagging tasks. All models consist of three layers:", "cite_spans": [ { "start": 212, "end": 231, "text": "Huang et al. (2015)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "2" }, { "text": "This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "2" }, { "text": "1 ERMI is freely available at https://github.com/zeynepyirmibes/ERMI. The three layers are the input layer, the LSTM layer, and the CRF layer, implemented using Keras (Chollet and others, 2015) with TensorFlow backend (Abadi et al., 2015). The architecture of each model is shown in Figure 1. The input of our neural networks is an embedding layer, where we provide the model with the concatenation of the embeddings of the word, its POS tag, its dependency relation to the head, and the head of the word (for some languages). We do not use a pre-trained word embedding model. Instead, we exploit the provided raw corpora 2, which are gathered specifically for this task and are in the same domain as the annotated corpora 3, and train FastText word embedding models for each of the 14 languages separately, using Gensim's FastText implementation (\u0158eh\u016f\u0159ek and Sojka, 2010).
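As an illustration, the following minimal sketch shows this per-language embedding step; it assumes the Gensim 4.x FastText API, and the corpus reader and all hyperparameters other than the 300-dimensional vector size stated below are illustrative placeholders rather than our exact settings.

```python
from gensim.models import FastText

# Sketch of the per-language embedding training described above; only the
# 300-dimensional vector size comes from the paper, the remaining
# hyperparameters are assumed values.
def train_language_embeddings(tokenized_sentences, dim=300):
    """tokenized_sentences: an iterable of token lists read from the raw corpus."""
    return FastText(
        sentences=tokenized_sentences,
        vector_size=dim,  # 300, for word and head word embeddings
        window=5,         # assumed context window
        min_count=3,      # assumed frequency cutoff
    )

# Subword-aware lookup also returns vectors for out-of-vocabulary forms:
# vec = train_language_embeddings(corpus).wv["word"]
```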
The embedding vector dimension for all languages is 300 (for word and head word embeddings), whereas the vocabulary sizes of the embedding models vary with the sizes of the raw corpora. Due to computational limitations, we use only a portion of the raw corpora for the FR, PL, and SV languages.", "cite_spans": [ { "start": 137, "end": 163, "text": "(Chollet and others, 2015)", "ref_id": null }, { "start": 188, "end": 208, "text": "(Abadi et al., 2015)", "ref_id": "BIBREF0" }, { "start": 821, "end": 846, "text": "(\u0158eh\u016f\u0159ek and Sojka, 2010)", "ref_id": null } ], "ref_spans": [ { "start": 254, "end": 262, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "System Description", "sec_num": "2" }, { "text": "We develop two supervised neural models (ERMI and ERMI-head) differing only in the input layer. For the input (embedding) layer, word and head word embeddings, each of dimension 300, are extracted from the FastText embedding models that we pretrained on the raw corpora. Dependency relation and POS tag embeddings are represented as one-hot encodings, and then converted into embeddings during training. Hence, the dimension of the dependency relation embedding for each language is the number of unique DEPREL tags encountered in the training data plus one, reserved for unknown tags in the test data. The same logic holds for the POS tag embedding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised ERMI", "sec_num": "2.1" }, { "text": "For our basic ERMI model, we use as input the concatenation of the embeddings of the word (CoNLL-U's FORM), its POS tag (UPOS), and its dependency relation to the head word (DEPREL). For our second supervised model, ERMI-head, we also concatenate the embedding of the head of the word (CoNLL-U's HEAD) to the input layer, in order to incorporate the relationship the word has to its syntactic head, which, as we observe for some languages (EU, FR, HE, HI, PL, TR), aids in the decision of whether a word is to be annotated as part of a VMWE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised ERMI", "sec_num": "2.1" }, { "text": "Differing only in the input layer, both models pass the input features to the bidirectional LSTM layer, where past (via forward LSTM states) and future (via backward LSTM states) information is taken into account. The output of the LSTM layer is then passed to the CRF layer, which models dependencies between consecutive output tags to produce the final output. With this approach, we incorporate both past and future information using the bi-LSTM architecture, and also the sentence-level tag information using the CRF layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised ERMI", "sec_num": "2.1" }, { "text": "In the third edition of PARSEME, raw (unlabeled) corpora are provided for all languages, thus enabling the possibility of semi-supervised learning. Hence, we exploit a portion of the raw corpus in addition to the annotated training corpus, and propose a teacher-student model (TeachERMI). The aim is to also be able to train on unlabeled data, as suggested by Wu et al. (2020), where they train a teacher-student cross-lingual Named Entity Recognition (NER) model.", "cite_spans": [ { "start": 360, "end": 376, "text": "Wu et al.
(2020)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Semi-supervised ERMI", "sec_num": "2.2" }, { "text": "In this approach, we first train a teacher model for every language separately, on the labeled training set. The teacher model is one of ERMI, or ERMI-head, depending on the validation results per language. Afterwards, we take a portion of the unlabeled raw corpus (corresponding to the half of the size of the training corpus for that language), and label it using the teacher model that we trained. Then, we combine the annotated training corpus with the raw corpus labeled by the teacher model, and train a student model. We observe that this approach only performs better than the teacher model (ERMI or ERMI-head) for Greek (EL) and Turkish (TR). Thus, we employ this approach (TeachERMI) for only two languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semi-supervised ERMI", "sec_num": "2.2" }, { "text": "Tagging Scheme: During pre-processing, we adopt the bigappy-unicrossy tagging scheme proposed by Berk et al. (2019) to better represent overlapping (nesting and crossing) and discontinuous MWEs.", "cite_spans": [ { "start": 97, "end": 115, "text": "Berk et al. (2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "Datasets: During the validation runs (results of which are explained in Section 4.1), we concatenate the training and development corpora for each language, and randomly split 90% for training and 10% for testing. For the teacher-student model, we also use a portion of the raw corpora (roughly half the size of the training sets). After selecting the best system (out of ERMI, ERMI-head, and TeachERMI) for each language, we train our final models using the combined training and development sets, and use the blind test data for testing. For Turkish (TR) and Greek (EL), we develop a teacher-student model, using 10,796 and 9,510 sentences, respectively, of the provided raw corpora in addition to the development and training sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "Hyperparameters: We choose the mini batch size and number of epochs with respect to the size of training sets for each language (ref. Table 1 ). We limit the mini batch size between 8-32, drawn from the conclusions of Reimers and Gurevych (2017) , where they experiment with five sequence tagging tasks with LSTM architectures, and deduct the optimal mini batch size for large training corpora. We use a fixed dropout rate of 0.1 for all bi-LSTM layers.", "cite_spans": [ { "start": 218, "end": 245, "text": "Reimers and Gurevych (2017)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 134, "end": 141, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "We make validation runs on the training and development data, and compare our three neural models for each language. Afterwards, we report the official results of the selected systems on the blind test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "The validation results of our three systems (ERMI, ERMI-head, TeachERMI) are compared for all languages, and the best-performing system (with respect to Unseen MWE-based, Global MWE-based, and Global Token-based F 1 ) for each language is selected for the final submission. 
In Table 1, we report the validation results together with the hyperparameters used during training.", "cite_spans": [], "ref_spans": [ { "start": 277, "end": 284, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Validation Results", "sec_num": "4.1" }, { "text": "The most interesting aspect of the validation runs is the comparison between ERMI and ERMI-head. We observe that the addition of head word embeddings to the input layer improves the Unseen MWE-based F1 score significantly for the EU, FR, HE, HI, PL, and TR languages (by 4.98 points on average for these languages). We also observe that the teacher-student model enlarges the training corpus by around 50%, enabling better generalization for EL and TR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validation Results", "sec_num": "4.1" }, { "text": "Based on the validation runs (Table 1), we train the ERMI system for the DE, GA, IT, PT, RO, SV, and ZH languages, and the ERMI-head system for the EU, FR, HE, HI, and PL languages. For Turkish (TR), we train TeachERMI using the ERMI-head input layer (including the head word embedding), and for Greek (EL), we train TeachERMI using the ERMI input layer (excluding the head word embedding), judging from these languages' validation results. Having selected the most appropriate system for each language, we present the official results in the closed track for all 14 languages on the blind test data in Table 2.", "cite_spans": [], "ref_spans": [ { "start": 32, "end": 41, "text": "(Table 1)", "ref_id": "TABREF2" }, { "start": 595, "end": 602, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Test Results", "sec_num": "4.2" }, { "text": "The validation results showed that the addition of head word embeddings to the input layer significantly aided in detecting unseen VMWEs for EU, FR, HE, HI, PL, and TR. In order to observe the effect of head word embeddings on VMWE detection in the final test set, we removed the head word embeddings from the input layer for one of those languages (EU), and obtained a 24.64% Unseen MWE-based F1 score from the ERMI model, compared to the 26.99% obtained in the official results with ERMI-head. For DE, GA, IT, PT, RO, SV, and ZH, our ERMI model (without head word embeddings in the input layer) performed better than ERMI-head and TeachERMI during the validation runs. To examine this phenomenon in the blind test set, we also trained the ERMI-head system for one of those languages (IT). The 43.84% Global MWE-based F1 score of ERMI for IT drops to 36.88% when head word embeddings are added to the input layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "Analyzing the presence and absence of head word embeddings in the embedding layer for each language, we conclude that feeding a language-specific input layer to the neural models increased our overall performance. By also using the raw corpora for the EL and TR languages with the teacher-student model, we have been able to benefit from training on unlabeled data, which may be preferable in low-resource scenarios. For TR, the validation results show the superiority of ERMI-head over ERMI, and of TeachERMI over ERMI-head. Hence, the final system for Turkish is TeachERMI with the ERMI-head input layer.
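For concreteness, the following schematic outlines the TeachERMI procedure of Section 2.2; train and tag stand in for the actual ERMI fitting and decoding routines, so this is a sketch under those assumptions rather than our exact code.

```python
# Schematic outline of TeachERMI (Section 2.2); `train` and `tag` are
# placeholder callables for the actual ERMI/ERMI-head routines.
def teach_ermi(train, tag, labeled, raw_sentences, arch):
    """labeled: list of (sentence, gold_tags); raw_sentences: unlabeled sentences."""
    # 1. Fit the teacher (ERMI or ERMI-head, chosen per language) on gold data.
    teacher = train(arch, labeled)
    # 2. Take a raw-corpus slice about half the training-set size and
    #    pseudo-label it with the teacher.
    pseudo = [(s, tag(teacher, s)) for s in raw_sentences[: len(labeled) // 2]]
    # 3. Fit the student on the union of gold and pseudo-labeled data.
    return train(arch, labeled + pseudo)
```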
We also run ERMI-head for the final test set, where we obtain a Global MWE-based F1 score of 63.47%, whereas the official score of TeachERMI for TR is 64.38%, showing the benefit of using the teacher-student model for this language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "Looking at the performance of our system in terms of the per-category MWE-based F1 score, we can see that our system outperforms the other closed track system in the LVC.full category for HI, TR, and ZH, and is ranked 2nd among all seven (open and closed track) systems for HI. Our system also predicts MVCs better than the other systems that submitted results for IT and PT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "Our overall system ranked 1st among the 2 systems in the closed track, and 3rd among the 9 systems in the open and closed tracks combined with respect to the Unseen MWE-based F1, which was the focus of this edition of PARSEME. It is worth noting that, although we did not make use of any external resources (participating in the closed track), we outperformed most of the systems in the open track that exploit such resources. Our system also ranked 1st in the closed track for the HI, RO, TR, and ZH languages in the Global MWE-based F1 metric, and 5th among all systems for all 14 languages in the Global MWE-based and Token-based F1 metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "In this paper, we proposed an embedding-rich bidirectional LSTM-CRF system. In addition to word, POS, and dependency relation embeddings, we exploited head word embeddings, especially to tackle the issue of predicting unseen VMWEs. Within the closed track, we used the raw corpora to train word embeddings, and proposed a semi-supervised teacher-student model, which provides the opportunity of training on unlabeled data for VMWE identification. These methods have increased the generalization power, enabling our system to perform best in predicting unseen VMWEs in the closed track.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "2 http://hdl.handle.net/11234/1-3416 3 http://hdl.handle.net/11234/1-3367", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The numerical calculations reported in this paper were partially performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center (TRUBA resources).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TensorFlow: Large-scale machine learning on heterogeneous systems.
Software available from tensorflow.org", "authors": [ { "first": "Mart\u00edn", "middle": [], "last": "Abadi", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Barham", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Brevdo", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Citro", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Devin", "suffix": "" }, { "first": "Sanjay", "middle": [], "last": "Ghemawat", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Harp", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Irving", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Isard", "suffix": "" }, { "first": "Yangqing", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Manjunath", "middle": [], "last": "Kudlur", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Levenberg", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Multiword expressions", "authors": [ { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Su", "middle": [ "Nam" ], "last": "Kim", "suffix": "" } ], "year": 2010, "venue": "Handbook of Natural Language Processing", "volume": "", "issue": "", "pages": "267--292", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Baldwin and Su Nam Kim. 2010. Multiword expressions. In Nitin Indurkhya and Fred J. Damerau, editors, Handbook of Natural Language Processing, Second Edition, pages 267-292. CRC Press, Taylor and Francis Group, Boca Raton, FL.
ISBN 978-1420085921.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Deep-BGT at PARSEME shared task 2018: Bidirectional LSTM-CRF model for verbal multiword expression identification", "authors": [ { "first": "G\u00f6zde", "middle": [], "last": "Berk", "suffix": "" }, { "first": "Berna", "middle": [], "last": "Erden", "suffix": "" }, { "first": "Tunga", "middle": [], "last": "G\u00fcng\u00f6r", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018)", "volume": "", "issue": "", "pages": "248--253", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00f6zde Berk, Berna Erden, and Tunga G\u00fcng\u00f6r. 2018. Deep-BGT at PARSEME shared task 2018: Bidirectional LSTM-CRF model for verbal multiword expression identification. In Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), pages 248-253, Santa Fe, New Mexico, USA, August. ACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Representing overlaps in sequence labeling tasks with a novel tagging scheme: bigappy-unicrossy", "authors": [ { "first": "G\u00f6zde", "middle": [], "last": "Berk", "suffix": "" }, { "first": "Berna", "middle": [], "last": "Erden", "suffix": "" }, { "first": "Tunga", "middle": [], "last": "G\u00fcng\u00f6r", "suffix": "" } ], "year": 2019, "venue": "20th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00f6zde Berk, Berna Erden, and Tunga G\u00fcng\u00f6r. 2019. Representing overlaps in sequence labeling tasks with a novel tagging scheme: bigappy-unicrossy. In Alexander Gelbukh, editor, 20th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing), La Rochelle, France.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "GBD-NER at PARSEME shared task 2018: Multi-word expression detection using bidirectional long-short-term memory networks and graph-based decoding", "authors": [ { "first": "Tiberiu", "middle": [], "last": "Boros", "suffix": "" }, { "first": "Ruxandra", "middle": [], "last": "Burtica", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018)", "volume": "", "issue": "", "pages": "254--260", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tiberiu Boros and Ruxandra Burtica. 2018. GBD-NER at PARSEME shared task 2018: Multi-word expression detection using bidirectional long-short-term memory networks and graph-based decoding. In Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), pages 254-260, Santa Fe, New Mexico, USA, August.
ACL.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Mumpitz at PARSEME shared task 2018: A bidirectional LSTM for the identification of verbal multiword expressions", "authors": [ { "first": "Rafael", "middle": [], "last": "Ehren", "suffix": "" }, { "first": "Timm", "middle": [], "last": "Lichte", "suffix": "" }, { "first": "Younes", "middle": [], "last": "Samih", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018)", "volume": "", "issue": "", "pages": "261--267", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rafael Ehren, Timm Lichte, and Younes Samih. 2018. Mumpitz at PARSEME shared task 2018: A bidirectional LSTM for the identification of verbal multiword expressions. In Proceedings of the Joint Workshop on Linguis- tic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), pages 261-267, Santa Fe, New Mexico, USA, August. ACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The effect of morphology in named entity recognition with sequence tagging", "authors": [ { "first": "Onur", "middle": [], "last": "G\u00fcng\u00f6r", "suffix": "" }, { "first": "Tunga", "middle": [], "last": "Gungor", "suffix": "" }, { "first": "Suzan", "middle": [], "last": "Uskudarli", "suffix": "" } ], "year": 2019, "venue": "Natural Language Engineering", "volume": "25", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Onur G\u00fcng\u00f6r, Tunga Gungor, and Suzan Uskudarli. 2019. The effect of morphology in named entity recognition with sequence tagging. Natural Language Engineering, 25:147-169, 01.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bidirectional LSTM-CRF models for sequence tagging", "authors": [ { "first": "Zhiheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.01991" ] }, "num": null, "urls": [], "raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Neural architectures for named entity recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "260--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California, June. 
ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Software framework for topic modelling with large corpora", "authors": [ { "first": "Petr", "middle": [], "last": "Radim\u0159eh\u016f\u0159ek", "suffix": "" }, { "first": "", "middle": [], "last": "Sojka", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software framework for topic modelling with large corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta, May. ELRA.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "338--348", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338-348, Copenhagen, Denmark, September. ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "TRAPACC and TRAPACCS at PARSEME shared task 2018: Neural transition tagging of verbal multiword expressions", "authors": [ { "first": "Regina", "middle": [], "last": "Stodden", "suffix": "" }, { "first": "Behrang", "middle": [], "last": "Qasemizadeh", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Kallmeyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018)", "volume": "", "issue": "", "pages": "268--274", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Stodden, Behrang QasemiZadeh, and Laura Kallmeyer. 2018. TRAPACC and TRAPACCS at PARSEME shared task 2018: Neural transition tagging of verbal multiword expressions. In Proceedings of the Joint Work- shop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), pages 268- 274, Santa Fe, New Mexico, USA, August. ACL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "SHOMA at parseme shared task on automatic identification of VMWEs: Neural multiword expression tagging with high generalisation", "authors": [ { "first": "Shiva", "middle": [], "last": "Taslimipoor", "suffix": "" }, { "first": "Omid", "middle": [], "last": "Rohanian", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.03056" ] }, "num": null, "urls": [], "raw_text": "Shiva Taslimipoor and Omid Rohanian. 2018. SHOMA at parseme shared task on automatic identification of VMWEs: Neural multiword expression tagging with high generalisation. 
arXiv preprint arXiv:1809.03056.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Single-/multi-source cross-lingual NER via teacher-student learning on unlabeled data in target language", "authors": [ { "first": "Qianhui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zijia", "middle": [], "last": "Lin", "suffix": "" }, { "first": "B\u00f6rje", "middle": [ "F" ], "last": "Karlsson", "suffix": "" }, { "first": "Jian-Guang", "middle": [], "last": "Lou", "suffix": "" }, { "first": "Biqing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.12440" ] }, "num": null, "urls": [], "raw_text": "Qianhui Wu, Zijia Lin, B\u00f6rje F Karlsson, Jian-Guang Lou, and Biqing Huang. 2020. Single-/multi-source cross-lingual NER via teacher-student learning on unlabeled data in target language. arXiv preprint arXiv:2004.12440.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Veyn at PARSEME shared task 2018: Recurrent neural networks for VMWE identification", "authors": [ { "first": "Nicolas", "middle": [], "last": "Zampieri", "suffix": "" }, { "first": "Manon", "middle": [], "last": "Scholivet", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Ramisch", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Favre", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018)", "volume": "", "issue": "", "pages": "290--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicolas Zampieri, Manon Scholivet, Carlos Ramisch, and Benoit Favre. 2018. Veyn at PARSEME shared task 2018: Recurrent neural networks for VMWE identification. In Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), pages 290-296, Santa Fe, New Mexico, USA, August. ACL.", "links": null } }, "ref_entries": { "TABREF2": { "text": "Validation results and hyperparameters of our three models for each language.", "type_str": "table", "num": null, "html": null, "content": "" }, "TABREF3": { "text": "Official Language-specific Results of ERMI", "type_str": "table", "num": null, "html": null, "content": "
Language | System    | Unseen MWE-based P / R / F1 (Rank) | Global MWE-based P / R / F1 (Rank) | Global Token-based P / R / F1 (Rank)
DE       | ERMI      | 24.02 / 20.27 / 21.98 (1) | 63.23 / 44.66 / 52.35 (2) | 76.14 / 42.66 / 54.68 (2)
EL       | TeachERMI | 28.70 / 31.00 / 29.81 (1) | 67.20 / 56.16 / 61.19 (2) | 75.19 / 58.82 / 66.00 (2)
EU       | ERMI-head | 21.13 / 37.33 / 26.99 (1) | 75.95 / 70.35 / 73.04 (2) | 80.10 / 72.25 / 75.97 (2)
FR       | ERMI-head | 18.54 / 35.67 / 24.40 (1) | 61.52 / 61.30 / 61.41 (2) | 70.86 / 65.53 / 68.09 (2)
GA       | ERMI      | 14.79 / 6.98 / 9.48 (1)   | 32.62 / 13.99 / 19.58 (2) | 69.71 / 21.15 / 32.45 (1)
HE       | ERMI-head | 11.49 / 6.62 / 8.40 (1)   | 41.81 / 24.85 / 31.17 (2) | 46.72 / 26.22 / 33.59 (2)
HI       | ERMI-head | 37.09 / 41.67 / 39.25 (1) | 63.48 / 56.32 / 59.69 (1) | 79.48 / 62.00 / 69.66 (1)
IT       | ERMI      | 17.44 / 10.00 / 12.71 (1) | 66.27 / 32.75 / 43.84 (2) | 75.45 / 32.55 / 45.48 (2)
PL       | ERMI-head | 23.28 / 29.24 / 25.92 (1) | 73.92 / 64.91 / 69.12 (2) | 77.87 / 65.86 / 71.36 (2)
PT       | ERMI      | 24.63 / 33.33 / 28.33 (1) | 68.84 / 59.46 / 63.81 (2) | 73.62 / 58.80 / 65.38 (2)
RO       | ERMI      | 16.45 / 30.10 / 21.28 (1) | 85.67 / 81.57 / 83.57 (1) | 88.69 / 82.97 / 85.74 (1)
SV       | ERMI      | 31.16 / 28.67 / 29.86 (1) | 72.68 / 55.73 / 63.08 (2) | 77.24 / 52.53 / 62.53 (2)
TR       | TeachERMI | 37.28 / 35.67 / 36.46 (1) | 67.11 / 61.86 / 64.38 (1) | 69.11 / 62.42 / 65.60 (1)
ZH       | ERMI      | 47.49 / 34.67 / 40.08 (1) | 66.67 / 55.98 / 60.86 (1) | 70.92 / 58.99 / 64.41 (1)
Total    |           | 25.25 / 27.23 / 26.20 (1) | 64.78 / 52.85 / 58.21 (2) | 73.65 / 54.48 / 62.63 (2)
" } } } }