{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:29:30.367972Z" }, "title": "Building Low-Resource NER Models Using Non-Speaker Annotations", "authors": [ { "first": "Tatiana", "middle": [], "last": "Tsygankova", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": { "addrLine": "19104 \u266d Duolingo", "postCode": "15206", "settlement": "Philadelphia, Pittsburgh", "region": "PA, PA" } }, "email": "" }, { "first": "Francesca", "middle": [], "last": "Marini", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": { "addrLine": "19104 \u266d Duolingo", "postCode": "15206", "settlement": "Philadelphia, Pittsburgh", "region": "PA, PA" } }, "email": "fmarini@seas.upenn.edu" }, { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": { "addrLine": "19104 \u266d Duolingo", "postCode": "15206", "settlement": "Philadelphia, Pittsburgh", "region": "PA, PA" } }, "email": "stephen@duolingo.com" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Pennsylvania", "location": { "addrLine": "19104 \u266d Duolingo", "postCode": "15206", "settlement": "Philadelphia, Pittsburgh", "region": "PA, PA" } }, "email": "danroth@seas.upenn.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In low-resource natural language processing (NLP), the key problems are a lack of target language training data, and a lack of native speakers to create it. Cross-lingual methods have had notable success in addressing these concerns, but in certain common circumstances, such as insufficient pretraining corpora or languages far from the source language, their performance suffers. In this work we propose a complementary approach to building low-resource Named Entity Recognition (NER) models using \"non-speaker\" (NS) annotations, provided by annotators with no prior experience in the target language. We recruit 30 participants in a carefully controlled annotation experiment with Indonesian, Russian, and Hindi. We show that use of NS annotators produces results that are consistently on par or better than cross-lingual methods built on modern contextual representations, and have the potential to outperform with additional effort. We conclude with observations of common annotation patterns and recommended implementation practices, and motivate how NS annotations can be used in addition to prior methods for improved performance. 1", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In low-resource natural language processing (NLP), the key problems are a lack of target language training data, and a lack of native speakers to create it. Cross-lingual methods have had notable success in addressing these concerns, but in certain common circumstances, such as insufficient pretraining corpora or languages far from the source language, their performance suffers. In this work we propose a complementary approach to building low-resource Named Entity Recognition (NER) models using \"non-speaker\" (NS) annotations, provided by annotators with no prior experience in the target language. We recruit 30 participants in a carefully controlled annotation experiment with Indonesian, Russian, and Hindi. 
We show that the use of NS annotations produces results that are consistently on par with or better than cross-lingual methods built on modern contextual representations, and that they have the potential to outperform those methods with additional effort. We conclude with observations of common annotation patterns and recommended implementation practices, and motivate how NS annotations can be used in addition to prior methods for improved performance. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Work in low-resource languages is not only academically compelling, breaking from popular use of massive compute power on unlimited English data, but also useful, resulting in improved digital tools for under-resourced communities. Two common strategies for low-resource NLP include (a) building cross-lingual models, and (b) annotating data in the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Cross-lingual approaches -in which models are trained on some high-resource language, and applied to the target language -have been [Figure 1: An example of how romanized Hindi text can be annotated without prior language knowledge.]", "cite_spans": [], "ref_spans": [ { "start": 132, "end": 140, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "shown to be surprisingly effective (Wu and Dredze, 2019; Lample and Conneau, 2019) . However, in common circumstances, such as when working with languages with insufficient training corpora or those far from the available source languages, cross-lingual methods suffer (Wu and Dredze, 2020; K et al., 2020) . Absent sufficient cross-lingual methods, conventional wisdom suggests that only native (or fluent) speakers of a language can provide useful data to train NLP models. But in low-resource scenarios, fluent speakers may not be readily available.", "cite_spans": [ { "start": 35, "end": 56, "text": "(Wu and Dredze, 2019;", "ref_id": "BIBREF29" }, { "start": 57, "end": 82, "text": "Lample and Conneau, 2019)", "ref_id": "BIBREF12" }, { "start": 269, "end": 290, "text": "(Wu and Dredze, 2020;", "ref_id": "BIBREF30" }, { "start": 291, "end": 306, "text": "K et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address this limitation, we hypothesize that the search for annotators can be extended beyond fluent speakers. In this work, we propose an unconventional approach to low-resource named entity recognition (NER): obtaining annotations from annotators with no familiarity with the target language, referred to as \"non-speaker\" (NS) annotation. We posit that annotators are able to use phonetic, syntactic, and even semantic information from their languages of fluency to inform recognition. 
One example of how phonetic information can be used for NER annotation is shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 573, "end": 581, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We test our hypothesis in a carefully controlled annotation experiment, comparing the performance of non-speaker (NS) annotators to that of fluent speakers (FS) in Indonesian, Russian, and Hindi.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our findings are summarized in two key takeaways: (1) non-speaker annotators are able to produce useful annotations despite having no experience annotating or learning the target language; and (2) non-speaker annotations are on par with or better than cross-lingual methods built on modern contextual representations. We conclude with observations of factors that can influence NS annotation quality, such as the availability of a good romanization system or the presence of capitalization in the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Named Entity Recognition (NER) has been studied for many years (Ratinov and Roth, 2009; Lample et al., 2016; Ma and Hovy, 2016) , with most focus on English and a few other European languages (Tjong Kim Sang and De Meulder, 2003) .", "cite_spans": [ { "start": 63, "end": 87, "text": "(Ratinov and Roth, 2009;", "ref_id": "BIBREF22" }, { "start": 88, "end": 108, "text": "Lample et al., 2016;", "ref_id": "BIBREF11" }, { "start": 109, "end": 127, "text": "Ma and Hovy, 2016)", "ref_id": "BIBREF14" }, { "start": 192, "end": 229, "text": "(Tjong Kim Sang and De Meulder, 2003)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Recently, there has been growing interest in low-resource NLP, with work in part-of-speech tagging (Plank and Agi\u0107, 2018) , parsing (Rasooli and Collins, 2017), machine translation (Xia et al., 2019) , and other fields. 
Low-resource NER has seen work using Wikipedia (Tsai et al., 2016) , self-attention (Xie et al., 2018) , and multilingual contextual representations (Wu and Dredze, 2019) .", "cite_spans": [ { "start": 99, "end": 121, "text": "(Plank and Agi\u0107, 2018)", "ref_id": "BIBREF20" }, { "start": 181, "end": 199, "text": "(Xia et al., 2019)", "ref_id": "BIBREF31" }, { "start": 267, "end": 286, "text": "(Tsai et al., 2016)", "ref_id": "BIBREF28" }, { "start": 304, "end": 322, "text": "(Xie et al., 2018)", "ref_id": "BIBREF32" }, { "start": 369, "end": 390, "text": "(Wu and Dredze, 2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "There has been a small amount of work using non-speaker annotations (Mayhew et al., 2019a) , but mainly as an application of a technique, falling short of the exhaustive study in this paper.", "cite_spans": [ { "start": 68, "end": 90, "text": "(Mayhew et al., 2019a)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Several interfaces exist for non-speaker annotations in NER, including TALEN (Mayhew, 2018) , which we use, ELISA IE (Lin et al., 2018) , and Dragonfly (Costello et al., 2020) , which performed small-scale experiments with non-speaker annotators.", "cite_spans": [ { "start": 77, "end": 91, "text": "(Mayhew, 2018)", "ref_id": "BIBREF15" }, { "start": 117, "end": 135, "text": "(Lin et al., 2018)", "ref_id": "BIBREF13" }, { "start": 152, "end": 175, "text": "(Costello et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "A similar approach has been proposed for machine translation (Hermjakob et al., 2018b) and speech recognition (Chen et al., 2016) . In the former case (assuming the translation direction is Foreign-to-English), it is often sufficient to translate several of the most important content words, then reconstruct the most likely sentence that uses these. In speech recognition, it is possible to listen to a language one does not speak and produce a phonetic transcription that can be aggregated with others into a reasonable transcription, a process referred to as mismatched crowdsourcing. ", "cite_spans": [ { "start": 61, "end": 86, "text": "(Hermjakob et al., 2018b)", "ref_id": "BIBREF9" }, { "start": 110, "end": 129, "text": "(Chen et al., 2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our experiment consisted of a series of trials, typically attended by 1-5 participants. Each trial ran for four hours and consisted of three tasks:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "(1) a one-hour instructional training, (2) a 20-minute English annotation exercise, and (3) a series of five 30-minute sessions annotating documents in the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "We chose three target languages: Indonesian, Russian, and Hindi. These languages were chosen based on the availability of gold-annotated data and fluent speakers, and on language difficulty. The constraint of available fluent speakers for annotation, which we use as a point of comparison for non-speaker annotation performance, led us to choose mid- to high-resource languages for evaluation. 
To read accounts of similar techniques used on true low-resource languages, see the applications section ( \u00a74.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Selection", "sec_num": null }, { "text": "We define language difficulty as the task-specific difficulty experienced by an English speaker creating NER annotations in the target language. In practice, this difficulty mainly depends on script and capitalization, but may also depend on other factors such as language family and number of English loanwords. Under this task-specific definition and the relevant properties summarized in Table 1 , Indonesian is identified as the \"easiest\" language, Russian is \"intermediate,\"", "cite_spans": [], "ref_spans": [ { "start": 386, "end": 393, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Language Selection", "sec_num": null }, { "text": "and Hindi is the \"hardest.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Selection", "sec_num": null }, { "text": "Participant Selection In total, there were 30 participants involved in the study, selected largely through a network of friends and acquaintances at the University of Pennsylvania. All participants were uniformly paid $10/hour for their time and were preliminarily screened for language exposure. We chose not to use crowd-sourcing platforms, such as Mechanical Turk, to allow flexibility in administration format and recruitment strategy. The methodology for the study was approved by the Institutional Review Board at the university. Data We used gold-annotated NER data from the LORELEI project (Strassel and Tracey, 2016; Tracey et al., 2019) . This data uses 4 entity tags: Person, Organization, Location, and Geopolitical Entity. We created splits of these datasets ourselves, statistics of which can be seen in Table 2 . These corpora are not parallel. Accounting for annotation speed differences, FS and NS annotators were given document sets of different sizes to annotate during the same time frame. Each document set used in the experiment was annotated by at least two participants (visual reference in Figure 2 ).", "cite_spans": [ { "start": 598, "end": 625, "text": "(Strassel and Tracey, 2016;", "ref_id": "BIBREF25" }, { "start": 626, "end": 646, "text": "Tracey et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 818, "end": 825, "text": "Table 2", "ref_id": "TABREF4" }, { "start": 1116, "end": 1124, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Language Selection", "sec_num": null }, { "text": "Task 1: Instructional Training In total, two instructional documents were used -one providing an overview of the task goals and annotation software, and the other outlining key annotation principles in the form of an interactive annotation guideline quiz. The annotation software used was TALEN (Mayhew, 2018) , a tool designed for annotating named entities when the annotators don't speak the target language. 
Task 2: English Annotation Exercise The goal of this exercise was both to familiarize the participants with the software interface and to provide an indicator of their annotator potential and understanding of the annotation guidelines, used later to filter out low-quality annotators.", "cite_spans": [ { "start": 295, "end": 309, "text": "(Mayhew, 2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Language Selection", "sec_num": null }, { "text": "Participants completed their 2.5 hours of annotation in 5 sessions of 30 minutes each. All FS annotators spent their time annotating documents in their native language, while NS annotators worked with foreign languages that they had no prior exposure to. Given that all of the languages used in the study were high- to mid-resource, annotators were given explicit instructions not to use machine translation resources such as Google Translate, but were allowed to use internet search to determine the nature of the entities. For Russian and Hindi, which do not use Latin script, we provided uroman (Hermjakob et al., 2018a) romanization, so that the script was not a barrier to successful annotation (Figure 1 ). Summary statistics of the annotated documents can be found in Table 3 . Note that the larger annotated data size from the NS annotators reflects the fact that there were more NS annotators than FS annotators, a choice we deliberately made. Table 4 : Annotation quality of annotations collected from fluent speaker (FS) and non-speaker (NS) annotators against the gold data.", "cite_spans": [ { "start": 591, "end": 616, "text": "(Hermjakob et al., 2018a)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 693, "end": 702, "text": "(Figure 1", "ref_id": null }, { "start": 768, "end": 775, "text": "Table 3", "ref_id": "TABREF6" }, { "start": 946, "end": 953, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Task 3: Target Language Annotation Sessions", "sec_num": null }, { "text": "This section describes the analysis performed on the gathered FS and NS annotations, covering the models and metrics used and our key experimental takeaways.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments & Analysis", "sec_num": "4" }, { "text": "Two Performance Measures In this work, we report two distinct F1 performance measures: Annotation Quality and Model Performance. Annotation Quality refers to the results of participant annotation compared to the existing gold annotations on the same documents. In this evaluation, no model is trained, and we simply calculate the F1 scores by treating NS annotations as predictions themselves (results reported in Tables 3 and 5) .", "cite_spans": [], "ref_spans": [ { "start": 416, "end": 431, "text": "Tables 3 and 5)", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Models & Metrics", "sec_num": "4.1" }, { "text": "In contrast, Model Performance refers to the more traditional NER setup, in which we train a model on the obtained annotations and predict on a held-out test set. The following sections outline the results of this performance metric (results reported in Figure 3 ).", "cite_spans": [], "ref_spans": [ { "start": 256, "end": 264, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Models & Metrics", "sec_num": "4.1" }, { "text": "To account for random errors, we prioritized recruiting at least two participants to annotate each document set. We then used English exercise scores to choose between the resulting conflicting annotations for the same document sets. 
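As a concrete illustration of the Annotation Quality measure described above under Models & Metrics, participant annotations can be scored directly against the gold annotations with standard span-level F1, with no model in the loop. The following is only a minimal sketch, assuming CoNLL-style BIO files and the seqeval library; the helper function and the file paths are hypothetical, not taken from our actual setup.

from seqeval.metrics import precision_score, recall_score, f1_score

def read_bio_tags(path):
    # Read a CoNLL-style file into one tag sequence per sentence;
    # the BIO tag is assumed to sit in the last whitespace-separated column.
    sentences, current = [], []
    with open(path, encoding='utf-8') as f:
        for line in f:
            line = line.strip()
            if not line:
                if current:
                    sentences.append(current)
                    current = []
            else:
                current.append(line.split()[-1])
    if current:
        sentences.append(current)
    return sentences

gold = read_bio_tags('gold/indonesian.bio')             # hypothetical paths
pred = read_bio_tags('ns_annotator_03/indonesian.bio')
print(precision_score(gold, pred), recall_score(gold, pred), f1_score(gold, pred))

The same computation applies to FS annotations; seqeval scores entire entity spans, matching the phrase-level F1 conventionally reported for NER.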
A summary of the data selection process is shown in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 286, "end": 294, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Data Preparation", "sec_num": null }, { "text": "In order to ensure that documents lacking annotations were treated as NS annotator mistakes rather than as negative training examples, we removed all empty documents from the NS data before training. No other pre-processing was done.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Preparation", "sec_num": null }, { "text": "For all experiments, we used a standard BiLSTM-CRF model (Ma and Hovy, 2016) implemented in AllenNLP (Gardner et al., 2018) , and used multilingual BERT embeddings (Devlin et al., 2019) , which have been shown to exhibit surprising cross-lingual properties (Wu and Dredze, 2019) . For the sake of speed and simplicity, we use BERT embeddings as features, and do not fine-tune the model. For each dataset, we train with 5 random seeds (Reimers and Gurevych, 2017) and report the average. We recognize that these annotations are missing many entities. Following recent work on partial annotations, we use an iterative method from (Mayhew et al., 2019a) called Constrained Binary Learning (CBL) that detects unmarked tokens likely to be entities and down-weights them in training. All results reported subsequently use this method on the FS and NS annotations.", "cite_spans": [ { "start": 57, "end": 76, "text": "(Ma and Hovy, 2016)", "ref_id": "BIBREF14" }, { "start": 101, "end": 123, "text": "(Gardner et al., 2018)", "ref_id": "BIBREF6" }, { "start": 164, "end": 185, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 257, "end": 278, "text": "(Wu and Dredze, 2019)", "ref_id": "BIBREF29" }, { "start": 434, "end": 462, "text": "(Reimers and Gurevych, 2017)", "ref_id": "BIBREF23" }, { "start": 628, "end": 650, "text": "(Mayhew et al., 2019a)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Models", "sec_num": null }, { "text": "Baseline Comparisons Given that there is little prior work on this subject, it is hard to compare our results against an established baseline. To contextualize our results, we compare NS models against FS models and cross-lingual methods. However, both are imperfect comparisons and should be interpreted with caution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Models", "sec_num": null }, { "text": "In our comparison with FS models, the main difficulty is unequal training data size. Our experimental design intentionally left us with more NS annotations than FS annotations (see Table 3 ). It might be tempting to address this difficulty by balancing data sizes; however, constraining the NS annotations to the sizes of the FS data would not give a fair comparison: the imbalance reflects the real-life scenario in which non-speakers of a language are far easier to find than speakers of the language, who may not be available at all.", "cite_spans": [], "ref_spans": [ { "start": 181, "end": 189, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Machine Learning Models", "sec_num": null }, { "text": "In our comparison with cross-lingual models, the main difficulty is the strength of pre-trained embeddings for the baseline models. 
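Both the models trained on our annotations and the cross-lingual baselines discussed next share the tagging architecture outlined above under Machine Learning Models. The following is a minimal, purely illustrative sketch of such a tagger in plain PyTorch/transformers rather than the AllenNLP configuration actually used for the reported results: frozen multilingual BERT features feed a Bi-LSTM tag scorer, with the CRF layer and the CBL re-weighting omitted for brevity; the class name, sizes, and example sentence are assumptions.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BiLSTMTagger(nn.Module):
    # Frozen mBERT features -> Bi-LSTM -> per-token tag scores (CRF omitted).
    def __init__(self, num_tags, hidden_size=256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained('bert-base-multilingual-cased')
        for p in self.encoder.parameters():
            p.requires_grad = False   # embeddings are used as features, not fine-tuned
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, hidden_size,
                            batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_size, num_tags)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():
            reps = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        out, _ = self.lstm(reps)
        return self.proj(out)          # shape: (batch, sequence_length, num_tags)

tokenizer = AutoTokenizer.from_pretrained('bert-base-multilingual-cased')
batch = tokenizer(['Presiden Joko Widodo mengunjungi Jakarta .'], return_tensors='pt')
model = BiLSTMTagger(num_tags=9)       # B/I tags for PER, ORG, LOC, GPE plus O
scores = model(batch['input_ids'], batch['attention_mask'])

Training such a model once per random seed and averaging the resulting test F1 mirrors the evaluation protocol described above.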
As a strong language-independent baseline for existing cross-lingual methods, we trained models on English NER data and evaluated on the target language test data (experiments with related source languages showed similar results, and were omitted due to space constraints). Our experimental decision to use relatively high-resource languages meant that mBERT models had access to reasonably large amounts of pre-training data (each language was in the top 50 by Wikipedia size), and these baselines are therefore unfairly strong. One would expect cross-lingual performance to decrease on lower-resource languages (Wu and Dredze, 2020).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Machine Learning Models", "sec_num": null }, { "text": "Figure 3 summarizes the results of this experiment by providing a comparison of models trained on non-speaker (NS) and fluent speaker (FS) annotations to cross-lingual models. From these results we distill two main takeaways.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Main Results", "sec_num": "4.2" }, { "text": "The results of our experiments show that across all languages, non-speaker annotations have produced meaningful results. In Indonesian, NS models are evidently strong and perform at a similar level to models trained on fluent-speaker annotations. This is likely attributable to the high entity overlap between English and Indonesian and the limited language-specific information required for successful annotation. In practice, this indicates that 2.5 hours of language exposure was enough for NS annotators to produce annotations of sufficient quality to be useful. The gap between NS and FS model performance widens for the other languages, and correlates with a drop in annotation quality. This suggests that 2.5 hours are not sufficient to produce NS annotations whose models rival FS models (however, as we will see in Takeaway 2, this is sufficient to rival cross-lingual baselines). One reason is that in more difficult languages, annotators need more time to become acquainted with the language, so we could expect more substantial improvements over time. To test this hypothesis, we examined mean annotation quality trends of NS annotators, summarized in Table 5 . Across all languages, we see annotators improving over time. For Russian and Hindi in particular, we observe a more overt learning curve, indicating that there are more nuances to these languages which must be noticed by annotators over time. This upward trend in annotation quality suggests that the NS results reported here are not the peak results that could be achieved. With additional training and experience, NS annotators can produce stronger results even in more difficult languages.", "cite_spans": [], "ref_spans": [ { "start": 1152, "end": 1159, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Takeaway 1: NS Annotation Works", "sec_num": null }, { "text": "In Figure 3 , across all languages the performance of models built on NS annotations (blue bars) consistently matches or exceeds the performance of cross-lingual models (red bars). Again, in a low-resource scenario we might expect cross-lingual model performance to drop substantially, so the fact that the two are comparable in this situation is encouraging. Additional experiments combining NS and English data (purple bars) show improvements in Indonesian and Russian, but inconclusive changes in Hindi. 
Altogether, these results demonstrate that using NS annotations is one of the most effective available ways of building an NER model in a low-resource scenario.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Takeaway 2: NS Remains On Par With Cross-Lingual Baselines", "sec_num": null }, { "text": "One unexpected observation is that FS scores are always 15-20 points below those of models trained on gold-annotated data; we hypothesize that this difference is mainly attributable to annotator training level and not language ability (Geva et al., 2019) .", "cite_spans": [ { "start": 222, "end": 241, "text": "(Geva et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Takeaway 2: NS Remains On Par With Cross-Lingual Baselines", "sec_num": null }, { "text": "Although the use of non-speaker annotators has received little attention in the research community, there have been several projects that lean on this idea heavily. In the LORELEI evaluations, research groups were tasked with producing NLP tools for truly low-resource languages (including Kinyarwanda, Sinhalese, Ilocano, and Odiya) within a short time frame. A number of new techniques came out of these evaluations, and many groups resorted to using non-speaker annotators (Cheung et al., 2017; Mayhew et al., 2017, 2019b). In each group, annotators were trained more thoroughly than in the empirical study here, and exhibited a more focused and long-term effort. However, in these projects, the goal was to maximize the final score, not to make careful observations of the annotation process. This paper fulfills that need.", "cite_spans": [ { "start": 473, "end": 494, "text": "(Cheung et al., 2017;", "ref_id": "BIBREF2" }, { "start": 495, "end": 514, "text": "Mayhew et al., 2017", "ref_id": "BIBREF17" }, { "start": 515, "end": 537, "text": "Mayhew et al., , 2019b", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Low-resource Applications", "sec_num": "4.3" }, { "text": "While Section 4 showed quantitative outcomes of the experimental process, this section explores the many factors that can contribute to obtaining high-quality NS annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "When capitalization is available in the target language, it is a strong indicator of named entities. Analyzing NS annotations over languages with capitalization -Indonesian and Russian -shows that over 90% of annotated tokens are capitalized, a rate similar to what we would expect in English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NS Annotation Practices & Strengths", "sec_num": null }, { "text": "For languages with non-Latin scripts -Russian and Hindi -NS annotators often relied on phonetic clues and always annotated on romanized versions of the text. Having access to well-romanized text is critical, as it helps NS annotators make connections with English cognates or previously tagged entities. Some real examples of phonetically recognizable entities from Hindi are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NS Annotation Practices & Strengths", "sec_num": null }, { "text": "A majority of entities tagged in languages with no capitalization are either geo-political entities (e.g. Pakistan, America) or well-known Western names (e.g. Obama, Twitter, BBC). 
Once an annotator learns what a word in the target language represents, they tend to tag every instance of it as an entity. As a result, we found that NS annotators tend to tag a proportionally less diverse set of entities than FS annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "paakistaan, biibiisii hindii, baamglaadesh", "sec_num": null }, { "text": "What makes a good annotator? Analyzing participant language familiarity and instructional quiz scores shows that neither multilingualism nor initial guideline understanding is a clear predictor of annotator quality. Participants who performed best were detail-oriented, patient, and often proactively vocalized their interest in the task or in the top annotator award incentive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "paakistaan, biibiisii hindii, baamglaadesh", "sec_num": null }, { "text": "One strength that human non-speaker annotators bring to NER annotation is that, unlike an automatic system, they are able to make inferences using common-sense world knowledge. For example, they may use a header to pick out the domain of a document, or use neighboring entities to inform decisions, as in Figure 1 , where the presence of New York suggests Central Park as an entity.", "cite_spans": [], "ref_spans": [ { "start": 294, "end": 302, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "paakistaan, biibiisii hindii, baamglaadesh", "sec_num": null }, { "text": "Looking to other NLP tasks, it seems clear that NS annotations for conceptually in-depth tasks such as dependency parsing or textual entailment are unlikely to have usable quality. However, for tasks such as part-of-speech tagging, it could be possible, especially with the help of a tag lexicon and an elementary grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "How does this generalize to other tasks?", "sec_num": null }, { "text": "We demonstrate the effectiveness of using non-speaker annotations as an alternative to cross-lingual methods for building low-resource NER models. A qualitative exploration of the resulting data provides insights about what makes NS annotators so counterintuitively successful. One avenue for future exploration is active learning (Settles, 2009) , which has been shown to help in low-resource situations (Chaudhary et al., 2019) . Further work may also explore optimal ways to combine NS annotators with FS annotators, should they be available.", "cite_spans": [ { "start": 329, "end": 344, "text": "(Settles, 2009)", "ref_id": "BIBREF24" }, { "start": 403, "end": 427, "text": "(Chaudhary et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "This work was supported by Contract HR0011-18-2-0052 and Contract HR0011-15-C-0113 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. 
Government.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": "7" }, { "text": "For more details, see: http://cogcomp.org/page/publication_view/941", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.nist.gov/itl/iad/mig/ lorehlt-evaluations", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A little annotation does a lot of good: A study in bootstrapping low-resource named entity recognizers", "authors": [ { "first": "Aditi", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Jiateng", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Zaid", "middle": [], "last": "Sheikh", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "5164--5174", "other_ids": { "DOI": [ "10.18653/v1/D19-1520" ] }, "num": null, "urls": [], "raw_text": "Aditi Chaudhary, Jiateng Xie, Zaid Sheikh, Graham Neubig, and Jaime Carbonell. 2019. A little annotation does a lot of good: A study in bootstrapping low-resource named entity recognizers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5164-5174, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Mismatched crowdsourcing based language perception for under-resourced languages", "authors": [ { "first": "Wenda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hasegawa-Johnson", "suffix": "" }, { "first": "Nancy", "middle": [ "F" ], "last": "Chen", "suffix": "" } ], "year": 2016, "venue": "Procedia Computer Science", "volume": "81", "issue": "", "pages": "23--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenda Chen, Mark Hasegawa-Johnson, and Nancy F Chen. 2016. Mismatched crowdsourcing based language perception for under-resourced languages. Procedia Computer Science, 81:23-29.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "ELISA system description for LoReHLT", "authors": [ { "first": "Leon", "middle": [], "last": "Cheung", "suffix": "" }, { "first": "Thamme", "middle": [], "last": "Gowda", "suffix": "" }, { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "Nelson", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Mayn", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leon Cheung, Thamme Gowda, Ulf Hermjakob, Nelson Liu, Jonathan May, Alexandra Mayn, Nima Pourdamghani, Michael Pust, Kevin Knight, Nikolaos Malandrakis, et al. 2017. 
ELISA system description for LoReHLT 2017.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Dragonfly: Advances in non-speaker annotation for low resource languages", "authors": [ { "first": "Cash", "middle": [], "last": "Costello", "suffix": "" }, { "first": "Shelby", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Caitlyn", "middle": [], "last": "Bishop", "suffix": "" }, { "first": "James", "middle": [], "last": "Mayfield", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Mcnamee", "suffix": "" } ], "year": 2020, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cash Costello, Shelby Anderson, Caitlyn Bishop, James Mayfield, and Paul McNamee. 2020. Dragonfly: Advances in non-speaker annotation for low resource languages. In LREC.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "AllenNLP: A deep semantic natural language processing platform", "authors": [ { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Grus", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Nelson", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Schmitz", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of Workshop for NLP Open Source Software (NLP-OSS)", "volume": "", "issue": "", "pages": "1--6", "other_ids": { "DOI": [ "10.18653/v1/W18-2501" ] }, "num": null, "urls": [], "raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP- OSS), pages 1-6, Melbourne, Australia. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets", "authors": [ { "first": "Mor", "middle": [], "last": "Geva", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "1161--1166", "other_ids": { "DOI": [ "10.18653/v1/D19-1107" ] }, "num": null, "urls": [], "raw_text": "Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1161-1166, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Out-of-the-box universal Romanization tool uroman", "authors": [ { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL 2018, System Demonstrations", "volume": "", "issue": "", "pages": "13--18", "other_ids": { "DOI": [ "10.18653/v1/P18-4003" ] }, "num": null, "urls": [], "raw_text": "Ulf Hermjakob, Jonathan May, and Kevin Knight. 2018a. Out-of-the-box universal Romanization tool uroman. In Proceedings of ACL 2018, System Demonstrations, pages 13-18, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Translating a language you don't know in the Chinese room", "authors": [ { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Pust", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL 2018, System Demonstrations", "volume": "", "issue": "", "pages": "62--67", "other_ids": { "DOI": [ "10.18653/v1/P18-4011" ] }, "num": null, "urls": [], "raw_text": "Ulf Hermjakob, Jonathan May, Michael Pust, and Kevin Knight. 2018b. Translating a language you don't know in the Chinese room. In Proceedings of ACL 2018, System Demonstrations, pages 62-67, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Cross-Lingual Ability of Multilingual BERT: An Empirical Study", "authors": [ { "first": "K", "middle": [], "last": "Karthikeyan", "suffix": "" }, { "first": "Zihan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. 
Cross-Lingual Ability of Multilingual BERT: An Empirical Study.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Neural architectures for named entity recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "260--270", "other_ids": { "DOI": [ "10.18653/v1/N16-1030" ] }, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Crosslingual language model pretraining", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.07291" ] }, "num": null, "urls": [], "raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. arXiv preprint arXiv:1901.07291.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Platforms for non-speakers annotating names in any language", "authors": [ { "first": "Ying", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Cash", "middle": [], "last": "Costello", "suffix": "" }, { "first": "Boliang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Di", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "James", "middle": [], "last": "Mayfield", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Mcnamee", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL 2018, System Demonstrations", "volume": "", "issue": "", "pages": "1--6", "other_ids": { "DOI": [ "10.18653/v1/P18-4001" ] }, "num": null, "urls": [], "raw_text": "Ying Lin, Cash Costello, Boliang Zhang, Di Lu, Heng Ji, James Mayfield, and Paul McNamee. 2018. Platforms for non-speakers annotating names in any language. In Proceedings of ACL 2018, System Demonstrations, pages 1-6, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1064--1074", "other_ids": { "DOI": [ "10.18653/v1/P16-1101" ] }, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs- CRF. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "TALEN: Tool for Annotation of Low-resource ENtities", "authors": [ { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" } ], "year": 2018, "venue": "ACL Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Mayhew. 2018. TALEN: Tool for Annotation of Low-resource ENtities. In ACL Demonstrations.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Named Entity Recognition with Partially Annotated Training Data", "authors": [ { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Snigdha", "middle": [], "last": "Chaturvedi", "suffix": "" }, { "first": "Chen-Tse", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "Proc. of the Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Mayhew, Snigdha Chaturvedi, Chen-Tse Tsai, and Dan Roth. 2019a. Named Entity Recognition with Partially Annotated Training Data. In Proc. of the Conference on Computational Natural Language Learning (CoNLL).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "University of Illinois LoReHLT17 Submission", "authors": [ { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Chase", "middle": [], "last": "Duncan", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Sammons", "suffix": "" }, { "first": "Chen-Tse", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Haojie", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Zou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Mayhew, Chase Duncan, Mark Sammons, Chen-Tse Tsai, Dan Roth, Xin Li, Haojie Pan, Sheng Zhou, Jennifer Zou, and Yangqiu Song. 2017. University of Illinois LoReHLT17 Submission. 
Technical report.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "University of Pennsylvania LoReHLT", "authors": [ { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Tatiana", "middle": [], "last": "Tsygankova", "suffix": "" }, { "first": "Francesca", "middle": [], "last": "Marini", "suffix": "" }, { "first": "Zihan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jane", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xingyu", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Weijia", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Zian", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "K", "middle": [], "last": "Karthikeyan", "suffix": "" }, { "first": "Jamaal", "middle": [], "last": "Hay", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Shur", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Sheffield", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Mayhew, Tatiana Tsygankova, Francesca Marini, Zihan Wang, Jane Lee, Xiaodong Yu, Xingyu Fu, Weijia Shi, Zian Zhao, Wenpeng Yin, Karthikeyan K, Jamaal Hay, Michael Shur, Jennifer Sheffield, and Dan Roth. 2019b. University of Pennsylvania LoReHLT 2019 Submission. Technical report.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "University of Pennsylvania LoReHLT", "authors": [ { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Shyam", "middle": [], "last": "Upadhyay", "suffix": "" }, { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Huo", "suffix": "" }, { "first": "Devanshu", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Prasanna", "middle": [], "last": "Poudyal", "suffix": "" }, { "first": "Tatiana", "middle": [], "last": "Tsygankova", "suffix": "" }, { "first": "Yihao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Chase", "middle": [], "last": "Duncan", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Sammons", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Sheffield", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Mayhew, Shyam Upadhyay, Wenpeng Yin, Lucia Huo, Devanshu Jain, Prasanna Poudyal, Tatiana Tsygankova, Yihao Chen, Xin Li, Nitish Gupta, Chase Duncan, Mark Sammons, Jennifer Sheffield, and Dan Roth. 2018. University of Pennsylvania LoReHLT 2018 Submission. 
Technical report.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Distant supervision from disparate sources for lowresource part-of-speech tagging", "authors": [ { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" }, { "first": "\u017deljko", "middle": [], "last": "Agi\u0107", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "614--620", "other_ids": { "DOI": [ "10.18653/v1/D18-1061" ] }, "num": null, "urls": [], "raw_text": "Barbara Plank and \u017deljko Agi\u0107. 2018. Distant supervision from disparate sources for low- resource part-of-speech tagging. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 614-620, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Cross-lingual syntactic transfer with limited resources", "authors": [ { "first": "Mohammad", "middle": [], "last": "Sadegh", "suffix": "" }, { "first": "Rasooli", "middle": [], "last": "", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "279--293", "other_ids": { "DOI": [ "10.1162/tacl_a_00061" ] }, "num": null, "urls": [], "raw_text": "Mohammad Sadegh Rasooli and Michael Collins. 2017. Cross-lingual syntactic transfer with limited resources. Transactions of the Association for Computational Linguistics, 5:279-293.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Design Challenges and Misconceptions in Named Entity Recognition", "authors": [ { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2009, "venue": "Proc. of the Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Ratinov and Dan Roth. 2009. Design Challenges and Misconceptions in Named Entity Recognition. In Proc. of the Conference on Computational Natural Language Learning (CoNLL).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "338--348", "other_ids": { "DOI": [ "10.18653/v1/D17-1035" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338-348, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Active learning literature survey", "authors": [ { "first": "Burr", "middle": [], "last": "Settles", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Burr Settles. 2009. Active learning literature survey. 
Technical report, University of Wisconsin-Madison Department of Computer Sciences.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "LORELEI language packs: Data, tools, and resources for technology development in low resource languages", "authors": [ { "first": "Stephanie", "middle": [], "last": "Strassel", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Tracey", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)", "volume": "", "issue": "", "pages": "3273--3280", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephanie Strassel and Jennifer Tracey. 2016. LORELEI language packs: Data, tools, and resources for technology development in low resource languages. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 3273-3280, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", "authors": [ { "first": "Erik", "middle": [ "F" ], "last": "Tjong", "suffix": "" }, { "first": "Kim", "middle": [], "last": "Sang", "suffix": "" }, { "first": "Fien", "middle": [], "last": "De Meulder", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003", "volume": "", "issue": "", "pages": "142--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Corpus building for low resource languages in the DARPA LORELEI program", "authors": [ { "first": "Jennifer", "middle": [], "last": "Tracey", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Strassel", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Bies", "suffix": "" }, { "first": "Zhiyi", "middle": [], "last": "Song", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Arrigo", "suffix": "" }, { "first": "Kira", "middle": [], "last": "Griffitt", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Delgado", "suffix": "" }, { "first": "Dave", "middle": [], "last": "Graff", "suffix": "" }, { "first": "Seth", "middle": [], "last": "Kulick", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Mott", "suffix": "" }, { "first": "Neil", "middle": [], "last": "Kuster", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages", "volume": "", "issue": "", "pages": "48--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jennifer Tracey, Stephanie Strassel, Ann Bies, Zhiyi Song, Michael Arrigo, Kira Griffitt, Dana Delgado, Dave Graff, Seth Kulick, Justin Mott, and Neil Kuster. 2019. Corpus building for low resource languages in the DARPA LORELEI program. In Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages, pages 48-55, Dublin, Ireland. 
European Association for Machine Translation.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Cross-Lingual Named Entity Recognition via Wikification", "authors": [ { "first": "Chen-Tse", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Mayhew", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2016, "venue": "Proc. of the Conference on Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen-Tse Tsai, Stephen Mayhew, and Dan Roth. 2016. Cross-Lingual Named Entity Recognition via Wikification. In Proc. of the Conference on Computational Natural Language Learning (CoNLL).", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT", "authors": [ { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "833--844", "other_ids": { "DOI": [ "10.18653/v1/D19-1077" ] }, "num": null, "urls": [], "raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Are all languages created equal in multilingual bert?", "authors": [ { "first": "Shijie", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 5th Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual bert? In Proceedings of the 5th Workshop on Representation Learning for NLP. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Generalized data augmentation for low-resource translation", "authors": [ { "first": "Mengzhou", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Antonios", "middle": [], "last": "Anastasopoulos", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5786--5796", "other_ids": { "DOI": [ "10.18653/v1/P19-1579" ] }, "num": null, "urls": [], "raw_text": "Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, and Graham Neubig. 2019. Generalized data augmentation for low-resource translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5786-5796, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Neural crosslingual named entity recognition with minimal resources", "authors": [ { "first": "Jiateng", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "369--379", "other_ids": { "DOI": [ "10.18653/v1/D18-1034" ] }, "num": null, "urls": [], "raw_text": "Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, and Jaime Carbonell. 2018. Neural cross- lingual named entity recognition with minimal resources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 369-379, Brussels, Belgium. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "An overview of the data selection process involved in training models on the FS (fluent speaker) and NS (non-speaker) annotations. In each document set, the stars refer to annotators with the higher English exercise score, whose data is used in training. Details on model performance for each language are shown inFigure 3", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "", "type_str": "figure", "uris": null }, "FIGREF2": { "num": null, "text": "", "type_str": "figure", "uris": null }, "FIGREF3": { "num": null, "text": "Comparison of models trained on fluent speaker (FS) and non-speaker (NS) annotations to English crosslingual models, showing comparable or improved performance across all languages. Error bars show one standard deviation calculated over five trials. CBL refers to Constrained Binary Learning. The Eng+NS model is trained on the concatenation of English and NS data. The dashed lines refer to the performance of models trained on the gold annotated training set.", "type_str": "figure", "uris": null }, "TABREF2": { "html": null, "text": "Factors contributing to language difficulty, with examples of the English word \"America.\"", "num": null, "type_str": "table", "content": "" }, "TABREF4": { "html": null, "text": "", "num": null, "type_str": "table", "content": "
" }, "TABREF5": { "html": null, "text": "the quiz, participants were asked to annotate English LORELEI data for 20 minutes. The", "num": null, "type_str": "table", "content": "
Language   FS   NS
Indonesian 19K  38K
Russian    28K  38K
Hindi      18K  45K
" }, "TABREF6": { "html": null, "text": "Size of datasets produced by fluent speaker (FS) and non-speaker (NS) annotators, in tokens.", "num": null, "type_str": "table", "content": "" }, "TABREF9": { "html": null, "text": "", "num": null, "type_str": "table", "content": "
: Changes in mean annotation quality of non-speaker (NS) annotations over time show an upwards trajectory that steepens with language difficulty.
" } } } }