{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:52:47.685572Z" }, "title": "Sta n z a : A Python Natural Language Processing Toolkit for Many Human Languages", "authors": [ { "first": "Peng", "middle": [], "last": "Qi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University Stanford", "location": { "postCode": "94305", "region": "CA" } }, "email": "pengqi@stanford.edu" }, { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University Stanford", "location": { "postCode": "94305", "region": "CA" } }, "email": "yuhaozhang@stanford.edu" }, { "first": "Yuhui", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University Stanford", "location": { "postCode": "94305", "region": "CA" } }, "email": "yuhuiz@stanford.edu" }, { "first": "Jason", "middle": [], "last": "Bolton", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University Stanford", "location": { "postCode": "94305", "region": "CA" } }, "email": "jebolton@stanford.edu" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University Stanford", "location": { "postCode": "94305", "region": "CA" } }, "email": "manning@stanford.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce Sta n z a , an open-source Python natural language processing toolkit supporting 66 human languages. Compared to existing widely used toolkits, Sta n z a features a language-agnostic fully neural pipeline for text analysis, including tokenization, multiword token expansion, lemmatization, part-ofspeech and morphological feature tagging, dependency parsing, and named entity recognition. 
We have trained Stanza on a total of 112 datasets, including the Universal Dependencies treebanks and other multilingual corpora, and show that the same neural architecture generalizes well and achieves competitive performance on all languages tested. Additionally, Stanza includes a native Python interface to the widely used Java Stanford CoreNLP software, which further extends its functionality to cover other tasks such as coreference resolution and relation extraction. Source code, documentation, and pretrained models for 66 languages are available at https://stanfordnlp.github.io/stanza/. * Equal contribution. Order decided by a tossed coin.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We introduce Stanza, an open-source Python natural language processing toolkit supporting 66 human languages. Compared to existing widely used toolkits, Stanza features a language-agnostic fully neural pipeline for text analysis, including tokenization, multi-word token expansion, lemmatization, part-of-speech and morphological feature tagging, dependency parsing, and named entity recognition. We have trained Stanza on a total of 112 datasets, including the Universal Dependencies treebanks and other multilingual corpora, and show that the same neural architecture generalizes well and achieves competitive performance on all languages tested. Additionally, Stanza includes a native Python interface to the widely used Java Stanford CoreNLP software, which further extends its functionality to cover other tasks such as coreference resolution and relation extraction. Source code, documentation, and pretrained models for 66 languages are available at https://stanfordnlp.github.io/stanza/. * Equal contribution. 
Order decided by a tossed coin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The growing availability of open-source natural language processing (NLP) toolkits has made it easier for users to build tools with sophisticated linguistic processing. While existing NLP toolkits such as CoreNLP (Manning et al., 2014) , FLAIR (Akbik et al., 2019) , spaCy 1 , and UDPipe (Straka, 2018) have had wide usage, they also suffer from several limitations. First, existing toolkits often support only a few major languages. This has significantly limited the community's ability to process multilingual text. Second, widely used tools are sometimes under-optimized for accuracy either due to a focus on efficiency (e.g., spaCy) or use of less powerful models (e.g., CoreNLP), potentially misleading downstream applications and insights obtained from them. Third, some tools assume input text has been tokenized or annotated with other tools, lacking the ability to process raw text within a unified framework. This has limited their wide applicability to text from diverse sources.", "cite_spans": [ { "start": 213, "end": 235, "text": "(Manning et al., 2014)", "ref_id": "BIBREF7" }, { "start": 244, "end": 264, "text": "(Akbik et al., 2019)", "ref_id": "BIBREF0" }, { "start": 288, "end": 302, "text": "(Straka, 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We introduce Stanza 2 , a Python natural language processing toolkit supporting many human languages. As shown in Table 1 , compared to existing widely-used NLP toolkits, Stanza has the following advantages:", "cite_spans": [], "ref_spans": [ { "start": 117, "end": 124, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 From raw text to annotations. 
Stanza features a fully neural pipeline which takes raw text as input, and produces annotations including tokenization, multi-word token expansion, lemmatization, part-of-speech and morphological feature tagging, dependency parsing, and named entity recognition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Multilinguality. Stanza's architectural design is language-agnostic and data-driven, which allows us to release models supporting 66 languages, by training the pipeline on the Universal Dependencies (UD) treebanks and other multilingual corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 State-of-the-art performance. We evaluate Stanza on a total of 112 datasets, and find its neural pipeline adapts well to text of different genres, achieving state-of-the-art or competitive performance at each step of the pipeline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Additionally, Stanza features a Python interface to the widely used Java CoreNLP package, allowing access to additional tools such as coreference resolution and relation extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Stanza is fully open source and we make pretrained models for all supported languages and datasets available for public download. We hope Stanza can facilitate multilingual NLP research and applications, and drive future research that produces insights from human languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "At the top level, Stanza consists of two individual components: (1) a fully neural multilingual NLP pipeline; (2) a Python client interface to the Java Stanford CoreNLP software. 
In this section we introduce their designs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Design and Architecture", "sec_num": "2" }, { "text": "Stanza's neural pipeline consists of models that range from tokenizing raw text to performing syntactic analysis on entire sentences (see Figure 1 ). All components are designed with processing many human languages in mind, with high-level design choices capturing common phenomena in many languages and data-driven models that learn the difference between these languages from data. Moreover, the implementation of Stanza components is highly modular, and reuses basic model architectures when possible for compactness. We highlight the important design choices here, and refer the reader to Qi et al. (2018) for modeling details. (Figure 2 caption: The des in the first sentence corresponds to two syntactic words, de and les; the second des is a single word.)", "cite_spans": [ { "start": 600, "end": 616, "text": "Qi et al. (2018)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 142, "end": 150, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Neural Multilingual NLP Pipeline", "sec_num": "2.1" }, { "text": "Tokenization and Sentence Splitting. When presented raw text, Stanza tokenizes it and groups tokens into sentences as the first step of processing. Unlike most existing toolkits, Stanza combines tokenization and sentence segmentation from raw text into a single module. This is modeled as a tagging problem over character sequences, where the model predicts whether a given character is the end of a token, end of a sentence, or end of a multi-word token (MWT, see Figure 2 ). 
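As an illustration of this formulation (not Stanza's actual implementation), the sketch below shows how gold token and sentence boundaries translate into the per-character label scheme just described; the numeric label values and the toy example are our own assumptions.

```python
# Illustrative sketch of the per-character tagging scheme used for joint
# tokenization and sentence splitting. Given raw text plus gold end-of-token
# and end-of-sentence character offsets, emit one label per character:
#   0 = inside a unit, 1 = end of a token, 2 = end of a sentence.
# Stanza's real tokenizer *predicts* such labels with a neural model;
# this function only constructs the target labels for a toy example.

def char_labels(text, token_ends, sentence_ends):
    labels = []
    for i in range(len(text)):
        if i in sentence_ends:
            labels.append(2)  # a sentence end is also a token end
        elif i in token_ends:
            labels.append(1)
        else:
            labels.append(0)
    return labels

text = "Hi there."
# token ends at indices 1 ('i'), 7 ('e'), 8 ('.'); the '.' also ends the sentence
labels = char_labels(text, token_ends={1, 7, 8}, sentence_ends={8})
```

A third label class for MWT-final characters would be added in the same way for languages with multi-word tokens.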
3 We choose to predict MWTs jointly with tokenization because this task is context-sensitive in some languages.", "cite_spans": [ { "start": 483, "end": 484, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 471, "end": 479, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Neural Multilingual NLP Pipeline", "sec_num": "2.1" }, { "text": "Multi-word Token Expansion. Once MWTs are identified by the tokenizer, they are expanded into the underlying syntactic words as the basis of downstream processing. This is achieved with an ensemble of a frequency lexicon and a neural sequence-to-sequence (seq2seq) model, to ensure that frequently observed expansions in the training set are always robustly expanded while maintaining flexibility to model unseen words statistically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Multilingual NLP Pipeline", "sec_num": "2.1" }, { "text": "POS and Morphological Feature Tagging. For each word in a sentence, Stanza assigns it a part-of-speech (POS), and analyzes its universal morphological features (UFeats, e.g., singular/plural, 1st/2nd/3rd person, etc.). To predict POS and UFeats, we adopt a bidirectional long short-term memory network (Bi-LSTM) as the basic architecture. For consistency among universal POS (UPOS), treebank-specific POS (XPOS), and UFeats, we adopt the biaffine scoring mechanism from Dozat and Manning (2017) to condition XPOS and UFeats prediction on that of UPOS.", "cite_spans": [ { "start": 477, "end": 501, "text": "Dozat and Manning (2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Multilingual NLP Pipeline", "sec_num": "2.1" }, { "text": "Lemmatization. Stanza also lemmatizes each word in a sentence to recover its canonical form (e.g., did\u2192do). Similar to the multi-word token expander, Stanza's lemmatizer is implemented as an ensemble of a dictionary-based lemmatizer and a neural seq2seq lemmatizer. 
An additional classifier is built on the encoder output of the seq2seq model, to predict shortcuts such as lowercasing and identity copy for robustness on long input sequences such as URLs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Multilingual NLP Pipeline", "sec_num": "2.1" }, { "text": "Dependency Parsing. Stanza parses each sentence for its syntactic structure, where each word in the sentence is assigned a syntactic head that is either another word in the sentence, or in the case of the root word, an artificial root symbol. We implement a Bi-LSTM-based deep biaffine neural dependency parser (Dozat and Manning, 2017) . We further augment this model with two linguistically motivated features: one that predicts the linearization order of two words in a given language, and the other that predicts the typical distance in linear order between them. We have previously shown that these features significantly improve parsing accuracy (Qi et al., 2018) .", "cite_spans": [ { "start": 314, "end": 339, "text": "(Dozat and Manning, 2017)", "ref_id": "BIBREF6" }, { "start": 655, "end": 672, "text": "(Qi et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Multilingual NLP Pipeline", "sec_num": "2.1" }, { "text": "Named Entity Recognition. For each input sentence, Stanza also recognizes named entities in it (e.g., person names, organizations, etc.). For NER we adopt the contextualized string representation-based sequence tagger from Akbik et al. (2018) . We first train a forward and a backward character-level LSTM language model, and at tagging time we concatenate the representations at the end of each word position from both language models with word embeddings, and feed the result into a standard one-layer Bi-LSTM sequence tagger with a conditional random field (CRF)-based decoder.", "cite_spans": [ { "start": 225, "end": 244, "text": "Akbik et al. 
(2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Multilingual NLP Pipeline", "sec_num": "2.1" }, { "text": "Stanford's Java CoreNLP software provides a comprehensive set of NLP tools especially for the English language. However, these tools are not easily accessible with Python, the programming language of choice for many NLP practitioners, due to the lack of official support. To facilitate the use of CoreNLP from Python, we take advantage of the existing server interface in CoreNLP, and implement a robust client as its Python interface.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CoreNLP Client", "sec_num": "2.2" }, { "text": "When the CoreNLP client is instantiated, Sta n z a will automatically start the CoreNLP server as a local process. The client then communicates with the server through its RESTful APIs, after which annotations are transmitted in Protocol Buffers, and converted back to native Python objects. Users can also specify JSON or XML as annotation format. To ensure robustness, while the client is being used, Sta n z a periodically checks the health of the server, and restarts it if necessary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CoreNLP Client", "sec_num": "2.2" }, { "text": "Sta n z a 's user interface is designed to allow quick out-of-the-box processing of multilingual text. To achieve this, Sta n z a supports automated model download via Python code and pipeline customization with processors of choice. Annotation results can be accessed as native Python objects to allow for flexible post-processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Usage", "sec_num": "3" }, { "text": "Sta n z a 's neural NLP pipeline can be initialized with the Pipeline class, taking language name as an argument. 
By default, all processors will be loaded and run over the input text; however, users can also specify the processors to load and run with a list of processor names as an argument. Users can additionally specify other processor-level properties, such as batch sizes used by processors, at initialization time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Pipeline Interface", "sec_num": "3.1" }, { "text": "The following code snippet shows a minimal usage of Stanza for downloading the Chinese model, annotating a sentence with customized processors, and printing out all annotations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Pipeline Interface", "sec_num": "3.1" }, { "text": "import stanza # download Chinese model stanza.download('zh') # initialize Chinese neural pipeline nlp = stanza.Pipeline('zh', processors='tokenize,pos,ner') # run annotation over a sentence doc = nlp('\u65af\u5766\u798f\u662f\u4e00\u6240\u79c1\u7acb\u7814\u7a76\u578b\u5927\u5b66\u3002') print (doc) After all processors are run, a Document instance will be returned, which stores all annotation results. Within a Document, annotations are further stored in Sentences, Tokens and Words in a top-down fashion (Figure 1 ). 
The following code snippet demonstrates how to access the text and POS tag of each word in a document and all named entities in the document: # print the text and POS of all words for sentence in doc.sentences:", "cite_spans": [ { "start": 225, "end": 230, "text": "(doc)", "ref_id": null } ], "ref_spans": [ { "start": 439, "end": 448, "text": "(Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Neural Pipeline Interface", "sec_num": "3.1" }, { "text": "for word in sentence.words: print(word.text, word.pos) # print all entities in the document print(doc.entities)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Pipeline Interface", "sec_num": "3.1" }, { "text": "Stanza is designed to be run on different hardware devices. By default, CUDA devices will be used whenever they are visible to the pipeline, or otherwise CPUs will be used. However, users can force all computation to be run on CPUs by setting use_gpu=False at initialization time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Pipeline Interface", "sec_num": "3.1" }, { "text": "The CoreNLP client interface is designed in a way that the actual communication with the backend CoreNLP server is transparent to the user. To annotate an input text with the CoreNLP client, a CoreNLPClient instance needs to be initialized, with an optional list of CoreNLP annotators. 
After the annotation is complete, results will be accessible as native Python objects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CoreNLP Client Interface", "sec_num": "3.2" }, { "text": "This code snippet shows how to establish a CoreNLP client and obtain the NER and coreference annotations of an English sentence: With the client interface, users can annotate text in 6 languages as supported by CoreNLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CoreNLP Client Interface", "sec_num": "3.2" }, { "text": "To help visualize documents and their annotations generated by Stanza, we build an interactive web demo that runs the pipeline interactively. For all languages and all annotations Stanza provides in those languages, we generate predictions from the models trained on the largest treebank/NER dataset, and visualize the result with the Brat rapid annotation tool. 4 This demo runs in a client/server architecture, and annotation is performed on the server side. We make one instance of this demo publicly available at http://stanza.run/. It can also be run locally with proper Python libraries installed. An example of running Stanza on a German sentence can be found in Figure 3 .", "cite_spans": [ { "start": 370, "end": 371, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 680, "end": 688, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Interactive Web-based Demo", "sec_num": "3.3" }, { "text": "For all neural processors, Stanza provides command-line interfaces for users to train their own customized models. To do this, users need to prepare the training and development data in compatible formats (i.e., CoNLL-U format for the Universal Dependencies pipeline and BIO format column files for the NER model). 
The following command trains a neural dependency parser with user-specified training and development data: Table 2 : Neural pipeline performance comparisons on the Universal Dependencies (v2.5) test treebanks. For our system we show macro-averaged results over all 100 treebanks. We also compare our system against UDPipe and spaCy on treebanks of five major languages where the corresponding pretrained models are publicly available. All results are F1 scores produced by the 2018 UD Shared Task official evaluation script.", "cite_spans": [], "ref_spans": [ { "start": 425, "end": 432, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Training Pipeline Models", "sec_num": "3.4" }, { "text": "$ python -m", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Pipeline Models", "sec_num": "3.4" }, { "text": "the training data as development data. These treebanks represent 66 languages, mostly European languages, but spanning a diversity of language families, including Indo-European, Afro-Asiatic, Uralic, Turkic, Sino-Tibetan, etc. For NER, we train and evaluate Stanza with 12 publicly available datasets covering 8 major languages as shown in Table 3 (Nothman et al., 2013; Tjong Kim Sang and De Meulder, 2003; Tjong Kim Sang, 2002; Benikova et al., 2014; Mohit et al., 2012; Taul\u00e9 et al., 2008; Weischedel et al., 2013) . For the WikiNER corpora, as canonical splits are not available, we randomly split them into 70% training, 15% dev and 15% test splits. 
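The random 70/15/15 split described above can be sketched as follows; this is our own minimal illustration, and the random seed and exact rounding behavior are assumptions, since the authors' actual splitting script is not given.

```python
import random

def split_70_15_15(examples, seed=0):
    """Shuffle a dataset and split it into 70% train / 15% dev / 15% test.

    Illustrative sketch of the random split described for the WikiNER
    corpora; the seed and truncation-based rounding are our own choices.
    """
    rng = random.Random(seed)
    examples = list(examples)
    rng.shuffle(examples)
    n = len(examples)
    n_train = int(0.70 * n)
    n_dev = int(0.15 * n)
    train = examples[:n_train]
    dev = examples[n_train:n_train + n_dev]
    test = examples[n_train + n_dev:]  # remainder, so nothing is dropped
    return train, dev, test

train, dev, test = split_70_15_15(range(100))
```

Fixing the seed makes the split reproducible across runs, which matters when retraining baselines on the same partitions.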
For all other corpora we used their canonical splits.", "cite_spans": [ { "start": 351, "end": 373, "text": "(Nothman et al., 2013;", "ref_id": "BIBREF11" }, { "start": 374, "end": 410, "text": "Tjong Kim Sang and De Meulder, 2003;", "ref_id": "BIBREF16" }, { "start": 411, "end": 432, "text": "Tjong Kim Sang, 2002;", "ref_id": "BIBREF15" }, { "start": 433, "end": 455, "text": "Benikova et al., 2014;", "ref_id": "BIBREF3" }, { "start": 456, "end": 475, "text": "Mohit et al., 2012;", "ref_id": "BIBREF8" }, { "start": 476, "end": 495, "text": "Taul\u00e9 et al., 2008;", "ref_id": "BIBREF14" }, { "start": 496, "end": 520, "text": "Weischedel et al., 2013)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 343, "end": 350, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Training Pipeline Models", "sec_num": "3.4" }, { "text": "Training. On the Universal Dependencies treebanks, we tuned all hyper-parameters on several large treebanks and applied them to all other treebanks. We used the word2vec embeddings released as part of the 2018 UD Shared Task (Zeman et al., 2018) , or the fastText embeddings (Bojanowski et al., 2017) whenever word2vec is not available. For the character-level language models in the NER component, we pretrained them on a mix of the Common Crawl and Wikipedia dumps, and the news corpora released by the WMT19 Shared Task (Barrault et al., 2019) , except for English and Chinese, for which we pretrained on the Google One Billion Word (Chelba et al., 2013) and the Chinese Gigaword corpora 5 , respectively. 
We again applied the same hyper-parameters to models for all languages.", "cite_spans": [ { "start": 225, "end": 245, "text": "(Zeman et al., 2018)", "ref_id": "BIBREF18" }, { "start": 275, "end": 300, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF4" }, { "start": 523, "end": 546, "text": "(Barrault et al., 2019)", "ref_id": "BIBREF2" }, { "start": 636, "end": 657, "text": "(Chelba et al., 2013)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Training Pipeline Models", "sec_num": "3.4" }, { "text": "Universal Dependencies Results. For performance on UD treebanks, we compared Stanza (v1.0) against UDPipe (v1.2) and spaCy (v2.2) on treebanks of 5 major languages whenever a pretrained model is available. As shown in Table 2 , Stanza achieved the best performance on most scores reported. Notably, we find that Stanza's language-agnostic architecture is able to adapt to datasets of different languages and genres. This is also shown by Stanza's high macro-averaged scores over 100 treebanks covering 66 languages.", "cite_spans": [], "ref_spans": [ { "start": 221, "end": 228, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Training Pipeline Models", "sec_num": "3.4" }, { "text": "NER Results. For performance of the NER component, we compared Stanza (v1.0) against FLAIR (v0.4.5) and spaCy (v2.2). For spaCy we reported results from its publicly available pretrained model whenever one trained on the same dataset can be found, otherwise we retrained its model on our datasets with default hyper-parameters, following the publicly available tutorial. 6 For FLAIR, since their downloadable models were pretrained on dataset versions different from canonical ones, we retrained all models on our own dataset splits with their best reported hyper-parameters. All test results are shown in Table 3 . We find that on all datasets Stanza achieved either higher or close F1 scores when compared against FLAIR. 
When compared to spaCy, Stanza's NER performance is much better. It is worth noting that Stanza's high performance is achieved with much smaller models compared with FLAIR (up to 75% smaller), as we intentionally compressed the models for memory efficiency and ease of distribution.", "cite_spans": [], "ref_spans": [ { "start": 609, "end": 616, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Training Pipeline Models", "sec_num": "3.4" }, { "text": "Speed comparison. We compare Stanza against existing toolkits to evaluate the time it takes to annotate text (see Table 4 ). For GPU tests we use a single NVIDIA Titan RTX card. Unsurprisingly, Stanza's extensive use of accurate neural models makes it take significantly longer than spaCy to annotate text, but it is still competitive when compared against toolkits of similar accuracy, especially with the help of GPU acceleration.", "cite_spans": [], "ref_spans": [ { "start": 117, "end": 124, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Training Pipeline Models", "sec_num": "3.4" }, { "text": "We introduced Stanza, a Python natural language processing toolkit supporting many human languages. We have shown that Stanza's neural pipeline not only has wide coverage of human languages, but is also accurate on all tasks, thanks to its language-agnostic, fully neural architectural design. Simultaneously, Stanza's CoreNLP client extends its functionality with additional NLP tools. 
For future work, we consider the following areas of improvement in the near term:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "The toolkit was called StanfordNLP prior to v1.0.0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Following Universal Dependencies(Nivre et al., 2020), we make a distinction between tokens (contiguous spans of characters in the input text) and syntactic words. These are interchangeable aside from the cases of MWTs, where one token can correspond to multiple words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://brat.nlplab.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://catalog.ldc.upenn.edu/LDC2011T13 6 https://spacy.io/usage/training#ner Note that, following this public tutorial, we did not use pretrained word embeddings when training spaCy NER models, although using pretrained word embeddings may potentially improve the NER results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors would like to thank the anonymous reviewers for their comments, Arun Chaganty for his early contribution to this toolkit, Tim Dozat for his design of the original architectures of the tagger and parser models, Matthew Honnibal and Ines Montani for their help with spaCy integration and helpful comments on the draft, Ranting Guo for the logo design, and John Bauer and the community contributors for their help with maintaining and improving this toolkit. This research is funded in part by Samsung Electronics Co., Ltd. 
and in part by the SAIL-JD Research Initiative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "\u2022 Models downloadable in Stanza are largely trained on a single dataset. To make models robust to many different genres of text, we would like to investigate the possibility of pooling various sources of compatible data to train \"default\" models for each language;\u2022 The amount of computation and resources available to us is limited. We would therefore like to build an open \"model zoo\" for Stanza, so that researchers from outside our group can also contribute their models and benefit from models released by others;\u2022 Stanza was designed to optimize for accuracy of its predictions, but this sometimes comes at the cost of computational efficiency and limits the toolkit's use. We would like to further investigate reducing model sizes and speeding up computation in the toolkit, while still maintaining the same level of accuracy.\u2022 We would also like to expand Stanza's functionality by adding other processors such as neural coreference resolution or relation extraction for richer text analytics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "FLAIR: An easy-to-use framework for state-of-the-art NLP", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Tanja", "middle": [], "last": "Bergmann", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Kashif", "middle": [], "last": "Rasul", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Schweter", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations). 
Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-the-art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations). Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Contextual string embeddings for sequence labeling", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Duncan", "middle": [], "last": "Blythe", "suffix": "" }, { "first": "Roland", "middle": [], "last": "Vollgraf", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Findings of the 2019 conference on machine translation (WMT19)", "authors": [ { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Marta", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Fishel", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Shervin", "middle": [], "last": "Malmasi", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Mathias", "middle": [], "last": "M\u00fcller", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "2", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lo\u00efc Barrault, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M\u00fcller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1). 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "NoSta-D named entity annotation for German: Guidelines and dataset", "authors": [ { "first": "Darina", "middle": [], "last": "Benikova", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Reznicek", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Darina Benikova, Chris Biemann, and Marc Reznicek. 2014. NoSta-D named entity annotation for German: Guidelines and dataset. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1162/tacl_a_00051" ] }, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information.
Transactions of the Association for Computational Linguistics, 5.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "One billion word benchmark for measuring progress in statistical language modeling", "authors": [ { "first": "Ciprian", "middle": [], "last": "Chelba", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Ge", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. Technical report, Google.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Deep biaffine attention for neural dependency parsing", "authors": [ { "first": "Timothy", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing.
In International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "McClosky", "suffix": "" } ], "year": 2014, "venue": "Association for Computational Linguistics (ACL) System Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Recall-oriented learning of named entities in Arabic Wikipedia", "authors": [ { "first": "Behrang", "middle": [], "last": "Mohit", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Schneider", "suffix": "" }, { "first": "Rishav", "middle": [], "last": "Bhowmick", "suffix": "" }, { "first": "Kemal", "middle": [], "last": "Oflazer", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Behrang Mohit, Nathan Schneider, Rishav Bhowmick, Kemal Oflazer, and Noah A Smith. 2012. Recall-oriented learning of named entities in Arabic Wikipedia.
In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Universal dependencies v2: An evergrowing multilingual treebank collection", "authors": [ { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC'20)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Zeman. 2020. Universal dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC'20).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning multilingual named entity recognition from Wikipedia", "authors": [ { "first": "Joel", "middle": [], "last": "Nothman", "suffix": "" }, { "first": "Nicky", "middle": [], "last": "Ringland", "suffix": "" }, { "first": "Will", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Tara", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "James R", "middle": [], "last": "Curran", "suffix": "" } ], "year": 2013, "venue": "Artificial Intelligence", "volume": "194", "issue": "", "pages": "151--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R Curran. 2013. Learning multilingual named entity recognition from Wikipedia.
Artificial Intelligence, 194:151-175.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Universal dependency parsing from scratch", "authors": [ { "first": "Peng", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Dozat", "suffix": "" }, { "first": "Yuhao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Qi, Timothy Dozat, Yuhao Zhang, and Christopher D. Manning. 2018. Universal dependency parsing from scratch. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "UDPipe 2.0 prototype at CoNLL 2018 UD shared task", "authors": [ { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Milan Straka. 2018. UDPipe 2.0 prototype at CoNLL 2018 UD shared task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "AnCora: Multilevel annotated corpora for Catalan and Spanish", "authors": [ { "first": "Mariona", "middle": [], "last": "Taul\u00e9", "suffix": "" }, { "first": "M", "middle": [ "Ant\u00f2nia" ], "last": "Mart\u00ed", "suffix": "" }, { "first": "Marta", "middle": [], "last": "Recasens", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08). European Language Resources Association (ELRA)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mariona Taul\u00e9, M. Ant\u00f2nia Mart\u00ed, and Marta Recasens. 2008. AnCora: Multilevel annotated corpora for Catalan and Spanish. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08). European Language Resources Association (ELRA).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition", "authors": [ { "first": "Erik", "middle": [ "F" ], "last": "Tjong Kim Sang", "suffix": "" } ], "year": 2002, "venue": "COLING-02: The 6th Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition.
In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", "authors": [ { "first": "Erik", "middle": [ "F" ], "last": "Tjong Kim Sang", "suffix": "" }, { "first": "Fien", "middle": [], "last": "De Meulder", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "OntoNotes release 5.0. Linguistic Data Consortium", "authors": [ { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Kaufman", "suffix": "" }, { "first": "Michelle", "middle": [], "last": "Franchini", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013.
OntoNotes release 5.0. Linguistic Data Consortium.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "CoNLL 2018 shared task: Multilingual parsing from raw text to universal dependencies", "authors": [ { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Popel", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Zeman, Jan Haji\u010d, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multilingual parsing from raw text to universal dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.
Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee", "authors": [ { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Abrams", "suffix": "" }, { "first": "No\u00ebmi", "middle": [], "last": "Aepli", "suffix": "" }, { "first": "\u017deljko", "middle": [], "last": "Agi\u0107", "suffix": "" }, { "first": "Lars", "middle": [], "last": "Ahrenberg", "suffix": "" }, { "first": "Gabriel\u0117", "middle": [], "last": "Aleksandravi\u010di\u016bt\u0117", "suffix": "" }, { "first": "Lene", "middle": [], "last": "Antonsen", "suffix": "" }, { "first": "Katya", "middle": [], "last": "Aplonova", "suffix": "" }, { "first": "Maria", "middle": [ "Jesus" ], "last": "Aranzabe", "suffix": "" }, { "first": "Gashaw", "middle": [], "last": "Arutie", "suffix": "" }, { "first": "Masayuki", "middle": [], "last": "Asahara", "suffix": "" }, { "first": "Luma", "middle": [], "last": "Ateyah", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Zeman, Joakim Nivre, Mitchell Abrams, No\u00ebmi Aepli, \u017deljko Agi\u0107, Lars Ahrenberg, Gabriel\u0117 Aleksandravi\u010di\u016bt\u0117, Lene Antonsen, Katya Aplonova, Maria Jesus Aranzabe, Gashaw Arutie, Masayuki Asahara, Luma Ateyah, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, Victoria Basmov, Colin", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Lilja \u00d8vrelid, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Guilherme Paulino-Passos, Angelika Peljak-\u0141api\u0144ska, Siyao Peng, Cenel-Augusto Perez, Guy Perrier, Daria Petrova",
"authors": [], "year": null, "venue": "Faculty of Mathematics and Physics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Batchelor, John Bauer, Sandra Bellato, Kepa Bengoetxea, Yevgeni Berzak, Irshad Ahmad Bhat, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Agn\u0117 Bielinskien\u0117, Rogier Blokland, Victoria Bobicev, Lo\u00efc Boizou, Emanuel Borges
V\u00f6lker, Carl B\u00f6rstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Kristina Brokait\u0117, Aljoscha Burchardt, Marie Candito, Bernard Caron, Gauthier Caron, Tatiana Cavalcanti, G\u00fcl\u015fen Cebiroglu Eryigit, Flavio Massimiliano Cecchini, Giuseppe G. A. Celano, Slavom\u00edr \u010c\u00e9pl\u00f6, Savas Cetin, Fabricio Chalub, Jinho Choi, Yongseok Cho, Jayeol Chun, Alessandra T. Cignarella, Silvie Cinkov\u00e1, Aur\u00e9lie Collomb, \u00c7agr\u0131 \u00c7\u00f6ltekin, Miriam Connor, Marine Courtin, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Elvis de Souza, Arantza Diaz de Ilarraza, Carly Dickerson, Bamba Dione, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Hanne Eckhoff, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Olga Erina, Toma\u017e Erjavec, Aline Etienne, Wograine Evelyn, Rich\u00e1rd Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cl\u00e1udia Freitas, Kazunori Fujita, Katar\u00edna Gajdo\u0161ov\u00e1, Daniel Galbraith, Marcos Garcia, Moa G\u00e4rdenfors, Sebastian Garza, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh G\u00f6k\u0131rmak, Yoav Goldberg, Xavier G\u00f3mez Guinovart, Berta Gonz\u00e1lez Saavedra, Bernadeta Grici\u016bt\u0117, Matias Grioni, Normunds Gr\u016bz\u012btis, Bruno Guillaume, C\u00e9line Guillot-Barbance, Nizar Habash, Jan Haji\u010d, Jan Haji\u010d jr., Mika H\u00e4m\u00e4l\u00e4inen, Linh H\u00e0 M\u1ef9, Na-Rae Han, Kim Harris, Dag Haug, Johannes Heinecke, Felix Hennig, Barbora Hladk\u00e1, Jaroslava Hlav\u00e1\u010dov\u00e1, Florinel Hociung, Petter Hohle, Jena Hwang, Takumi Ikeda, Radu Ion, Elena Irimia, O . l\u00e1j\u00edd\u00e9 Ishola, Tom\u00e1\u0161 Jel\u00ednek, Anders Johannsen, Fredrik J\u00f8rgensen, Markus Juutinen, H\u00fcner Ka\u015f\u0131kara, Andre Kaasen, Nadezhda Kabaeva, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Boris Katz, Tolga Kayadelen, Jessica Kenney, V\u00e1clava Kettnerov\u00e1, Jesse Kirchner, Elena Klementieva, Arne K\u00f6hn, Kamil Kopacewicz, Natalia Kotsyba, Jolanta Kovalevskait\u0117, Simon Krek, Sookyoung Kwak, Veronika Laippala, Lorenzo Lambertino, Lucia Lam, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Ph\u01b0\u01a1ng L\u00ea H\u1ed3ng, Alessandro Lenci, Saran Lertpradit, Herman Leung, Cheuk Ying Li, Josie Li, Keying Li, KyungTae Lim, Maria Liovina, Yuan Li, Nikola Ljube\u0161i\u0107, Olga Loginova, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, C\u0203t\u0203lina M\u0203r\u0203nduc, David Mare\u010dek, Katrin Marheinecke, H\u00e9ctor Mart\u00ednez Alonso, Andr\u00e9 Martins, Jan Ma\u0161ek, Yuji Matsumoto, Ryan McDonald, Sarah McGuinness, Gustavo Mendon\u00e7a, Niko Miekka, Margarita Misirpashayeva, Anna Missil\u00e4, C\u0203t\u0203lin Mititelu, Maria Mitrofan, Yusuke Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Keiko Sophie Mori, Tomohiko Morioka, Shinsuke Mori, Shigeki Moro, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Robert Munro, Yugo Murawaki, Kaili M\u00fc\u00fcrisep, Pinkey Nainwani, Juan Ignacio Navarro Hor\u00f1iacek, Anna Nedoluzhko, Gunta Ne\u0161pore-B\u0113rzkalne, L\u01b0\u01a1ng Nguy\u1ec5n Thi . , Huy\u1ec1n Nguy\u1ec5n Thi . Minh, Yoshihiro Nikaido, Vitaly Nikolaev, Rattima Nitisaroj, Hanna Nurmi, Stina Ojala, Atul Kr. Ojha, Ad\u00e9dayo . Ol\u00fa\u00f2kun, Mai Omura, Petya Osenova, Robert \u00d6stling, Lilja \u00d8vrelid, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Guilherme Paulino-Passos, Angelika Peljak-\u0141api\u0144ska, Siyao Peng, Cenel-Augusto Perez, Guy Perrier, Daria Petrova, Slav Petrov, Jason Phelan, Jussi Piitulainen, Tommi A Pirinen, Emily Pitler, Barbara Plank, Thierry Poibeau, Larisa Ponomareva, Martin Popel, Lauma Pretkalni\u0146a, Sophie Pr\u00e9vost, Prokopis Prokopidis, Adam Przepi\u00f3rkowski, Tiina Puolakainen, Sampo Pyysalo, Peng Qi, Andriela R\u00e4\u00e4bis, Alexandre Rademaker, Loganathan Ramasamy, Taraka Rama, Carlos Ramisch, Vinit Ravishankar, Livy Real, Siva Reddy, Georg Rehm, Ivan Riabov, Michael Rie\u00dfler, Erika Rimkut\u0117, Larissa Rinaldi, Laura Rituma, Luisa Rocha, Mykhailo Romanenko, Rudolf Rosa, Davide Rovati, Valentin Rosca, Olga Rudina, Jack Rueter, Shoval Sadde, Beno\u00eet Sagot, Shadi Saleh, Alessio Salomoni, Tanja Samard\u017ei\u0107, Stephanie Samson, Manuela Sanguinetti, Dage S\u00e4rg, Baiba Saul\u012bte, Yanin Sawanakunanon, Nathan Schneider, Sebastian Schuster, Djam\u00e9 Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Hiroyuki Shirasu, Muh Shohibussirri, Dmitry Sichinava, Aline Silveira, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simk\u00f3, M\u00e1ria \u0160imkov\u00e1, Kiril Simov, Aaron Smith, Isabela Soares-Bastos, Carolyn Spadine, Antonio Stella, Milan Straka, Jana Strnadov\u00e1, Alane Suhr, Umut Sulubacak, Shingo Suzuki, Zsolt Sz\u00e1nt\u00f3, Dima Taji, Yuta Takahashi, Fabio Tamburini, Takaaki Tanaka, Isabelle Tellier, Guillaume Thomas, Liisi Torga, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Zde\u0148ka Ure\u0161ov\u00e1, Larraitz Uria, Hans Uszkoreit, Andrius Utka, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Eric Villemonte de la Clergerie, Veronika Vincze, Lars Wallin, Abigail Walsh, Jing Xian Wang,
Jonathan North Washington, Maximilan Wendt, Seyi Williams, Mats Wirén, Christian Wittern, Tsegay Woldemariam, Tak-sum Wong, Alina Wróblewska, Mary Yako, Naoki Yamazaki, Chunxiao Yan, Koichi Yasuoka, Marat M. Yavrumyan, Zhuoran Yu, Zdeněk Žabokrtský, Amir Zeldes, Manying Zhang, and Hanzhi Zhu. 2019. Universal Dependencies 2.5. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Overview of Stanza's neural NLP pipeline. Stanza takes multilingual text as input, and produces annotations accessible as native Python objects. Besides this neural pipeline, Stanza also features a Python client interface to the Java CoreNLP software.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "(fr) L'Association des Hôtels (en) The Association of Hotels (fr) Il y a des hôtels en bas de la rue (en) There are hotels down the street", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "An example of multi-word tokens in French.", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": "from stanza.server import CoreNLPClient # start a CoreNLP client with CoreNLPClient(annotators=['tokenize','ssplit','pos','lemma','ner','parse','coref']) as client: # run annotation over input ann = client.annotate('Emily said that she liked the movie.') # access all entities for sent in ann.sentence: print(sent.mentions) # access coreference annotations print(ann.corefChain)", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": "Stanza annotates a German sentence, as visualized by our interactive demo.
Note that am is expanded into the syntactic words an and dem before downstream analyses are performed.", "uris": null, "num": null }, "TABREF1": { "type_str": "table", "num": null, "html": null, "content": "", "text": "Feature comparisons of Stanza against other popular natural language processing toolkits." }, "TABREF4": { "type_str": "table", "num": null, "html": null, "content": "
", "text": "NER performance across different languages and corpora. All scores reported are entity micro-averaged test F1. For each corpus we also list the number of entity types. * marks results from publicly available pretrained models on the same dataset, while others are from models retrained on our datasets." }, "TABREF6": { "type_str": "table", "num": null, "html": null, "content": "
", "text": "Annotation runtime of various toolkits relative to spaCy (CPU) on the English EWT treebank and OntoNotes NER test sets. For reference, on the compared UD and NER tasks, spaCy is able to process 8140 and 5912 tokens per second, respectively." } } } }