{
"paper_id": "D19-1034",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:01:06.800455Z"
},
"title": "A Boundary-aware Neural Model for Nested Named Entity Recognition",
"authors": [
{
"first": "Changmeng",
"middle": [],
"last": "Zheng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "South China University of Technology",
"location": {
"settlement": "Guangzhou",
"country": "China"
}
},
"email": ""
},
{
"first": "Yi",
"middle": [],
"last": "Cai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "South China University of Technology",
"location": {
"settlement": "Guangzhou",
"country": "China"
}
},
"email": "ycai@scut.edu.cn"
},
{
"first": "Jingyun",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "South China University of Technology",
"location": {
"settlement": "Guangzhou",
"country": "China"
}
},
"email": ""
},
{
"first": "Ho-Fung",
"middle": [],
"last": "Leung",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": {
"settlement": "Hong Kong SAR",
"country": "China"
}
},
"email": ""
},
{
"first": "Guandong",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Technology Sydney",
"location": {
"country": "Australia"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In natural language processing, it is common that many entities contain other entities inside them. Most existing works on named entity recognition (NER) only deal with flat entities but ignore nested ones. We propose a boundary-aware neural model for nested NER which leverages entity boundaries to predict entity categorical labels. Our model can locate entities precisely by detecting boundaries using sequence labeling models. Based on the detected boundaries, our model utilizes the boundary-relevant regions to predict entity categorical labels, which can decrease computation cost and relieve error propagation problem in layered sequence labeling model. We introduce multitask learning to capture the dependencies of entity boundaries and their categorical labels, which helps to improve the performance of identifying entities. We conduct our experiments on nested NER datasets and the experimental results demonstrate that our model outperforms other state-of-the-art methods.",
"pdf_parse": {
"paper_id": "D19-1034",
"_pdf_hash": "",
"abstract": [
{
"text": "In natural language processing, it is common that many entities contain other entities inside them. Most existing works on named entity recognition (NER) only deal with flat entities but ignore nested ones. We propose a boundary-aware neural model for nested NER which leverages entity boundaries to predict entity categorical labels. Our model can locate entities precisely by detecting boundaries using sequence labeling models. Based on the detected boundaries, our model utilizes the boundary-relevant regions to predict entity categorical labels, which can decrease computation cost and relieve error propagation problem in layered sequence labeling model. We introduce multitask learning to capture the dependencies of entity boundaries and their categorical labels, which helps to improve the performance of identifying entities. We conduct our experiments on nested NER datasets and the experimental results demonstrate that our model outperforms other state-of-the-art methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named entity recognition (NER) is a task that seeks to locate and classify named entities in unstructured texts into pre-defined categories such as person names, locations or medical codes. NER is generally treated as single-layer sequence labeling problem (Lafferty et al., 2001; Lample et al., 2016) where each token is tagged with one label. The label is composed by an entity boundary label and a categorical label. For example, a token can be tagged with B-P ER, where B indicates the boundary of an entity and P ER indicates the corresponding entity categorical label. However, when entities are nested within one another, single-layer sequence labeling models can not ex-tract both entities simultaneously. A token contained inside many entities has more than one categorical label. Consider an example in Figure 1 from GENIA corpus (Kim et al., 2003) , \"Human TR Beta 1\" is an protein and it is also a part of a DN A \"Human TR Beta 1 mRNA\". Both entities contain the same token \"Human\". Thus the token should have two different categorical labels. In that case, assigning a single categorical label for \"Human\" is improper. Figure 1 : An example of nested entities and their boundary labels. \"B\" and \"E\" indicate the beginning and end of an entity. They are the boundary labels. \"I\" and \"O\" denote tokens inside and outside entities, respectively. protein and RN A are categories of entities.",
"cite_spans": [
{
"start": 257,
"end": 280,
"text": "(Lafferty et al., 2001;",
"ref_id": "BIBREF11"
},
{
"start": 281,
"end": 301,
"text": "Lample et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 840,
"end": 858,
"text": "(Kim et al., 2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 813,
"end": 821,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1132,
"end": 1140,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Traditional methods coping with nested entities rely on hand-craft features (Shen et al., 2003; Alex et al., 2007) and suffer from heavy feature engineering. Recent studies tackle the nested NER using neural models without relying on linguistics features or external knowledge resources. Ju et al. (2018) propose a layered sequence labeling model and Sohrab and Miwa (2018) propose a exhaustive region classification model.",
"cite_spans": [
{
"start": 76,
"end": 95,
"text": "(Shen et al., 2003;",
"ref_id": "BIBREF20"
},
{
"start": 96,
"end": 114,
"text": "Alex et al., 2007)",
"ref_id": "BIBREF0"
},
{
"start": 288,
"end": 304,
"text": "Ju et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Layered sequence labeling model will first extract the inner entities (contained by other entities) and feed them into the next layer to extract outer entities. Thus, this model suffers from error propagation. When the previous layer extracts wrong entities, the performance of next layer will be affected. Moreover, when an outer entity is extracted first, the inner one will not be detected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Exhaustive region classification model enumerates all possible regions or spans in sentences to predict entities in a single layer. One issue of their method is the explicit boundary information is ignored, leading to extraction of some non-entities. We consider an example. In a sequence of tokens in GE-NIA dataset, \"novel TH protein\" is an entity and \"a novel TH protein\" is not an entity. However, since they share many tokens, the merged region representations of them are similar to each other. \"novel\" and \"protein\" are the boundary of the entity. Without the boundary information, both candidate regions are extracted as the entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite their shortcomings, layered sequence labeling model and exhaustive region classification model are complementary to each other. Therefore, we can combine them to improve the performance of nested NER. We leverage the sequence labeling model to consider the boundary information into locating entities. In the example mentioned above, \"novel\" is the boundary of the entity \"novel TH protein\", while \"a\" is a general token whose representation is different from \"novel\". With the guidance of boundary information, the model can detect \"novel\" as a boundary of the entity rather than token \"a\". We also utilize the region classification model to predict entities without considering the dependencies of inner and outer entities. In such case, Our model will not suffer error propagation problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a boundary-aware neural model that makes the fusion of sequence labeling model and region classification model. We apply a single-layer sequence labeling model to identify entity boundaries because the tokens in nested entities can share the same boundary labels. For example, as shown in Figure 1 , \"Human\" can be tagged with the label B although it is the beginning of two different entities. Based on the detected entity boundaries, we predict entity categorical labels by classifying boundary-relevant regions. As shown in Figure 1 , we match each token with label B to tokens with label E. The regions between them are considered as candidate entities. The representation of candidate entities will be utilized to classify categorical labels.",
"cite_spans": [],
"ref_spans": [
{
"start": 315,
"end": 323,
"text": "Figure 1",
"ref_id": null
},
{
"start": 553,
"end": 561,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our model is advanced than exhaustive region classification model in two ways: (1) we leverage the explicit boundary information to guide the model to locate and classify entities precisely. Exhaustive region classification model classifies entity regions individually, however, our model can consider the context information of boundary tokens with a sequence labeling model. That facilitates the detection of boundaries. (2) Our model only classifies the boundary-relevance regions which are much fewer than all possible regions. That decreases the time cost. Our model is advanced than layered sequence labeling model because we extract entities without distinguishing inner and outer entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multitask learning is considered good at optimising the overall goal via alternatively tuning 2+ objectives, which are reinforced each other (Ruder, 2017) . Considering our boundary detection module and entity categorical label prediction module share the same entity boundaries, we apply a multitask loss for training the two tasks simultaneously. The shared features of two modules are extracted by a bidirectional long shortterm memory (LSTM) layer. Extensive experiments show the framework of multitask learning improves final performance in a large margin.",
"cite_spans": [
{
"start": 141,
"end": 154,
"text": "(Ruder, 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, we make the following major contributions in this paper:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a boundary-aware neural model which leverages entity boundaries to predict categorical labels. Our model can locate entities precisely by detecting boundaries using sequence labeling models. Based on the detected boundaries, our model utilizes boundary-relevant regions to predict entity categorical labels, which can decrease computation cost and relieve error propagation problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We introduce the multitask learning to capture the dependencies of entity boundaries and their categorical labels, which helps to improve the performance of identifying entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We conduct our experiments on public nested NER datasets. The experimental results demonstrate our model outperforms previous state-of-the-art methods and our model is much faster in inference speed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "NER has drawn the attention of NLP researchers because several downstream tasks such as entity linking (Gupta et al., 2017) , relation extraction (Mintz et al., 2009; Liu et al., 2017) , co-reference resolution (Chang et al., 2013) and conversation system (Ren et al., 2019) rely on it. Several methods have been proposed on flat named entity recognition (Lample et al., 2016; Ma and Hovy, 2016; Strubell et al., 2017) while few of them address nested entities. Early work on nested entities rely on hand-craft features or rule-based postprocessing Zhou, 2006) . They detect the innermost flat entities with a Hidden Markov Model and then use rule-based post-processing to extract the outer entities.",
"cite_spans": [
{
"start": 103,
"end": 123,
"text": "(Gupta et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 146,
"end": 166,
"text": "(Mintz et al., 2009;",
"ref_id": "BIBREF16"
},
{
"start": 167,
"end": 184,
"text": "Liu et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 211,
"end": 231,
"text": "(Chang et al., 2013)",
"ref_id": "BIBREF3"
},
{
"start": 355,
"end": 376,
"text": "(Lample et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 377,
"end": 395,
"text": "Ma and Hovy, 2016;",
"ref_id": "BIBREF15"
},
{
"start": 396,
"end": 418,
"text": "Strubell et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 549,
"end": 560,
"text": "Zhou, 2006)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While most work concerns about named entities, Lu and Roth (2015) present a novel hypergraph-based method to tackle the problem of entity mention detection. One issue of their method is the spurious structure of hyper-graphs. Muis and Lu (2017) improve the method of Lu and Roth (2015) by incorporating mention separators along with features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recent studies reveal that stacking sequence model like conditional random filed(CRF) layer can extract entities from inner to outer. Alex et al. (2007) propose several CRF-based methods for the GENIA dataset. However, their approach can not recognize nested entities of the same type. Finkel and Manning (2009) present a chart-based parsing method where each named entity is a constituent in the parsing tree. However, their method is not scalable to larger corpus with a cubic time complexity. Ju et al. (2018) dynamically stack flat NER layers to extract nested entities, each flat layer is based on a Bi-LSTM layer and then a cascaded CRF layer. Their model suffers error propagation from layer to layer, an inner entity can not be detected when a outer entity is identified first.",
"cite_spans": [
{
"start": 134,
"end": 152,
"text": "Alex et al. (2007)",
"ref_id": "BIBREF0"
},
{
"start": 286,
"end": 311,
"text": "Finkel and Manning (2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "It is difficult for sequence model, like CRF, to extract nested entities where a token can be included in several entities. Wang et al. (2018) present a transition-based model for nested mention detection using a forest representation. One drawback of their model is the greedy training and decoding. Sohrab and Miwa (2018) consider all possible regions in a sentence and classify them into their entity type or non-entity. However, their exhaustive method considers too many irrelevant regions(non-entity regions) into detecting entity types and the regions are classified individually, without considering the contextual information. Our model focuses on the boundary-relevant regions which is much fewer and the explicit leveraging of boundary information helps to locate entities more precisely.",
"cite_spans": [
{
"start": 124,
"end": 142,
"text": "Wang et al. (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this paper, we propose a boundary-aware neural model which considers the boundary information into locating and classifying entities. The architecture is illustrated in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 180,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "Our model is built upon a shared bidirectional LSTM layer. It uses the outputs of LSTM layer to detect entity boundaries and predict categorical labels. We extract entity boundaries as paired tokens with label B and label E, \"B\" indicates the beginning of an entity and \"E\" means the end of an entity. We match every detected token with label B and its corresponding token with label E, the regions between them are identified as candidate entities. We represent entities using the corresponding region outputs of shared LSTM and classify them into categorical labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "The boundary detection module and entity categorical label prediction module are training simultaneously with a multitask loss function, which can capture the underlying dependencies of entity boundaries and their categorical labels. We will describe each part of our model in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "We represent each token in the sentence following the success of Ma and Hovy (2016) and Lample et al. (2016) that leverages character embedding for the flat NER task.",
"cite_spans": [
{
"start": 65,
"end": 83,
"text": "Ma and Hovy (2016)",
"ref_id": "BIBREF15"
},
{
"start": 88,
"end": 108,
"text": "Lample et al. (2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Token Representation",
"sec_num": "3.1"
},
{
"text": "For a given sentence consisting of n tokens (t 1 ,t 2 ,....t n ), we represent the word embedding of i-th token t i as equation 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Representation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x w i = e w (t i )",
"eq_num": "(1)"
}
],
"section": "Token Representation",
"sec_num": "3.1"
},
{
"text": "where e w denotes a word embedding lookup table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Representation",
"sec_num": "3.1"
},
{
"text": "We use pre-trained word embedding (Chiu et al., 2016) to initialize it. We capture the orthographic and morphological features of the word by integrating character representations. Denoting the representation of characters within t i as x c i , The embedding of each character within token t i is denoted as e c (c j ). e c is the character embedding lookup which is initialized randomly. Then we feed them into a bidirectional LSTM layer to learn hidden states. The forward and backward outputs are concatenated to construct character representations:",
"cite_spans": [
{
"start": 34,
"end": 53,
"text": "(Chiu et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Token Representation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x c i = [ \u2212 \u2192 h c i ; \u2190 \u2212 h c i ]",
"eq_num": "(2)"
}
],
"section": "Token Representation",
"sec_num": "3.1"
},
{
"text": "where \u2190 \u2212 h c i and \u2212 \u2192 h c i denote the forward and backward outputs of bidirectional LSTM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Representation",
"sec_num": "3.1"
},
{
"text": "The final token representation is obtained as equation 3, where [;] denotes concatenation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Token Representation",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x t i = [x w i ; x c i ]",
"eq_num": "(3)"
}
],
"section": "Token Representation",
"sec_num": "3.1"
},
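The token representation in Equations 1-3 can be sketched as a small PyTorch module. This is a minimal illustration under our own assumptions (module names, dimensions, and the use of the final char-LSTM hidden states), not the authors' released code:

```python
import torch
import torch.nn as nn

class TokenRepresentation(nn.Module):
    """Sketch of Section 3.1: word embedding plus a char-level BiLSTM."""
    def __init__(self, word_vocab, char_vocab, word_dim=200, char_dim=50, char_hidden=25):
        super().__init__()
        self.word_emb = nn.Embedding(word_vocab, word_dim)  # e^w; pre-trained in the paper
        self.char_emb = nn.Embedding(char_vocab, char_dim)  # e^c; randomly initialized
        self.char_lstm = nn.LSTM(char_dim, char_hidden, bidirectional=True, batch_first=True)

    def forward(self, word_ids, char_ids):
        # word_ids: (n_tokens,); char_ids: (n_tokens, max_chars)
        x_w = self.word_emb(word_ids)                          # Eq. 1: x^w_i = e^w(t_i)
        _, (h_n, _) = self.char_lstm(self.char_emb(char_ids))  # final fwd/bwd char states
        x_c = torch.cat([h_n[0], h_n[1]], dim=-1)              # Eq. 2: x^c_i = [fwd; bwd]
        return torch.cat([x_w, x_c], dim=-1)                   # Eq. 3: x^t_i = [x^w_i; x^c_i]
```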
{
"text": "As shown in Figure 2 , we apply the hard parameter sharing mechanism (Ruder, 2017) for multitask training using bidirectional LSTM as shared feature extractor. Hard parameter sharing greatly reduces the risk of overfitting (Baxter, 1997) and",
"cite_spans": [
{
"start": 223,
"end": 237,
"text": "(Baxter, 1997)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Shared Feature Extractor",
"sec_num": "3.2"
},
{
"text": "increases the correlation of our boundary detection module and categorical label prediction module. Specifically, the hidden state of bidirectional LSTM can be expressed as following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Feature Extractor",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2212 \u2192 h t i = \u2212 \u2212\u2212\u2212 \u2192 LSTM(x t i , \u2212 \u2192 h t i\u22121 ) (4) \u2190 \u2212 h t i = \u2190 \u2212\u2212\u2212 \u2212 LSTM(x t i , \u2190 \u2212 h t i\u22121 ) (5) h t i = [ \u2212 \u2192 h t i ; \u2190 \u2212 h t i ]",
"eq_num": "(6)"
}
],
"section": "Shared Feature Extractor",
"sec_num": "3.2"
},
{
"text": "where x t i is the token representation which is mentioned in section 3.1. We feed x t i into a Dropout layer to prevent overfitting. \u2212 \u2192 h t i and \u2190 \u2212 h t i denote the i-th forward and backward hidden state of Bi-LSTM layer. Formally, we extract the shared features of each token in a sentence as h t i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Feature Extractor",
"sec_num": "3.2"
},
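A minimal sketch of the shared extractor (Eqs. 4-6). The hidden size of 100 per direction matches the 200-dimensional output stated in Section 4.3, and the dropout placement follows the prose; both are assumptions about implementation details:

```python
import torch.nn as nn

class SharedFeatureExtractor(nn.Module):
    """Sketch of Section 3.2: one BiLSTM shared by both task modules."""
    def __init__(self, input_dim=250, hidden=100, dropout=0.5):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.bilstm = nn.LSTM(input_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, x_t):
        # x_t: (batch, seq_len, input_dim) token representations from Section 3.1
        h_t, _ = self.bilstm(self.dropout(x_t))  # Eqs. 4-6: h^t_i = [fwd; bwd]
        return h_t                               # (batch, seq_len, 2 * hidden)
```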
{
"text": "Previous works (Lample et al., 2016; Ma and Hovy, 2016) on flat NER (non-nested named entities recognition) predict entity boundaries and categorical labels jointly. However, when entities are nested in other entities, one individual token can be included in many different entities. This means assigning one single categorical label for each token is inappropriate. We divide nested NER into two subtasks: entity boundary detection and categorical label prediction tasks. Unlike assigning an entity categorical label for each token, we predict the boundary labels first. Formally, given a sentence (t 1 ,t 2 ,...t n ), and one entity in the sentence. we represent the entity as R(i, j), which denotes the entity is composed by a continuous token sequence (t i ,t i+1 ,...t j ). Specially, we tag the boundary token t i as \"B\" and t j as \"E\". The tokens inside entities are assigned with label \"I\" and non-entity tokens are assigned with \"O\" labels. We detect entity boundaries as shown in Fig-ure 3. For each token t i in a sentence, we predict a boundary label by feeding its corresponding shared feature representation h t i (described in section 3.2) into a ReLU activation function and a softmax classifier:",
"cite_spans": [
{
"start": 15,
"end": 36,
"text": "(Lample et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 37,
"end": 55,
"text": "Ma and Hovy, 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 990,
"end": 997,
"text": "Fig-ure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entity Boundary Detection",
"sec_num": "3.3"
},
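To make the labeling scheme concrete, here is a hypothetical helper that derives the boundary labels of Section 3.3 from gold entity spans R(i, j). Because nested entities can overlap, a token may carry several boundary labels, which is why the module uses a multi-label loss; we collect labels as sets:

```python
def boundary_labels(n_tokens, entities):
    """Collect B/I/E/O boundary labels per token for entities R(i, j), inclusive.
    `entities` is a list of (i, j) spans; a token may receive several labels."""
    labels = [set() for _ in range(n_tokens)]
    for i, j in entities:
        labels[i].add("B")               # t_i starts the entity
        labels[j].add("E")               # t_j ends it (a single-token entity gets B and E)
        for k in range(i + 1, j):
            labels[k].add("I")           # tokens strictly inside the entity
    for s in labels:
        if not s:
            s.add("O")                   # tokens outside every entity
    return labels

# "Human TR Beta 1" nested in "Human TR Beta 1 mRNA" (Figure 1): spans (0, 3) and (0, 4)
print(boundary_labels(5, [(0, 3), (0, 4)]))  # token 3 carries both 'I' and 'E'
```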
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "o t i = Uh t i + b (7) d t i = softmax(o t i )",
"eq_num": "(8)"
}
],
"section": "Entity Boundary Detection",
"sec_num": "3.3"
},
{
"text": "where U and b are trainable parameters. We compute the KL-divergence multi-label loss between the true distributiond t i and the predicted distribution d t i as equation 9:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Boundary Detection",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L bcls = \u2212 (d t i ) log(d t i )",
"eq_num": "(9)"
}
],
"section": "Entity Boundary Detection",
"sec_num": "3.3"
},
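A sketch of the softmax variant of the boundary detector (Eqs. 7-9). The ReLU comes from the prose of Section 3.3; Eqs. 7-8 show only the affine map and softmax, so its exact placement is our assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryDetector(nn.Module):
    """Sketch of Section 3.3: per-token boundary classification on h^t_i."""
    def __init__(self, feat_dim=200, n_labels=4):  # boundary labels: B, I, E, O
        super().__init__()
        self.proj = nn.Linear(feat_dim, n_labels)  # U and b in Eq. 7

    def forward(self, h_t):
        o_t = self.proj(F.relu(h_t))               # Eq. 7: o^t_i = U h^t_i + b
        return F.log_softmax(o_t, dim=-1)          # log of Eq. 8, convenient for the loss

def boundary_loss(log_d, d_true):
    """Eq. 9: cross-entropy term of the KL divergence between the true
    (possibly multi-label) distribution and the predicted distribution."""
    return -(d_true * log_d).sum(dim=-1).mean()
```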
{
"text": "Conditional random field (CRF) (Lafferty et al., 2001 ) is considered good at modeling sequence label dependencies (e.g., label I must be after B).",
"cite_spans": [
{
"start": 31,
"end": 53,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Boundary Detection",
"sec_num": "3.3"
},
{
"text": "We make a comparison of choosing softmax or CRF as output layer because our sequence labels are different from flat NER models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Boundary Detection",
"sec_num": "3.3"
},
{
"text": "Given an input sentence sequence X = (x 1 ,x 2 , ... x n ), and a corresponding boundary label sequence L = (l 1 ,l 2 , ... l n ), we match each token with label B to the token with label E to construct candidate entity regions. Especially, considering there are entities containing one single token, we match tokens with label B to themselves firstly. The representation of entity R(i, j) is obtained as following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Categorical Label Prediction",
"sec_num": "3.4"
},
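The matching step can be sketched as a plain function over the per-token label sets produced by the boundary detector; pairing every B with itself first covers single-token entities, exactly as described above (a sketch, not the authors' code):

```python
def candidate_regions(labels):
    """Pair each token labeled B with itself and with every later token
    labeled E; each pair (i, j) is a candidate entity region R(i, j)."""
    regions = []
    for i, li in enumerate(labels):
        if "B" not in li:
            continue
        regions.append((i, i))             # match B to itself first (single tokens)
        for j in range(i + 1, len(labels)):
            if "E" in labels[j]:
                regions.append((i, j))     # candidate region R(i, j)
    return regions

print(candidate_regions([{"B"}, {"I"}, {"E"}, {"O"}, {"B", "E"}]))
# [(0, 0), (0, 2), (0, 4), (4, 4)]
```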
{
"text": "R i,j = 1 j \u2212 i + 1 j k=i h t k (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Categorical Label Prediction",
"sec_num": "3.4"
},
{
"text": "where h t k denotes the outputs of the shared bidirectional LSTM layer for k-th token in sentence. We simply average the representations for each token within boundary regions. The final representation of entities will be sent into a ReLU activation function and the softmax layer to predict entity categorical labels. We compute the loss of categorical label prediction in equation (11-12):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Categorical Label Prediction",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d e i,j = softmax(U e i,j R i,j + b e i,j )",
"eq_num": "(11)"
}
],
"section": "Entity Categorical Label Prediction",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L ecls = \u2212 (d e i,j ) log(d e i,j )",
"eq_num": "(12)"
}
],
"section": "Entity Categorical Label Prediction",
"sec_num": "3.4"
},
{
"text": "where U e i,j and b e i,j are trainable parameters.d e i,j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Categorical Label Prediction",
"sec_num": "3.4"
},
{
"text": "and d e i,j denote the true distribution and predicted distribution of entity categorical labels, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Categorical Label Prediction",
"sec_num": "3.4"
},
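A sketch of the region classifier (Eqs. 10-12). We use one shared linear layer for all regions, whereas Eqs. 11-12 subscript U^e and b^e by (i, j); the names and the non-entity class are our assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntityClassifier(nn.Module):
    """Sketch of Section 3.4: average region features (Eq. 10), then classify (Eq. 11)."""
    def __init__(self, feat_dim=200, n_types=6):  # e.g. 5 GENIA types plus non-entity
        super().__init__()
        self.proj = nn.Linear(feat_dim, n_types)

    def forward(self, h_t, regions):
        # h_t: (seq_len, feat_dim) shared BiLSTM outputs; regions: inclusive (i, j) pairs
        reps = torch.stack([h_t[i:j + 1].mean(dim=0) for i, j in regions])  # Eq. 10
        return F.log_softmax(self.proj(F.relu(reps)), dim=-1)               # Eq. 11

def entity_loss(log_d_e, d_e_true):
    """Eq. 12: cross-entropy between true and predicted type distributions."""
    return -(d_e_true * log_d_e).sum(dim=-1).mean()
```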
{
"text": "In our model, it is inconvenient and inefficient for the reason that we predict entity categorical labels after all boundary-relative regions have been detected. Considering our boundary detection module and entity categorical label prediction module share the same entity boundaries, we apply a multitask loss for training the two tasks simultaneously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multitask Training",
"sec_num": "3.5"
},
{
"text": "During training phase, we feed the ground-truth boundary labels into entity categorical label prediction module so that the classifier will be trained without affection from error boundary detection. As for testing phase, the outputs of boundary detection will be collected. The detected boundaries will indicate which entity regions should be considered into predicting categorical labels. The multitask loss function is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multitask Training",
"sec_num": "3.5"
},
{
"text": "L multi = \u03b1 L bcls + (1 \u2212 \u03b1) L ecls (13)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multitask Training",
"sec_num": "3.5"
},
{
"text": "where L bcls and L ecls denote the categorical crossentropy loss for boundary detection module and entity categorical label prediction module, respectively. \u03b1 is a hyper-parameter which is assigned to control the degree of importance for each task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multitask Training",
"sec_num": "3.5"
},
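Equation 13 itself is a one-liner; a sketch with an illustrative alpha (the paper tunes alpha on the development set):

```python
import torch

def multitask_loss(l_bcls: torch.Tensor, l_ecls: torch.Tensor, alpha: float) -> torch.Tensor:
    """Eq. 13: L_multi = alpha * L_bcls + (1 - alpha) * L_ecls."""
    return alpha * l_bcls + (1.0 - alpha) * l_ecls

# During training, l_bcls and l_ecls are computed in the same forward pass, with
# the gold boundaries feeding the entity classifier (Section 3.5), and a single
# backward pass updates the shared BiLSTM and both task heads:
# multitask_loss(l_bcls, l_ecls, alpha=0.6).backward()  # alpha value is illustrative
```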
{
"text": "4 Experimental Settings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multitask Training",
"sec_num": "3.5"
},
{
"text": "To provide empirical evidence for effectiveness of the proposed model, we employ our experiments on three nested NER datasets: GENIA (Kim et al., 2003) , JNLPBA (Kim et al., 2004) and GermEval 2014 (Benikova et al., 2014) . GENIA dataset is constructed based on the GE-NIA v3.0.2 corpus. We preprocess the dataset following the same settings of (Finkel and Manning, 2009) and (Lu and Roth, 2015) . The dataset is split into 8.1:0.9:1 for training, development and testing. The statistics of GENIA dataset is shown as JNLPBA dataset is originally from GENIA corpus. It contains a training and testing datasets.",
"cite_spans": [
{
"start": 133,
"end": 151,
"text": "(Kim et al., 2003)",
"ref_id": "BIBREF9"
},
{
"start": 161,
"end": 179,
"text": "(Kim et al., 2004)",
"ref_id": "BIBREF10"
},
{
"start": 198,
"end": 221,
"text": "(Benikova et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 345,
"end": 371,
"text": "(Finkel and Manning, 2009)",
"ref_id": "BIBREF5"
},
{
"start": 376,
"end": 395,
"text": "(Lu and Roth, 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "However, only the flat and top-most entities are preserved. We collapse the sub-categories into 5 categories following the same settings as GENIA dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "GermEval 2014 dataset contains German nested named entities. The dataset covers over 31,000 sentences corresponding to over 590,000 tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "We compare our model with several state-of-theart models on GENIA dataset. These methods can be divided into three groups: Finkel and Manning (2009) and Ju et al. (2018) propose CRF-based sequence labeling approaches for nested named entity recognition. Finkel and Manning (2009) leverage entity-level features while Ju et al. (2018) propose neural-based method. We rerun the codes of Ju et al. 2018because they have not shared their pre-processed dataset.",
"cite_spans": [
{
"start": 123,
"end": 148,
"text": "Finkel and Manning (2009)",
"ref_id": "BIBREF5"
},
{
"start": 153,
"end": 169,
"text": "Ju et al. (2018)",
"ref_id": "BIBREF8"
},
{
"start": 254,
"end": 279,
"text": "Finkel and Manning (2009)",
"ref_id": "BIBREF5"
},
{
"start": 317,
"end": 333,
"text": "Ju et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "4.2"
},
{
"text": "Sohrab and Miwa 2018propose an exhaustive region classification model for nested NER. We reimplement their method according to their paper because they have not shared the codes. Lu and Roth (2015) and Muis and Lu (2017) build hyper-graphs to represent both the nested entities and their mentions. Muis and Lu (2017) improve the method of Lu and Roth (2015) .",
"cite_spans": [
{
"start": 179,
"end": 197,
"text": "Lu and Roth (2015)",
"ref_id": "BIBREF14"
},
{
"start": 339,
"end": 357,
"text": "Lu and Roth (2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "4.2"
},
{
"text": "Our model is implemented by PyTorch framework 1 2 . We use Adam optimizer for training our model. We initialize word vectors with a 200dimension pre-trained word embedding the same as Ju et al. (2018) and Sohrab and Miwa (2018) while the char embedding is set to 50-dimension and initialized randomly. The learning rate is set to 0.005. We set a 0.5 dropout rate for the Dropout layer employed after token-level LSTM during training phase. The output dimension of our shared bidirectional LSTM is 200. The coefficient \u03b1 of multitask loss is tuned during development process. All of our experiments are performed on the same machine (NVIDIA 1080ti GPU and Intel i7-8700 CPU).",
"cite_spans": [
{
"start": 184,
"end": 200,
"text": "Ju et al. (2018)",
"ref_id": "BIBREF8"
},
{
"start": 205,
"end": 227,
"text": "Sohrab and Miwa (2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "4.3"
},
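For reference, the stated hyper-parameters collected in one place; the dict layout and variable names are ours, only the values come from the paragraph above:

```python
import torch

config = {
    "word_dim": 200,      # pre-trained word embeddings
    "char_dim": 50,       # randomly initialized character embeddings
    "lstm_out_dim": 200,  # shared BiLSTM output size
    "dropout": 0.5,       # after the token-level LSTM
    "lr": 0.005,          # Adam learning rate
}
# optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])  # model as in Section 3
```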
{
"text": "We use a strict evaluation metrics that an entity is confirmed correct when the entity boundary and the entity categorical label are correct simultaneously. We employ precision, recall and F-score to evaluate the performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.4"
},
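A hypothetical helper making the strict metric explicit: an entity counts only if both its span and its categorical label match exactly:

```python
def strict_prf(gold, pred):
    """Strict precision/recall/F-score over sets of (start, end, label) triples."""
    tp = len(gold & pred)                      # exact span AND label match
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

print(strict_prf({(0, 3, "protein"), (0, 4, "RNA")},
                 {(0, 3, "protein"), (1, 4, "RNA")}))  # (0.5, 0.5, 0.5)
```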
{
"text": "We conduct our experiments on GENIA test dataset for nested named entity recognition. Table 2 shows our method outperforms the compared methods both in recall and F-score metrics. The CRF-based model is considered as more efficient in sequence labeling task, we compare the utilization of softmax and CRF as output layer of boundary detection module. The results show they gain comparable scores in precision, recall and F-score. However, the CRF-based model is time-consuming, about 3-5 times slower than the softmax-based model in inference speed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Evaluation",
"sec_num": "5.1"
},
{
"text": "Finkel and Manning 2009 Our model achieves a recall value of 73.6% and outperforms compared methods in Recall value with a large margin. We think that our model extract entities with a more accurate boundaries comparing to other methods. We evaluate it in experiments on boundary detection module. The GermEval 2014 dataset from KONVENS 2014 shared task is a German NER dataset. It contains few nested entities. Previous works in this dataset ignore nested entities or extract inner and outer entities in two independent models. Our method can extract nested entities in an end-to-end way. We compare our method with two state-ofthe-art approaches in Table 3 . Our method outperforms their approaches both in Recall and F-score metrics. Table 4 describes the performances of our model on the five categories on the test dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 651,
"end": 658,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 737,
"end": 744,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Model P(%) R(%) F(%)",
"sec_num": null
},
{
"text": "Our model outperforms the model described in Ju et al. (2018) and Sohrab and Miwa (2018) with Fscore value on all categories. ",
"cite_spans": [
{
"start": 45,
"end": 61,
"text": "Ju et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model P(%) R(%) F(%)",
"sec_num": null
},
{
"text": "We conduct experiments on boundary detection to illustrate that our model extract entity boundaries more precisely comparing to Sohrab and Miwa (2018) and Ju et al. (2018) . Table 5 shows the results of boundary detection on GENIA test dataset. Our model locates entities more accurately with a higher recall value (76.9%) than the comparing methods. It gives a reason why our model outperforms other state-of-the-art methods in recall value. We exploit boundary information explic-itly and consider the dependencies of boundaries and entity categorical labels with a multitask loss. While in the method of Sohrab and Miwa 2018, candidate entity regions are classified individually.",
"cite_spans": [
{
"start": 155,
"end": 171,
"text": "Ju et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 174,
"end": 181,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Performance of Boundary Detection",
"sec_num": "5.2"
},
{
"text": "Model Boundary Detection P(%) R(%) F(%) Sohrab and Miwa (2018) 76.669.2 72.7 Ju et al. 201879.9 67.08 73.4 Our model(softmax) 79.7 76.9 78.3 Table 6 describes the performance of our model in detecting boundary labels for each token in sentences. The results are based on the shared bidirectional LSTM and a softmax classifier. Our model extracts entity boundaries with a relatively high performance. This facilitates the prediction of entity categorical labels because the candidate entity regions are more likely to be true entities. ",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Performance of Boundary Detection",
"sec_num": "5.2"
},
{
"text": "Figure 4(a) shows the number of candidate entity regions in our model with softmax and the approach of Sohrab and Miwa (2018). Comparing to classifying all possible regions in sentences, our model only concerns about boundary-relevant regions which is much fewer. We compare the inference speed of our model and the approaches of Sohrab and Miwa (2018) and Ju et al. (2018) in Figure 4 (b). Our model is about 4 times faster than Sohrab and Miwa (2018) and about 3 times faster than Ju et al. (2018) . The cascaded CRF layers of Ju et al. (2018) are the limitation in inference speed. Table 7 shows the performance of our pipeline model and multitask model on GENIA development set and test set. For pipeline model, we train the boundary detection module and entity categorical label prediction module separately. Our multitask model has a higher F value both in development set and test set. Multitask learning can capture the underlying dependencies of boundaries and entity categorical labels. It helps the model focus its attention on those features that actually matter (Ruder, 2017) . In pipeline model, entity categorical prediction module will not share information with boundary detection module because they are trained separately. However, entity categorical prediction module and boundary detection module share the same entity boundaries. We assign a shared feature extractor (the bidirectional LSTM layer) to extract the features utilized in both entity categorical prediction and boundary detection. The results have demonstrated that the framework of multitask learning improves final performance.",
"cite_spans": [
{
"start": 357,
"end": 373,
"text": "Ju et al. (2018)",
"ref_id": "BIBREF8"
},
{
"start": 483,
"end": 499,
"text": "Ju et al. (2018)",
"ref_id": "BIBREF8"
},
{
"start": 529,
"end": 545,
"text": "Ju et al. (2018)",
"ref_id": "BIBREF8"
},
{
"start": 1075,
"end": 1088,
"text": "(Ruder, 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 377,
"end": 385,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 585,
"end": 592,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Inference Time",
"sec_num": "5.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Model Development Set Test Set P(%) R(%) F(%) P(%) R(%) F(%)",
"eq_num": "Pipeline 74"
}
],
"section": "Performance of Multitask Learning",
"sec_num": "5.4"
},
{
"text": "We conduct ablation experiments on GENIA development set to evaluate the contributions of neural components including dropout layer, pretrained word embedding and the character-level LSTM. The results are listed in Table 8 . All these components contribute to the effectiveness of our model. Dropout layer contributes significantly for both precision and recall values. Setting P(%) R(%) F(%)",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 222,
"text": "Table 8",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Ablation Study and Flat NER",
"sec_num": "5.5"
},
{
"text": "Our Model(softmax) 74.5 75.6 75.0 without Dropout 72.6 73.1 72.9 without Pre-trained 73.8 75.7 74.7 without Char repr.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study and Flat NER",
"sec_num": "5.5"
},
{
"text": "75.3 73.9 74.6 To prove our model can work on nested NER and also flat NER task, we perform experiments on the JNLPBA dataset. We achieve 73.6 in term of F-score which is comparable with the state-ofthe-art result of Gridach (2017) .",
"cite_spans": [
{
"start": 217,
"end": 231,
"text": "Gridach (2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study and Flat NER",
"sec_num": "5.5"
},
{
"text": "6 Case Study Table 9 shows a case study comparing our model with exhaustive model (Sohrab and Miwa, 2018) and Layered model (Ju et al., 2018) . In the example, \"human TATA binding factor\" is an entity nested in entity \"transcriptionally active human TATA binding factor\". Our model with multitask learning extracts both entities with exact boundaries and entity categorical labels. Exhaustive model gets the error boundaries and misses the token \"human\" in entities. Comparing to layered model only detecting an outer entity, our model extract both inner and outer entities. It demonstrates that our combination of sequence labeling models and region classification models can locate entities precisely and extract both inner and outer entities.",
"cite_spans": [
{
"start": 82,
"end": 105,
"text": "(Sohrab and Miwa, 2018)",
"ref_id": "BIBREF21"
},
{
"start": 124,
"end": 141,
"text": "(Ju et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 9",
"ref_id": "TABREF15"
}
],
"eq_spans": [],
"section": "Ablation Study and Flat NER",
"sec_num": "5.5"
},
{
"text": "Cloning of a transcriptionally active human TATA binding factor. For our pipeline model, without the dependencies information from entity categorical labels, it misses the outer entity boundaries and only extracts the inner one. It verifies that the multitask learning can share boundary information between boundary detection module and entity categorical label prediction module, which is very effective for nested NER.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence",
"sec_num": null
},
{
"text": "This paper presents a boundary-aware model which leverages boundaries to predict entity categorical labels. Our model combines sequence labeling model and region classification model to locate and classify nested entities with high performance. To capture the underlying dependencies of boundary detection module and entity categorical prediction module, we apply a multitask loss for training the two tasks simultaneously. Our model outperforms existing nested models in terms of Fscore.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "For future work, we consider to model the dependencies among entity regions explicitly and improve the performance of boundary detection module which is important for entity categorical label prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://pytorch.org/ 2 Code is available at https://github.com/ thecharm/boundary-aware-nested-ner",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The results are taken from their papers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the Fundamental Research Funds for the Central Universities, SCUT (No. 2017ZD048, D2182480), the Science and Technology Planning Project of Guangdong Province (No.2017B050506004), the Science and Technology Programs of Guangzhou (No. 201704030076,201802010027,201902010046) and a CUHK Research Committee Funding (Direct Grants) (Project Code: EE16963).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Recognising nested named entities in biomedical text",
"authors": [
{
"first": "Beatrice",
"middle": [],
"last": "Alex",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beatrice Alex, Barry Haddow, and Claire Grover. 2007. Recognising nested named entities in biomedical text. In Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Lan- guage Processing, pages 65-72. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A bayesian/information theoretic model of learning to learn via multiple task sampling",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Baxter",
"suffix": ""
}
],
"year": 1997,
"venue": "Machine learning",
"volume": "28",
"issue": "",
"pages": "7--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Baxter. 1997. A bayesian/information the- oretic model of learning to learn via multiple task sampling. Machine learning, 28(1):7-39.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Nosta-d named entity annotation for german: Guidelines and dataset",
"authors": [
{
"first": "Darina",
"middle": [],
"last": "Benikova",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Reznicek",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "2524--2531",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Darina Benikova, Chris Biemann, and Marc Reznicek. 2014. Nosta-d named entity annotation for german: Guidelines and dataset. In LREC, pages 2524-2531.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A constrained latent variable model for coreference resolution",
"authors": [
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Rajhans",
"middle": [],
"last": "Samdani",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "601--612",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai-Wei Chang, Rajhans Samdani, and Dan Roth. 2013. A constrained latent variable model for coref- erence resolution. In Proceedings of the 2013 Con- ference on Empirical Methods in Natural Language Processing, pages 601-612.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "How to train good word embeddings for biomedical nlp",
"authors": [
{
"first": "Billy",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Gamal",
"middle": [],
"last": "Crichton",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 15th workshop on biomedical natural language processing",
"volume": "",
"issue": "",
"pages": "166--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Billy Chiu, Gamal Crichton, Anna Korhonen, and Sampo Pyysalo. 2016. How to train good word em- beddings for biomedical nlp. In Proceedings of the 15th workshop on biomedical natural language pro- cessing, pages 166-174.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Nested named entity recognition",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "141--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel and Christopher D Manning. 2009. Nested named entity recognition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 141-150. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Character-level neural network for biomedical named entity recognition",
"authors": [
{
"first": "Mourad",
"middle": [],
"last": "Gridach",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of biomedical informatics",
"volume": "70",
"issue": "",
"pages": "85--91",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mourad Gridach. 2017. Character-level neural network for biomedical named entity recognition. Journal of biomedical informatics, 70:85-91.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Entity linking via joint encoding of types, descriptions, and context",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2681--2690",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Gupta, Sameer Singh, and Dan Roth. 2017. En- tity linking via joint encoding of types, descriptions, and context. In Proceedings of the 2017 Confer- ence on Empirical Methods in Natural Language Processing, pages 2681-2690.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A neural layered model for nested named entity recognition",
"authors": [
{
"first": "Meizhi",
"middle": [],
"last": "Ju",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1446--1459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named en- tity recognition. In Proceedings of the 2018 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 1446-1459.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Genia corpus''a semantically annotated corpus for bio-textmining",
"authors": [
{
"first": "J-D",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Yuka",
"middle": [],
"last": "Tateisi",
"suffix": ""
},
{
"first": "Jun''ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2003,
"venue": "Bioinformatics",
"volume": "19",
"issue": "1",
"pages": "180--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J-D Kim, Tomoko Ohta, Yuka Tateisi, and Jun''ichi Tsujii. 2003. Genia corpus''a semantically anno- tated corpus for bio-textmining. Bioinformatics, 19(suppl 1):i180-i182.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Introduction to the bio-entity recognition task at jnlpba",
"authors": [
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Yuka",
"middle": [],
"last": "Tateisi",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the international joint workshop on natural language processing in biomedicine and its applications",
"volume": "",
"issue": "",
"pages": "70--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin-Dong Kim, Tomoko Ohta, Yoshimasa Tsuruoka, Yuka Tateisi, and Nigel Collier. 2004. Introduction to the bio-entity recognition task at jnlpba. In Pro- ceedings of the international joint workshop on nat- ural language processing in biomedicine and its ap- plications, pages 70-75. Citeseer.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando Cn",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.01360"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Heterogeneous supervision for relation extraction: A representation learning approach",
"authors": [
{
"first": "Liyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Zhi",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.00166"
]
},
"num": null,
"urls": [],
"raw_text": "Liyuan Liu, Xiang Ren, Qi Zhu, Shi Zhi, Huan Gui, Heng Ji, and Jiawei Han. 2017. Hetero- geneous supervision for relation extraction: A representation learning approach. arXiv preprint arXiv:1707.00166.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Joint mention extraction and classification with mention hypergraphs",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "857--867",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Lu and Dan Roth. 2015. Joint mention extrac- tion and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 857-867.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.01354"
]
},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. arXiv preprint arXiv:1603.01354.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant supervision for relation extrac- tion without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Vol- ume 2-Volume 2, pages 1003-1011. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Labeling gaps between words: Recognizing overlapping mentions with mention separators",
"authors": [
{
"first": "Aldrian",
"middle": [
"Obaja"
],
"last": "Muis",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2608--2618",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1276"
]
},
"num": null,
"urls": [],
"raw_text": "Aldrian Obaja Muis and Wei Lu. 2017. Labeling gaps between words: Recognizing overlapping mentions with mention separators. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2608-2618, Copenhagen, Denmark. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A multi-encoder neural conversation model",
"authors": [
{
"first": "Da",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Xue",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Jingyun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ho-Fung",
"middle": [],
"last": "Leung",
"suffix": ""
}
],
"year": 2019,
"venue": "Neurocomputing",
"volume": "358",
"issue": "",
"pages": "344--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Da Ren, Yi Cai, Xue Lei, Jingyun Xu, Qing Li, and Ho-fung Leung. 2019. A multi-encoder neural con- versation model. Neurocomputing, 358:344-354.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "An overview of multi-task learning in",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2017,
"venue": "deep neural networks",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.05098"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Effective adaptation of a hidden markov model-based named entity recognizer for biomedical domain",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Chew-Lim",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the ACL 2003 workshop on Natural language processing in biomedicine",
"volume": "13",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Shen, Jie Zhang, Guodong Zhou, Jian Su, and Chew-Lim Tan. 2003. Effective adaptation of a hid- den markov model-based named entity recognizer for biomedical domain. In Proceedings of the ACL 2003 workshop on Natural language processing in biomedicine-Volume 13, pages 49-56. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Deep exhaustive model for nested named entity recognition",
"authors": [
{
"first": "Mohammad",
"middle": [
"Golam"
],
"last": "Sohrab",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2843--2849",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Pro- cessing, pages 2843-2849.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Fast and accurate entity recognition with iterated dilated convolutions",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Belanger",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.02098"
]
},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. arXiv preprint arXiv:1702.02098.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A neural transition-based model for nested mention recognition",
"authors": [
{
"first": "Bailin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongxia",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.01808"
]
},
"num": null,
"urls": [],
"raw_text": "Bailin Wang, Wei Lu, Yu Wang, and Hongxia Jin. 2018. A neural transition-based model for nested mention recognition. arXiv preprint arXiv:1810.01808.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Enhancing hmm-based biomedical named entity recognition by studying special phenomena",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Chew-Lim",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of biomedical informatics",
"volume": "37",
"issue": "6",
"pages": "411--422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jie Zhang, Dan Shen, Guodong Zhou, Jian Su, and Chew-Lim Tan. 2004. Enhancing hmm-based biomedical named entity recognition by studying special phenomena. Journal of biomedical infor- matics, 37(6):411-422.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Recognizing names in biomedical texts using mutual information independence model and svm plus sigmoid",
"authors": [
{
"first": "",
"middle": [],
"last": "Gd Zhou",
"suffix": ""
}
],
"year": 2006,
"venue": "International Journal of Medical Informatics",
"volume": "75",
"issue": "6",
"pages": "456--467",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "GD Zhou. 2006. Recognizing names in biomedical texts using mutual information independence model and svm plus sigmoid. International Journal of Medical Informatics, 75(6):456-467.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Recognizing names in biomedical texts: a machine learning approach",
"authors": [
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Chewlim",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2004,
"venue": "Bioinformatics",
"volume": "20",
"issue": "7",
"pages": "1178--1190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guodong Zhou, Jie Zhang, Jian Su, Dan Shen, and Chewlim Tan. 2004. Recognizing names in biomed- ical texts: a machine learning approach. Bioinfor- matics, 20(7):1178-1190.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "The Architecture of our boundary-aware model. The representation of each token in sentence \"Human TR Beta 1 mRNA Levels in..\" is feed into a shared bidirectional LSTM layer. We leverage the outputs of Bi-LSTM to detect entity boundaries and their categorical labels. The red circle indicates entity region representations between entity boundaries.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "The architecture of entity boundary detection module. We feed the representation of each token in the sentence into a bidirectional LSTM layer, the outputs of LSTM layer are utilized to predict boundary labels.",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "(a): The number of candidate entity regions in our model with softmax and the approach of Sohrab and Miwa (2018) when evaluating on GENIA test and development set; (b): The inference speed of our model and compared models on GENIA test set. t/s indicates token per second.",
"type_str": "figure",
"num": null
},
"TABREF0": {
"content": "<table><tr><td>Item</td><td>Train</td><td>Dev</td><td>Test</td><td>Overall</td><td>Nested</td></tr><tr><td>Document</td><td>1599</td><td>189</td><td>212</td><td>2000</td><td>-</td></tr><tr><td>Sentences</td><td>15023</td><td>1669</td><td>1854</td><td>18546</td><td>-</td></tr><tr><td>Percentage</td><td>81%</td><td>9%</td><td>10%</td><td>100%</td><td>-</td></tr><tr><td>DNA</td><td>7650</td><td>1026</td><td>1257</td><td>9933</td><td>1744</td></tr><tr><td>RNA</td><td>692</td><td>132</td><td>109</td><td>933</td><td>407</td></tr><tr><td>Protein</td><td>28728</td><td>2303</td><td>3066</td><td>34097</td><td>1902</td></tr><tr><td>Cell Line</td><td>3027</td><td>325</td><td>438</td><td>3790</td><td>347</td></tr><tr><td>Cell Type</td><td>5832</td><td>551</td><td>604</td><td>6987</td><td>389</td></tr><tr><td>Overall</td><td>45929</td><td>4337</td><td>5474</td><td>55740</td><td>4789</td></tr></table>",
"text": "",
"num": null,
"type_str": "table",
"html": null
},
"TABREF1": {
"content": "<table/>",
"text": "",
"num": null,
"type_str": "table",
"html": null
},
"TABREF3": {
"content": "<table/>",
"text": "Performance on GENIA test set. Our models with softmax and CRF outperform other state-of-theart methods.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF5": {
"content": "<table><tr><td>Our model outperforms two state-of-the-art methods in</td></tr><tr><td>nested NER.</td></tr></table>",
"text": "Performance on GermEval 2014 test set.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF7": {
"content": "<table><tr><td>: Our results on five categories compared to Ju</td></tr><tr><td>et al. (2018) and Sohrab and Miwa (2018) on GENIA</td></tr><tr><td>test set.</td></tr></table>",
"text": "",
"num": null,
"type_str": "table",
"html": null
},
"TABREF8": {
"content": "<table/>",
"text": "Performance of Boundary Detection on GE-NIA test set.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF10": {
"content": "<table/>",
"text": "Performance of Boundary Label Prediction with softmax classifier on GENIA test set.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF12": {
"content": "<table/>",
"text": "Performance Comparison of our pipeline model and multitask model on GENIA development set and test set.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF13": {
"content": "<table/>",
"text": "Results of Ablation Tests on GENIA development set.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF15": {
"content": "<table/>",
"text": "An example of predicted results in GENIA test dataset.",
"num": null,
"type_str": "table",
"html": null
}
}
}
}