{
"paper_id": "D19-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:57:12.151651Z"
},
"title": "Leveraging Dependency Forest for Neural Medical Relation Extraction",
"authors": [
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Rochester",
"location": {
"settlement": "Rochester",
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Westlake Institute for Advanced Study",
"location": {}
},
"email": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Rochester",
"location": {
"settlement": "Rochester",
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yorktown Heights",
"location": {
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon AWS",
"location": {
"settlement": "New York",
"region": "NY",
"country": "USA"
}
},
"email": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Xiamen University",
"location": {
"settlement": "Xiamen",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Medical relation extraction discovers relations between entity mentions in text, such as research articles. For this task, dependency syntax has been recognized as a crucial source of features. Yet in the medical domain, 1best parse trees suffer from relatively low accuracies, diminishing their usefulness. We investigate a method to alleviate this problem by utilizing dependency forests. Forests contain many possible decisions and therefore have higher recall but more noise compared with 1-best outputs. A graph neural network is used to represent the forests, automatically distinguishing the useful syntactic information from parsing noise. Results on two biomedical benchmarks show that our method outperforms the standard tree-based methods, giving the state-of-the-art results in the literature.",
"pdf_parse": {
"paper_id": "D19-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "Medical relation extraction discovers relations between entity mentions in text, such as research articles. For this task, dependency syntax has been recognized as a crucial source of features. Yet in the medical domain, 1best parse trees suffer from relatively low accuracies, diminishing their usefulness. We investigate a method to alleviate this problem by utilizing dependency forests. Forests contain many possible decisions and therefore have higher recall but more noise compared with 1-best outputs. A graph neural network is used to represent the forests, automatically distinguishing the useful syntactic information from parsing noise. Results on two biomedical benchmarks show that our method outperforms the standard tree-based methods, giving the state-of-the-art results in the literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The sheer amount of medical articles and their rapid growth prevent researchers from receiving comprehensive literature knowledge by direct reading. This can hamper both medical research and clinical diagnosis. NLP techniques have been used for automating the knowledge extraction process from the medical literature (Friedman et al., 2001; Yu and Agichtein, 2003; Hirschman et al., 2005; Xu et al., 2010; Sondhi et al., 2010; Abacha and Zweigenbaum, 2011) . Along this line of work, a long-standing task is relation extraction, which mines factual knowledge from free text by labeling relations between entity mentions. As shown in Figure 1 , the sub-clause \"previously observed cytochrome P450 3A4 ( CYP3A4 ) interaction of the dual orexin receptor antagonist almorexant\" contains two entities, namely \"orexin receptor\" and \"almorexant\". There is an \"adversary\" relation between these two entities, denoted as\"CPR:6\". Figure 1: (a) 1-best dependency tree and (b) dependency forest for a medical-domain sentence, where edge label \"comp\" represents \"compound\". Associated mentions are in different colors. Some irrelevant words and edges are omitted for simplicity.",
"cite_spans": [
{
"start": 317,
"end": 340,
"text": "(Friedman et al., 2001;",
"ref_id": "BIBREF12"
},
{
"start": 341,
"end": 364,
"text": "Yu and Agichtein, 2003;",
"ref_id": "BIBREF52"
},
{
"start": 365,
"end": 388,
"text": "Hirschman et al., 2005;",
"ref_id": "BIBREF16"
},
{
"start": 389,
"end": 405,
"text": "Xu et al., 2010;",
"ref_id": "BIBREF47"
},
{
"start": 406,
"end": 426,
"text": "Sondhi et al., 2010;",
"ref_id": "BIBREF40"
},
{
"start": 427,
"end": 456,
"text": "Abacha and Zweigenbaum, 2011)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 633,
"end": 641,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work has shown that dependency syntax is important for guiding relation extraction (Culotta and Sorensen, 2004; Bunescu and Mooney, 2005; Liu et al., 2015; Gormley et al., 2015; Xu et al., 2015a,b; Miwa and Bansal, 2016; , especially in biological and medical domains Peng et al., 2017; Song et al., 2018b) . Compared with sequential surface-level structures, such as POS tags, dependency trees help to model word-toword relations more easily by drawing direct connections between distant words that are syntactically correlated. Take the phrase \"effect on the medicine\" for example; \"effect\" and \"medicine\" are directly connected in a dependency tree, regardless of how many modifiers are added in between.",
"cite_spans": [
{
"start": 92,
"end": 120,
"text": "(Culotta and Sorensen, 2004;",
"ref_id": "BIBREF8"
},
{
"start": 121,
"end": 146,
"text": "Bunescu and Mooney, 2005;",
"ref_id": "BIBREF4"
},
{
"start": 147,
"end": 164,
"text": "Liu et al., 2015;",
"ref_id": "BIBREF28"
},
{
"start": 165,
"end": 186,
"text": "Gormley et al., 2015;",
"ref_id": "BIBREF13"
},
{
"start": 187,
"end": 206,
"text": "Xu et al., 2015a,b;",
"ref_id": null
},
{
"start": 207,
"end": 229,
"text": "Miwa and Bansal, 2016;",
"ref_id": "BIBREF36"
},
{
"start": 277,
"end": 295,
"text": "Peng et al., 2017;",
"ref_id": "BIBREF37"
},
{
"start": 296,
"end": 315,
"text": "Song et al., 2018b)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Dependency parsing has achieved an accuracy over 96% in the news domain (Liu and Zhang, 2017; Kitaev and Klein, 2018) . However, for the medical literature domain, parsing accuracies can drop significantly (Lease and Charniak, 2005; Mc-Closky and Charniak, 2008; Sagae et al., 2008; Candito et al., 2011) . This can lead to severe er-ror propagation in downstream relation extraction tasks, offsetting much of the benefit that relation extraction models can obtain by exploiting dependency trees as a source of external features.",
"cite_spans": [
{
"start": 72,
"end": 93,
"text": "(Liu and Zhang, 2017;",
"ref_id": "BIBREF26"
},
{
"start": 94,
"end": 117,
"text": "Kitaev and Klein, 2018)",
"ref_id": "BIBREF20"
},
{
"start": 206,
"end": 232,
"text": "(Lease and Charniak, 2005;",
"ref_id": "BIBREF24"
},
{
"start": 233,
"end": 262,
"text": "Mc-Closky and Charniak, 2008;",
"ref_id": null
},
{
"start": 263,
"end": 282,
"text": "Sagae et al., 2008;",
"ref_id": "BIBREF39"
},
{
"start": 283,
"end": 304,
"text": "Candito et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We address the low-accuracy issue in biomedical dependency parsing by considering dependency forests as external features. Instead of 1best trees, dependency forests consist of dependency arcs and labels that a parser is relatively confident about, therefore having better recall of gold-standard arcs by offering more candidate choices with noise. Our main idea is to let a relation extraction system learn automatically from a forest which arcs are the most relevant through end-task training, rather than relying solely on the decisions of a noisy syntactic parser. To this end, a graph neural network is used for encoding a forest, which in turn provides features for relation extraction. Back-propagation passes loss gradients from the relation extraction layer to the graph encoder, so that the more relevant edges can be chosen automatically for better relation extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Results on BioCreative VI ChemProt (CPR) (Krallinger et al., 2017) and a recent dataset focused on phenotype-gene relations (PGR) show that our method outperforms a strong baseline that uses 1-best dependency trees as features, giving the state-of-the-art accuracies in the literature. To our knowledge, we are the first to study dependency forests for medical information extraction, showing their advantages over 1-best tree structures. Our code is available at http://github.com/freesunshine/ dep-forest-re.",
"cite_spans": [
{
"start": 41,
"end": 66,
"text": "(Krallinger et al., 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Syntactic forests There have been previous studies leveraging constituent forests for machine translation (Mi et al., 2008; Ma et al., 2018; Zaremoodi and Haffari, 2018) , sentiment analysis (Le and Zuidema, 2015) and text generation (Lu and Ng, 2011) . However, the usefulness of dependency forests is relatively rarely studied, with one exception being Tu et al. (2010) , who use dependency forests to enhance long-range word-to-word dependencies for statistical machine translation. To our knowledge, we are the first to study the usefulness of dependency forests for relation extraction under a strong neural framework.",
"cite_spans": [
{
"start": 106,
"end": 123,
"text": "(Mi et al., 2008;",
"ref_id": "BIBREF35"
},
{
"start": 124,
"end": 140,
"text": "Ma et al., 2018;",
"ref_id": "BIBREF30"
},
{
"start": 141,
"end": 169,
"text": "Zaremoodi and Haffari, 2018)",
"ref_id": "BIBREF53"
},
{
"start": 191,
"end": 213,
"text": "(Le and Zuidema, 2015)",
"ref_id": "BIBREF23"
},
{
"start": 234,
"end": 251,
"text": "(Lu and Ng, 2011)",
"ref_id": "BIBREF29"
},
{
"start": 355,
"end": 371,
"text": "Tu et al. (2010)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Graph neural network Graph neural networks (GNNs) have been successful in encoding dependency trees for downstream tasks, such as semantic role labeling , semantic parsing (Xu et al., 2018) , machine translation Bastings et al., 2017) , relation extraction (Song et al., 2018b) and sentence ordering (Yin et al., 2019) . In particular, Song et al. (2018b) showed that GNNs are more effective than DAG networks (Peng et al., 2017) for modeling syntactic trees in relation extraction, which cause loss of important structural information. We are the first to exploit GNNs for encoding search spaces in the form of dependency forests.",
"cite_spans": [
{
"start": 172,
"end": 189,
"text": "(Xu et al., 2018)",
"ref_id": "BIBREF49"
},
{
"start": 212,
"end": 234,
"text": "Bastings et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 257,
"end": 277,
"text": "(Song et al., 2018b)",
"ref_id": "BIBREF43"
},
{
"start": 300,
"end": 318,
"text": "(Yin et al., 2019)",
"ref_id": "BIBREF51"
},
{
"start": 336,
"end": 355,
"text": "Song et al. (2018b)",
"ref_id": "BIBREF43"
},
{
"start": 410,
"end": 429,
"text": "(Peng et al., 2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Formally, the input to our task is a sentence s =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": "3"
},
{
"text": "w 1 , w 2 , . . . , w N ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": "3"
},
{
"text": "where N is the number of words in the sentence and w i represents the i-th input word. s is annotated with boundary information (\u21e0 1 : \u21e0 2 and \u21e3 1 : \u21e3 2 ) of target entity mentions (\u21e0 and \u21e3). We focus on the classic binary relation extraction setting , where the number of associated mentions is two. The output is a relation from a predefined relation set R = (r 1 , . . . , r M , None), where \"None\" means that no relation holds for the entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": "3"
},
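{
"text": "As a concrete illustration, the following minimal sketch shows how one training instance could be represented; the field and variable names are ours, not from the paper:\n\n```python\nfrom dataclasses import dataclass\nfrom typing import List, Tuple\n\n@dataclass\nclass Instance:\n    words: List[str]        # the sentence w_1 ... w_N\n    span1: Tuple[int, int]  # boundary of the first target mention\n    span2: Tuple[int, int]  # boundary of the second target mention\n    relation: str           # one of r_1 ... r_M, or 'None'\n\n# toy example in the spirit of Figure 1 (the sentence itself is invented)\ninst = Instance(words=['almorexant', 'antagonizes', 'the', 'orexin', 'receptor'],\n                span1=(0, 1), span2=(3, 5), relation='CPR:6')\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": "3"
},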
{
"text": "Two steps are taken for predicting the correct relation given an input sentence. First, a dependency parser is used to label the syntactic structure of the input. Here our baseline system takes the standard approach, using the 1-best parser output tree D T as features. In contrast, our proposed model uses the most confident parser forest D F as features. Given D T or D F , the second step is to encode both s and D T /D F using a neural network, before making a prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": "3"
},
{
"text": "We make use of the same graph neural network encoder structure to represent dependency syntax information for both the baseline and our model. In particular, a graph recurrent neural network architecture (Beck et al., 2018; is used, which has been shown effective in encoding graph structures , giving competitive results with alternative graph networks such as graph convolutional neural networks Bastings et al., 2017) .",
"cite_spans": [
{
"start": 204,
"end": 223,
"text": "(Beck et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 398,
"end": 420,
"text": "Bastings et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": "3"
},
{
"text": "As shown in Figure 2 , our baseline model stacks a bidirectional LSTM layer to encode an input sentence w 1 , . . . , w N with a graph recurrent network (GRN) to encode a 1-best dependency tree, which extracts features from the sentence and the dependency tree D T , respectively. Similar model frameworks have shown highly competitive performances in previous relation extraction studies (Peng et al., 2017; Song et al., 2018b) .",
"cite_spans": [
{
"start": 389,
"end": 408,
"text": "(Peng et al., 2017;",
"ref_id": "BIBREF37"
},
{
"start": 409,
"end": 428,
"text": "Song et al., 2018b)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Baseline: DEPTREE",
"sec_num": "4"
},
{
"text": "Given the input sentence w 1 , w 2 , . . . , w N , we represent each word with its embedding to generate a sequence of embeddings e 1 , e 2 , . . . , e N . A Bi-LSTM layer is used to encode the sentences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-LSTM layer",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h (0) i = LSTM l ( h (0) i+1 , e i ) (1) ! h (0) i = LSTM r ( ! h (0) i 1 , e i ),",
"eq_num": "(2)"
}
],
"section": "Bi-LSTM layer",
"sec_num": "4.1"
},
{
"text": "where the state of each word w i is generated by concatenating the states of both directions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-LSTM layer",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h (0) i = [ h (0) i ; ! h (0) i ]",
"eq_num": "(3)"
}
],
"section": "Bi-LSTM layer",
"sec_num": "4.1"
},
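{
"text": "A minimal PyTorch sketch of Equations 1-3, under our own assumptions about tensor shapes (this is an illustration, not the authors' released code):\n\n```python\nimport torch\nimport torch.nn as nn\n\nemb_dim, hidden = 200, 100  # per-direction size; the concatenated state is 2*hidden\nbilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)\n\ne = torch.randn(1, 6, emb_dim)  # embeddings e_1 ... e_N for one 6-word sentence\nh0, _ = bilstm(e)               # h0[:, i] concatenates both directions, as in Eq. 3\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-LSTM layer",
"sec_num": "4.1"
},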
{
"text": "A 1-best dependency tree can be represented as a directed graph",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "D T = hV , Ei, where V includes all words w 1 , w 2 , . . . , w N and E = {(w j , l, w i )}| w j 2V,w i 2V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "represents all dependency edges . Each triple (w j , l, w i ) corresponds to a dependency edge, where w j modifies w i with an arc label l. Each word w i is associated with a hidden state that is initialized with the Bi-LSTM output h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "i . The state representation of the entire tree consists of all word states:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h (0) = {h (0) i } w i 2V",
"eq_num": "(4)"
}
],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "In order to capture non-local interactions between words, the GRN layer adopts a message passing framework that performs iterative information exchange between directly connected words. As a result, each word state is updated by absorbing larger contextual information through the message passing process, and a sequence of state transitions h (0) , h 1, . . . is generated for the entire tree. The final state h (T ) = GRN(h (0) , T ), where T is a hyperparameter representing the number of state transitions.",
"cite_spans": [
{
"start": 344,
"end": 347,
"text": "(0)",
"ref_id": null
},
{
"start": 426,
"end": 429,
"text": "(0)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "Message passing The message passing framework takes two main steps within each iteration: message calculation and state update. Take w i and iteration t as the example. In the first step, separate messages m \" i and m # i are calculated by summing up the messages of its children and parent in the dependency tree, respectively:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m \" i = X (w j ,l,w i )2E (\u2022,\u2022,i) [h (t 1) j ; e l ]",
"eq_num": "(5)"
}
],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m # i = X (w i ,l,w k )2E (i,\u2022,\u2022) [h (t 1) k ; e lrev ],",
"eq_num": "(6)"
}
],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "E (\u2022,\u2022,i) and E (i,\u2022,\u2022)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "represent all edges with a head word w i and a modifier word w i , respectively, and e lrev represents the embedding of label l rev , the reverse version of original label l (such as \"amod-rev\" is the reverse version of \"amod\"). The message from a child or a parent is obtained by simply concatenating its hidden state with the corresponding edge label embedding. In the second step, GRN uses standard gated operations of LSTM (Hochreiter and Schmidhuber, 1997) i are used to control information flow from the inputs and to the output",
"cite_spans": [
{
"start": 427,
"end": 461,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h (t) i : i (t) i = (W \" 1 m \" i + W # 1 m # i + b 1 ) o (t) i = (W \" 2 m \" i + W # 2 m # i + b 2 ) f (t) i = (W \" 3 m \" i + W # 3 m # i + b 3 ) u (t) i = tanh(W \" 4 m \" i + W # 4 m # i + b 4 ) c (t) i = f (t) i c (t 1) i + i (t) i u (t) i h (t) i = o (t) i tanh(c (t) i ),",
"eq_num": "(7)"
}
],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "where W \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "x , W # x , and b x (x 2 {1, 2, 3, 4}) are model parameters, and c 0i is initialized as a vector of zeros.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRN layer",
"sec_num": "4.2"
},
{
"text": "The same process repeats for T iterations. Starting from h (0) of the Bi-LSTM layer, increasingly more informed hidden states h (t) are obtained as the iteration increases, and h (T ) is used as the final representation of each word.",
"cite_spans": [
{
"start": 128,
"end": 131,
"text": "(t)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GRN layer",
"sec_num": "4.2"
},
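{
"text": "To make the message passing concrete, here is a sketch of one GRN transition (Equations 5-7) in PyTorch. It is our reconstruction under stated assumptions: the four gate matrices per direction are packed into a single linear layer, and edges are given as (modifier, head) index pairs with precomputed label embeddings.\n\n```python\nimport torch\nimport torch.nn as nn\n\nclass GRNStep(nn.Module):\n    # One state transition: gather child/parent messages, then an LSTM-style gated update.\n    def __init__(self, hdim, ldim):\n        super().__init__()\n        mdim = hdim + ldim  # a message is [hidden state; edge-label embedding]\n        self.up = nn.Linear(mdim, 4 * hdim, bias=False)  # W^up_1..4 packed together\n        self.down = nn.Linear(mdim, 4 * hdim)            # W^down_1..4 and b_1..4\n\n    def forward(self, h, c, edges, e_lab, e_lab_rev):\n        # h, c: [N, hdim]; edges[k] = (j, i) meaning w_j modifies w_i\n        m_up = h.new_zeros(h.shape[0], self.up.in_features)\n        m_down = h.new_zeros(h.shape[0], self.up.in_features)\n        for k, (j, i) in enumerate(edges):\n            m_up[i] = m_up[i] + torch.cat([h[j], e_lab[k]])          # Eq. 5: from children\n            m_down[j] = m_down[j] + torch.cat([h[i], e_lab_rev[k]])  # Eq. 6: from the parent\n        i_g, o_g, f_g, u = (self.up(m_up) + self.down(m_down)).chunk(4, dim=-1)\n        c_new = torch.sigmoid(f_g) * c + torch.sigmoid(i_g) * torch.tanh(u)  # Eq. 7\n        h_new = torch.sigmoid(o_g) * torch.tanh(c_new)\n        return h_new, c_new\n```\n\nRunning this module T times, starting from the Bi-LSTM states, yields h^(T).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRN layer",
"sec_num": "4.2"
},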
{
"text": "Given h (T ) of the GRN encoding, we calculate the representation vector of the two related entity mentions \u21e0 and \u21e3 (such as \"almorexant\" and \"orexin receptor\" in Figure 1 ) with mean pooling:",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 171,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relation prediction",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h \u21e0 = f mean (h (T ) \u21e0 1 :\u21e0 2 )",
"eq_num": "(8)"
}
],
"section": "Relation prediction",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h \u21e3 = f mean (h (T ) \u21e3 1 :\u21e3 2 )",
"eq_num": "(9)"
}
],
"section": "Relation prediction",
"sec_num": "4.3"
},
{
"text": "where \u21e0 1 : \u21e0 2 and \u21e3 1 : \u21e3 2 represent the span of \u21e0 and \u21e3, respectively, and f mean is the mean-pooling function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation prediction",
"sec_num": "4.3"
},
{
"text": "Finally, the representations of both mentions are concatenated to be the input of a logistic regression classifier:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation prediction",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y = softmax(W 5 [h \u21e0 ; h \u21e3 ] + b 5 ),",
"eq_num": "(10)"
}
],
"section": "Relation prediction",
"sec_num": "4.3"
},
{
"text": "where W 5 and b 5 are model parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation prediction",
"sec_num": "4.3"
},
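{
"text": "Equations 8-10 amount to mean pooling over each mention span followed by a linear softmax classifier; a short sketch under our own naming:\n\n```python\nimport torch\nimport torch.nn as nn\n\ndef predict_relation(hT, span1, span2, classifier):\n    # hT: [N, d] final GRN states; span1/span2: (start, end) token spans of the mentions\n    h1 = hT[span1[0]:span1[1]].mean(dim=0)  # Eq. 8\n    h2 = hT[span2[0]:span2[1]].mean(dim=0)  # Eq. 9\n    return torch.softmax(classifier(torch.cat([h1, h2])), dim=-1)  # Eq. 10\n\nclassifier = nn.Linear(2 * 400, 6)  # W_5, b_5; e.g. 5 CPR relations plus 'None'\nprobs = predict_relation(torch.randn(7, 400), (0, 2), (5, 7), classifier)\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation prediction",
"sec_num": "4.3"
},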
{
"text": "In this section, we first discuss how to generate high-quality dependency forests, before showing how to adapt GRN to consider the parser probability of each dependency edge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "5"
},
{
"text": "Given a dependency parser, generating dependency forests with high recall and low noise is a non-trivial problem. On the one hand, keeping the whole search space gives 100% recall, but introduces maximum noise. On the other hand, using the 1-best dependency tree can result in low recall given an imperfect parser. We investigate two algorithms to generate high-quality forests by judging \"quality\" from different perspectives: one focusing on arcs, and the other focusing on trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest generation",
"sec_num": "5.1"
},
{
"text": "EDGEWISE This algorithm focuses on the local relation of each individual edge and uses parser probabilities as confidence scores to assess edge qualities. Starting from the whole parser search space, it keeps all the edges with scores greater than a threshold . The time complexity is O(N 2 ), where N represents the sentence length. 1 KBESTEISNER This algorithm extends the Eisner algorithm (Eisner, 1996) with cube pruning (Huang and Chiang, 2005) for finding K highest-scored tree structures. The Eisner algorithm is a standard method for decoding 1-best trees for graph-based dependency parsing. Based on bottom-up dynamic programming, it stores the 1-best subtree for each span and takes O(N 3 ) time complexity for decoding a sentence of N words.",
"cite_spans": [
{
"start": 392,
"end": 406,
"text": "(Eisner, 1996)",
"ref_id": "BIBREF11"
},
{
"start": 425,
"end": 449,
"text": "(Huang and Chiang, 2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Forest generation",
"sec_num": "5.1"
},
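{
"text": "A sketch of EDGEWISE, assuming the parser exposes a marginal probability for every candidate (head, modifier, label) arc (the data layout is our assumption):\n\n```python\ndef edgewise_forest(arc_probs, threshold):\n    # arc_probs: dict mapping (head, modifier, label) -> parser probability.\n    # One pass over the O(N^2) candidate arcs (times the constant number of labels).\n    return {arc: p for arc, p in arc_probs.items() if p > threshold}\n\n# toy example: two candidate heads for token 2; only the confident arc survives\nforest = edgewise_forest({(1, 2, 'comp'): 0.7, (3, 2, 'nsubj'): 0.1}, threshold=0.2)\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest generation",
"sec_num": "5.1"
},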
{
"text": "KBESTEISNER keeps a sorted list of K-best hypotheses for each span. Cube pruning (Huang and Chiang, 2005 ) is adopted to generate the Kbest list for each larger span from the K-best lists of its sub-spans. After the bottom-up decoding, we merge the final K-bests by combining identical dependency edges to make the forest. As a result, KBESTEISNER takes O(N 3 K log K) time. Discussions EDGEWISE is much simpler and faster than KBESTEISNER. Compared with the O(N 3 K log K) time complexity of KBESTEIS-NER, EDGEWISE only takes O(N 2 ) running time, and each step (storing an edge) runs faster than KBESTEISNER (making a new hypothesis by combining two from sub-spans). Besides, the forests of EDGEWISE can be denser and provide richer information than those from KBESTEIS-NER. This is because KBESTEISNER only merges K trees, where many edges are shared among them. Also, K cannot be set to a large number (such as 100), because that will cause a dramatic increase of running time.",
"cite_spans": [
{
"start": 81,
"end": 104,
"text": "(Huang and Chiang, 2005",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Forest generation",
"sec_num": "5.1"
},
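{
"text": "The final merge step of KBESTEISNER is a union over the arcs of the K trees; below is a sketch of that step only (the K-best Eisner decoding itself is omitted, and weighting each merged edge by the normalized scores of the trees containing it is our assumption):\n\n```python\ndef merge_kbest(trees):\n    # trees: list of (score, set_of_edges) pairs from K-best Eisner decoding\n    total = sum(score for score, _ in trees)\n    forest = {}\n    for score, edges in trees:\n        for edge in edges:  # identical edges across trees are combined\n            forest[edge] = forest.get(edge, 0.0) + score / total\n    return forest\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forest generation",
"sec_num": "5.1"
},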
{
"text": "Compared with KBESTEISNER, EDGEWISE suffers from two potential problems. First, EDGE-WISE does not guarantee to produce a 1-best tree in a generated forest, as it makes decisions by considering the individual edges. Second, it does not guarantee to generate spanning forests, which can happen when the threshold is high. On the other hand, no previous work has shown that the information from the whole tree is crucial for relation extraction. In fact, many previous studies use only the dependency path between the target entity mentions (Bunescu and Mooney, 2005; Airola et al., 2008; Chowdhury et al., 2011; Gormley et al., 2015; Mehryary et al., 2016) . We study the effectiveness of both algorithms in our experiments.",
"cite_spans": [
{
"start": 539,
"end": 565,
"text": "(Bunescu and Mooney, 2005;",
"ref_id": "BIBREF4"
},
{
"start": 566,
"end": 586,
"text": "Airola et al., 2008;",
"ref_id": "BIBREF1"
},
{
"start": 587,
"end": 610,
"text": "Chowdhury et al., 2011;",
"ref_id": "BIBREF7"
},
{
"start": 611,
"end": 632,
"text": "Gormley et al., 2015;",
"ref_id": "BIBREF13"
},
{
"start": 633,
"end": 655,
"text": "Mehryary et al., 2016)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Forest generation",
"sec_num": "5.1"
},
{
"text": "As illustrated by Figure 1(b) , our dependency forests are directed graphs that can be consumed by GRN without any structural changes. For fair comparison, we use the same model as the baseline to encode sentences and forests. Thus our model uses the same number of parameters as our baseline taking 1-best trees.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 29,
"text": "Figure 1(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "GRN encoding with parser confidence",
"sec_num": "5.2"
},
{
"text": "Since forests contain more than one tree, it is intuitive to consider parser confidence scores for potentially better feature extraction. To this end, we slightly adjust the GRN encoding process without introducing additional parameters. In particular, we enhance the original message sum function (Equations 5 and 6) by applying the edge probabilities in calculating weighted message sums:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRN encoding with parser confidence",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m \" i = X \u270f2E (\u2022,\u2022,i) p \u270f [h (t 1) j ; e l ]",
"eq_num": "(11)"
}
],
"section": "GRN encoding with parser confidence",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m # i = X \u270f2E (i,\u2022,\u2022) p \u270f [h (t 1) k ; e lrev ],",
"eq_num": "(12)"
}
],
"section": "GRN encoding with parser confidence",
"sec_num": "5.2"
},
{
"text": "where \u270f (instead of a triple) is used to represent an edge for simplicity, and p \u270f is the parser probability for edge \u270f. The edge probabilities are not adjusted during end-task training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRN encoding with parser confidence",
"sec_num": "5.2"
},
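{
"text": "Relative to the GRN sketch in Section 4.2, the only change needed for Equations 11 and 12 is a scalar weight on each message; assuming each edge k carries its fixed parser probability p[k]:\n\n```python\n# inside GRNStep.forward, replacing the unweighted sums of Eqs. 5-6:\nfor k, (j, i) in enumerate(edges):\n    m_up[i] = m_up[i] + p[k] * torch.cat([h[j], e_lab[k]])          # Eq. 11\n    m_down[j] = m_down[j] + p[k] * torch.cat([h[i], e_lab_rev[k]])  # Eq. 12\n# p is treated as a constant, so end-task training never adjusts the parser scores\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GRN encoding with parser confidence",
"sec_num": "5.2"
},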
{
"text": "Relation loss Given a set of training instances, each containing a sentence s with two target mentions \u21e0 and \u21e3, and a dependency structure D (tree or forest), we train our models with a crossentropy loss between the gold-standard relations r and model distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "l R = log p(r|s, \u21e0, \u21e3, D; \u2713),",
"eq_num": "(13)"
}
],
"section": "Training",
"sec_num": "6"
},
{
"text": "where \u2713 represents the model parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "6"
},
{
"text": "Using additional NER loss For training on BioCreative VI CPR, we follow previous work Verga et al., 2018) to take NER loss as additional supervision, though the mention boundaries are known during testing.",
"cite_spans": [
{
"start": 86,
"end": 105,
"text": "Verga et al., 2018)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "l NER = 1 N N X n=1 log p(t n |s, D; \u2713),",
"eq_num": "(14)"
}
],
"section": "Training",
"sec_num": "6"
},
{
"text": "where t n is the gold NE tag of w n with the \"BIO\" scheme. Both losses are conditionally independent given the deep features produced by our model, and the final loss for BioCreative VI CPR training is l = l R + l NER .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "6"
},
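{
"text": "A sketch of the combined CPR objective (Equations 13 and 14), assuming one relation logit vector per instance and per-token NER logits; cross_entropy averages over tokens, matching the 1/N factor:\n\n```python\nimport torch\nimport torch.nn.functional as F\n\ndef cpr_loss(relation_logits, gold_relation, ner_logits, gold_tags):\n    # relation_logits: [1, M+1]; gold_relation: [1]\n    # ner_logits: [N, num_bio_tags]; gold_tags: [N]\n    l_r = F.cross_entropy(relation_logits, gold_relation)  # Eq. 13\n    l_ner = F.cross_entropy(ner_logits, gold_tags)         # Eq. 14\n    return l_r + l_ner                                     # l = l_R + l_NER\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "6"
},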
{
"text": "We conduct experiments on two medical benchmarks to test the usefulness of dependency forest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "BioCreative VI CPR (Krallinger et al., 2017) This task 2 focuses on the relations between chemical compounds (such as drugs) and proteins (such as genes). The full corpus contains 1020, 612 and 800 extracted PubMed 3 abstracts for training, development and testing, respectively. All abstracts are manually annotated with the boundaries of entity mentions and the relations. The data provides three types of NEs: \"CHEMICAL\", \"GENE-Y\" and \"GENE-N\", and the relation set R contains 5 regular relations (\"CPR:3\", \"CPR:4\", \"CPR:5\", \"CPR:6\" and \"CPR:9\") and the \"None\" relation.",
"cite_spans": [
{
"start": 19,
"end": 44,
"text": "(Krallinger et al., 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "7.1"
},
{
"text": "For efficient generation of dependency structures, we segment each abstract into sentences, keeping only the sentences that contain at least a chemical mention and a protein mention. For any sentence containing several chemical mentions or protein mentions, we keep multiple copies of it with each copy having different target mention pairs. As a result, we only consider the relations of mentions in the same sentence, assigning all cross-sentence chemical-protein pairs as \"None\" relation. By doing this, we effectively sacrifice cross-sentence relations, which has a negative effect on our systems; but this is necessary for efficient generation of dependency structures since directly parsing a short paragraph is slow and erroneous. 4 In general, we obtain 16,107 training, 10,030 development and 14,269 testing instances, in which around 23% have regular relations. The highest recalls for relations on our development and test sets are 92.25 and 92.54, respectively, because of the exclusion of cross-sentence relations in preprocessing. We report F1 scores of the full test set for a fair comparison, using all gold regular relations to calculate recalls.",
"cite_spans": [
{
"start": 738,
"end": 739,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "7.1"
},
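{
"text": "A sketch of this instance-creation step: one copy of the sentence per chemical-protein mention pair, with unannotated pairs mapped to the None relation (function and field names are ours):\n\n```python\nfrom itertools import product\n\ndef make_instances(words, chemical_spans, protein_spans, gold_relations):\n    # gold_relations: dict mapping (chemical_span, protein_span) -> relation label\n    for chem, prot in product(chemical_spans, protein_spans):\n        yield {'words': words, 'span1': chem, 'span2': prot,\n               'relation': gold_relations.get((chem, prot), 'None')}\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "7.1"
},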
{
"text": "Phenotype-Gene relation (PGR) This dataset concerns the relations between human phenotypes (such as diseases) with human genes, where the relation set is a binary class on whether a phenotype is related to a gene. It has 18,451 silver training instances and 220 highquality test instances, with each containing mention boundary annotations. We separate the first 15% training instances as our development set. Unlike BioCreative VI CPR, almost every relation of PGR is within a single sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "7.1"
},
{
"text": "We compare the following models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "7.2"
},
{
"text": "\u2022 TEXTONLY: It does not take dependency structures and directly uses the Bi-LSTM outputs (h (0) in Eq. 3) to make predictions.",
"cite_spans": [
{
"start": 92,
"end": 95,
"text": "(0)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "7.2"
},
{
"text": "\u2022 DEPTREE: Our baseline using 1-best dependency trees, as shown in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "7.2"
},
{
"text": "\u2022 EDGEWISEPS and EDGEWISE: Our models using the forests generated by our EDGEWISE algorithm with or without parser scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "7.2"
},
{
"text": "\u2022 KBESTEISNERPS and KBESTEISNER: Our model using the forests generated by our KBESTEISNER algorithm with or without parser scores, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "7.2"
},
{
"text": "We take a state-of-the-art deep biaffine parser (Dozat and Manning, 2017) , trained on the Penn Treebank (PTB) (Marcus and Marcinkiewicz, 1993) converted to Universal Dependency, to obtain 1-best trees and full search spaces for generating forests. Using standard PTB data split (02-21 for training, 22 for development and 23 for testing), it gives UAS and LAS scores of 95.7 and 94.6, respectively. For the other hyper-parameters, word embeddings are initialized with the 200-dimensional BioASQ vectors 5 , pretrained on 10M abstracts of biomedical articles, and are fixed during training. The dimension of hidden vectors in Bi-LSTM is set to 200, and the number of message passing steps T is set to 2 based on . We use Adam (Kingma and Ba, 2014), with a learning rate of 0.001, as the optimizer. The batch size, coefficient for l2 normalization loss and dropout rate are 20, 10 8 and 0.1, respectively. 7.4 Analyses of generated forests Table 1 demonstrates several characteristics of the generated forests of both the EDGEWISE and KBESTEISNER algorithms in Section 5.1, where \"#Edge/#Sent\" measures the forest density with the number of edges divided by the sentence length, \"LAS\" represents the oracle LAS score on 100 biomedical sentences with manually annotated dependency trees, and \"Conn. Ratio (%)\" shows the percentage of forests where both related entity mentions are connected. Regarding the forest density, forests produced by EDGEWISE generally contain more edges than those from KBESTEISNER. Due to the combinatorial property of forests, EDGEWISE can give much more candidate trees (and sub-trees) for the whole sentence (and each sub-span). This coincides with the fact that the forests generated by EDGEWISE have higher oracle scores than these generated by KBESTEISNER.",
"cite_spans": [
{
"start": 48,
"end": 73,
"text": "(Dozat and Manning, 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 939,
"end": 946,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Settings",
"sec_num": "7.3"
},
{
"text": "For connectivity, KBESTEISNER guarantees to generate spanning forests. On the other hand, the connectivity ratio for the forests produced by EDGEWISE drops when increasing the threshold . We can have more than 94% being connected with \uf8ff 0.2. Later we will show that good endtask performance can still be achieved with the 94% connectivity ratio. This indicates that losing connectivity for a small potion of the data may not hurt the overall performance. Figure 3 shows the development experiments for our forest generation algorithms, where both EDGEWISE and KBESTEISNER give consistent improvements over DEPTREE and TEXTONLY. Generally, EDGEWISE gives more improvements than KBESTEISNER. The main reason may be that EDGEWISE generates denser forests, providing richer features.",
"cite_spans": [],
"ref_spans": [
{
"start": 455,
"end": 463,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Settings",
"sec_num": "7.3"
},
{
"text": "On the other hand, KBESTEISNER shows a marginal improvement by increasing K from 5 to 10. This indicates that only merging 10-best trees may be far from sufficient. However, using a much larger K (such as 100) is not practical due to dramatically increased computation time. In particular, the running time of KBESTEISNER with K = 10 is already much longer than that of EDGEWISE. As a result, EDGEWISE better serves our goal compared to KBESTEISNER. This may sound surprising, as EDGEWISE does not consider tree-level scores. It suggests that relation extraction may not require full dependency tree features. This coincides with previous relation extraction research (Bunescu and Mooney, 2005; Airola et al., 2008) , which utilizes the shortest path connecting the two candidate entities in the dependency tree.",
"cite_spans": [
{
"start": 668,
"end": 694,
"text": "(Bunescu and Mooney, 2005;",
"ref_id": "BIBREF4"
},
{
"start": 695,
"end": 715,
"text": "Airola et al., 2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Development results",
"sec_num": "7.5"
},
{
"text": "Leveraging parser confidence scores also consistently helps both methods. It is especially effective for EDGEWISE when = 0.05. This is likely because the parser confidence scores are useful for distinguishing some erroneous dependency arcs, when noise is large (e.g. when is too small). Following the development results, we Model F1 score GRU+Attn \u2020 49.5 Bran (Verga et al., 2018) (Efron and Tibshirani, 1994) .",
"cite_spans": [
{
"start": 361,
"end": 381,
"text": "(Verga et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 382,
"end": 410,
"text": "(Efron and Tibshirani, 1994)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Development results",
"sec_num": "7.5"
},
{
"text": "directly report the performances of EDGEWISEPS and KBESTEISNERPS, setting and K to 0.2 and 10, respectively, in our remaining experiments. Table 2 shows the main comparison results on the BioCreative CPR testset, with comparisons to the previous state-of-the-art and our baselines.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Development results",
"sec_num": "7.5"
},
{
"text": "GRU+Attn ) stacks a self-attention layer on top of GRU (Cho et al., 2014) and embedding layers; Bran (Verga et al., 2018 ) adopts a biaffine self-attention model to simultaneously extract the relations of all mention pairs. Both methods use only textual knowledge. TEXTONLY gives a performance comparable with Bran. With 1-best dependency trees, our DEPTREE baseline gives better performances than the previous state of the art. This confirms the usefulness of dependency structures and the effectiveness of GRN on encoding these structures. Using dependency forests and parser confidence scores, both KBESTEISNERPS and EDGE-WISEPS obtain significantly higher numbers than DEPTREE. Consistent with the development experiments, EDGEWISEPS has a higher testset performance than KBESTEISNERPS.",
"cite_spans": [
{
"start": 55,
"end": 73,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 101,
"end": 120,
"text": "(Verga et al., 2018",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Main results on BioCreative VI CPR",
"sec_num": "7.6"
},
{
"text": "Effectiveness on parsing accuracy We have shown in Sections 7.5 and 7.6 that a dependency parser trained using a domain-general treebank can produce high-quality dependency forests in a target domain (biomedical) for helping relation extraction. This is based on the assumption of there being a high-quality treebank in a descent scale, which may not be true for low-resource languages. We simulate this low-resource effect by training our parser in much smaller treebanks of 1K or 5K dependency trees, respectively. The LAS scores for the resulting parsers on our 100 manually annotated biomedical dependency trees are 79.3 and 84.2, respectively, while the LAS score for the parser trained with the full treebank is 86.4, as shown in Table 1 . Figure 4 shows the results on the Biocreative CPR development set, where the performance of TEXTONLY is 51.6. DEPTREE fails to outperform TEXTONLY when only 1K or 5K dependency trees are available for training our parser. This is due to the low parsing recall and subsequent noise caused by the weak parsers. It confirms the previous conclusion that dependency structures are highly influential to the performance of relation extraction. Both EDGEWISEPS and KBESTEIS-NERPS are still more effective than DEPTREE. In particular, KBESTEISNERPS significantly improves TEXTONLY with 5K dependency trees, and EDGEWISEPS is helpful even with 1K dependency trees.",
"cite_spans": [],
"ref_spans": [
{
"start": 736,
"end": 743,
"text": "Table 1",
"ref_id": null
},
{
"start": 746,
"end": 754,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "7.7"
},
{
"text": "KBESTEISNER shows relatively smaller gaps than EDGEWISE when only a limited number of dependency trees are available. This is probably because considering whole-tree quality helps to better eliminate noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "7.7"
},
{
"text": "Case study Figure 5 illustrates two major types of errors in BioCreative CPR, which are caused by inaccurate 1-best dependency trees. As shown in Figure 5 (a), the baseline system mistakenly predicts a \"None\" relation for that instance. This is mainly because \"STAT3\" is incorrectly linked to the main verb \"inhibited\" with a \"punct\" relation, but it should be linked to \"AKT\". In contrast, our forest contains the correct relation and with a probability of 0.18. This is possibly because \"AKT and STAT3\" fits the common pattern of \"A and B\" that conjunct two nouns. rors that cause end-task mistakes. In this example, the multi-token mention \"calcium modulated cyclases\" is incorrectly segmented in the 1-best dependency tree, where \"modulated\" is used as the main verb of the whole sentence, leaving \"cyclases\" and \"calcium\" as the object and the modifier of the subject, respectively. However, this mention ought to be a noun phrase with \"cyclases\" being the head. Our forest helps in this case by providing a more reasonable structure (shown as the yellow dashed arcs), where both \"calcium\" and \"modulated\" modify \"cyclases\". This is likely because \"modulated\" can be interpreted as an adjective in addition to being a verb. It shows the advantage of keeping multiple candidate syntactic arcs. Table 3 shows the comparison with previous work on the PGR testset, where our models are significantly better than the existing models. This is likely because the previous models do not utilize all the information from inputs: BO-LSTM only takes the words (without arc labels) along the shortest dependency path between the target mentions; the pretrained weights of BioBERT are kept constant during training for relation extraction. With 1-best trees, DEPTREE is 2.9 points bet-Model F1 score C-GCN \u2020 84.8 C-AGGCN (Guo et al., 2019) ter than TEXTONLY, confirming the usefulness of dependency structures. Leveraging dependency forests, both KBESTEISNERPS and EDGE-WISEPS significantly outperform DEPTREE with p-values of 0.003 and 0.024, respectively. This further confirms the usefulness of dependency forests for medical relation extraction.",
"cite_spans": [
{
"start": 1813,
"end": 1831,
"text": "(Guo et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 11,
"end": 19,
"text": "Figure 5",
"ref_id": "FIGREF5"
},
{
"start": 146,
"end": 154,
"text": "Figure 5",
"ref_id": "FIGREF5"
},
{
"start": 1298,
"end": 1305,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "7.7"
},
{
"text": "7.9 Main results on SemEval-2010 task 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main results on PGR",
"sec_num": "7.8"
},
{
"text": "In addition to the biomedical domain, leveraging dependency forests applies to other domains as well. As shown in Table 4 , we conduct a preliminary study on SemEval-2010 task 8 (Hendrickx et al., 2009) , a widely used benchmark for newsdomain relation extraction. It is a public dataset, containing 10,717 instances (8000 for training and development, 2717 for testing) with 19 relations: 9 directed relations and a special \"Other\" class. Both C-GCN and C-AGGCN take a similar network as ours by stacking a graph neural network for encoding trees on top of a Bi-LSTM layer for encoding sentences. DEPTREE achieves similar performance as C-GCN and is slightly worse than C-AGGCN, with one potential reason being that C-AGGCN takes more parameters. Using forests, both KBESTEIS-NERPS and EDGEWISEPS outperform DEPTREE with the same number of parameters, and they show comparable and slightly better performances than C-AGGCN. Again, EDGEWISEPS is better than KBESTEISNERPS, showing that the former is a better way for generating forests.",
"cite_spans": [
{
"start": 178,
"end": 202,
"text": "(Hendrickx et al., 2009)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 114,
"end": 121,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Main results on PGR",
"sec_num": "7.8"
},
{
"text": "We have proposed two algorithms for generating high-quality dependency forests for relation extraction, and studied a graph recurrent network for effectively distinguishing useful features from noise in parsing forests. Experiments on two biomedical relation extraction benchmarks show the superiority of forests versus tree structures, without introducing any additional model parameters. Our deep analyses indicate that the main advantage comes from alleviating out-of-domain parsing errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "More accurately, it is O(N 2 L) and L s a constant factor, denoting the number of distinct dependency labels. We omit it for simplicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://biocreative.bioinformatics.udel.edu/tasks/biocreativevi/track-5/ 3 https://www.ncbi.nlm.nih.gov/pubmed/ 4 Peng et al. (2017) describe a solution for cross-sentence cases, which joins different dependency structures by connecting their roots. We leave it for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://bioasq.lip6.fr/tools/BioASQword2vec/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgments Research supported by NSF award IIS-1813823.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic extraction of semantic relations between medical entities: a rule based approach",
"authors": [
{
"first": "Asma",
"middle": [],
"last": "Ben Abacha",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of biomedical semantics",
"volume": "2",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asma Ben Abacha and Pierre Zweigenbaum. 2011. Automatic extraction of semantic relations between medical entities: a rule based approach. Journal of biomedical semantics, 2(5).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "All-paths graph kernel for protein-protein interaction extraction with evaluation of cross-corpus learning",
"authors": [
{
"first": "Antti",
"middle": [],
"last": "Airola",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Jari",
"middle": [],
"last": "Bj\u00f6rne",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Pahikkala",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": ""
}
],
"year": 2008,
"venue": "BMC bioinformatics",
"volume": "9",
"issue": "11",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antti Airola, Sampo Pyysalo, Jari Bj\u00f6rne, Tapio Pahikkala, Filip Ginter, and Tapio Salakoski. 2008. All-paths graph kernel for protein-protein interac- tion extraction with evaluation of cross-corpus learn- ing. BMC bioinformatics, 9(11):S2.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Graph convolutional encoders for syntax-aware neural machine translation",
"authors": [
{
"first": "Joost",
"middle": [],
"last": "Bastings",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Wilker",
"middle": [],
"last": "Aziz",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Marcheggiani",
"suffix": ""
},
{
"first": "Khalil",
"middle": [],
"last": "Simaan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph convolutional encoders for syntax-aware neural ma- chine translation. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Graph-to-sequence learning using gated graph neural networks",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A shortest path dependency kernel for relation extraction",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on human language technology and empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Bunescu and Raymond Mooney. 2005. A shortest path dependency kernel for relation extrac- tion. In Proceedings of the conference on human language technology and empirical methods in nat- ural language processing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A word clustering approach to domain adaptation: Effective parsing of biomedical texts",
"authors": [
{
"first": "Marie",
"middle": [],
"last": "Candito",
"suffix": ""
},
{
"first": "Enrique",
"middle": [
"Henestroza"
],
"last": "Anguiano",
"suffix": ""
},
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 12th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marie Candito, Enrique Henestroza Anguiano, and Djam\u00e9 Seddah. 2011. A word clustering approach to domain adaptation: Effective parsing of biomed- ical texts. In Proceedings of the 12th International Conference on Parsing Technologies.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A study on dependency tree kernels for automatic extraction of protein-protein interaction",
"authors": [
{
"first": "Faisal",
"middle": ["Md."],
"last": "Chowdhury",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Lavelli",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of BioNLP 2011 Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faisal Md. Chowdhury, Alberto Lavelli, and Alessan- dro Moschitti. 2011. A study on dependency tree kernels for automatic extraction of protein-protein interaction. In Proceedings of BioNLP 2011 Work- shop.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Dependency tree kernels for relation extraction",
"authors": [
{
"first": "Aron",
"middle": [],
"last": "Culotta",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aron Culotta and Jeffrey Sorensen. 2004. Dependency tree kernels for relation extraction. In Proceedings of the 42nd Meeting of the Association for Compu- tational Linguistics (ACL'04), Main Volume.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep biaffine attention for neural dependency parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher D Manning. 2017. Deep biaffine attention for neural dependency pars- ing. In Proceedings of International Conference on Learning Representations.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An introduction to the bootstrap",
"authors": [
{
"first": "Bradley",
"middle": [],
"last": "Efron",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"J"
],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bradley Efron and Robert J Tibshirani. 1994. An intro- duction to the bootstrap. CRC press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Three new probabilistic models for dependency parsing: An exploration",
"authors": [
{
"first": "Jason",
"middle": [
"M"
],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Pro- ceedings of the 16th conference on Computational linguistics-Volume 1.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Genies: a natural-language processing system for the extraction of molecular pathways from journal articles",
"authors": [
{
"first": "Carol",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Pauline",
"middle": [],
"last": "Kra",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Krauthammer",
"suffix": ""
},
{
"first": "Andrey",
"middle": [],
"last": "Rzhetsky",
"suffix": ""
}
],
"year": 2001,
"venue": "Bioinformatics",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carol Friedman, Pauline Kra, Hong Yu, Michael Krauthammer, and Andrey Rzhetsky. 2001. Genies: a natural-language processing system for the extrac- tion of molecular pathways from journal articles. Bioinformatics, 17(1).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improved relation extraction with feature-rich compositional embedding models",
"authors": [
{
"first": "Matthew",
"middle": [
"R"
],
"last": "Gormley",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1774--1784",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew R Gormley, Mo Yu, and Mark Dredze. 2015. Improved relation extraction with feature-rich com- positional embedding models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1774-1784.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Attention guided graph convolutional networks for relation extraction",
"authors": [
{
"first": "Zhijiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "241--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Atten- tion guided graph convolutional networks for rela- tion extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 241-251.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals",
"authors": [
{
"first": "Iris",
"middle": [],
"last": "Hendrickx",
"suffix": ""
},
{
"first": "Su",
"middle": [
"Nam"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Diarmuid\u00f3",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
},
{
"first": "Lorenza",
"middle": [],
"last": "Romano",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Se-mEval",
"volume": "",
"issue": "",
"pages": "94--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid\u00d3 S\u00e9aghdha, Sebastian Pad\u00f3, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. Semeval-2010 task 8: Multi-way classification of semantic relations be- tween pairs of nominals. In Proceedings of Se- mEval, pages 94-99.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Overview of biocreative: critical assessment of information extraction for biology",
"authors": [
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Yeh",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Blaschke",
"suffix": ""
},
{
"first": "Alfonso",
"middle": [],
"last": "Valencia",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynette Hirschman, Alexander Yeh, Christian Blaschke, and Alfonso Valencia. 2005. Overview of biocreative: critical assessment of information extraction for biology.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Better k-best parsing",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth International Workshop on Parsing Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang and David Chiang. 2005. Better k-best parsing. In Proceedings of the Ninth International Workshop on Parsing Technology.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Constituency parsing with a self-attentive encoder",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Kitaev",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikita Kitaev and Dan Klein. 2018. Constituency pars- ing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Overview of the biocreative vi chemicalprotein interaction track",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
},
{
"first": "Obdulia",
"middle": [],
"last": "Rabal",
"suffix": ""
},
{
"first": "Saber",
"middle": [
"A"
],
"last": "Akhondi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the VI BioCreative challenge evaluation workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Krallinger, Obdulia Rabal, Saber A Akhondi, et al. 2017. Overview of the biocreative vi chemical- protein interaction track. In Proceedings of the VI BioCreative challenge evaluation workshop.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "BO-LSTM: classifying relations via long short-term memory networks along biomedical ontologies",
"authors": [
{
"first": "Andre",
"middle": [],
"last": "Lamurias",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Sousa",
"suffix": ""
},
{
"first": "Luka",
"middle": [
"A"
],
"last": "Clarke",
"suffix": ""
},
{
"first": "Francisco",
"middle": [
"M"
],
"last": "Couto",
"suffix": ""
}
],
"year": 2019,
"venue": "BMC bioinformatics",
"volume": "20",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andre Lamurias, Diana Sousa, Luka A Clarke, and Francisco M Couto. 2019. BO-LSTM: classify- ing relations via long short-term memory networks along biomedical ontologies. BMC bioinformatics, 20(1):10.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The forest convolutional network: Compositional distributional semantics with a neural chart and without binarization",
"authors": [
{
"first": "Phong",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phong Le and Willem Zuidema. 2015. The forest con- volutional network: Compositional distributional se- mantics with a neural chart and without binarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Parsing biomedical literature",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Lease",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2005,
"venue": "International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Lease and Eugene Charniak. 2005. Parsing biomedical literature. In International Conference on Natural Language Processing.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Biobert: pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.08746"
]
},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: pre-trained biomed- ical language representation model for biomedical text mining. arXiv preprint arXiv:1901.08746.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "In-order transition-based constituent parsing",
"authors": [
{
"first": "Jiangming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiangming Liu and Yue Zhang. 2017. In-order transition-based constituent parsing. Transactions of the Association for Computational Linguistics, 5.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Attention-based neural networks for chemical protein relation extraction",
"authors": [
{
"first": "Sijia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Feichen",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Yanshan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Majid",
"middle": [],
"last": "Rastegar-Mojarad",
"suffix": ""
},
{
"first": "Ravikumar",
"middle": [],
"last": "Komandur Elayavilli",
"suffix": ""
},
{
"first": "Vipin",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Hongfang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the BioCreative VI Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sijia Liu, Feichen Shen, Yanshan Wang, Majid Rastegar-Mojarad, Ravikumar Komandur Elayav- illi, Vipin Chaudhary, and Hongfang Liu. 2017. Attention-based neural networks for chemical pro- tein relation extraction. In Proceedings of the BioCreative VI Workshop.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A dependency-based neural network for relation classification",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Houfeng",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and Houfeng WANG. 2015. A dependency-based neural network for relation classification. In Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing (Volume 2: Short Papers).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A probabilistic forest-to-string model for language generation from typed lambda calculus expressions",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Lu and Hwee Tou Ng. 2011. A probabilistic forest-to-string model for language generation from typed lambda calculus expressions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Forestbased neural machine translation",
"authors": [
{
"first": "Chunpeng",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Akihiro",
"middle": [],
"last": "Tamura",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chunpeng Ma, Akihiro Tamura, Masao Utiyama, Tiejun Zhao, and Eiichiro Sumita. 2018. Forest- based neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Encoding sentences with graph convolutional networks for semantic role labeling",
"authors": [
{
"first": "Diego",
"middle": [],
"last": "Marcheggiani",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for se- mantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Building a large annotated corpus of English: The Penn treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P Marcus and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of En- glish: The Penn treebank. Computational Linguis- tics, 19(2).",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Selftraining for biomedical parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky and Eugene Charniak. 2008. Self- training for biomedical parsing. In Proceedings of the 46th Annual Meeting of the Association for Com- putational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Deep learning with minimal training data: TurkuNLP entry in the BioNLP shared task 2016",
"authors": [
{
"first": "Farrokh",
"middle": [],
"last": "Mehryary",
"suffix": ""
},
{
"first": "Jari",
"middle": [],
"last": "Bj\u00f6rne",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Tapio",
"middle": [],
"last": "Salakoski",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 4th BioNLP Shared Task Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Farrokh Mehryary, Jari Bj\u00f6rne, Sampo Pyysalo, Tapio Salakoski, and Filip Ginter. 2016. Deep learning with minimal training data: TurkuNLP entry in the BioNLP shared task 2016. In Proceedings of the 4th BioNLP Shared Task Workshop.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Forestbased translation",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Mi",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitao Mi, Liang Huang, and Qun Liu. 2008. Forest- based translation. In Proceedings of ACL-08: HLT.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "End-to-end relation extraction using lstms on sequences and tree structures",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end re- lation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers).",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Cross-sentence n-ary relation extraction with graph LSTMs. Transactions of the Association for",
"authors": [
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "101--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph LSTMs. Trans- actions of the Association for Computational Lin- guistics, 5:101-115.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Distant supervision for relation extraction beyond the sentence boundary",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the ACL (EACL-17)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Quirk and Hoifung Poon. 2017. Distant super- vision for relation extraction beyond the sentence boundary. In Proceedings of the 15th Conference of the European Chapter of the ACL (EACL-17).",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Evaluating the effects of treebank size in a practical application for parsing",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Rune",
"middle": [],
"last": "Saetre",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2008,
"venue": "Software Engineering, Testing, and Quality Assurance for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Sagae, Yusuke Miyao, Rune Saetre, and Jun'ichi Tsujii. 2008. Evaluating the effects of treebank size in a practical application for parsing. In Soft- ware Engineering, Testing, and Quality Assurance for Natural Language Processing.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Shallow information extraction from medical forum data",
"authors": [
{
"first": "Parikshit",
"middle": [],
"last": "Sondhi",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parikshit Sondhi, Manish Gupta, ChengXiang Zhai, and Julia Hockenmaier. 2010. Shallow information extraction from medical forum data. In Proceedings of the 23rd International Conference on Computa- tional Linguistics: Posters.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Semantic neural machine translation using amr",
"authors": [
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "19--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural ma- chine translation using amr. Transactions of the As- sociation for Computational Linguistics, 7:19-31.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A graph-to-sequence model for amrto-text generation",
"authors": [
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1616--1626",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018a. A graph-to-sequence model for amr- to-text generation. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1616- 1626.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "N-ary relation extraction using graph-state lstm",
"authors": [
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2226--2235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018b. N-ary relation extraction using graph-state lstm. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 2226-2235.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A silver standard corpus of human phenotypegene relations",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Sousa",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Lam\u00farias",
"suffix": ""
},
{
"first": "Francisco M",
"middle": [],
"last": "Couto",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Sousa, Andr\u00e9 Lam\u00farias, and Francisco M Couto. 2019. A silver standard corpus of human phenotype- gene relations. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Dependency forest for statistical machine translation",
"authors": [
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Young-Sook",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shouxun",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaopeng Tu, Yang Liu, Young-Sook Hwang, Qun Liu, and Shouxun Lin. 2010. Dependency forest for statistical machine translation. In Proceedings of the 23rd International Conference on Computa- tional Linguistics (Coling 2010).",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Simultaneously self-attending to all mentions for full-abstract biological relation extraction",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": ""
},
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Verga, Emma Strubell, and Andrew McCallum. 2018. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers).",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Medex: a medication information extraction system for clinical narratives",
"authors": [
{
"first": "Hua",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Shane",
"middle": [
"P"
],
"last": "Stenner",
"suffix": ""
},
{
"first": "Son",
"middle": [],
"last": "Doan",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"B"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Lemuel",
"middle": [
"R"
],
"last": "Waitman",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"C"
],
"last": "Denny",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of the American Medical Informatics Association",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hua Xu, Shane P Stenner, Son Doan, Kevin B John- son, Lemuel R Waitman, and Joshua C Denny. 2010. Medex: a medication information extraction sys- tem for clinical narratives. Journal of the American Medical Informatics Association, 17(1).",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Semantic relation classification via convolutional neural networks with simple negative sampling",
"authors": [
{
"first": "Kun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Songfang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2015a. Semantic relation classifica- tion via convolutional neural networks with simple negative sampling. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Exploiting rich syntactic information for semantic parsing with graph-to-sequence model",
"authors": [
{
"first": "Kun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Lingfei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Vadim",
"middle": [],
"last": "Sheinin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Li- wei Chen, and Vadim Sheinin. 2018. Exploiting rich syntactic information for semantic parsing with graph-to-sequence model. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Classifying relations via long short term memory networks along shortest dependency paths",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yunchuan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015b. Classifying relations via long short term memory networks along shortest depen- dency paths. In Proceedings of the 2015 Confer- ence on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Graph-based neural sentence ordering",
"authors": [
{
"first": "Yongjing",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Jiali",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Chulun",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiebo",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yongjing Yin, Linfeng Song, Jinsong Su, Jiali Zeng, Chulun Zhou, and Jiebo Luo. 2019. Graph-based neural sentence ordering. In Proceedings of IJCAI.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Extracting synonymous gene and protein terms from biological literature",
"authors": [
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Agichtein",
"suffix": ""
}
],
"year": 2003,
"venue": "Bioinformatics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong Yu and Eugene Agichtein. 2003. Extracting syn- onymous gene and protein terms from biological lit- erature. Bioinformatics, 19.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Incorporating syntactic uncertainty in neural machine translation with a forest-to-sequence model",
"authors": [
{
"first": "Poorya",
"middle": [],
"last": "Zaremoodi",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1421--1429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Poorya Zaremoodi and Gholamreza Haffari. 2018. In- corporating syntactic uncertainty in neural machine translation with a forest-to-sequence model. In Pro- ceedings of the 27th International Conference on Computational Linguistics, pages 1421-1429.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Sentence-state lstm for text representation",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "317--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Zhang, Qi Liu, and Linfeng Song. 2018a. Sentence-state lstm for text representation. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 317-327.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Graph convolution over pruned dependency trees improves relation extraction",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018b. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Yue Zhang is the corresponding author ... observed ... interaction of orexin receptor antagonist",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "Framework of our baseline and model.",
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"num": null,
"text": "Development results (F1 score) for our forest generation methods.",
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"num": null,
"text": "DEV results of BioCreative CPR regarding the dependency parsers trained on different number (1K, 5K or Full) of dependency trees.",
"type_str": "figure",
"uris": null
},
"FIGREF5": {
"num": null,
"text": "Figure 5(b) shows another type of parsing er-ATO inhibited phosphorylation and activation of AKT and STAT3 nsubj (a) Role of the calcium modulated cyclases in ... retinal projection .Two representative cases in BioCreative CPR, contrasting 1-best trees and forests, where irrelevant content and arcs are omitted for simplicity.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"html": null,
"type_str": "table",
"content": "
(t 1) i ously integrated message. In particular, a cell c with the previ-(t) i is taken to record memory for h (t) i ; an input gate i (t) i , an output gate o (t) i and a forget gate f (t) |
",
"num": null,
"text": "to update hidden state h"
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "\u2020 indi-cates previously reported numbers. ** means signifi- |
cant over DEPTREE at p < 0.01 with 1000 bootstrap |
tests |
",
"num": null,
"text": "Test results of Biocreative VI CPR."
},
"TABREF5": {
"html": null,
"type_str": "table",
"content": "",
"num": null,
"text": "Main results on PGR testest. \u2020 denotes previous numbers rounded into 3 significant digits. * and ** indicate significance over DEPTREE at p < 0.05 and p < 0.01 with 1000 bootstrap tests."
},
"TABREF7": {
"html": null,
"type_str": "table",
"content": "",
"num": null,
"text": "Main results on SemEval-2010 task 8 testest. \u2020 denotes previous numbers."
}
}
}
}