{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:08:18.012373Z"
},
"title": "An in-depth look at Euclidean disk embeddings for structure preserving parsing",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Fancellu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Samsung AI Center",
"location": {
"settlement": "Toronto"
}
},
"email": "federico.f@samsung.com"
},
{
"first": "Lan",
"middle": [],
"last": "Xiao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Samsung AI Center",
"location": {
"settlement": "Toronto"
}
},
"email": ""
},
{
"first": "Allan",
"middle": [],
"last": "Jepson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Samsung AI Center",
"location": {
"settlement": "Toronto"
}
},
"email": "allan.jepson@samsung.com"
},
{
"first": "Afsaneh",
"middle": [],
"last": "Fazly",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Samsung AI Center",
"location": {
"settlement": "Toronto"
}
},
"email": "a.fazly@samsung.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Preserving the structural properties of trees or graphs when embedding them into a metric space allows for a high degree of interpretability, and has been shown beneficial for downstream tasks (e.g., hypernym detection, natural language inference, multimodal retrieval). However, whereas the majority of prior work looks at using structure-preserving embeddings when encoding a structure given as input, e.g., WordNet (Fellbaum, 1998), there is little exploration on how to use such embeddings when predicting one. We address this gap for two structure generation tasks, namely dependency and semantic parsing. We test the applicability of disk embeddings (Suzuki et al., 2019) that has been proposed for embedding Directed Acyclic Graphs (DAGs) but has not been tested on tasks that generate such structures. Our experimental results show that for both tasks the original disk embedding formulation leads to much worse performance when compared to nonstructure-preserving baselines. We propose enhancements to this formulation and show that they almost close the performance gap for dependency parsing. However, the gap still remains notable for semantic parsing due to the complexity of meaning representation graphs, suggesting a challenge for generating interpretable semantic parse representations.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Preserving the structural properties of trees or graphs when embedding them into a metric space allows for a high degree of interpretability, and has been shown beneficial for downstream tasks (e.g., hypernym detection, natural language inference, multimodal retrieval). However, whereas the majority of prior work looks at using structure-preserving embeddings when encoding a structure given as input, e.g., WordNet (Fellbaum, 1998), there is little exploration on how to use such embeddings when predicting one. We address this gap for two structure generation tasks, namely dependency and semantic parsing. We test the applicability of disk embeddings (Suzuki et al., 2019) that has been proposed for embedding Directed Acyclic Graphs (DAGs) but has not been tested on tasks that generate such structures. Our experimental results show that for both tasks the original disk embedding formulation leads to much worse performance when compared to nonstructure-preserving baselines. We propose enhancements to this formulation and show that they almost close the performance gap for dependency parsing. However, the gap still remains notable for semantic parsing due to the complexity of meaning representation graphs, suggesting a challenge for generating interpretable semantic parse representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Numerous studies in NLP have focused on embedding linguistic elements into metric spaces, where instances are represented as vectors whose geometric distance reflects the semantic similarity among instances (Mikolov et al., 2013; Baroni et al., 2014; Pennington et al., 2014, inter alia) . More recently, some have gone beyond embedding words and sequences, and explored the encoding of a hierarchy (e.g., WordNet, Fellbaum, 1998) through modelling its partial order structure (Vendrov et al., 2015; Lai and Hockenmaier, 2017; Vilnis et al., 2018) . Consequently, given the size and depth of such structures, attention has shifted to geometric spaces (mostly hyperbolic) that could better represent order and containment relations (Nickel and Kiela, 2017; Ganea et al., 2018; Dong et al., 2018; Suzuki et al., 2019) . Methods to embed elements of hierarchies are structure-preserving, and therefore interpretable, in that the relative position in the embedding space reflects the relation in the original hierarchy (e.g., parent-child relation). Most of these methods have been shown to be beneficial not only on tasks pertinent to the encoded hierarchy itself (e.g., hyponymy relations), but on downstream tasks including multimodal retrieval (Vendrov et al., 2015) and video understanding (Sur\u00eds et al., 2021) .",
"cite_spans": [
{
"start": 207,
"end": 229,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF13"
},
{
"start": 230,
"end": 250,
"text": "Baroni et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 251,
"end": 287,
"text": "Pennington et al., 2014, inter alia)",
"ref_id": null
},
{
"start": 415,
"end": 430,
"text": "Fellbaum, 1998)",
"ref_id": "BIBREF7"
},
{
"start": 477,
"end": 499,
"text": "(Vendrov et al., 2015;",
"ref_id": "BIBREF24"
},
{
"start": 500,
"end": 526,
"text": "Lai and Hockenmaier, 2017;",
"ref_id": "BIBREF11"
},
{
"start": 527,
"end": 547,
"text": "Vilnis et al., 2018)",
"ref_id": "BIBREF25"
},
{
"start": 731,
"end": 755,
"text": "(Nickel and Kiela, 2017;",
"ref_id": "BIBREF14"
},
{
"start": 756,
"end": 775,
"text": "Ganea et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 776,
"end": 794,
"text": "Dong et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 795,
"end": 815,
"text": "Suzuki et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 1244,
"end": 1266,
"text": "(Vendrov et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 1291,
"end": 1311,
"text": "(Sur\u00eds et al., 2021)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, whereas there is a plethora of studies looking at preserving structure while encoding an hierarchy given as input, there is little to no exploration on how to do this while predicting one. In this work, we start one such exploration with the quintessential structure generation task in NLP: parsing. Given an input sentence, e.g., 'Anna asked Mary to stop', parsing is the task of transducing a natural language string into a structured linguistic representation (e.g., the AMR graph in Fig. 1(a) ) that encodes either syntactic or semantic properties of the string. Recent neural network based approaches have achieved state-of-the-art performance while being able to generalize both across frameworks, as well as across trees and graphs (Zhang et al., 2019; Lindemann et al., 2020; Ozaki et al., 2020; Samuel and Straka, 2020; Procopio et al., 2021, inter alia) . However, not much can be said about the representations these parsers learn since the parsers are not explicitly trained to preserve any of the geometric properties of their output structure, and as such are not interpretable. We believe that moving to interpretable, structured rep- :ARG2",
"cite_spans": [
{
"start": 748,
"end": 768,
"text": "(Zhang et al., 2019;",
"ref_id": "BIBREF26"
},
{
"start": 769,
"end": 792,
"text": "Lindemann et al., 2020;",
"ref_id": "BIBREF12"
},
{
"start": 793,
"end": 812,
"text": "Ozaki et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 813,
"end": 837,
"text": "Samuel and Straka, 2020;",
"ref_id": "BIBREF20"
},
{
"start": 838,
"end": 872,
"text": "Procopio et al., 2021, inter alia)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 496,
"end": 505,
"text": "Fig. 1(a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(c) (d) Figure 1 : Abstract Meaning Representation (AMR) graph for the sentence 'Anna asked Mary to stop' (a), with a transitive closure between the nodes 'ask' and 'person'; however, given the corresponding disk embedding representation in (b), we cannot reconstruct such a relation. Therefore, we introduce a dummy node in every transitive closure (c) so that graph relations and disk embedding containment are bijective (d).",
"cite_spans": [],
"ref_spans": [
{
"start": 8,
"end": 16,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "resentations would allow for a better diagnostic of what is learnt by a parser, while at the same time laying the foundations to connect 'deep' natural language understanding to other tasks, especially in the multimodal domain. How can we build such interpretable representations? Does interpretability impact performance? K\u00e1d\u00e1r et al. (2021) have attempted to answer these questions in the context of parsing into dependency trees by means of a structure-preserving loss that forces embedding distances to be isometric to tree distances. Whereas K\u00e1d\u00e1r et al. show that a structure-preserving embedding leads to comparable performance with blackbox methods, their method does not generalize to graphs. This is of particular relevance to semantic parses that are often DAGs, and where it is unclear how to isometrically embed multiple paths between a pair of nodes.",
"cite_spans": [
{
"start": 323,
"end": 342,
"text": "K\u00e1d\u00e1r et al. (2021)",
"ref_id": "BIBREF9"
},
{
"start": 547,
"end": 559,
"text": "K\u00e1d\u00e1r et al.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To this end, we turn to a method that allows us to model transitive asymmetrical relations expressible as a DAG: disk embeddings (Dong et al., 2018; Suzuki et al., 2019) . Disk embeddings represent DAGs as a series of concentric disks, each defined by a center vector and a radius (as the ones in Fig 1(b) ), and have been shown to outperform other methods when encoding hypernym relations in the WordNet hierarchy. However, it is not clear whether such a method transfers well to more complex architectures where embeddings are contextualized given an input sentence, and structure prediction often interacts with predicting other elements of the tree or graph (node label, edge label, etc.).",
"cite_spans": [
{
"start": 129,
"end": 148,
"text": "(Dong et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 149,
"end": 169,
"text": "Suzuki et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 297,
"end": 305,
"text": "Fig 1(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, our work makes the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Disk embedding losses for tree and graph generation in parsing: we found the disk embedding loss formulation of Suzuki et al. (2019) to be sub-optimal w.r.t. parsing performance. We found that simply adding a positive margin already provides a large boost in performance, but the best performance is obtained when an auxiliary loss that considers local neighbourhood relations is added, as well as when parent-child relations are oversampled.",
"cite_spans": [
{
"start": 112,
"end": 132,
"text": "Suzuki et al. (2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Interpretability in parsing (though at the cost of performance): through a comparison with noninterpretable approaches, we found that whereas for dependency trees the price to pay in terms of performance for interpretability is small, for semantic graphs the gap is higher, highlighting where future work should focus its efforts. Importantly, we found that most semantic parsing errors are local and specific to the parser we use, where the lack of explicit alignment between words and graph nodes poses a challenge for disk embeddings, especially in the case of named entity substructures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As discussed above, disk embeddings (Dong et al., 2018; Suzuki et al., 2019 ) provide an interpretable model for transitive asymmetrical relations (i.e., partially ordered sets or posets), such as those represented by a DAG. They are a general framework that allows to embed posets in a (quasi-)metric space. Let's define (X, d) as a quasi-metric space with distance d and a closed disk",
"cite_spans": [
{
"start": 36,
"end": 55,
"text": "(Dong et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 56,
"end": 75,
"text": "Suzuki et al., 2019",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},
{
"text": "D(x, r) = {p \u2208 X | d(p, x) \u2264 r},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},
{
"text": "where x is the center and r the radius. We can express the containment relationship for two disks 1 as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D(ci, ri) \u2283 D(cj, rj) \u21d0\u21d2 d(ci, cj) < ri \u2212 rj.",
"eq_num": "(1)"
}
],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},
{
"text": "Given a set of such disks, the ordering provided by this subset relationship provides a poset. 2 Disk embeddings seek to maintain order isomorphism between the posets of a graph G (X G , G ) and their disk embedding representation (X \u03c6 , \u03c6 ). Such an isomorphism exists if there is a bijective function f :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},
{
"text": "X G \u2192 X \u03c6 such that x G y G \u21d0\u21d2 x \u03c6 y \u03c6 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},
{
"text": "In our case the bijective function f is modelled via a neural network architecture introduced in \u00a7 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},
{
"text": "To achieve isomorphism, we make use of the protrusion \u2113_ij of disk x_j = D(c_j, r_j) with respect to disk x_i = D(c_i, r_i) as the degree of containment:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ij = d(ci, cj) \u2212 ri + rj.",
"eq_num": "(2)"
}
],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},
{
"text": "It follows from Eq. (1) that ij < 0 (negative protrusion) if and only if x j \u2282 x i . Moreover, the protrusion ij provides a continuous measure of the degree of containment. Specifically it equals the maximum signed distance of points in disk",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},
{
"text": "D(c j , r j ) from the boundary of disk D(c i , r i ). Here ij < 0 indicates that D(c j , r j ) is entirely contained in- side D(c i , r i ) (and, indeed, with a margin equal to \u2212 ij ). While ij > 0 indicates that some point in D(c j , r j ) is outside D(c i , r i ) by a distance equal to ij .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},
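{
"text": "As a concrete illustration of Eqs. (1)-(2), the following minimal Python sketch (our own illustration with hypothetical names, not code from the paper) computes a protrusion and checks containment under an \u21132 or \u21131 distance:\nimport numpy as np\n\ndef protrusion(c_i, r_i, c_j, r_j, ord=2):\n    # Eq. (2): protrusion of disk j with respect to disk i\n    return np.linalg.norm(np.asarray(c_i) - np.asarray(c_j), ord=ord) - r_i + r_j\n\ndef contains(c_i, r_i, c_j, r_j, ord=2):\n    # Eq. (1): disk i contains disk j iff the protrusion is negative\n    return protrusion(c_i, r_i, c_j, r_j, ord=ord) < 0\n\n# toy example: a large disk at the origin and a smaller disk close to it\nprint(contains([0.0, 0.0], 2.0, [0.5, 0.0], 1.0))  # True\nprint(contains([0.0, 0.0], 1.0, [2.0, 0.0], 1.0))  # False",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},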
{
"text": "Recovering DAGs from disk embeddings. We wish to be able to represent a DAG with a disk embedding and also be able to recover the edges of that DAG from its disk embedding. However, recovering the edges of the original DAG is not always possible. To see this, suppose we have a disk embedding formed by a 1-1 mapping of the nodes of a DAG to disks. 3 Moreover, suppose the mapping is an order isomorphism, so the partial ordering is preserved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},
{
"text": "It is natural to consider an edge from disk x i to disk x j if and only if x j \u2282 x i and there is no other intervening node (i.e., there is no x k such that x j \u2282 x k \u2282 x i ). This process recovers a DAG with the same partial ordering as the original DAG, but it may be missing edges. This issue occurs, for example, for the disk embedding shown in Fig. 1b , which represents the same partial ordering as the DAG in Fig. 1a . However, in decoding this disk embedding, we would not decode the edge from 'ask' to 'person', since 'stop' is an intervening node.",
"cite_spans": [],
"ref_spans": [
{
"start": 349,
"end": 356,
"text": "Fig. 1b",
"ref_id": null
},
{
"start": 416,
"end": 423,
"text": "Fig. 1a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},
{
"text": "We remedy this problem by adding as many dummy nodes as necessary to the original DAG (as illustrated in Fig. 1c and Fig. 1d ). Specifically, for any edge (n i , n j ) in the original DAG that can be removed without changing the partial ordering, we create a new dummy node m ij , and replace that edge by the two edges (n i , m ij ) and (m ij , n j ). Every edge in the modified graph is then required to reproduce the implied partial ordering, and this modified DAG is therefore recoverable from an order isomorphism.",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 112,
"text": "Fig. 1c",
"ref_id": null
},
{
"start": 117,
"end": 124,
"text": "Fig. 1d",
"ref_id": null
}
],
"eq_spans": [],
"section": "Disk Embeddings: Background",
"sec_num": "2"
},
{
"text": "The disk embedding module takes as input the hidden representations of a sentence encoder (LSTM or transformer); we discuss how these representations are obtained in the context of dependency and semantic parsing in \u00a7 4.1 and 4.2 respectively. This input undergoes a linear transformation followed by a LeakyReLU activation to obtain X g , that is then used to learn disk embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
{
"text": "To learn the centers C and the radii R of the disks, we pass X g through a n-layer MLP (where n=2 in our case) as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
{
"text": "C = fa(W (n) c (X (n\u22121) g ) + bc) (3a) R = fa(W (n) r (X (n\u22121) g ) + br) (3b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
{
"text": "where X_g^{(n-1)} are the input representations from the previous layer (X_g^{(0)} is the input representation X_g), W^{(n)} are the weights for the nth MLP layer, b_c and b_r are the biases, and f_a is a non-linear activation function (LeakyReLU in our case).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
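{
"text": "As a sketch of how these two heads could be implemented in PyTorch (dimensions and module names are illustrative assumptions on our part, not the paper's code):\nimport torch\nimport torch.nn as nn\n\nclass DiskHead(nn.Module):\n    # Maps encoder states X_g to disk centers C and radii R, following Eqs. (3a)-(3b)\n    def __init__(self, in_dim=800, center_dim=800, hidden=800):\n        super().__init__()\n        act = nn.LeakyReLU()\n        self.center_mlp = nn.Sequential(nn.Linear(in_dim, hidden), act,\n                                        nn.Linear(hidden, center_dim), act)\n        self.radius_mlp = nn.Sequential(nn.Linear(in_dim, hidden), act,\n                                        nn.Linear(hidden, 1), act)\n\n    def forward(self, x_g):                    # x_g: (num_nodes, in_dim)\n        C = self.center_mlp(x_g)               # (num_nodes, center_dim)\n        R = self.radius_mlp(x_g).squeeze(-1)   # (num_nodes,)\n        return C, R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},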
{
"text": "Centers and radii C and R are used to compute the protrusion values ij as in Eq. 2. We then use the ij values in a contrastive loss with a margin \u03b1 as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Lc = (i,j)\u2208P [ ij ]+ + (i,j)\u2208N [\u03b1 \u2212 ij ]+",
"eq_num": "(4)"
}
],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
{
"text": "where [x] + is the function max(0,x) and P and N represent the sets of positive and negative pairs, respectively, where positives are pairs of nodes in an ancestor-descendant relation, and negatives include all other pairs of nodes. Recall that ij < 0 if node i is the ancestor of j, and > 0 otherwise, hence for the positive samples we only incur in a loss if the ij > 0. Given that we have access to both positive and negative instances (node pairs in an ancestordescendant relationship vs. all the rest), we can also formulate a Maximum Likelhood Estimation (MLE) binary cross-entropy loss, where we directly maximize the probability of a pair of nodes i and j to be either in an ancestor-descendant relationship or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "aij = sigmoid(\u2212 ij ),",
"eq_num": "(5)"
}
],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "LMLE = \u2212 i,j\u2208V, i =j ij log aij + (1 \u2212 ij ) log(1 \u2212 aij),",
"eq_num": "(6)"
}
],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
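{
"text": "Both objectives can be computed in batch from a protrusion matrix; the sketch below (our own formulation with assumed boolean masks, not the released code) follows Eqs. (2), (4), (5) and (6):\nimport torch\n\ndef protrusions(C, R, p=1):\n    # pairwise Eq. (2): ell[i, j] = d(c_i, c_j) - r_i + r_j\n    D = torch.cdist(C, C, p=p)\n    return D - R.unsqueeze(1) + R.unsqueeze(0)\n\ndef contrastive_loss(ell, pos_mask, neg_mask, alpha=1.0):\n    # Eq. (4): hinge on ancestor-descendant pairs (positives) and margin alpha on the rest\n    # pos_mask, neg_mask: boolean (n, n) matrices\n    return torch.relu(ell)[pos_mask].sum() + torch.relu(alpha - ell)[neg_mask].sum()\n\ndef mle_loss(ell, anc_mask, pair_mask):\n    # Eqs. (5)-(6): binary cross-entropy on a_ij = sigmoid(-ell_ij)\n    a = torch.sigmoid(-ell)\n    bce = -(anc_mask.float() * torch.log(a + 1e-9) +\n            (~anc_mask).float() * torch.log(1 - a + 1e-9))\n    return bce[pair_mask].sum()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},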
{
"text": "where a i,j is the probability of i and j being in an ancestor-descendant relationship, and is obtained by applying a sigmoid function to ij over two classes (ancestor-descendant vs. other). ij is an indicator function that is 1 if i and j are in an ancestor-descendant relationship. Structure auxiliary loss. A contrastive or MLE loss optimizes a global pair-wise containment relationship between nodes; however, this might come at the expense of local relations, which we still seek to preserve. In order to incorporate information about local neighbourhood structure, we propose a (local) structure-preserving loss. The goal of this loss is to maximize the probability of correctly predicting the local relation between two nodes out of a class of 6 relations, namely, C = {parent, grandparent, child, grandchild, sister, other}, where 'other' represents all relations with a path length greater than 2. 4 To predict these relations we start by passing X g through a biaffine transform to obtain s ij ; this is the same as the biaffine function used in dependency and semantic parsing for edge prediction and since our parsers are two such systems, we reuse their implemented biaffine function. During training, we seek to predict the correct relationship between all pairs of nodes using a cross-entropy loss. Computation is as follows:",
"cite_spans": [
{
"start": 907,
"end": 908,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
{
"text": "X 1 b , X 2 b = ReLU(W (1) b Xg + b (1) b ) (7) sij = X 1T b U X 2 b + W (2) b X 1 b + W (3) b X 1 b + b (2) b (8) Lstruc = i,j\u2208V, i =j \u2212 log(softmax(W (4) b sij + b (3) b )) (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
{
"text": "where X 1 b , X 2 b are separate representations for ancestors and descendants respectively, whose dimensionality is half that of X g , W and U are the weight and b b the bias. s ij is the output of the biaffine transform, on top of which a linear followed by a softmax transform are applied to obtain the probability for the nodes i and j having the relation class c \u2208 C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
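{
"text": "A possible PyTorch rendering of this auxiliary classifier (a sketch under our reading of Eqs. (7)-(9); the actual systems reuse the biaffine implementations of the respective parsers):\nimport torch\nimport torch.nn as nn\n\nclass StructureAux(nn.Module):\n    # Biaffine scorer over node pairs followed by a 6-way relation classifier over\n    # {parent, grandparent, child, grandchild, sister, other}\n    def __init__(self, in_dim=800, hidden=400, n_rel=6):\n        super().__init__()\n        self.anc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())\n        self.desc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())\n        self.U = nn.Parameter(torch.zeros(hidden, n_rel, hidden))\n        self.W = nn.Linear(2 * hidden, n_rel)\n\n    def forward(self, x_g):                       # x_g: (n, in_dim)\n        h1, h2 = self.anc(x_g), self.desc(x_g)    # ancestor / descendant views\n        bilinear = torch.einsum('ia,arb,jb->ijr', h1, self.U, h2)\n        pairs = torch.cat([h1.unsqueeze(1).expand(-1, h2.size(0), -1),\n                           h2.unsqueeze(0).expand(h1.size(0), -1, -1)], dim=-1)\n        return bilinear + self.W(pairs)           # (n, n, 6) relation scores\n\n# training: nn.functional.cross_entropy(scores.view(-1, 6), gold_rel_ids.view(-1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},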
{
"text": "When used, the structure auxiliary loss is added to either the constrastive or the MLE loss to obtain the overall loss for a sentence, normalized by a factor T . We use the number of node pairs as T as opposed to the number of nodes, since we observe that the former achieves consistently better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = 1 T (L c/M LE + Lstruc)",
"eq_num": "(10)"
}
],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
{
"text": "Decoding into parent-child relations. At test time we recover the parent-child relations from the protrusion values ij by first using Eq. 5 to get ancestor probabilities a ij . Using this equation, negative protrusion values are transformed to high ancestor probabilities (\u2265 0.5) and positive ones to low probabilities (< 0.5). We then identify the direct parent of each node as the ancestor with no intervening node:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "pij = aij[1 \u2212 max k\u2208V \\{i,j} [a ik a kj ]]",
"eq_num": "(11)"
}
],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
{
"text": "Note that for dependency parsing the parentchild probabilities p ij are fed as input to a Maximum Spanning Tree (MST) decoding algorithm to obtain the final tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},
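{
"text": "A vectorized sketch of this decoding step (our own PyTorch rendition of Eqs. (5) and (11)):\nimport torch\n\ndef parent_probs(ell):\n    # ell: (n, n) protrusion matrix from Eq. (2)\n    a = torch.sigmoid(-ell)                  # Eq. (5): ancestor probabilities\n    n = a.size(0)\n    prod = a.unsqueeze(2) * a.unsqueeze(0)   # prod[i, k, j] = a[i, k] * a[k, j]\n    eye = torch.eye(n, dtype=torch.bool)\n    prod = prod.masked_fill(eye.unsqueeze(2) | eye.unsqueeze(0), 0.0)  # drop k == i and k == j\n    return a * (1 - prod.max(dim=1).values)  # Eq. (11); fed to MST decoding for dependency trees",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disk Embedding: Model",
"sec_num": "3"
},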
{
"text": "Our baseline dependency and semantic parsing both determine edge presence between nodes using the biaffine formulation of Dozat et al. (2017) (see Eq. 7-9), which predicts the most likely parent for each node, along with the grammatical relation between each pair of head and dependent, conditioned on an encoded hidden representation. We replace this with the disk embedding module described in \u00a7 3, leaving all other modules unchanged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Systems",
"sec_num": "4"
},
{
"text": "We use the dependency parser of K\u00e1d\u00e1r et al. x i is a concatenation of word-level, characterlevel, part-of-speech and morphological feature embeddings for w i . We use pre-trained word2vec (Mikolov et al., 2013) and fastText embedding (Bojanowski et al., 2017) to initialize word embeddings, whereas the remaining embeddings are trained from scratch. A BiLSTM decoder then applies an MLP on top of the encoded representation to generate the input to either the biaffine or the disk embedding module.",
"cite_spans": [
{
"start": 189,
"end": 211,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 235,
"end": 260,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency parsing",
"sec_num": "4.1"
},
{
"text": "Unlike dependencies, there is no one-to-one correspondence between words in a sentence and nodes in an AMR graph. A common solution is to use an auto-regressive parser (e.g., Zhang et al., 2019; Bevilacqua et al., 2021 ) that decodes nodes following an arbitrary graph linearization. However, semantic parses, just like dependency trees, are inherently orderless, which is why we opted for a non-autoregressive parser that predicts all nodes in parallel at once. 6 We use the PERIN parser (Samuel and Straka, 2020) to parse sentences into AMR graphs. 7 The parser generates graphs in two steps: first, a transformer encoder followed by a transformer decoder take pre-trained XLM-R embeddings (Conneau et al., 2019) as input to generate the hidden representations h 1 ...h |S| . Unlike dependency parsing, the alignment between words and nodes in a graph is missing, so the cross-entropy loss w.r.t. node labels cannot be computed directly. The alignments have to be bijective (one hidden state to one node only) but should still accommodate for many-to-many correspondences, as in the case of named entities (e.g., 'Mary') that are mapped to entire subgraphs (e.g., person \u2192 name \u2192 Mary). To meet both conditions, each h i is transformed into k representations via a function \u03c6 : h i \u2192\u0177 i with\u0177 i \u2208 R dk . The parser finds the best alignment between the set of vector\u015d Y = {\u0177 i }, and the set of target nodes Y by scoring all permutations \u03a0(\u0176) and selecting the one, \u03c0 * , that maximizes the probability of a vector\u0177 i to correspond to the label of a node y, i.e., p(y label ); see Eq. 12. Note that we require the two sets Y and Y to be the same size, and as such, we extend the set Y of target nodes with NULL tokens; in practice this means that the words aligned to NULL are dropped.",
"cite_spans": [
{
"start": 175,
"end": 194,
"text": "Zhang et al., 2019;",
"ref_id": "BIBREF26"
},
{
"start": 195,
"end": 218,
"text": "Bevilacqua et al., 2021",
"ref_id": "BIBREF1"
},
{
"start": 463,
"end": 464,
"text": "6",
"ref_id": null
},
{
"start": 489,
"end": 514,
"text": "(Samuel and Straka, 2020)",
"ref_id": "BIBREF20"
},
{
"start": 551,
"end": 552,
"text": "7",
"ref_id": null
},
{
"start": 692,
"end": 714,
"text": "(Conneau et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic parsing",
"sec_num": "4.2"
},
{
"text": "\u03c0 * = arg max \u03c0\u2208\u03a0 |S|\u00d7k i=1 1 [y label =NULL] p(y label \u03c0(i) |\u0177i; \u03b8) (12)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic parsing",
"sec_num": "4.2"
},
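{
"text": "One way to make the matching in Eq. 12 concrete is Hungarian assignment over label probabilities, as in the sketch below (names and simplifications are ours; we do not claim this is PERIN's exact procedure):\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\n\ndef match_nodes(label_probs, target_label_ids, null_id):\n    # label_probs: (num_candidates, vocab) numpy array of candidate-to-label probabilities\n    # target_label_ids: (num_candidates,) numpy array of gold label ids, padded with null_id\n    scores = label_probs[:, target_label_ids]            # scores[c, t] = p(label of target t | candidate c)\n    scores[:, target_label_ids == null_id] = 0.0         # NULL-aligned slots contribute nothing\n    cand_idx, target_idx = linear_sum_assignment(-scores)  # negate to maximize the total score\n    return dict(zip(target_idx.tolist(), cand_idx.tolist()))  # target position -> candidate position",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic parsing",
"sec_num": "4.2"
},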
{
"text": "where \u03b8 are the model parameters. Given a permutation \u03c0 * , the parser then computes the weighted sum of five different losses: the node label, the edge presence, the edge label, the property (see below), and the top node loss. For more detail, we refer the reader to the original paper (Samuel and Straka, 2020) .",
"cite_spans": [
{
"start": 287,
"end": 312,
"text": "(Samuel and Straka, 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic parsing",
"sec_num": "4.2"
},
{
"text": "In our work, we solely focus on the edge presence classifier, that is a biaffine function followed by a cross-entropy loss, as stated at the beginning of this section. Note that there could be multiple nodes with the same label; in particular, this is the case for properties that are subgraphs describing named entities containing the same semantic constants (e.g., 'person' and 'name' for the named entities 'Anna' and 'Mary' in Fig. 1 ). An infelicitous consequence of the formula in Eq. 12 is that 'Anna' (or 'Mary') can be assigned either of the 'name' or 'person' nodes. To solve this problem, the parser decides which mapping is optimal by scoring edge attachments for all permutations of property nodes and selects the argmax. We will refer to this problem as the edge permutation problem when analyzing the errors of the parser in \u00a7 8.",
"cite_spans": [],
"ref_spans": [
{
"start": 431,
"end": 437,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic parsing",
"sec_num": "4.2"
},
{
"text": "We use the English-EWT section of Universal Dependencies (UD; Nivre et al., 2020) for dependency parsing, and the AMR2.0 dataset (Knight et al., 2017) for semantic parsing.",
"cite_spans": [
{
"start": 129,
"end": 150,
"text": "(Knight et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and settings",
"sec_num": "5"
},
{
"text": "Ablations. For both tasks, we perform an ablation study on the development data to understand the impact of the choice of the loss function, sampling method, distance function and related settings (see Table 1 ). Specifically, we consider the following losses: the original loss formulation of Suzuki et al. (2019) and an extended version where a margin is added to the positive pairs (+pos margin; Eq. 4). Additionally, we report results for the MLE loss (Eq. 6), and for when we add our auxiliary structure loss (+struct; Eq. 9) to the contrastive or MLE losses.",
"cite_spans": [
{
"start": 294,
"end": 314,
"text": "Suzuki et al. (2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 202,
"end": 209,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data and settings",
"sec_num": "5"
},
{
"text": "In Eq. 3a-3b, centers and radii are computed separately, whereas in the original implementation of Suzuki et al. they are learnt jointly as a single vector in which the radius is the last dimension. To understand whether capturing the interaction between centers and radii helps with learning better disk embeddings, we create a (+shared) alternative where we jointly learn the centres and radii as in the original implementation. Finally, we build on the intuition of centers and radii informing each other, and propose an additional option on top of shared, (+iter2 and +iter3), where we train an MLP to take C, R and X (n\u22121) g as input and iteratively refine the centers and radii k times (where k \u2208 {2,3}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and settings",
"sec_num": "5"
},
{
"text": "All the settings described assume that we include all negative instances (node pairs that are not in an ancestor-descendant relation) as part of the loss computation; however this might lead to a class imbalance problem as there are many more negatives than there are positive instances (\u223c 10:1 ratio). We therefore experiment with different sampling methods, including removing descendant-ancestor negatives altogether. We found that only one sampling method lead to performance improvement, i.e., oversampling of parent-child nodes (+sampling). In practice, we multiply the loss of every parent-child pair by a factor (here 2) in order to penalize errors coming from such pairs. We further test the joint impact of this sampling approach together with the auxiliary structure loss (+both).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and settings",
"sec_num": "5"
},
{
"text": "Eq. 4 uses a constant margin \u03b1 for negative samples. However, one can formulate a tailored lower bound on the margin, \u03c3 ij , that depends on the relationship between node i and j. We formally prove the existence of such a lower bound in Appendix B, and provide an algorithm to identify it (Algorithm 1). In practice, however, the use of this tailored margin did not result in a performance improvement for either the dependency or the semantic parsing. For completeness, we include the results in Appendix C, Table 4 (+tailored).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and settings",
"sec_num": "5"
},
{
"text": "For all combinations, we compare two distance functions: 2 and 1 norms. We include a list of hyperparameter values in Appendix A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and settings",
"sec_num": "5"
},
{
"text": "Comparison with baselines. We then compare the performance of our best systems (on test data) against the dependency parser of parser of K\u00e1d\u00e1r et al. (2021) , and the semantic parser of Samuel and Straka (2020) , respectively. We found that \u223c 3% of the instances on the test set contain cycles which cannot be modelled by disk embeddings. To assess whether this impacts performance, we also provide results when removing instances containing cycles.",
"cite_spans": [
{
"start": 137,
"end": 156,
"text": "K\u00e1d\u00e1r et al. (2021)",
"ref_id": "BIBREF9"
},
{
"start": 186,
"end": 210,
"text": "Samuel and Straka (2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and settings",
"sec_num": "5"
},
{
"text": "For both dependency and semantic parsing tasks, we compare our model output (a tree or a graph) to a reference parse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "For dependency parsing, we report accuracy, calculated using two standard measures: Unlabelled Attachment Score (UAS), that is the percentage of tokens that are assigned the correct head; and Labelled Attachment Score (LAS) that is the percentage of tokens that are assigned the correct head and the correct grammatical relation. We use UAS for model selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
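{
"text": "As a small worked example of these two measures (our own illustration of the standard definitions):\ndef uas_las(pred_heads, gold_heads, pred_rels, gold_rels):\n    # UAS: fraction of tokens with the correct head;\n    # LAS: fraction with both the correct head and the correct relation\n    n = len(gold_heads)\n    uas = sum(p == g for p, g in zip(pred_heads, gold_heads)) / n\n    las = sum(ph == gh and pr == gr for ph, gh, pr, gr in\n              zip(pred_heads, gold_heads, pred_rels, gold_rels)) / n\n    return uas, las\n\n# e.g., uas_las([2, 0, 2], [2, 0, 4], ['det', 'root', 'obj'], ['det', 'root', 'obj'])\n# returns (0.666..., 0.666...): the third token has the wrong head, so it counts for neither score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},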
{
"text": "For semantic parsing, we use Mtool 8 to report a set of fine-grained F1 scores that reflect the performance of each classifier in the PERIN parser. edge (presence), (node) label, top (node) reflect the losses introduced in \u00a7 4.2; prop(erty prediction) represents a dedicated score on how well we predict named entity subgraphs (the properties), as well as how well we connect them to the rest of the graph. 9 Finally, all is a weighted average of these F1 scores. 10 All results are reported as an average of 3 runs, along with standard deviations.",
"cite_spans": [
{
"start": 464,
"end": 466,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "The disk embedding loss formulation of Suzuki et al. (2019) performs considerably worse than all other settings. Results in Table 1 show that for both semantic and dependency parsing, adding a margin to positive instances (i.e., pushing positives below a margin \u2212\u03b1) leads to a considerable boost in performance. Compare the row of results for original( 2 ) with the rows for different variations of pos margin( 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "The structure auxiliary loss helps with performance, often in conjunction with oversampling parent-child pairs. Results in Table 1 show that injecting information on the local neighbourhood structure, together with giving more weight to parent-child relations helps both dependency and semantic parsing; see results in the row corresponding to pos margin( 1 )+both. On top of these settings, we also test whether performance can be further improved by having a shared representation of radii and centers, as well as by iteratively refining this. Results show that doing so only leads to a slight improvement. Finally, we can see that all our contrastive loss formulations perform better than our MLE loss, and that performance is comparable with 2 and 1 distance. Further experiments and analyses use the best setting of pos margin( 1 )+both+shared(iter2).",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "To have interpretability one has to pay a price in performance. Table 2 shows results on the test set for both dependency and semantic parsing, for the SOTA parsers, as well as our best settings. As can be seen, whereas the loss in performance for dependency parsing is rather small, the gap is wider for semantic parsing. We also notice that removing instances with cycles does not change this gap, and as such we can conclude that the fact that disk embeddings cannot model these instances is not responsible for a drop in performance. We investigate this gap in performance for semantic parsing as part of our analysis in the following section. (K\u00e1d\u00e1r et al., 2021) and SS(2020) (Samuel and Straka, 2020) . Results are on the test set.",
"cite_spans": [
{
"start": 648,
"end": 668,
"text": "(K\u00e1d\u00e1r et al., 2021)",
"ref_id": "BIBREF9"
},
{
"start": 682,
"end": 707,
"text": "(Samuel and Straka, 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 64,
"end": 71,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7"
},
{
"text": "Do errors correlate with the (graph) distance and the relation between nodes? Now that we have access to interpetable representations, we can inspect which relations in a parse tree/graph are challenging to embed correctly. We begin by defining two types of errors w.r.t. the protrusion ij for a predicted edge (i, j): one where ij > 0 and one where \u2212\u03b1 < ij < 0 (note that a correct ij < \u2212\u03b1). We use 0 instead of \u03b1 as the cutoff point because, although not below the desired margin, the sigmoid in Eq. 5 will still correctly predict the disk for i containing that of j. We plot these errors against the graph distance between all node pairs, 11 and report the % of errors over the total number of node pairs for different values of graph distance. Fig. 2(a) shows that there is an inverse correlation between errors and distance, with parent-child (distance of 1) and grandparentgrandchild relations (distance of 2) displaying the highest numbers of incorrect protrusions; this is particularly striking in the case of semantic parsing and we elaborate more on this when discussing the role of edge permutations below. However, we can see that a large number of incorrect protrusions fall in (\u2212\u03b1, 0), especially in the case of dependency parsing, for which the sigmoid will still predict a correct containment.",
"cite_spans": [],
"ref_spans": [
{
"start": 748,
"end": 757,
"text": "Fig. 2(a)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "8"
},
{
"text": "What makes semantic parsing harder? We start answering this question by looking at the results on the dev set where properties (prop) are the ones whose performance is impacted the most. We referred to this in \u00a7 4.2 as the edge permutation problem for nodes in named entity substructures which pose a challenge in the absence of explicit alignment information; we hypothesize this might cause a drop in performance and if so, we expect performance to drop more when there are more permutations. Fig. 3 shows that there is indeed an effect of the number of permutations on performance, with the baseline system performing better on instances with a large number of permutations. Interestingly, in the absence of permutations, our parser performs comparably to the baseline system.",
"cite_spans": [],
"ref_spans": [
{
"start": 495,
"end": 501,
"text": "Fig. 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "8"
},
{
"text": "Edge permutations could also be the main reason behind the large % of local errors in Fig. 2(a) . To confirm this, we take a closer look at the breakdown of % protrusion errors for parent-child pairs according to whether both, either or neither node is subjected to permutation. Fig. 2(b) shows that when both or either node is permuted, the contain- ment relationship is usually incorrectly predicted. Note, however, that whereas the number of these incorrect predictions are large, the overall performance is not overly affected, because when matching a predicted and a gold graph, we look at the node label and not at the node id. Using the example in Fig. 1(b) , from a matching perspective, we would obtain the same graph swapping the parent node 'name' of 'Anna' with the one of 'Mary'.",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 95,
"text": "Fig. 2(a)",
"ref_id": "FIGREF2"
},
{
"start": 279,
"end": 288,
"text": "Fig. 2(b)",
"ref_id": "FIGREF2"
},
{
"start": 655,
"end": 664,
"text": "Fig. 1(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "8"
},
{
"text": "We also analyze the difference in performance between trees and graphs, as well as the effect of the number of nodes on performance; due to space limitations we include these results in Appendix D.1. In Appendix D.3, we discuss whether the size of the parses warrants moving to the hyperbolic space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "8"
},
{
"text": "We have explored disk embeddings as a means to obtain interpretable representations when training a parser that preduces a tree/graph. We showed that previously proposed disk embedding formulations are sub-optimal for the task of parsing, and accordingnly explored alternatives that improve parsing performance. Nonetheless, our results suggest that we still need to pay a cost in performance to attain interpretability; this cost is small when parsing into trees, but notable for graphs. We also speculate that this cost might be due to properties of the parser we use, especially in cases where the absence of an alignment between words in a sentence and nodes in a graph allows for many permutations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "9"
},
{
"text": "Our work can be considered as a first attempt to bring parsing and 'deep' natural language understanding into the realm of interpretability and representation learning, so that trees and graphs could be used in downstream tasks (e.g., image retrieval), similarly to how word embeddings have been used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "9"
},
{
"text": "Hyperparameters for the disk embedding modules of both dependency and semantic parsing are listed in Table 3 ; all hyperparameters not related to the disk embedding module are the same as in the original implementations. Both were tuned separately on the +pos margin( 2 ) system. All models were trained on a single TitanX GPU v100. semantic dependency batch size 8 5000 accumulation step 4 decoder lr 6e-4 1e-3 layers (disk MLP) 2 center dim. 800 weight init.",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 108,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Hyperparameters",
"sec_num": null
},
{
"text": "-0.01\u223c0.01 activation (Eq. 3a-3b) leakyReLU dropout 0.0 margin 1 2 Table 3 : Hyperparameters used in the disk embedding module for semantic and dependency parsing.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Hyperparameters",
"sec_num": null
},
{
"text": "Suppose we have a DAG, G = (V, E), with an associated order-isomorphic disk embedding for a given margin of \u03b1 > 0. That is, given any two distinct nodes a and b in the DAG we must have | ab | \u2265 \u03b1. Here we prove a stronger lower bound of the form | ab | \u2265 \u03c3 ab \u03b1 where \u03c3 ab \u2265 1 depends on the relationship of a and b in the DAG. Moreover, the signs of ab are such that ab \u2264 \u2212\u03c3 ab \u03b1 when a is an ancestor of b, and ab \u2265 \u03c3 ab \u03b1 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "Suppose a is an ancestor of b, that is, for some k > 0, there is a path (n k , n k\u22121 , . . . , n 0 ) in the DAG with a = n k , b = n 0 . If there are several such paths from a to b we choose one that is the longest. Then, since we have assumed the corresponding disk embedding satisfies the margin \u03b1, we have that n i is an ancestor of n i\u22121 and therefore n i ,n i\u22121 \u2264 \u2212\u03b1. Therefore",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k i=1 n i ,n i\u22121 \u2264 \u2212k\u03b1.",
"eq_num": "(13)"
}
],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "Moreover from the definition of n i ,n i\u22121 in (2), we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "k i=1 n i ,n i\u22121 = k i=1 d(c i , c i\u22121 ) \u2212 k i=1 (r i \u2212 r i\u22121 ) = k i=1 d(c i , c i\u22121 ) \u2212 (r k \u2212 r 0 ) \u2265 d(c k , c 0 ) \u2212 (r k \u2212 r 0 ) (14a) = n k ,n 0 \u2261 a,b .",
"eq_num": "(14b)"
}
],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "Here we have used the triangle inequality in (14a). Together (14b) and (13) implya,b \u2264 \u2212\u03c3 ab \u03b1 with \u03c3 ab = k,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "where k is the maximum length of any path from a to b. For the protrusion from the descendant b to the ancestor a, namely b,a , we use b,a \u2265 \u03c3 b,a \u03b1, for \u03c3 b,a = \u03c3 a,b .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "Here 16is a simple consequence of (15) and the relation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "n i n j + n j n i = 2d(c i , c j ) \u2265 0,",
"eq_num": "(17)"
}
],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "which follows easily from (2). The remaining case is when a and b are in an \"other\" relationship, that is, they are not in an ancestor-descendant relationship (or vice versa). For this case it is useful to first define a feasible pair of paths (see Fig. 4 ). Define (P 1 , P 2 ) to be a feasible pair of paths for nodes a and b if P 1 is a path (x i , x i\u22121 , . . . , x 0 ) ending at x 0 = a, P 2 is a path (y j , y j\u22121 , . . . , y 0 ) starting at y j = b, and where no node in either path is a descendant of a node in the other path. The following theorem provides a lower bound on the protrusion ab in terms of such a feasible pair (P 1 , P 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 249,
"end": 255,
"text": "Fig. 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "Theorem B.1. Suppose P 1 and P 2 are a feasible pair of paths for nodes a and b, as described above. Then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ab \u2265 [|P 1 | + |P 2 | + 1] \u03b1,",
"eq_num": "(18)"
}
],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "where |P | denotes the number of edges (i.e., the length) of the path P .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "Proof. Since b is an ancestor of y 0 , the bound (15) implies b,y 0 \u2264 \u2212|P 2 |\u03b1. Moreover, since x i is neither an ancestor nor descendant of y 0 , we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "x i ,y 0 \u2265 \u03b1. By subtracting these two inequalities we find that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x i ,y 0 \u2212 b,y 0 \u2265 [|P 2 | + 1] \u03b1.",
"eq_num": "(19)"
}
],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "From (2) we find",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "[|P 2 | + 1]\u03b1 \u2264 x i ,y 0 \u2212 b,y 0 (20a) = d(c x i , c y 0 ) \u2212 d(c b , c y 0 ) \u2212 r x i + r b (20b) \u2264 d(c x i , c b ) \u2212 r x i + r b (20c) = x i ,b ,",
"eq_num": "(20d)"
}
],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "where we have used the triangle inequality in (20c). Similarly, since the path P 1 starts at x i and ends at x 0 = a we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "x i ,a \u2264 \u2212|P 1 |\u03b1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "Moreover, all the disks x k , for k = 0, . . . , i are in the other relationship to node b. Subtracting this inequality from (20) then gives",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "[|P 1 | + |P 2 | + 1]\u03b1 \u2264 x i ,b \u2212 x i ,a (21a) = d(c x i , c b ) \u2212 d(c x i , c a ) \u2212 r a + r b (21b) \u2264 d(c a , c b ) \u2212 r a + r b (21c) = a,b ,",
"eq_num": "(21d)"
}
],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "where we have again used the triangle inequality in (21c). Eqn. (21) is the desired result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "We apply Theorem B.1 by defining \u03c3 ab , in the case a is neither an ancestor nor descendant of b, to be the maximum lower bound (18) over all feasible pairs, (P 1 , P 2 ), for these nodes a and b. That is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c3 ab = max (P 1 ,P 2 ) {|P 1 | + |P 2 | + 1} .",
"eq_num": "(22)"
}
],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "Given a DAG, Algorithm 1 below computes this \u03c3 ab for two nodes a and b in an \"other\" relationship. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
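{
"text": "Since Algorithm 1 itself is not reproduced here, the following brute-force Python sketch (our own, exponential in the worst case) illustrates the quantity defined in Eq. (22):\nfrom itertools import product\n\ndef sigma_other(edges, a, b):\n    # sigma_ab for two nodes a, b of a DAG in the 'other' relation, per Eq. (22)\n    children = {}\n    for u, v in edges:\n        children.setdefault(u, set()).add(v)\n\n    def descendants(u):\n        out, stack = set(), [u]\n        while stack:\n            n = stack.pop()\n            for m in children.get(n, ()):\n                if m not in out:\n                    out.add(m)\n                    stack.append(m)\n        return out\n\n    def paths_ending_at(x):    # all directed paths (..., x), including the trivial (x,)\n        result = [(x,)]\n        for u, vs in children.items():\n            if x in vs:\n                result += [p + (x,) for p in paths_ending_at(u)]\n        return result\n\n    def paths_starting_at(x):  # all directed paths (x, ...), including the trivial (x,)\n        result = [(x,)]\n        for v in children.get(x, ()):\n            result += [(x,) + p for p in paths_starting_at(v)]\n        return result\n\n    best = 1\n    for p1, p2 in product(paths_ending_at(a), paths_starting_at(b)):\n        # feasibility: no node of either path is a descendant of a node of the other\n        ok = (all(m not in descendants(n) for n in p1 for m in p2) and\n              all(m not in descendants(n) for n in p2 for m in p1))\n        if ok:\n            best = max(best, (len(p1) - 1) + (len(p2) - 1) + 1)\n    return best",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},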
{
"text": "L c = (i,j)\u2208P [\u03c3 ij \u03b1 + l ij ] + + (i,j)\u2208N [\u03c3 ij \u03b1 \u2212 l ij ] + (23)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
{
"text": "Note that in this formulation a margin is used for positive samples as well. Results in Table 4 shows that a tailored margin, even in combination with the auxiliary structure loss as well as parent-child oversampling, does not lead to any gain over our best system (pos margin( 1 )+both).",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 95,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},
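{
"text": "Reusing the protrusion matrix from the sketch in \u00a7 3, the tailored-margin variant of the contrastive loss in Eq. (23) can be written as follows (our own sketch; sigma is an assumed per-pair matrix of the bounds \u03c3_ij):\nimport torch\n\ndef tailored_contrastive_loss(ell, sigma, pos_mask, neg_mask, alpha=1.0):\n    # Eq. (23): per-pair margins sigma_ij * alpha on both positive and negative pairs\n    return (torch.relu(sigma * alpha + ell)[pos_mask].sum() +\n            torch.relu(sigma * alpha - ell)[neg_mask].sum())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Tailored bounds on protrusions",
"sec_num": null
},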
{
"text": "D.1 Is performance worse for larger trees/graphs?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D Analysis",
"sec_num": null
},
{
"text": "Instances with no permutations might also be easier because they are shallower or contain less nodes. Fig. 5 shows that there is an effect due to the number of nodes. However Fig. 6 shows that performance starts to diverge at more than 25 nodes which doesn't fully explain performance for instance with less than 10 permutations.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 108,
"text": "Fig. 5",
"ref_id": null
},
{
"start": 175,
"end": 181,
"text": "Fig. 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "D Analysis",
"sec_num": null
},
{
"text": "Given the difference in performance between dependency and semantic parsing in Table 1 , one can hypothesize that it is easier to preserve the structure of trees than graphs. To answer this question, we Table 4 : Results for the +tailored setting for dependency and semantic parsing on dev set. Performance of the baseline parsers, the original formulation (Suzuki et al., 2019) , as well as our best system (pos margin( 1 )+both, see Table 1 ) are also reported for comparison. Figure 5 : Analysis of number of permutations w.r.t. the number nodes in a graph. Figure 6 : Analysis of the number of nodes in a graph w.r.t. parse prediction performance per instance for the baseline biaffine and the disk embedding system. In orange, an histogram over the proportion of instances in a particular size bin is also reported.",
"cite_spans": [
{
"start": 357,
"end": 378,
"text": "(Suzuki et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 203,
"end": 210,
"text": "Table 4",
"ref_id": null
},
{
"start": 435,
"end": 442,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 479,
"end": 487,
"text": "Figure 5",
"ref_id": null
},
{
"start": 561,
"end": 569,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "D.2 Is there a difference in performance between trees and graphs?",
"sec_num": null
},
{
"text": "first divide the AMR dev set into instances whose gold parse is a tree vs. those that are graph. We then compare the average of the per-instance F1 scores for edge presence as given by the predictions of the baseline system (Samuel and Straka, 2020) vs. those of our best disk embedding model. We confirm that indeed graphs are harder, and, in line with the results of dependency parsing, the gap be-tween the baseline system and our disk embedding formulation is larger for graphs (\u2206 =1.95) than trees (\u2206 =0.77).",
"cite_spans": [
{
"start": 224,
"end": 249,
"text": "(Samuel and Straka, 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "D.2 Is there a difference in performance between trees and graphs?",
"sec_num": null
},
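{
"text": "A sketch of the tree/graph split used in this comparison is given below (one reasonable operationalization, not necessarily the exact check used in our code): an instance counts as a tree when its gold parse is an arborescence, i.e., a directed rooted tree in which every non-root node has exactly one parent.\n\nimport networkx as nx\n\ndef split_by_structure(instances):\n    # instances: iterable of dicts with a 'gold' networkx.DiGraph (edges parent -> child)\n    # and per-instance F1 scores for the two systems under comparison.\n    trees, graphs = [], []\n    for inst in instances:\n        (trees if nx.is_arborescence(inst['gold']) else graphs).append(inst)\n    return trees, graphs\n\ndef mean_gap(instances, baseline_key, disk_key):\n    # Average per-instance F1 gap between the baseline and the disk embedding model.\n    return sum(inst[baseline_key] - inst[disk_key] for inst in instances) / len(instances)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.2 Is there a difference in performance between trees and graphs?",
"sec_num": null
},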
{
"text": "To answer this question, we plot the difference between the radii for all pairs of ancestor-descendant disks (|r i \u2212 r j |), as well as the norm of the difference between the center of a disk and the mean of all centers of the disks in a graph (||c i \u2212c||). We hypothesize that moving to the hyperbolic space is justified if an exponential growth is observed when the graphs get larger. However, Fig. 7 shows that this is not the case. Figure 7 : Boxplots analysis the absolute difference between radii of two disks (above) and the norm of the difference between the center of a disk in a graph and the average over the all disk centers in the same graph (below). Both are plotted against the number of nodes in a graph on the x axis.",
"cite_spans": [],
"ref_spans": [
{
"start": 396,
"end": 402,
"text": "Fig. 7",
"ref_id": null
},
{
"start": 436,
"end": 444,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "D.3 Do we need to move to hyperbolic space?",
"sec_num": null
},
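{
"text": "A small sketch of how the two quantities in Fig. 7 can be computed per graph is given below (assuming predicted disk centres and radii as NumPy arrays and a list of ancestor-descendant index pairs; the function names are illustrative).\n\nimport numpy as np\n\ndef disk_statistics(centers, radii, ancestor_descendant_pairs):\n    # centers: (n, d) array of disk centres; radii: (n,) array of disk radii.\n    # Returns |r_i - r_j| over ancestor-descendant pairs and ||c_i - c_bar|| over all disks.\n    radius_gaps = np.array([abs(radii[i] - radii[j]) for i, j in ancestor_descendant_pairs])\n    c_bar = centers.mean(axis=0)\n    center_norms = np.linalg.norm(centers - c_bar, axis=1)\n    return radius_gaps, center_norms\n\nGrouping these values by the number of nodes in each graph then yields the per-bin box plots of Fig. 7; under the hyperbolic-space argument one would expect them to grow quickly with graph size, which is not what we observe.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.3 Do we need to move to hyperbolic space?",
"sec_num": null
},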
{
"text": "Even though multi-dimensional disks are technically balls, we will refer to them as disks throughout the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The manner in which individual edges in a DAG are represented in a disk embedding is more subtle and is discussed later.3 Note that disk embeddings must necessarily be acyclic due to the subset relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also experimented with maximizing the probability of parent-child relations vs. others (|C| = 2) as well as including sisters relations (|C| = 3) but found that a 6-way classification consistently resulted in the best performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The parser ofK\u00e1d\u00e1r et al. (2021) is based on the parser of(Qi et al., 2018) whose codebase is provided at https: //github.com/stanfordnlp/stanfordnlp.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Nonetheless, our disk embedding module is parser agnostic and could be applied to auto-regressive models as well.7 https://github.com/ufal/perin",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/cfmrp/mtool 9 Property scores can overlap with edge presence scores in that they also assess edge prediction but only for property nodes.10 We use Mtool instead of SMATCH scoring(Cai and Knight, 2013) since Mtool provides a more fine-grained evaluation of the parser performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In a graph, where there could be multiple paths between a pair of nodes, we take the length of the longest path as the graph distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the three anonymous reviewers for their useful comments. Research was conducted at the Samsung AI Centre Toronto and funded by Samsung Research, Samsung Electronics Co.,Ltd",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "238--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 238-247.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "One spring to rule them both: Symmetric amr semantic parsing and generation without a complex pipeline",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Bevilacqua",
"suffix": ""
},
{
"first": "Rexhina",
"middle": [],
"last": "Blloshmi",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Bevilacqua, Rexhina Blloshmi, and Roberto Navigli. 2021. One spring to rule them both: Sym- metric amr semantic parsing and generation without a complex pipeline. In Proceedings of the Thirty- Fifth AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Smatch: an evaluation metric for semantic feature structures",
"authors": [
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "748--752",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shu Cai and Kevin Knight. 2013. Smatch: an evalua- tion metric for semantic feature structures. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 748-752.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Imposing category trees onto word-embeddings using a geometric construction",
"authors": [
{
"first": "Tiansi",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Chrisitan",
"middle": [],
"last": "Bauckhage",
"suffix": ""
},
{
"first": "Hailong",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Juanzi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Olaf",
"middle": [],
"last": "Cremers",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Speicher",
"suffix": ""
},
{
"first": "Armin",
"middle": [
"B"
],
"last": "Cremers",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Zimmermann",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tiansi Dong, Chrisitan Bauckhage, Hailong Jin, Juanzi Li, Olaf Cremers, Daniel Speicher, Armin B Cre- mers, and J\u00f6rg Zimmermann. 2018. Imposing cate- gory trees onto word-embeddings using a geometric construction. In International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Stanford's graph-based neural dependency parser at the CoNLL 2017 shared task",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "20--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford's graph-based neural dependency parser at the CoNLL 2017 shared task. In Proceed- ings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 20-30, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "WordNet: An Electronic Lexical Database",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Cambridge, MA: MIT Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Hyperbolic entailment cones for learning hierarchical embeddings",
"authors": [
{
"first": "Octavian",
"middle": [],
"last": "Ganea",
"suffix": ""
},
{
"first": "Gary",
"middle": [],
"last": "B\u00e9cigneul",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1646--1655",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Octavian Ganea, Gary B\u00e9cigneul, and Thomas Hof- mann. 2018. Hyperbolic entailment cones for learn- ing hierarchical embeddings. In International Con- ference on Machine Learning, pages 1646-1655. PMLR.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Dependency parsing with structure preserving embeddings",
"authors": [
{
"first": "\u00c1kos",
"middle": [],
"last": "K\u00e1d\u00e1r",
"suffix": ""
},
{
"first": "Lan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Mete",
"middle": [],
"last": "Kemertas",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Fancellu",
"suffix": ""
},
{
"first": "Allan",
"middle": [],
"last": "Jepson",
"suffix": ""
},
{
"first": "Afsaneh",
"middle": [],
"last": "Fazly",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1684--1697",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c1kos K\u00e1d\u00e1r, Lan Xiao, Mete Kemertas, Federico Fan- cellu, Allan Jepson, and Afsaneh Fazly. 2021. De- pendency parsing with structure preserving embed- dings. In Proceedings of the 16th Conference of the European Chapter of the Association for Computa- tional Linguistics: Main Volume, pages 1684-1697.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Abstract meaning representation (amr) annotation release 2.0",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Bianca",
"middle": [],
"last": "Badarau",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Banarescu",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "Madalina",
"middle": [],
"last": "Bardocz",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Griffitt",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "O'Gorman",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Knight, Bianca Badarau, Laura Banarescu, Claire Bonial, Madalina Bardocz, Kira Griffitt, Ulf Hermjakob, Daniel Marcu, Martha Palmer, Tim O'Gorman, et al. 2017. Abstract meaning repre- sentation (amr) annotation release 2.0. Technical report, Technical Report LDC2017T10, Linguistic Data Consortium, Philadelphia, PA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning to predict denotational probabilities for modeling entailment",
"authors": [
{
"first": "Alice",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "721--730",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alice Lai and Julia Hockenmaier. 2017. Learning to predict denotational probabilities for modeling en- tailment. In Proceedings of the 15th Conference of the European Chapter of the Association for Compu- tational Linguistics: Volume 1, Long Papers, pages 721-730.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Fast semantic parsing with well-typedness guarantees",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Lindemann",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Groschwitz",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.07365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthias Lindemann, Jonas Groschwitz, and Alexan- der Koller. 2020. Fast semantic parsing with well-typedness guarantees. arXiv preprint arXiv:2009.07365.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems, volume 26, pages 3111-3119. Curran As- sociates, Inc.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Poincar\u00e9 embeddings for learning hierarchical representations",
"authors": [
{
"first": "Maximillian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "30",
"issue": "",
"pages": "6338--6347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximillian Nickel and Douwe Kiela. 2017. Poincar\u00e9 embeddings for learning hierarchical representa- tions. Advances in neural information processing systems, 30:6338-6347.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Universal Dependencies v2: An evergrowing multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4034--4043",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Jan Haji\u010d, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034-4043, Mar- seille, France. European Language Resources Asso- ciation.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Hitachi at mrp 2020: Text-to-graph-notation transducer",
"authors": [
{
"first": "Hiroaki",
"middle": [],
"last": "Ozaki",
"suffix": ""
},
{
"first": "Gaku",
"middle": [],
"last": "Morio",
"suffix": ""
},
{
"first": "Yuta",
"middle": [],
"last": "Koreeda",
"suffix": ""
},
{
"first": "Terufumi",
"middle": [],
"last": "Morishita",
"suffix": ""
},
{
"first": "Toshinori",
"middle": [],
"last": "Miyoshi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing",
"volume": "",
"issue": "",
"pages": "40--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroaki Ozaki, Gaku Morio, Yuta Koreeda, Terufumi Morishita, and Toshinori Miyoshi. 2020. Hitachi at mrp 2020: Text-to-graph-notation transducer. In Proceedings of the CoNLL 2020 Shared Task: Cross- Framework Meaning Representation Parsing, pages 40-52.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Sgl: Speaking the graph languages of semantic parsing via multilingual translation",
"authors": [
{
"first": "Luigi",
"middle": [],
"last": "Procopio",
"suffix": ""
},
{
"first": "Rocco",
"middle": [],
"last": "Tripodi",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "325--337",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luigi Procopio, Rocco Tripodi, and Roberto Navigli. 2021. Sgl: Speaking the graph languages of se- mantic parsing via multilingual translation. In Pro- ceedings of the 2021 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 325-337.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Universal dependency parsing from scratch",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Timothy Dozat, Yuhao Zhang, and Christo- pher D Manning. 2018. Universal dependency pars- ing from scratch. In CoNLL 2018 Shared Task: Mul- tilingual Parsing from Raw Text to Universal Depen- dencies.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Ufal at mrp 2020: Permutation-invariant semantic parsing in perin",
"authors": [
{
"first": "David",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2011.00758"
]
},
"num": null,
"urls": [],
"raw_text": "David Samuel and Milan Straka. 2020. Ufal at mrp 2020: Permutation-invariant semantic parsing in perin. arXiv preprint arXiv:2011.00758.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Highway networks",
"authors": [
{
"first": "Rupesh",
"middle": [
"Kumar"
],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Greff",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Deep Learning Workshop at the International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rupesh Kumar Srivastava, Klaus Greff, , and J\u00fcrgen Schmidhuber. 2015. Highway networks. In Pro- ceedings of the Deep Learning Workshop at the In- ternational Conference on Machine Learning.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning the predictability of the future",
"authors": [
{
"first": "D\u00eddac",
"middle": [],
"last": "Sur\u00eds",
"suffix": ""
},
{
"first": "Ruoshi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Vondrick",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "12607--12617",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D\u00eddac Sur\u00eds, Ruoshi Liu, and Carl Vondrick. 2021. Learning the predictability of the future. In Proceed- ings of the IEEE/CVF Conference on Computer Vi- sion and Pattern Recognition, pages 12607-12617.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Hyperbolic disk embeddings for directed acyclic graphs",
"authors": [
{
"first": "Ryota",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Ryusuke",
"middle": [],
"last": "Takahama",
"suffix": ""
},
{
"first": "Shun",
"middle": [],
"last": "Onoda",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "6066--6075",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryota Suzuki, Ryusuke Takahama, and Shun Onoda. 2019. Hyperbolic disk embeddings for directed acyclic graphs. In International Conference on Ma- chine Learning, pages 6066-6075. PMLR.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sanja Fidler, and Raquel Urtasun",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vendrov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
}
],
"year": 2015,
"venue": "Order-embeddings of images and language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.06361"
]
},
"num": null,
"urls": [],
"raw_text": "Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Ur- tasun. 2015. Order-embeddings of images and lan- guage. arXiv preprint arXiv:1511.06361.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Probabilistic embedding of knowledge graphs with box lattice measures",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikhar",
"middle": [],
"last": "Murty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mc-Callum",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "263--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luke Vilnis, Xiang Li, Shikhar Murty, and Andrew Mc- Callum. 2018. Probabilistic embedding of knowl- edge graphs with box lattice measures. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 263-272.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Broad-coverage semantic parsing as transduction",
"authors": [
{
"first": "Sheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xutai",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.02607"
]
},
"num": null,
"urls": [],
"raw_text": "Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019. Broad-coverage semantic parsing as transduction. arXiv preprint arXiv:1909.02607.",
"links": null
}
},
"ref_entries": {
"FIGREF2": {
"type_str": "figure",
"text": "A detailed breakdown of % protrusion error (i.e., ij > \u2212\u03b1) for dependency and semantic parsing, for different values of graph distance (a); (b) shows this % for parent-child pairs (pairs with distance 1), categorized according to whether both, either or neither nodes are subjected to edge permutation.",
"uris": null,
"num": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Performance on edge prediction for our disk embedding formulation vs. the baseline semantic parser. Instances are divided based on the number of edge permutations computed (with 1 meaning no permutations), as shown on the X-axis.",
"uris": null,
"num": null
},
"FIGREF4": {
"type_str": "figure",
"text": "An example of a tight \"other\" relationship between nodes a and b in the situation described in Theorem B.1. Here |P 1 | = |P 2 | = 2, and a,b , is depicted by the length of the red line, where each hash mark denotes a subsegment of length \u03b1. Note the lower bound, namely a,b = [|P 1 | + |P 2 | + 1]\u03b1, is achieved.",
"uris": null,
"num": null
},
"TABREF0": {
"text": ", 5 which has been shown to perform on par with SOTA systems while relying on structurepreserving, interpretable representations. To generate the hidden representations h 1 , h 2 ...h |S| for a given sentence S=w 1 ...w |S| , a highway-BiLSTM encoder (Srivastava et al., 2015) takes as input a sequence of |S| embeddings x 1 , . . . , x |S| , where each",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF1": {
"text": "\u00b12.23) 87.70(\u00b1.14) 58.36(\u00b14.63) 89.44(\u00b1.69) 67.19(\u00b11.85) pos margin( 2 ) 87.64(\u00b1.15) 58.29(\u00b11.19) 87.77(\u00b1.21) 80.23(\u00b12.92) 90.03(\u00b1.35) 73.80(\u00b11.27) +struct 88.63(\u00b1.21) 61.33(\u00b11.07) 86.04(\u00b1.07) 81.95(\u00b11.15) 88.12(\u00b11.",
"content": "<table><tr><td/><td>dependency</td><td/><td/><td>semantic</td><td/><td/></tr><tr><td>system</td><td>UAS</td><td>edge</td><td>label</td><td>prop</td><td>top</td><td>all</td></tr><tr><td>K\u00e1d\u00e1r et al. (2021)</td><td>91.62(\u00b1.33)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Samuel and Straka (2020)</td><td>-</td><td colspan=\"4\">70.14(\u00b1.36) 88.15(\u00b1.05) 87.34(\u00b1.25) 90.37(\u00b1.11)</td><td>80.28(\u00b1.21)</td></tr><tr><td>original( 2 )</td><td>70(\u00b1.94)</td><td colspan=\"4\">46.98(17)</td><td>74.36(\u00b1.9)</td></tr><tr><td>+sampling</td><td>88.79(\u00b1.37)</td><td colspan=\"4\">59.58(\u00b1.23) 87.76(\u00b1.4) 80.78(\u00b1.78) 89.59(\u00b1.64)</td><td>74.38(\u00b1.15)</td></tr><tr><td>+both</td><td>89.63(\u00b1.16)</td><td colspan=\"4\">62.47(\u00b1.33) 86.15(\u00b1.18) 81.57(\u00b12.88) 89.91(\u00b1.17)</td><td>75.26(\u00b1.4)</td></tr><tr><td>pos margin( 1 )</td><td>87.88(\u00b1.21)</td><td>58.1(\u00b1.28)</td><td>87.50(\u00b1.09)</td><td>80(\u00b12.69)</td><td colspan=\"2\">89.02(\u00b11.14) 73.68(\u00b1.18)</td></tr><tr><td>+struct</td><td>89.23(\u00b1.31)</td><td>62.14(\u00b1.49)</td><td colspan=\"4\">86.2(\u00b1.26) 81.58(\u00b12.31) 88.57(\u00b11.32) 75.02(\u00b1.52)</td></tr><tr><td>+sampling</td><td>88.91(\u00b1.05)</td><td>59.49(\u00b1.38)</td><td>87.7(\u00b1.4)</td><td colspan=\"3\">80.52(\u00b11.43) 89.40(\u00b11.59) 74.36(\u00b1.31)</td></tr><tr><td>+both</td><td>90.08(\u00b1.21)</td><td>63.78(\u00b1.07)</td><td>86(\u00b1.20)</td><td colspan=\"2\">83.12(\u00b1.28) 88.93(\u00b11.1)</td><td>75.83(\u00b1.02)</td></tr><tr><td>+both+shared</td><td>90.02(\u00b1.01)</td><td colspan=\"4\">63.75(\u00b1.5) 86.01(\u00b1.12) 83.04(\u00b11.31) 89.88(\u00b1.7)</td><td>75.89(\u00b1.15)</td></tr><tr><td>+both+shared(iter2)</td><td>90.27(\u00b1.01)</td><td colspan=\"4\">63.22(\u00b1.8) 86.83(\u00b1.21) 83.73(\u00b11.46) 88.98(\u00b12.1)</td><td>76.51(\u00b1.6)</td></tr><tr><td>+both+shared(iter3)</td><td>90.12(\u00b1.12)</td><td colspan=\"5\">63.57(\u00b11.6) 86.76(\u00b1.13) 83.34(\u00b13.21) 90.61(\u00b1.36) 76.24(\u00b11.21)</td></tr><tr><td>MLE</td><td>85.61(\u00b12.63)</td><td colspan=\"3\">56.52(\u00b1.12) 87.71(\u00b1.11) 79.15(\u00b12.3)</td><td>90.20(\u00b1.1)</td><td>72.60(\u00b1.7)</td></tr><tr><td>+struct</td><td>87.98(\u00b1.46)</td><td colspan=\"4\">59.74(\u00b1.08) 85.61(\u00b1.16) 80.37(\u00b11.63) 90.57(\u00b1.2)</td><td>73.42(\u00b1.92)</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF2": {
"text": "Results for dependency and semantic parsing on dev set to evaluate the impact of different settings and loss functions.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF4": {
"text": "Comparison between our best dependency and semantic parsers with a disk embedding loss, ours, and the baselines, namely K(2021)",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF5": {
"text": "Algorithm 1: compute_margin(G, n i , n j ) Input: graph G, nodes ni and nj in 'other' relation Output: maximum lower bound \u03c3ij Initialize q with all ancestors of ni, including ni; Initialize \u03c3i,j = 1; while q is not empty do ny = q.pop(); if nj is reachable from ny then",
"content": "<table><tr><td>continue;</td></tr><tr><td>else</td></tr><tr><td>G = copy(G);</td></tr><tr><td>Remove all descendants of ny in G ;</td></tr><tr><td>lgj = 0;</td></tr><tr><td>for each descendant nw of nj in G do</td></tr><tr><td>if longest_path(nw, nj) &gt; lgj then</td></tr><tr><td>lgj = longest_path(nw, nj);</td></tr><tr><td>ng = nw;</td></tr><tr><td>end</td></tr><tr><td>lyi = longest_path(ni, ny);</td></tr><tr><td>\u03c3ij = max(\u03c3ij, lyi + lgj + 1);</td></tr><tr><td>end</td></tr><tr><td>end</td></tr><tr><td>C Using a tailored bound on protrusion</td></tr><tr><td>Eq. 4 can then modified to include \u03c3 ij as shown</td></tr><tr><td>below:</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF6": {
"text": "(\u00b1.36) 88.15(\u00b1.05) 87.34(\u00b1.25) 90.37(\u00b1.11) 80.28(\u00b1.21) original( 2 ) 70(\u00b1.94) 46.98(\u00b12.23) 87.70(\u00b1.14) 58.36(\u00b14.63) 89.44(\u00b1.69) 67.19(\u00b11.85) pos margin( 1 )+both 90.08(\u00b1.21) 63.78(\u00b1.07) 86(\u00b1.20) 83.12(\u00b1.28) 88.93(\u00b11.1) 75.83(\u00b1.02) +tailored ( 2 ) 88.6(\u00b1.65) 52.13(\u00b11.02) 88.79(\u00b1.07) 74.09(\u00b12.2) 89.66(\u00b1.92) 69.95(\u00b1.33)",
"content": "<table><tr><td/><td>dependency</td><td/><td/><td>semantic</td><td/><td/></tr><tr><td>system</td><td>UAS</td><td>edge</td><td>label</td><td>prop</td><td>top</td><td>all</td></tr><tr><td>K\u00e1d\u00e1r et al. (2021)</td><td>91.62(\u00b1.33)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan=\"6\">Samuel and Straka (2020) 70.14+struct -89.21(\u00b1.05) 61.37(\u00b1.35) 86.67(\u00b11.75) 81.86(\u00b11.31) 88.97(\u00b1.92)</td><td>74.70(\u00b1.31)</td></tr><tr><td>+sampling</td><td>89.14(\u00b1.91)</td><td colspan=\"2\">54.47(\u00b11.12) 87.94(\u00b1.1)</td><td>76(\u00b11.15)</td><td colspan=\"2\">88.93(\u00b11.56) 70.91(\u00b1.38)</td></tr><tr><td>+both</td><td>89.87(\u00b1.64)</td><td colspan=\"3\">62.91(\u00b1.34) 86.91(\u00b1.01) 83.42(\u00b1.42)</td><td>89.2(\u00b1.89)</td><td>75.54(\u00b1.35)</td></tr><tr><td>+tailored ( 1 )</td><td>88.65(\u00b1.16)</td><td colspan=\"4\">52.38(\u00b1.58) 88.84(\u00b1.25) 73.44(\u00b11.78) 89.81(\u00b1.87)</td><td>69.72(\u00b1.71)</td></tr><tr><td>+struct</td><td>89.4(\u00b1.04)</td><td colspan=\"5\">61.92(\u00b1.17) 86.73(\u00b1.15) 82.40(\u00b1.62) 89.64(\u00b11.62) 75.05(\u00b1.06)</td></tr><tr><td>+sampling</td><td>89.12(\u00b1.43)</td><td colspan=\"4\">53.98(\u00b1.68) 87.78(\u00b1.11) 75.98(\u00b11.92) 89.71(\u00b1.34)</td><td>70.82(\u00b1.18)</td></tr><tr><td>+both</td><td>89.92(\u00b1.4)</td><td colspan=\"4\">62.89(\u00b1.65) 86.89(\u00b1.11) 82.61(\u00b11.02) 89.11(\u00b1.83)</td><td>75.60(\u00b1.43)</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}