{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:24:18.185402Z"
},
"title": "A Transition-based Parser for Unscoped Episodic Logical Forms",
"authors": [
{
"first": "Gene",
"middle": [],
"last": "Louis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Rochester",
"location": {}
},
"email": ""
},
{
"first": "Viet",
"middle": [],
"last": "Duong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Rochester",
"location": {}
},
"email": "vduong\u2666@u.rochester.edu"
},
{
"first": "Lenhart",
"middle": [],
"last": "Schubert",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Rochester",
"location": {}
},
"email": "schubert\u2663@cs.rochester.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "\"Episodic Logic: Unscoped Logical Form\" (EL-ULF) is a semantic representation capturing predicate-argument structure as well as more challenging aspects of language within the Episodic Logic formalism. We present the first learned approach for parsing sentences into ULFs, using a growing set of annotated examples. The results provide a strong baseline for future improvement. Our method learns a sequence-to-sequence model for predicting the transition action sequence within a modified cache transition system. We evaluate the efficacy of type grammar-based constraints, a word-to-symbol lexicon, and transition system state features in this task. Our system is available at https://github.com/genelkim/ ulf-transition-parser. We also present the first official annotated ULF dataset at",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "\"Episodic Logic: Unscoped Logical Form\" (EL-ULF) is a semantic representation capturing predicate-argument structure as well as more challenging aspects of language within the Episodic Logic formalism. We present the first learned approach for parsing sentences into ULFs, using a growing set of annotated examples. The results provide a strong baseline for future improvement. Our method learns a sequence-to-sequence model for predicting the transition action sequence within a modified cache transition system. We evaluate the efficacy of type grammar-based constraints, a word-to-symbol lexicon, and transition system state features in this task. Our system is available at https://github.com/genelkim/ ulf-transition-parser. We also present the first official annotated ULF dataset at",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "EL-ULF was recently introduced as a semantic representation that accurately captures linguistic semantic structure within an expressive logical formalism, while staying close to the surface language, facilitating annotation of a dataset that can be used to train a parser . The goal is to overcome the limitations of fragile rulebased systems, such as the Episodic Logic (EL) parser used for gloss axiomatization (Kim and Schubert, 2016) and domain-specific ULF parsers used for schema generation and dialogue systems (Lawley et al., 2019; Platonov et al., 2020) . EL's rich model-theoretic semantics enables deductive inference, uncertain inference, and natural logic-like inference (Morbini and Schubert, 2009; Schubert and Hwang, 2000; Schubert, 2014) ; and the unscoped version, EL-ULF, supports Natural Logic-like monotonic inferences (Kim et al., 2020) (i.pro ((pres want.v) (to (dance.v (adv-a (in.p (my.d ((mod-n new.a) (plur shoe.n))))))))) and inferences based on some classes of entailments, presuppositions, and implicatures which are common in discourse . The lack of robust parsers have prevented large scale experiments using these powerful representations. We will refer to EL-ULF as simply ULF in the rest of this paper.",
"cite_spans": [
{
"start": 413,
"end": 437,
"text": "(Kim and Schubert, 2016)",
"ref_id": "BIBREF18"
},
{
"start": 518,
"end": 539,
"text": "(Lawley et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 540,
"end": 562,
"text": "Platonov et al., 2020)",
"ref_id": "BIBREF33"
},
{
"start": 684,
"end": 712,
"text": "(Morbini and Schubert, 2009;",
"ref_id": "BIBREF27"
},
{
"start": 713,
"end": 738,
"text": "Schubert and Hwang, 2000;",
"ref_id": "BIBREF37"
},
{
"start": 739,
"end": 754,
"text": "Schubert, 2014)",
"ref_id": "BIBREF34"
},
{
"start": 840,
"end": 858,
"text": "(Kim et al., 2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present the first system that learns to parse ULFs of English sentences from an annotated dataset, and provide the first official release of the annotated ULF corpus, whereon our system is trained. We evaluate the parser using SEMBLEU (Song and Gildea, 2019 ) and a modified version of SMATCH (Cai and Knight, 2013) , establishing a baseline for future work.",
"cite_spans": [
{
"start": 252,
"end": 274,
"text": "(Song and Gildea, 2019",
"ref_id": "BIBREF39"
},
{
"start": 310,
"end": 332,
"text": "(Cai and Knight, 2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An initial effort in learning a parser producing a representation as rich as ULF is bound to face a data sparsity issue. 1 Thus a major goal in our choice of a transition-system-based parser has been to reduce the search space of the model. We investigate three additional methods of tackling this issue: (1) constraining actions in the decoding phase based on faithfulness to the ULF type system, (2) using a lexicon to limit the possible word-aligned symbols that the parser can generate, and (3) defining learnable features of the transition system state.",
"cite_spans": [
{
"start": 121,
"end": 122,
"text": "1",
"ref_id": "BIBREF44"
},
{
"start": 305,
"end": 308,
"text": "(1)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Episodic Logic is an extension of first-order logic (FOL) that closely matches the form and ex-pressivity of natural language, using reifying operators to enrich the domain of basic individuals and situations with propositions and kinds, keeping the logic first-order. It also uses other type-shifters, e.g., for mapping predicates to modifiers, and allows for generalized quantifiers . ULF fully specifies the semantic type structure of EL by marking the types of the atoms and all of the predicate-argument relationships while leaving operator scope, anaphora, and word sense unresolved . ULF is the critical first step to parsing full-fledged EL formulas. Types are marked on ULF atoms with a suffixed tag resembling the part-of-speech (e.g., .v, .n, .pro, .d for verbs, nouns, pronouns, and determiners, respectively) . Names are instead marked with pipes (e.g. |John|) and a closed set of logical and macro operators have unique types and are left without a type marking. Each suffix denotes a set of possible semantic denotations, e.g. .pro always denotes an entity and .v denotes an n-ary predicate where n can vary. The symbol without the suffix or pipes is called the stem.",
"cite_spans": [
{
"start": 739,
"end": 821,
"text": "(e.g., .v, .n, .pro, .d for verbs, nouns, pronouns, and determiners, respectively)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unscoped Logical Form",
"sec_num": "2"
},
{
"text": "Type shifters in ULF maintain coherence of the semantic type compositions. For example, the type shifter adv-a maps a predicate into a verbal predicate modifier as in the prepositional phrase \"in my new shoes\" in Figure 1 , as opposed to its predicative use \"A spider is in my new shoes\".",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 221,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Unscoped Logical Form",
"sec_num": "2"
},
{
"text": "The syntactic structure is closely reflected in ULF even under syntactic movement through the use of rewriting macros which explicitly mark these occurrences and upon expansion make the exact semantic argument structure available. Also, further resembling syntactic structure, ULFs are trees. The operators in operator-argument relations of ULF can be in first or second position, disambiguated by the types of the participating expressions. This further reduces the amount of word reordering between English and ULFs. The EL type system only allows function application for combining types, A, B , A \u2192 B, much like Montagovian semantics (Montague, 1970) , but without type-raising.",
"cite_spans": [
{
"start": 638,
"end": 654,
"text": "(Montague, 1970)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unscoped Logical Form",
"sec_num": "2"
},
{
"text": "Currently, there is semantic parsing research occurring on multiple representational fronts, which is showcased by the cross-framework meaning representation parsing task (Oepen et al., 2019) . The key differentiating factor of ULF from other meaning representations is the model-theoretic expres-sive capacity. To highlight this, here are a few limitations of notable representations: AMR (Banarescu et al., 2013a) neglects issues such as articles, tense, and nonintersective modification in favor of a canonicalized form that abstracts away from the surface structure; Minimal Recursion Semantics (Copestake et al., 2005) captures metalevel semantics for which inference systems cannot be built directly based on model-theoretic notions of truth and entailment; and extant semantic parsers for DRSs generate FOL-equivalent LFs, thus precludes proper treatment of phenomena such as generalized quantifiers, modification, and reification. Due to space limitations, we refer to for an in-depth description and motivation of ULF, including comparisons to other representations. We also refer to Schubert (2015) which places EL-the antecedent of ULF-in a broad context.",
"cite_spans": [
{
"start": 171,
"end": 191,
"text": "(Oepen et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 390,
"end": 415,
"text": "(Banarescu et al., 2013a)",
"ref_id": "BIBREF5"
},
{
"start": 599,
"end": 623,
"text": "(Copestake et al., 2005)",
"ref_id": "BIBREF11"
},
{
"start": 1093,
"end": 1108,
"text": "Schubert (2015)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "Our ULF parser development draws inspiration from the body of semantic parsing research on graph-based formalism of natural language, in particular, the recent advances in AMR parsing Zhang et al., 2019a) . The core organization of our parser is based on , which uses a sequence-to-sequence model to predict the transition action sequence for a cache transition system with transition system features and hard attention alignment.",
"cite_spans": [
{
"start": 184,
"end": 204,
"text": "Zhang et al., 2019a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "There are many transition-based parsers that were developed for parsing meaning representations (Zhang et al., 2016; Buys and Blunsom, 2017; Damonte et al., 2017; Hershcovich et al., 2017) . These are mainly based on what's called an arceager parsing method, termed by Abney and Johnson (1991) . Arc-eager parsing greedily adds edges between nodes before full constituents are formed, which keeps the partial graph as connected as possible during the parsing process (Nivre, 2004) . They modify arc-eager parsing in various ways to generalize to the graph structures. Our transition system can be considered a modification of bottomup arc-standard parsing due to restrictions on arc formation. While this leads to a longer action sequence for parsing, the parser's access to complete constituents allows promotion-based symbol generation for unary operators such as type shifters and standard bottom-up type analysis for constrained parsing. ",
"cite_spans": [
{
"start": 96,
"end": 116,
"text": "(Zhang et al., 2016;",
"ref_id": "BIBREF43"
},
{
"start": 117,
"end": 140,
"text": "Buys and Blunsom, 2017;",
"ref_id": "BIBREF7"
},
{
"start": 141,
"end": 162,
"text": "Damonte et al., 2017;",
"ref_id": "BIBREF12"
},
{
"start": 163,
"end": 188,
"text": "Hershcovich et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 279,
"end": 293,
"text": "Johnson (1991)",
"ref_id": null
},
{
"start": 467,
"end": 480,
"text": "(Nivre, 2004)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "Our transition system is a modification of the cache transition system which has been shown to be effective in AMR parsing . The distinctive aspect of our version is that the transition system generates nodes that are derived, but distinct, from the input sequence. We call it a node generative transition system. This eliminates the two-stage parsing framework of . Our transition system also restricts the parses to be bottom-up to enable node generation and decoding constraints by the available constituents since ULF has an bottom-up compositional type system. The transition parser configuration is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Transition System",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C = (\u03c3, \u03b7, \u03b2, G p )",
"eq_num": "(1)"
}
],
"section": "Our Transition System",
"sec_num": "4"
},
{
"text": "where \u03c3 is the stack, \u03b7 is the cache, \u03b2 is the buffer, and G_p is the partial graph. The parser is initialized with ([], [$, ..., $], [w_1, ..., w_n], \u2205), that is, an empty stack, the cache with null values ($), the buffer with the input sequence of words, where each word is a (token, lemma, POS) tuple, w_i = (t_i, l_i, p_i), and an empty partial graph, G_p = (V_p, E_p), where V_p is ordered. A vertex, v_i = (s_i, a_i) \u2208 V_p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Transition System",
"sec_num": "4"
},
{
"text": ", is a pair of a ULF symbol s i , and its alignment a i -the index of the word from which s i was produced. We will refer to the leftmost element in \u03b2 as w next . While the size of the cache is a hyperparameter that can be set for the cache transition parser, we restrict the cache size to 2 in order to keep the oracle simple despite the newly added actions. This means that only tree structures can be parsed. In describing the transition system, we differentiate between phases and actions. Phases are classes of states in the transition system and the actions move between states. Figure 2 shows the full state transition diagram and shows how the phases dictate which actions can be taken and how actions move between phases. Actions may take variables to specify how to move into the next phase. Phases also determine which features go into the determining the next action. We will write phases in small caps (e.g. GEN) and actions in bold (e.g. TokenGen) for clarity.",
"cite_spans": [],
"ref_spans": [
{
"start": 585,
"end": 593,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Our Transition System",
"sec_num": "4"
},
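{
"text": "To make the parser configuration concrete, here is a minimal Python sketch (our own illustration, not the released implementation) of C = (\u03c3, \u03b7, \u03b2, G_p), with a cache of size 2, the null cache value $ written as None, and each buffer entry a (token, lemma, POS) tuple.\n\n```python\nfrom dataclasses import dataclass, field\nfrom typing import List, Optional, Tuple\n\nWord = Tuple[str, str, str]          # w_i = (t_i, l_i, p_i)\nVertex = Tuple[str, Optional[int]]   # (ULF symbol s_i, aligned word index a_i or None)\nEdge = Tuple[int, int, str]          # (parent vertex index, child vertex index, label)\n\n@dataclass\nclass ParserConfig:\n    # C = (sigma, eta, beta, G_p): stack, cache, buffer, and partial graph (V_p, E_p)\n    stack: List[Tuple[int, int]] = field(default_factory=list)    # (cache index, vertex index) pairs\n    cache: List[Optional[int]] = field(default_factory=lambda: [None, None])  # vertex indices; None is $\n    buffer: List[Word] = field(default_factory=list)\n    vertices: List[Vertex] = field(default_factory=list)          # V_p (ordered)\n    edges: List[Edge] = field(default_factory=list)               # E_p\n\ndef initial_config(words: List[Word]) -> ParserConfig:\n    # ([], [$, $], [w_1, ..., w_n], empty graph)\n    return ParserConfig(buffer=list(words))\n\ncfg = initial_config([('Do', 'do', 'VBP'), ('you', 'you', 'PRP'), ('want', 'want', 'VB')])\nprint(cfg.stack, cfg.cache, cfg.buffer[0])  # [] [None, None] ('Do', 'do', 'VBP')\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Transition System",
"sec_num": "4"
},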
{
"text": "The GEN and PROMOTE phases are novel to our transition system. The GEN phase generates graph vertices that are transformations of the buffer values. This allows us to put words of the input sentence in \u03b2 instead of a pre-computed ULF atom sequence. The PROMOTE phase enables contextsensitive symbol generation. It generates unaligned symbols in the context of an existing constituent in the partial graph. (Use of logical operators without word alignments only makes sense with respect to something for the operators to act on.) We now describe each of the actions in the transition system. The following are parser actions that were almost directly inherited from the vanilla cache transition parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Transition System",
"sec_num": "4"
},
{
"text": "\u2022 PushIndex(i) pushes (i, v) onto \u03c3, where v is the vertex currently at index i of \u03b7. Then it moves the vertex generated by the prior GEN phase to index i in \u03b7. \u2022 Arc(i, d, l) forms an arc with label l in direction d (i.e. left or right) between the vertex at index i of the cache and the rightmost vertex in the cache. The NoArc action is used if no arc is made. \u2022 Pop pops (i, v) from \u03c3 where i is the index of \u03b7 which v came from. v is placed at index i of \u03b7 and shifts the appropriate elements to the right.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Transition System",
"sec_num": "4"
},
{
"text": "We introduce two sets, S p and S s , which define the vocabulary of the two unaligned symbol generation actions: PromoteSym and SymGen, respectively. S p consists of logical and macro operators that do not align with English words. S s consists of symbols that could not be aligned in the training set and are not members of S p .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Transition System",
"sec_num": "4"
},
{
"text": "PROMOTE includes a subordinate PROMOTEARC phase for modularizing the parsing decision. The following parsing actions are in this phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Promotion-based Symbol Generation",
"sec_num": "4.1"
},
{
"text": "\u2022 PromoteSym(s p ) generates a promotion symbol, Figure 3 : Example run of the transition system running on the sentence \"Do you want to see me?\" from our parser. The left four columns show the parser configuration after taking the actions shown in the rightmost column. We make the following modifications for brevity. When a WordGen action takes place, it is always followed by one of Name, Lemma, or Token and then a Suffix(e) action. Thus we omit the WordGen and Suffix actions and transfer the argument of Suffix to the Name, Lemma, or Token action. \"Promote\" is abbreviated as \"P\" (e.g., PromoteSym as PSym) and PushIdx as Push. Stack item indices (i, v) are written as v i instead. \u03c7 and \u03b9 stand for COMPLEX and INSTANCE which are the special node and edge labels, respectively, for constructing non-atomic ULF operators in penman format. Edge labels arg0 and arg1 simply indicate the argument position in ULF.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 57,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Promotion-based Symbol Generation",
"sec_num": "4.1"
},
{
"text": "E_n = {e_i | 0 \u2264 i < n}, where e_0 = (do.aux-s <-arg0- pres), e_1 = (pres <-\u03b9- \u03c7_0), e_2 = (\u03c7_0 -arg0-> you), e_3 = (see.v -arg0-> me.pro), e_4 = (to -arg0-> see.v), e_5 = (want.v -arg0-> to), e_6 = (\u03c7_0 -arg0-> want.v), e_7 = (\u03c7_0 <-\u03b9- \u03c7_1), e_8 = (\u03c7_1 -arg0-> ?).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Promotion-based Symbol Generation",
"sec_num": "4.1"
},
{
"text": "\u2022 PromoteSym(s_p) generates a promotion symbol, s_p \u2208 S_p, appends the vertex (s_p, NONE) to V_p, and proceeds to the PROMOTEARC phase. \u2022 NoPromote skips the PROMOTE phase and proceeds to the POP phase. \u2022 PromoteArc(l) makes an arc from the last added vertex, v_p, to the vertex at the rightmost position of the cache, v_\u03b7r, by adding (v_p, v_\u03b7r, l) to E_p. v_p then takes the place of v_\u03b7r in the cache and v_\u03b7r is no longer accessible by the transition system. The system proceeds to the ARC phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Promotion-based Symbol Generation",
"sec_num": "4.1"
},
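{
"text": "To illustrate the promotion mechanism, here is a minimal list-based sketch (invented helper names, not the released code) of a PromoteSym followed by a PromoteArc: a new unaligned parent vertex is created and then takes over the rightmost cache slot.\n\n```python\ndef promote(cache, vertices, edges, symbol, label):\n    # PromoteSym(symbol): append the unaligned vertex (symbol, None) to V_p.\n    # PromoteArc(label): add an edge from the new vertex to the rightmost cache\n    # vertex, then let the new vertex take that cache slot.\n    child = cache[-1]\n    vertices.append((symbol, None))\n    parent = len(vertices) - 1\n    edges.append((parent, child, label))\n    cache[-1] = parent\n    return parent\n\nvertices = [('do.aux-s', 0)]   # an already-built constituent aligned to word 0\nedges = []\ncache = [None, 0]              # rightmost cache slot holds vertex 0\npromote(cache, vertices, edges, 'pres', 'arg0')\nprint(vertices, edges, cache)  # [('do.aux-s', 0), ('pres', None)] [(1, 0, 'arg0')] [None, 1]\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Promotion-based Symbol Generation",
"sec_num": "4.1"
},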
{
"text": "We replace the Shift action with the GEN phase to generate ULF atoms based on the tokenized text input. This phase allows the parser to generate a symbol using w next as a foundation, or generate an arbitrary symbol that is not aligned to any word in \u03b2. GEN includes subordinate phases WORD-GEN, LEMMAGEN, TOKENGEN, and NAMEGEN for modularizing the decision process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Symbol Generation",
"sec_num": "4.2"
},
{
"text": "\u2022 WordGen proceeds to WORDGEN phase, in which the following actions are available. 1. Name proceeds to the NAMEGEN phase. 2. Lemma proceeds to the LEMMAGEN phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Symbol Generation",
"sec_num": "4.2"
},
{
"text": "3. Token proceeds to the TOKENGEN phase. \u2022 Suffix(e) is the only action available in the NAMEGEN, LEMMAGEN, and TOKENGEN phases. It generates a symbol s consisting of a stem and suffix extension e from w next . In the NAMEGEN phase, the stem is t next with surrounding pipes; in the TOKENGEN phase, the stem is t next ; and in the LEMMAGEN phase, the stem is l next . (s, i) where i is the index of w next is added to V p and we move forward one word in \u03b2. The system proceeds to the PUSH phase. \u2022 SymGen(s) adds an unaligned symbol (s, NONE) to V p and proceeds to the PUSH phase. \u2022 SkipWord skips word in \u03b2 and returns to the GEN phase. \u2022 MergeBuf takes w next and merges it with the word after it w next+1 . This is stored at the front of the buffer as a pair (v \u03b2 , v \u03b2+1 ). This forms a single stem with a space delimiter in the NAMEGEN phase and an underscore delimiter in the LEM-MAGEN and TOKENGEN phases. The system returns to the GEN phase. This is used to handle multi-word expressions (e.g. \"had better\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Symbol Generation",
"sec_num": "4.2"
},
{
"text": "The transition system begins in the GEN phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Symbol Generation",
"sec_num": "4.2"
},
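{
"text": "As a concrete illustration of GEN-phase symbol construction, the sketch below (a hypothetical helper of our own, not the released code) shows how a Suffix(e) action could form a ULF atom from w_next depending on the subordinate phase.\n\n```python\nfrom typing import Tuple\n\nWord = Tuple[str, str, str]  # (token, lemma, POS)\n\ndef suffix_action(word: Word, phase: str, ext: str) -> str:\n    # phase is one of 'NAMEGEN', 'TOKENGEN', 'LEMMAGEN'; ext is the suffix extension e\n    token, lemma, _ = word\n    if phase == 'NAMEGEN':\n        stem = '|' + token + '|'   # names: the token with surrounding pipes\n    elif phase == 'TOKENGEN':\n        stem = token               # stem is t_next\n    else:\n        stem = lemma               # 'LEMMAGEN': stem is l_next\n    return stem + ext\n\nprint(suffix_action(('shoes', 'shoe', 'NNS'), 'LEMMAGEN', '.n'))     # shoe.n\nprint(suffix_action(('dancing', 'dance', 'VBG'), 'LEMMAGEN', '.v'))  # dance.v\nprint(suffix_action(('John', 'John', 'NNP'), 'NAMEGEN', ''))         # |John|\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequential Symbol Generation",
"sec_num": "4.2"
},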
{
"text": "In order to train a model of the parser actions, we need to extract the desired action sequences from gold graphs. We modify the oracle extraction algorithm for the vanilla cache transition parser, described in prior work. The oracle starts with a gold graph G_g = (V_g, E_g) and maintains the partial graph G_p = (V_p, E_p) of the parsing process, where V_g is sequenced by the preorder traversal of G_g. The oracle maintains s_next, the symbol in the foremost vertex of V_g that has not yet been added to G_p. The oracle begins with a transition system configuration, C, initialized with the input sequence, w_1, ..., w_n. The oracle is also provided with an approximate alignment, A = {(w_i, v_j) | 1 \u2264 i \u2264 n, 1 \u2264 j \u2264 m},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Extraction Algorithm",
"sec_num": "4.3"
},
{
"text": "between the input sequence, w i:n , to the nodes in the gold graph, V g , |V g | = m, which is generated with a greedy matching algorithm. The matching algorithm uses a manually-tuned similarity heuristic built on the superficial similarity of English words, POS, and word order to the stems, suffixes, and preorder positions of the corresponding ULF atoms. A complete description of the alignment algorithm is in appendix B. This alignment is not necessary to maintain correctness of the oracle, but it is used to cut the losses when the input words become out of sync with the gold graph vertex order. 2 Steps 5-7 of the GEN phase uses the alignments to identify whether the buffer or the vertex order is ahead of the other and appropriately sync them back together.",
"cite_spans": [
{
"start": 604,
"end": 605,
"text": "2",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Extraction Algorithm",
"sec_num": "4.3"
},
{
"text": "The oracle uses the following procedure, broken down by parsing phase, to extract the action sequence to build the G p = G g with C and A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Extraction Algorithm",
"sec_num": "4.3"
},
{
"text": "\u2022 GEN phase: Let b = Stem(s_next), e = Suffix(s_next), n = IsName(s_next). 3 1. If n and t_next = b, NameGen(e). 2. If not n and t_next =_i b, TokenGen(e). 3. If not n and l_next =_i b, LemmaGen(e). 4. MergeBuf if n and Pre(Concat(t_next, \" \", t_next+1), b), or not n and Pre_i(Concat(l_next, \" \", l_next+1), b), or not n and Pre_i(Concat(t_next, \" \", t_next+1), b). 5. If (w_i, v_next) \u2208 A for some w_i before w_next, or v_next \u2208 S_s, then SymGen(v_next). 6. If (w_next, v_j) \u2208 A for some v_j which comes after v_next, or v_j \u2208 V_p, then SkipWord. 7. Otherwise, SymGen(v_next).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Extraction Algorithm",
"sec_num": "4.3"
},
{
"text": "Step 5-7 allow the oracle to handle the generation of symbols that are not in word order, by skipping any words that come earlier than the symbol order; and generating symbols that cannot be aligned with SymGen for any reason.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Extraction Algorithm",
"sec_num": "4.3"
},
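{
"text": "The GEN-phase oracle rules can be read as a decision cascade. The sketch below mirrors steps 1-7 under simplifying assumptions: the MergeBuf multi-word check of step 4 is omitted, and the two alignment tests of steps 5 and 6 are passed in as booleans.\n\n```python\ndef oracle_gen_action(s_next, w_next, S_s,\n                      symbol_aligned_to_earlier_word, word_aligned_to_later_symbol):\n    # s_next: (stem, suffix, is_name) of the next gold symbol not yet in G_p\n    # w_next: (token, lemma, POS) at the front of the buffer\n    stem, suffix, is_name = s_next\n    token, lemma, _ = w_next\n    if is_name and token == stem:\n        return ('NameGen', suffix)                     # step 1\n    if not is_name and token.lower() == stem.lower():\n        return ('TokenGen', suffix)                    # step 2 (=_i: case-insensitive match)\n    if not is_name and lemma.lower() == stem.lower():\n        return ('LemmaGen', suffix)                    # step 3\n    # step 4 (MergeBuf for multi-word stems) omitted for brevity\n    if symbol_aligned_to_earlier_word or stem + suffix in S_s:\n        return ('SymGen', stem + suffix)               # step 5\n    if word_aligned_to_later_symbol:\n        return ('SkipWord', None)                      # step 6\n    return ('SymGen', stem + suffix)                   # step 7\n\nprint(oracle_gen_action(('shoe', '.n', False), ('shoes', 'shoe', 'NNS'),\n                        set(), False, False))          # ('LemmaGen', '.n')\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Extraction Algorithm",
"sec_num": "4.3"
},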
{
"text": "\u2022 PUSH phase: The push phase of the vanilla cache transition parser's oracle-viz., choosing the cache position whose closest edge into \u03b2 is farthest away-is extended to account not only for direct edges, but also for paths that include only unaligned-symbols. 4 \u2022 ARC phase: The vanilla cache transition system's rule of generating the ARC action for any edge, e \u2208 E g \u2227 e / \u2208 E p between the rightmost cache position and the other positions, is extended to also require the child vertex to be fully formed.",
"cite_spans": [
{
"start": 260,
"end": 261,
"text": "4",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Extraction Algorithm",
"sec_num": "4.3"
},
{
"text": "That is, for the vertex v_child, |descendants(v_child, G_g)| = |descendants(v_child, G_p)|. This enforces bottom-up parsing, which is necessary for both the promotion-based symbol generation and the type composition constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Extraction Algorithm",
"sec_num": "4.3"
},
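{
"text": "A minimal sketch of the fully-formed test used above, comparing descendant counts of a vertex in the gold graph and in the partial graph (toy dict-based graphs, not the actual implementation):\n\n```python\ndef descendants(graph, v):\n    # graph: dict mapping a vertex to the list of its child vertices\n    seen = set()\n    stack = list(graph.get(v, []))\n    while stack:\n        c = stack.pop()\n        if c not in seen:\n            seen.add(c)\n            stack.extend(graph.get(c, []))\n    return seen\n\ndef fully_formed(v, gold, partial):\n    # a vertex may serve as an arc child only once all of its gold descendants are built\n    return len(descendants(gold, v)) == len(descendants(partial, v))\n\ngold = {'want.v': ['to'], 'to': ['see.v'], 'see.v': ['me.pro']}\npartial = {'want.v': [], 'to': [], 'see.v': ['me.pro']}\nprint(fully_formed('see.v', gold, partial))   # True: its only gold descendant is present\nprint(fully_formed('want.v', gold, partial))  # False: the 'to' and 'see.v' subtrees are not attached yet\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Extraction Algorithm",
"sec_num": "4.3"
},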
{
"text": "If the vertex in the rightmost cache position, v_\u03b7r, is fully formed (|descendants(v_\u03b7r, G_g)| = |descendants(v_\u03b7r, G_p)|) and has a parent node in the PROMOTE lexicon (label(parent(v_\u03b7r, G_g)) \u2208 S_p), then the parser generates the action sequence PromoteSym(parent(v_\u03b7r, G_g)), PromoteArc(l_p), where l_p is the label for the edge from the parent of v_\u03b7r to v_\u03b7r in G_g (EdgeLabel(parent(v_\u03b7r, G_g), v_\u03b7r, G_g)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 PROMOTE phase:",
"sec_num": null
},
{
"text": "Our model has three basic components: (1) a word sequence encoder, (2) a ULF atom sequence encoder, and (3) an action decoder, all of which are Figure 4 : The model consists of a sentence-encoding BiLSTM, a symbol-encoding LSTM, and an action-decoding LSTM. New symbols generated in the GEN and PROMOTE phases of the transition system are appended to the symbol sequence. The transition system supplies hard attention pointers that select the relevant word and symbol embeddings. These are concatenated with the transition state feature vector and supplied as input to the action decoder, which predicts the next action that updates the transition system.",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 152,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "5"
},
{
"text": "LSTMs. During decoding, the transition system configuration, C, is updated with decoded actions and used to organize the action decoder inputs using the sequence encoders. The system models the following probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "5"
},
{
"text": "P (a 1:q |w 1:n ) = q t=1 P (a t |a 1:t\u22121 , w 1:n ; \u03b8) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "5"
},
{
"text": "where a 1:q is the action sequence, w 1:n is the input sequence, and \u03b8 is the set of model parameters. Figure 4 is a diagram of the full model structure.",
"cite_spans": [],
"ref_spans": [
{
"start": 103,
"end": 111,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "5"
},
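{
"text": "Equation (2) factorizes the action sequence probability left to right, so the sequence log-probability, and hence the training cross-entropy, is a sum of per-step log-probabilities. The sketch below uses made-up numbers in place of the decoder's output distributions.\n\n```python\nimport math\n\ndef sequence_log_prob(step_distributions, actions):\n    # step_distributions[t] plays the role of P(a_t | a_1:t-1, w_1:n; theta) as a dict over actions;\n    # the sequence probability is the product over steps, so the log-probabilities sum.\n    return sum(math.log(step_distributions[t][a]) for t, a in enumerate(actions))\n\ndists = [{'WordGen': 0.7, 'SymGen': 0.3},\n         {'Lemma': 0.6, 'Token': 0.4},\n         {'Suffix(.v)': 0.8, 'Suffix(.n)': 0.2}]\nactions = ['WordGen', 'Lemma', 'Suffix(.v)']\nlog_p = sequence_log_prob(dists, actions)\nprint(round(log_p, 3), round(-log_p, 3))  # -1.091 1.091: log P(a_1:q | w_1:n) and the cross-entropy loss\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "5"
},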
{
"text": "The input word embedding sequence w 1:n is encoded by a stacked bidirectional LSTM (Hochreiter and Schmidhuber, 1997) with L w layers. Each word embedding sequence is a concatenation of embeddings of GloVe (Pennington et al., 2014) , lemmas, part-of-speech (POS) and named entity (NER) tags, RoBERTa (Liu et al., 2019) , and features learned by a character-level convolutional neural network (CharCNN, . As ULF symbols are generated during the parsing process, the symbol embedding sequence s 1:m , which is the concatenation of a symbol-level learned embedding and the CharCNN feature vector over the symbol string, is encoded by a stacked unidirectional LSTM of L s layers.",
"cite_spans": [
{
"start": 206,
"end": 231,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 300,
"end": 318,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word and Symbol Sequence Encoders",
"sec_num": "5.1"
},
{
"text": "Peng et al. 2018found that for AMR parsing with cache transition systems, a hard attention mechanism, tracking the next buffer node position and its aligned word, works better than a soft attention mechanism for selecting the embedding used during decoding. We take this idea and modify the tracking mechanism to find the most relevant word, w i , and symbol, s j , for each phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Attention",
"sec_num": "5.2"
},
{
"text": "\u2022 ARC and PROMOTE*: The symbol s j in the rightmost cache position and aligned word w i . \u2022 PUSH: The symbol s j generated in the previous action and aligned word w i . \u2022 Otherwise: The last generated symbol s j and the word w i in the leftmost \u03b2 position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Attention",
"sec_num": "5.2"
},
{
"text": "This selects the output sequences h Lw w i and h Ls s j from the encoders for the action decoder. where e f k (C) (k = 1, ..., l) is the k-th feature embedding, with l total features. Our features, which are heavily inspired by , are as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Attention",
"sec_num": "5.2"
},
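{
"text": "The phase-dependent hard attention amounts to a small lookup over the transition state. Below is a sketch with hypothetical field names (not the actual data structures) that returns the word and symbol indices whose encodings are fed to the action decoder.\n\n```python\ndef hard_attention(phase, state):\n    # state fields (hypothetical names):\n    #   'rightmost_cache': (symbol index, aligned word index) of the rightmost cache vertex\n    #   'last_symbol':     (symbol index, aligned word index) of the last generated symbol\n    #   'buffer_front':    sentence index of w_next\n    if phase in ('ARC', 'PROMOTE', 'PROMOTEARC'):\n        j, i = state['rightmost_cache']\n    elif phase == 'PUSH':\n        j, i = state['last_symbol']\n    else:  # GEN and the remaining phases\n        j, _ = state['last_symbol']\n        i = state['buffer_front']\n    return i, j\n\nstate = {'rightmost_cache': (4, 2), 'last_symbol': (5, 3), 'buffer_front': 6}\nprint(hard_attention('ARC', state), hard_attention('GEN', state))  # (2, 4) (6, 5)\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Attention",
"sec_num": "5.2"
},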
{
"text": "\u2022 Phase: An indicator of the phase in the transition system. \u2022 POP, GEN features: Token features 5 of the rightmost cache position and the leftmost buffer position; the number of rightward dependency edges from the cache position word and the first three of their labels; and the number of outgoing ULF arcs from the cache position and their first three labels. \u2022 ARC, PROMOTE features: For the two cache positions, their token features and the word, symbol 6 , and dependency distance between them; furthermore, their first three outgoing and single incoming dependency arc labels and their first two outgoing and single incoming ULF arc labels. \u2022 PROMOTEARC features: Same as the PROMOTE features but for the rightmost cache position use the node/symbol generated in the preceding Pro-moteSym action. \u2022 PUSH features: Token features for the leftmost buffer position and all cache positions.",
"cite_spans": [
{
"start": 458,
"end": 459,
"text": "6",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transition State Features",
"sec_num": "5.3"
},
{
"text": "The action sequence is encoded by a stacked unidirectional LSTM with L a layers where the action input embeddings, h a 1:q are concatenations of the word and symbol encodings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Action Encoder/Decoder",
"sec_num": "5.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h a k = [h Lw w i ; h Ls s j ; e f (C)]",
"eq_num": "(4)"
}
],
"section": "Action Encoder/Decoder",
"sec_num": "5.4"
},
{
"text": "The state features h La a k are then decoded into prediction weights with a linear transformation and ReLU non-linearity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Action Encoder/Decoder",
"sec_num": "5.4"
},
{
"text": "The model is trained on the cross-entropy loss of the model probability (2) using the oracle action sequence. Both training and decoding are limited to a maximum action length of 800. For the training set the oracle has an average action length of 134 actions and a maximum action length of 1477.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing",
"sec_num": "6"
},
{
"text": "We investigate two methods of constraining the decoding process with prior knowledge of ULF to overcome the challenge of using a small dataset. These automatic methods filter out clearly incorrect choices at the cost of some decoding speed and further tailor the parser to ULFs. (Footnote 5: The token features are the ULF symbol and the word, lemma, POS, and NER tags of the aligned index of the input. Footnote 6: Symbol distance is based on the order in which the symbols are generated by the parser.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Decoding",
"sec_num": "6.1"
},
{
"text": "ULF Lexicon To improve symbol generation, we introduce a lexicon with possible ULF atoms for each word. Nouns, verbs, adjectives, adverbs, and preposition entries are automatically converted from the Alvey lexicon (Carroll and Grover, 1989) with some manual editing. Pronouns, determiners, and conjunctions entries are extracted from Wiktionary 7 category lists. Auxiliary verbs entries are manually built from our ULF annotation guidelines. When generating a word-aligned symbol the stem is searched in the lexicon. If the string is present in the lexicon, only corresponding symbols in the lexicon are allowed to be generated. Since the lexicon is not completely comprehensive, this constraint may lead to some additional errors.",
"cite_spans": [
{
"start": 214,
"end": 240,
"text": "(Carroll and Grover, 1989)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Decoding",
"sec_num": "6.1"
},
{
"text": "Type Composition The type system constraint adds a list of types, T v , to accompany |V p | (the vertices of the partial graph), which stores the ULF type of each vertex. When a vertex, v, is added to G p , its ULF type, t v is added to T v . This ULF type system is generalized with placeholders for macros and each stage in processing them. When the parser predicts an arc action during decoding, the types source, t s , and target, t t nodes are run through a type composition function. If the types can compose, t c = (t s .t t ), t c = \u2205, the type of the source node is replaced with t c . Otherwise, the resulting C is not added to the search beam.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Decoding",
"sec_num": "6.1"
},
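{
"text": "A sketch of how the type composition check can prune arc actions during beam-search decoding, with a toy compose function standing in for the full ULF type grammar:\n\n```python\ndef compose(t_source, t_target):\n    # toy function application: a type written as the pair (A, B), read as A -> B, applied to A yields B\n    if isinstance(t_source, tuple) and t_source[0] == t_target:\n        return t_source[1]\n    if isinstance(t_target, tuple) and t_target[0] == t_source:\n        return t_target[1]\n    return None\n\ndef arc_allowed(types, src, tgt):\n    # types: the list T_v of ULF types, one per vertex in V_p\n    t_c = compose(types[src], types[tgt])\n    if t_c is None:\n        return False      # the resulting configuration is dropped from the beam\n    types[src] = t_c      # the source vertex takes the composed type\n    return True\n\nT_v = [('ENTITY', 'SENTENCE'), 'ENTITY']   # e.g. a monadic predicate and its argument\nprint(arc_allowed(T_v, 0, 1), T_v[0])      # True SENTENCE\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Decoding",
"sec_num": "6.1"
},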
{
"text": "We ran our experiments on a hand-annotated dataset of ULFs totaling 1,738 sentences (1,378 train, 180 dev, 180 test) . The dataset is a mixture of sentences from crowd-sourced translations, news text, a question dataset, and novels. The distribution of sentences leans towards more questions, requests, clause-taking verbs, and counterfactuals because a portion of the dataset comes from the dataset used by for generating inferences from ULFs of those constructions.",
"cite_spans": [
{
"start": 84,
"end": 116,
"text": "(1,378 train, 180 dev, 180 test)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "The data is split by segmenting the dataset into 10 sentence segments and distributing them in a round-robin fashion, with the training set receiving eight chunks in each round. This splitting method is designed to allow document-level topics to distribute into splits while limiting any performance inflation of the dev and test results that can result when localized word-choice and grammatical patterns are distributed into all splits. found that interannotator agreement (IA) on ULFs using the EL-SMATCH metric (Kim and Schubert, 2016) is 0.79. 8 We add a second pass to further reduce variability in our annotations. 9 Further details about the dataset are available in appendix A and the complete annotation guidelines are available as part of the dataset.",
"cite_spans": [
{
"start": 515,
"end": 539,
"text": "(Kim and Schubert, 2016)",
"ref_id": "BIBREF18"
},
{
"start": 549,
"end": 550,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "7"
},
{
"text": "In order to use parsing and evaluation methods developed for AMR parsing (Banarescu et al., 2013a), we rewrite ULFs in penman format (Kasper, 1989) by introducing a node for each ULF atom and generating left-to-right arcs in the order that they appear (:ARG0, :ARG1, etc.), assuming the leftmost constituent is the parent. In order to handle non-atomic operators in penman format which only allows atomic nodes, we introduce a COMPLEX node with an :INSTANCE edge to mark the identity of the non-atomic operator.",
"cite_spans": [
{
"start": 133,
"end": 147,
"text": "(Kasper, 1989)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ULF-AMR",
"sec_num": null
},
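{
"text": "To illustrate the conversion, here is a toy converter (our own sketch, not the released preprocessing code) that turns a nested ULF expression into penman-style triples, treating the leftmost constituent as the parent and introducing a COMPLEX node with an :INSTANCE edge for a non-atomic operator; the variable names are invented. The example mirrors edges e_0-e_2 of the Figure 3 example.\n\n```python\ndef ulf_to_triples(ulf, triples, counter):\n    # Returns the node id for ulf and appends (parent, relation, child) triples.\n    if isinstance(ulf, str):\n        return ulf                          # atomic ULF symbol: its own node\n    head = ulf_to_triples(ulf[0], triples, counter)\n    if isinstance(ulf[0], list):            # non-atomic operator: add a COMPLEX node\n        cx = 'x%d' % counter[0]\n        counter[0] += 1\n        triples.append((cx, ':INSTANCE', head))\n        head = cx\n    for i, arg in enumerate(ulf[1:]):       # left-to-right arcs :ARG0, :ARG1, ...\n        child = ulf_to_triples(arg, triples, counter)\n        triples.append((head, ':ARG%d' % i, child))\n    return head\n\ntriples = []\nroot = ulf_to_triples([['pres', 'do.aux-s'], 'you'], triples, [0])\nprint(root, triples)\n# x0 [('pres', ':ARG0', 'do.aux-s'), ('x0', ':INSTANCE', 'pres'), ('x0', ':ARG0', 'you')]\n```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ULF-AMR",
"sec_num": null
},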
{
"text": "Setup We evaluate the model with SEM-BLEU (Song and Gildea, 2019) , a metric for parsing accuracy of AMRs (Banarescu et al., 2013b) . This metric extends BLEU (Papineni et al., 2002) to node-and edge-labeled graphs. We also measure EL-SMATCH, a generalization of SMATCH to graphs with non-atomic nodes, for analysis of the model since it has F1, precision, and recall components.",
"cite_spans": [
{
"start": 42,
"end": 65,
"text": "(Song and Gildea, 2019)",
"ref_id": "BIBREF39"
},
{
"start": 106,
"end": 131,
"text": "(Banarescu et al., 2013b)",
"ref_id": "BIBREF6"
},
{
"start": 159,
"end": 182,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ULF-AMR",
"sec_num": null
},
{
"text": "The tokens, lemmas, POS tags, NER tags, and dependencies are all extracted using the Stanford CoreNLP toolkit . In all experiments the model was trained for 25 epochs. Starting at the 12th epoch we measured the SEM-BLEU performance on the dev split with beam size 3. Hyperparameters were tuned manually on the dev split performance of a smaller, preliminary version of the annotation corpus. We use RoBERTa-Base embeddings with frozen parameters, 300 dimensional GloVe embeddings, and 100 dimensional t i , l i , p i , action, and symbol embeddings. The word encoder is 3 layers. The symbol encoder and action decoder are 2 layers. Experiments were run on a single NVIDIA Tesla K80 or GeForce RTX 2070 GPU. Training the full model 8 cf. AMR is reported to have about 0.8 IA using the SMATCH metric (Tsialos, 2015) 9 We did not measure IAA on our dataset and take the prior report as an lower-end estimate given the similarity of our annotations methods and our additional review phase. Our annotation process was collaborative and result in a single annotation per sentence so IAA cannot be measured. takes about 6 hours. The full tables of results and default parameters are available in appendix D.",
"cite_spans": [
{
"start": 814,
"end": 815,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ULF-AMR",
"sec_num": null
},
{
"text": "Ablations In our ablation tests, the model from the training epoch with the highest dev set SEM-BLEU score is evaluated on the test split with beam size 3. 10 The results are shown in Figure 5 .",
"cite_spans": [
{
"start": 156,
"end": 158,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 184,
"end": 192,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "7.1"
},
{
"text": "CharCNN and RoBERTa are the least important components-to the point that we cannot conclude that they are of any benefit to the model due to the large overlap in the performance of models with and without them. The GloVe, POS, and feature embeddings are more important. The importance of POS is not surprising given the tight correspondence between POS tags and ULF type tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "7.1"
},
{
"text": "SEMBLEU EL-SMATCH (Zhang et al., 2019a) 12.3 34.3 (Cai and Lam, 2020) 34.2 52.6 Our best model 47.4 59.8 Comparison to Baselines We compare our parser performance against two AMR parsers with minimal AMR-specific assumptions. The major recent efforts by the research community in AMR parsing make these parsers strong baselines. Specifically, we compare against the sequence-tograph (STOG) parser (Zhang et al., 2019a) and Cai and Lam's (2020) graph-sequence iterative inference (GS) parser. 11 The ULF dataset is preprocessed for these parsers by stripping pipes from names to support the use of a copy mechanism and splitting node labels with spaces into multiple nodes to make the labels compatible with their data pipelines. Table 1 shows the results. 12 The STOG parser fares poorly on both metrics. A review of the results revealed that the parser struggles with node prediction in particular. This is likely the result of the dataset size not properly supporting the parser's latent alignment mechanism. 13 The GS parser performs better than the STOG parser by a large margin, but is still far from our parser's performance. The GS parser also struggles with node prediction, but is more successful in maintaining the correct edges in spite of incorrect node labels. Investigating the dev set results reveals that our parser is quite successful in node generation, since by design the node generation process reflects the design of ULF atoms. Despite the theoretical capacity to generate node labels without a corresponding uttered word or phrase, our parser only does this for common logical operators such as reifiers and modifier constructors. The GS parser on the other hand, is relatively successful on node labels without uttered correspondences, correctly generating the elided \"you\" in imperatives and the logical operators ! and multi-sent which indicate imperatives and multi-sentence annotations, respectively. Our parser also manages to correctly generate a variety of verb phrase constructions, but fails to recognize reified infinitives as arguments of less frequent clausal verbs such as \"neglect\", \"attach\", etc. (as opposed to \"have\", \"tell\") and instead interprets \"to\" as either an argument-marking prepositions or reification of an already reified verb. Examples of parses and a discussion of specific errors are omitted here due to space constraints and provided in appendix E.",
"cite_spans": [
{
"start": 18,
"end": 39,
"text": "(Zhang et al., 2019a)",
"ref_id": null
},
{
"start": 50,
"end": 69,
"text": "(Cai and Lam, 2020)",
"ref_id": "BIBREF8"
},
{
"start": 397,
"end": 418,
"text": "(Zhang et al., 2019a)",
"ref_id": null
},
{
"start": 423,
"end": 443,
"text": "Cai and Lam's (2020)",
"ref_id": "BIBREF8"
},
{
"start": 492,
"end": 494,
"text": "11",
"ref_id": null
},
{
"start": 756,
"end": 758,
"text": "12",
"ref_id": null
},
{
"start": 1011,
"end": 1013,
"text": "13",
"ref_id": null
}
],
"ref_spans": [
{
"start": 729,
"end": 736,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Constrained Decoding When evaluating decoding constraints, we select the model by re-running the five best performing epochs with constraints. When using the type composition constraint, we additionally increase the beam size to 10 so that the parser has backup options when its top choices are filtered out. Table 2 presents these results. We see a increase in precision for +Lex, but a greater loss in recall. +Type reduces performance on all metrics. Due to the bottom-up parsing procedure, a filtering of choices can cascade into fragmented 12 Our parser gets the exact ULF for 6 out of the 180 sentences (3.3%). They were all yes-no questions which tend to be a bit shorter than informative declarative sentences (e.g. \"Can't you do something?\").",
"cite_spans": [
{
"start": 545,
"end": 547,
"text": "12",
"ref_id": null
}
],
"ref_spans": [
{
"start": 309,
"end": 316,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "13 The STOG parser is improved by (Zhang et al., 2019b ) with about 1 point of improvement on SMATCH. Unfortunately, the code for this parser is not released to the public. parses. The outputs for an arbitrarily selected run of the model has on average 2.9 fragments per sentence when decoding with the type constraint and 1.4 without. This and the relative performance on the precision metric suggest that constraints improve individual parsing choices, but are too strict, leading to fragmented parses.",
"cite_spans": [
{
"start": 34,
"end": 54,
"text": "(Zhang et al., 2019b",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Dependence on Length To investigate the performance dependence on the problem size, we partition the test set into quartiles by oracle action length. The 0 seed of our full model has SEM-BLEU scores of 52, 47, 48, and 31 on the quartiles of increasing length. As expected, the parser performs better on shorter tasks. The parser performance is relatively stable until the last quartile. This is likely due to a long-tail of sentence lengths in our dataset. This last quartile includes sentences with oracle action length ranging from 148 to 1474.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We presented the first annotated ULF dataset and the first parser trained on such a dataset. We showed that our parser is a strong baseline, outperforming existing semantic parsers from a similar task. Surprisingly, our experiments showed that even in this low-resource setting, constrained decoding with a lexicon or a type system does more harm than good. However, the symbol generation method and features designed for ULFs result in a performance lead over using an AMR parser with minimal representational assumptions. We hope that releasing this dataset will spur other efforts into improving ULF parsing. This of course includes expanding the dataset, using our comprehensive annotation guidelines and tools; but we see many additional avenues of improvement. The type grammar opens up many promising possibilities: sampling of silver data (in conjunction with ULF to English generation ), use as a weighted constraint, or direct incorporation into a model to avoid the pitfalls we observed in our simple approach to semantic type enforcement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "This work was supported by NSF EAGER grant NSF IIS-1908595, DARPA CwC subcontract W911NF-15-1-0542, and a Sproull Graduate Fellowship from the University of Rochester. We are grateful to the anonymous reviewers for their helpful feedback. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "9"
},
{
"text": "We chose a variety of text sources for constructing this dataset to reduce genre-effects and provide good coverage of all the phenomena we are investigating. Some of these datasets include annotations, which we use only to identify sentence and token boundaries. The dataset includes 1,738 sentences, with a mean, median, min, and max sentence lengths of 10.275, 8, 2, and 128 words, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Dataset Details",
"sec_num": null
},
{
"text": "\u2022 Tatoeba",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Data Sources",
"sec_num": null
},
{
"text": "The Tatoeba dataset 14 consists of crowd-sourced translations from a community-based educational platform. People can request the translation of a sentence from one language to another on the website and other members will provide the translation. Due to this pedagogical structure, the sentences are fluent, simple, and highly-varied. The English portion downloaded on May 18, 2017 contains 687,274 sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Data Sources",
"sec_num": null
},
{
"text": "\u2022 Discourse Graphbank",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Data Sources",
"sec_num": null
},
{
"text": "The Discourse Graphbank (Wolf, 2005 ) is a discourse annotation corpus created from 135 newswire and WSJ texts. We use the discourse annotations to perform sentence delimiting. This dataset is on the order of several thousand sentences.",
"cite_spans": [
{
"start": 24,
"end": 35,
"text": "(Wolf, 2005",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Data Sources",
"sec_num": null
},
{
"text": "\u2022 Project Gutenberg Project Gutenberg 15 is an online repository of texts with expired copyright. We downloaded the top 100 most popular books from the 30 days prior to February 26, 2018. We then ignored books that have non-standard writing styles: poems, plays, archaic texts, instructional books, textbooks, and dictionaries. This collection totals to 578,650 sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Data Sources",
"sec_num": null
},
{
"text": "The UIUC Question Classification dataset (Li and Roth, 2002) consists of questions from the TREC question answering competition. It covers a wide range of question structures on a wide variety of topics, but focuses on factoid questions. This dataset consists of 15,452 questions.",
"cite_spans": [
{
"start": 41,
"end": 60,
"text": "(Li and Roth, 2002)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 UIUC Question Classification",
"sec_num": null
},
{
"text": "Most of the dataset is annotated by random selection of a single or some contiguous sequence of sentences by annotators. However, part of the annotated dataset comes from inference experiments run by regarding questions, requests, counterfactuals, and clause-taking verbs. Therefore, the dataset has a bias towards having these phenomena at a higher frequency than expected from a random selection of English text. A key issue regarding the dataset is its difficulty. We primarily quantify this with the AMR parser baseline, the sequence-to-graph (STOG) parser (Zhang et al., 2019a) , in the main text, which performs quite poorly on this dataset. Its performance indicates that the patterns in this dataset are too varied for a modern parsing model to learn without built in ULF-specific biases. Although, part of this is due to the size of the dataset, if the dataset consisted only of short and highly-similar sentences, we would expect a modern neural model, such as the AMR baseline, to be able to learn successful parsing strategy for it.",
"cite_spans": [
{
"start": 561,
"end": 582,
"text": "(Zhang et al., 2019a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 UIUC Question Classification",
"sec_num": null
},
{
"text": "This reflects the design of the dataset construction. Although the dataset indeed includes many short sentences, especially from the Tatoeba and UIUC Question Classification datasets, the sentences cover a wide range of styles and topics. The Tatoeba dataset is built from a crowd-sourced translation community, so the sentences are not limited in genre and style and has a bias toward sentences that give people trouble when learning a second language. We consider this to be valuable for a parsing dataset since, while the sentences from Tatoeba are usually short, they vary widely in topic and tend to focus on tricky phenomena that give languagelearners-and likely parsers-trouble. Sentences from the Discourse Graphbank (news text) and Project Gutenberg (novels) further widen the scope of genres and styles in the dataset. This should make it difficult for a parsing model to overfit to dataset distribution. The dataset also has a considerable representation of longer sentences (\u223c10% of the dataset is >20 words) including dozens of sentences exceeding 40 words, reaching up to 128 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 UIUC Question Classification",
"sec_num": null
},
{
"text": "We use the same annotation interface as , which includes (1) syntax and bracket highlighting, (2) a sanity checker based on the underlying type grammar, and (3) uncertainty marking to trigger a review by a second annotator. The complete English-to-ULF annotation guideline is attached as a supplementary document. reports interannotator agreement (IA) of ULF annotations using this annotation procedure. In summary, they found that agreement among sentences that are marked as certain are 0.79 on average and can be up-to 0.88 when we filter for well-trained annotators. For comparison, AMR annotations have been reported to have annotator vs consensus IA of 0.83 for newswire text and 0.79 for webtext using the smatch metric (Tsialos, 2015) .",
"cite_spans": [
{
"start": 727,
"end": 742,
"text": "(Tsialos, 2015)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Annotation Interface & Interannotator Agreement",
"sec_num": null
},
{
"text": "In order to mitigate the issue of low agreement of some annotators in the IA study, each annotation in our dataset was reviewed by a second annotator and corrected if necessary. There was an open discussion among annotators to clear up uncertainty and handle tricky cases during both the original annotation and the reviewing process so the actual dataset annotations are more consistent than the test of IA agreement (which had completely independent annotations) would suggest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Annotation Interface & Interannotator Agreement",
"sec_num": null
},
{
"text": "The data split is done by segmenting the dataset into 10 sentence segments and distributing them in a round-robin fashion, with the training set receiving eight chunks in each round. This splitting method is designed to allow document-level topics to distribute into splits while limiting any performance inflation of the dev and test results that can result when localized word-choice and grammatical patterns are distributed into all splits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Dataset Splits",
"sec_num": null
},
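{
"text": "A minimal Python sketch of this splitting scheme follows. It assumes, for illustration only, an 8/1/1 train/dev/test allocation within each round of 10-sentence segments; it is not the exact implementation used to produce the released splits.
def round_robin_split(sentences, seg_size=10, ratio=(8, 1, 1)):
    # Segment the corpus into contiguous chunks of seg_size sentences.
    segments = [sentences[i:i + seg_size]
                for i in range(0, len(sentences), seg_size)]
    # Deal the segments out in a fixed round-robin order.
    order = ['train'] * ratio[0] + ['dev'] * ratio[1] + ['test'] * ratio[2]
    splits = {'train': [], 'dev': [], 'test': []}
    for i, segment in enumerate(segments):
        splits[order[i % len(order)]].extend(segment)
    return splits",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Dataset Splits",
"sec_num": null
},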
{
"text": "The Tatoeba dataset further exacerbates the issue of localized word-choice and grammatical patterns since multiple sentences using the same phrase or grammatical construction often appear back-toback. We suspect that this is because the Tatoeba dataset is ordered chronologically and users often submit multiple similar sentences in order to help understand a particular phrase or grammatical pattern in a language that they are learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Dataset Splits",
"sec_num": null
},
{
"text": "The ULF-English alignment system takes into account the similarity of the English word to the ULF atom without the type extension, the similarity of the type extension with the POS tag, and the relative distance of the word and symbol in question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Full ULF Alignment Details",
"sec_num": null
},
{
"text": "Given a sentence s = w 1:n , which is tokenized, t 1:n , lemmatized, l 1:n , and POS tagged, p 1:n , a set of symbols that are never aligned S u , and a list of ULF atoms a 1:m , which can be broken up into the base stems, b 1:m , and suffix extensions, e 1:m , in order of appearance in the formula (i.e. DFS preorder traversal), the word/atom similarity is defined using the following formulas. Next, in order of Sim(w, a), we consider each word-atom pair, (w i , a i ), 1 \u2264 i \u2264 n until Sim(w, a) < MinSim, where MinSim is set to 1.0, based on cursory results. We further disregard any alignments that include an atom which shouldn't be aligned (a i s.t. a i \u2208 S u ). We assume that spans of words align to connected subgraphs, so we cannot accept all word-atom pairs. An wordatom pair, (w i , a i ), is accepted into the set of token alignments, A t , if and only if the following conditions are met:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Full ULF Alignment Details",
"sec_num": null
},
{
"text": "1. w i has no alignments or a i is connected to an atom, a , that is already aligned to w i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Full ULF Alignment Details",
"sec_num": null
},
{
"text": "2. a i is not in any other alignment or w i is adjacent to another, w which is already aligned to a i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Full ULF Alignment Details",
"sec_num": null
},
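{
"text": "A minimal Python sketch of the similarity computation above. It is illustrative rather than the exact implementation: difflib's longest matching block stands in for MaxSharedSubstr, and the suffix parsing and case handling of the real system are omitted.
from difflib import SequenceMatcher

def olap(x, y):
    # Token overlap: twice the longest shared substring over the total length.
    if not x or not y:
        return 0.0
    match = SequenceMatcher(None, x, y).find_longest_match(0, len(x), 0, len(y))
    return 2.0 * match.size / (len(x) + len(y))

def rel_loc(index, length):
    # Relative location of an item within its sequence.
    return index / length

def sim(token, lemma, pos, w_idx, n, stem, ext, a_idx, m):
    # Word/atom similarity: stem overlap with the token or lemma, plus half-weighted
    # POS/extension overlap and relative-position agreement.
    return (max(olap(token, stem), olap(lemma, stem))
            + 0.5 * (olap(pos, ext)
                     + (1 - abs(rel_loc(w_idx, n) - rel_loc(a_idx, m)))))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Full ULF Alignment Details",
"sec_num": null
},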
{
"text": "The token-level (word-atom) alignment, A t , is then converted to connected (span-subgraph) alignment, A. This is done with the following algorithm:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Full ULF Alignment Details",
"sec_num": null
},
{
"text": "1. For every atom a i in one of the aligned pairs of A t , merge all of the words aligned to a i into a single span, s i . During the initial alignment, we ensured that these words would form a span.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Full ULF Alignment Details",
"sec_num": null
},
{
"text": "2. Merge all overlapping spans into single spans and collect the set of atoms that are aligned to each of these spans into a subgraph. 16 These collected subgraphs will be connected because we ensured that for any word the nodes that it is aligned to forms a connected subgraph. 16 This can be done in O(n log n) time by sorting the spans, then doing a single pass of merging overlapping elements.",
"cite_spans": [
{
"start": 135,
"end": 137,
"text": "16",
"ref_id": null
},
{
"start": 279,
"end": 281,
"text": "16",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B Full ULF Alignment Details",
"sec_num": null
},
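{
"text": "A minimal Python sketch of this conversion, assuming the token-level alignment A_t is given as (word index, atom) pairs; it is illustrative rather than the exact implementation.
def to_span_subgraph_alignment(token_alignments):
    # Step 1: for each atom, merge the indices of its aligned words into one span.
    atom_spans = {}
    for w_idx, atom in token_alignments:
        lo, hi = atom_spans.get(atom, (w_idx, w_idx))
        atom_spans[atom] = (min(lo, w_idx), max(hi, w_idx))
    # Step 2: sort the spans and merge overlapping ones in a single pass,
    # collecting the atoms aligned to each merged span (O(n log n) overall).
    merged = []
    for (lo, hi), atom in sorted((span, atom) for atom, span in atom_spans.items()):
        if merged and lo <= merged[-1][0][1]:
            (m_lo, m_hi), atoms = merged[-1]
            merged[-1] = ((m_lo, max(m_hi, hi)), atoms | {atom})
        else:
            merged.append(((lo, hi), {atom}))
    return merged",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Full ULF Alignment Details",
"sec_num": null
},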
{
"text": "Except for RoBERTa, all other embeddings are fetched from their corresponding learned embedding lookup tables. RoBERTa uses OpenAI GPT-2 tokenizer for the input sequence and segments words into subwords prior to generating embeddings, which means one input word may correspond to multiple hidden states of RoBERTa. In order to accurately use these hidden states to represent each word, we apply an average pooling function to the outputs of RoBERTa according to the alignments between the original and GPT-2 tokenized sequences. Tables Tables of the full set of raw results and parameters are presented in this section. Table 3 shows the ablations on the model without decoding constraints. This is the basis of Figure 5 in the main text. Table 4 shows the performance change with the lexicon constraint and Table 5 shows the performance change with the composition constraint. These tables are the basis of Table 2 in the main text. Our experiments with the lexicon constraint were more extensive since the type constraint takes considerably longer to run due to requiring a larger beam size and more computational overhead. Table 7 presents all of the model parameters in our experiments. Figure 6 shows six parse examples of our parser and the GS parser in reference to the gold standard. Generally, we find that our parser does much better on node generation for nodes that correspond to an input word. For example, the GS parser on example 1 uses (plur *s) for the word \"speech\" and iron.n for the words \"silver\" and \"silence\". This isn't to say that our parser doesn't make mistakes. But the mistakes are not as open-ended. For example, our parser mistakenly annotates \"silver\" as a noun in example 1 when in fact it should be an adjective (compared against \"golden\"). The GS parser seems to pick the closest word in its vocabulary, which is generated from the training set and closed. This leads to strange annotations like iron.n for the word \"silence\". If there is nothing close available, then it can derail the entire parse. In example 4, the GS parser is unable to find a node label for the word \"device\" which derails the parse to generate (mod-n 46.6 \u00b1 1.3 46.9 \u00b1 0.6 56.1 \u00b1 0.6 57.8 \u00b1 0.4 60.0 \u00b1 0.7 60.5 \u00b1 0.9 52.6 \u00b1 0.6 55.3 \u00b1 0.5 -CharCNN 45.8 \u00b1 2.3 45.5 \u00b1 2.5 56.1 \u00b1 1.4 56.9 \u00b1 1.1 59.3 \u00b1 2.4 59.6 \u00b1 1.8 53.3 \u00b1 1.1 54.5 \u00b1 1.5 -e f (C) Feats 45.9 \u00b1 1.5 45.6 \u00b1 0.9 56.5 \u00b1 0.6 57.0 \u00b1 0.5 62.0 \u00b1 0.8 61.4 \u00b1 0.6 52.0 \u00b1 1.1 53.3 \u00b1 0.5 -POS 44.1 \u00b1 2.0 44.5 \u00b1 0.9 55.3 \u00b1 0.2 56.6 \u00b1 0.7 58.5 \u00b1 2.2 60.4 \u00b1 0.8 52.6 \u00b1 2.3 53.2 \u00b1 1.4 -GloVe 46.1 \u00b1 1.1 45.4 \u00b1 1.4 55.9 \u00b1 0.9 57.0 \u00b1 0.6 59.5 \u00b1 1.5 60.3 \u00b1 0.8 52.7 \u00b1 1.0 54.0 \u00b1 0.7 Table 4 : Ablation results with the lexicon constraint, mean and standard deviation of 5 runs. \u2206x is the difference in the mean score between the test set results of the model with the lexicon constraint and without, i.e. Table 3 . We only list this for the full model, but the pattern of higher precision but lower scores on other metrics generally holds for the other variants as well. Table 5 : Ablation results with the type composition constraint, mean and standard deviation of 5 runs. \u2206x is the difference in the mean score between the test set results of the model with the type constraint and without, i.e. Table 3 . 
We only ran the full model for this test because this constraint takes much longer to run.",
"cite_spans": [],
"ref_spans": [
{
"start": 529,
"end": 556,
"text": "Tables Tables of the full",
"ref_id": null
},
{
"start": 622,
"end": 629,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 714,
"end": 722,
"text": "Figure 5",
"ref_id": "FIGREF3"
},
{
"start": 741,
"end": 748,
"text": "Table 4",
"ref_id": null
},
{
"start": 810,
"end": 817,
"text": "Table 5",
"ref_id": null
},
{
"start": 910,
"end": 917,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1193,
"end": 1201,
"text": "Figure 6",
"ref_id": null
},
{
"start": 2638,
"end": 2645,
"text": "Table 4",
"ref_id": null
},
{
"start": 2860,
"end": 2867,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 3026,
"end": 3033,
"text": "Table 5",
"ref_id": null
},
{
"start": 3254,
"end": 3261,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "C RoBERTa Handling Details",
"sec_num": null
},
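{
"text": "A minimal PyTorch sketch of the average pooling described above, assuming the word-to-subword alignment has already been computed; the variable names are illustrative, not part of any library API.
import torch

def pool_subword_states(hidden_states, word_to_subwords):
    # hidden_states: (num_subword_tokens, dim) tensor of RoBERTa outputs.
    # word_to_subwords: for each original word, the list of its subword positions.
    # Average each word's subword hidden states so that every original word
    # is represented by exactly one vector of shape (dim,).
    return torch.stack([hidden_states[idxs].mean(dim=0)
                        for idxs in word_to_subwords])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C RoBERTa Handling Details",
"sec_num": null
},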
{
"text": "Fragments/Sentence \u03b1 \u03c4 Full 1.4 2.9 -CharCNN 1.1 3.5 -e f (C) Feats 1. 4 3.9 -POS 1.5 3.2 x 1.4 3.4 Table 6 : Fragments per sentence on the test set decoding results for a subset of the ablated lexicon-constrained models (Table 4) . \u03b1 is the original model and \u03c4 is with the type composition constraint.",
"cite_spans": [
{
"start": 71,
"end": 72,
"text": "4",
"ref_id": "BIBREF52"
}
],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 6",
"ref_id": null
},
{
"start": 221,
"end": 230,
"text": "(Table 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "(mod-n man.n) (mod-n man.n iron.n) mod-n mod-n) for the text span \"device is attached firmly to the ceiling\". This isn't to say that the GS parser always performs worse than our parser. When it comes to words that are elided ({you}.pro in example 4), nodes generated from multiple words (had better.aux-s in example 3), or logical symbols unassociated with a particular word (multisent in example 6), the GS parser consistently performs better than our parser. Our parser has no special mechanism for these handling these cases and prefers to avoid generating node labels without an anchoring word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "A common mistake by our parser seems to be nested reifiers, which is not possible in the EL type system (e.g. (to (ka come.v)) in example 5 and (to (ka (show.v ..))) in example 6). Other common mistakes that could be fixed by type coherence enforcement is mistakenly shifting a term into a modifier (e.g. (adv-a (to ...)) in example 6). In the EL type system only predicates can be shifted into modifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "The training set in our initial release is only 1,378 sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "When the words become out-of-sync with the gold graph the oracle must rely on SymGen to generate the graph nodes. Since SymGen requires selecting the correct value out the entire vocabulary of ULF atoms, it is much more difficult to predict correctly than NameGen, TokenGen, and LemmaGen which require only selecting the correct type tag.3 = is string match, =i is case-insensitive string match, Pre determines whether its first argument is a prefix of the second and Prei is the case-insensitive counterpart.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The motivation for this is that if only unaligned symbols exist in the path, the full path can be made without changing the relative status of any other node in the transition system. Let v1 and v2 be the end points of the path. With v1 in the cache and the word aligned to v2, wv 2 = wnext, SymGen and PROMOTE can generate all nodes in the path without interacting with the rest of the transition system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://en.wiktionary.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our initial experiments re-evaluated the top-5 choices with a beam size of 10, but we found that the performance consistently degraded and abandoned this step.11 We do not compare our model against the existing rulebased ULF parsers since they are domain specific and cannot handle the range of sentences that appear in our dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://tatoeba.org/eng/ 15 https://www.gutenberg.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "PArc(arg0)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "PArc(arg0)",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "E3 NoArc; NoP; Lemma(v)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arc(0, R, arg0); NoP; Pop [$ 0 , $ 0 ] [\u03c70, want.v] [to, see, me, ?] E3 NoArc; NoP; Lemma(v); Push(0) , to] [see, me, ?] E3 NoArc; NoP; Lemma(\u2205); Push(0)",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Memory requirements and local ambiguities of parsing strategies",
"authors": [],
"year": 1991,
"venue": "E9 NoArc; NoP; Pop References",
"volume": "20",
"issue": "",
"pages": "233--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arc(0, R, arg0); NoP; Pop [] [$, $] [] E9 NoArc; NoP; Pop References Steven P Abney and Mark Johnson. 1991. Mem- ory requirements and local ambiguities of parsing strategies. Journal of Psycholinguistic Research, 20(3):233-250.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Abstract meaning representation for sembanking",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Banarescu",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Madalina",
"middle": [],
"last": "Georgescu",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Griffitt",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th linguistic annotation workshop and interoperability with discourse",
"volume": "",
"issue": "",
"pages": "178--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013a. Abstract meaning representation for sembanking. In Proceedings of the 7th linguistic annotation workshop and interoperability with dis- course, pages 178-186.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Abstract Meaning Representation for sembanking",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Banarescu",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Madalina",
"middle": [],
"last": "Georgescu",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Griffitt",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse",
"volume": "",
"issue": "",
"pages": "178--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013b. Abstract Meaning Representa- tion for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperabil- ity with Discourse, pages 178-186, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Robust incremental neural semantic graph parsing",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1215--1226",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1112"
]
},
"num": null,
"urls": [],
"raw_text": "Jan Buys and Phil Blunsom. 2017. Robust incremen- tal neural semantic graph parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1215-1226, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "AMR parsing via graphsequence iterative inference",
"authors": [
{
"first": "Deng",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Wai",
"middle": [],
"last": "Lam",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1290--1301",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.119"
]
},
"num": null,
"urls": [],
"raw_text": "Deng Cai and Wai Lam. 2020. AMR parsing via graph- sequence iterative inference. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 1290-1301, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Smatch: an evaluation metric for semantic feature structures",
"authors": [
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "748--752",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shu Cai and Kevin Knight. 2013. Smatch: an evalua- tion metric for semantic feature structures. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 748-752, Sofia, Bulgaria. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The derivation of a large computational lexicon of english from LDOCE",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Grover",
"suffix": ""
}
],
"year": 1989,
"venue": "Computational Lexicography for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "117--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Carroll and C. Grover. 1989. The derivation of a large computational lexicon of english from LDOCE. In Boguraev B. and Briscoe E., editors, Computational Lexicography for Natural Language Processing, pages 117-134. Longman, Harlow, UK.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Minimal Recursion Semantics: An introduction",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "Ivan",
"middle": [
"A"
],
"last": "Sag",
"suffix": ""
}
],
"year": 2005,
"venue": "Research on Language and Computation",
"volume": "3",
"issue": "2",
"pages": "281--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan A. Sag. 2005. Minimal Recursion Semantics: An introduction. Research on Language and Com- putation, 3(2):281-332.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An incremental parser for Abstract Meaning Representation",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Damonte",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "536--546",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for Abstract Mean- ing Representation. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Pa- pers, pages 536-546, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Cache transition systems for graph parsing",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
},
{
"first": "Xiaochang",
"middle": [],
"last": "Peng",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics",
"volume": "44",
"issue": "1",
"pages": "85--118",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00308"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea, Giorgio Satta, and Xiaochang Peng. 2018. Cache transition systems for graph parsing. Computational Linguistics, 44(1):85-118.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A transition-based directed acyclic graph parser for UCCA",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1127--1138",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1104"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Hershcovich, Omri Abend, and Ari Rappoport. 2017. A transition-based directed acyclic graph parser for UCCA. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1127- 1138, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A flexible interface for linking applications to Penman's sentence generator",
"authors": [
{
"first": "Robert",
"middle": [
"T"
],
"last": "Kasper",
"suffix": ""
}
],
"year": 1989,
"venue": "Speech and Natural Language: Proceedings of a Workshop Held at",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert T. Kasper. 1989. A flexible interface for linking applications to Penman's sentence genera- tor. In Speech and Natural Language: Proceedings of a Workshop Held at Philadelphia, Pennsylvania, February 21-23, 1989.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Generating discourse inferences from unscoped episodic logical formulas",
"authors": [
{
"first": "Gene",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Kane",
"suffix": ""
},
{
"first": "Viet",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "Muskaan",
"middle": [],
"last": "Mendiratta",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Mcguire",
"suffix": ""
},
{
"first": "Sophie",
"middle": [],
"last": "Sackstein",
"suffix": ""
},
{
"first": "Georgiy",
"middle": [],
"last": "Platonov",
"suffix": ""
},
{
"first": "Lenhart",
"middle": [],
"last": "Schubert",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First International Workshop on Designing Meaning Representations",
"volume": "",
"issue": "",
"pages": "56--65",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3306"
]
},
"num": null,
"urls": [],
"raw_text": "Gene Kim, Benjamin Kane, Viet Duong, Muskaan Mendiratta, Graeme McGuire, Sophie Sackstein, Georgiy Platonov, and Lenhart Schubert. 2019. Gen- erating discourse inferences from unscoped episodic logical formulas. In Proceedings of the First Inter- national Workshop on Designing Meaning Represen- tations, pages 56-65, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "High-fidelity lexical axiom construction from verb glosses",
"authors": [
{
"first": "Gene",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Lenhart",
"middle": [],
"last": "Schubert",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "34--44",
"other_ids": {
"DOI": [
"10.18653/v1/S16-2004"
]
},
"num": null,
"urls": [],
"raw_text": "Gene Kim and Lenhart Schubert. 2016. High-fidelity lexical axiom construction from verb glosses. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 34-44, Berlin, Germany. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Monotonic inference for underspecified episodic logic",
"authors": [
{
"first": "Gene",
"middle": [
"Louis"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Juvekar",
"suffix": ""
},
{
"first": "Lenhart",
"middle": [],
"last": "Schubert",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Workshop on Natural Logic Meets Machine Learning (NALOMA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gene Louis Kim, Mandar Juvekar, and Lenhart Schu- bert. 2020. Monotonic inference for underspeci- fied episodic logic. In Proceedings of the 1st Work- shop on Natural Logic Meets Machine Learning (NALOMA). Association for Computational Linguis- tics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A typecoherent, expressive representation as an initial step to language understanding",
"authors": [
{
"first": "Gene",
"middle": [
"Louis"
],
"last": "",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Lenhart",
"middle": [],
"last": "Schubert",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Conference on Computational Semantics -Long Papers",
"volume": "",
"issue": "",
"pages": "13--30",
"other_ids": {
"DOI": [
"10.18653/v1/W19-0402"
]
},
"num": null,
"urls": [],
"raw_text": "Gene Louis Kim and Lenhart Schubert. 2019. A type- coherent, expressive representation as an initial step to language understanding. In Proceedings of the 13th International Conference on Computational Se- mantics -Long Papers, pages 13-30, Gothenburg, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Character-aware neural language models",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M Rush. 2016. Character-aware neural language models. In Thirtieth AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Towards natural language story understanding with rich logical schemas",
"authors": [
{
"first": "Lane",
"middle": [],
"last": "Lawley",
"suffix": ""
},
{
"first": "Gene",
"middle": [
"Louis"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Lenhart",
"middle": [],
"last": "Schubert",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on Natural Language and Computer Science",
"volume": "",
"issue": "",
"pages": "11--22",
"other_ids": {
"DOI": [
"10.18653/v1/W19-1102"
]
},
"num": null,
"urls": [],
"raw_text": "Lane Lawley, Gene Louis Kim, and Lenhart Schubert. 2019. Towards natural language story understand- ing with rich logical schemas. In Proceedings of the Sixth Workshop on Natural Language and Computer Science, pages 11-22, Gothenburg, Sweden. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning question classifiers",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2002,
"venue": "COLING 2002: The 19th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Li and Dan Roth. 2002. Learning question clas- sifiers. In COLING 2002: The 19th International Conference on Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "RoBERTa: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {
"DOI": [
"10.3115/v1/P14-5010"
]
},
"num": null,
"urls": [],
"raw_text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language pro- cessing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 55-60, Bal- timore, Maryland. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Universal grammar. Theoria",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Montague",
"suffix": ""
}
],
"year": 1970,
"venue": "",
"volume": "36",
"issue": "",
"pages": "373--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Montague. 1970. Universal grammar. Theo- ria, 36(3):373-398.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Evaluation of Epilog: A reasoner for Episodic Logic",
"authors": [
{
"first": "Fabrizio",
"middle": [],
"last": "Morbini",
"suffix": ""
},
{
"first": "Lenhart",
"middle": [],
"last": "Schubert",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Ninth International Symposium on Logical Formalizations of Commonsense Reasoning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabrizio Morbini and Lenhart Schubert. 2009. Evalu- ation of Epilog: A reasoner for Episodic Logic. In Proceedings of the Ninth International Symposium on Logical Formalizations of Commonsense Reason- ing, Toronto, Canada.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Incrementality in deterministic dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the workshop on incremental parsing: Bringing engineering and cognition together",
"volume": "",
"issue": "",
"pages": "50--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2004. Incrementality in deterministic de- pendency parsing. In Proceedings of the workshop on incremental parsing: Bringing engineering and cognition together, pages 50-57.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Cross-framework meaning representation parsing",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Tim",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Gorman",
"suffix": ""
},
{
"first": "Jayeol",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Chun",
"suffix": ""
},
{
"first": "Zdenka",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Uresova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning",
"volume": "2019",
"issue": "",
"pages": "1--27",
"other_ids": {
"DOI": [
"10.18653/v1/K19-2001"
]
},
"num": null,
"urls": [],
"raw_text": "Stephan Oepen, Omri Abend, Jan Hajic, Daniel Her- shcovich, Marco Kuhlmann, Tim O'Gorman, Nian- wen Xue, Jayeol Chun, Milan Straka, and Zdenka Uresova. 2019. MRP 2019: Cross-framework mean- ing representation parsing. In Proceedings of the Shared Task on Cross-Framework Meaning Repre- sentation Parsing at the 2019 Conference on Natural Language Learning, pages 1-27, Hong Kong. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Sequence-to-sequence models for cache transition systems",
"authors": [
{
"first": "Xiaochang",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1842--1852",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1171"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaochang Peng, Linfeng Song, Daniel Gildea, and Giorgio Satta. 2018. Sequence-to-sequence models for cache transition systems. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1842-1852, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A spoken dialogue system for spatial question answering in a physical blocks world",
"authors": [
{
"first": "Georgiy",
"middle": [],
"last": "Platonov",
"suffix": ""
},
{
"first": "Lenhart",
"middle": [],
"last": "Schubert",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Kane",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Gindi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "128--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgiy Platonov, Lenhart Schubert, Benjamin Kane, and Aaron Gindi. 2020. A spoken dialogue system for spatial question answering in a physical blocks world. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dia- logue, pages 128-131, 1st virtual meeting. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "From treebank parses to Episodic Logic and commonsense inference",
"authors": [
{
"first": "Lenhart",
"middle": [],
"last": "Schubert",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the ACL 2014 Workshop on Semantic Parsing",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lenhart Schubert. 2014. From treebank parses to Episodic Logic and commonsense inference. In Pro- ceedings of the ACL 2014 Workshop on Semantic Parsing, pages 55-60, Baltimore, MD. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Semantic representation",
"authors": [
{
"first": "Lenhart",
"middle": [],
"last": "Schubert",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI'15",
"volume": "",
"issue": "",
"pages": "4132--4138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lenhart Schubert. 2015. Semantic representation. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI'15, pages 4132- 4138. AAAI Press.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "The situations we talk about",
"authors": [
{
"first": "K",
"middle": [],
"last": "Lenhart",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schubert",
"suffix": ""
}
],
"year": 2000,
"venue": "Logic-based Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "407--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lenhart K. Schubert. 2000. The situations we talk about. In Jack Minker, editor, Logic-based Artifi- cial Intelligence, pages 407-439. Kluwer Academic Publishers, Norwell, MA, USA.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Episodic Logic meets Little Red Riding Hood: A comprehensive natural representation for language understanding",
"authors": [
{
"first": "K",
"middle": [],
"last": "Lenhart",
"suffix": ""
},
{
"first": "Chung",
"middle": [
"Hee"
],
"last": "Schubert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hwang",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lenhart K. Schubert and Chung Hee Hwang. 2000. Episodic Logic meets Little Red Riding Hood: A comprehensive natural representation for language understanding. In Lucja M. Iwa\u0144ska and Stuart C.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Natural Language Processing and Knowledge Representation",
"authors": [
{
"first": "",
"middle": [],
"last": "Shapiro",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "111--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shapiro, editors, Natural Language Processing and Knowledge Representation, pages 111-174. MIT Press, Cambridge, MA, USA.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "SemBleu: A robust metric for AMR parsing evaluation",
"authors": [
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4547--4552",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1446"
]
},
"num": null,
"urls": [],
"raw_text": "Linfeng Song and Daniel Gildea. 2019. SemBleu: A robust metric for AMR parsing evaluation. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4547- 4552, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Abstract meaning rep",
"authors": [
{
"first": "Aristeidis",
"middle": [],
"last": "Tsialos",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aristeidis Tsialos. 2015. Abstract meaning rep-",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "tnlp/2014/Aristeidis.pdf, accessed December 8",
"authors": [],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "tnlp/2014/Aristeidis.pdf, accessed Decem- ber 8, 2018.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Coherence in natural language : data structures and applications. Ph.D. thesis, Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florian Wolf. 2005. Coherence in natural language : data structures and applications. Ph.D. thesis, Mas- sachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Transition-based neural word segmentation",
"authors": [
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guohong",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "421--431",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1040"
]
},
"num": null,
"urls": [],
"raw_text": "Meishan Zhang, Yue Zhang, and Guohong Fu. 2016. Transition-based neural word segmentation. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 421-431, Berlin, Germany. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Gold: (((k speech.n) ((pres be.v) silver.a)) but.cc ((k silence.n) ((pres be",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\"Speech is silver but silence is golden.\" Gold: (((k speech.n) ((pres be.v) silver.a)) but.cc ((k silence.n) ((pres be.v) golden.a)))",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Ours: (((k speech.n) ((pres be.v) silver.n)) (k silence.n) ((pres be",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ours: (((k speech.n) ((pres be.v) silver.n)) (k silence.n) ((pres be.v) golden.a))",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "plur *s)) ((pres be.v) (= (k iron.n)))) but.cc ((k iron.n) ((pres be",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "GS: (((k (plur *s)) ((pres be.v) (= (k iron.n)))) but.cc ((k iron.n) ((pres be.v) =)))",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "You neglected to tell me to buy bread",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\"You neglected to tell me to buy bread.\" Gold: (you.pro ((past neglect.v) (to (tell.v me.pro (to (buy.v (k bread.n)))))))",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "GS: (you.pro ((past fail.v) (to (tell.v me",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "GS: (you.pro ((past fail.v) (to (tell.v me.pro {ref}.pro))))",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Gold: (you.pro ((pres had better.aux-s) (knuckle.v down",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\"You'd better knuckle down to work.\" Gold: (you.pro ((pres had better.aux-s) (knuckle.v down.adv-a (to work.v))))",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "aux-s) (knuckle.v down.a (adv-a (to",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ours: (you.pro ((pres would.aux-s) (knuckle.v down.a (adv-a (to.p work.v)))))",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "aux-s) (go.v (to.p-arg (k work.n)) (adv-a (to",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "GS: (you.pro ((pres had better.aux-s) (go.v (to.p-arg (k work.n)) (adv-a (to.p (ka work.v))))))",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Make sure that the device is attached firmly to the ceiling",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\"Make sure that the device is attached firmly to the ceiling.\" Gold: ({you}.pro ((pres make.v) sure.a (that ((the.d device.n) ((pres (pasv attach.v)) firmly.adv-a (to.p-arg (the.d ceiling.n)))))))",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "pro ((pres make.v) (sure.a (that (the.d (mod-n (mod-n man.n) (mod-n man.n iron.n) mod-n mod-n)))))) !)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "GS: (({you}.pro ((pres make.v) (sure.a (that (the.d (mod-n (mod-n man.n) (mod-n man.n iron.n) mod-n mod-n)))))) !)",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Gold: (((pres can.aux-v) not i.pro (persuade.v you.pro (to come.v)) ?) Ours: (sub ((pres can.aux-v) not i.pro (persuade",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\"Can't I persuade you to come?\" Gold: (((pres can.aux-v) not i.pro (persuade.v you.pro (to come.v)) ?) Ours: (sub ((pres can.aux-v) not i.pro (persuade.v you.pro (to (ka come.v)) ?)))",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Gold: (multi-sent (({you}.pro ((pres look.v) carefully.adv-a)) !) (i.pro ((pres be-going-to",
"authors": [],
"year": null,
"venue": "v)) *h))))))))",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\"Look carefully. I'm going to show you how it's done.\" Gold: (multi-sent (({you}.pro ((pres look.v) carefully.adv-a)) !) (i.pro ((pres be-going-to.aux-v) (show.v you.pro (ans-to (sub how.pq (it.pro ((pres (pasv do.v)) *h))))))))",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Ours: ( ((pres look.v) carefully.adv-a)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ours: ( ((pres look.v) carefully.adv-a) (tht (i.pro ((pres be.v) (go.v (adv-a (to (ka (show.v you.pro (sub how.pq (it.pro ((pres be.v) do.n ))))))))))))",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "An example ULF for the sentence, \"I want to dance in my new shoes\".",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "State transition diagram of the node generative transition system. Nodes in the figure are phases and edges are actions. An unlabeled edge means that this state transition occurs no matter what action is taken in that phase. The transition system starts in the GEN phase.",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "Similar to, we extract features from the current transition state configuration, C, to feed into the decoder as additional input in the form of learned embeddings e f (C) = [e f 1 (C); e f 2 (C); ...; e f l (C)] (3)",
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"num": null,
"text": "Ablation tests with standard deviation error bars of 5 runs of different random seeds.",
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"num": null,
"text": "Sim(w, a) = max(Olap(t, b), Olap(l, b)) + 0.5 * (Olap(p, e) + (1 \u2212 |RL(w, n) \u2212 RL(a, m)|))where token overlap, Olap, is defined asOlap(x, y) = 2 * |MaxSharedSubstr(x, y)| |x| + |y| and relative location RL is defined as RL(x, n) = IndexOf(x) n",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF2": {
"text": "Statistics of model performances with constraints added-the average of 5 runs.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"text": "Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019a. AMR parsing as sequence-tograph transduction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 80-94, Florence, Italy. Association for Computational Linguistics.",
"content": "<table><tr><td>Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin</td></tr><tr><td>Van Durme. 2019b. Broad-coverage semantic pars-</td></tr><tr><td>ing as transduction. In Proceedings of the 2019 Con-</td></tr><tr><td>ference on Empirical Methods in Natural Language</td></tr><tr><td>Processing and the 9th International Joint Confer-</td></tr><tr><td>ence on Natural Language Processing (EMNLP-</td></tr><tr><td>IJCNLP), pages 3786-3798, Hong Kong, China. As-</td></tr><tr><td>sociation for Computational Linguistics.</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"text": "\u00b1 1.4 47.4 \u00b1 1.3 58.4 \u00b1 0.7 59.8 \u00b1 1.0 59.1 \u00b1 1.1 60.7 \u00b1 1.5 57.8 \u00b1 0.5 59.0 \u00b1 0.7 -RoBERTa 45.5 \u00b1 2.4 47.2 \u00b1 1.7 58.3 \u00b1 1.4 59.3 \u00b1 1.0 59.1 \u00b1 1.6 60.5 \u00b1 1.1 57.5 \u00b1 1.2 58.3 \u00b1 0.9 -CharCNN 46.4 \u00b1 1.0 46.9 \u00b1 0.7 58.8 \u00b1 0.8 59.3 \u00b1 0.4 59.4 \u00b1 1.3 60.1 \u00b1 0.5 58.1 \u00b1 0.6 58.5 \u00b1 0.5 -e f (C) Feats 47.0 \u00b1 1.2 46.6 \u00b1 1.2 58.6 \u00b1 0.5 58.8 \u00b1 1.1 60.4 \u00b1 1.2 60.2 \u00b1 1.1 56.9 \u00b1 0.4 57.4 \u00b1 1.2 -POS 43.8 \u00b1 1.7 45.1 \u00b1 1.2 56.9 \u00b1 1.1 58.3 \u00b1 1.1 56.8 \u00b1 1.0 58.7 \u00b1 1.1 56.9 \u00b1 1.2 57.9 \u00b1 1.2 -GloVe 43.2 \u00b1 1.8 44.3 \u00b1 1.2 56.6 \u00b1 1.0 57.1 \u00b1 0.9 56.9 \u00b1 2.7 58.3 \u00b1 2.2 56.4 \u00b1 1.7 56.1 \u00b1 2.2",
"content": "<table><tr><td>Ablation</td><td colspan=\"2\">SEMBLEU</td><td/><td/><td colspan=\"2\">EL-SMATCH</td><td/></tr><tr><td/><td/><td/><td>F1</td><td/><td/><td>Precision</td><td>Recall</td></tr><tr><td/><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td></tr><tr><td>Full</td><td>46.4</td><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF5": {
"text": "Ablation results without decoding constraints, mean and standard deviation of 5 runs.",
"content": "<table><tr><td>Ablation</td><td colspan=\"2\">SEMBLEU</td><td/><td/><td colspan=\"2\">EL-SMATCH</td><td/></tr><tr><td/><td/><td/><td>F1</td><td/><td/><td>Precision</td><td>Recall</td></tr><tr><td/><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td></tr><tr><td>Full</td><td colspan=\"8\">47.3 \u00b1 0.6 46.2 \u00b1 0.3 56.3 \u00b1 0.7 57.5 \u00b1 0.8 60.2 \u00b1 0.5 61.5 \u00b1 1.2 52.9 \u00b1 0.9 54.1 \u00b1 1.5</td></tr><tr><td>\u2206x</td><td/><td>-1.2</td><td/><td>-2.3</td><td/><td>+0.8</td><td/><td>-4.9</td></tr><tr><td>-RoBERTa</td><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF6": {
"text": "\u00b1 2.3 40.0 \u00b1 1.4 54.2 \u00b1 1.2 55.8 \u00b1 1.2 57.6 \u00b1 1.0 59.1 \u00b1 1.2 51.1 \u00b1 1.5 52.8 \u00b1 1.",
"content": "<table><tr><td>Ablation</td><td colspan=\"2\">SEMBLEU</td><td/><td/><td colspan=\"2\">EL-SMATCH</td><td/></tr><tr><td/><td/><td/><td>F1</td><td/><td>Precision</td><td/><td>Recall</td></tr><tr><td/><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td></tr><tr><td>Full</td><td colspan=\"8\">38.3 4</td></tr><tr><td>\u2206x</td><td/><td>-7.4</td><td/><td>-4.0</td><td/><td>-1.6</td><td/><td>-6.2</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}