{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:22:06.519349Z"
},
"title": "Penman: An Open-Source Library and Tool for AMR Graphs",
"authors": [
{
"first": "Michael",
"middle": [
"Wayne"
],
"last": "Goodman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nanyang Technological University",
"location": {
"country": "Singapore"
}
},
"email": "goodmami@uw.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Meaning Representation (AMR) (Banarescu et al., 2013) is a framework for semantic dependencies that encodes its rooted and directed acyclic graphs in a format called PENMAN notation. The format is simple enough that users of AMR data often write small scripts or libraries for parsing it into an internal graph representation, but there is enough complexity that these users could benefit from a more sophisticated and well-tested solution. The open-source Python library Penman provides a robust parser, functions for graph inspection and manipulation, and functions for formatting graphs into PENMAN notation. Many functions are also available in a command-line tool, thus extending its utility to non-Python setups.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Meaning Representation (AMR) (Banarescu et al., 2013) is a framework for semantic dependencies that encodes its rooted and directed acyclic graphs in a format called PENMAN notation. The format is simple enough that users of AMR data often write small scripts or libraries for parsing it into an internal graph representation, but there is enough complexity that these users could benefit from a more sophisticated and well-tested solution. The open-source Python library Penman provides a robust parser, functions for graph inspection and manipulation, and functions for formatting graphs into PENMAN notation. Many functions are also available in a command-line tool, thus extending its utility to non-Python setups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Abstract Meaning Representation (AMR; Banarescu et al., 2013) is a framework for encoding English language 1 meaning as structural-semantic graphs using a fork of Propbank (Kingsbury and Palmer, 2002; O'Gorman et al., 2018) for its semantic frames with additional AMR-specific roles. The graphs are connected, directed, with node and edge labels, and may have multiple roots but always have exactly one distinguished top node. AMR corpora, such as the recent AMR Annotation Release 3.0 (LDC2020T02), 2 encode the graphs in a format called PENMAN notation (Matthiessen and Bateman, 1991) . PENMAN notation is a text stream and is thus linear, but it first uses bracketing to capture a spanning tree over the graph, then inverted edge labels and references to node IDs to capture re-entrancies. Proper interpretation 1 Variations exist for other languages (e.g., Li et al., 2016; , but AMR is primarily English and is not an interlingua (Xue et al., 2014 ).",
"cite_spans": [
{
"start": 172,
"end": 200,
"text": "(Kingsbury and Palmer, 2002;",
"ref_id": "BIBREF16"
},
{
"start": 201,
"end": 223,
"text": "O'Gorman et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 555,
"end": 586,
"text": "(Matthiessen and Bateman, 1991)",
"ref_id": "BIBREF20"
},
{
"start": 815,
"end": 816,
"text": "1",
"ref_id": null
},
{
"start": 861,
"end": 877,
"text": "Li et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 935,
"end": 952,
"text": "(Xue et al., 2014",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 https://catalog.ldc.upenn.edu/ LDC2020T02 of the \"pure\" graph therefore requires the deinversion of inverted edges and the resolution of node IDs. Some tools that work with AMR use the interpreted pure graph Song and Gildea, 2019; Chiang et al., 2013) , but many others work at the tree level for surface alignment (Flanigan et al., 2014) , for transformations from syntax trees (Wang et al., 2015) , or to make use of tree-based algorithms (Pust et al., 2015; Takase et al., 2016) . Others, particularly sequential neural systems (Konstas et al., 2017; van Noord and Bos, 2017) , use the linear form directly.",
"cite_spans": [
{
"start": 210,
"end": 232,
"text": "Song and Gildea, 2019;",
"ref_id": "BIBREF28"
},
{
"start": 233,
"end": 253,
"text": "Chiang et al., 2013)",
"ref_id": "BIBREF5"
},
{
"start": 317,
"end": 340,
"text": "(Flanigan et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 381,
"end": 400,
"text": "(Wang et al., 2015)",
"ref_id": "BIBREF31"
},
{
"start": 443,
"end": 462,
"text": "(Pust et al., 2015;",
"ref_id": "BIBREF26"
},
{
"start": 463,
"end": 483,
"text": "Takase et al., 2016)",
"ref_id": "BIBREF30"
},
{
"start": 533,
"end": 555,
"text": "(Konstas et al., 2017;",
"ref_id": "BIBREF17"
},
{
"start": 556,
"end": 580,
"text": "van Noord and Bos, 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Furthermore, while AMRs ostensibly describe semantic graphs abstracted away from any particular sentence's surface form, human annotators tend to \"leak information\" (Konstas et al., 2017) about the source sentence. This means that an annotator might be expected to produce the AMR in Fig. 1 for sentence (1), but then swap the relative order of the adjunct relations :location and :time for (2). 3 Van Noord and Bos (2017) embraced these biases and intentionally reordered relations, even frame arguments such as :ARG0 and :ARG1, by their surface alignments, leading to a boost in their evaluation scores.",
"cite_spans": [
{
"start": 165,
"end": 187,
"text": "(Konstas et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 396,
"end": 397,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 284,
"end": 290,
"text": "Fig. 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) I swam in the pool today.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) Today I swam in the pool. As illustrated above, work involving AMR may use the PENMAN string, the tree structure, or the pure graph, or possibly multiple representations. This paper therefore describes and demonstrates Penman, a Python library and command-line utility for working with AMR data at both the tree and graph levels and for encoding and decoding these structures using PENMAN notation. Converting a tree into a graph loses information that the tree implicitly encodes, so Penman introduces the epigraph: 4 optional information that exists on top of the graph and controls how the pure graph is expressed as a tree. Penman is freely available under a permissive open-source license at https://github.com/goodmami/penman/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Penman uses three-stage processes to decode PEN-MAN notation to a graph and to encode a graph to PENMAN, as illustrated in Fig. 2 . Parsing is the process of getting a tree from a PENMAN string, and interpretation is getting a graph from a tree, while decoding is the whole string-to-graph process. Going the other way, configuration is the process of getting a tree from a graph and formatting is getting a string from a tree, while encoding is the whole graph-to-string process. Splitting the decoding and encoding processes into two steps each allows one to work with AMR data at any stage. The variant of PENMAN notation used by Penman is described in \u00a72.1. The tree, graph, and epigraph data structures are described in \u00a72.2. Getting a tree from a string (and vice-versa) depends only on understanding PENMAN notation, but getting a graph from a tree (and vice-versa) requires an understanding of the semantic model. Semantic models are described in \u00a72.4.",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 129,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Decoding and Encoding Graphs",
"sec_num": "2"
},
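{
"text": "To make the three stages concrete, the following minimal sketch walks one AMR through both directions; it assumes the library's top-level penman.parse, penman.interpret, penman.configure, and penman.format helpers alongside penman.decode and penman.encode.\n\nimport penman\n\ns = '(s / swim-01 :ARG0 (i / i) :location (p / pool))'\nt = penman.parse(s)       # string -> tree (no semantic model needed)\ng = penman.interpret(t)   # tree -> graph (roles deinverted by the model)\nt2 = penman.configure(g)  # graph -> tree (layout guided by the epigraph)\ns2 = penman.format(t2)    # tree -> string\nprint(s2)\n# decode and encode wrap the two steps of each direction:\nprint(penman.encode(penman.decode(s)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding and Encoding Graphs",
"sec_num": "2"
},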
{
"text": "The Penman project uses a less-strict variant of PENMAN notation than is used by AMR in order to robustly handle some kinds of erroneous output by AMR parsers. The syntactic and lexical rules for PENMAN notation in PEG syntax 5 are shown in Fig. 3 . Optional whitespace (not shown) is allowed around expressions in the syntactic rules.",
"cite_spans": [],
"ref_spans": [
{
"start": 241,
"end": 247,
"text": "Fig. 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "PENMAN Notation",
"sec_num": "2.1"
},
{
"text": "In AMR, the Concept expression on Node, the Atom expression on Concept, and the (Node / Atom) expression on Reln are obligatory, but they are optional for Penman and will get a null value when missing. Also in AMR, the initial Symbol on Node may be further constrained with a specific Variable pattern for node identifiers and the Symbol in Atom would become a choice: Variable / Symbol. How Penman handles variables is discussed in \u00a72.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PENMAN Notation",
"sec_num": "2.1"
},
{
"text": "AMR corpora conventionally use blank lines to delineate multiple graphs, but Penman relies on bracketing instead and whitespace is not significant. Penman also parses comments (not described in Fig. 3 ), which are lines prefixed with # characters, and extracts metadata where keys are tokens prefixed with two colons (e.g., ::id) and values are anything after the key until the next key or a newline.",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 200,
"text": "Fig. 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "PENMAN Notation",
"sec_num": "2.1"
},
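{
"text": "As a small illustration of the comment handling described above, the sketch below assumes that a decoded graph exposes the extracted metadata on a metadata attribute.\n\nimport penman\n\ns = '''# ::id ex1\n# ::snt I swam in the pool today.\n(s / swim-01 :ARG0 (i / i))'''\ng = penman.decode(s)\nprint(g.metadata)  # e.g. {'id': 'ex1', 'snt': 'I swam in the pool today.'}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PENMAN Notation",
"sec_num": "2.1"
},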
{
"text": "In Penman, a tree data structure is a n, B tuple where n is the node's identifier (variable) and B is a list of branches. Each branch is a l, b tuple where l is a branch label (a possibly inverted role) and b is a (sub)tree or an atom. The first branch on B is the node's concept, thus a tree is a near-direct conversion of the Node rule in Fig. 3 where B is the concatenation of Concept and Reln. The tree corresponding to the AMR in Fig. 2 is shown in Fig. 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 341,
"end": 347,
"text": "Fig. 3",
"ref_id": "FIGREF3"
},
{
"start": 435,
"end": 441,
"text": "Fig. 2",
"ref_id": "FIGREF2"
},
{
"start": 454,
"end": 460,
"text": "Fig. 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Decoding: Trees, Graphs, and Epigraphs",
"sec_num": "2.2"
},
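{
"text": "A short sketch of inspecting that structure, assuming the parsed Tree object exposes the (n, B) pair on a node attribute:\n\nimport penman\n\nt = penman.parse('(a / alpha :ARG0 (b / beta) :ARG0-of (g / gamma :ARG1 b))')\nvar, branches = t.node          # the (n, B) pair for the top node\nprint(var)                      # 'a'\nfor label, target in branches:  # each branch is an (l, b) pair\n    print(label, target)\n# the first branch, ('/', 'alpha'), is the node's concept",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding: Trees, Graphs, and Epigraphs",
"sec_num": "2.2"
},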
{
"text": "A graph is a tuple v, T where v is the top variable and T is a flat list of triples. For each triple s, r, t , the source s is always the head variable of a dependency, r is the normalized role, and t is the dependent. When interpreting a triple from a tree branch, n becomes s and t comes from b unless the branch label l is deinverted according to the semantic model (described in \u00a72.4) to produce r, in which case s and t are swapped. In the graph, t is designated a variable if it appears as the source of any other triple; otherwise it is an atom. Triples where t is a variable are called edge relations. If t is an atom and r is the special role :instance, then t is the node's concept and the triple is an instance relation. All other triples are attribute relations. Fig. 5 shows the graph corresponding to the AMR in Fig. 2 . Conversion from a PENMAN string to a tree is straightforward: the only information lost in parsing is formatting details like the amount of whitespace. The interpretation of a graph from a tree, however, loses information about the specific tree configuration for the graph, as there are often many possible configurations for the same graph. Therefore, upon interpretation, Penman stores in two places the information that would be lost: in the order of triples (meaning the graph's triples are a sequence, not an unordered bag or set), and in the epigraph, which is a mapping of triples to lists of epigraphical markers. The choice of the term epigraph is by analogy to the epigenome: just as epigenetic markers control how genes are expressed in an organism, epigraphical markers control how triples are expressed in a tree. In interpreting a graph from a tree, when a branch's target is another subtree (e.g., when ( is encountered in the string), a Push marker is assigned to the triple resulting from the branch, indicating that that triple pushed a new node context onto a stack representing the tree structure. The final triple resulting from branches in the subtree, even considering further nested subtrees (e.g., at the point where ) is encountered in the string), gets a Pop marker indicating the end of the node context. In addition to tree-layout markers, the epigraph is also where surface alignment information is stored, as these alignments are not part of the pure graph. Fig. 6 shows the epigraph for the AMR in Fig. 2 . [Push('g')], ('g', ':instance', 'gamma'):[], ('g', ':ARG1', 'b'):",
"cite_spans": [],
"ref_spans": [
{
"start": 775,
"end": 781,
"text": "Fig. 5",
"ref_id": "FIGREF5"
},
{
"start": 826,
"end": 832,
"text": "Fig. 2",
"ref_id": "FIGREF2"
},
{
"start": 2325,
"end": 2331,
"text": "Fig. 6",
"ref_id": null
},
{
"start": 2366,
"end": 2372,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Decoding: Trees, Graphs, and Epigraphs",
"sec_num": "2.2"
},
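{
"text": "The sketch below decodes the AMR of Fig. 2 and inspects the resulting graph; it assumes the Graph object's top and triples attributes and its instances(), edges(), and attributes() accessors.\n\nimport penman\n\ngraph = penman.decode('(a / alpha :ARG0 (b / beta) :ARG0-of (g / gamma :ARG1 b))')\nprint(graph.top)           # 'a'\nprint(graph.triples)       # stored deinverted, e.g. ('g', ':ARG0', 'a')\nprint(graph.instances())   # instance relations such as ('a', ':instance', 'alpha')\nprint(graph.edges())       # edge relations between variables\nprint(graph.attributes())  # attribute relations to non-variable atoms",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding: Trees, Graphs, and Epigraphs",
"sec_num": "2.2"
},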
{
"text": "# Lexical rules Symbol <-NameChr+ Role <-':' NameChr * Algn <-'~' Prefix? Indices Prefix <-[a-zA-Z] '.'? Indices <-Digit+ (',' Digit+) * String <-'\"' (!'\"' ('\\\\' . / .)) * '\"' NameChr <-![ \\n\\t\\r\\f\\v()/:~] . Digit <-[0-9]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding: Trees, Graphs, and Epigraphs",
"sec_num": "2.2"
},
{
"text": "[Pop] } Figure 6 : Epigraph structure for the AMR in Fig. 2 2.3 Encoding: No Surprises When configuring a tree from a graph, the epigraph is used to control where triples occur in the tree. If at each step the layout markers in the epigraph allow the configuration process to navigate a tree with no surprises (that is, when the source or target of each triple is the current node on a node-context stack), then it will produce the same tree that was decoded to get the graph. 6 Otherwise, such as when a graph is modified or constructed without an epigraph, the algorithm will switch to another procedure that repeatedly passes over the list of remaining triples and configures those whose source or target are already in the tree under construction. If no triples are inserted in a pass, the remaining triples are discarded and a warning is logged that the graph is disconnected. The semantic model is used to properly configure inverted branches as necessary.",
"cite_spans": [
{
"start": 477,
"end": 478,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 8,
"end": 16,
"text": "Figure 6",
"ref_id": null
},
{
"start": 53,
"end": 59,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Decoding: Trees, Graphs, and Epigraphs",
"sec_num": "2.2"
},
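{
"text": "A minimal round-trip sketch of this behaviour, using only decode and encode: when the decoded graph is left unmodified, the stored epigraph lets encoding reproduce the original tree configuration.\n\nimport penman\n\ns = '(a / alpha :ARG0 (b / beta) :ARG0-of (g / gamma :ARG1 b))'\ng = penman.decode(s)\nprint(penman.encode(g))  # same nesting as the input; only whitespace may differ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding: No Surprises",
"sec_num": "2.3"
},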
{
"text": "Once a tree is configured, formatting it to a string is simple, and users may customize the formatter to adjust the amount of whitespace used. The default indentation width is an adaptive mode that indents based on the initial column of the current node context; otherwise an explicit width is multiplied by the nesting level, or a user may select to print the whole AMR on one line. Another customization option is a \"compact\" mode which joins any attribute relations, but not edges, that immediately follow the concept onto the same line as the concept.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding: Trees, Graphs, and Epigraphs",
"sec_num": "2.2"
},
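{
"text": "A sketch of these formatting options, assuming indent and compact keyword arguments on encode (an integer for a fixed indentation width, None for a single line):\n\nimport penman\n\ng = penman.decode('(s / swim-01 :polarity - :ARG0 (i / i) :location (p / pool))')\nprint(penman.encode(g, indent=2))                # fixed two-space indentation\nprint(penman.encode(g, indent=None))             # the whole AMR on one line\nprint(penman.encode(g, indent=6, compact=True))  # attributes join the concept line",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding: No Surprises",
"sec_num": "2.3"
},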
{
"text": "In order to interpret a tree into a graph, a semantic model is used to get normalized, or deinverted, triples. Penman provides a default model which only checks if the role ends in -of (the conventional indicator of role inversion in PENMAN notation). Ideally this would be all that is needed, but AMR defines several primary (non-inverted) roles ending in -of, such as :consist-of and :prep-on-behalf-of, where the inverted forms are :consist-of-of and :prep-on-behalf-of-of, respectively. The model therefore first checks if a role is listed as a primary role; if not and if it ends in -of, it is inverted, otherwise it is not. When the role of a triple 6 There is currently one known situation where this is not the case: when a graph has duplicate triples with the same source, role, and target, as the epigraph cannot uniquely map the triple to its epigraphical markers. These, however, are likely bad graphs in AMR.",
"cite_spans": [
{
"start": 656,
"end": 657,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Models",
"sec_num": "2.4"
},
{
"text": "is deinverted, Penman also swaps its source and target so the dependency relation remains intact.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Models",
"sec_num": "2.4"
},
{
"text": "The model has other uses, such as inverting triples (useful when encoding), defining transformations as described in \u00a73, and checking graphs for compliance with the model. In addition to the default model, Penman includes an AMR model with the roles and transformations defined in the AMR documentation. 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Models",
"sec_num": "2.4"
},
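{
"text": "The deinversion behaviour can be seen directly on decoded graphs; the sketch below assumes the AMR model object from penman.models.amr and the model keyword of decode.\n\nimport penman\nfrom penman.models.amr import model as amr_model\n\n# an ordinary -of role is deinverted during interpretation ...\ng = penman.decode('(b / beta :ARG0-of (g / gamma))', model=amr_model)\nprint(g.triples)  # includes ('g', ':ARG0', 'b') rather than an :ARG0-of triple\n\n# ... but :consist-of is a primary AMR role and is left as-is\nh = penman.decode('(t / thing :consist-of (w / wood))', model=amr_model)\nprint(h.triples)  # includes ('t', ':consist-of', 'w')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Models",
"sec_num": "2.4"
},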
{
"text": "3 Graph and Tree Transformations Goodman (2019a) described four transformations of AMR graphs and trees-namely, role canonicalization, edge reification (including dereification), attribute reification, and tree structure indication 8 -and how they could be used to improve the comparability of parser-produced AMR corpora by normalizing differences that are meaningequivalent in AMR and by allowing for partial credit when, for example, a relation has a correct role but an incorrect target value. Penman incorporates all of those transformations but it (a) depends on the semantic model to define canonical roles and reifications, whereas Goodman 2019a used hardcoded transformations; and (b) inserts layout markers for a \"no-surprises\" configuration that results in the expected tree. A separately-defined model allows Penman to use the same transformation methods with different versions of AMR, for different tasks, or even with non-AMR representations, by creating different models. For the implementation details of these transformations, refer to Goodman 2019a.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Models",
"sec_num": "2.4"
},
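{
"text": "As a rough sketch of invoking such transformations programmatically, the example below assumes a penman.transform module with canonicalize_roles and reify_edges helpers; the exact entry points may differ in a given release.\n\nimport penman\nfrom penman import transform  # assumed location of the transformation functions\nfrom penman.models.amr import model as amr_model\n\nt = penman.parse('(c / chapter :domain-of 7)')\nt = transform.canonicalize_roles(t, amr_model)  # :domain-of becomes :mod\nprint(penman.format(t))                         # (c / chapter :mod 7)\n\ng = penman.decode('(c / chapter :mod 7)', model=amr_model)\ng = transform.reify_edges(g, amr_model)         # :mod reified via its AMR reification\nprint(penman.encode(g, model=amr_model))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph and Tree Transformations",
"sec_num": "3"
},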
{
"text": "In addition to those four transformations, Penman adds a few more methods. The rearrange method operates on a tree and sorts the order of branches by their branch labels. Besides changing the order of branches, their structure is unchanged by this method. Van Noord and Bos (2017) similarly rearranged tree branches based on surface alignments. The reconfigure method configures a tree from a graph after discarding the layout markers in the epigraph and sorting the triples based on their roles. Unlike the rearrange method, reconfigure affects the entire structure of the graph except for which node is the graph's top. For both of these, the sorting methods are defined by the model, and Penman includes three such methods: original order, random order, and canonical order. For rearrange there are additional sorting methods applicable to trees: alphanumeric order, attributesfirst order, and inverted-last order. Since node variables in AMR are conventionally assigned in order of their appearance and the above methods can change this order, the reset-variables method reassigns the variables based on the new tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Models",
"sec_num": "2.4"
},
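{
"text": "A sketch of rearranging branches in a tree, assuming penman.layout.rearrange and the model's canonical_order sort key (the names may differ from the installed version); reconfigure and reset-variables can be chained in the same way.\n\nimport penman\nfrom penman import layout\nfrom penman.models.amr import model as amr_model\n\nt = penman.parse('(s / swim-01 :time (t / today) :ARG0 (i / i))')\nlayout.rearrange(t, key=amr_model.canonical_order)  # reorder branches in place\nprint(penman.format(t))  # :ARG0 should now precede :time",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph and Tree Transformations",
"sec_num": "3"
},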
{
"text": "Here I describe a handful of use cases that motivate the use of Penman.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Use Cases",
"sec_num": "4"
},
{
"text": "Users of the Penman library can programmatically construct graphs and then encode them to PENMAN notation. Penman allows users to directly append to the list of triples and assign epigraphical markers, or to assemble small graphs and use set-union operations to combine them together. Another option is to assemble the tree directly, which may make more sense for some systems. Once the tree is configured or constructed, users can use transformations such as rearrange and reset-variables to make the PENMAN string more canonical in form. Fig. 7 illustrates using the Python API to construct and encode a graph. Another possibility is for graph augmentation, where users rely on Penman to parse a string to a graph which they then modify, e.g., to add surface alignments or wiki links, then serialize to a string again. This allows them to focus on their primary task without worrying about the details of parsing and formatting.",
"cite_spans": [],
"ref_spans": [
{
"start": 540,
"end": 546,
"text": "Fig. 7",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Graph Construction",
"sec_num": "4.1"
},
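{
"text": "The construction in Fig. 7 can be sketched as follows; the triples are chosen to match the figure, and the Graph constructor and encode call are the ones shown there.\n\nimport penman\n\ng = penman.Graph([\n    ('s', ':instance', 'swim-01'),\n    ('s', ':ARG0', 'i'),\n    ('i', ':instance', 'i'),\n    ('s', ':location', 'p'),\n    ('p', ':instance', 'pool'),\n])\nprint(penman.encode(g))  # (s / swim-01 :ARG0 (i / i) :location (p / pool)), modulo whitespace",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Construction",
"sec_num": "4.1"
},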
{
"text": "Whether one is generating AMR graphs with hand annotation or by automatic means, the end result is not guaranteed to be valid with respect to the model, so Penman offers a function to check for compliance. Currently, this check evaluates three criteria:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Validation",
"sec_num": "4.2"
},
{
"text": "1. Is each role defined by the model? 2. Is the top set to a node in the graph?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Validation",
"sec_num": "4.2"
},
{
"text": "3. Is the graph fully connected?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Validation",
"sec_num": "4.2"
},
{
"text": "To facilitate both library and tool usage, the library function returns a dictionary mapping triples (for context) to error messages, as shown in Fig. 8 , while the tool encodes the errors as metadata comments and has a nonzero exit-code on errors. ",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 152,
"text": "Fig. 8",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Graph Validation",
"sec_num": "4.2"
},
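{
"text": "The compliance check of Fig. 8 in context: model.errors is the library call shown in the figure, and the surrounding handling is only a sketch.\n\nimport penman\nfrom penman.models.amr import model\n\ng = penman.decode('(s / swim-01 :ARG0 (i / i) :stroke (b / butterfly))')\nerrors = model.errors(g)  # maps offending triples to lists of error messages\nfor triple, messages in errors.items():\n    print(triple, messages)  # ('s', ':stroke', 'b') ['invalid role']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Validation",
"sec_num": "4.2"
},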
{
"text": "The official AMR corpora, such as the AMR Annotation Release 3.0, are distributed with the graphs serialized in a human-readable style that uses increasing levels of indentation to show the nesting of subgraphs. Furthermore, relations on a node appear in a canonical order depending on their roles (e.g., ARG1 appears before ARG2) or their surface alignments, where the appearance of a node roughly follows the order of corresponding words in a sentence. The rearrange and reconfigure transformations can change the order of relations in the graph to be more canonical, the reset-variables method can ensure variable forms are as expected, and the whitespace options of tree formatting can emulate the same indentation style as the official corpora. These features may be useful for users distributing new AMR corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formatting for a Consistent Style",
"sec_num": "4.3"
},
{
"text": "The normalization options in \u00a73 can be useful when evaluating the results of AMR parsing, as described in Goodman 2019a. Penman is thus well-situated as a preprocessor to an evaluation step using, e.g., smatch , SemBLEU Gildea, 2019), or SEMA (Anchi\u00eata et al., 2019 ). Fig. 9 shows the command-line tool performing role canonicalization. $ echo '(c / chapter :domain-of 7)' \\ > | penman --amr --canonicalize-roles (c / chapter :mod 7)",
"cite_spans": [
{
"start": 220,
"end": 265,
"text": "Gildea, 2019), or SEMA (Anchi\u00eata et al., 2019",
"ref_id": null
}
],
"ref_spans": [
{
"start": 269,
"end": 275,
"text": "Fig. 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Normalization for Fairer Evaluation",
"sec_num": "4.4"
},
{
"text": "Figure 9: Example of using Penman's command-line tool for normalization",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization for Fairer Evaluation",
"sec_num": "4.4"
},
{
"text": "Sequential neural models which use linearized AMR graphs have been popular for both parsing and generation (Barzdins and Gosko, 2016; Peng et al., 2017; Konstas et al., 2017; van Noord and Bos, 2017; Song et al., 2018; Damonte and Cohen, 2019; Zhang et al., 2019) , but data sparsity is a significant issue (Peng et al., 2017) . One way to address data sparsity is to remove senses on concepts (Lyu and Titov, 2018) . Fig. 10 shows how the Python API can remove these senses in the tree. Other techniques include, but are not limited to, normalizing linear forms, as discussed in \u00a74.4; rearranging graphs with alignments to match the input string (van Noord and Bos, 2017); or randomizing branch orders to avoid overfitting to annotator biases, as suggested by (Konstas et al., 2017) . Penman supports all these use cases via commands, as in Fig. 9 , without any coding required.",
"cite_spans": [
{
"start": 107,
"end": 133,
"text": "(Barzdins and Gosko, 2016;",
"ref_id": "BIBREF2"
},
{
"start": 134,
"end": 152,
"text": "Peng et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 153,
"end": 174,
"text": "Konstas et al., 2017;",
"ref_id": "BIBREF17"
},
{
"start": 175,
"end": 199,
"text": "van Noord and Bos, 2017;",
"ref_id": "BIBREF21"
},
{
"start": 200,
"end": 218,
"text": "Song et al., 2018;",
"ref_id": "BIBREF29"
},
{
"start": 219,
"end": 243,
"text": "Damonte and Cohen, 2019;",
"ref_id": "BIBREF9"
},
{
"start": 244,
"end": 263,
"text": "Zhang et al., 2019)",
"ref_id": "BIBREF33"
},
{
"start": 307,
"end": 326,
"text": "(Peng et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 394,
"end": 415,
"text": "(Lyu and Titov, 2018)",
"ref_id": "BIBREF19"
},
{
"start": 761,
"end": 783,
"text": "(Konstas et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 418,
"end": 425,
"text": "Fig. 10",
"ref_id": "FIGREF1"
},
{
"start": 842,
"end": 848,
"text": "Fig. 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preprocessing for Machine Learning",
"sec_num": "4.5"
},
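{
"text": "One way to strip senses is sketched below using a regular expression over the graph's instance triples; this is an illustration and not necessarily the approach shown in Fig. 10.\n\nimport re\nimport penman\n\ndef remove_senses(g):\n    # drop trailing sense suffixes such as -01 from concepts\n    triples = [\n        (s, r, re.sub(r'-\\d+$', '', t)) if r == ':instance' else (s, r, t)\n        for s, r, t in g.triples\n    ]\n    return penman.Graph(triples, top=g.top)\n\ng = penman.decode('(s / swim-01 :ARG0 (i / i) :location (p / pool))')\nprint(penman.encode(remove_senses(g)))  # swim-01 becomes swim",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing for Machine Learning",
"sec_num": "4.5"
},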
{
"text": "This paper has described PENMAN as a notation for encoding AMR graphs, but it is also applicable to other dependency graphs that share the same constraints (e.g., connected, directed). PENMAN notation can encode Dependency Minimal Recursion Semantics (DMRS; Copestake, 2009; Copestake et al., 2016) , such as for learning graph-to-graph machine translation rules (Goodman, 2018) and neural generation (Hajdik et al., 2019) , and it can encode Elementary Dependency Structures (EDS; Oepen et al., 2004; Oepen and L\u00f8nning, 2006) , as shown in Fig. 11 using PyDelphin (Goodman, 2019b) for conversion. It is also useful for extensions of AMR, such as Uniform Meaning Representation (UMR; Pustejovsky et al., 2019) . ",
"cite_spans": [
{
"start": 258,
"end": 274,
"text": "Copestake, 2009;",
"ref_id": "BIBREF7"
},
{
"start": 275,
"end": 298,
"text": "Copestake et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 401,
"end": 422,
"text": "(Hajdik et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 482,
"end": 501,
"text": "Oepen et al., 2004;",
"ref_id": "BIBREF22"
},
{
"start": 502,
"end": 526,
"text": "Oepen and L\u00f8nning, 2006)",
"ref_id": "BIBREF23"
},
{
"start": 684,
"end": 709,
"text": "Pustejovsky et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 541,
"end": 548,
"text": "Fig. 11",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Applicability beyond AMR",
"sec_num": "5"
},
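{
"text": "Because PENMAN notation itself is representation-agnostic, the default model is enough to decode such non-AMR graphs; the EDS-flavoured serialization below is only illustrative of the idea, not the exact output of the PyDelphin conversion.\n\nimport penman\n\n# an EDS-like dependency graph serialized in PENMAN notation\ns = '(e / _swim_v_1 :ARG1 (x / pron :BV-of (q / pronoun_q)))'\ng = penman.decode(s)  # default model: only the -of convention is assumed\nprint(g.triples)      # includes ('q', ':BV', 'x') after deinversion",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applicability beyond AMR",
"sec_num": "5"
},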
{
"text": "In this paper I have presented Penman, a Python library and command-line tool for working with AMR and other graphs serialized in the PENMAN format. Existing work on AMR has targeted the PENMAN string, the parsed tree, or the interpreted graph, and Penman accommodates all of these use cases by allowing users to work with the tree or graph data structures or to encode them back to strings. Transformations defined at both the graph and tree level make it applicable for pre-and postprocessing steps for corpus creation, evaluation, machine learning projects, and more. Penman is available under the MIT open-source license at https://github.com/goodmami/penman. Interactive notebook demonstrations and informational videos are available at https://github. com/goodmami/penman#demo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Graphically there is no difference, and a metric like smatch would return a perfect score when comparing the two.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A different sense than for an inscription on a building or a short passage at the start of a book.5 See https://bford.info/packrat/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://isi.edu/~ulf/amr/lib/roles. html 8 With the introduction of the epigraph, tree structure indication is somewhat redundant, however it differs in that the transformation puts this information in the graph triples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Thanks to the three anonymous reviewers for their helpful comments, and to the contributors and users of the Penman project for their support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SEMA: an extended semantic evaluation for AMR",
"authors": [
{
"first": "Rafael",
"middle": [],
"last": "Torres Anchi\u00eata",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Antonio Sobrevilla",
"suffix": ""
},
{
"first": "Thiago Alexandre Salgueiro",
"middle": [],
"last": "Cabezudo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pardo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 20th Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rafael Torres Anchi\u00eata, Marco Antonio Sobrevilla Cabezudo, and Thiago Alexandre Salgueiro Pardo. 2019. SEMA: an extended semantic evaluation for AMR. In Proceedings of the 20th Computational Linguistics and Intelligent Text Processing. Springer International Publishing.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Abstract meaning representation for sembanking",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Banarescu",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Madalina",
"middle": [],
"last": "Georgescu",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Griffitt",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse",
"volume": "",
"issue": "",
"pages": "178--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguis- tic Annotation Workshop and Interoperability with Discourse, pages 178-186, Sofia, Bulgaria. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "RIGA at SemEval-2016 task 8: Impact of Smatch extensions and character-level neural translation on AMR parsing accuracy",
"authors": [
{
"first": "Guntis",
"middle": [],
"last": "Barzdins",
"suffix": ""
},
{
"first": "Didzis",
"middle": [],
"last": "Gosko",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "1143--1147",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1176"
]
},
"num": null,
"urls": [],
"raw_text": "Guntis Barzdins and Didzis Gosko. 2016. RIGA at SemEval-2016 task 8: Impact of Smatch extensions and character-level neural translation on AMR pars- ing accuracy. In Proceedings of the 10th Interna- tional Workshop on Semantic Evaluation (SemEval- 2016), pages 1143-1147, San Diego, California. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Towards a general Abstract Meaning Representation corpus for Brazilian Portuguese",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Antonio",
"suffix": ""
},
{
"first": "Sobrevilla",
"middle": [],
"last": "Cabezudo",
"suffix": ""
},
{
"first": "Thiago",
"middle": [],
"last": "Pardo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "236--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Antonio Sobrevilla Cabezudo and Thiago Pardo. 2019. Towards a general Abstract Meaning Repre- sentation corpus for Brazilian Portuguese. In Pro- ceedings of the 13th Linguistic Annotation Work- shop, pages 236-244.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Smatch: an evaluation metric for semantic feature structures",
"authors": [
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "748--752",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shu Cai and Kevin Knight. 2013. Smatch: an evalua- tion metric for semantic feature structures. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 748-752.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Parsing graphs with hyperedge replacement grammars",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Andreas",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "Bevan",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, Bevan Jones, and Kevin Knight. 2013. Parsing graphs with hyperedge replacement grammars. In Proceedings of the 51st",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "924--932",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 924-932.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Invited Talk: slacker semantics: Why superficiality, dependency and avoidance of commitment can be the right way to go",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Copestake. 2009. Invited Talk: slacker seman- tics: Why superficiality, dependency and avoidance of commitment can be the right way to go. In Proceedings of the 12th Conference of the Euro- pean Chapter of the ACL (EACL 2009), pages 1-9, Athens, Greece. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Resources for building applications with Dependency Minimal Recursion Semantics",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Emerson",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"Wayne"
],
"last": "Goodman",
"suffix": ""
},
{
"first": "Matic",
"middle": [],
"last": "Horvat",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Kuhnle",
"suffix": ""
},
{
"first": "Ewa",
"middle": [],
"last": "Muszy\u00e5\u010fska",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Copestake, Guy Emerson, Michael Wayne Good- man, Matic Horvat, Alexander Kuhnle, and Ewa Muszy\u00c5\u010eska. 2016. Resources for building ap- plications with Dependency Minimal Recursion Se- mantics. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Structural neural encoders for AMR-to-text generation",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Damonte",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "3649--3658",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1366"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Damonte and Shay B. Cohen. 2019. Structural neural encoders for AMR-to-text generation. In Pro- ceedings of the 2019 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long and Short Papers), pages 3649-3658, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A discriminative graph-based parser for the abstract meaning representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1426--1436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Flanigan, Sam Thomson, Jaime G Carbonell, Chris Dyer, and Noah A Smith. 2014. A discrimi- native graph-based parser for the abstract meaning representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1426-1436.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semantic Operations for Transfer-based Machine Translation",
"authors": [
{
"first": "",
"middle": [],
"last": "Michael Wayne Goodman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wayne Goodman. 2018. Semantic Operations for Transfer-based Machine Translation. Ph.D. the- sis, University of Washington, Seattle.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "AMR normalization for fairer evaluation",
"authors": [
{
"first": "",
"middle": [],
"last": "Michael Wayne Goodman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 33rd",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wayne Goodman. 2019a. AMR normaliza- tion for fairer evaluation. In Proceedings of the 33rd",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Pacific Asia Conference on Language, Information, and Computation",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pacific Asia Conference on Language, Information, and Computation, Hakodate.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A Python library for deep linguistic resources",
"authors": [
{
"first": "",
"middle": [],
"last": "Michael Wayne Goodman",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 Pacific Neighborhood Consortium Annual Conference and Joint Meetings (PNC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wayne Goodman. 2019b. A Python library for deep linguistic resources. In 2019 Pacific Neigh- borhood Consortium Annual Conference and Joint Meetings (PNC), Singapore.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Neural text generation from rich semantic representations",
"authors": [
{
"first": "Valerie",
"middle": [],
"last": "Hajdik",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"Wayne"
],
"last": "Goodman",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valerie Hajdik, Jan Buys, Michael Wayne Goodman, and Emily M. Bender. 2019. Neural text genera- tion from rich semantic representations. In Proceed- ings of the 2019 Conference on the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies (NAACL- HLT)), Minneapolis, Minnesota.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "From Treebank to Propbank",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2002,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "1989--1993",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Kingsbury and Martha Palmer. 2002. From Tree- bank to Propbank. In LREC, pages 1989-1993. Cite- seer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Neural AMR: Sequence-to-sequence models for parsing and generation",
"authors": [
{
"first": "Ioannis",
"middle": [],
"last": "Konstas",
"suffix": ""
},
{
"first": "Srinivasan",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "146--157",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1014"
]
},
"num": null,
"urls": [],
"raw_text": "Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: Sequence-to-sequence models for parsing and gener- ation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 146-157, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Annotating the Little Prince with Chinese AMRs",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Weiguang",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Lijun",
"middle": [],
"last": "Bu",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th Linguistic Annotation Workshop held in conjunction with ACL 2016",
"volume": "",
"issue": "",
"pages": "7--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bin Li, Yuan Wen, Weiguang Qu, Lijun Bu, and Nian- wen Xue. 2016. Annotating the Little Prince with Chinese AMRs. In Proceedings of the 10th Linguis- tic Annotation Workshop held in conjunction with ACL 2016 (LAW-X 2016), pages 7-15.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "AMR parsing as graph prediction with latent alignment",
"authors": [
{
"first": "Chunchuan",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "397--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chunchuan Lyu and Ivan Titov. 2018. AMR parsing as graph prediction with latent alignment. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 397-407.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Text generation and systemic-functional linguistics: experiences from English and Japanese",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Matthiessen",
"suffix": ""
},
{
"first": "John A",
"middle": [],
"last": "Bateman",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Matthiessen and John A Bateman. 1991. Text generation and systemic-functional linguistics: ex- periences from English and Japanese. Pinter Pub- lishers.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Neural semantic parsing by character-based translation: Experiments with abstract meaning representations",
"authors": [
{
"first": "Rik",
"middle": [],
"last": "Van Noord",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics in the Netherlands Journal",
"volume": "7",
"issue": "",
"pages": "93--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rik van Noord and Johan Bos. 2017. Neural semantic parsing by character-based translation: Experiments with abstract meaning representations. Computa- tional Linguistics in the Netherlands Journal, 7:93- 108.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "LinGO Redwoods",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Research on Language and Computation",
"volume": "2",
"issue": "4",
"pages": "575--596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Oepen, Dan Flickinger, Kristina Toutanova, and Christopher D Manning. 2004. LinGO Red- woods. Research on Language and Computation, 2(4):575-596.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Discriminant-based MRS banking",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Jan",
"middle": [
"Tore"
],
"last": "L\u00f8nning",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "1250--1255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Oepen and Jan Tore L\u00f8nning. 2006. Discriminant-based MRS banking. In Proceedings of the 5th International Conference on Language Resources and Evaluation, pages 1250-1255.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The new Propbank: Aligning Propbank with AMR through POS unification",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Tim O'gorman",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Katie",
"middle": [],
"last": "Bonn",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Conger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gung",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim O'Gorman, Sameer Pradhan, Martha Palmer, Ju- lia Bonn, Katie Conger, and James Gung. 2018. The new Propbank: Aligning Propbank with AMR through POS unification. In Proceedings of the Eleventh International Conference on Language Re- sources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Addressing the data sparsity issue in neural AMR parsing",
"authors": [
{
"first": "Xiaochang",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Chuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "366--375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaochang Peng, Chuan Wang, Daniel Gildea, and Ni- anwen Xue. 2017. Addressing the data sparsity is- sue in neural AMR parsing. In Proceedings of the 15th Conference of the European Chapter of the As- sociation for Computational Linguistics: Volume 1, Long Papers, pages 366-375, Valencia, Spain. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Parsing English into abstract meaning representation using syntaxbased machine translation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Pust",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1143--1154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, and Jonathan May. 2015. Parsing English into abstract meaning representation using syntax- based machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1143-1154.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Modeling quantification and scope in abstract meaning representations",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Ken",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First International Workshop on Designing Meaning Representations",
"volume": "",
"issue": "",
"pages": "28--33",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3303"
]
},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky, Ken Lai, and Nianwen Xue. 2019. Modeling quantification and scope in abstract mean- ing representations. In Proceedings of the First In- ternational Workshop on Designing Meaning Repre- sentations, pages 28-33, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "SemBleu: A robust metric for AMR parsing evaluation",
"authors": [
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4547--4552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linfeng Song and Daniel Gildea. 2019. SemBleu: A robust metric for AMR parsing evaluation. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4547- 4552.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A graph-to-sequence model for AMRto-text generation",
"authors": [
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1616--1626",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for AMR- to-text generation. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1616- 1626, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Neural headline generation on abstract meaning representation",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Sho Takase",
"suffix": ""
},
{
"first": "Naoaki",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1054--1059",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. 2016. Neural head- line generation on abstract meaning representation. In Proceedings of the 2016 conference on empiri- cal methods in natural language processing, pages 1054-1059.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A transition-based algorithm for AMR parsing",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "366--375",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1040"
]
},
"num": null,
"urls": [],
"raw_text": "Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015. A transition-based algorithm for AMR pars- ing. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 366-375, Denver, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Not an interlingua, but close: Comparison of English AMRs to Chinese and Czech",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Zdenka",
"middle": [],
"last": "Uresova",
"suffix": ""
},
{
"first": "Xiuhong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "14",
"issue": "",
"pages": "1765--1772",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue, Ondrej Bojar, Jan Hajic, Martha Palmer, Zdenka Uresova, and Xiuhong Zhang. 2014. Not an interlingua, but close: Comparison of English AMRs to Chinese and Czech. In LREC, volume 14, pages 1765-1772. Reykjavik, Iceland.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "AMR parsing as sequence-tograph transduction",
"authors": [
{
"first": "Sheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xutai",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "80--94",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019. AMR parsing as sequence-to- graph transduction. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 80-94, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"uris": null,
"text": "An AMR for (1) or (2)",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "The three-stage decoding/encoding processes # Syntactic rules Start <-Node Node <-'(' Symbol Concept? Reln * ')' Concept <-'/' Atom? Reln <-Role Algn? (Node / Atom)? Atom <-(String / Symbol) Algn?",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "Syntactic and lexical rules of PENMAN ('a', [ ('/', 'alpha'), (':ARG0', ('b', [ ('/', 'beta')])), (':ARG0-of', ('g', [ ('/', 'gamma'), (':ARG1', 'b')]))])",
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"uris": null,
"text": "Tree structure for the AMR inFig. 2('a', [('a', ':instance', 'alpha'), ('a', ':ARG0', 'b'), ('b', ':instance', 'beta'), ('g', ':ARG0', 'a'), ('g', ':instance', 'gamma'), ('g', ':ARG1', 'b')])",
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"uris": null,
"text": "Graph structure for the AMR inFig. 2",
"type_str": "figure"
},
"FIGREF6": {
"num": null,
"uris": null,
"text": "', ':instance', 'beta'): [Pop], ('g', ':ARG0', 'a'):",
"type_str": "figure"
},
"FIGREF7": {
"num": null,
"uris": null,
"text": "', ':instance', 'pool')]) >>> print(penman.encode(g)) (s / swim-01 :ARG0 (i / i) :location (p / pool)) Example of using Penman's Python API for graph construction",
"type_str": "figure"
},
"FIGREF8": {
"num": null,
"uris": null,
"text": "from penman.models.amr import model >>> g = penman.decode( ... '(s / swim-01' ... ' :ARG0 (i / i)' ... ' :stroke (b / butterfly))') >>> model.errors(g) {('s', ':stroke', 'b'): ['invalid role']} Example of using Penman's Python API for checking model compliance",
"type_str": "figure"
},
"FIGREF9": {
"num": null,
"uris": null,
"text": "Example of using Penman's Python API to remove concept senses",
"type_str": "figure"
},
"FIGREF10": {
"num": null,
"uris": null,
"text": "echo '{e: x:pron[] > _1:pronoun_q[BV x] > e:_swim_v_1[ARG1 x]}' \\ > | delphin convert --from eds \\ Example of EDS in Penman notation",
"type_str": "figure"
}
}
}
}