{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:40:59.765324Z" }, "title": "Syntactic Search by Example", "authors": [ { "first": "Micah", "middle": [], "last": "Shlain", "suffix": "", "affiliation": { "laboratory": "", "institution": "Allen Institute for AI", "location": { "settlement": "Tel Aviv", "country": "Israel" } }, "email": "micahs@allenai.org" }, { "first": "Hillel", "middle": [], "last": "Taub-Tabib", "suffix": "", "affiliation": { "laboratory": "", "institution": "Allen Institute for AI", "location": { "settlement": "Tel Aviv", "country": "Israel" } }, "email": "hillelt@allenai.org" }, { "first": "Shoval", "middle": [], "last": "Sadde", "suffix": "", "affiliation": { "laboratory": "", "institution": "Allen Institute for AI", "location": { "settlement": "Tel Aviv", "country": "Israel" } }, "email": "shovals@allenai.org" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "", "affiliation": { "laboratory": "", "institution": "Allen Institute for AI", "location": { "settlement": "Tel Aviv", "country": "Israel" } }, "email": "yoavg@allenai.org" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a system that allows a user to search a large linguistically annotated corpus using syntactic patterns over dependency graphs. In contrast to previous attempts to this effect, we introduce a lightweight query language that does not require the user to know the details of the underlying syntactic representations, and instead to query the corpus by providing an example sentence coupled with simple markup. Search is performed at an interactive speed due to an efficient linguistic graphindexing and retrieval engine. This allows for rapid exploration, development and refinement of syntax-based queries. We demonstrate the system using queries over two corpora: the English wikipedia, and a collection of English pubmed abstracts. 
A demo of the wikipedia system is available at: https://allenai.github.io/spike/ .", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We present a system that allows a user to search a large linguistically annotated corpus using syntactic patterns over dependency graphs. In contrast to previous attempts to this effect, we introduce a lightweight query language that does not require the user to know the details of the underlying syntactic representations, and instead to query the corpus by providing an example sentence coupled with simple markup. Search is performed at an interactive speed due to an efficient linguistic graph-indexing and retrieval engine. This allows for rapid exploration, development and refinement of syntax-based queries. We demonstrate the system using queries over two corpora: the English wikipedia, and a collection of English pubmed abstracts. A demo of the wikipedia system is available at: https://allenai.github.io/spike/ .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The introduction of neural-network based models into NLP brought with it a substantial increase in syntactic parsing accuracy. We can now produce accurate syntactically annotated corpora at scale. However, the produced representations themselves remain opaque to most users, and require substantial linguistic expertise to use. Patterns over syntactic dependency graphs 1 can be very effective for interacting with linguistically-annotated corpora, either for linguistic retrieval or for information and relation extraction (Fader et al., 2011; Akbik et al., 2014; Valenzuela-Esc\u00e1rcega et al., 2015 , 2018 ). However, their use in mainstream NLP as represented in ACL and affiliated venues remains limited. We argue that this is due to the high barrier of entry associated with the application of such patterns. 
Our aim is to lower this barrier and also allow linguistically-na\u00efve users to effectively experiment with and develop syntactic patterns. Our proposal rests on two components:", "cite_spans": [ { "start": 524, "end": 544, "text": "(Fader et al., 2011;", "ref_id": "BIBREF4" }, { "start": 545, "end": 564, "text": "Akbik et al., 2014;", "ref_id": "BIBREF1" }, { "start": 565, "end": 598, "text": "Valenzuela-Esc\u00e1rcega et al., 2015", "ref_id": "BIBREF16" }, { "start": 599, "end": 605, "text": ", 2018", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) A light-weight query language that does not require in-depth familiarity with the underlying syntactic representation scheme, and instead lets the user specify their intent via a natural language example and lightweight markup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) A fast, near-real-time response time due to efficient indexing, allowing for rapid experimentation. The query specifies a sentence (Paul was a founder of Microsoft) and three named captures: founder, t and entity. The founder and entity captures should have the same entity-type as the corresponding sentence words (PERSON for Paul and ORGANIZATION for Microsoft, indicated by [e]), and the t capture should have the same word form as the one in the sentence (founder) (indicated by [w] ). The syntactic relation between the captures should be the same as the one in the sentence, and the founder and entity captures should be expanded (indicated by ). The query is translated into a graph-based query, which is shown below the query, each graph-node associated with the query word that triggered it. The system also returned a list of matched sentences. The matched tokens for each capture group (founder, t and entity) are highlighted. 
The user can then issue another query, browse the results, or download all the results as a tab-separated file. ", "cite_spans": [ { "start": 487, "end": 490, "text": "[w]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While several rich query languages over linguistic tree and graph structures exist, they require a substantial amount of expertise to use. 2 The user needs to be familiar not only with the syntax of the query language itself, but to also be intimately familiar with the specific syntactic scheme used in the underlying linguistic annotations. For example, in Odin (Valenzuela-Esc\u00e1rcega et al., 2015), a dedicated language for pattern-based information extraction, the same rule as above is expressed as: The Spacy NLP toolkit 3 also includes a pattern matcher over dependency trees, using a JSON-based syntax:", "cite_spans": [ { "start": 138, "end": 139, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Existing syntactic-query languages", "sec_num": "2" }, { "text": "[{\"PATTERN\": {\"ORTH\": \"founder\"}, \"SPEC\": {\"NODE_NAME\": \"t\"}}, {\"PATTERN\": {\"ENT_TYPE\": \"PERSON\"}, \"SPEC\": {\"NODE_NAME\": \"founder\", \"NBOR_RELOP\": \">nsubj\", 2 We focus here on systems that are based on dependency syntax, but note that many systems and query languages exist also for constituency-trees, e.g., TGREP/TGREP2, TigerSearch (Lezius et al., 2002) , the Linguist's Search Engine (Resnik and Elkiss, 2005) , Fangorn (Ghodke and Bird, 2012 ).", "cite_spans": [ { "start": 157, "end": 158, "text": "2", "ref_id": null }, { "start": 335, "end": 356, "text": "(Lezius et al., 2002)", "ref_id": null }, { "start": 387, "end": 412, "text": "(Resnik and Elkiss, 2005)", "ref_id": "BIBREF12" }, { "start": 423, "end": 445, "text": "(Ghodke and Bird, 2012", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Existing syntactic-query languages", "sec_num": "2" }, {
"text": "3 https://spacy.io/ \"NBOR_NAME\": \"t\"}}, {\"PATTERN\": {\"ENT_TYPE\": \"ORGANIZATION\"}, \"SPEC\": {\"NODE_NAME\": \"entity\", \"NBOR_RELOP\": \">nmod\", \"NBOR_NAME\": \"t\"}}]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Existing syntactic-query languages", "sec_num": "2" }, { "text": "Stanford's Core-NLP package (Manning et al., 2014) includes a dependency matcher called SEM-GREX, 4 which uses a more concise syntax: The dep search system 5 from Turku university (Luotolahti et al., 2017) is designed to provide a rich and expressive syntactic search over large parsebanks. They use a lightweight syntax and support working against pre-indexed data, though they do not support named captures of specific nodes.", "cite_spans": [ { "start": 28, "end": 50, "text": "(Manning et al., 2014)", "ref_id": "BIBREF8" }, { "start": 180, "end": 205, "text": "(Luotolahti et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Existing syntactic-query languages", "sec_num": "2" }, { "text": "While the different systems vary in the verboseness and complexity of their own syntax (indeed, the Turku system's syntax is rather minimal), they all require the user to explicitly specify the dependency relations between the tokens, making it challenging and error-prone to write, read or edit. The challenge grows substantially as the complexity of the pattern increases beyond the very simple example we show here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PERSON nmod ORG", "sec_num": null }, { "text": "Closest in spirit to our proposal, the PROP-MINER system of Akbik et al. (2013) which lets the user enter a natural language sentence, mark spans as subject, predicate and object, and have a rule be generated automatically. However, the system is restricted to ternary subject-predicate-object patterns. 
Furthermore, the generated pattern is written in a path-expression SQL variant (SerQL, (Broekstra and Kampman, 2003) ), which the user then needs to manually edit. For example, our query above is translated to: All these systems require the user to closely interact with linguistic concepts and explicitly specify graph-structures, posing a high barrier of entry for non-expert users. They also slow down expert users: formulating a complex query may require a few minutes. Furthermore, many of these query languages are designed to match against a provided sentence, and are not indexable. This requires iterating over all sentences in the corpus attempting to match each one, requiring substantial time to obtain matches from large corpora.", "cite_spans": [ { "start": 60, "end": 79, "text": "Akbik et al. (2013)", "ref_id": "BIBREF0" }, { "start": 391, "end": 420, "text": "(Broekstra and Kampman, 2003)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "PERSON nmod ORG", "sec_num": null }, { "text": "Augustinus et al. (2012) describe a system for syntactic search by example, which retrieves tree fragments and which is completely UI-based. Our system takes a similar approach, but replaces the UI-only interface with an expressive textual query language, allowing for richer queries. 
We also return node matches rather than tree fragments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PERSON nmod ORG", "sec_num": null }, { "text": "We propose a substantially simplified language that has a minimal syntax and does not require the user to know the underlying syntactic schema upfront (though it does not completely hide it from the user, allowing for exposure over time, and allowing control for expert users who understand the underlying syntactic annotation scheme).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Search by Example", "sec_num": "3" }, { "text": "The query language is designed to be linguistically expressive, simple to use and amenable to efficient indexing and querying. The simplicity and indexing requirements do come at a cost, though: we purposefully do not support some of the features available in existing languages. We expect these features to correlate with expertise (an example of a query feature we do not support is quantification, i.e., \"nodes a and b should be connected via a path that includes one or more 'conj' edges\"). At the same time, we also seamlessly support expressing arbitrary sub-graphs, a task which is either challenging or impossible with many of the other systems. The language is based on the following principles:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Search by Example", "sec_num": "3" }, { "text": "(1) The core of the query is a natural language sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Search by Example", "sec_num": "3" }, { "text": "(2) A user can specify the tokens of interest and constraints on them via lightweight markup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Search by Example", "sec_num": "3" }, { "text": "(3) While expert users can specify complex token constraints, effective constraints can be specified by pulling values from the query words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Search by Example", "sec_num": "3" }, { "text": "The required syntactic knowledge from the user, both in terms of the syntax of the query language itself and in terms of the underlying linguistic formalism, remains minimal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntactic Search by Example", "sec_num": "3" }, { "text": "The language is structured around between-token relations and within-token constraints, where tokens can be captured.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Query Formalism", "sec_num": "4" }, { "text": "Formally, our query G = (V, E) is a labeled directed graph, where each node v i \u2208 V corresponds to a token, and a labeled edge e = (v i , v j , \u2113) \u2208 E between the nodes corresponds to a between-token syntactic constraint. 
This query graph is then matched against parsed target sentences, looking for a correspondence between query nodes and target-sentence nodes that adhere to the token and edge constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Query Formalism", "sec_num": "4" }, { "text": "For example, the following graph specifies three tokens, where the first and second are connected via an 'xcomp' relation, and the second and third via a 'dobj' relation. The first token is unconstrained, while the second token must have the POS-tag of VB, and the third token must be the word home.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Query Formalism", "sec_num": "4" }, { "text": "Sentences whose syntactic graph has a subgraph that aligns to the query graph and adheres to the constraints will be considered as matches. Examples of such matching sentences are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Query Formalism", "sec_num": "4" }, { "text": "-John wanted w to go v home h after lunch.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Query Formalism", "sec_num": "4" }, { "text": "-It was a place she decided w to call v her home h . The w, v and h marks on the nodes denote named captures. When matching a sentence, the sentence tokens corresponding to the graph-nodes will be bound to variables named 'w', 'v' and 'h', in our case {w=wanted, v=go, h=home} for the first sentence and {w=decided, v=call, h=home} for the second. Graph nodes can also be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Query Formalism", "sec_num": "4" }, { "text": "unnamed, in which case they must match sentence tokens but will not bind to any variable. 
The graph structure is not meant to be specified by hand, 7 but rather to be inferred from the example-based query language described in the next section (an example query resulting in this graph is \"They w:wanted to v:[tag]go h:[word]home\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Query Formalism", "sec_num": "4" }, { "text": "Between-token constraints correspond to labeled directed edges in the sentence's syntactic graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Query Formalism", "sec_num": "4" }, { "text": "Within-token constraints correspond to properties of individual sentence tokens. 8 For each property we specify a list of possible values (a disjunction) and if lists for several properties are provided, we require all of them to hold (a conjunction). For example, in the constraint tag=VBD|VBZ&lemma=buy we look for tokens with POS-tag of either VBD or VBZ, and the lemma buy. The list of possible values for a property can be specified as a pipe-separated list (tag=VBD|VBZ|VBN) or as a regular expression (tag=/VB[DZN]/).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph Query Formalism", "sec_num": "4" }, { "text": "The graph language described above is expressive enough to support many interesting queries, but it is also very tedious to specify query graphs G, especially for non-expert users. We propose a simple syntax that allows one to easily specify a graph query G (constrained nodes connected by labeled edges) using a textual query q that takes the form of an example sentence and lightweight markup. Let s = w 1 , ..., w n be a proper English sentence. Let D be its dependency graph, with nodes w i and labeled edges (w i , w j , \u2113). 
A corresponding textual query q takes the form q = q 1 , ..., q n , where each q i is either a word q i = w i , or a marked word q i = m(w i ). Each of these corresponds to a node v q i in the query graph above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example-based User-friendly Query Language", "sec_num": "5" }, { "text": "Let m be the set of marked query words, and m + be a minimal connected subgraph of D that includes all the words in m. When translating q to G, each marked word w i \u2208 m is translated to a named query graph node v q i with the appropriate restriction. The additional words w j \u2208 m + \\ m are translated to unrestricted, unnamed nodes v q j . We add a query graph edge", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example-based User-friendly Query Language", "sec_num": "5" }, { "text": "(v q i , v q j , \u2113) for each pair in V for which (w i , w j , \u2113) \u2208 D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example-based User-friendly Query Language", "sec_num": "5" }, { "text": "Further query simplifications. Consider the marked word h:[word=home] home. The constraint is redundant with the word. In such cases we allow the user to drop the value, which is then taken from the corresponding property of the query word. This allows us to replace the query: Finally, capture names can be omitted, in which case an automatic name is generated based on the corresponding word: Anchors. In some cases we want to add a node to the graph, without an explicit capture. In such cases we can use the anchor $ ($John). 
These are interpreted as having a default constraint of [w], which can be overridden by providing an alternative constraint ($[e]John), or an empty one ($[]John).", "cite_spans": [ { "start": 586, "end": 589, "text": "[w]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Example-based User-friendly Query Language", "sec_num": "5" }, { "text": "Expansions. When matching a query against a sentence, the graph nodes bind to sentence words. Sometimes, we may want the match to be expanded to a larger span of the sentence. For example, when matching a word which is part of an entity, we often wish to capture the entire entity rather than the word. This is achieved by prefixing the term with the \"expansion diamond\". The default behavior is to expand the match from the current word to the named entity boundary or NP-chunk that surrounds it, if it exists. We are currently investigating the option of providing additional expansion strategies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example-based User-friendly Query Language", "sec_num": "5" }, { "text": "Summary. To summarize the query language from the point of view of the user: the user starts with a sentence w 1 , ..., w n , and marks some of the words for inclusion in the query graph. For each marked word, the user may specify a name, and optional constraints. The user query is then translated to a graph query as described above. The results list highlights the words corresponding to the marked query words. The user can choose for the results to highlight entire entities rather than single words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example-based User-friendly Query Language", "sec_num": "5" }, { "text": "An important aspect of the system is its interactivity. 
Users enter queries by writing a sentence and adding markup on some words, and can then refine them following feedback from the environment, as we demonstrate with a walk-through example.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interactive Pattern Authoring", "sec_num": "6" }, { "text": "A user interested in people who obtained degrees from higher education institutions may issue the following query: Here, the person in the \"subj\" capture and the institution in the \"inst\" capture are placeholders for items to be captured, so the user uses generic names and leaves them unconstrained. The \"degree\" (\"d\") capture should match exactly, as the user specified the \"w\" constraint (exact word match). When pressing Enter, the user is then shown the resulting query-graph and a result list. The user can then refine their queries based on either the query graph, the result list, or both. For the above query, the graph is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interactive Pattern Authoring", "sec_num": "6" }, { "text": "Note that the query graph associates each graph node with the query word that triggered it. The word \"obtained\" resulted in a graph node even though it was not marked by the user as a capture. The user makes a note to themselves to go back to this word later. The user also notices that the word \"from\" is not part of the query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interactive Pattern Authoring", "sec_num": "6" }, { "text": "Looking at the result list, things look weird:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interactive Pattern Authoring", "sec_num": "6" }, { "text": "Maybe this is because the word from is not in the graph? Indeed, adding a non-capturing exact-word anchor on \"from\" solves this issue: subj:John obtained his d:[w]degree $from inst:Harvard However, the resulting list contains many non-names in the subj capture. 
Trying to resolve this, the user adds an \"entity-type\" constraint to the subj capture: These are the kind of results the user expected, but now they are curious about degrees obtained by females, and their representation in the Wikipedia corpus. Adding the pronoun to the query, the user then issues the following two queries, saving the result-sets from each one as a CSV for further comparative analysis. Our user now worries that they may be missing some results by focusing on the word degree. Maybe other things can be obtained from a university? The user then sets an exact-word constraint on \"Harvard\", adds a lemma constraint to \"obtain\" and clears the constraint from \"degree\": Over a pubmed corpus, annotated with the SciSpacy (Neumann et al., 2019) ", "cite_spans": [ { "start": 998, "end": 1020, "text": "(Neumann et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Interactive Pattern Authoring", "sec_num": "6" }, { "text": "The indexing is handled by Lucene. 10 We currently use Odinson (Valenzuela-Esc\u00e1rcega et al., 2020) , 11 an open-source Lucene-based query engine developed at Lum.ai, as a successor of Odin (Valenzuela-Esc\u00e1rcega et al., 2015) , that allows to index syntactic graphs and issue efficient path queries on them. We translate our queries into an Odinson path query that corresponds to a longest path in our query graph. We then iterate over the returned Odinson matches and verify the constraints that were not on the path. Conceptually, the Odinson system works by first using Lucene's reverse-index for retrieving sentences for which there is a token matching each of the specified token-constraints, and then verifying the syntactic between-token constraints. To improve the Lucene-query selectivity, tokens are indexed with incoming and outgoing syntactic edge label information, which is incorporated as additional token-constraints to the Lucene engine. 
The system easily supports millions of sentences, returning results at interactive speeds.", "cite_spans": [ { "start": 35, "end": 37, "text": "10", "ref_id": null }, { "start": 63, "end": 98, "text": "(Valenzuela-Esc\u00e1rcega et al., 2020)", "ref_id": "BIBREF15" }, { "start": 101, "end": 103, "text": "11", "ref_id": null }, { "start": 189, "end": 224, "text": "(Valenzuela-Esc\u00e1rcega et al., 2015)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "8" }, { "text": "We introduce a simple query language that allows users to pose complex syntax-based queries and obtain results at interactive speed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "9" }, { "text": "A search interface over Wikipedia sentences is available at https://allenai.github.io/spike/. We intend to release the code as open source, as well as providing hosted open access to a PubMed-based corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "9" }, { "text": "In this paper, we very loosely use the term \"syntactic\" to refer to a linguistically motivated graph-based annotation over a piece of text, where the graph is directed and there is a path between any two nodes. 
While this usually implies syntactic dependency trees or graphs (and indeed, our system currently indexes Enhanced English Universal Dependency graphs (Nivre et al., 2016; Schuster and Manning, 2016)), the system can also work with more semantic annotation schemes, e.g., (Oepen et al., 2015), given the availability of an accurate enough parser for them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://nlp.stanford.edu/software/tregex.shtml 5 http://bionlp-www.utu.fi/dep_search/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Indeed, we currently do not even expose a textual representation of the graph. 8 Currently supported properties are word-form (word), lemma (lemma), pos-tag (tag) or entity type (entity). Additional types can be easily added, provided that we have suitable linguistic annotators for them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The list could be even more comprehensive had we selected additional degree words and obtain words, and considered also additional re-phrasings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://lucene.apache.org 11 https://github.com/lum-ai/odinson/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the team at LUM.ai and the University of Arizona, in particular Mihai Surdeanu, Marco Valenzuela-Esc\u00e1rcega, Gus Hahn-Powell and Dane Bell, for fruitful discussion and their work on the Odinson system. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 
802774 (iEXTRACT).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "The result list is rather short, suggesting that either Harvard or obtain are too restrictive. The user seeks to expand the \"obtain\" node's vocabulary, adding back the exact word constraint on \"degree\" while removing the one from \"obtain\": Looking at the result list in the o capture, the user chooses the lemmas \"receive, complete, earn, obtain, get\", adds them to the o constraint, and removes the degree constraint. The returned result-set is now much longer, and we select additional terms for the degree slot and remove the institution word constraint, resulting in the final query: The result is a list of person names earning degrees from institution, and the entire list can be downloaded as a tab-separated file which includes the named captures as well as the source sentences (over Wikipedia, this list has 6197 rows). 9 The query can also be further refined to capture which degree was obtained, e.g.: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "To whet the reader's appetite, here is a sample of additional queries, showing different potential", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional Query Examples", "sec_num": "7" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Propminer: A workflow for interactive information extraction and exploration using dependency trees", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Oresti", "middle": [], "last": "Konomi", "suffix": "" }, { "first": "Michail", "middle": [], "last": "Melnikov", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "157--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, 
Oresti Konomi, and Michail Melnikov. 2013. Propminer: A workflow for interactive information extraction and exploration using dependency trees. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 157-162, Sofia, Bulgaria. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Exploratory relation extraction in large text corpora", "authors": [ { "first": "Alan", "middle": [], "last": "Akbik", "suffix": "" }, { "first": "Thilo", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Boden", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "2087--2096", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Akbik, Thilo Michael, and Christoph Boden. 2014. Exploratory relation extraction in large text corpora. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2087-2096.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Example-based treebank querying", "authors": [ { "first": "Liesbeth", "middle": [], "last": "Augustinus", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Vandeghinste", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Van Eynde", "suffix": "" } ], "year": 2012, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liesbeth Augustinus, Vincent Vandeghinste, and Frank Van Eynde. 2012. Example-based treebank querying. 
In LREC.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Serql: A second generation rdf query language", "authors": [ { "first": "Jeen", "middle": [], "last": "Broekstra", "suffix": "" }, { "first": "Arjohn", "middle": [], "last": "Kampman", "suffix": "" } ], "year": 2003, "venue": "Proc. SWAD-Europe Workshop on Semantic Web Storage and Retrieval", "volume": "", "issue": "", "pages": "13--14", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeen Broekstra and Arjohn Kampman. 2003. Serql: A second generation rdf query language. In Proc. SWAD-Europe Workshop on Semantic Web Storage and Retrieval, pages 13-14.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Identifying relations for open information extraction", "authors": [ { "first": "Anthony", "middle": [], "last": "Fader", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Soderland", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "1535--1545", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information ex- traction. In Proceedings of the conference on empir- ical methods in natural language processing, pages 1535-1545. Association for Computational Linguis- tics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Fangorn: A system for querying very large treebanks", "authors": [ { "first": "Sumukh", "middle": [], "last": "Ghodke", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" } ], "year": 2012, "venue": "Proceedings of COLING 2012: Demonstration Papers", "volume": "", "issue": "", "pages": "175--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sumukh Ghodke and Steven Bird. 2012. Fangorn: A system for querying very large treebanks. 
In Proceedings of COLING 2012: Demonstration Papers, pages 175-182, Mumbai, India. The COLING 2012 Organizing Committee.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Dep search: Efficient search tool for large dependency parsebanks", "authors": [ { "first": "Juhani", "middle": [], "last": "Luotolahti", "suffix": "" }, { "first": "Jenna", "middle": [], "last": "Kanerva", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Nordic Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "255--258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juhani Luotolahti, Jenna Kanerva, and Filip Ginter. 2017. Dep search: Efficient search tool for large dependency parsebanks. In Proceedings of the 21st Nordic Conference on Computational Linguistics, pages 255-258, Gothenburg, Sweden. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "Steven", "middle": [ "J" ], "last": "Bethard", "suffix": "" }, { "first": "David", "middle": [], "last": "Mc-Closky", "suffix": "" } ], "year": 2014, "venue": "Association for Computational Linguistics (ACL) System Demonstrations", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. 
In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Scispacy: Fast and robust models for biomedical natural language processing", "authors": [ { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "King", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. Scispacy: Fast and robust models for biomedical natural language processing.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Universal dependencies v1: A multilingual treebank collection", "authors": [ { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Hajic", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Sampo", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "", "middle": [], "last": "Silveira", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "1659--1666", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. 
Universal dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "SemEval 2015 task 18: Broad-coverage semantic dependency parsing", "authors": [ { "first": "Stephan", "middle": [], "last": "Oepen", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Kuhlmann", "suffix": "" }, { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" }, { "first": "Silvie", "middle": [], "last": "Cinkov\u00e1", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "915--926", "other_ids": { "DOI": [ "10.18653/v1/S15-2153" ] }, "num": null, "urls": [], "raw_text": "Stephan Oepen, Marco Kuhlmann, Yusuke Miyao, Daniel Zeman, Silvie Cinkov\u00e1, Dan Flickinger, Jan Haji\u010d, and Zde\u0148ka Ure\u0161ov\u00e1. 2015. SemEval 2015 task 18: Broad-coverage semantic dependency parsing. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 915-926, Denver, Colorado. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The Linguist's Search Engine: An overview", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Elkiss", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Interactive Poster and Demonstration Sessions", "volume": "", "issue": "", "pages": "33--36", "other_ids": { "DOI": [ "10.3115/1225753.1225762" ] }, "num": null, "urls": [], "raw_text": "Philip Resnik and Aaron Elkiss. 2005. The Linguist's Search Engine: An overview. 
In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 33-36, Ann Arbor, Michigan. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Enhanced english universal dependencies: An improved representation for natural language understanding tasks", "authors": [ { "first": "Sebastian", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "2371--2378", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Schuster and Christopher D Manning. 2016. Enhanced english universal dependencies: An improved representation for natural language understanding tasks. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2371-2378.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Large-scale automated machine reading discovers new cancer-driving mechanisms", "authors": [ { "first": "A", "middle": [], "last": "Marco", "suffix": "" }, { "first": "\u00d6zg\u00fcn", "middle": [], "last": "Valenzuela-Esc\u00e1rcega", "suffix": "" }, { "first": "Gus", "middle": [], "last": "Babur", "suffix": "" }, { "first": "Dane", "middle": [], "last": "Hahn-Powell", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Bell", "suffix": "" }, { "first": "Enrique", "middle": [], "last": "Hicks", "suffix": "" }, { "first": "Xia", "middle": [], "last": "Noriega-Atala", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Emek", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Clayton T", "middle": [], "last": "Demir", "suffix": "" }, { "first": "", "middle": [], "last": "Morrison", "suffix": "" } ], "year": 
2018, "venue": "Database", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco A Valenzuela-Esc\u00e1rcega, \u00d6zg\u00fcn Babur, Gus Hahn-Powell, Dane Bell, Thomas Hicks, Enrique Noriega-Atala, Xia Wang, Mihai Surdeanu, Emek Demir, and Clayton T Morrison. 2018. Large-scale automated machine reading discovers new cancer-driving mechanisms. Database, 2018.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Odinson: A fast rule-based information extraction framework", "authors": [ { "first": "A", "middle": [], "last": "Marco", "suffix": "" }, { "first": "Gus", "middle": [], "last": "Valenzuela-Esc\u00e1rcega", "suffix": "" }, { "first": "Dane", "middle": [], "last": "Hahn-Powell", "suffix": "" }, { "first": "", "middle": [], "last": "Bell", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco A. Valenzuela-Esc\u00e1rcega, Gus Hahn-Powell, and Dane Bell. 2020. Odinson: A fast rule-based information extraction framework. In Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020), Marseille, France. 
European Language Resources Association (ELRA).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A domain-independent rule-based framework for event extraction", "authors": [ { "first": "A", "middle": [], "last": "Marco", "suffix": "" }, { "first": "Gus", "middle": [], "last": "Valenzuela-Esc\u00e1rcega", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Hahn-Powell", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "", "middle": [], "last": "Hicks", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ACL-IJCNLP 2015 System Demonstrations", "volume": "", "issue": "", "pages": "127--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco A Valenzuela-Esc\u00e1rcega, Gus Hahn-Powell, Mihai Surdeanu, and Thomas Hicks. 2015. A domain-independent rule-based framework for event extraction. In Proceedings of ACL-IJCNLP 2015 System Demonstrations, pages 127-132.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "next page) shows the interface of our web-based system. The user issued the query:" }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Syntactic Search System" }, "FIGREF4": { "num": null, "uris": null, "type_str": "figure", "text": "John w:wanted to v:[tag=VB]go h:[word=home]home with: John w:wanted to v:[tag]go h:[word]home This further drives the "by example" agenda, as the user does not need to know what the lemma, entity-type or POS-tag of a word are in order to specify them as a constraint. 
Full property names can be replaced with their shorthands w,l,t,e: John w:wanted to v:[t]go h:[w]home" }, "FIGREF5": { "num": null, "uris": null, "type_str": "figure", "text": "John :wanted to :[t]go :[w]home" }, "FIGREF6": { "num": null, "uris": null, "type_str": "figure", "text": "subj:John obtained his d:[w]degree from inst:Harvard" }, "FIGREF7": { "num": null, "uris": null, "type_str": "figure", "text": "subj:[e]John :[l]obtained his d:degree $from inst:[w]Harvard Browsing the results, the d capture includes words such as \"BA, PhD, MBA, certificate\". But the use-cases." }, "TABREF1": { "text": "marking of a word takes the form: :word (unnamed capture) name:word (named capture) or name:[constraints]word , :[constraints]word . Consider the query:", "num": null, "content": "
John w:wanted to v:[tag=VB]go h:[word=home]home
corresponding to the above graph query. The
marked words are:
q2 = w:wanted           (unconstrained, name:w)
q4 = v:[tag=VB]go       (cnstr:tag=VB, name:v)
q5 = h:[word=home]home  (cnstr:word=home, name:h)
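As a rough illustration (not the authors' implementation), the markup of a single captured token above — an optional capture name, an optional bracketed constraint list, then the example word, as in name:[constraints]word — can be split apart with one regular expression; the names TOKEN and parse_token are ours, and anchor tokens such as $from are deliberately out of scope:

```python
import re

# Sketch only: parse one token of the by-example markup into its parts.
# Both the capture name and the [constraints] block are optional.
TOKEN = re.compile(r'^(?:(?P<name>\w*):)?(?:\[(?P<constraints>[^\]]*)\])?(?P<word>\w+)$')

def parse_token(token):
    m = TOKEN.match(token)
    if m is None:
        return None
    # name == '' marks an unnamed capture; name is None when the token
    # carries no capture marker at all.
    return (m.group('name'), m.group('constraints'), m.group('word'))

# parse_token('w:wanted')          -> ('w', None, 'wanted')
# parse_token('v:[tag=VB]go')      -> ('v', 'tag=VB', 'go')
# parse_token(':[w]home')          -> ('', 'w', 'home')
# parse_token('home')              -> (None, None, 'home')
```

Under this reading, q2, q4 and q5 above are exactly the tokens for which parse_token reports a capture, matching the (constraint, name) pairs listed.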
", "html": null, "type_str": "table" }, "TABREF4": { "text": "Over wikipedia: p:[e]Sam $[l=win|receive]won an $Oscar. p:[e]Sam $[l=win|receive]won an $Oscar $for thing:something -$fish $such $as fish:salmon hero:[t]Spiderman $is a $superhero -I like kind:coconut $oil kind:coconut $oil is $used for purpose:eating", "num": null, "content": "", "html": null, "type_str": "table" } } } }