{
"paper_id": "I17-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:37:27.937836Z"
},
"title": "Domain-Adaptable Hybrid Generation of RDF Entity Descriptions",
"authors": [
{
"first": "Or",
"middle": [],
"last": "Biran",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "RDF ontologies provide structured data on entities in many domains and continue to grow in size and diversity. While they can be useful as a starting point for generating descriptions of entities, they often miss important information about an entity that cannot be captured as simple relations. In addition, generic approaches to generation from RDF cannot capture the unique style and content of specific domains. We describe a framework for hybrid generation of entity descriptions, which combines generation from RDF data with text extracted from a corpus, and extracts unique aspects of the domain from the corpus to create domain-specific generation systems. We show that each component of our approach significantly increases the satisfaction of readers with the text across multiple applications and domains.",
"pdf_parse": {
"paper_id": "I17-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "RDF ontologies provide structured data on entities in many domains and continue to grow in size and diversity. While they can be useful as a starting point for generating descriptions of entities, they often miss important information about an entity that cannot be captured as simple relations. In addition, generic approaches to generation from RDF cannot capture the unique style and content of specific domains. We describe a framework for hybrid generation of entity descriptions, which combines generation from RDF data with text extracted from a corpus, and extracts unique aspects of the domain from the corpus to create domain-specific generation systems. We show that each component of our approach significantly increases the satisfaction of readers with the text across multiple applications and domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "RDF ontologies are a wonderful source for generation: they feature standardized structure, are constantly expending and span many interesting domains. However, generation from RDF introduces two major difficulties. First, RDF contains relationships between entities but often lacks other important information about an entity (e.g., historical background and context) which is hard to capture with simple relations. Second, RDF data spans many domains, and presents the difficulty of handling specific domains in generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Generally speaking, there are three approaches: domain-specific approaches (with hand-written or other rules relevant to each domain), which are not scalable; generic approaches (generating in exactly the same way for all domains) which result in unnatural text and miss important content; and domain adaptation, which attempts to automatically transfer an approach from one domain to another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach aims to leverage the advantages of all three. We present a generic framework of generation meta-systems for RDF applications, which uses domain adaptation to create domainspecific systems. Biography and Company Description are examples of applications (an application is the description of RDF entities of a particular type), while Politician and Model are examples of domains within the Biography application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The reason our framework is able to adapt to new domains automatically is that it relies on hybrid concept-to-text (C2T) and text-to-text (T2T) generation: part of the generated text consists of messages that are created from structured data according to a generic recipe, while another part comes from messages extracted from a domain corpus. In addition, we use existing methods to extract paraphrases and discourse models from the domain corpus, which further refines how text is generated differently for each domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Generation from RDF data is not a new topic. Duboue and McKeown (2003) described a content selection approach for generation from RDF data. Sun and Mellish (2006) present a domain-independent approach for sentence generation from RDF triples. Duma and Klein (2013) propose an architecture for learning end-to-end generation systems from aligned RDF data and sampled generated text. End-to-end concept-totext systems were proposed by Galanis et al. (2009) , Androutsopoulos et al. (2013) and Cimiano et al. (2013) , among others. For a survey of the history of generation from semantic web data and its difficulties, see (Bouayad-Agha et al., 2014) .",
"cite_spans": [
{
"start": 45,
"end": 70,
"text": "Duboue and McKeown (2003)",
"ref_id": "BIBREF8"
},
{
"start": 140,
"end": 162,
"text": "Sun and Mellish (2006)",
"ref_id": "BIBREF20"
},
{
"start": 243,
"end": 264,
"text": "Duma and Klein (2013)",
"ref_id": "BIBREF9"
},
{
"start": 433,
"end": 454,
"text": "Galanis et al. (2009)",
"ref_id": "BIBREF10"
},
{
"start": 457,
"end": 486,
"text": "Androutsopoulos et al. (2013)",
"ref_id": "BIBREF0"
},
{
"start": 491,
"end": 512,
"text": "Cimiano et al. (2013)",
"ref_id": "BIBREF7"
},
{
"start": 620,
"end": 647,
"text": "(Bouayad-Agha et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Generation meta-systems which can be automatically adapted to a new domain have been explored in recent years. Angeli et al. (2010) learn to make decisions about content selection and (separately) template selection from an aligned corpus of database records and text describing them. Kondadadi et al. (2013) describe a framework that learns domain-specific templates, content selection, ordering and template selection from an aligned corpus. Both approaches rely on supervised learning from an aligned corpus of data and sample texts generated from the data, which is a rare resource that does not exist for most domains.",
"cite_spans": [
{
"start": 111,
"end": 131,
"text": "Angeli et al. (2010)",
"ref_id": "BIBREF1"
},
{
"start": 285,
"end": 308,
"text": "Kondadadi et al. (2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Other recent work has focused on domain adaptation for existing generation systems (as opposed to creating adaptable meta-systems). There has been work on adapting generated text for different user groups (Janarthanam and Lemon, 2010; Gkatzia et al., 2014) ; adapting summarization systems to new genres (Lloret and Boldrini, 2015) ; adapting dialog generation systems to new applications (Rieser and Lemon, 2011) and domains (Walker et al., 2007) ; and parameterizing existing handcrafted systems to increase the range of domains they can handle (Lukin et al., 2015) .",
"cite_spans": [
{
"start": 205,
"end": 234,
"text": "(Janarthanam and Lemon, 2010;",
"ref_id": "BIBREF13"
},
{
"start": 235,
"end": 256,
"text": "Gkatzia et al., 2014)",
"ref_id": "BIBREF11"
},
{
"start": 304,
"end": 331,
"text": "(Lloret and Boldrini, 2015)",
"ref_id": "BIBREF15"
},
{
"start": 389,
"end": 413,
"text": "(Rieser and Lemon, 2011)",
"ref_id": "BIBREF18"
},
{
"start": 426,
"end": 447,
"text": "(Walker et al., 2007)",
"ref_id": "BIBREF21"
},
{
"start": 547,
"end": 567,
"text": "(Lukin et al., 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In comparison, hybrid C2T-T2T generation is fairly unexplored territory. One recent example is Saldanha et al. (2016) , which evaluated two approaches to generating company descriptions -one with Wikipedia structured data, the other utilizing web search results -and determined that the best results were achieved by combining the two. However, the hybrid system in this case was only a concatenation of two independent approaches.",
"cite_spans": [
{
"start": 95,
"end": 117,
"text": "Saldanha et al. (2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our approach is a framework for creating generation meta-systems for specific applications of RDF entity description, such as biography and company description generation. Each meta-system, in turn, can be automatically adapted to a new domain within the application (e.g., the politician domain within the biography application) with only a simple text corpus, resulting in a concrete generation system that is specifically adapted to the domain. The generation system uses hybrid generation, building core messages from RDF data (C2T) and adding domain-specific secondary messages from the text corpus (T2T).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework Overview",
"sec_num": "3"
},
{
"text": "Our main data structure is the Semantic Typed Template (STT). An STT is a tuple V, R, L consisting of a set of vertices labeled with entity types V = {v 1 , . . . , v n }, a set of edges labeled with relations among the vertices R = {r 1 , . . . , r m } and a set of lexical templates L = {l 1 , . . . , l k }. The lexical templates L are all assumed to be lexicalizations of the semantics of the STT and paraphrases of each other, and must be phrases or sentences (that is, multiple-sentence lexicalizations are not allowed). The STT represents both the meaning and possible realizations of a sentence-level unit of semantics, without directly modeling the meaning in any way other than through the graph embodied in V and R. Instead, the meaning is grounded in the lexical template set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Data Structures",
"sec_num": "3.1"
},
{
"text": "A message is an instance of an STT \u03c4 with a concrete set of entities E. The set of types V (\u03c4 ) constrains the number and types of entities that are allowed to participate in E, and the set of relations R(\u03c4 ) constrains them further (the entities must have the proper relations among them).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Data Structures",
"sec_num": "3.1"
},
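{
"text": "To make these data structures concrete, the following is a minimal Python sketch of an STT and a message; the class and field names are ours (illustrative), not taken from any released implementation.

from dataclasses import dataclass

# Sketch of the STT tuple <V, R, L>; relations are (head, label, tail)
# triples over vertex slots, and templates are lexicalizations with
# [v1], [v2], ... slots.
@dataclass
class STT:
    vertices: list   # entity types, e.g. ['Person', 'City']
    relations: list  # e.g. [('v2', 'birthPlace', 'v1')]
    templates: list  # e.g. ['The birth place of [v1] is [v2]']

# A message instantiates an STT with concrete entities; V(tau) and
# R(tau) constrain which entity sets are allowed.
@dataclass
class Message:
    stt: STT
    entities: list   # one concrete entity per vertex, in order

    def well_typed(self):
        # minimal check: one entity per vertex type (a full check would
        # also verify the relations in the RDF graph)
        return len(self.entities) == len(self.stt.vertices)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Data Structures",
"sec_num": "3.1"
},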
{
"text": "RDF is a framework for organizing data using triples. Each triple contains a subject, a predicate and an object. In this paper, we use DBPedia (Auer et al., 2007) as our source of RDF data.",
"cite_spans": [
{
"start": 143,
"end": 162,
"text": "(Auer et al., 2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Application Definition",
"sec_num": "3.2"
},
{
"text": "Each RDF application defines a single entity type \u03b7: each instance of the application is an entity belonging to this type (that is, there exists a triple such that the subject is the instance entity, the predicate is typeOf and the object is \u03b7). In Biography, \u03b7 = Person, while in Company Description \u03b7 = Company. In addition, each application defines a domain-differentiating predicate \u03c0: in Biography, \u03c0 = Occupation, while in Company Description \u03c0 = Industry. \u03c0 must be chosen so that for each instance of the application, there exists an RDF triple where the subject is the instance entity and the predicate is \u03c0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application Definition",
"sec_num": "3.2"
},
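{
"text": "As a concrete illustration, here is a small Python sketch of an application definition and the instance test implied by the triples above; the dictionary layout and function names are ours, and triples are assumed to be (subject, predicate, object) tuples.

# Each application fixes an entity type (eta) and a domain-differentiating
# predicate (pi).
BIOGRAPHY = {'entity_type': 'Person', 'domain_predicate': 'Occupation'}
COMPANY_DESCRIPTION = {'entity_type': 'Company', 'domain_predicate': 'Industry'}

def is_instance(entity, app, triples):
    # the entity must be typed with the application's entity type eta ...
    typed = (entity, 'typeOf', app['entity_type']) in triples
    # ... and pi must appear as a predicate with the entity as subject
    has_pi = any(s == entity and p == app['domain_predicate']
                 for s, p, o in triples)
    return typed and has_pi

def domains_of(entity, app, triples):
    # the objects of the pi triples identify the entity's domains
    # (e.g. Politician or Model within Biography)
    return [o for s, p, o in triples
            if s == entity and p == app['domain_predicate']]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application Definition",
"sec_num": "3.2"
},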
{
"text": "Our framework defines each application as a generation meta-system: a generic system from which concrete, domain-adapted systems can be created using a text corpus. This section describes the process of domain adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Preparation",
"sec_num": "4"
},
{
"text": "In this paper, we use Wikipedia as our source for domain corpora (each corpus is the set of Wikipe-dia articles for all entities of the domain). While it is convenient to select the corpus in this way, there is nothing in the framework that requires the domain corpus to come from Wikipedia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Preparation",
"sec_num": "4"
},
{
"text": "Given a new domain corpus, we first extract definitional sentences: sentences in the corpus which contain an entity which is an instance of the domain. For example, in the Company Description application, in the Computer Hardware domain, definitional sentences for the entity Apple may include \"Apple is an American multinational technology company\" and \"In 1984, Apple launched the Macintosh, the first computer to be sold without a programming language at all\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Domain STTs and Messages",
"sec_num": "4.1"
},
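{
"text": "A minimal sketch of this extraction step, assuming the corpus is already split into sentences and that entity mentions can be matched by surface name (the real pipeline matches parsed NNPs against DBPedia, as described next):

def definitional_sentences(corpus_sentences, domain_entity_names):
    # keep every corpus sentence that mentions an instance of the domain
    for sentence in corpus_sentences:
        if any(name in sentence for name in domain_entity_names):
            yield sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Domain STTs and Messages",
"sec_num": "4.1"
},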
{
"text": "To templatize the sentence and find its paraphrases, we use the approach of . Each definitional sentence is parsed, and NNPs that match an entity in DBPedia become typed slots, resulting in a template and a set of entities that match the slot types. The slot types are determined in two stages -sense disambiguation and hierarchical positioning -both achieved by leveraging the DBPedia ontology in combination with vector representations. We then use the templated paraphrase detection method described in to compare the template with existing STTs that match the entities' types and relations (all of which are known from the RDF ontology). The paraphrasing approach uses sentencelevel vector representations to calculate the similarity of the template to all of the existing lexicalizations of an STT. If the template is determined to be a paraphrase for an existing STT, it is added as a new lexicalization; otherwise it is treated as a new STT. This new STT (or the old STT with a new lexicalization) can be used for any entity sets that have the appropriate types and relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Domain STTs and Messages",
"sec_num": "4.1"
},
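{
"text": "The templatization and paraphrase detection themselves follow Biran et al. (2016); the sketch below only illustrates the slot-replacement step, assuming a POS-tagged sentence and an entity_type() lookup that stands in for the two-stage typing (sense disambiguation plus hierarchical positioning).

def templatize(tagged_tokens, entity_type):
    # replace NNPs that match a DBPedia entity with typed slots
    template, entities, slot = [], [], 0
    for token, pos in tagged_tokens:
        etype = entity_type(token) if pos == 'NNP' else None
        if etype is not None:
            slot += 1
            template.append('[v%d:%s]' % (slot, etype))
            entities.append(token)
        else:
            template.append(token)
    return ' '.join(template), entities",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Domain STTs and Messages",
"sec_num": "4.1"
},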
{
"text": "In addition, we create a domain message from the STT and the entities found in the definitional sentence (effectively making the definitional sentence itself a possible lexicalization of this message, along with any alternative lexicalizations if the STT contains any). This gives us the set of potential secodary messages which we will use in the generation pipeline. Figure 1 shows an example of this process. Two definitional sentences for the entity are found and templatized, and the first is matched to an existing STT (ST T 1 ) as a paraphrase. The first two lexica-lizations of this STT are the default ones, created for all RDF triples as explained in Section 5.1; the third is the template of the definitional sentence. The STT can be used with any matching entity set, but in particular, it is matched to the entity set of the definitional sentence to create domain message 1. The second template cannot be matched to an existing STT, so a new one is created, along with domain message 2. ",
"cite_spans": [],
"ref_spans": [
{
"start": 369,
"end": 377,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Extracting Domain STTs and Messages",
"sec_num": "4.1"
},
{
"text": "V = {Person, City} R = {v 2 birthPlace v 1 } L = { \"The birth place of [v 1 ] is [v 2 ]\", \"[v 1 ]'s birth place is [v 2 ]\", \"[v 1 ] was born and raised in [v 2 ]\", . . . } Domain message 1: ST T = ST T 1 E = {Candice Bergen, Beverly Hills} ST T 2 (new, no RDF relation): V = {Model, Fashion Magazine} R = {\u2205} L = { \"[v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Domain STTs and Messages",
"sec_num": "4.1"
},
{
"text": "A discourse planning model is extracted from the domain corpus as described in (Biran and McKeown, 2015) . The model provides prior and transition probabilities for the four top-level Penn Discourse TreeBank (PDTB) (Prasad et al., 2008) discourse relations: expansion, comparison, contingency and temporal. These probabilities reflect the discourse style that characterizes the domain, and will be used in Section 5 to determine the ordering of, and relations between, generated messages.",
"cite_spans": [
{
"start": 79,
"end": 104,
"text": "(Biran and McKeown, 2015)",
"ref_id": "BIBREF5"
},
{
"start": 215,
"end": 236,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Discourse Planning Model",
"sec_num": "4.2"
},
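{
"text": "A minimal sketch of such a model, assuming we already have sequences of top-level PDTB relation labels identified in the domain corpus (the identification itself is from Biran and McKeown (2015) and is not shown):

from collections import Counter

RELATIONS = ['expansion', 'comparison', 'contingency', 'temporal']

def train_discourse_model(relation_sequences):
    prior, transition = Counter(), Counter()
    for seq in relation_sequences:
        prior.update(seq)
        transition.update(zip(seq, seq[1:]))
    total = float(sum(prior.values()))
    # prior P(r) and bigram transition P(r_i | r_{i-1})
    p_prior = {r: prior[r] / total for r in RELATIONS}
    p_trans = {(a, b): transition[a, b] / float(prior[a])
               for a in RELATIONS for b in RELATIONS if prior[a]}
    return p_prior, p_trans",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Discourse Planning Model",
"sec_num": "4.2"
},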
{
"text": "The language model used in the realization component of the pipeline is not a typical n-gram model. We are not trying to generate words within a sentence. Instead, we have a set of templates for each message to generate (which corresponds to a sentence or phrase in the final text) and we want to choose one that best fits the context. For this purpose, we define and extract three cross-sentence language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},
{
"text": "The first language model is a cross-sentence model for pairs of words that appear in adjacent sentences. The probability that a word w appears in a sentence if word v appears in the previous sentence, independently of everything else, is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},
{
"text": "P (w|v) = Count(v, w) Count(v)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},
{
"text": "For the probability of a particular template T given a selected previous sentence S, we take the average over all word pairs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},
{
"text": "P LM 1 (T |S) = (w,v)\u2208{T \u00d7S} P (w|v) |{T \u00d7 S}|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},
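{
"text": "A minimal sketch of training and applying this first model, assuming each document is a list of tokenized sentences (function names are ours):

from collections import Counter

def train_lm1(documents):
    pair_counts, word_counts = Counter(), Counter()
    for sentences in documents:
        for prev, cur in zip(sentences, sentences[1:]):
            for v in set(prev):
                word_counts[v] += 1
                for w in set(cur):
                    pair_counts[v, w] += 1
    return pair_counts, word_counts

def p_lm1(template_words, prev_words, pair_counts, word_counts):
    # average of P(w|v) = Count(v, w) / Count(v) over all cross-sentence
    # word pairs; an unseen v contributes zero
    pairs = [(w, v) for w in template_words for v in prev_words]
    total = sum(pair_counts[v, w] / float(word_counts[v])
                for w, v in pairs if word_counts[v])
    return total / len(pairs) if pairs else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},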
{
"text": "The second language model is a POS bigram pair model. It treats POS bigrams as individual words in the first model; in other words, P LM 2 (T |S) is defined in the same way as P LM 1 (T |S), except that w and v stand for POS bigrams (instead of words) in the candidate template and the selected previous sentence, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},
{
"text": "The third is a sentence length model. Here we compute the expected length of a sentence T given the length of the previous sentence S as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},
{
"text": "E[#T |#S] = {\u03c3 i :#\u03c3 i\u22121 =#S} #\u03c3 i |{\u03c3 i : #\u03c3 i\u22121 = #S}|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},
{
"text": "where #S is the length of sentence S in words. We then smooth this expectation estimate using the estimates of nearby lengths:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},
{
"text": "E[#T |#S] = #S+3 i=#S\u22123 E[#T |i] 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},
{
"text": "Based on this smoothed expectation, we define the probability of a template T given a selected previous sentence S:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},
{
"text": "P LM 3 (T |S) \u2206 = 1 (#T \u2212\u1ebc[#T |#S]) 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},
{
"text": "This definition is not intended to have a true probabilistic interpretation, but it preserves an order of likelihood since it increases monotonically as the length of T approaches the expected values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},
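{
"text": "A minimal sketch of the length model, again assuming documents are lists of tokenized sentences; the epsilon guard against a zero denominator is our addition.

from collections import defaultdict

def train_length_model(documents):
    lengths = defaultdict(list)
    for sentences in documents:
        for prev, cur in zip(sentences, sentences[1:]):
            lengths[len(prev)].append(len(cur))
    # E[#T | #S]: mean next-sentence length for each previous length
    return {n: sum(ls) / float(len(ls)) for n, ls in lengths.items()}

def smoothed(expect, prev_len):
    # average the expectations over a window of +/- 3 around prev_len
    return sum(expect.get(i, 0.0)
               for i in range(prev_len - 3, prev_len + 4)) / 7.0

def p_lm3(template_len, prev_len, expect):
    # monotone in closeness to the expectation, not a true probability
    diff = template_len - smoothed(expect, prev_len)
    return 1.0 / max(diff * diff, 1e-9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},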
{
"text": "These models are used in Section 5 to rank possible templates for a message being generated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting the Language Model",
"sec_num": "4.3"
},
{
"text": "Once a domain has been prepared, we can generate text for any instance in that domain. The generation pipeline contains four components: core message selection, domain message selection, discourse planning and realization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generation",
"sec_num": "5"
},
{
"text": "For each instance, we produce one core message from each RDF triple that has the instance's entity as the subject. To create a message from a triple, we first match it to an STT based on the predicate. Each predicate becomes an STT with two entity types (the type of the subject, which is the instance entity, and the type of the object) in V ; a single relation between the two types (the predicate) in R; and two simple initial templates in L:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core Message Selection",
"sec_num": "5.1"
},
{
"text": "\u2022 The (PREDICATE) of [v 1 ] is [v 2 ] \u2022 [v 1 ] 's (PREDICATE) is [v 2 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core Message Selection",
"sec_num": "5.1"
},
{
"text": "where (PREDICATE) is replaced with the relevant predicate. Additional templates are then found using paraphrasal template mining as described in the previous section. We also create plural versions for cases where v 2 is a list of entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core Message Selection",
"sec_num": "5.1"
},
{
"text": "For example, in the biography domain, we create an STT for the birthDate predicate with V = {person, date}; R = {v 1 birthDate v 2 }; and an initial template set L = {\"The birth date of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core Message Selection",
"sec_num": "5.1"
},
{
"text": "[v 1 ] is [v 2 ]\", \"[v 1 ]'s birth date is [v 2 ]\"}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core Message Selection",
"sec_num": "5.1"
},
{
"text": "In the preparation stage described in Section 4, L may be expanded with paraphrasal templates found in the corpus, for example \"[v 1 ] was born in [v 2 ]\" (see Figure 1 for an example).",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Core Message Selection",
"sec_num": "5.1"
},
{
"text": "We then create a message that contains the relevant STT and the entities in the triple. In case there are multiple triples with the same subject and predicate but different objects, we create a single message with a plural version of the STT and define the second entity as the list of all objects. We shall refer to the set of core messages as C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core Message Selection",
"sec_num": "5.1"
},
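{
"text": "A minimal sketch of this component, reusing the Message/STT sketch from Section 3.1 and assuming an stt_for_predicate() factory that builds (or retrieves) the default two-template STT for a predicate:

from collections import defaultdict

def core_messages(instance, triples, stt_for_predicate):
    # group the instance's triples by predicate so that multiple objects
    # collapse into one plural message
    objects_by_predicate = defaultdict(list)
    for s, p, o in triples:
        if s == instance:
            objects_by_predicate[p].append(o)
    messages = []
    for predicate, objects in objects_by_predicate.items():
        plural = len(objects) > 1
        stt = stt_for_predicate(predicate, plural=plural)
        second = objects if plural else objects[0]
        messages.append(Message(stt, [instance, second]))
    return messages",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core Message Selection",
"sec_num": "5.1"
},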
{
"text": "In this paper we separate the content selection problem into two parts. The first (this component) is application-dependent and domainagnostic, and handles the skeleton or core structure of the generated text; the next component handles additional domain-specific content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Core Message Selection",
"sec_num": "5.1"
},
{
"text": "The set of core messages gives us the core entities which participate in the core messages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
{
"text": "We also have the set of domain messages for the domain which are prepared (extracted from the domain corpus) ahead of time as described in Section 4. The set P of potential domain messages for generation is the subset of domain messages which contain the instance entity. In this stage of the pipeline, we select a subset of these potential domain messages to include in the generated text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
{
"text": "To select domain messages, we utilize the energy minimization framework described by Barzilay and Lapata (2005) . They describe a formulation that allows efficient optimization of what they call independent scores of content units and link scores among them through the energy minimization framework. The function to minimize is:",
"cite_spans": [
{
"start": 85,
"end": 111,
"text": "Barzilay and Lapata (2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
{
"text": "p\u2208S ind N (p) + p\u2208N ind S (p) + p i \u2208S p j \u2208N link(p i , p j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
{
"text": "where S is the subset of P selected for generation, N is the subset not selected (P \u2212 S = N ), ind S (p) is p's intrinsic tendency to be selected, ind N (p) is p's intrinsic tendency to not be selected and link(p i , p j ) is the dependency score for the link between p i and p j . A globally optimal partition of P to S and N can be found in polynomial time by constructing a particular kind of graph and finding a minimal cut partition (Greig et al., 1989) .",
"cite_spans": [
{
"start": 438,
"end": 458,
"text": "(Greig et al., 1989)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
{
"text": "The base preference of a message p is defined",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
{
"text": "Bp(p) = |R(\u03c4 (p))| if M (p) = E(p) \u2212|E(p) \\ M (p)| #L(\u03c4 (p)) 10 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
{
"text": "where M (p) is the subset of E(p) -the entities of message p -which contains only entities that participate in at least one relation in R(\u03c4 (p)), and #L(\u03c4 (p)) is the average length in words of the templates of the STT \u03c4 (p). This definition results in a positive score for a message where all entities participate in a relation, whose weight is the number of relations it covers; conversely, messages which have entities that do not participate in a relation (unaccounted entities), have a negative score which increases in magnitude with the number of unaccounted entities and with the length of the templates realizing them. The intuition is that a long message containing many entities that match no triples is unlikely to be relevant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
{
"text": "Then, we define the individual preference scores ind(p) as an average of the similarity of p to each of the core messages using the Jaccard coefficient as a similarity score:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
{
"text": "ind(p) = m\u2208C J(p, m) |C|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
{
"text": "Finally, we define ind S (p) and ind N (p) as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
{
"text": "ind S (p) = Bp(p) \u00d7 ind(p) if Bp(p) \u2265 0 0 otherwise ind N (p) = Bp(p) ind(p) if Bp(p) < 0 0 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
{
"text": "The link scores link(p i , p j ) are defined using a type similarity score. In contrast to the individual preference scores, where we maximize the entity overlap with core messages (to avoid including messages with no connection to the core of the text), we should not encourage the domain messages to all share the exact same set of entities. Instead, we focus on a softer semantic similarity: shared entity types. This score enhances the coherence of the generated text (for example, by encouraging a focus on the executives of a company in a particular instance, and on its products in another) but allows a flexible range of messages to be selected. The link score definition is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
{
"text": "link(p i , p j ) = (e i ,e j )\u2208{E(p i )\u00d7E(p j )} typsim(e i , e j ) |{E(p i ) \u00d7 E(p j )}|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
{
"text": "where typsim(e i , e j ) = 1 if type(e i ) = type(e j ) 0 otherwise Denoting the subset of P selected by this process as selected(P ), at the end of this process, we have M = C \u222a selected(P ) -the full set of messages to be generated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},
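{
"text": "A rough sketch of the selection step, following the min-cut construction of Barzilay and Lapata (2005) and Greig et al. (1989); it assumes networkx is available and that the ind scores have been mapped to non-negative capacities (we use magnitudes here).

import networkx as nx

def select_domain_messages(potential, ind_S, ind_N, link):
    g = nx.DiGraph()
    for i, p in enumerate(potential):
        # cutting s->i puts p in N, paying ind_S(p); cutting i->t puts p
        # in S, paying ind_N(p)
        g.add_edge('s', i, capacity=abs(ind_S(p)))
        g.add_edge(i, 't', capacity=abs(ind_N(p)))
    for i in range(len(potential)):
        for j in range(i + 1, len(potential)):
            w = link(potential[i], potential[j])
            g.add_edge(i, j, capacity=w)
            g.add_edge(j, i, capacity=w)
    # the source side of the minimum cut is the selected subset S
    _, (s_side, _) = nx.minimum_cut(g, 's', 't')
    return [p for i, p in enumerate(potential) if i in s_side]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Message Selection",
"sec_num": "5.2"
},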
{
"text": "The discourse planning component transforms the unordered set of messages M into an ordered sequence of paragraphs P = (p 1 , . . . , p k ) where each paragraph p i is an ordered discourse sequence p i = (m 1 , r 1 , m 2 , r 2 , . . . , r n\u22121 , m n ) , where the alternating m i and r i are messages and discourse relations, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 250,
"text": "= (m 1 , r 1 , m 2 , r 2 , . . . , r n\u22121 , m n )",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Discourse Planning",
"sec_num": "5.3"
},
{
"text": "First, we calculate the semantic similarity of each pair of messages in M as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Planning",
"sec_num": "5.3"
},
{
"text": "sim(m i , m j ) = cos(V \u03c8m i , V \u03c8m j )link(m i , m j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Planning",
"sec_num": "5.3"
},
{
"text": "where \u03c8 m i is the pseudo-sentence of message m i , constructed by concatenating all of its templates; V \u03c8m i is the vector representing \u03c8 m i , defined as the geometric mean of the vectors of all words participating in \u03c8 m i (the word vectors are traditional context vectors extracted from Gigaword with a window of 5 words); and link(m i , m j ) is defined as above. Essentially, this is a combination of the entity type-based semantic similarity and the distributional similarity of the lexicalizations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Planning",
"sec_num": "5.3"
},
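{
"text": "A minimal sketch of this similarity, assuming word_vec() returns a non-negative context-count vector for a word and link() is the type-similarity score from Section 5.2; the small epsilon that keeps the element-wise geometric mean defined at zero counts is our addition.

import numpy as np

def pseudo_vector(message, word_vec):
    words = ' '.join(message.stt.templates).split()
    vecs = np.array([word_vec(w) for w in words], dtype=float)
    # element-wise geometric mean of the word vectors
    return np.exp(np.log(vecs + 1e-12).mean(axis=0))

def similarity(m_i, m_j, word_vec, link):
    v_i = pseudo_vector(m_i, word_vec)
    v_j = pseudo_vector(m_j, word_vec)
    cosine = v_i.dot(v_j) / (np.linalg.norm(v_i) * np.linalg.norm(v_j))
    return cosine * link(m_i, m_j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Planning",
"sec_num": "5.3"
},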
{
"text": "We use single-linkage agglomerative clustering (with a stopping criteria of sim(m i , m j ) \u2264 0.05) to group the messages into semantic groups of messages that are similar in topic. Then, within each semantic group, we find potential discourse relations for each pair of messages: Next, we use the discourse model extracted from the domain corpus to generate a discourse sequence. In order to make sure entity coherence is taken into account when choosing the ordering in addition to discourse coherence, we augment the probabilities coming from the discourse model P D (r i |R i\u22121 ), where R i\u22121 is the sequence of relations chosen so far, with the entity coherence score J(m i , m i\u22121 ), so that the probability of a relation between two messages is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Planning",
"sec_num": "5.3"
},
{
"text": "P (r i |R i\u22121 , m i , m i\u22121 ) = P D (r i |R i\u22121 )J(m i , m i\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Planning",
"sec_num": "5.3"
},
{
"text": "The discourse sequence is created stochastically using these probabilities as described in (Biran and McKeown, 2015) . Then, we break the discourse sequence into paragraphs that do not contain norel relations. Concatenating all of the paragraphs built from the discourse sequences of all semantic groups, we have an unordered set of paragraphs P, where each p i is an ordered discourse sequence of messages and relations.",
"cite_spans": [
{
"start": 91,
"end": 116,
"text": "(Biran and McKeown, 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Planning",
"sec_num": "5.3"
},
{
"text": "To order the paragraphs, we use the following importance score:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Planning",
"sec_num": "5.3"
},
{
"text": "imp(p i ) = m\u2208p i |{e|e \u2208 E(m)}|Bp(m) |p i |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Planning",
"sec_num": "5.3"
},
{
"text": "which is the average number of entities in a message of p i , weighted by the base preference score Bp(m). The paragraphs are then sorted in decreasing order using this score, so that the paragraphs containing the most important messages tend to appear earlier in the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Planning",
"sec_num": "5.3"
},
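{
"text": "A minimal sketch of the ordering step, with each paragraph represented as a list of messages (its discourse relations do not affect the score):

def importance(paragraph, base_preference):
    # average |E(m)| * Bp(m) over the paragraph's messages
    total = sum(len(m.entities) * base_preference(m) for m in paragraph)
    return total / float(len(paragraph))

def order_paragraphs(paragraphs, base_preference):
    return sorted(paragraphs,
                  key=lambda p: importance(p, base_preference),
                  reverse=True)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Planning",
"sec_num": "5.3"
},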
{
"text": "At this stage, we have the ordered set of paragraphs P to be realized. To generate a paragraph, we select a template for each message and then select a discourse connective, or choose not to use one, for each discourse relation. Selecting a template is done using the three language models prepared ahead of time, as described in Section 4. We build a ranker from each model, and choose the template from {l \u2208 L(\u03c4 (m))} that maximizes the the sum of ranks given the previously realized sentence (in the paragraph) s:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Realization",
"sec_num": "5.4"
},
{
"text": "l = argmax l\u2208L(\u03c4 (m)) 3 i=1 rank P LM i (l|s)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Realization",
"sec_num": "5.4"
},
{
"text": "Once the template is chosen, we fill the slots with the entities E(m) to make it a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Realization",
"sec_num": "5.4"
},
{
"text": "At this point we have the final lexical form of the message, and the last task is to link it with the previous sentence. We have a small set of discourse connective templates for each one of the 4 class-level PDTB relations (for example, \"m i . However, m j \" is one of the templates for the comparison relation), and we know the relation between the message and the previous message. We randomly select a connective, with a 50% chance of having no connective and a uniform distribution among the connectives for the relation, but avoid using connectives for sentence pairs that are together larger than 40 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Realization",
"sec_num": "5.4"
},
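{
"text": "A minimal sketch of both realization decisions; each ranker is assumed to map (template, previous sentence) to a rank under one of the three language models, with higher ranks better.

import random

def choose_template(candidates, prev_sentence, rankers):
    # maximize the sum of ranks across the three language models
    return max(candidates,
               key=lambda t: sum(r(t, prev_sentence) for r in rankers))

def choose_connective(connectives_for_relation, prev_sentence, sentence):
    # 50% chance of no connective, uniform choice otherwise; never use a
    # connective when the pair exceeds 40 words in total
    pair_len = len(prev_sentence.split()) + len(sentence.split())
    if pair_len > 40 or random.random() < 0.5:
        return None
    return random.choice(connectives_for_relation)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Realization",
"sec_num": "5.4"
},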
{
"text": "At the end of this step, all paragraphs are generated with lexicalized sentences and connectives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Realization",
"sec_num": "5.4"
},
{
"text": "To evaluate our RDF applications we conducted a crowd-sourced human experiment using texts generated from four domains in two applications: Biographies of Politicians and Models, and Company Descriptions of Automobile Manufacturers and Video Game Developers. We picked 100 instances from each domain of each application, for a total of 400 (we picked the instances that had the most RDF triples in each domain). Then, we generated 4 versions for each instance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "1. A full-system version 2. A version that excludes paraphrase detection (so core messages only had the two manually-created templates, and domain messages only had a single template each)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "3. A version that excludes the discourse model (so discourse planning was done using only entity coherence scores) 4. A baseline version that has no domain adaptation at all and is fully C2T instead of hybrid (i.e., only core messages were generated, without any extracted domain messages)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "Using these 4 versions, we devised 3 questions for each instance. In each question, the annotator saw two texts about the same entity -the full system version, and one of the other three versions -and was asked which is better (with an option of saying they are equal), along several criteria. The questions were presented in random order and the systems were anonymized. We showed each question to three annotators and used the majority vote, throwing out results where there was total disagreement between the annotators, which happened 12% of the time for the baseline version and 17 \u2212 21% of the time for the other variants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "The questions included four criteria: the content of the text (information relevance); the ordering of the sentences and paragraphs; the style of the text (how human-like it is); and the overall satisfiability of the text as a description of the person/company in question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "We show the results of the experiment in Table 1. The results in this table are for both applications and all four domains. Each comparison (e.g., \"No Hybrid VS Full System\" shows the breakdown of preference by annotators when they were shown texts generated by the two variants: how many (in percentage) preferred the baseline system (e.g. No Hybrid), how many preferred the full system, and how many thought they were equal. We also show the winning difference between the two systems, i.e. those who thought that the full system was better than the baseline minus those who thought the opposite, and we measure statistical significance on these differences. Statistically significant results are marked with a dagger.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "The most striking result of Table 1 is that the full system is overwhelmingly favored by annotators over the non-hybrid baseline, with a 32% \u2212 46% lead in all categories. This result, more than anything, shows the value of our framework and the hybrid approach. The full system was particularly better than this baseline in content, which is generally expected since it by definition contains less content than the full system (it only generates the core messages); note, however, that this result suggests that the extracted and selected messages are relevant and enhance the reader's satisfaction with the text. The baseline (which, in addition to not using extracted domain messages, also does not use the extracted paraphrasal templates and discourse model) also loses heavily to the full system in ordering and style, as well as overall. In all criteria, the percentage of annotators who thought the texts were equally good was low (11% \u2212 20%), suggesting that the difference was very visible.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.1"
},
{
"text": "While the effect of removing a single component is not as dramatic as removing both in addition to the domain messages, it is clearly visible in the preferences of 10% \u2020 10% 9% \u2020 Table 1 : Preferences, with different criteria, given by the human annotators when presented with two versions -the full system VS each of the baseline versions. Statistically significant winning differences are marked with a dagger.",
"cite_spans": [],
"ref_spans": [
{
"start": 179,
"end": 186,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.1"
},
{
"text": "appears to be the paraphrases: the No Paraphrases version loses to the full system more heavily than No Discourse in content, style and overall. This result is not surprising since paraphrases have a dramatic effect on the text itself (they change the templates that are used to convey information, enhance the diversity of the text and may merge messages that are duplicates), and it suggests that the paraphrases we find are generally more satisfying than the default. It is also not surprising that the No Discourse Model variant loses most on ordering. While the difference is not as dramatic here, it is statistically significant and shows that our extracted domain-specific discourse model produces a more satisfying ordering of the text. Figure 2 shows the output of the biography for politician Marine Le Pen of the full system and the non-hybrid baseline. To show the contributions of different components, we mark sentences generated from extracted domain messages in bold, and sentences generated from core messages using an extracted paraphrase in italics. Sentences in unmarked typeface are those that were generated from core messages using a default template. The two variants make clear the main advantage of the full system: it simply has more content. The full output contains six sentences (messages) more than the baseline, which are clearly relevant to the biography. The entire last paragraph, concerned with Le Pen's policies and positions -an important part of a politician's biography -is missing from the baseline. These messages were extracted from the corpus and show the power of the hybrid approach. In addition to the final paragraph, two extracted messages are included which are concerned with Le Pen's controversial history, and together with the RDF-derived message about her offices, they comprise a paragraph generally about her political background. This is typical of the way that extracted messages contribute to the organization of the text in addition to the content: in the baseline version, the offices message is lumped together with messages about her background in general (alma mater, birth date, religion, partner etc). It demonstrates how the full system consistently outperforms the baseline in the ordering and style criteria, in addition to content and overall. Figure 3 shows the output of the company description for video game developer Taito Corporation of the full system and the no-paraphrases variant. In this case the two outputs contain exactly the same information and have almost the same organization of the text. The way in which the text is realized, however, is very different in the last paragraph. The full system realizes four of the six messages in that paragraph using extracted templates, and merges two messages into a single template in one case (\"Taito Corporation was founded in 1953 by Michael Kogan\", instead of the two sentences in the no-paraphrases baseline). The single-sentence messages also look better, e.g. \"Taito Corporation has around 662 employees\" instead of the awkward-sounding \"Taito Marine Le Pen regularly denounces sharp rises in energy prices which has \"harmful consequences on the purchasing power of the working and middle-class families\". Marine Le Pen denounces the current corporate tax as \"a crying injustice\". Marine Le Pen advocates to \"vote for the abolition of the law enabling the regularization of the illegal immigrants\". Marine Le Pen seeks to establish a moratorium on legal immigration. 
Taito Corporation was founded in 1953 by Michael Kogan. Taito Corporation has around 662 employees. Taito Corporation's location is Shibuya, Tokyo, Japan. Taito Corporation currently has a subsidiary in Beijing, China. Taito Corporation was merged with \"Square Enix\".",
"cite_spans": [],
"ref_spans": [
{
"start": 745,
"end": 753,
"text": "Figure 2",
"ref_id": "FIGREF5"
},
{
"start": 2315,
"end": 2323,
"text": "Figure 3",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.1"
},
{
"text": "[Figure 3 residue: the no-paraphrases output for Taito Corporation, including \"Taito Corporation's homepage is http://www.taito.com\" and \"Taito Corporation's number of employees is 662\".]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.1"
},
{
"text": "We introduced a framework for creating hybrid concept-to-text and text-to-text generation systems that produce descriptions of RDF entities, and can be automatically adapted to a new domain with only a simple text corpus. We showed through a human evaluation that both the hybrid approach and domain adaptation result in significantly more satisfying descriptions, and that individual methods of domain adaptation help with the criteria we expect them to (i.e., finding paraphrases helps with content and style while an extracted discourse model helps with ordering). The code for this framework is available at www.cs. columbia.edu/\u02dcorb/hygen/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generating natural language descriptions from owl ontologies: the naturalowl system",
"authors": [
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Gerasimos",
"middle": [],
"last": "Lampouras",
"suffix": ""
},
{
"first": "Dimitrios",
"middle": [],
"last": "Galanis",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Artificial Intelligence Research",
"volume": "",
"issue": "",
"pages": "671--715",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ion Androutsopoulos, Gerasimos Lampouras, and Di- mitrios Galanis. 2013. Generating natural language descriptions from owl ontologies: the naturalowl sy- stem. Journal of Artificial Intelligence Research, pa- ges 671-715.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A simple domain-independent probabilistic approach to generation",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10",
"volume": "",
"issue": "",
"pages": "502--512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach to generation. In Proceedings of the 2010 Confe- rence on Empirical Methods in Natural Language Processing, EMNLP '10, pages 502-512, Strouds- burg, PA, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Dbpedia: a nucleus for a web of open data",
"authors": [
{
"first": "S\u00f6ren",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Kobilarov",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Cyganiak",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Ives",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 6th international The semantic web and 2nd Asian conference on Asian semantic web conference, ISWC'07/ASWC'07",
"volume": "",
"issue": "",
"pages": "722--735",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00f6ren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: a nucleus for a web of open data. In Proceedings of the 6th international The seman- tic web and 2nd Asian conference on Asian semantic web conference, ISWC'07/ASWC'07, pages 722- 735, Berlin, Heidelberg. Springer-Verlag.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Collective content selection for concept-to-text generation",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the HLT/EMNLP",
"volume": "",
"issue": "",
"pages": "331--338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Mirella Lapata. 2005. Collective content selection for concept-to-text generation. In Proceedings of the HLT/EMNLP, pages 331-338, Vancouver.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Mining paraphrasal typed templates from a plain text corpus",
"authors": [
{
"first": "Or",
"middle": [],
"last": "Biran",
"suffix": ""
},
{
"first": "Terra",
"middle": [],
"last": "Blevins",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Association for Computational Linguistics (ACL). Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Or Biran, Terra Blevins, and Kathleen McKeown. 2016. Mining paraphrasal typed templates from a plain text corpus. In Proceedings of the Associa- tion for Computational Linguistics (ACL). Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Discourse planning with an n-gram model of relations",
"authors": [
{
"first": "Or",
"middle": [],
"last": "Biran",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1973--1977",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Or Biran and Kathleen McKeown. 2015. Discourse planning with an n-gram model of relations. In Pro- ceedings of the 2015 Conference on Empirical Met- hods in Natural Language Processing, pages 1973- 1977, Lisbon, Portugal. Association for Computati- onal Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Natural language generation in the context of the semantic web",
"authors": [
{
"first": "Nadjet",
"middle": [],
"last": "Bouayad-Agha",
"suffix": ""
},
{
"first": "Gerard",
"middle": [],
"last": "Casamayor",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2014,
"venue": "Semantic Web",
"volume": "5",
"issue": "",
"pages": "493--513",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nadjet Bouayad-Agha, Gerard Casamayor, and Leo Wanner. 2014. Natural language generation in the context of the semantic web. Semantic Web, 5(6):493-513.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Exploiting ontology lexica for generating natural language texts from rdf data",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Cimiano",
"suffix": ""
},
{
"first": "Janna",
"middle": [],
"last": "L\u00fcker",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Nagel",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Unger",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 14th European Workshop on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "10--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Cimiano, Janna L\u00fcker, David Nagel, and Chris- tina Unger. 2013. Exploiting ontology lexica for generating natural language texts from rdf data. In Proceedings of the 14th European Workshop on Na- tural Language Generation, pages 10-19, Sofia, Bulgaria. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Statistical acquisition of content selection rules for natural language generation",
"authors": [
{
"first": "Pablo",
"middle": [
"A"
],
"last": "Duboue",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, EMNLP '03",
"volume": "",
"issue": "",
"pages": "121--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pablo A. Duboue and Kathleen R. McKeown. 2003. Statistical acquisition of content selection rules for natural language generation. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, EMNLP '03, pages 121-128, Stroudsburg, PA, USA. Association for Computati- onal Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Generating natural language from linked data: Unsupervised template extraction",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Duma",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -Long Papers",
"volume": "",
"issue": "",
"pages": "83--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Duma and Ewan Klein. 2013. Generating na- tural language from linked data: Unsupervised tem- plate extraction. In Proceedings of the 10th Inter- national Conference on Computational Semantics (IWCS 2013) -Long Papers, pages 83-94. ASSOC COMPUTATIONAL LINGUISTICS-ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An open-source natural language generator for owl ontologies and its use in prot\u00c9g\u00c9 and second life",
"authors": [
{
"first": "Dimitrios",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Karakatsiotis",
"suffix": ""
},
{
"first": "Gerasimos",
"middle": [],
"last": "Lampouras",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics: Demonstrations Session, EACL '09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitrios Galanis, George Karakatsiotis, Gerasimos Lampouras, and Ion Androutsopoulos. 2009. An open-source natural language generator for owl on- tologies and its use in prot\u00c9g\u00c9 and second life. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Lin- guistics: Demonstrations Session, EACL '09, pa- ges 17-20, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multi-adaptive natural language generation using principal component regression",
"authors": [
{
"first": "Dimitra",
"middle": [],
"last": "Gkatzia",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Hastie",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Lemon",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Natural Language Generation (INLG)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitra Gkatzia, Helen Hastie, and Oliver Lemon. 2014. Multi-adaptive natural language generation using principal component regression. In Procee- dings of the International Natural Language Gene- ration (INLG).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Exact maximum a posteriori estimation for binary images",
"authors": [
{
"first": "D",
"middle": [
"M"
],
"last": "Greig",
"suffix": ""
},
{
"first": "B",
"middle": [
"T"
],
"last": "Porteous",
"suffix": ""
},
{
"first": "A",
"middle": [
"H"
],
"last": "Seheult",
"suffix": ""
}
],
"year": 1989,
"venue": "Journal of the Royal Statistical Society. Series B (Methodological)",
"volume": "51",
"issue": "2",
"pages": "271--279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. M. Greig, B. T. Porteous, and A. H. Seheult. 1989. Exact maximum a posteriori estimation for binary images. Journal of the Royal Statistical Society. Se- ries B (Methodological), 51(2):271-279.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adaptive referring expression generation in spoken dialogue systems: Evaluation with real users",
"authors": [
{
"first": "Srinivasan",
"middle": [],
"last": "Janarthanam",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Lemon",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, SIGDIAL '10",
"volume": "",
"issue": "",
"pages": "124--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivasan Janarthanam and Oliver Lemon. 2010. Adaptive referring expression generation in spoken dialogue systems: Evaluation with real users. In Proceedings of the 11th Annual Meeting of the Spe- cial Interest Group on Discourse and Dialogue, SIGDIAL '10, pages 124-131, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A statistical NLG framework for aggregated planning and realization",
"authors": [
{
"first": "Ravi",
"middle": [],
"last": "Kondadadi",
"suffix": ""
},
{
"first": "Blake",
"middle": [],
"last": "Howald",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Schilder",
"suffix": ""
}
],
"year": 2013,
"venue": "The Association for Computer Linguistics",
"volume": "",
"issue": "",
"pages": "1406--1415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ravi Kondadadi, Blake Howald, and Frank Schilder. 2013. A statistical NLG framework for aggregated planning and realization. In ACL, pages 1406-1415. The Association for Computer Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Multi-genre summarization: Approach, potentials and challenges",
"authors": [
{
"first": "E",
"middle": [],
"last": "Lloret",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Boldrini",
"suffix": ""
}
],
"year": 2015,
"venue": "eChallenges e-2015 Conference",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Lloret and E. Boldrini. 2015. Multi-genre summa- rization: Approach, potentials and challenges. In eChallenges e-2015 Conference, pages 1-9.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Generating sentence planning variations for story telling",
"authors": [
{
"first": "Stephanie",
"middle": [],
"last": "Lukin",
"suffix": ""
},
{
"first": "Lena",
"middle": [],
"last": "Reed",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "188--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephanie Lukin, Lena Reed, and Marilyn Walker. 2015. Generating sentence planning variations for story telling. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 188-197, Prague, Czech Repu- blic. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The penn discourse treebank 2.0",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Livio",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The penn discourse treebank 2.0. In In Proceedings of LREC.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning and evaluation of dialogue strategies for new applications: Empirical methods for optimization from small data sets",
"authors": [
{
"first": "Verena",
"middle": [],
"last": "Rieser",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Lemon",
"suffix": ""
}
],
"year": 2011,
"venue": "Comput. Linguist",
"volume": "37",
"issue": "1",
"pages": "153--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Verena Rieser and Oliver Lemon. 2011. Learning and evaluation of dialogue strategies for new applicati- ons: Empirical methods for optimization from small data sets. Comput. Linguist., 37(1):153-196.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "An entity-focused approach to generating company descriptions",
"authors": [
{
"first": "Gavin",
"middle": [],
"last": "Saldanha",
"suffix": ""
},
{
"first": "Or",
"middle": [],
"last": "Biran",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "Alfio",
"middle": [],
"last": "Gliozzo",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gavin Saldanha, Or Biran, Kathleen McKeown, and Alfio Gliozzo. 2016. An entity-focused approach to generating company descriptions. In Procee- dings of the Association for Computational Linguis- tics (ACL), Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Domain independent sentence generation from rdf representations for the semantic web",
"authors": [
{
"first": "Xiantang",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Mellish",
"suffix": ""
}
],
"year": 2006,
"venue": "Combined Workshop on Language-Enabled Educational Technology and Development and Evaluation of Robust Spoken Dialogue Systems, European Conference on AI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiantang Sun and Chris Mellish. 2006. Domain inde- pendent sentence generation from rdf representati- ons for the semantic web. In Combined Workshop on Language-Enabled Educational Technology and Development and Evaluation of Robust Spoken Dia- logue Systems, European Conference on AI.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Individual and domain adaptation in sentence planning for dialogue",
"authors": [
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Mairesse",
"suffix": ""
},
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
}
],
"year": 2007,
"venue": "J. Artif. Int. Res",
"volume": "30",
"issue": "1",
"pages": "413--456",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn Walker, Amanda Stent, Fran\u00e7ois Mairesse, and Rashmi Prasad. 2007. Individual and domain adap- tation in sentence planning for dialogue. J. Artif. Int. Res., 30(1):413-456.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Entity: Candice Bergen (a model) Definitional sentences (found in Wikipedia): -\"Candice Bergen was born and raised in Beverly Hills, California\" -\"Bergen began her career as a fashion model and appeared on the front cover of Vogue magazine\" Templates: -[Person] was born and raised in [City] -[Model] began her career as a fashion model and appeared on the front cover of [Fashion Magazine]ST T 1 (matched through paraphrasing):"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "1 ] began her career as a fashion model and appeared on the front cover of [v 2 ]\" } Domain message 2: ST T = ST T 2 E = {Candice Bergen, Vogue Magazine} Figure 1: An example of the domain STT and message extraction process."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "1. If the STTs of m i and m j are the same but they have no entities in common then there is a potential comparison relation between them 2. If J(m i , m j ) \u2265 0.5 then there is a potential expansion relation between them 3. Manually annotated relations for 20 specific pairs of RDF predicates, e.g. birthPlace and residence may have a temporal or a comparison relation between them 4. All message pairs can have a norel relation"
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "system output: Marine Le Pen's birth places are Neuilly-sur-Seine and France. Marine Le Pen's residences are Millas, H\u00e9nin-Beaumont and Saint-Cloud. The birth name of Marine Le Pen is Marion Anne Perrine Le Pen. Marine Le Pen's offices are Leader of the National Front, Municipal Councillor, Member of the European Parliament and Regional Councillor. Marine Le Pen's ups and downs in the political arena follow those of the National Front at the time. Marine Le Pen stirred up controversy during the internal campaign. The homepage of Marine Le Pen is http://www.marinelepen.fr/. The alma mater of Marine Le Pen is Panth\u00e9on-Assas University. Marine Le Pen's birth date was 1968-08-05. Marine Le Pen's religion is Catholic Church. Marine Le Pen's occupation is Politician. Marine Le Pen's partner is Louis Aliot."
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "'s homepage is http://www.marinelepen.fr/. Marine Le Pen's offices are Leader of the National Front, Municipal Councillor, Member of the European Parliament and Regional Councillor. Marine Le Pen's birth name is Marion Anne Perrine Le Pen. Marine Le Pen's religion is Catholic Church. Marine Le Pen's alma mater is Panth\u00e9on-Assas University. Marine Le Pen's birth date was 1968-08-05. Marine Le Pen's partner is Louis Aliot. The birth places of Marine Le Pen are Neuilly-sur-Seine and France. Marine Le Pen's residences are Millas, H'enin-Beaumont and Saint-Cloud."
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Output for Marine Le Pen.Full system output: The homepage of Taito Corporation is http://www.taito.com. The products of Taito Corporation are Lufia, Bubble Bobble, Cooking Mama, Space Invaders, Chase H.Q., Gun Fight and Puzzle Bobble."
},
"FIGREF6": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Output for Taito Corporation."
},
"TABREF0": {
"num": null,
"text": "Both reduced versions (No Paraphrases and No Discourse Model) lose to the full system in every criteria, often in double digits. The more meaningful component",
"html": null,
"content": "<table><tr><td/><td>Preference</td><td colspan=\"3\">Content Ordering Style</td><td>Overall</td></tr><tr><td/><td>No Hybrid</td><td>20%</td><td>27%</td><td>24%</td><td>22%</td></tr><tr><td>No Hybrid</td><td>Equal</td><td>14%</td><td>11%</td><td>20%</td><td>14%</td></tr><tr><td>VS Full System</td><td>Full System</td><td>66%</td><td>62%</td><td>56%</td><td>64%</td></tr><tr><td/><td colspan=\"2\">Full -baseline win diff. 46% \u2020</td><td>35% \u2020</td><td colspan=\"2\">32% \u2020 42% \u2020</td></tr><tr><td/><td>No Paraphrases</td><td>29%</td><td>33%</td><td>29%</td><td>30%</td></tr><tr><td>No Paraphrases</td><td>Equal</td><td>31%</td><td>26%</td><td>28%</td><td>27%</td></tr><tr><td>VS Full System</td><td>Full System</td><td>40%</td><td>41%</td><td>43%</td><td>43%</td></tr><tr><td/><td colspan=\"2\">Full -baseline win diff. 11% \u2020</td><td>8% \u2020</td><td colspan=\"2\">14% \u2020 13% \u2020</td></tr><tr><td/><td>No Discourse Model</td><td>33%</td><td>34%</td><td>32%</td><td>34%</td></tr><tr><td colspan=\"2\">No Discourse Model Equal</td><td>30%</td><td>22%</td><td>26%</td><td>23%</td></tr><tr><td>VS Full System</td><td>Full System</td><td>37%</td><td>44%</td><td>42%</td><td>43%</td></tr><tr><td/><td colspan=\"2\">Full -baseline win diff. 4%</td><td/><td/><td/></tr></table>",
"type_str": "table"
},
"TABREF1": {
"num": null,
"text": "The products of Taito Corporation are Lufia, Bubble Bobble, Cooking Mama, Space Invaders, Chase H.Q., Gun Fight and Puzzle Bobble. Taito Corporation's founding year is 1953. The founder of Taito Corporation is Michael Kogan. Taito Corporation's owner is Square Enix. Taito Corporation currently has a subsidiary in Beijing, China. Taito Corporation's location is Shibuya, Tokyo, Japan. Taito Corporation's number of employees is 662.",
"html": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}