{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:29:00.593670Z" }, "title": "Generating Diverse Descriptions from Semantic Graphs", "authors": [ { "first": "Jiuzhou", "middle": [], "last": "Han", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Melbourne", "location": { "country": "Australia" } }, "email": "jiuzhouh@foxmail.com" }, { "first": "Daniel", "middle": [], "last": "Beck", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Melbourne", "location": { "country": "Australia" } }, "email": "d.beck@unimelb.edu.au" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Melbourne", "location": { "country": "Australia" } }, "email": "trevor.cohn@unimelb.edu.au" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Text generation from semantic graphs is traditionally performed with deterministic methods, which generate a unique description given an input graph. However, the generation problem admits a range of acceptable textual outputs, exhibiting lexical, syntactic and semantic variation. To address this disconnect, we present two main contributions. First, we propose a stochastic graph-to-text model, incorporating a latent variable in an encoder-decoder model, and its use in an ensemble. Second, to assess the diversity of the generated sentences, we propose a new automatic evaluation metric which jointly evaluates output diversity and quality in a multi-reference setting. We evaluate the models on WebNLG datasets in English and Russian, and show an ensemble of stochastic models produces diverse sets of generated sentences, while retaining similar quality to state-of-the-art models.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Text generation from semantic graphs is traditionally performed with deterministic methods, which generate a unique description given an input graph. However, the generation problem admits a range of acceptable textual outputs, exhibiting lexical, syntactic and semantic variation. To address this disconnect, we present two main contributions. First, we propose a stochastic graph-to-text model, incorporating a latent variable in an encoder-decoder model, and its use in an ensemble. Second, to assess the diversity of the generated sentences, we propose a new automatic evaluation metric which jointly evaluates output diversity and quality in a multi-reference setting. We evaluate the models on WebNLG datasets in English and Russian, and show an ensemble of stochastic models produces diverse sets of generated sentences, while retaining similar quality to state-of-the-art models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic graphs are an integral part of knowledge bases that integrate and store information in a structured and machine-accessible way (van Harmelen et al., 2008) . They are usually limited to specific domains, describing concepts, entities and their relationships in the real world. 
Generating descriptions from semantic graphs is an important application of Natural Language Generation (NLG) and can be framed in a graph-to-text transduction approach.", "cite_spans": [ { "start": 136, "end": 163, "text": "(van Harmelen et al., 2008)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent years, approaches to graph-to-text generation can be broadly categorised into two groups. The first uses a sequence-to-sequence model (Trisedya et al., 2018; Konstas et al., 2017; Ferreira et al., 2019) : the key step in this approach is to linearise the input graph to a sequence. Sequence-to-sequence models have been proved to be effective for tasks like question answering (Yin et al., 2016 ), text summarisation (Nallapati et al., 2016) , and constituency parsing (Vinyals et al., 2015) . However, when dealing with graph inputs, this method does not take full advantage of the graph structure. Another approach is to handle the graph directly, using a graph-to-sequence model (Ribeiro et al., 2020; Beck et al., 2018; Zhao et al., 2020) . This approach has been recently widely adopted as it shows better performance for generating text from graphs (Xu et al., 2018) .", "cite_spans": [ { "start": 144, "end": 167, "text": "(Trisedya et al., 2018;", "ref_id": "BIBREF27" }, { "start": 168, "end": 189, "text": "Konstas et al., 2017;", "ref_id": "BIBREF13" }, { "start": 190, "end": 212, "text": "Ferreira et al., 2019)", "ref_id": "BIBREF3" }, { "start": 387, "end": 404, "text": "(Yin et al., 2016", "ref_id": "BIBREF33" }, { "start": 427, "end": 451, "text": "(Nallapati et al., 2016)", "ref_id": "BIBREF20" }, { "start": 479, "end": 501, "text": "(Vinyals et al., 2015)", "ref_id": "BIBREF30" }, { "start": 692, "end": 714, "text": "(Ribeiro et al., 2020;", "ref_id": "BIBREF24" }, { "start": 715, "end": 733, "text": "Beck et al., 2018;", "ref_id": "BIBREF0" }, { "start": 734, "end": 752, "text": "Zhao et al., 2020)", "ref_id": "BIBREF35" }, { "start": 865, "end": 882, "text": "(Xu et al., 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The models used in previous work are deterministic: given the same input graph, they will always generate the same text (assuming a deterministic decoding algorithm is used). However, it is widely known that many graphs admit multiple valid descriptions. This is evidenced by the presence of multiple references in datasets such as WebNLG (Gardent et al., 2017a,b) and it is a common phenomenon in other generation tasks such as machine translation and image captioning. In this work, we propose to use models that generate sets of descriptions instead of a single one. In particular, we develop stochastic models with latent variables that capture diversity aspects of semantic graph descriptions, such as lexical and syntactic variability. We also propose a novel evaluation methodology that combines quality and diversity into a single score, in order to address caveats of previously proposed diversity metrics. 
Our findings show that stochastic models perform favourably when generating sets of descriptions, without sacrificing the quality of state-of-the-art architectures.", "cite_spans": [ { "start": 339, "end": 364, "text": "(Gardent et al., 2017a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Graph-to-sequence Models Standard graph-tosequence models have two main components: a graph encoder and a sequence decoder. The encoder learns the hidden representation of the input graph and the decoder generates text based on this representation. Different graph-to-sequence mod-els vary mainly in the graph encoders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Marcheggiani and Perez-Beltrachini (2018) proposed an encoder based on Graph Convolutional Networks (Kipf and Welling, 2017, GCNs) , which directly exploit the input structure. Similar to Convolutional Neural Networks (LeCun et al., 1998) , GCN layers can be stacked, resulting in representations that take into account non-adjacent, longdistance neighbours. Beck et al. (2018) used Gated Graph Neural Networks by extending networks on graph architectures with gating mechanisms, similar to Gated Recurrent Units (Cho et al., 2014, GRUs) . Koncel-Kedziorski et al. (2019) proposed Graph Transformer Encoder by extending Transformers (Vaswani et al., 2017) to graph-structured inputs, based on the Graph Attention Network (Velickovic et al., 2017, GAT) architecture. This graph encoder generates node embeddings by attending over its neighbours through a self-attention strategy. Ribeiro et al. (2020) propose new models to encode an input graph with both global and local node contexts. To combine these two node representations together, they make a comparison between a cascaded architecture and a parallel architecture.", "cite_spans": [ { "start": 100, "end": 130, "text": "(Kipf and Welling, 2017, GCNs)", "ref_id": null }, { "start": 188, "end": 238, "text": "Convolutional Neural Networks (LeCun et al., 1998)", "ref_id": null }, { "start": 359, "end": 377, "text": "Beck et al. (2018)", "ref_id": "BIBREF0" }, { "start": 513, "end": 537, "text": "(Cho et al., 2014, GRUs)", "ref_id": null }, { "start": 540, "end": 571, "text": "Koncel-Kedziorski et al. (2019)", "ref_id": "BIBREF12" }, { "start": 633, "end": 655, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF28" }, { "start": 721, "end": 751, "text": "(Velickovic et al., 2017, GAT)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Within neural networks, a standard approach for generative models with latent variables is the Variational Autoencoder (VAE) (Kingma and Welling, 2014). The generative process is represented as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Variable Models", "sec_num": null }, { "text": "p \u03b8 (x, z) = p \u03b8 (x | z)p \u03b8 (z),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Variable Models", "sec_num": null }, { "text": "where p \u03b8 (z) is the prior from which the latent variable is drawn, p \u03b8 (x | z) is the likelihood of data point x conditioned on the latent variable z, typically calculated using a deep non-linear neural network, and \u03b8 denotes the model parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Variable Models", "sec_num": null }, { "text": "Bowman et al. 
(2016) proposed a pioneering variational autoencoder for text generation to explicitly learn the global features using a continuous latent variable. They adapt the VAE to text data using an LSTM (Hochreiter and Schmidhuber, 1997) for both the encoder and the decoder, using a Gaussian prior to build a sequence autoencoder. This architecture can be extended to conditional tasks (when there is an input guiding the generation). Zhang et al. (2016) proposed an end-to-end variational model for Neural Machine Translation (NMT), using a continuous latent variable to capture the semantics in source sentences and guide the translation process. Schulz et al. (2018) proposed a more expressive word-level machine translation model incorporating a chain of latent variables, modelling lexical and syntactic variation in parallel corpora.", "cite_spans": [ { "start": 209, "end": 243, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF8" }, { "start": 442, "end": 461, "text": "Zhang et al. (2016)", "ref_id": "BIBREF34" }, { "start": 656, "end": 676, "text": "Schulz et al. (2018)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Variable Models", "sec_num": null }, { "text": "Variational latent variable models are commonly employed when there is a need for generating diverse outputs. This is achieved by sampling from the latent variable every time a new output is required. One can also use a standard deterministic model and sample from the decoder distributions instead but this tends to decrease the quality of the generated outputs. Here we review a few common techniques to address this issue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diversity in Neural Networks and Generation", "sec_num": null }, { "text": "Dropout (Srivastava et al., 2014 ) is a regularisation method used to prevent overfitting in neural networks. At training time, it masks random parameters in the network at every iteration. Dropout can also be employed in the testing phase, during generation. This idea was first proposed by Gal and Ghahramani (2016) and it is also called Monte Carlo (MC) dropout. Because MC dropout disables neurons randomly, the network will have different outputs every generation, which can make a deterministic model generate different outputs.", "cite_spans": [ { "start": 8, "end": 32, "text": "(Srivastava et al., 2014", "ref_id": "BIBREF26" }, { "start": 292, "end": 317, "text": "Gal and Ghahramani (2016)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Diversity in Neural Networks and Generation", "sec_num": null }, { "text": "Another technique to generate diverse outputs is ensemble learning. Typically, they are employed to prevent overfitting but they can also be used to generate diverse outputs. The idea is for each individual model in the ensemble to generate its own output. This approach can be very useful as each model tends to provide different optimal solutions in the network parameter space. This property has shown to benefit uncertainty estimation in deep learning (Lakshminarayanan et al., 2017) . 
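A minimal PyTorch-style sketch of these two ways of drawing multiple outputs from a trained deterministic model follows; `decode` stands in for a beam-search or sampling routine and, like the other names here, is a hypothetical placeholder rather than the paper's code.

```python
import torch

def mc_dropout_generate(model, graph, num_samples=3):
    """MC dropout: keep dropout layers active at decoding time so each pass differs."""
    model.train()  # unlike model.eval(), this leaves dropout stochastic
    with torch.no_grad():
        return [decode(model, graph) for _ in range(num_samples)]

def ensemble_generate(models, graph):
    """Ensemble: each independently trained member contributes its single best output."""
    return [decode(m, graph) for m in models]
```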
It can also be used both with deterministic and stochastic models, a property we exploit in our experiments.", "cite_spans": [ { "start": 456, "end": 487, "text": "(Lakshminarayanan et al., 2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Diversity in Neural Networks and Generation", "sec_num": null }, { "text": "In this section we introduce the proposed approach to generate diverse descriptions from semantic graphs. We start from the state-of-the-art model of Ribeiro et al. (2020), which is a deterministic graph-to-sequence architecture. Then we incorporate a latent variable and a variational training procedure to this model, in order to turn the model stochastic. This latent variable aims at capturing linguistic variations in the descriptions and is responsible for increasing the diversity at generation time. The architecture is shown in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 537, "end": 545, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Stochastic Graph-to-Sequence Model", "sec_num": "3" }, { "text": "The encoder is similar to Ribeiro et al. (2020), consisting of a global and a local subencoder. The Source: x = Figure 1 : Proposed stochastic graph-to-sequence model architecture.", "cite_spans": [], "ref_spans": [ { "start": 112, "end": 120, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Graph Encoder and Text Decoder", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u221a Node Embeddings Global Encoder Local Encoder N x q \" (z|x) p $ (z) Word Embeddings Prediction: ' , ) , \u2026 , + Transformer Decoder Target: ' , ) , \u2026 , ,", "eq_num": "Loss 01 h" } ], "section": "Graph Encoder and Text Decoder", "sec_num": "3.1" }, { "text": "global encoder considers a wide range of contexts but it ignores the graph topology by considering each node as if it were connected to all the other nodes in the graph. The local encoder learns the hidden representation of each node on the basis of its neighbour nodes, which exploits the graph structure effectively. Combining both global and local node aggregations, this encoder can learn better contextualised node embeddings. The global encoding strategy is mainly based on the Transformer architecture (Vaswani et al., 2017) , using a selfattention mechanism to calculate node representations of all nodes in the graph. The local encoding strategy adopts a modified version of Graph Attention Network (Velickovic et al., 2017) by adding relational weights to calculate the local node representations.", "cite_spans": [ { "start": 509, "end": 531, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF28" }, { "start": 708, "end": 733, "text": "(Velickovic et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Graph Encoder and Text Decoder", "sec_num": "3.1" }, { "text": "The decoder is also based on a transformer architecture. In our model, the input of the decoder is the contextualised node embeddings h x concatenated with the hidden state of the latent variable h z , which can be represented as [h x ; h z ]. Following Ribeiro et al. 
2020, we also use beam search with length penalty (Wu et al., 2016) to encourage the model to generate longer sentences.", "cite_spans": [ { "start": 319, "end": 336, "text": "(Wu et al., 2016)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Graph Encoder and Text Decoder", "sec_num": "3.1" }, { "text": "Here is where we introduce a latent Gaussian variable z, which together with the input graph x, guides the generation process. With this, the condi-tional probability of sentence y given x is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Model", "sec_num": "3.2" }, { "text": "p(y|x) = z p(y|z, x)p(z|x)dz.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Model", "sec_num": "3.2" }, { "text": "The posterior inference in this model is intractable. Following previous work (Bowman et al., 2016; Kingma and Welling, 2014), we employ neural networks to fit the posterior distribution, to make the inference tractable. We regard the posterior as a diagonal Gaussian N \u00b5, diag \u03c3 2 . The mean \u00b5 and variance \u03c3 2 are parameterised with feed-forward neural networks (FFNNs), using the reparametrisation trick (Bowman et al., 2016; Kingma and Welling, 2014) of the Gaussian variables. It reparameterises the latent variable z as a function of mean \u00b5 and variance \u03c3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Model", "sec_num": "3.2" }, { "text": "z = \u00b5 + \u03c3 \u223c N (0, I),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Model", "sec_num": "3.2" }, { "text": "where is a standard Gaussian variable which plays the role of introducing noises, and denotes element-wise multiplication. The reparametrisation trick enables back-propagation in optimisation process with Stochastic Gradient Descent (SGD). Then we transform the latent variable z into its hidden state h z through another FFNN. The training objective encourages the model to keep its posterior distributions q(z | x) close to a prior p(z) that is a standard Gaussian N (\u00b5 = 0, \u03c3 = 1). The loss function of the stochastic conditional model can be defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Model", "sec_num": "3.2" }, { "text": "L(\u03c6, \u03b8; x, y) = \u2212E z\u223cq \u03c6 (z|x) [log p \u03b8 (y | z, x)] + KL (q \u03c6 (z | x) p(z)) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Model", "sec_num": "3.2" }, { "text": "The first term is the expected negative loglikelihood of data which is called reconstruction loss or cross-entropy loss. It forces the model to learn to reconstruct the data. The second term is the KL divergence which acts as a regulariser. By minimising the KL term, we want to make the approximate posterior stay close to the prior. We use SGD to optimise the loss function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference Model", "sec_num": "3.2" }, { "text": "As shown above, the stochastic model objective comprises two terms reconstruction and KL regularisation. The KL divergence term will be nonzero and the cross-entropy term will be relatively small if the model encodes task-relevant information in the latent variable z. A difficulty of training is that the KL term tends to zero, causing the model to ignore z. This makes the model deterministic. This phenomenon is also known as the KL collapse or KL vanishing problem (Lucas et al., 2019) . 
We adopt the KL Threshold method (Pagnoni et al., 2018) to alleviate this issue. In this approach, we introduce a threshold \u03b6 into the loss function to control the KL term. A large KL term means the latent variable learns much information. By setting a threshold, we can force the model to take at least a fixed KL regularisation cost. In our experiments, we set the threshold \u03b6 as 10. The new loss function can be represented as", "cite_spans": [ { "start": 469, "end": 489, "text": "(Lucas et al., 2019)", "ref_id": "BIBREF18" }, { "start": 525, "end": 547, "text": "(Pagnoni et al., 2018)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Optimisation", "sec_num": "3.3" }, { "text": "L(\u03c6, \u03b8; x, y) = \u2212E z\u223cq \u03c6 (z|x) [log p \u03b8 (y | z, x)] + max (KL (q \u03c6 (z | x) p(z)) , \u03b6) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimisation", "sec_num": "3.3" }, { "text": "Addressing diversity in language generation is a recent topic that attracted attention in particular in image captioning. This led to the development of metrics that aim at measuring the diversity of a set of sentences, such as Self-BLEU (Zhu et al., 2018) . However, these metrics are based only on the generated output space, ignoring the references in the gold standard. This lead to spurious measurements, such as unconditional language models having excellent performance according to these metrics, even though they have no practical use as they ignore the input.", "cite_spans": [ { "start": 238, "end": 256, "text": "(Zhu et al., 2018)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Joint Evaluation of Diversity and Quality", "sec_num": "4" }, { "text": "To address these caveats, we propose a new evaluation procedure that assesses diversity and quality jointly. Our key insight (and assumption) is based on using the reference set as a gold standard for both aspects. Given a graph, the set of references acts as the \"end goal\", containing high-quality descriptions with sufficient levels of diversity. 1 We call this procedure Multi-Score (MS).", "cite_spans": [ { "start": 350, "end": 351, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Joint Evaluation of Diversity and Quality", "sec_num": "4" }, { "text": "The idea behind Multi-Score is shown pictorially in Figure 2 . In this example, we have a single instance with three references and three predicted descriptions generated by a model. Given a sentencelevel quality metric we can calculate it among all possible pairs between each prediction and reference, obtaining a weighted bipartite graph. We then solve the respective maximum matching problem for this bipartite graph and take the average weight of the edges corresponding to the optimal matching. 
We show the full procedure to calculate Multi-Score in Algorithm 1.", "cite_spans": [], "ref_spans": [ { "start": 52, "end": 60, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Joint Evaluation of Diversity and Quality", "sec_num": "4" }, { "text": "Algorithm 1 Multi-Score procedure function MULTI-SCORE(o: outputs, r: refer- ences, M: sentence-level metric) G \u2190 0 initialise graph for i \u2190 0 to len(o) do fill graph for j \u2190 0 to len(r) do G(i, j) \u2190 M(o[i], r[j]) match \u2190 MAXMATCH(G) stores edges score \u2190 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Evaluation of Diversity and Quality", "sec_num": "4" }, { "text": "for edge \u2208 match do score \u2190 score + edge.weight return score / len(match) returns average weight For the example in Figure 2 , the optimal matching (shown in red) matches prediction 1 with output 2, prediction 2 with output 3 and prediction 3 with output 1. From this, the resulting Multi-Score is: (56 + 50 + 58)/3 = 54.67. The matching problem MAXMATCH can be solved using the Hungarian Algorithm (Kuhn, 2010) in O(n 3 ) time, where n is the number of nodes in the bipartite graph. This makes the procedure efficient for reference set sizes found in standard datasets.", "cite_spans": [], "ref_spans": [ { "start": 116, "end": 124, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Joint Evaluation of Diversity and Quality", "sec_num": "4" }, { "text": "As a metric, Multi-Score has a number of desirable properties:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Evaluation of Diversity and Quality", "sec_num": "4" }, { "text": "\u2022 As long as the sentence-level metric has an upper bound (which is the case of most standard automatic evaluation metrics), if the set of predictions is exactly equal to the references, then MS will give the maximum score. \u2022 If the outputs are diverse but unrelated to the references (as in an unconditional LM), MS will penalise the output because the underlying quality values will be low.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Evaluation of Diversity and Quality", "sec_num": "4" }, { "text": "\u2022 If the outputs are high-quality but not diverse (typical of an n-best list in a deterministic model), MS will penalise the output due to the assignment constraint. One of the outputs will have a high-quality value but the others will have a low-quality value because they will be forced to match other references.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Evaluation of Diversity and Quality", "sec_num": "4" }, { "text": "\u2022 Finally, MS can be used with any sentencelevel quality metric, making it easily adaptable to any developments in better quality metrics, as well as other generation tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Evaluation of Diversity and Quality", "sec_num": "4" }, { "text": "5 Experimental Settings", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Evaluation of Diversity and Quality", "sec_num": "4" }, { "text": "We evaluate the models using datasets from the WebNLG shared tasks (Gardent et al., 2017a,b) . The data is composed of data-text pairs where the data is a set of RDF triples extracted from DBpedia and the text is the verbalisation of these triples. For each graph, there may be multiple descriptions. 
In our experiments, we assume a reference set of size 3 for each input, as most graphs in both datasets have three reference descriptions. Russian WebNLG 2020 The Russian dataset comprises 16571 training, 790 development and 1102 test data-text pairs. This dataset has 9 distinct categories (Airport, Astronaut, Building, Ce-lestialBody, ComicsCharacter, Food, Monument, SportsTeam, and University).", "cite_spans": [ { "start": 67, "end": 92, "text": "(Gardent et al., 2017a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "Levi Graph Transformation To decrease the number of parameters and avoid parameter explosion, we follow previous work and use a Levi Graph Transformation (Ribeiro et al., 2020; Beck et al., 2018) . This transformation creates new relation nodes from relational edges between entities, which explicitly represents the relations between an original node and its neighbour edges.", "cite_spans": [ { "start": 154, "end": 176, "text": "(Ribeiro et al., 2020;", "ref_id": "BIBREF24" }, { "start": 177, "end": 195, "text": "Beck et al., 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "5.2" }, { "text": "Byte Pair Encoding Following previous work (Ribeiro et al., 2020), we employ Byte Pair Encoding (BPE) to split entity words into frequent characters or character sequences which are subword units. After the BPE operations, some nodes in the graph are split to subwords. Likewise, we also split the target descriptions using BPE.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "5.2" }, { "text": "All models are able to generate sets of descriptions: we generate three sentences per graph as this matches the number of available references. For the proposed stochastic models, we generate each sentence by sampling a new value for the latent variable. For the deterministic models, we use different decoding strategies to generate these sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5.3" }, { "text": "Top-3 Beam Search Beam Search is the standard algorithm to obtain a sentence from deterministic models, by selecting the output with (approximate) highest probability. In Top-3 Beam Search, we choose the top-3 generated sentences from the final candidate list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5.3" }, { "text": "Total Random Sampling Random Sampling (Ippolito et al., 2019) generates a sentence from left to right sampling the next token from all possible candidates until the end-of-sequence symbol is generated. Because each token is sampled from the distribution over next tokens given the previous ones, this method generates different outputs each time it generates a new description.", "cite_spans": [ { "start": 38, "end": 61, "text": "(Ippolito et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5.3" }, { "text": "Top-3 Random Sampling In this approach, we still use Random Sampling but modify it slightly while generating the next token. 
Instead of sampling the next token from all possible candidates, the model samples the next token from the top-3 most likely candidates (Ippolito et al., 2019) .", "cite_spans": [ { "start": 261, "end": 284, "text": "(Ippolito et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "5.3" }, { "text": "We employ MC dropout to the deterministic model and keep the dropout rate in the testing phase and training phase the same. It disables neurons randomly at decoding time, resulting in different outputs at each generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MC Dropout", "sec_num": null }, { "text": "Ensemble Finally, we create an ensemble of three independently-trained deterministic models, whereby we select the most likely sentence from each model using Beam Search. These sentences then form the output set from the ensemble. Since this is a general strategy, we also apply it to the stochastic model as another point of comparison in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MC Dropout", "sec_num": null }, { "text": "We assess each model on the test set of English and Russian datasets respectively and report the quality and diversity results. The quality evaluation scores (BLEU: Papineni et al. (2002) , CHRF++: Popovic (2017)) are calculated based on the average score of the three outputs. We report the original BLEU and CHRF++ score to show the quality of the generated sentences from each model. The diversity evaluation scores (Self-BLEU, Multi-Score) are computed using the three outputs. As we describe in Section 4, our proposed diversity evaluation metrics require a sentence-level quality evaluation metric to compute the score of two sentences. We adopt sentence-level BLEU and CHRF++ and refer to their corresponding Multi-Score versions as MS-BLEU and MS-CHRF. Table 1 shows the quality results on both English and Russian datasets. As expected, the two random sampling methods do not show good quality performance. For English data, our stochastic models perform on par with previous work and have comparable quality with deterministic models. The trends for English and Russian data are similar but Russian has lower scores in general.", "cite_spans": [ { "start": 165, "end": 187, "text": "Papineni et al. (2002)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 761, "end": 768, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "The diversity scores of these two datasets are shown in Table 2 . Total random sampling has the lowest Self-BLEU on two datasets, as expected, but it also has the worst quality. On the other hand, with our new metrics, the stochastic ensemble model gives the best results on both English and Russian datasets, showing high diversity without compromising quality.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 63, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "To further assess the quality of the generated sentences from each model, we perform a manual error analysis in a subset of the English test data. We randomly selected five input graphs, generating 15 sentences for each model (as we generate 3 sentences for each graph). Given we analysed five models, this gives a total of 75 sentences for our analysis. 
We observed three common mistakes from the outputs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.1" }, { "text": "\u2022 Syntax/Spelling Mistake: There are grammar mistakes or spelling mistakes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.1" }, { "text": "\u2022 Lack of Information: The information in the graph is not fully realised in the description.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.1" }, { "text": "\u2022 Information Redundancy: Some information in the sentence is repeated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.1" }, { "text": "We calculate the rates of each model making different types of mistakes and report the results in Table 3 . The results show that total random sampling makes the most mistakes among all models and most of them are syntax or spelling mistakes. Top-3 random sampling and MC dropout make the same percentage of total mistakes. The former makes almost half of the total information redundancy mistakes while the latter makes the most lack of information mistakes. Top-3 beam search makes fewer mistakes than the other three models and it does not make information redundancy mistakes in our evaluated test cases.", "cite_spans": [], "ref_spans": [ { "start": 98, "end": 105, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.1" }, { "text": "As for ensemble-based models, both deterministic and stochastic ensembles make the fewest total mistakes among all models. This is in line with the results obtained from automatic quality metrics. In particular, the deterministic ensemble does not make any syntax or spelling mistakes in the evaluated test cases. The stochastic ensemble also shows good performance with regard to the quality of the generated sentences, which has a low error rate for all types of mistakes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.1" }, { "text": "In general, the diverse outputs generated by our proposed model tend to have comparable quality to the outputs from the best baseline model. However, ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6.1" }, { "text": "Melbourne (Gardent et al., 2017b) 54.52 70.72 --Adapt (Gardent et al., 2017b) 60.59 76.01 --CGE-LW (Ribeiro et al., 2020) 63.69 76.66 -- lack of information still remains a challenge for some instances in this setting. Addressing this problem is an avenue that we leave for future work. Table 4 shows an instance of a semantic graph from which we collect three outputs from a deterministic model (MC dropout) and a stochastic model (Ensemble). The outputs from MC dropout contain three types of mistakes and have low diversity. 
In contrast, the outputs of the stochastic model contain no mistakes, and the boldface highlights their syntactic variation.", "cite_spans": [ { "start": 10, "end": 33, "text": "(Gardent et al., 2017b)", "ref_id": "BIBREF6" }, { "start": 54, "end": 77, "text": "(Gardent et al., 2017b)", "ref_id": "BIBREF6" }, { "start": 99, "end": 121, "text": "(Ribeiro et al., 2020)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 287, "end": 294, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Previous Work", "sec_num": null }, { "text": "In this work, we first propose stochastic graph-to-text models to generate diverse sentences from semantic graphs. This was implemented through latent variable models that aim to capture linguistic variation, combined with ensembling techniques. Furthermore, to address the limitations of existing diversity evaluation metrics, we also propose Multi-Score, a new automatic evaluation metric assessing diversity and quality jointly. It provides a general and effective way to assess the diversity of generated sentences for any text generation task. We perform experiments on English and Russian datasets, and the results demonstrate that the sentences generated by the stochastic ensemble have both high diversity and high quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "Since Multi-Score is based on using the reference set as the gold standard, it has the limitation that the variety of the reference sentences can largely influence the metric. Datasets containing reference sentences with higher quality and diversity will likely yield a more accurate Multi-Score for the predicted sentences. In other words, Multi-Score evaluates diversity implicitly through the references, as opposed to explicit judgements of diversity. However, explicit human evaluation requires a formal definition of diversity, which is difficult to establish (as compared to quality judgements, for instance). Nevertheless, addressing this challenge could provide a pathway to reduce the need for multiple references in evaluating diversity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "To the best of our knowledge, this is the first work that incorporates a latent variable within a graph-to-sequence model. This in turn leads to many promising research avenues to explore in future work. Our analysis showed that the latent variable mostly helps in syntactic variation but less in other aspects such as semantics. 
Analysing the behaviour of the latent variable when modelling linguistic information is an important avenue that will enhance the understanding of stochastic models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "We discuss limitations of this assumption in Section 7.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Graph-to-sequence learning using gated graph neural networks", "authors": [ { "first": "Daniel", "middle": [], "last": "Beck", "suffix": "" }, { "first": "Gholamreza", "middle": [], "last": "Haffari", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018", "volume": "1", "issue": "", "pages": "273--283", "other_ids": { "DOI": [ "10.18653/v1/P18-1026" ] }, "num": null, "urls": [], "raw_text": "Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computa- tional Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 273-283. Association for Computational Linguis- tics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Generating sentences from a continuous space", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Andrew", "middle": [ "M" ], "last": "Vinyals", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Samy", "middle": [], "last": "J\u00f3zefowicz", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "10--21", "other_ids": { "DOI": [ "10.18653/v1/k16-1002" ] }, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, An- drew M. Dai, Rafal J\u00f3zefowicz, and Samy Ben- gio. 2016. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Confer- ence on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 10-21. 
ACL.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merrienboer", "suffix": "" }, { "first": "Dzmitry", "middle": [], "last": "Aglar G\u00fcl\u00e7ehre", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1724--1734", "other_ids": { "DOI": [ "10.3115/v1/d14-1179" ] }, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merrienboer, \u00c7 aglar G\u00fcl\u00e7ehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1724-1734. ACL.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural datato-text generation: A comparison between pipeline and end-to-end architectures", "authors": [ { "first": "Chris", "middle": [], "last": "Thiago Castro Ferreira", "suffix": "" }, { "first": "", "middle": [], "last": "Van Der Lee", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Emiel Van Miltenburg", "suffix": "" }, { "first": "", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "552--562", "other_ids": { "DOI": [ "10.18653/v1/D19-1052" ] }, "num": null, "urls": [], "raw_text": "Thiago Castro Ferreira, Chris van der Lee, Emiel van Miltenburg, and Emiel Krahmer. 2019. Neural data- to-text generation: A comparison between pipeline and end-to-end architectures. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, Novem- ber 3-7, 2019, pages 552-562. Association for Com- putational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "authors": [ { "first": "Yarin", "middle": [], "last": "Gal", "suffix": "" }, { "first": "Zoubin", "middle": [], "last": "Ghahramani", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 33nd International Conference on Machine Learning, ICML 2016", "volume": "48", "issue": "", "pages": "1050--1059", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model un- certainty in deep learning. In Proceedings of the 33nd International Conference on Machine Learn- ing, ICML 2016, New York City, NY, USA, June 19- 24, 2016, volume 48 of JMLR Workshop and Con- ference Proceedings, pages 1050-1059. 
JMLR.org.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Creating training corpora for NLG micro-planners", "authors": [ { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Shimorina", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "179--188", "other_ids": { "DOI": [ "10.18653/v1/P17-1017" ] }, "num": null, "urls": [], "raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017a. Creating train- ing corpora for NLG micro-planners. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancou- ver, Canada, July 30 -August 4, Volume 1: Long Pa- pers, pages 179-188. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The webnlg challenge: Generating text from RDF data", "authors": [ { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "Anastasia", "middle": [], "last": "Shimorina", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 10th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "124--133", "other_ids": { "DOI": [ "10.18653/v1/w17-3518" ] }, "num": null, "urls": [], "raw_text": "Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017b. The webnlg challenge: Generating text from RDF data. In Pro- ceedings of the 10th International Conference on Natural Language Generation, INLG 2017, Santi- ago de Compostela, Spain, September 4-7, 2017, pages 124-133. Association for Computational Lin- guistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "of Foundations of Artificial Intelligence", "authors": [ { "first": "Vladimir", "middle": [], "last": "Frank Van Harmelen", "suffix": "" }, { "first": "Bruce", "middle": [ "W" ], "last": "Lifschitz", "suffix": "" }, { "first": "", "middle": [], "last": "Porter", "suffix": "" } ], "year": 2008, "venue": "", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frank van Harmelen, Vladimir Lifschitz, and Bruce W. Porter, editors. 2008. Handbook of Knowledge Rep- resentation, volume 3 of Foundations of Artificial In- telligence. Elsevier.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Comput", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": { "DOI": [ "10.1162/neco.1997.9.8.1735" ] }, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. 
Neural Comput., 9(8):1735- 1780.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Comparison of diverse decoding methods from conditional language models", "authors": [ { "first": "Daphne", "middle": [], "last": "Ippolito", "suffix": "" }, { "first": "Reno", "middle": [], "last": "Kriz", "suffix": "" }, { "first": "Jo\u00e3o", "middle": [], "last": "Sedoc", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Kustikova", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019", "volume": "1", "issue": "", "pages": "3752--3762", "other_ids": { "DOI": [ "10.18653/v1/p19-1365" ] }, "num": null, "urls": [], "raw_text": "Daphne Ippolito, Reno Kriz, Jo\u00e3o Sedoc, Maria Kustikova, and Chris Callison-Burch. 2019. Com- parison of diverse decoding methods from condi- tional language models. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-Au- gust 2, 2019, Volume 1: Long Papers, pages 3752- 3762. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Autoencoding variational bayes", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Max", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2014, "venue": "2nd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Max Welling. 2014. Auto- encoding variational bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Con- ference Track Proceedings.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Semisupervised classification with graph convolutional networks", "authors": [ { "first": "N", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Max", "middle": [], "last": "Kipf", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas N. Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. In 5th International Conference on Learn- ing Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. 
OpenReview.net.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Text generation from knowledge graphs with graph transformers", "authors": [ { "first": "Rik", "middle": [], "last": "Koncel-Kedziorski", "suffix": "" }, { "first": "Dhanush", "middle": [], "last": "Bekal", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", "volume": "1", "issue": "", "pages": "2284--2293", "other_ids": { "DOI": [ "10.18653/v1/n19-1238" ] }, "num": null, "urls": [], "raw_text": "Rik Koncel-Kedziorski, Dhanush Bekal, Yi Luan, Mirella Lapata, and Hannaneh Hajishirzi. 2019. Text generation from knowledge graphs with graph transformers. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, NAACL-HLT 2019, Minneapo- lis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2284-2293. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Neural AMR: sequence-to-sequence models for parsing and generation", "authors": [ { "first": "Ioannis", "middle": [], "last": "Konstas", "suffix": "" }, { "first": "Srinivasan", "middle": [], "last": "Iyer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "146--157", "other_ids": { "DOI": [ "10.18653/v1/P17-1014" ] }, "num": null, "urls": [], "raw_text": "Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: sequence-to-sequence models for parsing and gener- ation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 -August 4, Vol- ume 1: Long Papers, pages 146-157. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The hungarian method for the assignment problem", "authors": [ { "first": "W", "middle": [], "last": "Harold", "suffix": "" }, { "first": "", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "M", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Liebling", "suffix": "" }, { "first": "George", "middle": [ "L" ], "last": "Naddef", "suffix": "" }, { "first": "William", "middle": [ "R" ], "last": "Nemhauser", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Pulleyblank", "suffix": "" }, { "first": "Giovanni", "middle": [], "last": "Reinelt", "suffix": "" }, { "first": "Laurence", "middle": [ "A" ], "last": "Rinaldi", "suffix": "" }, { "first": "", "middle": [], "last": "Wolsey", "suffix": "" } ], "year": 2010, "venue": "50 Years of Integer Programming 1958-2008 -From the Early Years to the State-of-the-Art", "volume": "", "issue": "", "pages": "29--47", "other_ids": { "DOI": [ "10.1007/978-3-540-68279-0_2" ] }, "num": null, "urls": [], "raw_text": "Harold W. Kuhn. 2010. 
The hungarian method for the assignment problem. In Michael J\u00fcnger, Thomas M. Liebling, Denis Naddef, George L. Nemhauser, William R. Pulleyblank, Gerhard Reinelt, Giovanni Rinaldi, and Laurence A. Wolsey, editors, 50 Years of Integer Programming 1958-2008 -From the Early Years to the State-of-the-Art, pages 29-47. Springer.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "authors": [ { "first": "Balaji", "middle": [], "last": "Lakshminarayanan", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Pritzel", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Blundell", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "6402--6413", "other_ids": {}, "num": null, "urls": [], "raw_text": "Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. 2017. Simple and scalable predic- tive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Pro- cessing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 6402-6413.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Gradient-based learning applied to document recognition", "authors": [ { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Haffner", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the IEEE", "volume": "86", "issue": "11", "pages": "2278--2324", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yann LeCun, L\u00e9on Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Gated graph sequence neural networks", "authors": [ { "first": "Yujia", "middle": [], "last": "Li", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Tarlow", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Brockschmidt", "suffix": "" }, { "first": "Richard", "middle": [ "S" ], "last": "Zemel", "suffix": "" } ], "year": 2016, "venue": "4th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. 2016. Gated graph sequence neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Pro- ceedings.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Understanding posterior collapse in generative latent variable models", "authors": [ { "first": "James", "middle": [], "last": "Lucas", "suffix": "" }, { "first": "George", "middle": [], "last": "Tucker", "suffix": "" }, { "first": "Roger", "middle": [ "B" ], "last": "Grosse", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Norouzi", "suffix": "" } ], "year": 2019, "venue": "Deep Generative Models for Highly Structured Data, ICLR 2019 Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James Lucas, George Tucker, Roger B. Grosse, and Mohammad Norouzi. 2019. 
Understanding posterior collapse in generative latent variable models. In Deep Generative Models for Highly Structured Data, ICLR 2019 Workshop, New Orleans, Louisiana, United States, May 6, 2019. OpenReview.net.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Deep graph convolutional encoders for structured data to text generation", "authors": [ { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "1--9", "other_ids": { "DOI": [ "10.18653/v1/w18-6501" ] }, "num": null, "urls": [], "raw_text": "Diego Marcheggiani and Laura Perez-Beltrachini. 2018. Deep graph convolutional encoders for structured data to text generation. In Proceedings of the 11th International Conference on Natural Language Generation, Tilburg University, The Netherlands, November 5-8, 2018, pages 1-9. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Abstractive text summarization using sequence-to-sequence rnns and beyond", "authors": [ { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "C\u00edcero", "middle": [], "last": "Nogueira", "suffix": "" }, { "first": "", "middle": [], "last": "Santos", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Aglar G\u00fcl\u00e7ehre", "suffix": "" }, { "first": "", "middle": [], "last": "Xiang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "280--290", "other_ids": { "DOI": [ "10.18653/v1/k16-1028" ] }, "num": null, "urls": [], "raw_text": "Ramesh Nallapati, Bowen Zhou, C\u00edcero Nogueira dos Santos, \u00c7aglar G\u00fcl\u00e7ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 280-290. ACL.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Conditional variational autoencoder for neural machine translation", "authors": [ { "first": "Artidoro", "middle": [], "last": "Pagnoni", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Shangyan", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Artidoro Pagnoni, Kevin Liu, and Shangyan Li. 2018. Conditional variational autoencoder for neural machine translation.
CoRR, abs/1812.04405.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311-318. ACL.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "chrf++: words helping character n-grams", "authors": [ { "first": "Maja", "middle": [], "last": "Popovic", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Second Conference on Machine Translation", "volume": "", "issue": "", "pages": "612--618", "other_ids": { "DOI": [ "10.18653/v1/w17-4770" ] }, "num": null, "urls": [], "raw_text": "Maja Popovic. 2017. chrf++: words helping character n-grams. In Proceedings of the Second Conference on Machine Translation, WMT 2017, Copenhagen, Denmark, September 7-8, 2017, pages 612-618. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Modeling global and local node contexts for text generation from knowledge graphs", "authors": [ { "first": "F", "middle": [ "R" ], "last": "Leonardo", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Ribeiro", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gardent", "suffix": "" }, { "first": "", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2020, "venue": "Trans. Assoc. Comput. Linguistics", "volume": "8", "issue": "", "pages": "589--604", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leonardo F. R. Ribeiro, Yue Zhang, Claire Gardent, and Iryna Gurevych. 2020. Modeling global and local node contexts for text generation from knowledge graphs. Trans. Assoc. Comput. Linguistics, 8:589-604.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A stochastic decoder for neural machine translation", "authors": [ { "first": "Philip", "middle": [], "last": "Schulz", "suffix": "" }, { "first": "Wilker", "middle": [], "last": "Aziz", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018", "volume": "1", "issue": "", "pages": "1243--1252", "other_ids": { "DOI": [ "10.18653/v1/P18-1115" ] }, "num": null, "urls": [], "raw_text": "Philip Schulz, Wilker Aziz, and Trevor Cohn. 2018. A stochastic decoder for neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1243-1252.
Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Dropout: a simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "J. Mach. Learn. Res", "volume": "15", "issue": "1", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929-1958.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "GTR-LSTM: A triple encoder for sentence generation from RDF data", "authors": [ { "first": "Jianzhong", "middle": [], "last": "Bayu Distiawan Trisedya", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018", "volume": "1", "issue": "", "pages": "1627--1637", "other_ids": { "DOI": [ "10.18653/v1/P18-1151" ] }, "num": null, "urls": [], "raw_text": "Bayu Distiawan Trisedya, Jianzhong Qi, Rui Zhang, and Wei Wang. 2018. GTR-LSTM: A triple encoder for sentence generation from RDF data. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1627-1637. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.
In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Grammar as a foreign language", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015", "volume": "", "issue": "", "pages": "2773--2781", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2773-2781.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Google's neural machine translation system: Bridging the gap between human and machine translation", "authors": [ { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Klingner", "suffix": "" }, { "first": "Apurva", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Xiaobing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Gouws", "suffix": "" }, { "first": "Yoshikiyo", "middle": [], "last": "Kato", "suffix": "" }, { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Hideto", "middle": [], "last": "Kazawa", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Stevens", "suffix": "" }, { "first": "George", "middle": [], "last": "Kurian", "suffix": "" }, { "first": "Nishant", "middle": [], "last": "Patil", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2016, "venue": "Oriol Vinyals", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V.
Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Graph2seq: Graph to sequence learning with attention-based neural networks", "authors": [ { "first": "Kun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Lingfei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, and Vadim Sheinin. 2018. Graph2seq: Graph to sequence learning with attention-based neural networks. CoRR, abs/1804.00823.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Neural generative question answering", "authors": [ { "first": "Jun", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Zhengdong", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Lifeng", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiaoming", "middle": [], "last": "Li", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016", "volume": "", "issue": "", "pages": "2972--2978", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, and Xiaoming Li. 2016. Neural generative question answering. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 2972-2978. IJCAI/AAAI Press.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Variational neural machine translation", "authors": [ { "first": "Biao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Deyi", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Jinsong", "middle": [], "last": "Su", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "521--530", "other_ids": { "DOI": [ "10.18653/v1/d16-1050" ] }, "num": null, "urls": [], "raw_text": "Biao Zhang, Deyi Xiong, Jinsong Su, Hong Duan, and Min Zhang. 2016. Variational neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 521-530.
The Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Bridging the structural gap between encoding and decoding for data-to-text generation", "authors": [ { "first": "Chao", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Marilyn", "middle": [ "A" ], "last": "Walker", "suffix": "" }, { "first": "Snigdha", "middle": [], "last": "Chaturvedi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "2020", "issue": "", "pages": "2481--2491", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.224" ] }, "num": null, "urls": [], "raw_text": "Chao Zhao, Marilyn A. Walker, and Snigdha Chaturvedi. 2020. Bridging the structural gap between encoding and decoding for data-to-text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 2481-2491. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Texygen: A benchmarking platform for text generation models", "authors": [ { "first": "Yaoming", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Sidi", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Jiaxian", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Weinan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2018, "venue": "The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR 2018", "volume": "", "issue": "", "pages": "1097--1100", "other_ids": { "DOI": [ "10.1145/3209978.3210080" ] }, "num": null, "urls": [], "raw_text": "Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR 2018, Ann Arbor, MI, USA, July 08-12, 2018, pages 1097-1100. ACM.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "An example of calculating Multi-Score. The three \"Pred\" nodes on the left side represent three predicted descriptions while the three \"Ref\" nodes on the right side represent three references. The weight of each edge corresponds to the sentence-level quality score of this prediction-reference pair. The highlighted scores are the ones corresponding to the maximal matching, which are then used to calculate the MS metric. Other scores are ignored.", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "killed in the Battle of Baku dedicatedTo DM (MC dropout) 1: The Baku Turkish Martyrs' Memorial, which is dedicated to the Ottoman Army soldiers killed in the battle of Baku, is found in Azerbaijan. The capital of Azerbaijan is Baku and the leader is Artur Rasizade. (missing: legislature information) DM (MC dropout) 2: The Baku Turkish Martyrs' Memorial, which is dedicated to the Ottoman Army soldiers killed in the battle of Baku, is dedicated to the Ottoman Army soldiers killed in the country is led by Artur Rasizade. (missing: legislature information) DM (MC dropout) 3: The Baku Turkish Martyrs' Memorial is dedicated to the Ottoman Army soldiers killed in the battle of Baku.
It is dedicated to the Ottoman Army soldiers killed in the battle of Baku, the leader of Azerbaijan is Artur Rasizade. (missing: legislature information) SM (Ensemble) 1: The Baku Turkish Martyrs' Memorial is dedicated to the Ottoman Army soldiers killed in the battle of Baku. It is located in Azerbaijan whose capital is Baku and its leader is Artur Rasizade. The legislature is the National Assembly. SM (Ensemble) 2: Baku is the capital of Azerbaijan where the legislature is the National Assembly and the leader is Artur Rasizade. The country is the location of the Baku Turkish Martyrs Memorial which is dedicated to the Ottoman Army soldiers killed in the battle of Baku. SM (Ensemble) 3: The Baku Turkish Martyrs' Memorial is dedicated to the Ottoman Army soldiers killed in the battle of Baku. It is located in Azerbaijan whose capital is Baku and its leader is Artur Rasizade, and its legislature is the National Assembly.", "type_str": "figure", "uris": null, "num": null }, "TABREF3": { "text": "Quality evaluation results on the test sets of both English and Russian datasets. Note that models for which no decoding strategy is stated use beam search. For reference, we also report results from previous work on the English dataset. Boldface shows the best result for a column, and arrows indicate the direction of improvement, i.e., \u2191: higher is better.", "html": null, "type_str": "table", "content": "
Models | English Self-B\u2193 | English MS-B\u2191 | English MS-C\u2191 | Russian Self-B\u2193 | Russian MS-B\u2191 | Russian MS-C\u2191
Deterministic Models
Top-3 beam search | 86.72 | 46.65 | 71.45 | 76.50 | 38.23 | 61.58
Total random sampling | 56.48 | 40.47 | 67.00 | 52.30 | 31.37 | 56.30
Top-3 random sampling | 64.66 | 45.15 | 70.40 | 60.31 | 35.61 | 59.95
MC dropout | 68.70 | 46.90 | 70.87 | 61.59 | 36.14 | 59.37
Ensemble | 81.31 | 47.32 | 71.52 | 75.70 | 38.50 | 61.71
Stochastic Models
Single model | 97.30 | 43.25 | 69.45 | 97.62 | 33.53 | 58.40
Ensemble | 77.85 | 47.61 | 71.95 | 73.50 | 38.86 | 61.95
", "num": null }, "TABREF4": { "text": "Diversity evaluation results on the test sets of both English and Russian datasets. Self-B refers to Self-BLEU, while MS-B and MS-C refer to the proposed Multi-Score metric using sentence-level BLEU and CHRF++, respectively, as the underlying quality metric. Note that models for which no decoding strategy is stated use beam search decoding.", "html": null, "type_str": "table", "content": "
Models | Syntax/Spelling Mistake | Lack of Information | Information Redundancy | Average
Deterministic Models
Total random sampling | 0.54 | 0.18 | 0.20 | 0.33
Top-3 random sampling | 0.18 | 0.14 | 0.49 | 0.22
MC dropout | 0.18 | 0.32 | 0.20 | 0.22
Top-3 beam search | 0.07 | 0.14 | 0.00 | 0.09
Ensemble | 0.00 | 0.09 | 0.03 | 0.06
Stochastic Models
Ensemble | 0.03 | 0.13 | 0.08 | 0.08
", "num": null }, "TABREF5": { "text": "Error analysis results, showing the rates of mistakes for each model.", "html": null, "type_str": "table", "content": "", "num": null }, "TABREF6": { "text": "A WebNLG input graph and the outputs from a Deterministic Model (MC dropout) and a Stochastic Model (Ensemble). Highlighted segments indicate mistakes: red, dotted underlines represent Syntax/Spelling mistakes, blue, solid underlines correspond to Lack of Information, and orange, dashed underlines represent Information Redundancy. Bold segments show examples of syntactic variations.", "html": null, "type_str": "table", "content": "
", "num": null } } } }