{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:26:59.495846Z" }, "title": "Towards Faithful Neural Table-to-Text Generation with Content-Matching Constraints", "authors": [ { "first": "Zhenyi", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "State University of New York at Buffalo, \u00a7 Tencent AI Lab", "location": { "settlement": "Bellevue", "region": "WA" } }, "email": "zhenyiwa@buffalo.edu" }, { "first": "Xiaoyang", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "State University of New York at Buffalo, \u00a7 Tencent AI Lab", "location": { "settlement": "Bellevue", "region": "WA" } }, "email": "" }, { "first": "Dong", "middle": [], "last": "Yu", "suffix": "", "affiliation": { "laboratory": "", "institution": "State University of New York at Buffalo, \u00a7 Tencent AI Lab", "location": { "settlement": "Bellevue", "region": "WA" } }, "email": "" }, { "first": "Changyou", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "State University of New York at Buffalo, \u00a7 Tencent AI Lab", "location": { "settlement": "Bellevue", "region": "WA" } }, "email": "changyou@buffalo.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Text generation from a knowledge base aims to translate knowledge triples to naturallanguage descriptions. Most existing methods ignore the faithfulness between a generated text description and the original table, leading to generated information that goes beyond the content of the table. In this paper, for the first time, we propose a novel Transformerbased generation framework to achieve the goal. The core techniques in our method to enforce faithfulness include a new table-text optimal-transport matching loss and a tabletext embedding similarity loss based on the Transformer model. Furthermore, to evaluate faithfulness, we propose a new automatic metric specialized to the table-to-text generation problem. We also provide detailed analysis on each component of our model in our experiments. Automatic and human evaluations show that our framework can significantly outperform state-of-the-art by a large margin.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Text generation from a knowledge base aims to translate knowledge triples to naturallanguage descriptions. Most existing methods ignore the faithfulness between a generated text description and the original table, leading to generated information that goes beyond the content of the table. In this paper, for the first time, we propose a novel Transformerbased generation framework to achieve the goal. The core techniques in our method to enforce faithfulness include a new table-text optimal-transport matching loss and a tabletext embedding similarity loss based on the Transformer model. Furthermore, to evaluate faithfulness, we propose a new automatic metric specialized to the table-to-text generation problem. We also provide detailed analysis on each component of our model in our experiments. 
Automatic and human evaluations show that our framework can significantly outperform the state of the art by a large margin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Understanding structured knowledge, e.g., information encoded in tables, and automatically generating natural-language descriptions is an important task in the area of Natural Language Generation. Table-to-text generation helps make knowledge elements and their connections in tables easier for humans to comprehend. There have been a number of practical application scenarios in this field, for example, weather report generation, NBA news generation, biography generation and medical-record description generation (Liang et al., 2009; Barzilay and Lapata, 2005; Lebret et al., 2016a; Cawsey et al., 1997) .", "cite_spans": [ { "start": 517, "end": 537, "text": "(Liang et al., 2009;", "ref_id": "BIBREF18" }, { "start": 538, "end": 564, "text": "Barzilay and Lapata, 2005;", "ref_id": "BIBREF1" }, { "start": 565, "end": 586, "text": "Lebret et al., 2016a;", "ref_id": "BIBREF16" }, { "start": 587, "end": 607, "text": "Cawsey et al., 1997)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 197, "end": 203, "text": "Table-", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most existing methods for table-to-text generation are based on an encoder-decoder framework (Sutskever et al., 2014; Bahdanau et al., Figure 1 : An example of table-to-text generation. This generation is unfaithful because there is information in the table not covered by the generated text (marked in blue); at the same time, hallucinated information in the text does not appear in the table (marked in red). 2015), most of which are RNN-based Sequence-to-Sequence (Seq2Seq) models (Lebret et al., 2016b; Liu et al., 2018; Wiseman et al., 2018; Ma et al., 2019; Liu et al., 2019a) . Though significant progress has been achieved, we identify two key problems in existing methods. Firstly, because of the intrinsic limitations of RNNs, RNN-based models are unable to capture long-term dependencies and thus lose important information reflected in a table. This drawback prevents them from being applied to larger tables, for example, a table describing a large Knowledge Base . Secondly, little work has focused on generating faithful text descriptions, which is defined, in this paper, as the level of matching between a generated text sequence and the corresponding table content. An unfaithful generation example is illustrated in Figure 1 . The training objectives and evaluation metrics of existing methods encourage generated texts to be as similar as possible to reference texts. One problem with this is that the reference text often contains extra information that is not present in the table because human beings have external knowledge beyond the input table when writing the text, or it even misses some important information in the table (Dhingra et al., 2019) due to the noise from the dataset collection process. 
As a result, unconstrained training with such mismatched information usually leads to hallucinated words or phrases in generated texts, making them unfaithful to the table and thus harmful in practical uses.", "cite_spans": [ { "start": 93, "end": 117, "text": "(Sutskever et al., 2014;", "ref_id": "BIBREF30" }, { "start": 118, "end": 143, "text": "Bahdanau et al., Figure 1", "ref_id": null }, { "start": 471, "end": 493, "text": "(Lebret et al., 2016b;", "ref_id": "BIBREF17" }, { "start": 494, "end": 511, "text": "Liu et al., 2018;", "ref_id": "BIBREF22" }, { "start": 512, "end": 533, "text": "Wiseman et al., 2018;", "ref_id": "BIBREF36" }, { "start": 534, "end": 550, "text": "Ma et al., 2019;", "ref_id": "BIBREF24" }, { "start": 551, "end": 569, "text": "Liu et al., 2019a)", "ref_id": "BIBREF20" }, { "start": 1643, "end": 1665, "text": "(Dhingra et al., 2019)", "ref_id": null } ], "ref_spans": [ { "start": 1223, "end": 1231, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we aim to overcome the above problems to automatically generate faithful texts from tables. In other words, we aim to produce the text that a human without any external knowledge would write given the same table data as input. In contrast to existing RNN-based models, we leverage the powerful attention-based Transformer model to capture long-term dependencies and generate more informative paragraph-level texts. To generate descriptions faithful to tables, two content-matching constraints are proposed. The first one is a latent-representation-level matching constraint encouraging the latent semantics of the whole text to be consistent with that of the whole table. The second one is an explicit entity-level matching scheme, which utilizes Optimal-Transport (OT) techniques to constrain key words of a table and the corresponding text to be as identical as possible. To evaluate faithfulness, we also propose a new PARENT-T metric that evaluates the content matching between texts and tables, based on the recently proposed PARENT (Dhingra et al., 2019) metric. We train and evaluate our model on a large-scale knowledge base dataset . Automatic and human evaluations both show that our method achieves state-of-the-art performance and generates paragraph-level descriptions that are much more informative and faithful to input tables.", "cite_spans": [ { "start": 1051, "end": 1073, "text": "(Dhingra et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The task of text generation for a knowledge base is to take the structured table, T =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposed Method", "sec_num": "2" }, { "text": "{(t 1 , v 1 ), (t 2 , v 2 ), \u2022 \u2022 \u2022 , (t m , v m )},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposed Method", "sec_num": "2" }, { "text": "as input, and output a natural-language description consisting of a Figure 2 : The architecture of our proposed model for table-to-text generation. 
To enhance the ability of generating multi-sentence faithful texts, our loss consists of three parts, including a maximum-likelihood loss (green), a latent matching disagreement loss (orange), and an optimal-transport loss (blue).", "cite_spans": [], "ref_spans": [ { "start": 69, "end": 77, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "The Proposed Method", "sec_num": "2" }, { "text": "sequence of words y = {y 1 , y 2 , , y n } that is faithful to the input table. Here, t i denotes the slot type for the i th row, and v i denotes the slot value for the i th row in a table.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Proposed Method", "sec_num": "2" }, { "text": "Our model adopts the powerful Transformer model (Vaswani et al., 2017) to translate a table to a text sequence. Specifically, the Transformer is a Seq2Seq model, consisting of an encoder and a decoder. Our proposed encoder-to-decoder Transformer model learns to estimate the conditional probability of a text sequence from a source table input in an autoregressive way:", "cite_spans": [ { "start": 48, "end": 70, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "The Proposed Method", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (y|T ; \u03b8) = n i=1 P (y i |y \". As an example, the table in Figure 1 is converted into a sequence: {< Name ID >, Willie Burden, < date of birth > , July 21 1951, \u2022 \u2022 \u2022 }. We note that encoding a table in this way might lose some high-order structure information presented in the original knowledge graph. However, our knowledge graph is relatively simple. According to our preliminary studies, a naive combination of feature extracted with graph neural networks (Beck et al., 2018) does not seem helpful. As a result, we only rely on the sequence representation in this paper.", "cite_spans": [ { "start": 657, "end": 676, "text": "(Beck et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 255, "end": 263, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Table Representation", "sec_num": "2.1" }, { "text": "Our base objective comes from the standard Transformer model, which is defined as the negative log-likelihood loss L mle of a target sentence y given its input T , i.e.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Base Objective", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L mle = \u2212 log P (y|T ; \u03b8)", "eq_num": "(2)" } ], "section": "The Base Objective", "sec_num": "2.2" }, { "text": "with P (y|T ; \u03b8) defined in (1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Base Objective", "sec_num": "2.2" }, { "text": "One key element of our model is to enforce a generated text sequence to be consistent with (or faithful to) the table input. To achieve this, we propose to add some constraints so that a generated text sequence only contains information from the table.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with a Table-Text Disagreement Constraint Loss", "sec_num": "2.3" }, { "text": "Our first idea is inspired by related work in machine translation . 
Specifically, we propose to constrain a table embedding to be close to the corresponding target sentence embedding. Since the embedding of a text sequence (or the table) in our model is also represented as a sequence, we propose to match the mean embeddings of both sequences. In fact, the mean embedding has been proved to be an effective representation for the whole sequence in machine translation Wang et al., 2017) . ", "cite_spans": [ { "start": 469, "end": 487, "text": "Wang et al., 2017)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with a Table-Text Disagreement Constraint Loss", "sec_num": "2.3" }, { "text": "Let V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with a Table-Text Disagreement Constraint Loss", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L disagree = V table \u2212V text 2", "eq_num": "(3)" } ], "section": "Faithfulness Modeling with a Table-Text Disagreement Constraint Loss", "sec_num": "2.3" }, { "text": "Our second strategy is to explicitly match the key words in a table and the corresponding generated text. In our case, key words are defined as nouns, which can be easily extracted with existing tools such as NLTK (Loper and Bird, 2002) .", "cite_spans": [ { "start": 214, "end": 236, "text": "(Loper and Bird, 2002)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "To match key words, a mis-matching loss should be defined. Such a mis-matching loss could be non-differentiable, e.g., when the loss is defined as the number of matched entities. In order to still be able to learn by gradient descent, one can adopt the policy gradient algorithm to deal with the non-differentiability. However, policy gradient is known to exhibit high variance. To overcome this issue, we instead propose to perform optimization via optimal transport (OT), inspired by the recent techniques in (Chen et al., 2019a) .", "cite_spans": [ { "start": 511, "end": 531, "text": "(Chen et al., 2019a)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "Optimal-Transport Distance In the context of text generation, a generated text sequence, y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "= (y 1 , \u2022 \u2022 \u2022 , y n ), can be represented as a discrete dis- tribution \u00b5 = n i=1 u i \u03b4 y i (\u2022), where u i \u2265 0 and i u i = 1, \u03b4 x (\u2022) denotes a spike distribu- tion located at x. 
Given two discrete distributions \u00b5 and \u03bd, written as \u00b5 = \u2211_{i=1}^{n} u_i \u03b4_{x_i} and \u03bd = \u2211_{j=1}^{m} v_j \u03b4_{y_j},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "respectively, the OT distance between \u00b5 and \u03bd is defined as the solution of the following network-flow problem:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "L_OT = min_{U \u2208 \u03a0(\u00b5,\u03bd)} \u2211_{i=1}^{n} \u2211_{j=1}^{m} U_{ij} \u2022 d(x_i, y_j) , (4) where d(x, y)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "is the cost of moving x to y (matching x and y). In this paper, we use the cosine distance between the two word-embedding vectors of x and y, defined", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "as d(x, y) = 1 \u2212 (x\u22a4y) / (\u2016x\u2016_2 \u2016y\u2016_2). \u03a0(\u00b5, \u03bd)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "is the set of joint distributions whose two marginal distributions equal \u00b5 and \u03bd, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "Figure 3: Illustration of the OT loss, which is defined with the OT distance to only match key words in both the table and the generated sentence. Left: the generated sentence not only contains extra information not presented in the table (shown in orange), but also lacks some information presented in the table (shown in red). This is unfaithful generation. The OT loss is thus high. Right: all information in the table is covered in the generated sentence, and the generated sentence does not contain extra information not presented in the table. This is faithful generation. The OT cost is thus low. This example is borrowed and modified from (Dhingra et al., 2019) .", "cite_spans": [ { "start": 643, "end": 665, "text": "(Dhingra et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "Exact minimization over U in the above problem is in general computationally intractable (Genevay et al., 2018) . Therefore, we adopt the recently proposed Inexact Proximal point method for Optimal Transport (IPOT) (Xie et al., 2018) as an approximation. The details of the IPOT algorithm are shown in Appendix C.", "cite_spans": [ { "start": 87, "end": 109, "text": "(Genevay et al., 2018)", "ref_id": "BIBREF10" }, { "start": 213, "end": 231, "text": "(Xie et al., 2018)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "Constrained Content Matching via OT To apply the OT distance to our setting, we need to first specify the atoms in the discrete distributions. 
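Before specifying those atoms, a minimal, self-contained sketch of the cosine cost matrix in (4) together with the IPOT approximation of (Xie et al., 2018) may be helpful. This is a hypothetical PyTorch-style illustration written for this description, not the authors' released implementation; the step size, iteration counts, and gradient handling below are assumptions.

import torch
import torch.nn.functional as F

def ipot_ot_loss(x, y, beta=0.5, n_iter=50, k_inner=1):
    # x: (n, d) embeddings of table key words; y: (m, d) embeddings of text key words.
    # Cosine cost d(x_i, y_j) = 1 - cos(x_i, y_j), as in Eq. (4).
    C = 1.0 - F.normalize(x, dim=-1) @ F.normalize(y, dim=-1).t()   # (n, m)
    n, m = C.shape
    sigma = torch.full((m, 1), 1.0 / m, device=C.device)
    T = torch.ones_like(C) / (n * m)        # initial transport plan
    A = torch.exp(-C / beta)                # proximal kernel of IPOT
    for _ in range(n_iter):
        Q = A * T                           # element-wise product
        for _ in range(k_inner):            # Sinkhorn-style inner updates
            delta = 1.0 / (n * (Q @ sigma))
            sigma = 1.0 / (m * (Q.t() @ delta))
        T = delta * Q * sigma.t()           # diag(delta) Q diag(sigma)
    # A common practical choice (e.g., Chen et al., 2019a) is to treat the plan as a
    # constant and back-propagate only through the cost matrix C.
    return torch.sum(T.detach() * C)

Such a differentiable surrogate is what allows the OT term to be trained jointly with the rest of the model by ordinary gradient descent.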
Since nouns typically are more informative, we propose to match the nouns in both an input table and the decoded target sequence. We use NLTK (Loper and Bird, 2002) to extract the nouns that are then used for computing the OT loss. In this way, the computational cost can also be significantly reduced comparing to matching all words.", "cite_spans": [ { "start": 285, "end": 307, "text": "(Loper and Bird, 2002)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "The OT loss can be used as a metric to measure the goodness of the match between two sequences. To illustrate the motivation of applying the OT loss to our setting, we provide an example illustrated in Figure 3 , where we try to match the table with the two generated text sequences. On the left plot, the generated text sequence contains \"California brand Grateful Dead\", which is not presented in the input table. Similarly, and the phrases \"Seattle, Washington\" and \"Skokie Illinois\" in the table are not covered by the generated text. Consequently, the resulting OT loss will be high. By contrast, on the right plot, the table contains all information in the text, and all the phrases in the table are also covered well by the generated text, leading to a low OT loss. As a result, optimizing over the OT loss in (4) would enforce faithful matching be-tween a table and its generated text.", "cite_spans": [], "ref_spans": [ { "start": 202, "end": 210, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "Optimization via OT When optimizing the OT loss with the IPOT algorithm, the gradients of the OT loss is required to be able to propagate back to the Transformer component. In other words, this requires gradients to flow back from a generated sentence. Note that a sentence is generated by sampling from a multinomial distribution, whose parameter is the Transformer decoder output represented as a logit vector S t for each word in the vocabulary. This sampling process is unfortunately non-differentiable. To enable backpropagation, we follow Chen et al. (2019a) and use the Soft-argmax trick to approximate each word with the corresponding soft-max output.", "cite_spans": [ { "start": 545, "end": 564, "text": "Chen et al. (2019a)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "To further reduce the number of parameters and improve the computational efficiency, we adopt the factorized embedding parameterization proposed recently (Lan et al., 2019) . Specifically, we decompose a word embedding matrix of size V \u00d7 D into the product of two matrices of sizes V \u00d7 H and H \u00d7 D, respectively. 
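For concreteness, a minimal sketch of such a factorized embedding layer is given below (a hypothetical PyTorch module; the class and argument names are illustrative only and do not come from the paper's code):

import torch.nn as nn

class FactorizedEmbedding(nn.Module):
    # Factorizes a V x D word embedding into a V x H lookup followed by an
    # H x D projection, in the spirit of Lan et al. (2019).
    def __init__(self, vocab_size, model_dim, factor_dim):
        super().__init__()
        self.lookup = nn.Embedding(vocab_size, factor_dim)            # V x H
        self.project = nn.Linear(factor_dim, model_dim, bias=False)   # H x D

    def forward(self, token_ids):
        return self.project(self.lookup(token_ids))

With the sizes reported later in Section 3.4 (V = 50,000, D = 512, H = 128), this replaces roughly V*D = 25.6M embedding parameters with V*H + H*D, about 6.5M.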
In this way, the parameter number of the embedding matrices could be significantly reduced as long as H is to be much smaller than D.", "cite_spans": [ { "start": 154, "end": 172, "text": "(Lan et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Faithfulness Modeling with Constrained Content Matching via Optimal Transport", "sec_num": "2.4" }, { "text": "Combing all the above components, the final training loss of our model is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Final Objective", "sec_num": "2.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L = L mle + \u03bbL disagree + \u03b3L OT ,", "eq_num": "(5)" } ], "section": "The Final Objective", "sec_num": "2.5" }, { "text": "where \u03bb and \u03b3 controls the relative importance of each component of the loss function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Final Objective", "sec_num": "2.5" }, { "text": "To enforce a generated sentence to stick to the words presented in the table as much as possible, we follow (See et al., 2017) to employ a copy mechanism when generating an output sequence. Specifically, let P vocab be the output of the Transformer decoder. P vocab is a discrete distribution over the vocabulary words and denotes the probabilities of generating the next word. The standard methods typically generate the next word by directly sampling from P vocab . In the copy mechanism, we instead generate the next word y i with the following discrete distribution:", "cite_spans": [ { "start": 108, "end": 126, "text": "(See et al., 2017)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Decoder with a Copy Mechanism", "sec_num": "2.6" }, { "text": "P (y i ) = p g P vocab (y i ) + (1 \u2212 p g )P att (y i ) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder with a Copy Mechanism", "sec_num": "2.6" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder with a Copy Mechanism", "sec_num": "2.6" }, { "text": "p g = \u03c3(W 1 h i + b 1 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder with a Copy Mechanism", "sec_num": "2.6" }, { "text": "is the probability of switching sampling between P vocab and P att , with learnable parameters (W 1 , b 1 ) and h i as the hidden state from the Transformer decoder for the i-th word. P att is the attention weights (probability) returned from the encoder-decoder attention module in the Transformer. Specifically, when generating the current word y i , the encoder-decoder attention module calculates the probability vector P att denoting the probabilities of attending to each word in the input table. Note that the probabilities of the words not presented in the table are set to zero.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder with a Copy Mechanism", "sec_num": "2.6" }, { "text": "We conduct experiments to verify the effectiveness and superiority of our proposed approach against related methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Our model is evaluated on the large-scale knowledge-base Wikiperson dataset released by . It contains 250,186, 30,487, and 29,982 table-text pairs for training, validation, and testing, respectively. 
Compared to the Wik-iBio dataset used in previous studies (Lebret et al., 2016b; Liu et al., 2018; Wiseman et al., 2018; Ma et al., 2019) whose reference text only contains one-sentence descriptions, this dataset contains multiple sentences for each table to cover as many facts encoded in the input structured knowledge base as possible.", "cite_spans": [ { "start": 258, "end": 280, "text": "(Lebret et al., 2016b;", "ref_id": "BIBREF17" }, { "start": 281, "end": 298, "text": "Liu et al., 2018;", "ref_id": "BIBREF22" }, { "start": 299, "end": 320, "text": "Wiseman et al., 2018;", "ref_id": "BIBREF36" }, { "start": 321, "end": 337, "text": "Ma et al., 2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "3.1" }, { "text": "For automatic evaluation, we apply the widely used evaluation metrics including the standard BLEU-4 (Papineni et al., 2002) , METEOR (Denkowski and Lavie, 2014) and ROUGE (Lin, 2004) scores to evaluate the generation quality.", "cite_spans": [ { "start": 100, "end": 123, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF26" }, { "start": 133, "end": 160, "text": "(Denkowski and Lavie, 2014)", "ref_id": "BIBREF6" }, { "start": 171, "end": 182, "text": "(Lin, 2004)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "3.2" }, { "text": "Since these metrics rely solely on the reference texts, they usually show poor correlations with human judgments when the references deviate too much from the table. To this end, we also apply the PARENT (Dhingra et al., 2019 ) metric that considers both the reference texts and table content in evaluations. To evaluate the faithfulness of the generated texts, we further modify the PARENT metric to measure the level of matching between generated texts and the corresponding tables. We denote this new metric as PARENT-T. Please see Appendix A for details. Note that the precision in PARENT-T corresponds to the percentage of words in a text sequence that co-occur in the table; and the recall corresponds to the percentage of words in a table that co-occur in the text.", "cite_spans": [ { "start": 204, "end": 225, "text": "(Dhingra et al., 2019", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "3.2" }, { "text": "We compare our model with several strong baselines, including", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "3.3" }, { "text": "\u2022 The vanilla Seq2Seq attention model (Bahdanau et al., 2015).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "3.3" }, { "text": "\u2022 The method in : The stateof-art model on the Wikiperson dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "3.3" }, { "text": "\u2022 The method in (Liu et al., 2018) : The stateof-the-art method on the WikiBio dataset.", "cite_spans": [ { "start": 16, "end": 34, "text": "(Liu et al., 2018)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "3.3" }, { "text": "\u2022 The pointer-generator (See et al., 2017 ): A Seq2Seq model with attention, copying and coverage mechanism.", "cite_spans": [ { "start": 24, "end": 41, "text": "(See et al., 2017", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline Models", "sec_num": "3.3" }, { "text": "Our implementation is based on OpenNMT (Klein et al., 2017) . 
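To give a concrete picture of how the three terms in Eq. (5) interact during training, a schematic (and purely hypothetical) PyTorch-style training step is sketched below; it is not the OpenNMT-based code used for the reported results, and the model/batch interfaces, helper names, and weighting values are placeholders.

import torch.nn.functional as F

def training_step(model, batch, optimizer, lam=1.0, gamma=0.1):
    # (1) Maximum-likelihood loss on the reference text, Eq. (2).
    logits, enc_states, dec_states = model(batch.table_tokens, batch.text_tokens)
    l_mle = F.cross_entropy(logits.view(-1, logits.size(-1)),
                            batch.text_targets.view(-1),
                            ignore_index=model.pad_id)

    # (2) Latent disagreement loss, Eq. (3): squared distance between the mean table
    #     embedding and the mean text embedding (padding masking omitted; which hidden
    #     states are averaged is one plausible reading of the paper, not a confirmed detail).
    v_table = enc_states.mean(dim=1)
    v_text = dec_states.mean(dim=1)
    l_disagree = ((v_table - v_text) ** 2).sum(dim=-1).mean()

    # (3) OT loss, Eq. (4), between noun embeddings of the table and of the
    #     soft-argmax decoded text, e.g. with the ipot_ot_loss sketch above.
    l_ot = ipot_ot_loss(model.noun_embeddings(batch.table_tokens),
                        model.soft_argmax_noun_embeddings(logits))

    loss = l_mle + lam * l_disagree + gamma * l_ot      # Eq. (5)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()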
We train our models end-to-end to minimize our objective function with/without the copy mechanism. Table 3 : Ablation study of our model components. means the corresponding column component is used. means the corresponding column component is not used. Specifically, \"Copy\" means using the copy mechanism, \"EF\" means using embedding factorization, \"OT\" means using the optimal-transport constraint loss, \"N\" means extracting nouns from both the table and text, and \"W\" means using the whole table and text to compute OT. Lastly, \"latent\" means using the latent similarity loss. The vocabulary is limited to", "cite_spans": [ { "start": 39, "end": 59, "text": "(Klein et al., 2017)", "ref_id": "BIBREF13" }, { "start": 262, "end": 285, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 289, "end": 296, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Implementation Details", "sec_num": "3.4" }, { "text": "the 50,000 most common words in the training dataset. The hidden units of the multi-head component and the feed-forward layer are set to 2048. The baseline embedding size is 512. Following (Lan et al., 2019) , the embedding size with embedding factorization is set to 128. The number of heads is set to 8, and the number of Transformer blocks is 3. Beam size is set to 5. Label smoothing is set to 0.1.", "cite_spans": [ { "start": 190, "end": 208, "text": "(Lan et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "3.4" }, { "text": "For the optimal-transport-based regularizer, we first train the model without OT for about 20,000 steps, then fine-tune the network with OT for about 10,000 steps. We use the Adam (Kingma and Ba, 2015) optimizer to train the models. We set the hyper-parameters of the Adam optimizer as follows: learning rate \u03b1 = 0.00001, batch size = 4096 (tokens), and momentum parameter \u03b2 2 = 0.998. Tables 1 and 2 show the experimental results in terms of different evaluation metrics compared with different baselines. \"Ours\" means our proposed model with components of copy mechanism, embedding factorization, OT-matching with nouns, and latent similarity loss 1 . We can see that our model outperforms existing models in all of the automatic evaluation scores, indicating high quality of the generated texts. The superiority of the PARENT-T scores (in terms of precision and recall) indicates that the generated text from our model is more faithful than others. Figure 4 . The blue color indicates the corresponding row appears in the input table, but not in the output generation text. The red color indicates that these entities appear in the text but do not appear in the input table.", "cite_spans": [ { "start": 1064, "end": 1087, "text": "(Bahdanau et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 410, "end": 417, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 1091, "end": 1099, "text": "Figure 4", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Implementation Details", "sec_num": "3.4" }, { "text": "Example outputs from different models are shown in Table 5 with an input table shown in Figure 4 . In this example, our model covers all the entities in the input, while all other models miss some entities. 
Furthermore, other models hallucinate some information that does not appear in the input, while our model generates almost no extra information other than that in the input. These results indicate the faithfulness of our model. More examples are shown in Appendix E.", "cite_spans": [], "ref_spans": [ { "start": 40, "end": 47, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 77, "end": 85, "text": "Figure 4", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "3.5" }, { "text": "We also conduct extensive ablation studies to better understand each component of our model, including the copy mechanism, embedding factorization, optimal transport constraint loss, and latent similarity loss. Table 3 shows the results in different evaluation metrics.", "cite_spans": [], "ref_spans": [ { "start": 211, "end": 218, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.6" }, { "text": "Effect of copy mechanism The first and second rows in Table 3 demonstrate the impacts of the copy mechanism. It is observed that with the copy mechanism, one can significantly improve the performance in all of the automatic metrics, especially on the faithfulness reflected by the PARENT-T score.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 61, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "3.6" }, { "text": "We compare our model with the one without embedding factorization. The comparisons are shown in the second and third rows of Table 3 . We can see that with embedding factorization, around half of the parameters can be reduced, while comparable performance can still be maintained.", "cite_spans": [], "ref_spans": [ { "start": 125, "end": 132, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Effect of embedding factorization", "sec_num": null }, { "text": "We also test the model by removing the table-text embedding similarity loss component. The third and fourth rows in Table 3 summarize the results. With the table-text embedding similarity loss, the BLEU and METEOR scores drop a little, but the PARENT and PARENT-T scores improve over the model without the loss. This is reasonable because the loss aims at improving faithfulness of generated texts, reflected by the PARENT-T score.", "cite_spans": [], "ref_spans": [ { "start": 116, "end": 123, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Effect of table-text embedding similarity loss", "sec_num": null }, { "text": "Effect of the OT constraint loss We further compare the performance of the model (a) without using OT loss, (b) with using the whole table and text to compute OT, and (c) with using the extracted nouns from both table and text to compute OT. Results are presented in the third, fifth, and sixth rows of Table 3 , respectively. The model with the OT loss improve performance on almost all scores, especially on the PARENT-T score. Furthermore, with only using the nouns to compute the OT loss, one can obtain even better results. 
These results demonstrate the effectiveness of the proposed OT loss in enforcing the model to be faithful to the original table.", "cite_spans": [], "ref_spans": [ { "start": 303, "end": 310, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Effect of table-text embedding similarity loss", "sec_num": null }, { "text": "Following Tian et al. (2019), we conduct extensive human evaluation on the generated descriptions and compare the results to the state-of-the-art methods. We design our evaluation criteria based on Tian et al. (2019), but our criteria differ from (Tian et al., 2019) in several aspects. Specifically, for each group of generated texts, we ask the human raters to evaluate the grammar, fluency, and faithfulness. The human evaluation metric for faithfulness is defined in terms of precision, recall, and F1-score with respect to the knowledge-base table reconstructed from a generated text sequence. To ensure accurate human evaluation, the raters are trained beforehand with written instructions and text examples of the grading standard. During evaluation, we randomly sample 100 examples from the predictions of each model on the Wikiperson test set, and provide these examples to the raters for blind testing. More details about the human evaluation are provided in Appendix B. The human evaluation results in Table 4 clearly show the superiority of our proposed method. Table-to-text generation has been widely studied, and Seq2Seq models have achieved promising performance (Lebret et al., 2016b; Liu et al., 2018; Wiseman et al., 2018; Ma et al., 2019; Liu et al., 2019a) . For Transformer-based methods, the Seq2Seq Transformer is used by Ma et al. (2019) for table-to-text generation in the low-resource scenario. Thus, instead of encoding an entire table as in our approach, only the predicted key facts are encoded in (Ma et al., 2019) . Extended Transformers have been applied to game summary generation (Gong et al., 2019) and E2E NLG tasks (Gehrmann et al., 2018) . However, their goals focus on matching the reference text instead of being faithful to the input.", "cite_spans": [ { "start": 10, "end": 28, "text": "Tian et al., 2019)", "ref_id": "BIBREF31" }, { "start": 199, "end": 217, "text": "Tian et al., 2019)", "ref_id": "BIBREF31" }, { "start": 250, "end": 269, "text": "(Tian et al., 2019)", "ref_id": "BIBREF31" }, { "start": 1184, "end": 1206, "text": "(Lebret et al., 2016b;", "ref_id": "BIBREF17" }, { "start": 1207, "end": 1224, "text": "Liu et al., 2018;", "ref_id": "BIBREF22" }, { "start": 1225, "end": 1246, "text": "Wiseman et al., 2018;", "ref_id": "BIBREF36" }, { "start": 1247, "end": 1263, "text": "Ma et al., 2019;", "ref_id": "BIBREF24" }, { "start": 1264, "end": 1282, "text": "Liu et al., 2019a)", "ref_id": "BIBREF20" }, { "start": 1351, "end": 1367, "text": "Ma et al. (2019)", "ref_id": "BIBREF24" }, { "start": 1528, "end": 1545, "text": "(Ma et al., 2019)", "ref_id": "BIBREF24" }, { "start": 1602, "end": 1621, "text": "(Gong et al., 2019)", "ref_id": "BIBREF11" }, { "start": 1640, "end": 1663, "text": "(Gehrmann et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 1016, "end": 1023, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 1077, "end": 1083, "text": "Table-", "ref_id": null } ], "eq_spans": [], "section": "Human Evaluation", "sec_num": "3.7" }, { "text": "Another line of work attempts to use external knowledge to improve the quality of generated text (Chen et al., 2019b) . 
These methods allow generation from an expanded external knowledge base that may contain information not relevant to the input table. Comparatively, our setting requires the generated text to be faithful to the input table. Nie et al. (2018) further study fidelitydata-to-text generation, where several executable symbolic operations are applied to guide text generation. Both models do not consider the matching between the input and generated output.", "cite_spans": [ { "start": 97, "end": 117, "text": "(Chen et al., 2019b)", "ref_id": "BIBREF5" }, { "start": 344, "end": 361, "text": "Nie et al. (2018)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Regarding datasets, most previous methods are trained and evaluated on much simpler datasets like WikiBio (Lebret et al., 2016b ) that contains only one sentence as a reference description. Instead, we focus on the more complicated structured knowledge base dataset that aims to generate multi-sentence texts. propose a model based on the pointer network that can copy facts directly from the input knowledge base. Our model uses a similar strategy but obtains much better performance.", "cite_spans": [ { "start": 106, "end": 127, "text": "(Lebret et al., 2016b", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "In terms of faithfulness, one related parallel work is Tian et al. (2019) . However, our method is completely different from theirs. Specifically, Tian et al. (2019) develop a confidence oriented decoder that assigns a confidence score to each target position to reduce the unfaithful information in the generated text. Comparatively, our method enforces faithfulness by including the proposed table-text optimal-transport matching loss and table-text embedding similarity loss. Moreover, the faithfulness of Tian et al. (2019) only requires generated texts to be supported by either a table or the reference; whereas ours constrains generated texts to be faithful only to the table.", "cite_spans": [ { "start": 55, "end": 73, "text": "Tian et al. (2019)", "ref_id": "BIBREF31" }, { "start": 147, "end": 165, "text": "Tian et al. (2019)", "ref_id": "BIBREF31" }, { "start": 509, "end": 527, "text": "Tian et al. (2019)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Other related works are (Perez-Beltrachini and Lapata, 2018; Liu et al., 2019b) . For (Perez-Beltrachini and Lapata, 2018) , the content selection mechanism training with multi-task learning and reinforcement learning is proposed. For (Liu et al., 2019b) , they propose force attention and reinforcement learning based method. Their learning methods are completely different from our method that simultaneously incorporates optimaltransport matching loss and embedding similarity loss. 
Moreover, the REINFORCE algorithm (Williams, 1992) and policy gradient method used in (Perez-Beltrachini and Lapata, 2018; Liu et al., 2019b) exhibits high variance when training the model.", "cite_spans": [ { "start": 24, "end": 60, "text": "(Perez-Beltrachini and Lapata, 2018;", "ref_id": "BIBREF27" }, { "start": 61, "end": 79, "text": "Liu et al., 2019b)", "ref_id": "BIBREF21" }, { "start": 86, "end": 122, "text": "(Perez-Beltrachini and Lapata, 2018)", "ref_id": "BIBREF27" }, { "start": 235, "end": 254, "text": "(Liu et al., 2019b)", "ref_id": "BIBREF21" }, { "start": 520, "end": 536, "text": "(Williams, 1992)", "ref_id": "BIBREF35" }, { "start": 572, "end": 608, "text": "(Perez-Beltrachini and Lapata, 2018;", "ref_id": "BIBREF27" }, { "start": 609, "end": 627, "text": "Liu et al., 2019b)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "Finally, the content-matching constraints between text and table is inspired by ideas in machine translation and Seq2Seq models (Chen et al., 2019a) .", "cite_spans": [ { "start": 128, "end": 148, "text": "(Chen et al., 2019a)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "In this paper, we propose a novel Transformerbased table-to-text generation framework to address the faithful text-generation problem. To enforce faithful generation, we propose a new tabletext optimal-transport matching loss and a tabletext embedding similarity loss. To evaluate the faithfulness of the generated texts, we further propose a new automatic evaluation metric specialized to the table-to-text generation problem. Extensive experiments are conducted to verify the proposed method. Both automatic and human evaluations show that our framework can significantly outperform the state-of-the-art methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "PARENT-T evaluates each instance (T i , G i ) separately, by computing the precision and recall of generated text G i against table T i . In other words, PARENT-T is a table-focused version of PARENT (Dhingra et al., 2019) .", "cite_spans": [ { "start": 200, "end": 222, "text": "(Dhingra et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A PARENT-T Metric", "sec_num": null }, { "text": "When computing precision, we want to check what fraction of the n-grams in G i n are correct. We consider an n-gram g to be correct if it has a high probability of being entailed by the table. We use the word overlap model for entailment probability w(g). 
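As a purely illustrative companion to the formal definitions in Eqs. (6)-(10) below, the word-overlap precision, the LCS-based recall, and their F-combination could be sketched in Python as follows; this is not the official PARENT/PARENT-T evaluation code, and tokenization and edge-case handling are simplified.

import math
from collections import Counter

def parent_t(generated_tokens, table_values, max_order=4):
    # generated_tokens: list of tokens in the generated text G_i.
    # table_values: list of token lists, one per table record value.
    table_lexicon = set(tok for value in table_values for tok in value)

    # Entailed precision E_p: geometric mean over n-gram orders of the
    # count-weighted word-overlap probability w(g) of each n-gram g.
    log_precisions = []
    for n in range(1, max_order + 1):
        ngrams = Counter(tuple(generated_tokens[i:i + n])
                         for i in range(len(generated_tokens) - n + 1))
        total = sum(ngrams.values())
        if total == 0:
            continue
        num = sum(cnt * sum(tok in table_lexicon for tok in g) / n
                  for g, cnt in ngrams.items())
        log_precisions.append(math.log(max(num / total, 1e-12)))
    e_p = math.exp(sum(log_precisions) / max_order) if log_precisions else 0.0

    # Recall E_r: average LCS coverage of each record value by the text.
    def lcs_len(a, b):
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
        return dp[-1][-1]

    e_r = sum(lcs_len(v, generated_tokens) / len(v) for v in table_values) / len(table_values)

    # F-combination of precision and recall, Eq. (10).
    return 2 * e_p * e_r / (e_p + e_r) if e_p + e_r > 0 else 0.0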
The precision score E_p for one instance is computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A PARENT-T Metric", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w(g) = \\\frac{\\\sum_{j=1}^{n} \\\mathbf{1}(g_j \\\in \\\bar{T}_i)}{n} \\\quad (6) \\\qquad E^n_p = \\\frac{\\\sum_{g \\\in G^i_n} w(g)\\, \\\#_{G^i_n}(g)}{\\\sum_{g \\\in G^i_n} \\\#_{G^i_n}(g)}", "eq_num": "(7)" } ], "section": "A PARENT-T Metric", "sec_num": null }, { "text": "E_p = exp", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A PARENT-T Metric", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\\Big( \\\sum_{n=1}^{4} \\\tfrac{1}{4} \\\log E^n_p \\\Big)", "eq_num": "(8)" } ], "section": "A PARENT-T Metric", "sec_num": null }, { "text": "where T\u0304_i denotes all the lexical items present in the table T_i , n is the length of g, and g_j is the j-th token in g. w(g) is the entailment probability, and E^n_p is the entailed precision score for n-grams of order n. #_{G^i_n}(g) denotes the count of n-gram g in G^i_n . The precision score E_p is a combination of n-gram orders 1-4 using a geometric average.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A PARENT-T Metric", "sec_num": null }, { "text": "For recall, we only compute it against table to ensure that texts that mention more information from the table get higher scores. E_r(T_i) is computed in the same way as in Dhingra et al. (2019) :", "cite_spans": [ { "start": 175, "end": 196, "text": "Dhingra et al. (2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A PARENT-T Metric", "sec_num": null }, { "text": "E_r = E_r(T_i) = (1/K) \u2211_{k=1}^{K} LCS(r\u0304_k , G_i) / |r\u0304_k| (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A PARENT-T Metric", "sec_num": null }, { "text": "where a table is a set of records T_i = {r_k}_{k=1}^{K} , r\u0304_k denotes the value string of record r_k , and LCS(x, y) is the length of the longest common subsequence between x and y. Higher values of E_r(T_i) denote that more records are likely to be mentioned in G_i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A PARENT-T Metric", "sec_num": null }, { "text": "Thus, the PARENT-T score (i.e. F score) for one instance is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A PARENT-T Metric", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\\text{PARENT-T} = \\\frac{2 E_p E_r}{E_p + E_r}", "eq_num": "(10)" } ], "section": "A PARENT-T Metric", "sec_num": null }, { "text": "The system-level PARENT-T score for a model M is the average of instance-level PARENT-T scores across the evaluation set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A PARENT-T Metric", "sec_num": null }, { "text": "The following are the details for instructing our human evaluation raters how to rate each generated sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Details of Human Evaluation", "sec_num": null }, { "text": "We only provide the input table and the generated text for the raters. 
There are 20 well-trained raters participating in the evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Details of Human Evaluation", "sec_num": null }, { "text": "4: The sentence meaning is clear and flow naturally and smoothly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fluency :", "sec_num": null }, { "text": "3: The sentence meaning is clear, but there are a few interruptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fluency :", "sec_num": null }, { "text": "2: The sentence does not flow smoothly but people can understand its meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fluency :", "sec_num": null }, { "text": "1: The sentence is not fluent at all and people cannot understand its meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fluency :", "sec_num": null }, { "text": "4 : There are no grammar errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar :", "sec_num": null }, { "text": "3: There are a few grammar errors, but sentence meaning is clear.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar :", "sec_num": null }, { "text": "2: There are some grammar errors, but not influencing its meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar :", "sec_num": null }, { "text": "1: There are many grammar errors. People cannot understand the sentence meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar :", "sec_num": null }, { "text": "Faithfulness A sentence is faithful if it contains only information supported by the table. It should not contain additional information other than the information provided by the table or inferred from the table. Also, the generated sentence should cover as much information in the given table as possible. The raters first manually extract entities from the generated sentences and then calculate the precision as the percentage of entities in the generated text also appear in the table; calculate the recall as the percentage of entities in the table also appear in the generated text. For each tabletext pair, its F-1 score is then calculated according to the precision and recall.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar :", "sec_num": null }, { "text": "Given a pair of table and its corresponding text description, we can obtain table words embedding as S = {x i } i=n i=1 , and the model output for sentence words embedding as S = {y j } j=m j=1 . The cost matrix C is then computed as in Section 2.4. Both S and S are used as inputs to the IPOT algorithm in Algorithm 1 to obtain the OT-matching distance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C IPOT algorithm", "sec_num": null }, { "text": "Algorithm 1 IPOT algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C IPOT algorithm", "sec_num": null }, { "text": "Require: Feature vector S = {x i } i=n i=1 , S = {y j } j=m j=1 , and stepsize 1/\u03b2 Figure 5 illustrates three matching cases from top to bottom, namely hard matching, soft bipartite matching, and optimal transport matching. The hard matching stands for exactly matching words between the table and the target sequences. This operation is non-differentiable. 
The soft bipartite matching, on the other hand, supposes the similarity between the word embedding", "cite_spans": [], "ref_spans": [ { "start": 83, "end": 91, "text": "Figure 5", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "C IPOT algorithm", "sec_num": null }, { "text": "\u03c3 = 1 m 1 m T 1 = 1 n 1 T m C ij = d(x i , y j ), A ij = e \u2212 C ij \u03b2 for t = 1 to N do Q = A T t for k = 1 to K do \u03b4 = 1 nQ\u03c3 , \u03c3 = 1 mQ T \u03b4 end for T t+1 = diag(\u03b4)Qdiag(\u03c3) end for return T D Details of Optimal Transport Loss", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C IPOT algorithm", "sec_num": null }, { "text": "v i k and v j k is d(v i k , v j k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C IPOT algorithm", "sec_num": null }, { "text": ", and finds the matching such that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C IPOT algorithm", "sec_num": null }, { "text": "L = k d(v i k , v j k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "C IPOT algorithm", "sec_num": null }, { "text": "is minimized. This minimization can be solved exactly by the Hungarian algorithm (Kuhn, 1955) . But, its objective is still non-differentiable. Our proposed optimal transport matching can be viewed as the relaxed problem of the soft bipartite matching by computing the distance between the distribution over the input table and the decoded text sentence. This distance in optimal transport matching is differentiable.", "cite_spans": [ { "start": 81, "end": 93, "text": "(Kuhn, 1955)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "C IPOT algorithm", "sec_num": null }, { "text": "More generation examples from different models are shown in Figure 6 , 7, and Table 6, 7. Specifically, Table 7 and Figure 7 show a more challenging example, as its table has 22 rows. In this example, we can observe that all the RNN-based models cannot capture such long term dependencies and miss most of the input records in the table. By contrast, our model miss much less input records. ", "cite_spans": [], "ref_spans": [ { "start": 60, "end": 68, "text": "Figure 6", "ref_id": "FIGREF2" }, { "start": 104, "end": 111, "text": "Table 7", "ref_id": null }, { "start": 116, "end": 124, "text": "Figure 7", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "E More generation examples", "sec_num": null }, { "text": "Miss Generated texts ) 7, 8 Aaron Miller ( born August 11 1971 is an United States former professional Ice hockey Defenceman who played in the National Hockey League ( NHL ) for the Quebec Nordiques and the Colorado Avalanche . he was born in Buffalo, New York and played for the Quebec Nordiques and the Ottawa Senators . Figure 6 . The \"Miss\" column indicates the corresponding row appears in the input table, but does not appear in the output generation text. The red color indicates that these entities appear in the text but do not appear in the input table. ", "cite_spans": [ { "start": 21, "end": 62, "text": ") 7, 8 Aaron Miller ( born August 11 1971", "ref_id": null } ], "ref_spans": [ { "start": 323, "end": 331, "text": "Figure 6", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "The result of the method by is different from the score reported in their paper, as we use their publicly released code https://github.com/EagleW/Describing a Knowledge Base and data that is three times larger than the original 106,216 table-text pair data used in the paper. 
We have confirmed the correctness of our results with the author.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We sincerely thank all the reviewers for providing valuable feedback. We thank Linfeng Song, Dian Yu, Wei-yun Ma, and Ruiyi Zhang for the helpful discussions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "Miss Generated texts (Wang et al., 2018) 2, 3, 4, 5, 6, 8, 10, 11, 12, 13, 14, 15, 16, 21, 22\u00c9 mile Mbouh ( born 30 May 1966 ) is a former Cameroon national football team Association football . he was born in Douala and played for the Tanjong Pagar United FC in the 1994 FIFA World Cup .Pointer generator 2, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 16, 16, 20\u00c9 mile Mbouh, (born 30 May 1966 ) is a Cameroon retired Association football who played as a Midfielder . he played for Cameroon national football team in the 1990 FIFA World Cup . he also played for Perlis FA and Liaoning Whowin F.C. .\u00c9mile was born in Douala, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 20\u00c9 mile Mbouh, (born 30 May 1966 ) is a retired Cameroonian Association football who played as a Midfielder . he was born in Douala . he was a member of the Cameroon national football team at the 1990 FIFA World Cup . he was a member of the Cameroon national football team at the 1990 FIFA World Cup . he was a member of the Cameroon national football team at the 1990 FIFA World Cup . he was a member of the Cameroon national football team at the 1990 FIFA World Cup . 2, 3, 4, 5, 6, 8, 10, 11, 12, 13, 14, 15, 16, 17 Figure 7 . The \"Miss\" column indicates the corresponding row appears in the input table, but does not appear in the output generation text. 
The red color indicates that these entities appear in the text but do not appear in the input table.", "cite_spans": [ { "start": 21, "end": 94, "text": "(Wang et al., 2018) 2, 3, 4, 5, 6, 8, 10, 11, 12, 13, 14, 15, 16, 21, 22\u00c9", "ref_id": null }, { "start": 305, "end": 384, "text": "2, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 16, 16, 20\u00c9 mile Mbouh, (born 30 May 1966", "ref_id": null }, { "start": 615, "end": 700, "text": "2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 20\u00c9 mile Mbouh, (born 30 May 1966", "ref_id": null }, { "start": 1138, "end": 1140, "text": "2,", "ref_id": null }, { "start": 1141, "end": 1143, "text": "3,", "ref_id": null }, { "start": 1144, "end": 1146, "text": "4,", "ref_id": null }, { "start": 1147, "end": 1149, "text": "5,", "ref_id": null }, { "start": 1150, "end": 1152, "text": "6,", "ref_id": null }, { "start": 1153, "end": 1155, "text": "8,", "ref_id": null }, { "start": 1156, "end": 1159, "text": "10,", "ref_id": null }, { "start": 1160, "end": 1163, "text": "11,", "ref_id": null }, { "start": 1164, "end": 1167, "text": "12,", "ref_id": null }, { "start": 1168, "end": 1171, "text": "13,", "ref_id": null }, { "start": 1172, "end": 1175, "text": "14,", "ref_id": null }, { "start": 1176, "end": 1179, "text": "15,", "ref_id": null }, { "start": 1180, "end": 1183, "text": "16,", "ref_id": null }, { "start": 1184, "end": 1186, "text": "17", "ref_id": null } ], "ref_spans": [ { "start": 1187, "end": 1195, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Represen- tations.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Collective content selection for concept-to-text generation", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2005, "venue": "HLT/EMNLP 2005, Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Mirella Lapata. 2005. Collective content selection for concept-to-text generation. 
In HLT/EMNLP 2005, Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, 6-8 October 2005, Vancouver, British Columbia, Canada.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Graph-to-sequence learning using gated graph neural networks", "authors": [ { "first": "Daniel", "middle": [], "last": "Beck", "suffix": "" }, { "first": "Gholamreza", "middle": [], "last": "Haffari", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In Proceedings of the An- nual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Brief review: Natural language generation in health care", "authors": [ { "first": "Alison", "middle": [], "last": "Cawsey", "suffix": "" }, { "first": "Bonnie", "middle": [ "L" ], "last": "Webber", "suffix": "" }, { "first": "Ray", "middle": [ "B" ], "last": "Jones", "suffix": "" } ], "year": 1997, "venue": "JAMIA", "volume": "4", "issue": "6", "pages": "473--482", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alison Cawsey, Bonnie L. Webber, and Ray B. Jones. 1997. Brief review: Natural language generation in health care. JAMIA, 4(6):473-482.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Improving sequence-to-sequence learning via optimal transport", "authors": [ { "first": "Liqun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ruiyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chenyang", "middle": [], "last": "Tao", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Haichao", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Bai", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dinghan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Changyou", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Carin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liqun Chen, Yizhe Zhang, Ruiyi Zhang, Chenyang Tao, Zhe Gan, Haichao Zhang, Bai Li, Dinghan Shen, Changyou Chen, and Lawrence Carin. 2019a. Improving sequence-to-sequence learning via opti- mal transport. 
In Proceedings of the International Conference on Learning Representations.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Enhancing neural data-to-text generation models with external background knowledge", "authors": [ { "first": "Shuang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jinpeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaocheng", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Feng", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuang Chen, Jinpeng Wang, Xiaocheng Feng, Feng Jiang, Bing Qin, and Chin-Yew Lin. 2019b. Enhanc- ing neural data-to-text generation models with exter- nal background knowledge. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing and the International Joint Con- ference on Natural Language Processing.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Meteor universal: Language specific translation evaluation for any target language", "authors": [ { "first": "Michael", "middle": [], "last": "Denkowski", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Translation.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Handling divergent reference texts when evaluating table-to-text generation", "authors": [ { "first": "", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Pro- ceedings of 57th Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "End-to-end content and plan selection for data-to-text generation", "authors": [ { "first": "Sebastian", "middle": [], "last": "Gehrmann", "suffix": "" }, { "first": "Z", "middle": [], "last": "Falcon", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Elder", "suffix": "" }, { "first": "", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2018, "venue": "Proceedings of International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Gehrmann, Falcon Z. Dai, Henry Elder, and Alexander M. Rush. 2018. End-to-end content and plan selection for data-to-text generation. 
In Pro- ceedings of International Conference on Natural Language Generation.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning generative models with sinkhorn divergences", "authors": [ { "first": "Aude", "middle": [], "last": "Genevay", "suffix": "" }, { "first": "Gabriel", "middle": [], "last": "Peyr\u00e9", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Cuturi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aude Genevay, Gabriel Peyr\u00e9, and Marco Cuturi. 2018. Learning generative models with sinkhorn diver- gences. In Proceedings of the International Con- ference on Artificial Intelligence and Statistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Enhanced transformer model for data-to-text generation", "authors": [ { "first": "Li", "middle": [], "last": "Gong", "suffix": "" }, { "first": "Josep", "middle": [], "last": "Crego", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Gong, Josep Crego, and Jean Senellart. 2019. En- hanced transformer model for data-to-text genera- tion. In Proceedings of the 3rd Workshop on Neural Generation and Translation (EMNLP).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the International Conference for Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference for Learning Repre- sentations.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Open-NMT: Open-source toolkit for neural machine translation", "authors": [ { "first": "Guillaume", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "Proceedings of Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P17-4012" ] }, "num": null, "urls": [], "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. Open- NMT: Open-source toolkit for neural machine trans- lation. In Proceedings of Annual Meeting of the As- sociation for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The hungarian method for the assignment problem", "authors": [ { "first": "Harold", "middle": [ "W" ], "last": "Kuhn", "suffix": "" } ], "year": 1955, "venue": "Naval research logistics quarterly", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harold W. Kuhn. 1955. 
The hungarian method for the assignment problem. In Naval research logistics quarterly.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Albert: A lite bert for selfsupervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Good- man, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self- supervised learning of language representations. In https://arxiv.org/abs/1909.11942.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Neural text generation from structured data with application to the biography domain", "authors": [ { "first": "R\u00e9mi", "middle": [], "last": "Lebret", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R\u00e9mi Lebret, David Grangier, and Michael Auli. 2016a. Neural text generation from structured data with application to the biography domain. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Neural text generation from structured data with application to the biography domain", "authors": [ { "first": "R\u00e9mi", "middle": [], "last": "Lebret", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R\u00e9mi Lebret, David Grangier, and Michael Auli. 2016b. Neural text generation from structured data with application to the biography domain. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Learning semantic correspondences with less supervision", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics and International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning semantic correspondences with less super- vision. 
In Proceedings of the Annual Meeting of the Association for Computational Linguistics and In- ternational Joint Conference on Natural Language Processing of the AFNLP.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Rouge: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Proceedings of Text Summarization Branches Out", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proceedings of Text Summarization Branches Out.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Hierarchical encoder with auxiliary supervision for neural table-to-text generation: Learning better representation for tables", "authors": [ { "first": "Tianyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Fuli", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Qiaolin", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Shuming", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyu Liu, Fuli Luo, Qiaolin Xia, Shuming Ma, Baobao Chang, and Zhifang Sui. 2019a. Hierar- chical encoder with auxiliary supervision for neural table-to-text generation: Learning better representa- tion for tables. In Proceedings of the AAAI Confer- ence on Artificial Intelligence.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Towards comprehensive description generation from factual attribute-value tables", "authors": [ { "first": "Tianyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Fuli", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyu Liu, Fuli Luo, Pengcheng Yang, Wei Wu, Baobao Chang, and Zhifang Sui. 2019b. Towards comprehensive description generation from factual attribute-value tables. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Table-to-text generation by structure-aware seq2seq learning", "authors": [ { "first": "Tianyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Kexiang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Sha", "suffix": "" }, { "first": "Baobao", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. 
In Proceedings of the AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Nltk: The natural language toolkit", "authors": [ { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The natural language toolkit. In https://arxiv.org/abs/cs/0205028.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Key fact as pivot: A two-stage model for low resource table-to-text generation", "authors": [ { "first": "Shuming", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Tianyu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuming Ma, Pengcheng Yang, Tianyu Liu, Peng Li, Jie Zhou, and Xu Sun. 2019. Key fact as pivot: A two-stage model for low resource table-to-text gen- eration. In https://arxiv.org/pdf/1908.03067.pdf.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Operation-guided neural networks for high fidelity data-to-text generation", "authors": [ { "first": "Feng", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Jinpeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jin-Ge", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Rong", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Feng Nie, Jinpeng Wang, Jin-Ge Yao, Rong Pan, and Chin-Yew Lin. 2018. Operation-guided neural net- works for high fidelity data-to-text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. 
In Proceedings of the Annual Meeting of the Association for Compu- tational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Bootstrapping generators from noisy data", "authors": [ { "first": "Laura", "middle": [], "last": "Perez", "suffix": "" }, { "first": "-", "middle": [], "last": "Beltrachini", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Perez-Beltrachini and Mirella Lapata. 2018. Bootstrapping generators from noisy data. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Get to the point: Summarization with pointer-generator networks", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summa- rization with pointer-generator networks. In https://arxiv.org/abs/1704.04368.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A graph-to-sequence model for amrto-text generation", "authors": [ { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhiguo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for amr- to-text generation. In Proceedings of the Annual Meeting of the Association for Computational Lin- guistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. 
In Advances in Neural Information Process- ing Systems.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Sticking to the facts: Confident decoding for faithful data-to-text generation", "authors": [ { "first": "Ran", "middle": [], "last": "Tian", "suffix": "" }, { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Thibault", "middle": [], "last": "Sellam", "suffix": "" }, { "first": "Ankur", "middle": [ "P" ], "last": "Parikh", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ran Tian, Shashi Narayan, Thibault Sellam, and Ankur P. Parikh. 2019. Sticking to the facts: Confi- dent decoding for faithful data-to-text generation. In https://arxiv.org/abs/1910.08684.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Describing a knowledge base", "authors": [ { "first": "Qingyun", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaoman", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Lifu", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Boliang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhiying", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Ji", "middle": [], "last": "Heng", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2018, "venue": "International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qingyun Wang, Xiaoman Pan, Lifu Huang, Boliang Zhang, Zhiying Jiang, Heng Ji, and Kevin Knight. 2018. Describing a knowledge base. In Interna- tional Conference on Natural Language Generation.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Sentence embedding for neural machine translation domain adaptation", "authors": [ { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Finch", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rui Wang, Andrew Finch, Masao Utiyama, and Ei- ichiro Sumita. 2017. Sentence embedding for neural machine translation domain adaptation. 
In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning", "authors": [ { "first": "Ronald", "middle": [ "J" ], "last": "Williams", "suffix": "" } ], "year": 1992, "venue": "Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronald J. Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. In Machine Learning.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Learning neural templates for text generation", "authors": [ { "first": "Sam", "middle": [], "last": "Wiseman", "suffix": "" }, { "first": "Stuart", "middle": [ "M" ], "last": "Shieber", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2018. Learning neural templates for text gen- eration. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "A fast proximal point method for computing exact wasserstein distance", "authors": [ { "first": "Yujia", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Xiangfeng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ruijia", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hongyuan", "middle": [], "last": "Zha", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yujia Xie, Xiangfeng Wang, Ruijia Wang, and Hongyuan Zha. 2018. A fast proximal point method for computing exact wasserstein distance. In https://arxiv.org/pdf/1802.04307.pdf.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Sentence-level agreement for neural machine translation", "authors": [ { "first": "Mingming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Kehai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Eiichiro", "middle": [], "last": "Sumita", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tiejun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mingming Yang, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Min Zhang, and Tiejun Zhao. 2019. Sentence-level agreement for neural machine translation. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Example input for different models" }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "Hard matching (top), soft bipartite matching (middle), and optimal transport matching (bottom)." 
}, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "Example input for different models." }, "FIGREF3": { "type_str": "figure", "uris": null, "num": null, "text": "Example input for different models." }, "TABREF0": { "num": null, "text": "", "html": null, "content": "
text be the mean embeddings of a table
and the target text embeddings in our Transformer-
based model, respectively. A table-target sentence
disagreement loss $L_{\mathrm{disagree}}$ is then defined as
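The defining formula is cut off in this extraction. Purely as a hedged illustration, assuming the disagreement is scored as one minus the cosine similarity of the two mean embeddings (an assumed form, since the exact definition is not recoverable from the text above), a minimal sketch is:

```python
import numpy as np


def disagreement_loss(table_token_embs, text_token_embs):
    """Hypothetical table-text disagreement loss over mean-pooled embeddings.

    table_token_embs: (num_table_tokens, dim) embeddings from the table side.
    text_token_embs:  (num_text_tokens, dim) embeddings of the target text.
    """
    v_table = table_token_embs.mean(axis=0)   # mean embedding of the table
    v_text = text_token_embs.mean(axis=0)     # mean embedding of the target text
    cos = v_table @ v_text / (np.linalg.norm(v_table) * np.linalg.norm(v_text) + 1e-8)
    return 1.0 - float(cos)                   # assumed form: 1 - cosine similarity
```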
", "type_str": "table" }, "TABREF2": { "num": null, "text": "Comparison of our model and baseline. PARENT and PARENT-T are the average of PARENT and PARENT-T scores of all table-text pairs.", "html": null, "content": "
P-recall P-precision PT-recall PT-precision
(Wang et al., 2018) 44.83 63.92 84.34 41.10
Seq2Seq (Bahdanau et al., 2015) 41.80 49.09 76.07 33.13
Pointer-Generator (See et al., 2017) 44.09 61.73 81.65 42.03
Structure-Aware Seq2Seq (Liu et al., 2018) 46.34 51.18 83.84 35.99
Ours 48.83 62.86 85.21 43.52
", "type_str": "table" }, "TABREF3": { "num": null, "text": "Comparison of our model and baseline. P-recall and P-precision refer to the average of PARENT precisions and recalls of all table-text pairs. Similarly, PT-recall and PT-precision are the average of PARENT-T precisions and recalls of all table-text pairs.", "html": null, "content": "
Copy EF OT (N/W) latent BLEU METEOR ROUGE PARENT PARENT-T params
24.49 22.01 40.98 48.31 49.89 98.92M
24.57 22.43 42.26 51.87 54.29 98.92M
25.07 22.38 42.37 51.76 54.36 45.94M
23.86 22.08 42.65 52.72 55.30 45.94M
W 24.64 22.39 42.52 52.77 55.46 45.94M
N 25.29 22.60 42.25 52.74 55.80 45.94M
N 24.56 22.37 42.40 53.06 56.10 45.94M
", "type_str": "table" }, "TABREF5": { "num": null, "text": "Human Evaluation of various aspects of generated text. British Physicist . Brompton Cemetery he was born in London the son of Sir Thomas and his wife Mary ( n\u00e9e Fleming ) . he was educated at University College School and University College London . November 1908) was an English Physicist . William was born in London and educated at University College London. he is buried in Brompton Cemetery London . he was elected a Fellow of the Royal Society in 1901. he was the father of Barbara Ayrton-Gould .", "html": null, "content": "
Model Miss Generated texts
(Wang et al., 2018) 9 William Edward Ayrton Fellow of the Royal Society ( 14 September 1847 -8
November 1908 ) was a
Pointer generator 2, 9 William Edward Ayrton-Gould Fellow of the Royal Society (14 September
1847 -8 November 1908) was an English Physicist who was born in London
and was educated at Brompton College and University College London . he
died in London on 8 November 1908 . William was elected a Fellow of the
Royal Society in 1902.
Seq2Seq 1, 2, 3, 9
Structure-Aware 1, 2, 9
Ours None William Edward Ayrton Fellow of the Royal Society (14 September 1847 -8
", "type_str": "table" }, "TABREF6": { "num": null, "text": "Example outputs from different methods with an input table shown in", "html": null, "content": "", "type_str": "table" }, "TABREF7": { "num": null, "text": "Aaron Miller (born August 11 1971) is a retired United States professional Ice hockey Defenceman who played in the National Hockey League (NHL) for the Quebec Nordiques Quebec Nordiques Quebec Nordiques and the Quebec Nordiques . he was born in Buffalo, New York and grew up in New York City, Aaron Miller (born August 11 1971) is an United States former professional Ice hockey Defenceman who played in the National Hockey League . miller was born in Buffalo, New York . he was drafted by the Colorado Avalanche in the second round of the 1994 NHL Entry Draft . he was drafted in the sixth round of the 1994 NHL Entry Draft by the Colorado Avalanche . he was drafted in the sixth round of the 1994 NHL Entry Draft by the Colorado Avalanche . Aaron Miller (born August 11 1971 in Buffalo, New York New York) is a retired United States professional Ice hockey Defenceman who played in the National Hockey League (NHL) for the Quebec Nordiques Colorado Avalanche Colorado Avalanche Colorado Avalanche Colorado Avalanche and Quebec Nordiques. he was drafted in the 2nd round of overall of the 2002 NHL Entry Draft. None Aaron Miller (born August 11 1971 in Buffalo, New York) is an United States former professional Ice hockey Defenceman who played in the National Hockey League (NHL) for the Quebec Nordiques and Colorado Avalanche . he was a member of the United States men's national Ice hockey team at the 2002 Winter Olympics and 2006 Winter Olympics.", "html": null, "content": "
Pointer generator 2, 7, 8
Seq2Seq 3, 7, 8
Structure-Aware 7, 8
Ours
", "type_str": "table" }, "TABREF8": { "num": null, "text": "Example outputs from different methods with an input table shown in", "html": null, "content": "", "type_str": "table" } } } }