{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:40:24.158201Z" }, "title": "Towards Domain-Independent Text Structuring Trainable on Large Discourse Treebanks", "authors": [ { "first": "Grigorii", "middle": [], "last": "Guz", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of British Columbia Vancouver", "location": { "postCode": "V6T 1Z4", "region": "BC", "country": "Canada" } }, "email": "gguz@cs.ubc.ca" }, { "first": "Giuseppe", "middle": [], "last": "Carenini", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of British Columbia Vancouver", "location": { "postCode": "V6T 1Z4", "region": "BC", "country": "Canada" } }, "email": "carenini@cs.ubc.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Text structuring is a fundamental step in NLG, especially when generating multi-sentential text. With the goal of fostering more general and data-driven approaches to text structuring, we propose the new and domain-independent NLG task of structuring and ordering a (possibly large) set of EDUs. We then present a solution for this task that combines neural dependency tree induction with pointer networks and can be trained on large discourse treebanks that have only recently become available. Further, we propose a new evaluation metric that is arguably more suitable for our new task compared to existing content ordering metrics. Finally, we empirically show that our approach outperforms competitive alternatives on the proposed measure and is equivalent in performance with respect to previously established measures.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Text structuring is a fundamental step in NLG, especially when generating multi-sentential text. With the goal of fostering more general and data-driven approaches to text structuring, we propose the new and domain-independent NLG task of structuring and ordering a (possibly large) set of EDUs. We then present a solution for this task that combines neural dependency tree induction with pointer networks and can be trained on large discourse treebanks that have only recently become available. Further, we propose a new evaluation metric that is arguably more suitable for our new task compared to existing content ordering metrics. 
Finally, we empirically show that our approach outperforms competitive alternatives on the proposed measure and is equivalent in performance with respect to previously established measures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Natural Language Generation (NLG) plays a fundamental role in data-to-text tasks like automatically producing soccer, weather and financial reports (Chen and Mooney, 2008; Plachouras et al., 2016; Balakrishnan et al., 2019) , as well as in text-to-text generation tasks like summarization (Nenkova and McKeown, 2012) .", "cite_spans": [ { "start": 148, "end": 171, "text": "(Chen and Mooney, 2008;", "ref_id": "BIBREF2" }, { "start": 172, "end": 196, "text": "Plachouras et al., 2016;", "ref_id": "BIBREF23" }, { "start": 197, "end": 223, "text": "Balakrishnan et al., 2019)", "ref_id": "BIBREF1" }, { "start": 289, "end": 316, "text": "(Nenkova and McKeown, 2012)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Generally speaking, NLG involves three key steps (Gatt and Krahmer, 2017) : first there is content determination which selects what information units should be conveyed, secondly there is text structuring, which is responsible for properly structuring and ordering those units; and finally microplanning-realization that aggregates information units into sentences and paragraphs that are then verbalized.", "cite_spans": [ { "start": 49, "end": 73, "text": "(Gatt and Krahmer, 2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The focus of this paper is on the text structuring step, which is critical for the overall performance of an NLG system, as it ensures that the communicative goals of the text are realized in the most structurally coherent and cohesive way possible, making the main ideas expressed by the text easy to follow for the target audience.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Aiming to develop very general computational methods for text structuring, we keep our study independent from particular ways in which the input information units are represented and from explicitly provided ordering constraints for the target application domain (Gatt and Krahmer, 2017) . More specifically, we propose and attack, in a fully datadriven way, the novel and domain-independent task of simultaneously structuring and ordering a set of Elementary Discourse Units (EDUs), i.e., clauselike text fragments that the Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) assumes to be the building blocks of any discourse structure (see Figure 1 (a)(left)). 
In other words, we assume that the system is given a set of EDUs (with cardinality possibly > 100) as input and returns their ordering, as well as the unlabelled RST dependency discourse tree structure for a document consisting of this set of EDUs, as illustrated in Figure 1(a) .", "cite_spans": [ { "start": 263, "end": 287, "text": "(Gatt and Krahmer, 2017)", "ref_id": "BIBREF6" }, { "start": 559, "end": 584, "text": "(Mann and Thompson, 1988)", "ref_id": "BIBREF19" }, { "start": 939, "end": 950, "text": "Figure 1(a)", "ref_id": null } ], "ref_spans": [ { "start": 651, "end": 659, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our data-driven approach relies on the very recent availability of large treebanks containing hundreds of thousands of (silver-standard) discourse trees that can be automatically generated by distant supervision following the approach presented by Huber and Carenini (2020) . We formulate the problem as one of the dependency tree induction, repurposing existing solutions (Ma and Hovy, 2017; Vinyals et al., 2015) to perform an RST-based text structuring where both EDU ordering and tree building are executed simultaneously (Reiter and Dale, 2000) . The resulting structures can be highly useful for subsequent NLG pipeline stages such as aggregation, and for downstream tasks like text simplification (Zhong et al., 2019) . Our approach is trainable end-to-end, but since the discourse trees in the training treebank are constituency trees (see Figure 1 (b)), we face the additional challenge of turning them into dependency trees (see Figure 1 (a)) before the learning process can start (Hayashi et al., 2016) .", "cite_spans": [ { "start": 248, "end": 273, "text": "Huber and Carenini (2020)", "ref_id": "BIBREF8" }, { "start": 373, "end": 392, "text": "(Ma and Hovy, 2017;", "ref_id": "BIBREF18" }, { "start": 393, "end": 414, "text": "Vinyals et al., 2015)", "ref_id": "BIBREF30" }, { "start": 526, "end": 549, "text": "(Reiter and Dale, 2000)", "ref_id": "BIBREF25" }, { "start": 704, "end": 724, "text": "(Zhong et al., 2019)", "ref_id": "BIBREF33" }, { "start": 992, "end": 1014, "text": "(Hayashi et al., 2016)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 848, "end": 856, "text": "Figure 1", "ref_id": null }, { "start": 939, "end": 948, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In a comprehensive evaluation, we compare our solution to three baselines along with a competitive approach based on pointer networks (Vinyals et al., 2015) , which is the established method of choice not only for sentence ordering (Cui et al., 2018) , but also for basic domain-specific text structuring in data-to-text applications (Puduppully et al., 2019) . 
In particular, the comparison involves training and testing the different models on the MEGA-DT treebank (Huber and Carenini, 2020) , containing \u2248250,000 discourse trees obtained by distant supervision from a the Yelp'13 corpus of customer reviews (Tang et al., 2015) .", "cite_spans": [ { "start": 134, "end": 156, "text": "(Vinyals et al., 2015)", "ref_id": "BIBREF30" }, { "start": 232, "end": 250, "text": "(Cui et al., 2018)", "ref_id": "BIBREF3" }, { "start": 334, "end": 359, "text": "(Puduppully et al., 2019)", "ref_id": "BIBREF24" }, { "start": 467, "end": 493, "text": "(Huber and Carenini, 2020)", "ref_id": "BIBREF8" }, { "start": 610, "end": 629, "text": "(Tang et al., 2015)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With respect to evaluation metrics, we found the current ways of measuring content ordering (e.g., Kendall's \u03c4 ) to be inadequate to capture the quality of long sequences of relatively short information units (i.e., sequences of EDUs of long multi-sentential text). Thus, we propose a novel evaluation measure, Blocked Kendall's \u03c4 , that we argue should be used for our new NLG task of ordering and structuring a possibly large set of EDUs, because it critically measures how well semantically close units are clustered together in the correct order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To summarize the contributions of this paper: (i) we propose the new and domain-independent NLG task involving the structuring and ordering a set of EDUs, which is intended to enable more general and data-driven approaches to text structuring; (ii) we present a strong benchmark solution for this task, trainable on large discourse treebank, that combines neural dependency tree induction with pointer networks; (iii) we propose a new evaluation metric that is arguably much more suitable for this task than existing ordering metrics; (iv) and on this new metric along with standard tree-quality metrics, we show empirically that our approach outperforms or is comparable to competitive alternatives. The code for our solution and the new metric, as well as the treebank for training, is publicly available. 1 1 http://www.cs.ubc.ca/ cs-research/lci/research-groups/ natural-language-processing/index.html 2 Related Work (a) Text structuring is a key step in NLG, especially when generating long multi-sentential documents. Not surprisingly, this is also the case in recent neural approaches. Wiseman et al. (2017) presented the RotoWire corpus, targeting longdocument data-to-text NLG. To generate the document, their model conditions on all records in the data table by weighting their embeddings with attention, in addition to using copying mechanism for out-of-vocabulary data entries. The follow-up work of Puduppully et al. (2019) , instead of conditioning on all records, arguably performs better text structuring by first selecting and then ordering the entries of a data table using Pointer network architecture (Vinyals et al., 2015) . That way, the surface realization module considers previously generated text and only one new data table entry at a time. Their model was extended by Iso et al. (2019) , with an additional GRU for tracking the entities that the model already referred to in the past. Pursuing a rather different approach to improve text structuring, Shao et al. 
(2019) proposed a hierarchical latent-variable model where the problem is decomposed into dependent sub-tasks, aggregating groups of data table entries into sentences first and then generating the sentences sequentially, conditioned on the plan and already generated sentences. Overall, these last three models significantly outperform the initial approach of Wiseman et al. (2017) both in terms of fluency and coverage, with increasing sophistication of the text structuring module yielding bigger gains, confirming that text structuring is indeed crucial for generating coherent long documents.", "cite_spans": [ { "start": 1093, "end": 1114, "text": "Wiseman et al. (2017)", "ref_id": "BIBREF32" }, { "start": 1412, "end": 1436, "text": "Puduppully et al. (2019)", "ref_id": "BIBREF24" }, { "start": 1621, "end": 1643, "text": "(Vinyals et al., 2015)", "ref_id": "BIBREF30" }, { "start": 1796, "end": 1813, "text": "Iso et al. (2019)", "ref_id": "BIBREF9" }, { "start": 1979, "end": 1997, "text": "Shao et al. (2019)", "ref_id": "BIBREF26" }, { "start": 2351, "end": 2372, "text": "Wiseman et al. (2017)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The task we propose and investigate in this paper can be seen as pushing this line of research even further. We aim for a more ambitious text structuring module inspired by traditional NLG work, viewing the process as the construction of an RST discourse tree for the target document (Reiter and Dale, 2000) , which critically includes assigning importance to each constituent. Tellingly, our task is also domain-independent and agnostic on the representation of the input information units.", "cite_spans": [ { "start": 284, "end": 307, "text": "(Reiter and Dale, 2000)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(b) The goal of sentence ordering is to sort a given set of unordered sentences into a maximally coherent document. Most recent work on sentence ordering (Logeswaran et al., 2016; Cui et al., 2018; Wang and Wan, 2019) involves constructing contextualized order-agnostic representations of indi- ", "cite_spans": [ { "start": 154, "end": 179, "text": "(Logeswaran et al., 2016;", "ref_id": "BIBREF16" }, { "start": 180, "end": 197, "text": "Cui et al., 2018;", "ref_id": "BIBREF3" }, { "start": 198, "end": 217, "text": "Wang and Wan, 2019)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1 2 3 4 5 6 N N N N N N N (a) (b) N S S", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: (a) A simple example of the novel NLG task we propose in this paper: generating an ordered discourse dependency tree (right) for a given set of EDUs (left). (b) The constituency discourse tree corresponding to the dependency tree shown in (a). 
The RST-style discourse trees in the treebanks we use for our experiments are initially represented as constituency trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "vidual sentences and full documents using architectures such as Transformer Encoder without positional embeddings (Vaswani et al., 2017) , and then feeding those representations into a pointer-based decoder (Vinyals et al., 2015) .", "cite_spans": [ { "start": 114, "end": 136, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF29" }, { "start": 207, "end": 229, "text": "(Vinyals et al., 2015)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The new task we propose in this paper is similar, but more challenging than sentence ordering. Instead of ordering sentences, we need to order EDUs, which are often shorter sentence constituents, and therefore by expressing smaller semantic units they arguably require more finegrained processing. Furthermore, our task goes beyond ordering by also requiring the synergistic and simultaneous step of generating the RST discourse structure for the EDUs. To address these challenges, more powerful techniques for tree induction are needed on top of pointer networks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(c) Document discourse tree structure induction: The third related line of research involves the induction of latent tree structures over documents. Some of these works aim at obtaining better document representations for tasks such as text classification (Karimi and Tang, 2019) and single-document extractive summarization (Liu et al., 2019) . In essence, a neural framework is designed so that a discourse tree for a document is induced while training on the target downstream task. However, even if these approaches demonstrated improvements over non-tree-based models, subsequent studies have shown that the resulting latent discourse dependency trees are often trivial and too shallow (Ferracane et al., 2019) . In contrast, recent work on distant supervision from sentiment (Huber and Carenini, 2020) indicates that large treebanks of discourse trees can be generated by combining neural multiple-instance learning (Angelidis and Lapata, 2018 ) with a CKY-inspired algorithm (Jurafsky and Martin, 2014) . Since a series of experiments in inter-domain discourse parsing have certified the high-quality of these treebanks, we use one of such treebaks, called MEGA-DT, for training and testing our data-driven text structuring approach.", "cite_spans": [ { "start": 256, "end": 279, "text": "(Karimi and Tang, 2019)", "ref_id": "BIBREF11" }, { "start": 325, "end": 343, "text": "(Liu et al., 2019)", "ref_id": "BIBREF15" }, { "start": 691, "end": 715, "text": "(Ferracane et al., 2019)", "ref_id": "BIBREF5" }, { "start": 781, "end": 807, "text": "(Huber and Carenini, 2020)", "ref_id": "BIBREF8" }, { "start": 922, "end": 949, "text": "(Angelidis and Lapata, 2018", "ref_id": "BIBREF0" }, { "start": 982, "end": 1009, "text": "(Jurafsky and Martin, 2014)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our novel task for text structuring receives as input a set of n EDUs and returns both an ordering and a discourse structure for that set. We first describe how the EDUs are encoded, as this is the initial step for all the approaches we consider. 
Then, after discussing a basic method for just ordering the input EDUs (Pointer Networks), which will serve as our main baseline, we present our solution for fully solving the task in detail, which combines tree induction with pointer networks. We will refer to our final approach as DepStructurer. We conclude the section with two simple baselines for EDU ordering and structuring, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Novel Task and Methods", "sec_num": "3" }, { "text": "For a clear comparison of tree vs. non-tree based approaches, we encode EDUs in a very similar way to previous sentence ordering works (Cui et al., 2018; Wang and Wan, 2019) . Given a document with n EDUs e 1:n , with each EDU e i containing a list of m i words w 1:m i , the final output of the EDU encoder is a set v 1:n , v i \u2208 R d of embeddings of its EDUs. First, using the base version of the ALBERT language model (Lan et al., 2020) , we construct individual EDU embeddings b i \u2208 R 768 as the means of EDU word embeddings\u0175 1:m i from the last layer of ALBERT:", "cite_spans": [ { "start": 135, "end": 153, "text": "(Cui et al., 2018;", "ref_id": "BIBREF3" }, { "start": 154, "end": 173, "text": "Wang and Wan, 2019)", "ref_id": "BIBREF31" }, { "start": 421, "end": 439, "text": "(Lan et al., 2020)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "EDU Encoder", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "b i = 1 m i m i j=1\u0175 j", "eq_num": "(1)" } ], "section": "EDU Encoder", "sec_num": "3.1" }, { "text": "This language model was chosen because it uses a sentence-ordering objective during pre-training, see Lan et al. (2020) . The EDU embeddings are then fed into a Transformer Encoder (Vaswani et al., 2017) without positional embeddings, yielding contextualized EDU representations v 1:n :", "cite_spans": [ { "start": 102, "end": 119, "text": "Lan et al. (2020)", "ref_id": "BIBREF13" }, { "start": 181, "end": 203, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "EDU Encoder", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v 1:n = TransformerEncoder(b 1:n )", "eq_num": "(2)" } ], "section": "EDU Encoder", "sec_num": "3.1" }, { "text": "As Cui et al. 2018, we compute the final document representation z by averaging the document's EDU embeddings v 1:n .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EDU Encoder", "sec_num": "3.1" }, { "text": "Pointer networks are commonly used for sentence ordering tasks (Cui et al., 2018) and have been recently applied to basic text structuring in datato-text NLG (Puduppully et al., 2019) . 
We train a pointer network to maximize the probability of correct ordering o s of an unordered set of EDUs v 1:n as a sequence prediction:", "cite_spans": [ { "start": 63, "end": 81, "text": "(Cui et al., 2018)", "ref_id": "BIBREF3" }, { "start": 158, "end": 183, "text": "(Puduppully et al., 2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Predicting Order Only: Pointer Networks", "sec_num": "3.2" }, { "text": "P (o s |v 1:n ) = n i=1 P (o s i |o s i\u22121 , ..., o s 1 , v 1:n ) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicting Order Only: Pointer Networks", "sec_num": "3.2" }, { "text": "Here, each term in the product of probabilities is computed as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicting Order Only: Pointer Networks", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h j , c j = LST M (h j\u22121 , c j\u22121 , v i\u22121 ) (4) u j i = k T tanh(W 1 v i + W 2 h j ) (5) p(o i |o i\u22121 , .., o 1 , s) = sof tmax(u i )", "eq_num": "(6)" } ], "section": "Predicting Order Only: Pointer Networks", "sec_num": "3.2" }, { "text": "where k \u2208 R d and W 1 , W 2 \u2208 R d\u00d7d are learnable parameters and i, j \u2208 (1, ..., n) index into input and output sequences respectively. Similarly to (Vinyals et al., 2015) , we use the document embedding vector z as the initial hidden state and a vector of zeros as the first input to the pointer network. More specifically, during training, for each document s in our dataset D we feed in the EDU embeddings v i according to the gold document order o s and maximize the probability according to", "cite_spans": [ { "start": 149, "end": 171, "text": "(Vinyals et al., 2015)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Predicting Order Only: Pointer Networks", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b8 * = arg max \u03b8 s\u2208D log p(o * |s, \u03b8)", "eq_num": "(7)" } ], "section": "Predicting Order Only: Pointer Networks", "sec_num": "3.2" }, { "text": "During inference, since an exhaustive search over the most likely ordering is intractable, we use beam search for finding a suboptimal solution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicting Order Only: Pointer Networks", "sec_num": "3.2" }, { "text": "The first design choice in addressing the task of simultaneously structuring and ordering a set of EDUs is whether the system should learn how to build dependency or constituency discourse trees (see Figure 1 (a)-(b) for corresponding examples). We decided to aim for dependency discourse structures for two key reasons. Not only have they been shown to be more general and expressive (Morey et al., 2018) , but there are also readily available graph-based methods for learning and inference of dependency trees (Ma and Hovy, 2017 ) that when properly combined enable structure and ordering prediction to benefit from each other. However, since the only large-scale discourse treebank for training (MEGA-DT) contains constituency trees, we first convert them into dependency ones. For this, we follow the protocol of (Hayashi et al., 2016) , which effectively resolves the ambiguity involved in converting multinuclear constituency units. 
Notice that we want dependency trees that fully specify the ordering for the EDUs, so our translation algorithm also labels each dependency arc with label -L or R, denoting whether the modifier node should be on the left (L) or on the right (R) of the head node in the linearized EDU sequence.", "cite_spans": [ { "start": 385, "end": 405, "text": "(Morey et al., 2018)", "ref_id": "BIBREF20" }, { "start": 512, "end": 530, "text": "(Ma and Hovy, 2017", "ref_id": "BIBREF18" }, { "start": 817, "end": 839, "text": "(Hayashi et al., 2016)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 200, "end": 208, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Performing the whole task: Our DepStructurer", "sec_num": "3.3" }, { "text": "Once the training data is generated as a dependency treebank, our two-step solution for the task of structuring and ordering a set of EDUs can be applied. Notice that the same EDU embeddings v 1:n are reused in both steps -for tree induction (Step 1 \u00a73.3.1) and child ordering (Step 2 \u00a73.3.2). These embeddings are generated by training a single EDU Encoder as described in \u00a73.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performing the whole task: Our DepStructurer", "sec_num": "3.3" }, { "text": "The first step of our solution learns how to build a discourse dependency tree for the input sequence of EDU embeddings v 1:n . Formally, this can be framed as learning a compatibility matrix (edge score tensor more precisely) M \u2208 R n\u00d7n\u00d72 , where the last dimension of l an entry i, j corresponds to the scores of the labels L and R for the edge from node i to node j. Similarly to (Ma and Hovy, 2017) , each entry is computed as follows:", "cite_spans": [ { "start": 382, "end": 401, "text": "(Ma and Hovy, 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "M i,j = v T i W 1 v j + W 2 v i + W 3 v j + b (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "W 1 \u2208 R d\u00d7d\u00d72 , W 2 \u2208 R d\u00d72 and W 3 \u2208 R d\u00d72 , b \u2208 R 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "are learnable bilinear, linear and bias terms. 
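For concreteness, a minimal PyTorch-style sketch of this biaffine edge scorer could look as follows (our own illustrative code and naming, not the released implementation; d = 512 follows Appendix A, and the last axis holds the two arc labels L and R):

import torch
import torch.nn as nn

class EdgeScorer(nn.Module):
    # Sketch of eq. 8: M[i, j, l] scores an arc from head EDU i to modifier EDU j with label l
    def __init__(self, d=512, n_labels=2):
        super().__init__()
        self.W1 = nn.Parameter(torch.empty(n_labels, d, d))   # bilinear term, one d x d matrix per label
        self.W2 = nn.Linear(d, n_labels, bias=False)          # linear term for the head EDU
        self.W3 = nn.Linear(d, n_labels, bias=False)          # linear term for the modifier EDU
        self.b = nn.Parameter(torch.zeros(n_labels))          # bias term
        nn.init.xavier_uniform_(self.W1)

    def forward(self, v):
        # v: (n, d) contextualized EDU embeddings from the encoder of Section 3.1; returns M of shape (n, n, 2)
        bilinear = torch.einsum('id,ldk,jk->ijl', v, self.W1, v)   # v_i^T W1 v_j, per label
        return bilinear + self.W2(v).unsqueeze(1) + self.W3(v).unsqueeze(0) + self.b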
Once the tensor M is predicted, it is used for inferring an initial dependency structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "Learning M : The objective is to maximize the probability of the correct tree structure y:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (y|e 1:n , \u03b8) = exp (v i ,v j ,l)\u2208y M i,j,l Z(e 1:n , \u03b8)", "eq_num": "(9)" } ], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "where Z(e 1:n , \u03b8) =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "y\u2208T (e 1:n )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "exp (v i ,v j ,l)\u2208y M i,j,l", "eq_num": "(10)" } ], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "with T (e 1:n ) denoting all possible trees from a set of EDUs e 1:n . Since the number |T (e 1:n )| of possible trees grows exponentially with the number of EDUs, we need an efficient way of computing Z(e 1:n , \u03b8). Following (Koo et al., 2007) , we achieve this goal by first computing the weighted adjacency matrix A(M ) \u2208 R n\u00d7n\u00d72 for left-child and right-child edges:", "cite_spans": [ { "start": 226, "end": 244, "text": "(Koo et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A i,j,l = 0, if i = j exp{M i,j,l } otherwise", "eq_num": "(11)" } ], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "as well as the root scores for each node:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r i (v) = exp{M LP (v i )}", "eq_num": "(12)" } ], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "Then, the weight of the correct dependency structure y is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c8(y, \u03b8) = r root(y) (v) i,j,l\u2208y A i,j,l", "eq_num": "(13)" } ], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "where root(y) is the child of the root node in the dependency tree. 
We then compute the Laplacian matrix L of G:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L i,j = n i =1 2 l=1 A i ,j,l , if i = j 2 l=1 \u2212A i,j,l otherwise", "eq_num": "(14)" } ], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "and replace its first row by r(v):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L i,j = r i (v), if i = 1 L i,j otherwise", "eq_num": "(15)" } ], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "It can be shown (Koo et al., 2007) that the determinant ofL is in fact equal to the normalizing constant that we need:", "cite_spans": [ { "start": 16, "end": 34, "text": "(Koo et al., 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Z(e 1:n , \u03b8) = |L|", "eq_num": "(16)" } ], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "which takes O(n 3 ) time to compute. Hence, the loss for tree construction derived from eq. 9 can be computed efficiently:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "l tree (\u03b8) = \u2212 log \u03c8(y, \u03b8) + log Z(e 1:n , \u03b8) (17)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "Inference of the initial tree structure: The learned model is applied to the input sequence of EDU embeddings v 1:n . Then, using the predicted compatibility matrix M , the highest-weighting tree structure can be constructed by the Chu-Liu-Edmonds algorithm (Edmonds, 1967) , with the root being the node with highest root score r i (eq. 12). Figure 2 (a) shows a sample output of this process.", "cite_spans": [ { "start": 258, "end": 273, "text": "(Edmonds, 1967)", "ref_id": null } ], "ref_spans": [ { "start": 343, "end": 351, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Step 1: Compatibility Matrix and Initial Tree Induction", "sec_num": "3.3.1" }, { "text": "The key limitation of Step 1 is that some nodes in the resulting dependency tree can have multiple left or right children, which makes their relative order unrecoverable from the basic tree structure. For instance, this is the case for nodes 1 and 2 in Figure 2 (a), both of which have two left children (outgoing edges labeled by L). 
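For completeness, the Step 1 training objective above (eqs. 11-17) can be sketched in a few lines of PyTorch-style code; this is our own illustrative implementation for a single document, assuming gold head indices, L/R labels and the root child are available from the converted dependency trees, and it is not the released code:

import torch

def step1_tree_loss(M, r, head, label, root):
    # M: (n, n, 2) edge scores from eq. 8; r: (n,) root scores before the exponentiation of eq. 12
    # head[j], label[j]: gold head index and L/R label (0/1) of EDU j; root: index of the root child
    n = M.size(0)
    A = M.exp() * (1.0 - torch.eye(n)).unsqueeze(-1)    # eq. 11, with the diagonal zeroed
    A2 = A.sum(-1)                                      # marginalize over the two labels
    lap = torch.diag(A2.sum(0)) - A2                    # Laplacian of eq. 14
    lap_hat = lap.clone()
    lap_hat[0] = r.exp()                                # eq. 15: first row holds the root weights
    log_Z = torch.logdet(lap_hat)                       # eq. 16, computed in log space
    log_psi = r[root] + sum(M[head[j], j, label[j]]     # log of the gold-tree weight, eq. 13
                            for j in range(n) if j != root)
    return log_Z - log_psi                              # negative log-likelihood, eq. 17

Note, however, that the tree decoded from M with the Chu-Liu-Edmonds algorithm can still leave several same-side children of the same head unordered, as in the example just mentioned.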
To address this issue, in", "cite_spans": [], "ref_spans": [ { "start": 253, "end": 261, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Step 2: Ordering Children", "sec_num": "3.3.2" }, { "text": "Step 2 for every node s i \u2208 s 1:n that has k > 1 left or right children s i 1 , ..., s i k , we train a pointer network that predicts the correct order of children on each side -in the same way as described in \u00a7 3.2, but specifically trained on groups of children in MEGA-DT. Inference of final ordering: The pseudocode for predicting the final ordering is provided in Algorithm 1. The ordering is built recursively bottomup -at each step, given the ordering of all left and right subtrees (recursive calls in lines 4, 9), the ordering is obtained by concatenating, in the order predicted by Pointer network (lines 2, 7) , the orderings of those subtrees, together with the current root node (line 6). Specifically, the children are ordered according to their root node; for example in Figure 2 (b)(top), when deciding the order for child subtrees rooted at nodes 2,6 for the node 1, the pointer network orders them using the embeddings for those nodes.", "cite_spans": [], "ref_spans": [ { "start": 608, "end": 620, "text": "(lines 2, 7)", "ref_id": "FIGREF0" }, { "start": 786, "end": 794, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Step 2: Ordering Children", "sec_num": "3.3.2" }, { "text": "Language model decoding (LMD): greedily predicts the linear EDU ordering. The next EDU at each timestep is the one maximizing the length normalized language modelling objective from AL-BERT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines for Ordering and Full Task", "sec_num": "3.4" }, { "text": "Unsupervised tree induction (UTI): computes the compatibility matrix M using cosine similarity between the means of ALBERT embeddings for each EDU. The label for dependency (left vs. right child) is chosen randomly, while dependent orders for nodes with multiple children are chosen according to above ordering baseline LMD.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines for Ordering and Full Task", "sec_num": "3.4" }, { "text": "Tree Induction (TI+LMD): being an ablation for our main model, this baseline only learns to induce the tree structure in the same way as DepStructurer, but orders the children as in LMD, without performing supervised leaf ordering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines for Ordering and Full Task", "sec_num": "3.4" }, { "text": "Our evaluation relies on MEGA-DT, a discourse treebank generated by distant supervision from the Yelp'13 corpus of customer reviews (Tang et al., 2015) , according to the method presented by Huber and Carenini (2020) . The high-quality of MEGA-DT trees has been certified in experiments in interdomain discourse parsing similar to the ones described in (Huber and Carenini, 2020) . In practice, their approach for generating the discourse trees for a set of documents can be applied to any other genre. If the required sentiment annotation is not naturally available (like star ratings for customer reviews), it can be obtained from an off-the-shelf sentiment analyzer. We train all models on 100k and 215k subsets of MEGA-DT, and use 7.5k documents for development and 15k for testing. 
Due to memory requirements induced by fine-tuning ALBERT, the training splits only contain documents with fewer than 35 EDUs; to evaluate performance on longer documents as well, the development and test sets contain 2.5k and 5k longer documents, respectively. The project GitHub repository provides the exact splits.", "cite_spans": [ { "start": 132, "end": 151, "text": "(Tang et al., 2015)", "ref_id": "BIBREF28" }, { "start": 191, "end": 216, "text": "Huber and Carenini (2020)", "ref_id": "BIBREF8" }, { "start": 353, "end": 379, "text": "(Huber and Carenini, 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "The MEGA-DT Dataset", "sec_num": "4.1" }, { "text": "In all experiments, we assess the quality of the EDU ordering and of the tree structure independently, with two corresponding sets of metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.2" }, { "text": "Measuring the quality of information ordering is challenging because different metrics can be more or less appropriate depending on the number and the nature/granularity of the information units being ordered. In line with previous work, we first consider a set of simple metrics that essentially penalize the distance of an information unit from its correct position. These include Kendall's \u03c4, Position Accuracy (POS), and Perfect Match Ratio (PMR). Then, we propose a new, more sophisticated metric, which is arguably much more appropriate for longer sequences of relatively short information units (i.e., sequences of EDUs of long multi-sentential text). This metric, which we call Blocked Kendall's \u03c4, rewards a correctly ordered sub-sequence even if its location is shifted as a single block.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Ordering Metrics", "sec_num": "4.2.1" }, { "text": "Kendall's \u03c4: a metric of rank correlation widely used for information ordering evaluation and found to correlate with human judgement (Lapata, 2006) . 
It is computed as follows:", "cite_spans": [ { "start": 133, "end": 147, "text": "(Lapata, 2006)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Information Ordering Metrics", "sec_num": "4.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 |D| o i \u2208D \u03c4\u00f4 i", "eq_num": "(18)" } ], "section": "Information Ordering Metrics", "sec_num": "4.2.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Ordering Metrics", "sec_num": "4.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c4\u00f4 i = 1 \u2212 2 * # of transpositions n 2", "eq_num": "(19)" } ], "section": "Information Ordering Metrics", "sec_num": "4.2.1" }, { "text": "Position Accuracy (POS) computes the average proportion of EDUs that are in their correct absolute position according to the gold ordering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Information Ordering Metrics", "sec_num": "4.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "o i \u2208D count(\u00f4 i = o i ) length(o i )", "eq_num": "(20)" } ], "section": "|D|", "sec_num": "1" }, { "text": "Perfect Match Ratio (PMR) is the strictest metric, measuring the proportion of documents where positions of all EDUs are predicted correctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 |D| o i \u2208D 1(\u00f4 i = o i )", "eq_num": "(21)" } ], "section": "|D|", "sec_num": "1" }, { "text": "The new metric Blocked Kendall's \u03c4 : All metrics from previous work simply penalize the distance of an information unit from its correct position. However, ideally, a good metric for information ordering should also capture how well semantically close units are clustered together. This aspect is even more critical when ordering discourse units of long documents -oftentimes, paragraphs or groups of sentences are largely independent in their meaning from other parts of text, so as long as a paragraph's subset of EDUs is ordered correctly, placing it in a different position should not be penalized harshly. As a short example, given the correct ordering o c [1, 2, 3, 4, 5] , all aforementioned metrics would give a low score to the predicted ordering o p [3, 4, 5, 1, 2] -zero for PMR and POS, and -0.2 for Kendall's \u03c4 . Yet, since the blocks [1, 2] and [3, 4, 5] are preserved in o p , it makes sense to penalize this ordering for only one transposition, and not for twelve like Kendall's \u03c4 does. Arguably, these blocks of EDUs are likely to be much more coherent and interpretable than random sequences. Therefore, we propose a modification for Kendall's \u03c4 that treats the correctly ordered blocks as single units. 
For the example above with n = 5, we first merge the correct blocks into single units (indexed by the first EDU in the block), so [3, 4, 5, 1, 2] \u2192 [3, 1], and compute the Kendall's \u03c4 on the resulting reduced sequence:", "cite_spans": [ { "start": 662, "end": 665, "text": "[1,", "ref_id": null }, { "start": 666, "end": 668, "text": "2,", "ref_id": null }, { "start": 669, "end": 671, "text": "3,", "ref_id": null }, { "start": 672, "end": 674, "text": "4,", "ref_id": null }, { "start": 675, "end": 677, "text": "5]", "ref_id": null }, { "start": 848, "end": 868, "text": "[1, 2] and [3, 4, 5]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Block \u03c4\u00f4 i = 1 \u2212 2 # block transpositions n 2", "eq_num": "(22)" } ], "section": "|D|", "sec_num": "1" }, { "text": "The number of transpositions can be at least zero (if the sequence is perfectly ordered) and at most n 2 , if the sequence is in reversed order. Thus, Blocked Kendall's \u03c4 has the same range [\u22121, 1] and is lower bounded by the standard Kendall's \u03c4 , with the key advantage of rewarding correct blocks of EDUs. We also note that our proposed measure and the standard Kendall's \u03c4 are not metrics in mathematical sense, as they both give a score of 1 to perfectly ordered sequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "|D|", "sec_num": "1" }, { "text": "UAS and LAS: Unlabelled and labelled attachment scores are the most commonly used measures for evaluation of dependency parsers:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Structure Metrics", "sec_num": "4.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "UAS = {e|e \u2208 E G \u2229 E P } |V |", "eq_num": "(23)" } ], "section": "Tree Structure Metrics", "sec_num": "4.2.2" }, { "text": "LAS = {e|l G (e) = l P (e), e \u2208 E G \u2229 E P } |V |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Structure Metrics", "sec_num": "4.2.2" }, { "text": "where V is the set of EDUs, E G , E P are the sets of gold and predicted edges, and l G (e) is the label of edge e in G.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Structure Metrics", "sec_num": "4.2.2" }, { "text": "Results are presented in and surprisingly, our TI+LMD baseline also outperforms the Pointer Network on the full test set and has the performance similar to it on the longdocument subset. In contrast, results are mixed for ordering metrics from previous work (last column), which as we have argued in \u00a74.2.1 are however less appropriate for our text structuring task. Interestingly, all trainable models (Pointer Networks \u00a73.2, our DepStructurer \u00a73.3 and TI+LMD \u00a73.4) benefit from more training data (100K \u2192 215K), with equal or even bigger absolute gains for the DepStructurer, especially on the new metric. 
This validates the quality of the MEGA-DT treebank and suggests that training on larger corpora could increase performance even further.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative and Qualitative Results", "sec_num": "5" }, { "text": "Focusing on the performance of tree induction systems, our DepStructurer outperforms the unsupervised model (UTI) by a wide margin and performs nearly identically to the TI+LMD model, indicating that a trainable tree induction model is essential for obtaining much more accurate trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative and Qualitative Results", "sec_num": "5" }, { "text": "Lastly, among the unsupervised models, UTI outperforms LMD across all metrics. This suggests that even without training, forcing a model to generate a tree structure is by itself a useful inductive bias.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative and Qualitative Results", "sec_num": "5" }, { "text": "To highlight the strengths and potential weaknesses of our solution and new metric, we analyze the output of the DepStructurer and Pointer models for two medium-length illustrative sample documents with 16 and 14 EDUs, respectively (see Figures 3 and 4) . In each figure, the top row indicates the ordering output of the DepStructurer, the middle row is the gold (i.e., correct) ordering, and the bottom row is the Pointer's output. We color-coded the blocks that each model predicted correctly, with the highlights in the middle gold ordering denoting whether the top or bottom model predicted that block correctly. Additionally, for both examples, we show the predicted tree dependency edges within the blocks on top of the DepStructurer ordering.", "cite_spans": [], "ref_spans": [ { "start": 236, "end": 252, "text": "Figures 3 and 4)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Quantitative and Qualitative Results", "sec_num": "5" }, { "text": "Figure 4: Example illustrating the benefits of the new metric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Quantitative and Qualitative Results", "sec_num": "5" }, { "text": "The main structural benefit of the DepStructurer can be clearly seen in Figure 3 : adjacent EDUs tend to form subtrees, whose nodes the model learns to place close together. In the case of the Pointer model, however, even though it was able to infer a reasonable approximate ordering, with EDUs 1, 3, 2 and EDUs 15, 12, 16 placed respectively at the beginning and the end of the sequence, it failed to arrange them into coherent blocks. In Figure 4 , we can see an example where the DepStructurer scores under the standard and the Blocked Kendall's \u03c4 are very different (\u221236.3 vs. 34.1), while the two scores coincide at \u22129.9 for the Pointer model. This example clearly illustrates the benefit of our new metric for text structuring. 
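To make the difference concrete, the following small, self-contained sketch (our own code; EDU positions are 0-indexed and the gold order is assumed to be the identity permutation) computes both scores for the toy ordering discussed in \u00a74.2.1:

from itertools import combinations, groupby

def kendalls_tau(pred):
    # standard Kendall's tau of eq. 19, measured against the gold order 0, 1, ..., n-1
    n = len(pred)
    transpositions = sum(1 for a, b in combinations(pred, 2) if a > b)
    return 1 - 2 * transpositions / (n * (n - 1) / 2)

def blocked_kendalls_tau(pred):
    # Blocked Kendall's tau of eq. 22: maximal correctly ordered runs collapse into single units,
    # while the normalizer keeps the original n, so the score lower-bounds the standard one
    n = len(pred)
    runs = groupby(enumerate(pred), key=lambda x: x[1] - x[0])
    blocks = [next(run)[1] for _, run in runs]
    transpositions = sum(1 for a, b in combinations(blocks, 2) if a > b)
    return 1 - 2 * transpositions / (n * (n - 1) / 2)

print(kendalls_tau([2, 3, 4, 0, 1]))          # -0.2 (up to floating-point rounding)
print(blocked_kendalls_tau([2, 3, 4, 0, 1]))  # 0.8

Here [2, 3, 4, 0, 1] is the 0-indexed version of the ordering [3, 4, 5, 1, 2] from \u00a74.2.1, and the two scores reproduce the kind of gap observed for the DepStructurer in Figure 4 .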
While both models made poor predictions with respect to the distance of each EDU to its correct position, our DepStructurer arguably learned a much more coherent document structure by better grouping related information, which is reflected in the Blocked metric, but is ignored by the standard Kendall's \u03c4 .", "cite_spans": [], "ref_spans": [ { "start": 188, "end": 196, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 569, "end": 577, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Quantitative and Qualitative Results", "sec_num": "5" }, { "text": "By proposing the domain-independent task of structuring and ordering a set of EDUs, we aim to stimulate more general and data-driven approaches for text structuring. The solution we have developed for such task combines neural dependency tree induction with pointer networks, which are both trainable on large discourse treebanks. Since existing text ordering metrics are not capturing key aspects of text structuring, we have also proposed a new metric that is arguably much more suitable for the task. In a series of experiments, complemented by qualitative error analysis, we have shown that our solution delivers top performance and represents a promising initial framework for further developments. Fruitful directions for future work include:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future work", "sec_num": "6" }, { "text": "(1) Exploring more recent techniques for tree induction, such as pointer-based and higher-order dependency parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future work", "sec_num": "6" }, { "text": "(2) Integrating our approach into existing long-document data-to-text NLG pipelines such as Puduppully et al. (2019) , to explore the benefits of content structuring pre-training for datato-text applications.", "cite_spans": [ { "start": 92, "end": 116, "text": "Puduppully et al. (2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future work", "sec_num": "6" }, { "text": "(3) Verifying the validity of our proposed measure for ordering textual units of long documents (i.e. correlation with human judgement), as well as exploring further metrics for text structuring. (4) Extending our approach to fullylabelled RST discourse trees involving nuclearity and relation annotations, which can be obtained from state-of-the-art RST discourse parsers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future work", "sec_num": "6" } ], "back_matter": [ { "text": "For the Pointer Model \u00a73.2, similarly to (Cui et al., 2018) , the hidden state size in the decoder and transformer EDU encoder is 512, and beam size is 64. Also, as in Cui et al. (2018) , the 4-layer Transformer has 8 attention heads. For the Dependency Model \u00a73.3, the edge prediction weights have d = 512, and we choose the highest-scoring tree among the top-5 root classifier predictions during inference. The 768-dimensional outputs of ALBERT are transformed with a dense layer to match the dimensionality of EDU encoder. We use AdamW optimizer (Loshchilov and Hutter, 2019) with default weight decay 0.01 and learning rate 0.001, and clip gradient norm at 0.2. The learning rate scheduling rule as in (Vaswani et al., 2017) has 4000 warm-up steps. We apply word dropout (Srivastava et al., 2014) to outputs of ALBERT and of the contextual EDU encoder. 
We tune dropout value using 15k training subset, selecting among [0, 0.05, 0.15, 0.3], with best values 0.15 for Pointer and 0 for the Dependency Model. All models are trained using early stopping if validation loss did not decrease for three epochs. As only 1% of EDUs have length > 20 word tokens, we clip each EDU's size at 50 ALBERT tokenizer tokens (since it keeps spaces). Batch size for all models is 2 -the highest that could fit into a single GTX 1080 Ti GPU with 11 GB of memory.", "cite_spans": [ { "start": 41, "end": 59, "text": "(Cui et al., 2018)", "ref_id": "BIBREF3" }, { "start": 168, "end": 185, "text": "Cui et al. (2018)", "ref_id": "BIBREF3" }, { "start": 549, "end": 578, "text": "(Loshchilov and Hutter, 2019)", "ref_id": "BIBREF17" }, { "start": 706, "end": 728, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF29" }, { "start": 775, "end": 800, "text": "(Srivastava et al., 2014)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "A Hyperparameters and training setup", "sec_num": null }, { "text": "See the next page. 13: i simply love their gyros! 10: it is set up like sauce 7: the food is cooked fresh 8: for you 9: so there will be a short wait. 12: and they bring the food to you. 4: the interior is cutesy and bright 11: where you order at the cashier area 5: while upbeat music is playing. 6: they have a small outdoor seating area and some booths and tables inside. 3: it's tucked away in a strip plaza shockingly! 1: i hope more people are frequenting this place 2: since i was last there. 14: it's relatively quick but always fresh and inexpensive! Gold:1: i hope more people are frequenting this place 2: since i was last there. 3: it's tucked away in a strip plaza shockingly! 4: the interior is cutesy and bright 5: while upbeat music is playing. 6: they have a small outdoor seating area and some booths and tables inside. 7: the food is cooked fresh 8: for you 9: so there will be a short wait. 10: it is set up like sauce 11: where you order at the cashier area, 12: and they bring the food to you. 13: i simply love their gyros! 14: it's relatively quick but always fresh and inexpensive! Pointer:13: i simply love their gyros! 10: it is set up like sauce 7: the food is cooked fresh 3: it's tucked away in a strip plaza shockingly! 2: since i was last there. 9: so there will be a short wait. 11: where you order at the cashier area 8: for you, 5: while upbeat music is playing. 4: the interior is cutesy and bright 6: they have a small outdoor seating area and some booths and tables inside. 14: it's relatively quick but always fresh and inexpensive! 1: i hope more people are frequenting this place 12: and they bring the food to you. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B EDU Ordering Examples", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Multiple instance learning networks for fine-grained sentiment analysis", "authors": [ { "first": "Stefanos", "middle": [], "last": "Angelidis", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "17--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefanos Angelidis and Mirella Lapata. 2018. Multi- ple instance learning networks for fine-grained sen- timent analysis. 
Transactions of the Association for Computational Linguistics, 6:17-31.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Constrained decoding for neural NLG from compositional representations in task-oriented dialogue", "authors": [ { "first": "Anusha", "middle": [], "last": "Balakrishnan", "suffix": "" }, { "first": "Jinfeng", "middle": [], "last": "Rao", "suffix": "" }, { "first": "Kartikeya", "middle": [], "last": "Upasani", "suffix": "" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "" }, { "first": "Rajen", "middle": [], "last": "Subba", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "831--844", "other_ids": { "DOI": [ "10.18653/v1/P19-1080" ] }, "num": null, "urls": [], "raw_text": "Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019. Con- strained decoding for neural NLG from composi- tional representations in task-oriented dialogue. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 831- 844, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning to sportscast: A test of grounded language acquisition", "authors": [ { "first": "L", "middle": [], "last": "David", "suffix": "" }, { "first": "Raymond", "middle": [ "J" ], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 25th International Conference on Machine Learning, ICML '08", "volume": "", "issue": "", "pages": "128--135", "other_ids": { "DOI": [ "10.1145/1390156.1390173" ] }, "num": null, "urls": [], "raw_text": "David L. Chen and Raymond J. Mooney. 2008. Learn- ing to sportscast: A test of grounded language ac- quisition. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, page 128-135, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Deep attentive sentence ordering network", "authors": [ { "first": "Baiyun", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Yingming", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Zhongfei", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4340--4349", "other_ids": { "DOI": [ "10.18653/v1/D18-1465" ] }, "num": null, "urls": [], "raw_text": "Baiyun Cui, Yingming Li, Ming Chen, and Zhongfei Zhang. 2018. Deep attentive sentence ordering net- work. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4340-4349, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Optimum branchings. JOUR-NAL OF RESEARCH of the National Bureau of Standards -B. Mathematics and Mathematical Physics", "authors": [], "year": 1967, "venue": "", "volume": "71", "issue": "", "pages": "233--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jack Edmonds. 1967. Optimum branchings. JOUR- NAL OF RESEARCH of the National Bureau of Stan- dards -B. 
Mathematics and Mathematical Physics, 71B:233-240.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Evaluating discourse in structured text representations", "authors": [ { "first": "Elisa", "middle": [], "last": "Ferracane", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Junyi", "middle": [ "Jessy" ], "last": "Li", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "646--653", "other_ids": { "DOI": [ "10.18653/v1/P19-1062" ] }, "num": null, "urls": [], "raw_text": "Elisa Ferracane, Greg Durrett, Junyi Jessy Li, and Ka- trin Erk. 2019. Evaluating discourse in structured text representations. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 646-653, Florence, Italy. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Survey of the state of the art in natural language generation: Core tasks, applications and evaluation", "authors": [ { "first": "Albert", "middle": [], "last": "Gatt", "suffix": "" }, { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" } ], "year": 2017, "venue": "Journal of Artificial Intelligence Research", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1613/jair.5714" ] }, "num": null, "urls": [], "raw_text": "Albert Gatt and Emiel Krahmer. 2017. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artifi- cial Intelligence Research, 61.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Empirical comparison of dependency conversions for RST discourse trees", "authors": [ { "first": "Katsuhiko", "middle": [], "last": "Hayashi", "suffix": "" }, { "first": "Tsutomu", "middle": [], "last": "Hirao", "suffix": "" }, { "first": "Masaaki", "middle": [], "last": "Nagata", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "128--136", "other_ids": { "DOI": [ "10.18653/v1/W16-3616" ] }, "num": null, "urls": [], "raw_text": "Katsuhiko Hayashi, Tsutomu Hirao, and Masaaki Na- gata. 2016. Empirical comparison of dependency conversions for RST discourse trees. In Proceed- ings of the 17th Annual Meeting of the Special Inter- est Group on Discourse and Dialogue, pages 128- 136, Los Angeles. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Mega rst discourse treebanks with structure and nuclearity from scalable distant sentiment supervision", "authors": [ { "first": "Patrick", "middle": [], "last": "Huber", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Carenini", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Huber and Giuseppe Carenini. 2020. Mega rst discourse treebanks with structure and nuclear- ity from scalable distant sentiment supervision. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning to select, track, and generate for data-to-text", "authors": [ { "first": "Hayate", "middle": [], "last": "Iso", "suffix": "" }, { "first": "Yui", "middle": [], "last": "Uehara", "suffix": "" }, { "first": "Tatsuya", "middle": [], "last": "Ishigaki", "suffix": "" }, { "first": "Hiroshi", "middle": [], "last": "Noji", "suffix": "" }, { "first": "Eiji", "middle": [], "last": "Aramaki", "suffix": "" }, { "first": "Ichiro", "middle": [], "last": "Kobayashi", "suffix": "" }, { "first": "Yusuke", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "Naoaki", "middle": [], "last": "Okazaki", "suffix": "" }, { "first": "Hiroya", "middle": [], "last": "Takamura", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2102--2113", "other_ids": { "DOI": [ "10.18653/v1/P19-1202" ] }, "num": null, "urls": [], "raw_text": "Hayate Iso, Yui Uehara, Tatsuya Ishigaki, Hiroshi Noji, Eiji Aramaki, Ichiro Kobayashi, Yusuke Miyao, Naoaki Okazaki, and Hiroya Takamura. 2019. Learning to select, track, and generate for data-to-text. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2102-2113, Florence, Italy. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Speech and language processing", "authors": [ { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "H", "middle": [], "last": "James", "suffix": "" }, { "first": "", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2014, "venue": "", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Jurafsky and James H Martin. 2014. Speech and language processing, volume 3. Pearson London.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning hierarchical discourse-level structure for fake news detection", "authors": [ { "first": "Hamid", "middle": [], "last": "Karimi", "suffix": "" }, { "first": "Jiliang", "middle": [], "last": "Tang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3432--3442", "other_ids": { "DOI": [ "10.18653/v1/N19-1347" ] }, "num": null, "urls": [], "raw_text": "Hamid Karimi and Jiliang Tang. 2019. Learning hier- archical discourse-level structure for fake news de- tection. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 3432-3442, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Structured prediction models via the matrix-tree theorem", "authors": [ { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Globerson", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", "volume": "", "issue": "", "pages": "141--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured prediction mod- els via the matrix-tree theorem. In Proceedings of the 2007 Joint Conference on Empirical Meth- ods in Natural Language Processing and Com- putational Natural Language Learning (EMNLP- CoNLL), pages 141-150, Prague, Czech Republic. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Albert: A lite bert for self-supervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In International Con- ference on Learning Representations.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Automatic evaluation of information ordering: Kendall's tau", "authors": [ { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2006, "venue": "Computational Linguistics", "volume": "32", "issue": "4", "pages": "471--484", "other_ids": { "DOI": [ "10.1162/coli.2006.32.4.471" ] }, "num": null, "urls": [], "raw_text": "Mirella Lapata. 2006. Automatic evaluation of infor- mation ordering: Kendall's tau. Computational Lin- guistics, 32(4):471-484.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Single document summarization as tree induction", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1745--1755", "other_ids": { "DOI": [ "10.18653/v1/N19-1173" ] }, "num": null, "urls": [], "raw_text": "Yang Liu, Ivan Titov, and Mirella Lapata. 2019. Single document summarization as tree induction. 
In Pro- ceedings of the 2019 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long and Short Papers), pages 1745-1755, Minneapolis, Minnesota. Association for Computa- tional Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Sentence ordering and coherence modeling using recurrent neural networks", "authors": [ { "first": "Lajanugen", "middle": [], "last": "Logeswaran", "suffix": "" }, { "first": "Honglak", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Dragomir", "middle": [ "R" ], "last": "Radev", "suffix": "" } ], "year": 2016, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lajanugen Logeswaran, Honglak Lee, and Dragomir R. Radev. 2016. Sentence ordering and coherence mod- eling using recurrent neural networks. In AAAI.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Decoupled weight decay regularization", "authors": [ { "first": "Ilya", "middle": [], "last": "Loshchilov", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Hutter", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Con- ference on Learning Representations.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural probabilistic model for non-projective MST parsing", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "59--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Eduard Hovy. 2017. Neural proba- bilistic model for non-projective MST parsing. In Proceedings of the Eighth International Joint Con- ference on Natural Language Processing (Volume 1: Long Papers), pages 59-69, Taipei, Taiwan. Asian Federation of Natural Language Processing.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse", "authors": [ { "first": "C", "middle": [], "last": "William", "suffix": "" }, { "first": "Sandra", "middle": [ "A" ], "last": "Mann", "suffix": "" }, { "first": "", "middle": [], "last": "Thompson", "suffix": "" } ], "year": 1988, "venue": "", "volume": "8", "issue": "", "pages": "243--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional the- ory of text organization. 
Text-Interdisciplinary Jour- nal for the Study of Discourse, 8(3):243-281.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A dependency perspective on RST discourse parsing and evaluation", "authors": [ { "first": "Mathieu", "middle": [], "last": "Morey", "suffix": "" }, { "first": "Philippe", "middle": [], "last": "Muller", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Asher", "suffix": "" } ], "year": 2018, "venue": "Computational Linguistics", "volume": "44", "issue": "2", "pages": "197--235", "other_ids": { "DOI": [ "10.1162/COLI_a_00314" ] }, "num": null, "urls": [], "raw_text": "Mathieu Morey, Philippe Muller, and Nicholas Asher. 2018. A dependency perspective on RST discourse parsing and evaluation. Computational Linguistics, 44(2):197-235.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A Survey of Text Summarization Techniques", "authors": [ { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "43--76", "other_ids": { "DOI": [ "10.1007/978-1-4614-3223-4_3" ] }, "num": null, "urls": [], "raw_text": "Ani Nenkova and Kathleen McKeown. 2012. A Sur- vey of Text Summarization Techniques, pages 43-76.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Interacting with financial data using natural language", "authors": [ { "first": "Vassilis", "middle": [], "last": "Plachouras", "suffix": "" }, { "first": "Charese", "middle": [], "last": "Smiley", "suffix": "" }, { "first": "Hiroko", "middle": [], "last": "Bretz", "suffix": "" }, { "first": "Ola", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Jochen", "middle": [ "L" ], "last": "Leidner", "suffix": "" }, { "first": "Dezhao", "middle": [], "last": "Song", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Schilder", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '16", "volume": "", "issue": "", "pages": "1121--1124", "other_ids": { "DOI": [ "10.1145/2911451.2911457" ] }, "num": null, "urls": [], "raw_text": "Vassilis Plachouras, Charese Smiley, Hiroko Bretz, Ola Taylor, Jochen L. Leidner, Dezhao Song, and Frank Schilder. 2016. Interacting with financial data using natural language. In Proceedings of the 39th Inter- national ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '16, page 1121-1124, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Data-to-text generation with content selection and planning", "authors": [ { "first": "Ratish", "middle": [], "last": "Puduppully", "suffix": "" }, { "first": "Li", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-text generation with content selection and planning. 
In AAAI.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Building Natural Language Generation Systems", "authors": [ { "first": "Ehud", "middle": [], "last": "Reiter", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Dale", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems. Cambridge Univer- sity Press, USA.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Long and diverse text generation with planning-based hierarchical variational model", "authors": [ { "first": "Zhihong", "middle": [], "last": "Shao", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Jiangtao", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Wenfei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3257--3268", "other_ids": { "DOI": [ "10.18653/v1/D19-1321" ] }, "num": null, "urls": [], "raw_text": "Zhihong Shao, Minlie Huang, Jiangtao Wen, Wenfei Xu, and Xiaoyan Zhu. 2019. Long and diverse text generation with planning-based hierarchical varia- tional model. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 3257-3268, Hong Kong, China. As- sociation for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Dropout: A simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "Journal of Machine Learning Research", "volume": "15", "issue": "56", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15(56):1929-1958.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Document modeling with gated recurrent neural network for sentiment classification", "authors": [ { "first": "Duyu", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "1422--1432", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2015. Docu- ment modeling with gated recurrent neural network for sentiment classification. 
In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 1422-1432.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Pointer networks", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Meire", "middle": [], "last": "Fortunato", "suffix": "" }, { "first": "Navdeep", "middle": [], "last": "Jaitly", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems", "volume": "28", "issue": "", "pages": "2692--2700", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 28, pages 2692-2700. Curran Asso- ciates, Inc.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Hierarchical attention networks for sentence ordering", "authors": [ { "first": "Tianming", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "7184--7191", "other_ids": { "DOI": [ "10.1609/aaai.v33i01.33017184" ] }, "num": null, "urls": [], "raw_text": "Tianming Wang and Xiaojun Wan. 2019. Hierarchi- cal attention networks for sentence ordering. Pro- ceedings of the AAAI Conference on Artificial Intel- ligence, 33:7184-7191.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Challenges in data-to-document generation", "authors": [ { "first": "Sam", "middle": [], "last": "Wiseman", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Shieber", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2253--2263", "other_ids": { "DOI": [ "10.18653/v1/D17-1239" ] }, "num": null, "urls": [], "raw_text": "Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2253-2263, Copenhagen, Denmark. 
Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Discourse level factors for sentence deletion in text simplification", "authors": [ { "first": "Yang", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Chao", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Junyi Jessy", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Zhong, Chao Jiang, Wei Xu, and Junyi Jessy Li. 2019. Discourse level factors for sentence deletion in text simplification.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Outputs of the two inference steps: (a) Initially induced Dependency Tree and (b) Final total ordering.", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "Figure 2(b)(top) illustrates an output of the Pointer network applied to the plain dependency structure in Figure 2(a), from which the final EDU ordering in Figure 2(b)(bottom) is decoded as follows (an illustrative Python rendering is given at the end of this document). Algorithm 1: PredictEduOrder Data: Root Result: The ordering of elements of V 1 ordering = [] 2 ordChildren = PtrNet(Root.leftChildren) 3 for child in ordChildren do 4 ordering.extend(PredictEduOrder(child)) 5 end 6 ordering.append(Root) 7 ordChildren = PtrNet(Root.rightChildren) 8 for child in ordChildren do 9 ordering.extend(PredictEduOrder(child)) 10 end", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "Ordering produced by DepStructurer (top row) and Pointer (bottom row); Gold ordering in middle row. Dependency: 2: the lechon special on saturdays tasted 3: like it was premade. 4: the ``crispy`` part of the pork belly was almost gooey. 1: i would actually go for 2 1/2 stars. 8: for $2.00, you get 5 mini half, 9: that are great! 12: being a true filipino, i like my lumpia with a vinegar sauce. 13: if you ask the cashier, for a vinegar sauce, 14: they have a white vinegar, with some onions in it. 10: they give you a sweet and sour sauce on the side, 11: which i don't think goes well with it. 7: the gem was the shanghai. 15: it was ok, better then then the sweet and sour. 6: the pancit was good, but heavy on the vegetables. 5: the meat itself tasted good, although better with some kikkoman shoyu. 16: overall, a descent find. Gold: 1: i would actually go for 2 1/2 stars. 2: the lechon special on saturdays tasted 3: like it was premade. 4: the ``crispy`` part of the pork belly was almost gooey. 5: the meat itself tasted good, although better with some kikkoman shoyu. 6: the pancit was good, but heavy on the vegetables. 7: the gem was the shanghai. 8: for $2.00, you get 5 mini half, 9: that are great! 10: they give you a sweet and sour sauce on the side, 11: which i don't think goes well with it. 12: being a true filipino, i like my lumpia with a vinegar sauce. 13: if you ask the cashier, for a vinegar sauce, 14: they have a white vinegar, with some onions in it. 15: it was ok, better then then the sweet and sour. 16: overall, a descent find. Pointer: 1: i would actually go for 2 1/2 stars. 8: for $2.00, you get 5 mini half, 9: that are great! 3: like it was premade. 2: the lechon special on saturdays tasted 14: they have a white vinegar, with some onions in it. 11: which i don't think goes well with it. 13: if you ask the cashier, for a vinegar sauce, 7: the gem was the shanghai. 4: the ``crispy`` part of the pork belly was almost gooey. 6: the pancit was good, but heavy on the vegetables. 15: it was ok, better then then the sweet and sour. 12: being a true filipino, i like my lumpia with a vinegar sauce. 10: they give you a sweet and sour sauce on the side, 5: the meat itself tasted good, although better with some kikkoman shoyu. 16: overall, a descent find.", "uris": null, "type_str": "figure" }, "TABREF1": { "content": "
for the full test set
", "text": "Evaluation results on full test set (15k documents) and its long-document subset (5k documents), with best results per subtable highlighted in bold. The entries marked as (\u00d7) signify that these metrics cannot be computed for the corresponding models, since they do not induce document tree structures.", "html": null, "type_str": "table", "num": null } } } }