{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:47:24.112692Z" }, "title": "Template-aware Attention Model for Earnings Call Report Generation", "authors": [ { "first": "Yangchen", "middle": [], "last": "Huang", "suffix": "", "affiliation": {}, "email": "yangchen.huang@jpmchase.com" }, { "first": "Danial", "middle": [ "Mohseni" ], "last": "Taheri", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Prashant", "middle": [ "K" ], "last": "Dhingra", "suffix": "", "affiliation": {}, "email": "prashant.k.dhingra@jpmchase.com" }, { "first": "J", "middle": [ "P" ], "last": "Morgan", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Earning calls are among important resources for investors and analysts for updating their price targets. Firms usually publish corresponding transcripts soon after earnings events. However, raw transcripts are often too long and miss the coherent structure. To enhance the clarity, analysts write well-structured reports for some important earnings call events by analyzing them, requiring time and effort. In this paper, we propose TATSum (Template-Aware aTtention model for Summarization), a generalized neural summarization approach for structured report generation, and evaluate its performance in the earnings call domain. We build a large corpus with thousands of transcripts and reports using historical earnings events. We first generate a candidate set of reports from the corpus as potential soft templates which do not impose actual rules on the output. Then, we employ an encoder model with margin-ranking loss to rank the candidate set and select the best quality template. Finally, the transcript and the selected soft template are used as input in a seq2seq framework for report generation. Empirical results on the earnings call dataset show that our model significantly outperforms state-of-the-art models in terms of informativeness and structure.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Earning calls are among important resources for investors and analysts for updating their price targets. Firms usually publish corresponding transcripts soon after earnings events. However, raw transcripts are often too long and miss the coherent structure. To enhance the clarity, analysts write well-structured reports for some important earnings call events by analyzing them, requiring time and effort. In this paper, we propose TATSum (Template-Aware aTtention model for Summarization), a generalized neural summarization approach for structured report generation, and evaluate its performance in the earnings call domain. We build a large corpus with thousands of transcripts and reports using historical earnings events. We first generate a candidate set of reports from the corpus as potential soft templates which do not impose actual rules on the output. Then, we employ an encoder model with margin-ranking loss to rank the candidate set and select the best quality template. Finally, the transcript and the selected soft template are used as input in a seq2seq framework for report generation. 
Empirical results on the earnings call dataset show that our model significantly outperforms state-of-the-art models in terms of informativeness and structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Earnings Calls, conference calls held by public companies to disclose their performance of a specific period, are key resources in providing signals for financial analysts' decision-making process. Analysts, investors, and the mass media can learn about a company's financial results, operation details, and future guidance by listening to these conference calls. Previous works have highlighted the importance of earnings calls in modeling analysts' behavior (Frankel et al., 1999; Keith and Stent, 2019) .", "cite_spans": [ { "start": 460, "end": 482, "text": "(Frankel et al., 1999;", "ref_id": "BIBREF10" }, { "start": 483, "end": 505, "text": "Keith and Stent, 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As nowadays firms disclose more and more information (Dyer et al., 2017) , earnings call transcripts Figure 1 : Example of an analyst report. Generated reports of our system follow the same structure.", "cite_spans": [ { "start": 53, "end": 72, "text": "(Dyer et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 101, "end": 109, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "are usually longer and contain more information than before, resulting in challenges in efficiently analyzing these unstructured documents and detecting informative facts. Some financial analysts write well-structured reports (Figure 1 ) (Refinitiv) for earnings calls after attending the event or reading the transcript. However, writing such reports usually takes time and effort. In addition, reports are not available for every company and event. Therefore, generating earnings reports quickly and automatically can fill the gap for no-report-available conferences and strongly accelerate the research process in the financial industry.", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 235, "text": "(Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To fulfill this goal, we aim to develop an effective text summarization system to automatically generate reports with a hierarchical structure. Text summarization (Maybury, 1999) , as an important field of Natural Language Processing, attracts considerable interest from researchers. Various datasets for summarization tasks have been built, most of which contains small to middle-sized document and short summary, e.g., CNN/Daily Mail (Hermann et al., 2015; Nallapati et al., 2016) , WikiHow (Koupaee and Wang, 2018) , Reddit (Kim et al., 2019) , etc. 
Researchers design innovative architectures and benchmark the model performance on these mainstream datasets, yet extending summarization framework to the domain of earnings call transcripts, has never been explored.", "cite_spans": [ { "start": 163, "end": 178, "text": "(Maybury, 1999)", "ref_id": "BIBREF21" }, { "start": 421, "end": 458, "text": "CNN/Daily Mail (Hermann et al., 2015;", "ref_id": null }, { "start": 459, "end": 482, "text": "Nallapati et al., 2016)", "ref_id": "BIBREF22" }, { "start": 493, "end": 517, "text": "(Koupaee and Wang, 2018)", "ref_id": "BIBREF17" }, { "start": 527, "end": 545, "text": "(Kim et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In contrast to popular summarization tasks, structured earnings report generation has several challenges due to the special properties of data. First, earnings call conferences usually take a few hours, and transcripts contain thousands of words. This feature makes it impossible for popular pre-trained models such as BERT (Devlin et al., 2019) and BART (Lewis et al., 2020) to provide high-quality summaries since these approaches partition the long documents into smaller sequences within 512 tokens to meet the input limit, resulting in loss of cross-partition information. Second, generated reports are required to be clearly organized and well-formatted. In addition to summarization, it is important for the model to recognize and output the explicit logical structure of the earnings call presentation. To the best of our knowledge, the structure quality of generated summaries from lengthy documents, in terms of logic and format, has rarely been examined in literature.", "cite_spans": [ { "start": 324, "end": 345, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" }, { "start": 355, "end": 375, "text": "(Lewis et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we formally conceptualize report generation as an extension of summarization tasks and propose a novel approach, Template-Aware aTtention model for Summarization (TATSum), to produce hierarchically-structured reports. Inspired by traditional template-based summarization (Zhou and Hovy, 2004) , and soft template-based sentence summarization (Cao et al., 2018) , we use historical reports as soft templates to provide supplemental structure information for a summarization system. We use soft templates as they do not enforce an actual rule in the generated summaries. To deal with the long sequence problem, we leverage the advantage of Long-Documents-Transformer (Longformer) (Beltagy et al., 2020) , which reduces the complexity of the self-attention mechanism in Transformers (Vaswani et al., 2017) and allow for longer input sequences.", "cite_spans": [ { "start": 286, "end": 307, "text": "(Zhou and Hovy, 2004)", "ref_id": "BIBREF31" }, { "start": 357, "end": 375, "text": "(Cao et al., 2018)", "ref_id": "BIBREF3" }, { "start": 693, "end": 715, "text": "(Beltagy et al., 2020)", "ref_id": "BIBREF1" }, { "start": 795, "end": 817, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We collect historical earnings call transcripts and reports, and divide them into speaker sections with fewer words. The combination of a transcript section and a report section serves as an individual data point in the corpus. 
Our proposed framework consists of three modules as illustrated in Figure 2 : (i) Candidate Generation, which generates a set of potential soft template candidates for a transcript section, (ii) Candidate Ranking, which ranks candidates through a Siamese-architected (Bromley et al., 1993) Longformer Encoder and selects the candidate with the highest rank as the final soft template for the transcript, and (iii) Report Generation, which generates the report using the soft template together with the raw transcript through a Longformer-Encoder-Decoder (LED) model (Beltagy et al., 2020) . Figure 1 illustrates the structure of reports generated by our algorithm. We evaluate the proposed framework on hundreds of earnings call events. Experiments show that TATSum significantly outperforms the stateof-the-art summarization models in terms of informativeness, format, and logical structure. Besides, extensive experiments are conducted to analyze the effect of different components of our framework on the performance of the model. The contributions of this work are summarized as follows:", "cite_spans": [ { "start": 495, "end": 517, "text": "(Bromley et al., 1993)", "ref_id": "BIBREF2" }, { "start": 794, "end": 816, "text": "(Beltagy et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 295, "end": 303, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 819, "end": 827, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We introduce a section-based soft template as supplemental information to the encoderdecoder framework to generate structured and readable earnings call reports.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We design a Siamese-architected Longformer encoder for better template selection and further improve the quality of generated reports.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Our algorithm adopts and extends the LED to provide template-aware summarization and overcome the challenge of long sequence encoding and long document generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We conduct experiments on earnings call transcripts for the first time and evaluate the impact of different components of the proposed system. Results show that TATSum achieves superior performance compared with state-of-the-art baselines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper is organized as follows: in Section 2, we review the relevant prior literature. Section 3 presents the novel architecture of TATSum. We conduct extensive experiments on the earnings call dataset and analyze the results in Section 4. Section 5 concludes the paper and provides future directions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Earlier studies of neural abstractive summarization employ encoder-decoder architecture to generate a shorter version of a sentence (Rush et al., 2015) . Nallapati et al. (2016) extend previous work to summarize documents with more than one sentence using hierarchical attention. 
A variety of studies focus on building advanced attention mechanism for better summarization, e.g., convolutional attention (Chopra et al., 2016) , graph-based attention (Tan et al., 2017) , Bottom-up attention (Gehrmann et al., 2018 ), etc. See et al. (2017 propose a hybrid pointer-generator network and a coverage mechanism to keep track of already-summarized words. Paulus et al. (2018) introduce a deep reinforced model with a novel intra-attention mechanism and show improved performance for long document summarization.", "cite_spans": [ { "start": 132, "end": 151, "text": "(Rush et al., 2015)", "ref_id": "BIBREF24" }, { "start": 154, "end": 177, "text": "Nallapati et al. (2016)", "ref_id": "BIBREF22" }, { "start": 404, "end": 425, "text": "(Chopra et al., 2016)", "ref_id": "BIBREF5" }, { "start": 450, "end": 468, "text": "(Tan et al., 2017)", "ref_id": "BIBREF26" }, { "start": 491, "end": 513, "text": "(Gehrmann et al., 2018", "ref_id": "BIBREF11" }, { "start": 514, "end": 538, "text": "), etc. See et al. (2017", "ref_id": null }, { "start": 650, "end": 670, "text": "Paulus et al. (2018)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Recently, pre-trained language models, which are trained to learn contextual representations from large-scale corpora, have been proved to be successful in summarization tasks. Popular pretrained models like BERT (Devlin et al., 2019) and BART (Lewis et al., 2020) are adopted to build summarization-specific architecture. BERTSum (Liu and Lapata, 2019 ) proposes a novel documentlevel BERT-based encoder and an auto-regressive decoder with Trigram Blocking techniques and shows strong performance in both extractive and abstractive summarization. Aghajanyan et al. (2020) integrate BART with the Robust Representations through Regularized Finetuning (R3F) method to perform better fine-tuning for pre-trained models and achieve the state-of-the-art performance on CNN/Daily Mail.", "cite_spans": [ { "start": 213, "end": 234, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" }, { "start": 244, "end": 264, "text": "(Lewis et al., 2020)", "ref_id": "BIBREF18" }, { "start": 331, "end": 352, "text": "(Liu and Lapata, 2019", "ref_id": "BIBREF20" }, { "start": 548, "end": 572, "text": "Aghajanyan et al. (2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Longformer (Beltagy et al., 2020 ) significantly reduces the time and space complexity of the attention mechanism and allows for much longer input sequences. It achieves this goal by replacing the self-attention in traditional Transformers (Vaswani et al., 2017) with windowed attention and introducing new task-oriented global attention. Longformer-Encoder-Decoder (LED) (Beltagy et al., 2020), a variant of Longformer, is also introduced for supporting long document seq2seq tasks. LED-large 16K, a BART-pretrained LED model with no additional pretraining, outperformed Bigbird summarization (Zaheer et al., 2020) , a modified Transformer for long sequences with Pegasus pretraining (Zhang et al., 2020) , and achieved the state-of-the-art performance on arXiv dataset (Co-han et al., 2018) . 
In this paper, LED is adopted as the base model and a strong baseline benchmark.", "cite_spans": [ { "start": 11, "end": 32, "text": "(Beltagy et al., 2020", "ref_id": "BIBREF1" }, { "start": 240, "end": 262, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF27" }, { "start": 594, "end": 615, "text": "(Zaheer et al., 2020)", "ref_id": "BIBREF28" }, { "start": 685, "end": 705, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF29" }, { "start": 771, "end": 792, "text": "(Co-han et al., 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In the domain of earnings calls, there is limited work exploring the potential of applying text summarization techniques. Cardinaels et al. (2019) generate an automatic summary for earnings releases using off-the-shelf unsupervised summarization methods such as KLSum (Haghighi and Vanderwende, 2009) , LexRank (Erkan and Radev, 2004) , etc., and conduct experiments to analyze the impact of automatic and management summaries on the investors' judgment. However, comprehensive experiments on the performance of summarization techniques are missing. In this work, we develop a novel summarization algorithm for report generation, provide extensive experiments on the earnings call dataset, and compare it with the state-of-the-art models in the literature.", "cite_spans": [ { "start": 122, "end": 146, "text": "Cardinaels et al. (2019)", "ref_id": "BIBREF4" }, { "start": 268, "end": 300, "text": "(Haghighi and Vanderwende, 2009)", "ref_id": "BIBREF12" }, { "start": 311, "end": 334, "text": "(Erkan and Radev, 2004)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "An important feature of our system is to generate well-structured and formatted reports. Templatebased summarization (Zhou and Hovy, 2004 ) is a traditional technique to summarize sentences. With a manually designed incomplete sentence template, the method fills the template using some input text, following pre-defined rules. This method can guarantee that the output sentence follows a specific format. However, constructing templates for long documents and large-scale datasets still remains challenging and requires domain knowledge. Cao et al. (2018) extended the template-based summarization and introduced a soft template, which is a summary sentence selected from the training set, to resolve this issue. Re3Sum (Cao et al., 2018) selects the soft template through an Information Retrieval (IR) platform and jointly learns template quality as well as generates the summary through a seq2seq framework. In this paper, we select historical reports from the corpus and form candidate sets.", "cite_spans": [ { "start": 117, "end": 137, "text": "(Zhou and Hovy, 2004", "ref_id": "BIBREF31" }, { "start": 539, "end": 556, "text": "Cao et al. (2018)", "ref_id": "BIBREF3" }, { "start": 721, "end": 739, "text": "(Cao et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To further improve template quality, our Candidate Ranking Module is inspired by MatchSum (Zhong et al., 2020) . Zhong et al. (2020) formulates extractive summarization as a semantic text matching problem, and architect a Siamese-BERT network with margin-ranking loss to select the best candidate summary.", "cite_spans": [ { "start": 90, "end": 110, "text": "(Zhong et al., 2020)", "ref_id": "BIBREF30" }, { "start": 113, "end": 132, "text": "Zhong et al. 
(2020)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our goal is to generate hierarchically structured reports from long documents. The automatic generation system, TATSum, consists of three mod-ules: Candidate Generation, Candidate Ranking, and Report Generation. Given an earnings call transcript, we divide it into different sections and consider each section as an input sequence T . This helps in the tractability of the summarization process by considering fewer tokens and results in well-structured reports since each section usually follows a coherent structure that is different from others. Similarly, human-written reports are divided into sections, R, and mapped to the document sections in the training phase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "For each document section T , the Candidate Generation module filters a candidate set of top i soft templates C T := {R 1 , \u2022 \u2022 \u2022 , R i } from a built corpus. We rank the candidate set C T using the Candidate Ranking module to select the best soft templateR T to use. Finally, in the Report Generation module, the best soft templateR T and the raw document section T together are encoded into hidden states. A decoder takes the combination of encoded hidden states as well as decoder inputs to generate the abstractive report. Figure 3 illustrates Candidate Ranking and Candidate Generation modules. The three modules will be described in detail in the following subsections.", "cite_spans": [], "ref_spans": [ { "start": 527, "end": 535, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "This module finds soft templates from the training corpus and forms a set of candidates. Our corpus, P , includes all document sections T and report sections R in the training set. To find the set of template candidates, we consider two assumptions: (i) similar transcripts should have similar reports, and (ii) a good template should give instructions about the format while not adding misleading information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation", "sec_num": "3.1" }, { "text": "Since our dataset includes thousands of documents, we use an information retrieval technique, TF-IDF, to efficiently find the set of candidates. TF-IDF is a traditional unsupervised learning technique that can convert document text into a bag of words and quickly vectorize it. Since transcripts and reports have different styles, we consider the similarity between transcripts following assumption (i). Therefore, we first compute the similarity between section T and the other transcript sections in the corpus P using TF-IDF cosine similarity. Then, we select the top 5 scored document sections and use their corresponding reports in the corpus to form the candidate set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation", "sec_num": "3.1" }, { "text": "C = {R 1 , \u2022 \u2022 \u2022 , R 5 }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation", "sec_num": "3.1" }, { "text": "Although TF-IDF is a quick and easy method to calculate similarities and select candidates, it may not always provide candidates that resemble gold reports. 
We test this hypothesis by calculating the ROUGE average score (average of ROUGE-1, ROUGE-2, and ROUGE-l F1 score) between a sample of candidate sets and the human-written reports. We find that only 17.4% of the best TF-IDF candidates have the highest ROUGE average score among all the candidates, indicating that TF-IDF is not sufficient for retrieving the best soft-template. Thus, we add the second module, Candidate Ranking, to rank the candidate set and predict the best candidate that has the highest ROUGE score with the human-written report.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Generation", "sec_num": "3.1" }, { "text": "The purpose of this module is to precisely select the best template, i.e., the template which has the highest ROUGE score with the human-written report.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Ranking", "sec_num": "3.2" }, { "text": "To train this module, we first calculate the ROUGE average score between each candidate template in the candidate set, C T , and the human-written report, R * T , and store them in the descending order inC T as a label. Inspired by Siamese network (Bromley et al., 1993) and Siamese-BERT structure (Zhong et al., 2020) , we construct a Siamese-Longformer architecture to rank the candidate set. Longformer, a.k.a Long-Document Transformer, (Beltagy et al., 2020) is a model that successfully addresses the input length limitation of Transformer-based models by reducing the time complexity of the attention mechanism. The Siamese-Longformer model consists of two Longformer encoders with tied weights and a cosine similarity layer to compute comparable output vectors. One Longformer network encodes transcripts, T , and the other one encodes reports, R. We use the encoded hidden state of the bos_token '< s >' from the final Longformer layer to extract the transcript and report embedding vectors, e T and e R , respectively. The cosine-similarity layer connects these representation vectors and obtains the semantic similarity between the two documents.", "cite_spans": [ { "start": 248, "end": 270, "text": "(Bromley et al., 1993)", "ref_id": "BIBREF2" }, { "start": 298, "end": 318, "text": "(Zhong et al., 2020)", "ref_id": "BIBREF30" }, { "start": 440, "end": 462, "text": "(Beltagy et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Candidate Ranking", "sec_num": "3.2" }, { "text": "S(T, R) = cosine(e T , e R )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Ranking", "sec_num": "3.2" }, { "text": "If two documents have a higher ROUGE score, we expect them to have higher predicted semantic similarity. We use margin-ranking loss to update the weights, and the model is expected to predict the correct rank of the candidate set based on the ROUGE score. 
Specifically, the loss function is designed from the following criteria.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Ranking", "sec_num": "3.2" }, { "text": "\u2022 Human-written report should be the most semantically similar with the transcript", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Ranking", "sec_num": "3.2" }, { "text": "\u2022 A candidate template that has a higher rouge score with the human-written report should have a higher semantical similarity with the transcript Based on the first criteria, we derive the construction of the first loss function:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Ranking", "sec_num": "3.2" }, { "text": "L 1 = R\u2208C T max(0, S(T, R) \u2212 S(T, R * T ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Ranking", "sec_num": "3.2" }, { "text": "Where R * T is the human-written report, and R \u2208 C T denotes all the candidate templates for transcript T .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Ranking", "sec_num": "3.2" }, { "text": "Based on the second criteria, we use sorted candidate set ranking inC T and design a margin-ranking loss as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Ranking", "sec_num": "3.2" }, { "text": "L 2 = {i,j}\u2208C T max(0, S(T, R T j ) \u2212 S(T, R T i ) +(j \u2212 i) ) (i < j)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Ranking", "sec_num": "3.2" }, { "text": "where R i denotes the candidate template ranked i, and is a hyperparameter that distinguishes between candidates with good, i, and bad, j, rankings. As described in criteria 2, the construction aims to measure the loss of any mis-ranking within the candidate set. Finally, the margin-ranking loss we use to train the Siamese-Longformer network is a combination of the two loss functions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Ranking", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L R = L 1 + L 2", "eq_num": "(1)" } ], "section": "Candidate Ranking", "sec_num": "3.2" }, { "text": "During the inference phase, the model will predict the similarity scores of candidates in the candidate set, and the candidate with the highest score will be set as the best soft templateR T for the transcript for further report generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Ranking", "sec_num": "3.2" }, { "text": "R T := arg max R\u2208C T S(T, R)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Ranking", "sec_num": "3.2" }, { "text": "This module aims to generate the final report based on the soft template and the transcript. To generate an abstractive report, we design a soft-templatebased encoder-decoder architecture to conduct seq2seq generation. We employ a pretrained LED as the base encoder-decoder model. The model takes a transcript section T and a soft templateR T as the input. They are tokenized and encoded by a Longformer encoder respectively. Similar to module 2, we use the encoded hidden state of '< s >' from the top Longformer layer as the representation of the corresponding transcript/template in the semantic space. The hidden states of the encoded transcript H t and template H s are concatenated as the final encoding outputs. 
H T = Longf ormerEncoder(T )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Report Generation", "sec_num": "3.3" }, { "text": "HR T = Longf ormerEncoder(R T ) H e = [H T ; HR T ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Report Generation", "sec_num": "3.3" }, { "text": "The combined encoding outputs are then fed into a Longformer Decoder, and the decoding hidden state, H d , is generated auto-regressively at position k based on the previous report tokens y k\u22121 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Report Generation", "sec_num": "3.3" }, { "text": "H d,k = Decoder(H d,k\u22121 , y k\u22121 , H e )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Report Generation", "sec_num": "3.3" }, { "text": "Finally, a softmax layer predicts the probability vector of words at position k in the report:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Report Generation", "sec_num": "3.3" }, { "text": "P k = sof tmax(H d,k W P ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Report Generation", "sec_num": "3.3" }, { "text": "where W P is learnable matrix. In cases in which the template includes too many tokens, we truncate it to ensure that we get more information from the transcript than the template since the main content in the report should source from the transcript, and templates should only provide information for formatting. Generally, the tokens from a template are about 25% of the transcript. The whole encoder-decoder architecture is finetuned during training. A beam search is conducted during the test to generate an abstractive report that has the highest overall probability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Report Generation", "sec_num": "3.3" }, { "text": "R = Beam(P 1 , P 2 , ..., P k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Report Generation", "sec_num": "3.3" }, { "text": "We use two sets of loss functions in generating the reports of the earning calls. In Candidate Ranking module, we aim to train the parameters of Siamese-Longformer to find the best template in the candidate set. We use ranking loss, L R , defined in equation 1to achieve this goal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization", "sec_num": "3.4" }, { "text": "In Report Generation module, our learning goal is to maximize the negative log-likelihood of the probability of the actual report.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization", "sec_num": "3.4" }, { "text": "L G = \u2212 k log(p y k )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization", "sec_num": "3.4" }, { "text": "We optimize the two losses separately over their respective parameters using gradient-based approaches (see section 4.4 for more details).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Optimization", "sec_num": "3.4" }, { "text": "We evaluate TATSum on Earnings call reports. 
We aim to answer the two following questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "4" }, { "text": "\u2022 Q1: How does TATSum perform compared to the state-of-the-art summarization systems?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "4" }, { "text": "\u2022 Q2: How do different components of TATSum such as soft-template and template ranking affect the performance of the model?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "4" }, { "text": "We collected transcripts and human-written reports for 3655 earnings call events from 2017 to 2020, hosted by 1948 listed companies. Most selected companies are listed in NYSE or NASDAQ. For better generalization of our model, we also select a few companies from world-wide exchanges, such as TSX, FWB, Euronext, etc. These transcripts and reports are divided into 11141 sections, and each section is treated as an individual sequence. The statistics of our dataset is shown in Table 1 . In our dataset, transcript lengths are significantly larger than the majority of public datasets such as Daily Mail and NYTimes and similar to long documents such as arXiv (Cohan et al., 2018) . However, the reports in our dataset are substantially longer than the summaries of existing datasets, indicating that instead of doing lots of condensation, analysts tend to retain most of the information by paraphrasing and restructuring the oral transcript. Therefore, generating long sequences in natural language with a well-organized structure, is the most critical part for automatic earnings call reports.", "cite_spans": [ { "start": 660, "end": 680, "text": "(Cohan et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 478, "end": 485, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "Dataset #docs avg.doc. avg. report len(words) len(words) Docs 3655 3621 2524 We retain about 20% of the dataset as validation and use the rest for training. To prevent data leakage or utilizing future information, we test the performance of TATSum on 2000 transcript sections extracted from 666 earnings events in late 2020 and early 2021. In this setting, when generating the report for an earnings call transcript, TATSum only leverage a historical report that is available prior to the event. The entire dataset is shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 527, "end": 534, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "Transcripts/ Reports Sections Training Set 2929 8786 Validation Set 726 2265 Test Set 666 2000 Table 2 : Document and section number information of the dataset", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 113, "text": "Reports Sections Training Set 2929 8786 Validation Set 726 2265 Test Set 666 2000 Table 2", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "We employ ROUGE (Lin and Hovy, 2003) as the automatic evaluation metric. ROUGE has been used as the standard evaluation metric for machine translation and automatic summarization since 2004. The commonly adopted ROUGE metrics are ROUGE-1 (overlap of unigram), ROUGE-2 (overlap of bigrams), and ROUGE-l (Longest Common Subsequence, or LCS). In the Candidate Ranking module, we use an average of ROUGE-1, ROUGE-2, and ROUGE-3 F1 score as labels for training. 
Through testing of the whole system, we calculate and report all these metrics. ROUGE can measure how much information the generated report maintains compared with the human-written report. However, correctness, which can not be captured by ROUGE, is another important metric for the generated report in the domain of earnings calls. We manually review the rendered result, especially focusing on important financial statistics, trends, and sentiment, to evaluate whether it reports correct information from the earnings call.", "cite_spans": [ { "start": 16, "end": 36, "text": "(Lin and Hovy, 2003)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.2" }, { "text": "To compare the performance of our proposed model with others, we consider following state-of-the-art neural summarization baselines:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "\u2022 BERTSum(Liu and Lapata, 2019): It uses document-level BERT-based encoder and an autoregressive decoder with Trigram Blocking for summarization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "\u2022 LED (Beltagy et al., 2020) : It modifies the self-attention in Transformers with windowed attention for long document summarization.", "cite_spans": [ { "start": 6, "end": 28, "text": "(Beltagy et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "We implement our model with Pytorch and adopt the pretrained LED-base model in Candidate Ranking and Report Generation modules. In particular, we use LEDs with 768 hidden state and 3072 feed-forward layer dimensionality. We use dropout with probability 0.1 and a customized Adam optimizer (Kingma and Ba, 2014) (\u03b2 1 = 0.9, \u03b2 2 = 0.999, = 1e \u2212 9) during training. The learning rates through optimization follow Noam decay scheme (Vaswani et al., 2017) with a warmup step of 500 and are set to be: 1. Module Candidate Ranking: Lr = 3e\u22123 * min(step \u22120.5 , step * wrm.steps \u22121.5 ) 2. Module Report Generation: Lr = 3e\u22125 * min(step \u22120.5 , step * wrm.steps \u22121.5 )", "cite_spans": [ { "start": 428, "end": 450, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.4" }, { "text": "We save a model checkpoint every 5000 steps and choose the best-performed checkpoint on the validation set. In Report Generation module, we use the Block Trigram technique (Liu and Lapata, 2019) to reduce potential redundancy. However, we find this approach ineffective for some reports and observe repetitions of words with punctuations in between. Therefore, we add a new Block Tri-word method that forces the decoder never to output the exact same three words in a predicted sequence with all punctuations deleted. When the decoder creates the same three words that exist in the pre-vious pure word sequence, the probability of the beam is set to be 0.", "cite_spans": [ { "start": 172, "end": 194, "text": "(Liu and Lapata, 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.4" }, { "text": "Although we employ the Longformer architecture to deal with long sequences, we still face memory challenges in Report Generation module when the earnings call section is too long. 
To increase the performance for earnings call sections of arbitrary length, we divide a long section into several short sub-sections and generate reports for each sub-section. We then combine each sub-section and report them in the same hierarchical structure. This method is proved to perform well when a transcript section exceeds the sequence length limit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.4" }, { "text": "We search for best hyperparameters for baselines, and use optimization schemes suggested by authors. Models are trained on Tesla-V100 GPUs. Due to the small input length limit in the architecture of BertSum, it is not able to generate fluent and readable reports by partitioning the long document and combining the output. Taking advantage of the modified attention mechanism and huge sequence length limit, LED, on the contrary, achieves quite good performance. All three Rouge-F scores are above 50, indicating that reports generated by LED can extract critical information from earnings presentations similar to human beings. By adding a precisely-selected soft template to LED, our proposed system, TATSum, boosts the report quality even more, with a significant improvement over the performance of LED by 17%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.4" }, { "text": "As discussed in section 4.1, earnings call reports contain much more words than summaries in popular summarization datasets. Therefore, the ROUGE scores are higher than those observed from CNN/DM and arXiv correspondingly. In order to guarantee the quality of generated reports for real use cases, in terms of structure and accuracy, we further conduct manual checking on a small sample of earnings events selected from the test set. We read their transcripts, human-written reports, and automatic report generated by TATSum, and com-pare the content in these documents. Generated reports mimic the analyst report well in format and structure. For information accuracy, we primarily focus on the numbers, trends, and sentiment in generated reports. Our observation shows that except in a few cases where some parts are missing, information within the generated report is accurate and coherent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Results", "sec_num": "4.5" }, { "text": "In this section, we analyze variants of our model to find the effect of different components on the model performance. We consider variations as follows: (1) No template: we remove the first and second modules and consider a pure LED architecture for report generation and (2) NoRanking:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.6" }, { "text": "We forgo Candidate Ranking module and use the template with the highest TF-IDF cosine similarity in Candidate Generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.6" }, { "text": "In table 4, we report the average ROUGE score of generated reports on the test set under different experiment setting. Effect of soft template: To capture the impact of soft templates on the performance of our model, we compare the results of NoTemplate and NoRanking. As illustrated in Table 4 , A soft template based seq2seq model achieves significantly higher ROUGE scores. 
In addition to boosting the performance, incorporating the template stabalize the training process and results in faster convergence, indicating that the model can better learn to write reports in a quicker manner with supplemental information. We also compare several reports generated by the two models, and find that the report of NoRanking model has better format and logical structure. The report is clearly organized, with good heading levels and correct serial numbers. In contrast, the report of NoTemplate contains more incorrect indentations, levels, and serial numbers. It proves that adding a soft template do provide the model with more information on how to write a report logically like a human.", "cite_spans": [], "ref_spans": [ { "start": 287, "end": 294, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.6" }, { "text": "Effect of template ranking: Similarly, results of NoRanking and TATSum are compared to study whether ranking the candidate set of templates im-proves the performance. As shown in Table 4 , ranking the candidate set and selecting a template of better quality can slightly increase all three ROUGE scores of generated reports. It is worth mentioning that unlike adding a soft template, ranking the candidate set can take a longer time for labeling and training. For labeling, ROUGE scores need to be calculated for each template in the candidate set with the human-written report, and all data points in the training set should be labeled. For training, a Siamese-Longformer encoder is constructed to predict the rank of the candidate set, which also requires long training and validation time. Therefore, further thoughts on balancing the tradeoff between performance and training time are necessary for each dataset.", "cite_spans": [], "ref_spans": [ { "start": 179, "end": 186, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Metrics", "sec_num": null }, { "text": "This paper proposes an innovative neural summarization system, TATSum, with three modules, Candidate Generation, Candidate Ranking, and Report Generation, to generate structured reports automatically. In Candidate Generation module, we build a corpus of historical documents and reports, and for each document, we generate a candidate set using quick and easy similarity-based criteria. The candidate set is then ranked in the Candidate Ranking module, following the predicted result of an encoder model with margin-ranking loss. We choose the candidate with the highest rank as the soft template. In the final Report Generation module, we encode both template and document into hidden states and feed the combined hidden states into a decoder to generate the report. Extensive experiments are conducted on the earning call dataset and show that our model can generate reports with high informativeness (ROUGE) and high accuracy (numbers, trends, etc.). We also prove that adding a template can significantly improve the quality of the generated report, and finely selecting a template with good quality can increase performance even more.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "We mainly test TATSum on automatic report generation for earnings call events. 
However, the advantage of Longformer architecture for long sequence tasks, as well as the significant power of adding soft templates for structured document generation, can extend our proposed framework to various domains, e.g., medical report, employee annual review, call center record, etc. We would like to take advantage of this proved architecture to explore more potential in structured report generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Better fine-tuning by reducing representational collapse", "authors": [ { "first": "Armen", "middle": [], "last": "Aghajanyan", "suffix": "" }, { "first": "Akshat", "middle": [], "last": "Shrivastava", "suffix": "" }, { "first": "Anchit", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2008.03156" ] }, "num": null, "urls": [], "raw_text": "Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, and Sonal Gupta. 2020. Better fine-tuning by reducing representa- tional collapse. arXiv preprint arXiv:2008.03156.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Longformer: The long-document transformer", "authors": [ { "first": "Iz", "middle": [], "last": "Beltagy", "suffix": "" }, { "first": "E", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Arman", "middle": [], "last": "Peters", "suffix": "" }, { "first": "", "middle": [], "last": "Cohan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.05150" ] }, "num": null, "urls": [], "raw_text": "Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Signature verification using a\" siamese\" time delay neural network", "authors": [ { "first": "Jane", "middle": [], "last": "Bromley", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Guyon", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "S\u00e4ckinger", "suffix": "" }, { "first": "Roopak", "middle": [], "last": "Shah", "suffix": "" } ], "year": 1993, "venue": "Advances in neural information processing systems", "volume": "6", "issue": "", "pages": "737--744", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard S\u00e4ckinger, and Roopak Shah. 1993. Signature veri- fication using a\" siamese\" time delay neural network. Advances in neural information processing systems, 6:737-744.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Retrieve, rerank and rewrite: Soft template based neural summarization. 
Association for Computational Linguistics (ACL)", "authors": [ { "first": "Ziqiang", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ziqiang Cao, Wenjie Li, Furu Wei, Sujian Li, et al. 2018. Retrieve, rerank and rewrite: Soft template based neural summarization. Association for Com- putational Linguistics (ACL).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automatic summarization of earnings releases: attributes and effects on investors' judgments", "authors": [ { "first": "Eddy", "middle": [], "last": "Cardinaels", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Hollander", "suffix": "" }, { "first": "Brian", "middle": [ "J" ], "last": "White", "suffix": "" } ], "year": 2019, "venue": "Review of Accounting Studies", "volume": "24", "issue": "3", "pages": "860--890", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eddy Cardinaels, Stephan Hollander, and Brian J White. 2019. Automatic summarization of earnings releases: attributes and effects on investors' judg- ments. Review of Accounting Studies, 24(3):860- 890.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Abstractive sentence summarization with attentive recurrent neural networks", "authors": [ { "first": "Sumit", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Alexander M", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "93--98", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with at- tentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 93-98.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A discourse-aware attention model for abstractive summarization of long documents", "authors": [ { "first": "Arman", "middle": [], "last": "Cohan", "suffix": "" }, { "first": "Franck", "middle": [], "last": "Dernoncourt", "suffix": "" }, { "first": "Soon", "middle": [], "last": "Doo", "suffix": "" }, { "first": "Trung", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Seokhwan", "middle": [], "last": "Bui", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Nazli", "middle": [], "last": "Chang", "suffix": "" }, { "first": "", "middle": [], "last": "Goharian", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "615--621", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615-621.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The evolution of 10-k textual disclosure: Evidence from latent dirichlet allocation", "authors": [ { "first": "Travis", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Lang", "suffix": "" }, { "first": "Lorien", "middle": [], "last": "Stice-Lawrence", "suffix": "" } ], "year": 2017, "venue": "Journal of Accounting and Economics", "volume": "64", "issue": "2-3", "pages": "221--245", "other_ids": {}, "num": null, "urls": [], "raw_text": "Travis Dyer, Mark Lang, and Lorien Stice-Lawrence. 2017. The evolution of 10-k textual disclosure: Ev- idence from latent dirichlet allocation. Journal of Accounting and Economics, 64(2-3):221-245.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Lexrank: Graph-based lexical centrality as salience in text summarization", "authors": [ { "first": "G\u00fcnes", "middle": [], "last": "Erkan", "suffix": "" }, { "first": "", "middle": [], "last": "Dragomir R Radev", "suffix": "" } ], "year": 2004, "venue": "Journal of artificial intelligence research", "volume": "22", "issue": "", "pages": "457--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00fcnes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of artificial intelligence re- search, 22:457-479.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An empirical examination of conference calls as a voluntary disclosure medium", "authors": [ { "first": "Richard", "middle": [], "last": "Frankel", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Douglas J", "middle": [], "last": "Skinner", "suffix": "" } ], "year": 1999, "venue": "Journal of Accounting Research", "volume": "37", "issue": "1", "pages": "133--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Frankel, Marilyn Johnson, and Douglas J Skin- ner. 1999. An empirical examination of conference calls as a voluntary disclosure medium. 
Journal of Accounting Research, 37(1):133-150.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bottom-up abstractive summarization", "authors": [ { "first": "Sebastian", "middle": [], "last": "Gehrmann", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Alexander M", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4098--4109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Gehrmann, Yuntian Deng, and Alexander M Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 4098-4109.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Exploring content models for multi-document summarization", "authors": [ { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" } ], "year": 2009, "venue": "Proceedings of human language technologies: The 2009 annual conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "362--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aria Haghighi and Lucy Vanderwende. 2009. Ex- ploring content models for multi-document summa- rization. In Proceedings of human language tech- nologies: The 2009 annual conference of the North American Chapter of the Association for Computa- tional Linguistics, pages 362-370.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Teaching machines to read and comprehend", "authors": [ { "first": "Karl", "middle": [], "last": "Moritz Hermann", "suffix": "" }, { "first": "Tom\u00e1\u0161", "middle": [], "last": "Ko\u010disk\u1ef3", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Lasse", "middle": [], "last": "Espeholt", "suffix": "" }, { "first": "Will", "middle": [], "last": "Kay", "suffix": "" }, { "first": "Mustafa", "middle": [], "last": "Suleyman", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems", "volume": "1", "issue": "", "pages": "1693--1701", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u1ef3, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of the 28th Inter- national Conference on Neural Information Process- ing Systems-Volume 1, pages 1693-1701.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Modeling financial analysts' decision making via the pragmatics and semantics of earnings calls", "authors": [ { "first": "A", "middle": [], "last": "Katherine", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Keith", "suffix": "" }, { "first": "", "middle": [], "last": "Stent", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.02868" ] }, "num": null, "urls": [], "raw_text": "Katherine A Keith and Amanda Stent. 2019. Modeling financial analysts' decision making via the pragmat- ics and semantics of earnings calls. 
arXiv preprint arXiv:1906.02868.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Abstractive summarization of reddit posts with multi-level memory networks", "authors": [ { "first": "Byeongchang", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Hyunwoo", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Gunhee", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2519--2531", "other_ids": {}, "num": null, "urls": [], "raw_text": "Byeongchang Kim, Hyunwoo Kim, and Gunhee Kim. 2019. Abstractive summarization of reddit posts with multi-level memory networks. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2519-2531.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Wikihow: A large scale text summarization dataset", "authors": [ { "first": "Mahnaz", "middle": [], "last": "Koupaee", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.09305" ] }, "num": null, "urls": [], "raw_text": "Mahnaz Koupaee and William Yang Wang. 2018. Wik- ihow: A large scale text summarization dataset. arXiv preprint arXiv:1810.09305.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Bart: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal ; Abdelrahman Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7871--7880", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. 
In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Automatic evaluation of summaries using n-gram cooccurrence statistics", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "150--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin and Eduard Hovy. 2003. Auto- matic evaluation of summaries using n-gram co- occurrence statistics. In Proceedings of the 2003 Hu- man Language Technology Conference of the North American Chapter of the Association for Computa- tional Linguistics, pages 150-157.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Text summarization with pretrained encoders", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3721--3731", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Liu and Mirella Lapata. 2019. Text summariza- tion with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3721-3731.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Advances in automatic text summarization", "authors": [ { "first": "Mani", "middle": [], "last": "Maybury", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mani Maybury. 1999. Advances in automatic text sum- marization. MIT press.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Abstractive text summarization using sequence-to-sequence rnns and beyond", "authors": [ { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Caglar", "middle": [], "last": "Cicero Dos Santos", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Gucehre", "suffix": "" }, { "first": "", "middle": [], "last": "Xiang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "280--290", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gucehre, and Bing Xiang. 2016. Abstrac- tive text summarization using sequence-to-sequence rnns and beyond. 
In Proceedings of The 20th SIGNLL Conference on Computational Natural Lan- guage Learning, pages 280-290.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A deep reinforced model for abstractive summarization", "authors": [ { "first": "Romain", "middle": [], "last": "Paulus", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2018, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive sum- marization. In International Conference on Learn- ing Representations.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A neural attention model for abstractive sentence summarization", "authors": [ { "first": "Sumit", "middle": [], "last": "Alexander M Rush", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "379--389", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 379-389.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Get to the point: Summarization with pointergenerator networks", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1073--1083", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Abstractive document summarization with a graphbased attentional neural model", "authors": [ { "first": "Jiwei", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Jianguo", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1171--1181", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graph- based attentional neural model. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1171-1181.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Process- ing Systems, 30:5998-6008.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Big bird: Transformers for longer sequences", "authors": [ { "first": "Manzil", "middle": [], "last": "Zaheer", "suffix": "" }, { "first": "Guru", "middle": [], "last": "Guruganesh", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Kumar Avinava Dubey", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Ainslie", "suffix": "" }, { "first": "Santiago", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Ontanon", "suffix": "" }, { "first": "Anirudh", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Qifan", "middle": [], "last": "Ravula", "suffix": "" }, { "first": "Li", "middle": [], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2020, "venue": "NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago On- tanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer sequences. In NeurIPS.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization", "authors": [ { "first": "Jingqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Saleh", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "11328--11339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In In- ternational Conference on Machine Learning, pages 11328-11339. 
PMLR.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Extractive summarization as text matching", "authors": [ { "first": "Ming", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yiran", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Danqing", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Xuan-Jing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6197--6208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuan-Jing Huang. 2020. Extrac- tive summarization as text matching. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6197-6208.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Template-filtered headline summarization", "authors": [ { "first": "Liang", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2004, "venue": "Text summarization branches out", "volume": "", "issue": "", "pages": "56--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Zhou and Eduard Hovy. 2004. Template-filtered headline summarization. In Text summarization branches out, pages 56-60.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "Flowchart of the report generation system" }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "Illustration of different components of the Candidate Ranking module (left) and the Report Generation module (right). First, we learn parameters of the Longformer layers (LFL) in the Candidate Ranking to rank existing reports in the candidate set. Then, we encode the representations of the transcript and the top candidate using Longformer Encoder and decode their embeddings to generate reports." }, "TABREF0": { "content": "", "text": "Statistics of the built earnings call dataset", "html": null, "num": null, "type_str": "table" }, "TABREF1": { "content": "
shows the ROUGE F1 score for different methods.
Metrics    ROUGE-1    ROUGE-2    ROUGE-L
BERTSum    36.89      22.16      35.40
LED        65.17      53.07      64.93
TATSum     76.20      61.89      75.94
", "text": "", "html": null, "num": null, "type_str": "table" }, "TABREF2": { "content": "", "text": "Results of TATSum and Baseline models", "html": null, "num": null, "type_str": "table" }, "TABREF4": { "content": "
", "text": "Ablation study of design choices in TATSum", "html": null, "num": null, "type_str": "table" } } } }