{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:47:14.951205Z" }, "title": "Evaluation of Abstractive Summarisation Models with Machine Translation in Deliberative Processes", "authors": [ { "first": "Miguel", "middle": [], "last": "Arana-Catania", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Warwick", "location": { "country": "UK" } }, "email": "" }, { "first": "Rob", "middle": [], "last": "Procter", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Warwick", "location": { "country": "UK" } }, "email": "" }, { "first": "Yulan", "middle": [], "last": "He", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Warwick", "location": { "country": "UK" } }, "email": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "", "affiliation": { "laboratory": "", "institution": "Alan Turing Institute", "location": { "country": "UK" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present work on summarising deliberative processes for non-English languages. Unlike commonly studied datasets, such as news articles, this deliberation dataset reflects difficulties of combining multiple narratives, mostly of poor grammatical quality, in a single text. We report an extensive evaluation of a wide range of abstractive summarisation models in combination with an off-the-shelf machine translation model. Texts are translated into English, summarised, and translated back to the original language. We obtain promising results regarding the fluency, consistency and relevance of the summaries produced. Our approach is easy to implement for many languages for production purposes by simply changing the translation model.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We present work on summarising deliberative processes for non-English languages. 
Unlike commonly studied datasets, such as news articles, this deliberation dataset reflects difficulties of combining multiple narratives, mostly of poor grammatical quality, in a single text. We report an extensive evaluation of a wide range of abstractive summarisation models in combination with an off-the-shelf machine translation model. Texts are translated into English, summarised, and translated back to the original language. We obtain promising results regarding the fluency, consistency and relevance of the summaries produced. Our approach is easy to implement for many languages for production purposes by simply changing the translation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The processes of deliberation and collective intelligence production have evolved radically thanks to the possibility of carrying them out digitally. However, this often results in large amounts of generated content in the deliberations, causing information overload that prevents their potential from being fully realised (Arana-Catania et al., 2021; Davies and Procter, 2020; Davies et al., 2021). To address this, we evaluate the potential value of abstractive summarisation models when combined with a machine translation system in synthesising and filtering information collected through such processes. Whereas the current technology of language models is mostly limited to a few languages, which creates a barrier to their more widespread use, our approach can be deployed for many languages just by changing the translation model without the need to generate new, ad-hoc corpora for the task or costly retraining for each new language. 
The current evaluation is carried out on a Spanish deliberation dataset.", "cite_spans": [ { "start": 323, "end": 351, "text": "(Arana-Catania et al., 2021;", "ref_id": "BIBREF3" }, { "start": 352, "end": 377, "text": "Davies and Procter, 2020;", "ref_id": "BIBREF4" }, { "start": 378, "end": 398, "text": "Davies et al., 2021)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We have carried out an evaluation with 6 abstractive summarisation models: BART (Lewis et al., 2019 ), T5 (Raffel et al., 2019) , BERT (PreSumm -BertSumExtAbs: Liu and Lapata, 2019) , PG (Pointer-Generator with Coverage Penalty) (See et al., 2017), CopyTransformer (Gehrmann et al., 2018) , and FastAbsRL (Chen and Bansal, 2018) . These models are applied in combination with the machine translation system MarianMT (Junczys-Dowmunt et al., 2018) using the Opus-MT models (Tiedemann and Thottingal, 2020) . We have evaluated the quality of the summaries generated by each model and compared the models with one another.", "cite_spans": [ { "start": 80, "end": 99, "text": "(Lewis et al., 2019", "ref_id": "BIBREF11" }, { "start": 106, "end": 127, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF22" }, { "start": 160, "end": 181, "text": "Liu and Lapata, 2019)", "ref_id": "BIBREF13" }, { "start": 265, "end": 288, "text": "(Gehrmann et al., 2018)", "ref_id": "BIBREF7" }, { "start": 305, "end": 328, "text": "(Chen and Bansal, 2018)", "ref_id": "BIBREF2" }, { "start": 416, "end": 446, "text": "(Junczys-Dowmunt et al., 2018)", "ref_id": "BIBREF9" }, { "start": 472, "end": 504, "text": "(Tiedemann and Thottingal, 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Early research on the problem of text summarisation in low-resourced languages (although not focused on deliberation) Or\u01cesan and Chiorean (2008) demonstrated the limitations of machine translation systems at that time. Recently, Ouyang et al. 
(2019) revisited the problem of low-quality translations in low-resourced languages and successfully demonstrated the possibility of using abstractive summarisation by retraining their model on corpora that have gone through the same machine translation process. In this study, we complete the cycle, translating from the original language to English, summarising, and translating back to the original language, thus avoiding the need for retraining.", "cite_spans": [ { "start": 118, "end": 144, "text": "Or\u01cesan and Chiorean (2008)", "ref_id": "BIBREF18" }, { "start": 229, "end": 249, "text": "Ouyang et al. (2019)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Using other approaches, Yao et al. (2015) studied English-to-Chinese summarisation combining an extractive approach with a process of sentence compression that effectively abstracts the results. Duan et al. (2019), following Shen et al. (2018), exploited the capability of a resource-rich language summariser in a teacher-student framework that connects it to the target language summariser.", "cite_spans": [ { "start": 195, "end": 213, "text": "Duan et al. (2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The evaluation has been carried out with a dataset from deliberative processes in Spanish, which was translated into English for summarisation. The generated summaries were then translated back into Spanish for evaluation. Thus, the evaluators assessed summaries of Spanish texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "The dataset is available in the Madrid City Council 'Datos Abiertos' repository 1 , called 'Comentarios'. It contains public deliberations in relation to citizen proposals submitted to the participation platform of the city council. 
The dataset has been selected due to the great success of the participation platform, which has led to 26,400 proposals and 125,135 comments being submitted. This is one of the most successful cases of digital participation in the world and is therefore a perfect case study for evaluating the information overload problem in deliberative processes (Arana-Catania et al., 2021) .", "cite_spans": [ { "start": 582, "end": 610, "text": "(Arana-Catania et al., 2021)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "Each proposal presents a debate space where public comments can be found. Forty debates were selected covering different deliberation scenarios in the dataset. These represent three cases: 20 debates with (n = 10) comments, the most common case of debates with few comments; 15 debates with (20 \u2264 n \u2264 30) comments, for the medium case; and 5 debates with (60 \u2264 n \u2264 70) comments, the large number of comments case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "The debates were also selected to cover three different comment scenarios, i.e., from very short to very lengthy comments. In the first scenario, debates total from 1,000 to 5,000 characters; in the medium scenario, from 3,000 to 13,000; and in the large scenario, from 10,000 to 18,000 characters. For each debate, the text to summarise was created by concatenating its comments into a single text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "By using debates from all scenarios regarding the number of comments and comment length, we ensure that the selection is not biased to a specific scenario of deliberation that could skew our results. 
Examples of the debates can be found in the Appendix, illustrating the combination of multiple narratives through the different comments and the poor grammatical quality of the texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "Different models were selected, covering some of the best available summarisers, but also different model architectures:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstractive Summarisation Methodology", "sec_num": "3" }, { "text": "\u2022 BART (Lewis et al., 2019) 2 . This combines a bidirectional transformer as an encoder, similar to the following T5 and BERT cases, with a left-to-right autoregressive decoder similar to GPT (Radford et al., 2018) . The 'large-cnn' pre-trained model 2 has been used here. \u2022 T5 (Raffel et al., 2019) 2 . This uses an encoder-decoder transformer architecture, trained on the Colossal Clean Crawled Corpus. The 't5small' pretrained model 2 has been used. \u2022 BERT (PreSumm -BertSumExtAbs: Liu and Lapata, 2019) 3 . This uses a BERT (Devlin et al., 2018) encoder and a randomly initialized transformer as a decoder, fine-tuning it first as an extractive summariser and then as an abstractive one. The BertSumExtAbs pretrained model 3 has been used. 
\u2022 CopyTransformer (Gehrmann et al., 2018) 5 .", "cite_spans": [ { "start": 7, "end": 27, "text": "(Lewis et al., 2019)", "ref_id": "BIBREF11" }, { "start": 192, "end": 214, "text": "(Radford et al., 2018)", "ref_id": "BIBREF21" }, { "start": 278, "end": 299, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF22" }, { "start": 485, "end": 506, "text": "Liu and Lapata, 2019)", "ref_id": "BIBREF13" }, { "start": 528, "end": 549, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF5" }, { "start": 762, "end": 785, "text": "(Gehrmann et al., 2018)", "ref_id": "BIBREF7" }, { "start": 786, "end": 787, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Abstractive Summarisation Methodology", "sec_num": "3" }, { "text": "This uses the transformer architecture, but one attention head defines the copy distribution. The 'OpenNMT Transformer' pretrained model 4 has been used. \u2022 FastAbsRL (Chen and Bansal, 2018) 6 . An extractor agent is used to select sentences (using LSTM layers to represent and copy sentences) and an abstractor network is used to compress and paraphrase the selected sentences. Both are trained separately and then the full model is trained with reinforcement learning by using A2C (Mnih et al., 2016) .", "cite_spans": [ { "start": 166, "end": 189, "text": "(Chen and Bansal, 2018)", "ref_id": "BIBREF2" }, { "start": 482, "end": 501, "text": "(Mnih et al., 2016)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Abstractive Summarisation Methodology", "sec_num": "3" }, { "text": "The reported Rouge scores of these models (Lin, 2004) are shown in Table 1 . 
None of the pre-trained models used were retrained.", "cite_spans": [ { "start": 42, "end": 53, "text": "(Lin, 2004)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 67, "end": 74, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Abstractive Summarisation Methodology", "sec_num": "3" }, { "text": "Additional models were also evaluated: Adversarial Reinforce GAN (Wang and Lee, 2018), using Generative Adversarial Networks; Contextual Song et al., 2018) , combining sequential word generation with tree-based parsing. Our initial qualitative evaluation found that none of them were sufficiently competitive with the selected models. Several of these models work at the sentence level, which may impact their relevance in our deliberative case, where texts are composed of multiple authors' comments. The machine translation system used was MarianMT (Junczys-Dowmunt et al., 2018) using its HuggingFace implementation, with Opus-MT models 7 developed by the Helsinki-NLP group.", "cite_spans": [ { "start": 137, "end": 155, "text": "Song et al., 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Abstractive Summarisation Methodology", "sec_num": "3" }, { "text": "Machine translation was first applied to the original text of the deliberations before applying the summarisers, and then to the generated summary to convert it back to the original language (see Appendix). Thus, even when the summarisation models are trained on English datasets, the full system can be used in deliberations of any language supported by the machine translation system. The Opus-MT models used in this work currently provide pre-trained models for 1,738 language pairs. It is left for future work to evaluate the effect of the translation model and to apply the pipeline to other languages to determine the quality of the resulting summaries. 
The models used here show good performance for the languages used (see BLEU scores in OpusMTen; OpusMTes).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstractive Summarisation Methodology", "sec_num": "3" }, { "text": "We developed a protocol for the human evaluation of the summaries generated by the different models, following designs used in previous studies (Amplayo and Lapata, 2020; Liu and Lapata, 2019; Narayan et al., 2018; Paulus et al., 2017; Yoon et al., 2020; Song et al., 2018) . First, the different models were compared regarding their relative overall quality using the Best-Worst scaling (Louviere et al., 2015) , shown to be more accurate than a generic individual scoring model, and simultaneously reducing the number of assessments required (Kiritchenko and Mohammad, 2017) .", "cite_spans": [ { "start": 171, "end": 192, "text": "Liu and Lapata, 2019;", "ref_id": "BIBREF13" }, { "start": 193, "end": 214, "text": "Narayan et al., 2018;", "ref_id": "BIBREF16" }, { "start": 215, "end": 235, "text": "Paulus et al., 2017;", "ref_id": "BIBREF20" }, { "start": 236, "end": 254, "text": "Yoon et al., 2020;", "ref_id": null }, { "start": 255, "end": 273, "text": "Song et al., 2018)", "ref_id": null }, { "start": 388, "end": 411, "text": "(Louviere et al., 2015)", "ref_id": null }, { "start": 544, "end": 576, "text": "(Kiritchenko and Mohammad, 2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Design", "sec_num": "4" }, { "text": "For each debate, 6 different summaries were generated, one for each of the models to be evaluated. These summaries were organised in 9 tuples of 4 elements each, where each summary appeared in 6 of the tuples, presented in random order so that the evaluator could not identify the model used. In total, considering all the debates, 360 tuples were produced. 
Each of these tuples was evaluated by 5 independent evaluators (native Spanish speakers with a minimum education level of a Bachelor's degree), producing a total of 1,800 evaluations. The score for each summary consisted of the percentage of times it was evaluated as Best, minus the percentage of times it was evaluated as Worst.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Design", "sec_num": "4" }, { "text": "In addition, a second evaluation was carried out for two summaries in each debate. The models were selected randomly in each case, while ensuring that each model had the same number of evaluations. Here, we were interested in whether the models produce results of sufficient quality to be useful to participants in the debate. Thus, we used an absolute rather than a relative score. We asked evaluators to rate the following (definitions were shared with evaluators) on a Likert scale from 1 (Strongly disagree) to 4 (Strongly agree):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Design", "sec_num": "4" }, { "text": "\u2022 Informativeness/Relevance. The summary contains the most relevant ideas and positions of the debate. \u2022 Fluency/Readability/Grammaticality. The summary sentences are grammatically correct, easy to read and understand (considering as a baseline the fluency of the original debate). \u2022 Consistency/Faithfulness. The ideas or facts contained in the summary appear in the original debate. \u2022 Creativity. The summary is written in its own words and sentences (instead of copying sentences directly from the debate).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Design", "sec_num": "4" }, { "text": "The results obtained for the overall comparison between models are shown in Table 2 , which reports the average scores of all the evaluators. 
Paired Student's t-tests were performed between all pairs of models to assess whether differences were statistically significant. This is not the case for the BERT and BART models (p = 0.09), showing very close results. There is also a clear overlap between T5 and CopyTransformer. All the other combination pairs are found to have a statistically significant difference (p < 0.05). These results are in line with the previous finding on English datasets that BART and BERT are the top two summarisers (Lewis et al., 2019; Liu and Lapata, 2019) . However, in the present evaluation the performance of a state-of-the-art model (T5) falls below that of a much older model (PG).", "cite_spans": [ { "start": 644, "end": 664, "text": "(Lewis et al., 2019;", "ref_id": "BIBREF11" }, { "start": 665, "end": 686, "text": "Liu and Lapata, 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 76, "end": 83, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5" }, { "text": "The results for the evaluation of the qualitative aspects of each summariser are shown in Table 3 . It is important to note that the standard deviation here is larger than in the first case, owing to the smaller number of evaluations, and thus the statistical significance of the following observations should be borne in mind.", "cite_spans": [], "ref_spans": [ { "start": 90, "end": 97, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5" }, { "text": "In this individual evaluation of each model, it can be seen again that BART obtains the best ratings in all four categories evaluated. BERT is the second best for the categories of 'Informativeness', 'Fluency' and 'Consistency', while PG jumps to the second position for 'Creativity'. 
T5 is in the third position for the categories 'Informativeness' and 'Fluency', and PG is the third best for 'Consistency'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5" }, { "text": "This confirms that BART and BERT produce the best results, with T5 close behind in generating informative summaries but poorer in fluency. This may be the reason why the T5 model performed worse in the general overall comparison.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5" }, { "text": "BART and BERT perform well in terms of 'consistency', with scores close to 3. They perform somewhat worse for 'fluency' and 'informativeness', with scores around the midpoint of the rating scale, 2.5. Regarding 'creativity', the models perform poorly, with a score of around 2, meaning that they tend to copy rather than paraphrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5" }, { "text": "In this study we have evaluated the application of state-of-the-art abstractive summarisation models to deliberative processes in Spanish using an off-the-shelf machine translation model. Although we focused on Spanish in this study, our proposed pipeline can easily be deployed to many other languages without additional effort. This offers significant benefits for production applications (especially those dealing with a wide range of languages) that are rarely available with other approaches, which usually need to be tuned for each language. However, evaluating the quality for other languages is left for future work. 
We have carried out a comparative evaluation of the overall quality of the models, and an evaluation of each model with respect to different qualitative aspects: informativeness, fluency, consistency, and creativity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Overall, of the models evaluated, BART and BERT produced the best results, and the proposed pipeline yields summaries of satisfactory quality. With regard to the most important aspects, the models show a good result for the categories of fluency and consistency, and an average result for informativeness. These results are especially promising considering the complexity and the poor grammatical fluency and consistency of texts typical of deliberative processes. BART and BERT are the only models that score above the scale midpoint in each of the three categories, and thus, we argue, perform sufficiently well to be used in practice. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "We present below an example of a debate used in the evaluation, in Spanish, together with its machine translation into English. Following these, we present the summaries generated using T5, FastAbsRL, BART, and BERT. Finally, we include the translations of these summaries. The texts are presented in the same order used in the project. We start with a debate in Spanish, which is translated into English. This translated debate is summarised, and finally the summary is translated back into Spanish. 
The evaluators analysed only the original debate in Spanish and the final summaries in Spanish.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Appendix", "sec_num": null }, { "text": "\u2022 ademas proponemos tranv\u00eda.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Original Spanish debate", "sec_num": null }, { "text": "\u2022 el casco no es obligatorio para mayores de 15 a\u00f1os mientras circulan en ciudad. lo dice la dgt.por lo dem\u00e1s, te doy la raz\u00f3n. deben cumplir la normativa de circulaci\u00f3n. pero, eh!... los conductores de coches y motos tambi\u00e9n. hay demasiados que no respetan a los ciclistas... \u00bfsabias que en ciudad, un ciclista debe ocupar 1 carril de circulaci\u00f3n... y no ir por el borde?. \u2022 se deber\u00edan sancionar las bicis que van por las aceras o fuera de los carriles bicis. \u2022 si las bicis van por las aceras es porque es muy peligroso ir por los carriles de los coches aunque est\u00e9n marcados. no existe concienciaci\u00f3n todav\u00eda por parte de los usuarios conductores. por otro lado, el hecho en s\u00ed de ir por la acera no es peligroso, siempre que se vaya \"a paso de peat\u00f3n\". lo que no se puede es ir r\u00e1pido.para m\u00ed el verdadero peligro es en las horas nocturnas, en que muchos ciclistas van sin luz alguna y no se ven hasta que est\u00e1s pr\u00e1cticamente encima de ellos... eso en amsterdam est\u00e1 rigurosamente prohibido y se multa. aqu\u00ed he visto a la polic\u00eda municipal pasar de todo al verlos.... \u2022 obviamente quien dice eso no ha cogido una bici en su vida, el casco en bici no salva vidas, es un hecho, salva vidas el conductor respetuoso. \u2022 nunca,pero nunca jam\u00e1s he visto parar un ciclista en un sem\u00e1foro rojo,o se suben a la acera para cruzar sorteando a los peatones o directamente se lo saltan,en un paso de peatones menos se paran.\u00bfqu\u00e9 pasa,que las norma no son para todos por igual? 
si un coche se salta un sem\u00e1foro,la multa es bestial! un poco m\u00e1s de respeto,sobre todo cuando circulan por la acera a la velocidad que les da la gana,con el peligro que conlleva.se creen que todo vale y la calle es suya. Not everyone goes with headphones, not everyone jumps the traffic lights, and cars have to settle for the presence of bikes....it's one more means of transport, and it deserves respect. \u2022 the obligation of the helmet to discourage the use of bicycles, which in the case of Tuesday is improving mobility without increasing pollution A.3 Generated summaries", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Original Spanish debate", "sec_num": null }, { "text": "\u2022 T5. the rules are not for everyone alike. not everyone jumps the traffic lights, not everyone goes with headphones, and not everybody jumps traffic lights. a little more respect, especially when they circulate along the street at the speed that gives them the desire, with the danger that it carries. I don't believe it... never say it! Don't you think it's generalizing too much? Don' \u2022 FastAbsRL. the helmet is not mandatory for more than 15 years .\" the real danger is in which many cyclists go without any light and you don't see until you are practically above them... that in amsterdam is rigorously forbidden . otherwise, i give you the reason. . they must comply with the traffic \u2022 BART. Bikes that go along the sidewalks or off the bike lanes should be sanctioned. The real danger is in the night hours, in which many cyclists go without any light. Not everyone goes with headphones, not everyone jumps the traffic lights, and cars have to settle for the presence of bikes. It's one more means of transport, and it deserves respect. \u2022 BERT. 
the helmet is not mandatory for more than 15 years as they travel in the city it is because it is very dangerous to go down the lanes of the cars even if they are marked there is no awareness yet on the part of the users drivers drivers", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Original Spanish debate", "sec_num": null }, { "text": "A.4 Machine translated summaries", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Original Spanish debate", "sec_num": null }, { "text": "\u2022 T5. las reglas no son para todos por igual. no todos saltan los sem\u00e1foros, no todos van con auriculares, y no todo el mundo salta sem\u00e1foros. un poco m\u00e1s de respeto, especialmente cuando circulan por la calle a la velocidad que les da el deseo, con el peligro que conlleva. No lo creo... nunca lo digo! \u00bfNo crees que est\u00e1 generalizando demasiado? \u2022 FastAbsRL. el casco no es obligatorio durante m\u00e1s de 15 a\u00f1os. \" el verdadero peligro es en el que muchos ciclistas van sin ninguna luz y no se ve hasta que usted est\u00e1 pr\u00e1cticamente por encima de ellos... que en amsterdam est\u00e1 rigurosamente prohibido. Si no, te doy la raz\u00f3n. deben cumplir con el tr\u00e1fico. \u2022 BART. Las bicicletas que van por las aceras o fuera de los carriles bici deben ser sancionadas. El verdadero peligro es en las horas de la noche, en las que muchos ciclistas van sin ninguna luz. No todos van con auriculares, no todos saltan los sem\u00e1foros, y los coches tienen que conformarse con la presencia de bicicletas. Es un medio de transporte m\u00e1s, y merece respeto. \u2022 BERT. 
el casco no es obligatorio por m\u00e1s de 15 a\u00f1os ya que viajan por la ciudad es porque es muy peligroso ir por los carriles de los coches, incluso si est\u00e1n marcados todav\u00eda no hay conciencia por parte de los conductores de los usuarios", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Original Spanish debate", "sec_num": null }, { "text": "https://datos.madrid.es 2 Implementation by HuggingFace https://github.com/huggingface/transformers", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Implementation by the authors https://github.com/nlpyang/PreSumm 4 Implementation by OpenNMT https://opennmt.net/OpenNMT-py/examples/Summarization.html 5 OpenNMT implementation thanks to https://github.com/sebastianGehrmann/bottom-up-summary 6 Implementation by the authors https://github.com/ChenRocks/fast_abs_rl", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/Helsinki-NLP/Opus-MT", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was funded in part by the UK Engineering and Physical Sciences Research Council (grant no. EP/V048597/1). RP is supported by a Turing Fellowship (grant no. EP/N510129/1). YH and ML are supported by Turing AI Fellowships (grant no. 
EP/V020579/1, EP/V030302/1, respectively).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Unsupervised opinion summarization with noising and denoising", "authors": [ { "first": "Reinald", "middle": [], "last": "Kim Amplayo", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.10150" ] }, "num": null, "urls": [], "raw_text": "Reinald Kim Amplayo and Mirella Lapata. 2020. Unsupervised opinion summarization with noising and denoising. arXiv preprint arXiv:2004.10150.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Yulan He, Arkaitz Zubiaga, and Maria Liakata. 2021. Citizen participation and machine learning for a better democracy", "authors": [ { "first": "Miguel", "middle": [], "last": "Arana-Catania", "suffix": "" }, { "first": "Felix-Anselm", "middle": [], "last": "Van Lier", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Procter", "suffix": "" }, { "first": "Nataliya", "middle": [], "last": "Tkachenko", "suffix": "" } ], "year": null, "venue": "Digital Government: Research and Practice", "volume": "2", "issue": "3", "pages": "1--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miguel Arana-Catania, Felix-Anselm Van Lier, Rob Procter, Nataliya Tkachenko, Yulan He, Arkaitz Zubiaga, and Maria Liakata. 2021. Citizen participation and machine learning for a better democracy. 
Digital Government: Research and Practice, 2(3):1-22.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Fast abstractive summarization with reinforce-selected sentence rewriting", "authors": [ { "first": "Yen-Chun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.11080" ] }, "num": null, "urls": [], "raw_text": "Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. arXiv preprint arXiv:1805.11080.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Evaluating the application of nlp tools on mainstream participatory budgeting processes in scotland", "authors": [ { "first": "Jonathan", "middle": [], "last": "Davies", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Arana-Catania", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Procter", "suffix": "" }, { "first": "Felix-Anselm", "middle": [], "last": "Van Lier", "suffix": "" }, { "first": "Yulan", "middle": [], "last": "He", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the International Conference on Theory and Practice of Electronic Governance", "volume": "", "issue": "", "pages": "317--320", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Davies, Miguel Arana-Catania, Rob Procter, Felix-Anselm van Lier, and Yulan He. 2021. Evaluating the application of nlp tools on mainstream participatory budgeting processes in scotland. 
In Pro- ceedings of the International Conference on Theory and Practice of Electronic Governance, pages 317- 320.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Online platforms of public participation: a deliberative democracy or a delusion?", "authors": [ { "first": "Jonathan", "middle": [], "last": "Davies", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Procter", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 13th International Conference on Theory and Practice of Electronic Governance", "volume": "", "issue": "", "pages": "746--753", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Davies and Rob Procter. 2020. Online plat- forms of public participation: a deliberative democ- racy or a delusion? In Proceedings of the 13th In- ternational Conference on Theory and Practice of Electronic Governance, pages 746-753.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. 
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Zero-shot crosslingual abstractive sentence summarization through teaching generation and attention", "authors": [ { "first": "Xiangyu", "middle": [], "last": "Duan", "suffix": "" }, { "first": "Mingming", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Boxing", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Weihua", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3162--3172", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiangyu Duan, Mingming Yin, Min Zhang, Boxing Chen, and Weihua Luo. 2019. Zero-shot cross- lingual abstractive sentence summarization through teaching generation and attention. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3162-3172.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bottom-up abstractive summarization", "authors": [ { "first": "Sebastian", "middle": [], "last": "Gehrmann", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Alexander M", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1808.10792" ] }, "num": null, "urls": [], "raw_text": "Sebastian Gehrmann, Yuntian Deng, and Alexander M Rush. 2018. Bottom-up abstractive summarization. 
arXiv preprint arXiv:1808.10792.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Teaching machines to read and comprehend", "authors": [ { "first": "Karl", "middle": [], "last": "Moritz Hermann", "suffix": "" }, { "first": "Tom\u00e1\u0161", "middle": [], "last": "Ko\u010disk\u1ef3", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Lasse", "middle": [], "last": "Espeholt", "suffix": "" }, { "first": "Will", "middle": [], "last": "Kay", "suffix": "" }, { "first": "Mustafa", "middle": [], "last": "Suleyman", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.03340" ] }, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u1ef3, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. 
arXiv preprint arXiv:1506.03340.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Marian: Fast neural machine translation in c++", "authors": [ { "first": "Marcin", "middle": [], "last": "Junczys-Dowmunt", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "" }, { "first": "Tomasz", "middle": [], "last": "Dwojak", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Heafield", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Neckermann", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Seide", "suffix": "" }, { "first": "Ulrich", "middle": [], "last": "Germann", "suffix": "" }, { "first": "Alham", "middle": [], "last": "Fikri Aji", "suffix": "" }, { "first": "Nikolay", "middle": [], "last": "Bogoychev", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.00344" ] }, "num": null, "urls": [], "raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Al- ham Fikri Aji, Nikolay Bogoychev, et al. 2018. Mar- ian: Fast neural machine translation in c++. arXiv preprint arXiv:1804.00344.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Best-worst scaling more reliable than rating scales: A case study on sentiment intensity annotation", "authors": [ { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1712.01765" ] }, "num": null, "urls": [], "raw_text": "Svetlana Kiritchenko and Saif M Mohammad. 2017. Best-worst scaling more reliable than rating scales: A case study on sentiment intensity annotation. 
arXiv preprint arXiv:1712.01765.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal ; Abdelrahman Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Ves", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.13461" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Rouge: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text summarization branches out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. 
In Text summarization branches out, pages 74-81.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Text summarization with pretrained encoders", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.08345" ] }, "num": null, "urls": [], "raw_text": "Yang Liu and Mirella Lapata. 2019. Text summa- rization with pretrained encoders. arXiv preprint arXiv:1908.08345.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Best-worst scaling: Theory, methods and applications", "authors": [ { "first": "J", "middle": [], "last": "Jordan", "suffix": "" }, { "first": "", "middle": [], "last": "Louviere", "suffix": "" }, { "first": "N", "middle": [], "last": "Terry", "suffix": "" }, { "first": "Anthony Alfred John", "middle": [], "last": "Flynn", "suffix": "" }, { "first": "", "middle": [], "last": "Marley", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jordan J Louviere, Terry N Flynn, and Anthony Al- fred John Marley. 2015. Best-worst scaling: The- ory, methods and applications. 
Cambridge Univer- sity Press.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Asynchronous methods for deep reinforcement learning", "authors": [ { "first": "Volodymyr", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "Adria", "middle": [ "Puigdomenech" ], "last": "Badia", "suffix": "" }, { "first": "Mehdi", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Graves", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Lillicrap", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Harley", "suffix": "" }, { "first": "David", "middle": [], "last": "Silver", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" } ], "year": 2016, "venue": "International conference on machine learning", "volume": "", "issue": "", "pages": "1928--1937", "other_ids": {}, "num": null, "urls": [], "raw_text": "Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asyn- chronous methods for deep reinforcement learning. In International conference on machine learning, pages 1928-1937. PMLR.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Ranking sentences for extractive summarization with reinforcement learning", "authors": [ { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "B", "middle": [], "last": "Shay", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1802.08636" ] }, "num": null, "urls": [], "raw_text": "Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summariza- tion with reinforcement learning. 
arXiv preprint arXiv:1802.08636.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Evaluation of a cross-lingual romanian-english multi-document summariser", "authors": [ { "first": "Constantin", "middle": [], "last": "Or\u01cesan", "suffix": "" }, { "first": "", "middle": [], "last": "Oana Andreea Chiorean", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 6th International Conference on Language Resources and Evaluation, LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Constantin Or\u01cesan and Oana Andreea Chiorean. 2008. Evaluation of a cross-lingual romanian-english multi-document summariser. Proceedings of the 6th International Conference on Language Resources and Evaluation, LREC 2008.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A robust abstractive system for cross-lingual summarization", "authors": [ { "first": "Jessica", "middle": [], "last": "Ouyang", "suffix": "" }, { "first": "Boya", "middle": [], "last": "Song", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2025--2031", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jessica Ouyang, Boya Song, and Kathleen McKeown. 2019. A robust abstractive system for cross-lingual summarization. 
In Proceedings of the 2019 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 2025-2031.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A deep reinforced model for abstractive summarization", "authors": [ { "first": "Romain", "middle": [], "last": "Paulus", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.04304" ] }, "num": null, "urls": [], "raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive sum- marization. arXiv preprint arXiv:1705.04304.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. 
Improving language under- standing by generative pre-training (2018).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.10683" ] }, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.", "links": null } }, "ref_entries": { "TABREF2": { "content": "
Matching (Zhou and Rush, 2019), joining ELMo
with a domain fluency model; PoDA (Wang et al.,
2019), denoising autoencoder transformer with a
pointer-generator layer; and GenParse (
", "type_str": "table", "html": null, "num": null, "text": "Rouge scores reported on the CNN/DailyMail dataset(Hermann et al., 2015)." }, "TABREF4": { "content": "
Model     | Informative | \u03c3 | Fluent | \u03c3
BART      | 2.58 | 0.8 | 2.85 | 0.8
BERT      | 2.53 | 0.8 | 2.65 | 0.9
PG        | 2.33 | 0.7 | 2.28 | 0.8
T5        | 2.50 | 0.8 | 2.30 | 0.8
CopyT     | 2.14 | 0.6 | 2.02 | 0.8
FastAbsRL | 2.02 | 0.7 | 1.73 | 0.6

Model     | Consistent | \u03c3 | Creative | \u03c3
BART      | 2.88 | 0.8 | 2.08 | 0.7
BERT      | 2.72 | 0.9 | 1.98 | 0.6
PG        | 2.67 | 0.8 | 2.02 | 0.6
T5        | 2.63 | 0.9 | 1.97 | 0.6
CopyT     | 2.46 | 0.9 | 1.81 | 0.6
FastAbsRL | 2.13 | 0.7 | 1.82 | 0.7
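The caption above refers to Best-Worst scaling. As a minimal sketch (not the authors' code, assuming the standard counting formula of Kiritchenko and Mohammad (2017)), a model's raw score is the percentage of tuples in which it was picked best minus the percentage in which it was picked worst, giving a value in [\u2212100, 100] that can then be mapped linearly onto [0, 100]:

```python
# Hedged illustration of Best-Worst scaling scores; the tuples below
# are invented examples, not the paper's annotation data.
from collections import Counter

def bws_scores(tuples):
    """tuples: list of (best_item, worst_item, items_shown_in_tuple)."""
    best, worst, appear = Counter(), Counter(), Counter()
    for b, w, items in tuples:
        best[b] += 1
        worst[w] += 1
        for it in items:
            appear[it] += 1
    # Raw score in [-100, 100]: percent chosen best minus percent chosen worst.
    raw = {it: 100.0 * (best[it] - worst[it]) / appear[it] for it in appear}
    # Linear normalisation to [0, 100].
    norm = {it: (s + 100.0) / 2.0 for it, s in raw.items()}
    return raw, norm

raw, norm = bws_scores([
    ("BART", "FastAbsRL", ["BART", "BERT", "PG", "FastAbsRL"]),
    ("BART", "CopyT", ["BART", "T5", "CopyT", "FastAbsRL"]),
])
# BART is chosen best in both tuples it appears in -> raw 100.0, normalised 100.0
```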
", "type_str": "table", "html": null, "num": null, "text": "Comparison scores using the Best-Worst scaling (and thus in the range [\u2212100, 100]) with its standard deviation, and normalised to the [0, 100] range." }, "TABREF5": { "content": "", "type_str": "table", "html": null, "num": null, "text": "" }, "TABREF6": { "content": "
IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(12):2319-2327.
Kaiqiang Song, Lin Zhao, and Fei Liu. 2018. Structure-infused copy mechanisms for abstractive summarization. arXiv preprint arXiv:1806.05658.
J\u00f6rg Tiedemann and Santhosh Thottingal. 2020. Opus-mt: Building open translation services for the world. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 479-480.
Liang Wang, Wei Zhao, Ruoyu Jia, Sujian Li, and Jingming Liu. 2019. Denoising based sequence-to-sequence pre-training for text generation. arXiv preprint arXiv:1908.08206.
Yau-Shian Wang and Hung-Yi Lee. 2018. Learning to encode text as human-readable summaries using generative adversarial networks. arXiv preprint arXiv:1810.02851.
Jin-ge Yao, Xiaojun Wan, and Jianguo Xiao. 2015. Phrase-based compressive cross-language summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 118-127.
Wonjin Yoon, Yoon Sun Yeo, Minbyul Jeong, Bong-Jun Yi, and Jaewoo Kang. 2020. Learning by semantic similarity makes abstractive summarization better. arXiv preprint arXiv:2002.07767.
Jiawei Zhou and Alexander M Rush. 2019. Simple unsupervised summarization by contextual matching. arXiv preprint arXiv:1907.13337.
", "type_str": "table", "html": null, "num": null, "text": "Abigail See, PeterJ Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368.Shi-qi Shen, Yun Chen, Cheng Yang, Zhi-yuanLiu, Mao-song Sun, et al. 2018. Zero-shot cross-lingual neural headline generation." }, "TABREF7": { "content": "
\u2022 se puede circular por la calzada, aunque haya carril bici vecin@.
\u2022 no me lo creo....nunda digas nunca!.
\u2022 \u00bfno cree que est\u00e1 generalizando demasiado? no todos van con auriculares, no todos se saltan los sem\u00e1foros, y los coches se tienen que aconstumbrar a la presencia de las bicis....es un medio de transporte m\u00e1s, y se merece respeto.
\u2022 la obligaci\u00f3n del casco desincentiva el uso d ela bicicleta, que en el caso de mardid est\u00e1 mejorando la movilidad sin aumentar la contaminaci\u00f3n
A.2 Machine translated debate
\u2022 and we're proposing a tram.
\u2022 the helmet is not mandatory for more than 15 years as they
", "type_str": "table", "html": null, "num": null, "text": "travel in the city. says dgt. otherwise, I give you the reason. they must comply with the traffic regulations. but, uh!... the drivers of cars and motorcycles also. there are too many that do not respect cyclists... did you know that in the town, a cyclist must occupy 1 lane of traffic... and not go by the edge?.\u2022 bikes that go along the sidewalks or off the bike lanes should be sanctioned. \u2022 if the bikes go through the sidewalks it is because it is very dangerous to go down the lanes of the cars even if they are marked. there is no awareness yet on the part of the users drivers. On the other hand, the fact itself of going down the sidewalk is not dangerous, as long as it goes \"by foot\".What you can not do is go fast.For me the real danger is in the night hours, in which many cyclists go without any light and you don't see until you are practically above them... that in Amsterdam is rigorously forbidden and is fined. here I have seen the municipal police pass everything when you see them.... \u2022 obviously whoever says that hasn't taken a bike in his life, the bike helmet doesn't save lives, it's a fact, it saves lives the respectful driver. \u2022 never, but I've never seen a cyclist stop at a red light, or get on the sidewalk to cross by shooting pedestrians or directly jump him, at a pace of pedestrians less stop.What happens, that the rules are not for everyone alike? if a car jumps a light, the ticket is best! a little more respect, especially when they circulate along the sidewalk at the speed that gives them the desire, with the danger that it carries. they believe that everything is good and the street is theirs. \u2022 you can drive along the road, even if there is a nearby bicycle lane. \u2022 I don't believe it... never say it! \u2022 Don't you think it's generalizing too much?" } } } }