{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:32:57.128421Z"
},
"title": "Plug-and-Play Controller for Story Completion: A Pilot Study toward Emotion-aware Story Writing Assistance",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Mori",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": ""
},
{
"first": "Hiroaki",
"middle": [],
"last": "Yamane",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "yamane@mi.t.u-tokyo.ac.jp"
},
{
"first": "Ryohei",
"middle": [],
"last": "Shimizu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "shimizu@mi.t.u-tokyo.ac.jp"
},
{
"first": "Tatsuya",
"middle": [],
"last": "Harada",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "harada@mi.t.u-tokyo.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Emotions are essential for storytelling and narrative generation, and as such, the relationship between stories and emotions has been extensively studied. The authors of this paper, including a professional novelist, have examined the use of natural language processing to address the problems of novelists from the perspective of practical creative writing. In particular, the story completion task, which requires understanding the existing unfinished context, was studied from the perspective of creative support for human writers, to generate appropriate content to complete the unfinished parts. It was found that unsupervised pre-trained large neural models of the sequence-to-sequence type are useful for this task. Furthermore, based on the plug-and-play module for controllable text generation using GPT-2, an additional module was implemented to consider emotions. Although this is a preliminary study, and the results leave room for improvement before incorporating the model into a practical system, this effort is an important step in complementing the emotional trajectory of the story.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Emotions are essential for storytelling and narrative generation, and as such, the relationship between stories and emotions has been extensively studied. The authors of this paper, including a professional novelist, have examined the use of natural language processing to address the problems of novelists from the perspective of practical creative writing. In particular, the story completion task, which requires understanding the existing unfinished context, was studied from the perspective of creative support for human writers, to generate appropriate content to complete the unfinished parts. It was found that unsupervised pre-trained large neural models of the sequence-to-sequence type are useful for this task. Furthermore, based on the plug-and-play module for controllable text generation using GPT-2, an additional module was implemented to consider emotions. Although this is a preliminary study, and the results leave room for improvement before incorporating the model into a practical system, this effort is an important step in complementing the emotional trajectory of the story.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this study, the authors, one of whom is a professional novelist, examined the use of natural language processing to solve the problems faced by novelists from the perspective of practical creative writing. Among the diverse topics related to automatic storytelling and human creativity, \"emotion\" should be emphasized as an important keyword. The relationship between stories and emotions has been an essential part of the research in the field of humanities, especially in the cognitive and affective science of literature (Hogan, 2006; Pandit and Hogan, 2006; Johnson-Laird and Oatley, 2008; Hogan, 2010 Hogan, , 2019 .",
"cite_spans": [
{
"start": 527,
"end": 540,
"text": "(Hogan, 2006;",
"ref_id": "BIBREF12"
},
{
"start": 541,
"end": 564,
"text": "Pandit and Hogan, 2006;",
"ref_id": "BIBREF34"
},
{
"start": 565,
"end": 596,
"text": "Johnson-Laird and Oatley, 2008;",
"ref_id": "BIBREF17"
},
{
"start": 597,
"end": 608,
"text": "Hogan, 2010",
"ref_id": "BIBREF13"
},
{
"start": 609,
"end": 622,
"text": "Hogan, , 2019",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In providing practical knowledge for authors, creative techniques emphasize the importance of being conscious of readers' emotions (Field, 2006; Snyder, 2005) . The theory of the emotional arc, which states that a good story can be typified by emotional movement, is well known from the introduction by a popular American novelist, Vonnegut (1995) . As presented in Reagan et al. (2016) , studies have been conducted to reveal the close relationship between emotions and stories. Ackerman and Puglisi (2012) insisted that a key component of every character is emotion. In the context of serious storytelling, Lugmayr et al. (2017) insisted that a fundamental aspect of storytelling is emotions, that is, the cognitive aspects that the story evokes in its audience. Numerous efforts have been made to disclose the mystery of the relationship between emotions and stories (Anderson and McMaster, 1982; Strapparava and Mihalcea, 2008; Abdul-Mageed and Ungar, 2017; Klinger, 2018, 2019a,b; Zad and Finlayson, 2020) .",
"cite_spans": [
{
"start": 131,
"end": 144,
"text": "(Field, 2006;",
"ref_id": "BIBREF10"
},
{
"start": 145,
"end": 158,
"text": "Snyder, 2005)",
"ref_id": "BIBREF46"
},
{
"start": 332,
"end": 347,
"text": "Vonnegut (1995)",
"ref_id": "BIBREF48"
},
{
"start": 366,
"end": 386,
"text": "Reagan et al. (2016)",
"ref_id": "BIBREF42"
},
{
"start": 480,
"end": 507,
"text": "Ackerman and Puglisi (2012)",
"ref_id": "BIBREF2"
},
{
"start": 609,
"end": 630,
"text": "Lugmayr et al. (2017)",
"ref_id": "BIBREF26"
},
{
"start": 870,
"end": 899,
"text": "(Anderson and McMaster, 1982;",
"ref_id": "BIBREF3"
},
{
"start": 900,
"end": 931,
"text": "Strapparava and Mihalcea, 2008;",
"ref_id": "BIBREF47"
},
{
"start": 932,
"end": 961,
"text": "Abdul-Mageed and Ungar, 2017;",
"ref_id": "BIBREF0"
},
{
"start": 962,
"end": 985,
"text": "Klinger, 2018, 2019a,b;",
"ref_id": null
},
{
"start": 986,
"end": 1010,
"text": "Zad and Finlayson, 2020)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This study focuses on introducing emotions into a story completion (SC) task. The basic task setting in SC is shown in Figure 1 . 1 In the field of story generation and understanding, Wang and Wan (2019) proposed SC. We believe that the artificial intelligence (AI) ability to solve SC tasks is important in the context of providing creative support. If writers cannot complete a story and do not know how to proceed with a plot, a suitable model can provide them with appropriate support.",
"cite_spans": [
{
"start": 130,
"end": 131,
"text": "1",
"ref_id": null
},
{
"start": 184,
"end": 203,
"text": "Wang and Wan (2019)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 119,
"end": 127,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this study are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The importance of emotion in stories was confirmed from the perspective of a professional writer, based on which, the possibility of incorporating emotions into SC tasks is discussed for creative support, and a specific method is proposed to accomplish this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Jake was a good dancer but he was shy. <missing_sentence> Every time he saw her he got shy and didn't ask. The day before the dance Mary asked Jake. Jake said yes and he showed Mary how to dance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Story Completion",
"sec_num": null
},
{
"text": "Jake was a good dancer but he was shy. He wanted to ask Mary to the school dance. Every time he saw her he got shy and didn't ask. The day before the dance Mary asked Jake. Jake said yes and he showed Mary how to dance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Story Completion",
"sec_num": null
},
{
"text": "He was excited that he was ready to ask Mary to the dance party.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Story Completion",
"sec_num": null
},
{
"text": "He wanted to ask Mary to dance together but was afraid to be declined. Figure 1 : Conceptual diagram of the functionality this study aims for. 1 \u20dd Overview of the story completion task. To address the <missing_position> token in an incomplete story, unsupervised pre-trained large neural models are used. 2 \u20dd PPLM is used to control the emotions of the generative text. The representation of the emotions in this figure was reconstructed from an image by Russell (1980) .",
"cite_spans": [
{
"start": 305,
"end": 306,
"text": "2",
"ref_id": null
},
{
"start": 455,
"end": 469,
"text": "Russell (1980)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 71,
"end": 79,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Story Completion",
"sec_num": null
},
{
"text": "\u2022 Control of SC was examined through our implementation using the plug-and-play language model (PPLM) (Dathathri et al., 2020) , whereby the application of the PPLM, which is originally limited, was expanded.",
"cite_spans": [
{
"start": 102,
"end": 126,
"text": "(Dathathri et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Story Completion",
"sec_num": null
},
{
"text": "This study is a preliminary study, and the results should be improved before incorporating the model into a practical system. However, we believe that this effort is an important step toward complementing the emotional trajectory of the story and worth discussing for future directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Story Completion",
"sec_num": null
},
{
"text": "As a complementary contribution to this study, we would like to note that a professional writer researched how to use natural language processing (NLP) technology to reflect the viewpoints of writers and researchers. We expect that this work will contribute to building a bridge toward collaborative work between professional writers and researchers in NLP and human computer interface (HCI) to accelerate research in the field of story writing assistance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Story Completion",
"sec_num": null
},
{
"text": "In the field of story generation and understanding, Wang and Wan (2019) proposed SC. Given any four sentences in a five-sentence story, the objective of the task is to generate a sentence that is not provided (missing plot), to complete the story. In addition to this, research on text infilling has been actively conducted in recent years (Ippolito et al., 2019; Donahue et al., 2020; Wang et al., 2020) . We pointed out that the ability to solve an SC task is essential from the viewpoint of creative support for writers (Mori et al., 2020) . If writers cannot complete a story and do not know how to proceed with the plot, AI can provide appropriate support for filling in the blanks.",
"cite_spans": [
{
"start": 52,
"end": 71,
"text": "Wang and Wan (2019)",
"ref_id": "BIBREF50"
},
{
"start": 340,
"end": 363,
"text": "(Ippolito et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 364,
"end": 385,
"text": "Donahue et al., 2020;",
"ref_id": "BIBREF8"
},
{
"start": 386,
"end": 404,
"text": "Wang et al., 2020)",
"ref_id": "BIBREF49"
},
{
"start": 523,
"end": 542,
"text": "(Mori et al., 2020)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Story Completion",
"sec_num": "2.1"
},
{
"text": "In this study, controlled text generation with emotion awareness is applied to SC. Focusing on stories, a method is proposed to handle this task in a simple manner by including a special token, specific to the task. By organizing the task in a simple manner, it becomes possible to solve it in a similar way with various models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Story Completion",
"sec_num": "2.1"
},
{
"text": "Some studies have attempted to control story generation by considering emotions (Chandu et al., 2019; Luo et al., 2019; Brahman and Chaturvedi, 2020; Dathathri et al., 2020; . The study closest to ours is that of Brahman and Chaturvedi (2020) . They insisted that their study was the first to model the emotional trajectory of the protagonist in neural storytelling. There are significant differences between their study and ours with respect to task setting and the approach taken.",
"cite_spans": [
{
"start": 80,
"end": 101,
"text": "(Chandu et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 102,
"end": 119,
"text": "Luo et al., 2019;",
"ref_id": "BIBREF27"
},
{
"start": 120,
"end": 149,
"text": "Brahman and Chaturvedi, 2020;",
"ref_id": "BIBREF5"
},
{
"start": 150,
"end": 173,
"text": "Dathathri et al., 2020;",
"ref_id": "BIBREF7"
},
{
"start": 213,
"end": 242,
"text": "Brahman and Chaturvedi (2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion-aware Storytelling",
"sec_num": "2.2"
},
{
"text": "First, Brahman and Chaturvedi (2020) attempted to generate an entire story from the task, while our focus is on the SC task that a model reads to understand what is written in the original context. In this study, dimensional emotions (valence and arousal) were used instead of categorical emotions (four basic emotions in addition to neutral). Dividing emotions into categories is easy to understand, but for precise control, it is desirable to handle emotions as continuous values. Luo et al. (2019) tackled fine-grained emotion control of story generation, but their objective was story ending rather than completion. Moreover, the controlled emotion was restricted to one dimension (positive-negative). The interest in this study is the control of more diverse two-dimensional emotions based on Russell's circumplex model (Russell, 1980) .",
"cite_spans": [
{
"start": 483,
"end": 500,
"text": "Luo et al. (2019)",
"ref_id": "BIBREF27"
},
{
"start": 825,
"end": 840,
"text": "(Russell, 1980)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion-aware Storytelling",
"sec_num": "2.2"
},
{
"text": "There are some works in unsupervised pre-trained large neural models for control text generation. Keskar et al. (2019) proposed CTRL to control specific aspects of text generation in large-scale language models. Based on the large-scale language model MEGATRON and knowledge-enhanced story generation (Guan et al., 2020) , proposed MEGATRON-CNTRL. In other studies, Rashkin et al. 2020proposed the task of outline-conditioned story generation, whereby the input only provided a rough sketch of the plot. Therefore, models must generate a story by interweaving the key points provided in the outline. Inspired by plug-and-play generative networks (PPGN) (Nguyen et al., 2017) in computer vision, Dathathri et al. (2020) proposed PPLM, an alternative approach for controlled text generation. Their approach uses attachment models for pre-trained GPT-2 (Radford et al., 2019) to control the word probability distribution during the word-by-word generation process. Optimization is performed ex post facto in the activation space; therefore, no retraining or fine-tuning of the core language model is required. Following this approach, methods have been presented to control the output by adding modules for output control without modifying the core model, such as DE-LOREAN (DEcoding for nonmonotonic LOgical REAsoNing) (Qin et al., 2020) , side-tuning (Zhang et al., 2020a) , auxiliary tuning (Zeldes et al., 2020) , and GeDi (Krause et al., 2021) . In this study, PPLM, which is a well-designed, simple, and powerful method, is applied for emotion-controllable story generation. Dathathri et al. (2020) explored controlled generation for assistive story writing, demonstrating the usefulness of PPLM in this area. However, they conducted an exploration of open-ended story generation, not SC.",
"cite_spans": [
{
"start": 98,
"end": 118,
"text": "Keskar et al. (2019)",
"ref_id": "BIBREF18"
},
{
"start": 301,
"end": 320,
"text": "(Guan et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 653,
"end": 674,
"text": "(Nguyen et al., 2017)",
"ref_id": "BIBREF33"
},
{
"start": 695,
"end": 718,
"text": "Dathathri et al. (2020)",
"ref_id": "BIBREF7"
},
{
"start": 850,
"end": 872,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF39"
},
{
"start": 1317,
"end": 1335,
"text": "(Qin et al., 2020)",
"ref_id": "BIBREF38"
},
{
"start": 1350,
"end": 1371,
"text": "(Zhang et al., 2020a)",
"ref_id": "BIBREF55"
},
{
"start": 1391,
"end": 1412,
"text": "(Zeldes et al., 2020)",
"ref_id": null
},
{
"start": 1424,
"end": 1445,
"text": "(Krause et al., 2021)",
"ref_id": null
},
{
"start": 1578,
"end": 1601,
"text": "Dathathri et al. (2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Controllable text generation with Transformer",
"sec_num": "2.3"
},
{
"text": "This section describes the proposed method in detail, emphasizing the ingenuity of its implementation. The proposed model has a novel architecture composed of two main parts for SC tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "\u2022 Fine-tuning unsupervised pre-trained large neural models for the SC task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "\u2022 Emotion-aware controlling of fine-tuned models using PPLM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "Studies on applying unsupervised pre-trained large neural models for text infilling have been actively conducted recently (Ippolito et al., 2019; Donahue et al., 2020; Wang et al., 2020) . The first part of our method follows this trend and is verified using various models.",
"cite_spans": [
{
"start": 122,
"end": 145,
"text": "(Ippolito et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 146,
"end": 167,
"text": "Donahue et al., 2020;",
"ref_id": "BIBREF8"
},
{
"start": 168,
"end": 186,
"text": "Wang et al., 2020)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "In Subsection 3.2, a modified version of PPLM (Dathathri et al., 2020) is proposed for emotionaware SC. PPLM, given a prompt (user input text), generates subsequent sentences, as it uses GPT-2 as a base model and tiny attribute models. In this study, the PPLM model was expanded through concatenation with other models.",
"cite_spans": [
{
"start": 46,
"end": 70,
"text": "(Dathathri et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "The model code was implemented using Py-Torch (Paszke et al., 2019) , which is an opensource machine-learning framework provided as a Python library. 2 To make use of unsupervised pre-trained large neural models, our code was also based on Huggingface Transformers (Wolf et al., 2020) , which provide general-purpose architectures for natural language understanding (NLU) and natural language generation (NLG).",
"cite_spans": [
{
"start": 46,
"end": 67,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 265,
"end": 284,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
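{
"text": "As a minimal illustrative sketch (not the authors' released code; the checkpoint name is an example from the Transformers model hub, not necessarily the exact model used here), a Seq2SeqLM and its tokenizer can be loaded as follows:\n# Minimal sketch: loading a pre-trained Seq2SeqLM with HuggingFace Transformers.\n# \"facebook/bart-large\" is an illustrative checkpoint.\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\n\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-large\")\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/bart-large\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},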
{
"text": "The focus here is mainly on Seq2Seq language models (Seq2SeqLMs). For Seq2SeqLMs and its variants, the models below were used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "\u2022 BART (Lewis et al., 2020) Causal language models (CLMs), which have a left-to-right architecture, do not seem to perform well on SC because they were originally designed for the generation of a continuation of the given prompt and not for completing the missing part, by considering the before and after of the missing part. However, Donahue et al. (2020) proposed the infilling by language modeling (ILM), an approach that enables CLMs to leverage the entire context for text infilling. We left it for future work to apply CLMs to controllable story completion with our proposed method.",
"cite_spans": [
{
"start": 7,
"end": 27,
"text": "(Lewis et al., 2020)",
"ref_id": "BIBREF23"
},
{
"start": 336,
"end": 357,
"text": "Donahue et al. (2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "PyTorch version 1.11.0, and HuggingFace Transformers version 4.18.0 were used. 4 The details of pre-trained models are displayed in Table 1 .",
"cite_spans": [
{
"start": 79,
"end": 80,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "Initially, models for SC that do not consider emotions should be trained for plug-and-play control. In this study, these methods are referred to as \"Noemotion-aware baselines.\" As shown in Figure 1 , a special token was defined for the SC task: \"<missing_position>\". A special token is inserted into the missing position k, such that the input to the model becomes S \u2032 = {s 1 , ..., s k\u22121 , <missing_position>, s k+1 , ..., s n }. s stands for a sentence, and the subscript number indicates the position of the sentence in the entire text. Subsequently, the model outputs s k , as defined in the task.",
"cite_spans": [],
"ref_spans": [
{
"start": 189,
"end": 198,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "No-emotion-aware baselines",
"sec_num": "3.1"
},
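{
"text": "A minimal sketch of this input construction, continuing the loading sketch above (the helper function is hypothetical, not the authors' implementation); the special token is also registered with the tokenizer so it is not split into subwords:\n# Hypothetical helper for building the SC input described above.\nSPECIAL_TOKEN = \"<missing_position>\"\n\ndef build_input(sentences, k):\n    # Replace the k-th sentence (0-indexed) with the special token.\n    s = list(sentences)\n    s[k] = SPECIAL_TOKEN\n    return \" \".join(s)\n\n# Register the token and resize the embedding matrix accordingly.\ntokenizer.add_special_tokens({\"additional_special_tokens\": [SPECIAL_TOKEN]})\nmodel.resize_token_embeddings(len(tokenizer))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "No-emotion-aware baselines",
"sec_num": "3.1"
},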
{
"text": "For Seq2SeqLMs, the S \u2032 are concatenated into one text and fed to the encoder. The encoder then passes the calculated embeddings to the decoder and generates text. The output is expected to be a single sentence; however, it was also explored if the model could learn from fine-tuning, including \"generate only one sentence,\" constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "No-emotion-aware baselines",
"sec_num": "3.1"
},
{
"text": "In this study, PPLM was updated for use in emotion control during story completion. PPLM was originally implemented as an additional module for GPT-2 (the default model was GPT2-medium). Adapting PPLM to Seq2SeqLMs required some implementation ingenuities. PPLM was originally designed to generate the continuation of a given text using a decoder-only model. In contrast, in this study, the given text is first processed with the encoder, and then the resulting tensor is used to generate sentences with the decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion Controlling Methods",
"sec_num": "3.2"
},
{
"text": "PPLM has two types of attribute models: bagof-words (PPLM-BoW) and discriminator (PPLM-Discrim). Originally, PPLM-BoW did not include an emotion control set. PPLM-Discrim has a pretrained model for sentiment control, but it is positive-negative. In this study, the focus was on PPLM-BoW because it can function by preparing a list of words without additional learning. Thus, the original word list provided in PPLM can be used, but this does not consider valence and arousal. Hence, the NRC valence, arousal, and dominance lexicon (Mohammad, 2018 ) (NRC-VAD lexicon) was used to obtain the word list annotated with dimensional emotion values, which was subsequently fed into PPLM-BoW. Instead of using the entire NRC-VAD lexicon as is, in our implementation, a range of values can be specified for valence and arousal (and dominance) at runtime to obtain a subset within that range. discrete uniform distribution. For the development and test sets, the removal procedure was performed when creating the dataset to improve reproducibility. For the training set, the original five-sentence story was retained in the dataset and a sentence was randomly removed while reading the data during training. This setting followed that of our previous study (Mori et al., 2020) .",
"cite_spans": [
{
"start": 531,
"end": 546,
"text": "(Mohammad, 2018",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion Controlling Methods",
"sec_num": "3.2"
},
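{
"text": "A minimal sketch of this lexicon filtering, assuming the NRC-VAD lexicon as a tab-separated file of word, valence, arousal, and dominance values in [0, 1] (the file name and exact format are assumptions):\ndef load_bow_subset(path, v_range=(0.0, 0.3), a_range=(0.7, 1.0)):\n    # Keep only words whose valence and arousal fall within the given ranges.\n    words = []\n    with open(path, encoding=\"utf-8\") as f:\n        for line in f:\n            fields = line.rstrip(\"\\n\").split(\"\\t\")\n            if len(fields) < 3:\n                continue\n            try:\n                v, a = float(fields[1]), float(fields[2])\n            except ValueError:\n                continue  # header or malformed line\n            if v_range[0] <= v <= v_range[1] and a_range[0] <= a <= a_range[1]:\n                words.append(fields[0])\n    return words\n\n# e.g., negative and excited words for PPLM-BoW:\nbow_words = load_bow_subset(\"NRC-VAD-Lexicon.txt\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion Controlling Methods",
"sec_num": "3.2"
},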
{
"text": "For training, the AdamW (Loshchilov and Hutter, 2019) optimizer was used with parameters \u03b2 1 = 0.9, \u03b2 2 = 0.999, and\u03f5 = 1e \u2212 08. The initial learning rate was set to 3e \u2212 05 and linearly decreased thereafter from the initial point to 0 to avoid overfitting. The model was fine-tuned using NVIDIA Tesla V100 GPUs and the size of the training batch was set to 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.2"
},
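{
"text": "A minimal sketch of this optimizer setup (the hyperparameter values are those reported above; model and total_steps are assumed to be defined elsewhere):\n# Sketch: AdamW with a linear learning-rate decay to 0.\nimport torch\nfrom transformers import get_linear_schedule_with_warmup\n\noptimizer = torch.optim.AdamW(model.parameters(), lr=3e-05, betas=(0.9, 0.999), eps=1e-08)\nscheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0, num_training_steps=total_steps)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.2"
},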
{
"text": "We use two sets of training parameters. One is task-specific parameters, defined for each model based on with reference to its use for the summarization task. The other is common parameters for all models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.2"
},
{
"text": "Seq2SeqLMs significantly improved the performance compared to conventional models in text-totext tasks, especially in summarization and translation. Of these two well-worked tasks, we hypothesized that the training settings for summarization are closer to what we need for SC. SC requires methods to understand the context, to generate appropriate sentences for completion. The given context is typically longer than a sentence for completion. In summary, methods are required to understand the entire text, to generate shorter sentences to represent it. Although there are two types of approaches, extractive summarization and abstractive summarization, the basic objective is the same. On the other hand, in translation tasks, although it is also important to understand the input content, the output length is not significantly different from the input length (note that there is a difference related to the nature of each language). There are also application examples, such as para-phrasing in one language, but the input and output are generally in different languages during translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.2"
},
{
"text": "What varies from model to model is the setting such as length penalty and max length of input and output sequence. The length penalty places a constraint on the length of the generated sentences, prompting the generation of longer sentences if it is greater than 1.0, and shorter sentences if it is less than 1.0. As mentioned above, task-specific parameters prepared for summarization were used in this study. This was done to ensure the fairness of the settings by unifying the parameters in \"solving SC by directly applying the settings of the summarization task.\" 5 For this reason, the length penalty was set to 2.0 for T5 in this experiment, 1.0 for BART, and 0.8 for PEGASUS. For XLM-ProphetNet, the penalty was 2.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.2"
},
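{
"text": "At generation time, these settings correspond roughly to the following sketch (the beam size and maximum length are illustrative assumptions, not values reported here):\n# Sketch: generating a completion with a task-specific length penalty.\ninputs = tokenizer(incomplete_story, return_tensors=\"pt\")\noutput_ids = model.generate(**inputs, num_beams=4, length_penalty=2.0, max_length=64)  # e.g., 2.0 for T5\ncompletion = tokenizer.decode(output_ids[0], skip_special_tokens=True)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.2"
},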
{
"text": "For a different sense of fairness, we provided another setting that uses a common length penalty. In this setting, the length penalty is 1.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.2"
},
{
"text": "It is necessary to evaluate a large number of models and their variants (model parameters, training parameters, tasks that are fine-tuned beforehand, etc.). Thus, automatic evaluation metrics were employed instead of human evaluation. Stories entertain the reader (or evoke other emotions); therefore, human evaluation is important. However, there is a huge cost involved in terms of time and money for evaluating various parameters in many models. In addition, there are factors such as age, gender, and regional trends in texts, particularly in stories. The problem is that stories liked by someone are not always liked by others. In this section, the focus is on automatic evaluation metrics for a large number of models. The human evaluation of a narroweddown list of promising candidate models is left for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.3"
},
{
"text": "The following metrics were used for the evaluation: BLEU (Papineni et al., 2002) , ROUGE (Lin, 2004) , METEOR (Banerjee and Lavie, 2005) , BERTScore , 6 and BLEURT (Sellam et al., 2020) . 7 The Python library Hug-gingFace Datasets was used for certain metrics; 'sacrebleu' as BLEU, ROUGE and METEOR. 8 For each of BERTScore and BLEURT, the original implementation of each paper was used.",
"cite_spans": [
{
"start": 57,
"end": 80,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF35"
},
{
"start": 89,
"end": 100,
"text": "(Lin, 2004)",
"ref_id": "BIBREF24"
},
{
"start": 110,
"end": 136,
"text": "(Banerjee and Lavie, 2005)",
"ref_id": "BIBREF4"
},
{
"start": 151,
"end": 152,
"text": "6",
"ref_id": null
},
{
"start": 164,
"end": 185,
"text": "(Sellam et al., 2020)",
"ref_id": "BIBREF44"
},
{
"start": 188,
"end": 189,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.3"
},
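{
"text": "A minimal sketch of this metric computation with the HuggingFace Datasets library (load_metric was the API of the library version used here; newer code uses the separate evaluate package):\nfrom datasets import load_metric\n\nbleu = load_metric(\"sacrebleu\")\nrouge = load_metric(\"rouge\")\npredictions = [\"I went to the fish market yesterday.\"]\nreferences = [[\"I went to a restaurant yesterday.\"]]  # sacrebleu takes a list of references per prediction\nprint(bleu.compute(predictions=predictions, references=references)[\"score\"])\nprint(rouge.compute(predictions=predictions, references=[r[0] for r in references]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.3"
},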
{
"text": "First, experiments were conducted using noemotion-aware baselines. Table 3 lists the test set results of Seq2SeqLMs evaluated using automatic evaluation metrics. In this comparison, the entire story was not compared; however, the generated complementary sentence was compared with the original sentence (the missing sentence). The value of F1 was used for ROUGE and BERTScore. In addition, for BERTScore, the authors obtained an average when evaluating the models. 9 BLEURT was treated in a similar manner.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "No-emotion-aware baselines",
"sec_num": "5.1"
},
{
"text": "The results indicated that BART large exhibited the highest scores for every metric. For a deeper analysis of the metric results, Table 4 was created for average generation length and runtime. In BART base, BART large, and PEGASUS, the two training settings didn't have a significant impact. On the other hand, for T5 base, T5 large, and XLM-ProphetNet, better results were obtained when using task-specific parameters. The result suggests that the parameters for summarization work well for story completion, especially when the model requires a large length penalty for summarization tasks. Table 5 and 6 display the examples generated.",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 137,
"text": "Table 4",
"ref_id": null
},
{
"start": 593,
"end": 600,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "No-emotion-aware baselines",
"sec_num": "5.1"
},
{
"text": "The Seq2SeqLM + PPLM-BoW results are presented in Table 7 . As BART large displayed the best result in the no-emotion-aware baseline experiment, BART large was used as the first step of Emotion-aware SC with Seq2SeqLM + PPLM.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Emotion Controlling Method",
"sec_num": "5.2"
},
{
"text": "In the examples shown in Table 7 , the ranges of valence and arousal were set to 0.0 <= valence <= 0.3 and 0.7 <= arousal <= 1.0, respectively. As valence is negative and arousal is high, negative and excited emotions are expected to emerge. The results of an uncontrolled trial (unperturbed) and three controlled trials (perturbed) are presented as examples. Perturbed 1 seems to be controlled by \"negative and excited.\" In the context of careful driving, it is not unnatural for events related to the car to occur, and on top of that, the expression that the car gets stuck is negative. We showed an example where the generation of emotion-controlled sentences worked well. However, the adjustment of the parameters to generate a sequence was very severe. PPLM provides parameters to manipulate the generated results, but it is very difficult to adjust these parameters, at least in combination with Seq2SeqLM.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Emotion Controlling Method",
"sec_num": "5.2"
},
{
"text": "We should note that the BART large model used here was trained with an older version of Py-Torch and Transformers. Unfortunately, the version trained with PyTorch 1.11.0 and Transformers 4.18.0 used in this Seq2SeqLM Story Completion did not produce good results with the same generation parameters. Although we could run the modified PPLM with the libraries of the newer version, the choice of the fine-tuned model is also severe.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion Controlling Method",
"sec_num": "5.2"
},
{
"text": "PPLM was originally designed for use with GPT-2, but in this study, it was modified and applied to Seq2SeqLM. Specifically, it was confirmed that PPLM works on BART. However, when we used the Seq2SeqLM model which was fine-tuned for no-emotion-aware SC to generate sentences controlled with PPLM, we found that the sentences tended to be shorter than those generated without PPLM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Emotion Controlling Method",
"sec_num": "5.2"
},
{
"text": "The no-emotion-aware baseline results indicate that BART large exhibited the highest scores for every metric. In this study, we used two sets of training parameters: one is based on summarization task-specific parameters and the other is common parameters. The result showed that the parameters for summarization work well for story completion, compared to common parameters that do not account for differences between models. Future studies should search for specific parameters for each model that are more suitable for SC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In this study, PPLM was extended and combined with BART, a representative model of Seq2SeqLMs. In addition, by combining PPLM with the NRC-VAD lexicon, a basis was created for SC to consider valence and arousal. However, there is still a lot of room for improvement in the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "In text generation, it is important to control the behavior of the model using parameters such as Table 4 : The mean generated length and the runtime of no-emotion-aware Seq2SeqLMs. \"w/ specific param\" indicates that the model is trained using the task-specific parameters of each model. storyid dc36af5e-a65f-4193-8f3c-5162c8af6755 context <missing_position> I wanted to take out some fish. But then the lady was not using gloves. I was disgusted. I ended up walking out. missing_id 0 GT I went to a restaurant yesterday. BART base I went to the fish market with my friends. BART large I went to the fish market yesterday. PEGASUS large I went to the fish market today for the first time. T5 base I went to a fish market one day. I was very hungry. T5 large I went to a fish market one day with my friends.",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 105,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "She was to to the.... GT completed story I went to a restaurant yesterday. I wanted to take out some fish. But then the lady was not using gloves. I was disgusted. I ended up walking out. BART base completed story I went to the fish market with my friends. I wanted to take out some fish. But then the lady was not using gloves. I was disgusted. I ended up walking out. BART large completed story I went to the fish market yesterday. I wanted to take out some fish. But then the lady was not using gloves. I was disgusted. I ended up walking out. PEGASUS large completed story I went to the fish market today for the first time. I wanted to take out some fish. But then the lady was not using gloves. I was disgusted. I ended up walking out. T5 base completed story I went to a fish market one day. I was very hungry. I wanted to take out some fish. But then the lady was not using gloves. I was disgusted. I ended up walking out. T5 large completed story I went to a fish market one day with my friends. I wanted to take out some fish. But then the lady was not using gloves. I was disgusted. I ended up walking out. XLM-ProphetNet large completed story She was to to the.... I wanted to take out some fish. But then the lady was not using gloves. I was disgusted. I ended up walking out. Table 5 : Examples of contexts and completion sentences generated by no-emotion-aware Seq2SeqLMs. In this case, the task-specific parameters for each model were used.",
"cite_spans": [],
"ref_spans": [
{
"start": 1290,
"end": 1297,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "XLM-ProphetNet large",
"sec_num": null
},
{
"text": "the length penalty. Two types of parameters were experimented with in this study, but further effort is required to determine the best parameter. The optimal hyperparameters seem to be naturally dif-ferent for each model. It is not realistic to check all outputs using the human eye while adjusting hyperparameters within a wide range of values for many models. Therefore, an automatic evaluation storyid f2a013bd-852f-43f4-9012-4db8ae44c64e context Jane had a very sick dog. Her dog was old and couldn't run anymore. So that he still felt young, Jane used to walk her dog in a pram. <missing_position> Jane didn't care as she knew she was making him feel better. missing_id 3 GT This would look strange to the public.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-ProphetNet large",
"sec_num": null
},
{
"text": "One day, her dog fell down and broke his leg.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BART base",
"sec_num": null
},
{
"text": "Her dog got very sick and couldn't run anymore.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BART large",
"sec_num": null
},
{
"text": "One day, her dog got sick and had to be put down.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PEGASUS large",
"sec_num": null
},
{
"text": "One day, she noticed that her dog was very sick.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "T5 base",
"sec_num": null
},
{
"text": "One day, her dog got sick and couldn't walk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "T5 large",
"sec_num": null
},
{
"text": "He was to to the the..",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-ProphetNet large",
"sec_num": null
},
{
"text": "Jane had a very sick dog. Her dog was old and couldn't run anymore. So that he still felt young, Jane used to walk her dog in a pram. This would look strange to the public. Jane didn't care as she knew she was making him feel better. BART base completed story Jane had a very sick dog. Her dog was old and couldn't run anymore. So that he still felt young, Jane used to walk her dog in a pram. One day, her dog fell down and broke his leg. Jane didn't care as she knew she was making him feel better. BART large completed story Jane had a very sick dog. Her dog was old and couldn't run anymore. So that he still felt young, Jane used to walk her dog in a pram. Her dog got very sick and couldn't run anymore. Jane didn't care as she knew she was making him feel better. PEGASUS large completed story Jane had a very sick dog. Her dog was old and couldn't run anymore. So that he still felt young, Jane used to walk her dog in a pram. One day, her dog got sick and had to be put down. Jane didn't care as she knew she was making him feel better. T5 base completed story Jane had a very sick dog. Her dog was old and couldn't run anymore. So that he still felt young, Jane used to walk her dog in a pram. One day, she noticed that her dog was very sick. Jane didn't care as she knew she was making him feel better. T5 large completed story Jane had a very sick dog. Her dog was old and couldn't run anymore. So that he still felt young, Jane used to walk her dog in a pram. One day, her dog got sick and couldn't walk. Jane didn't care as she knew she was making him feel better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GT completed story",
"sec_num": null
},
{
"text": "Jane had a very sick dog. Her dog was old and couldn't run anymore. So that he still felt young, Jane used to walk her dog in a pram. He was to to the the.. Jane didn't care as she knew she was making him feel better. Table 6 : Examples of contexts and completion sentences generated by no-emotion-aware Seq2SeqLMs. In this case, the same hyperparameters were used for length penalty and max length.",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 225,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "XLM-ProphetNet large completed story",
"sec_num": null
},
{
"text": "Context I got a call from the hospital. My doctor told me to stop everything I'm doing and come to her. Although I was nervous, I tried to drive calmly. <missing_sentence> The doctor diagnosed me with leukemia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "XLM-ProphetNet large completed story",
"sec_num": null
},
{
"text": "The front desk worker sent me to an office.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "missing sentence",
"sec_num": null
},
{
"text": "However, my blood.ItItMy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unperturbed",
"sec_num": null
},
{
"text": "Perturbed 0 However, the car..",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unperturbed",
"sec_num": null
},
{
"text": "My car got stuck... Perturbed 2 ...... Table 7 : An example of emotion-controlled SC with BART large + PPLM-BoW (0.0 <= Valence <= 0.3 and 0.7 <= Arousal <= 1.0). mechanism is required.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Perturbed 1",
"sec_num": null
},
{
"text": "The application of these methods to other datasets is left for future work. As a representative example, the WritingPrompts dataset (Fan et al., 2018) was considered. Stories in WritingPrompts vary in terms of length; therefore, the importance of a single sentence varies from one story to the other. With very long stories, generally trimming is used to retain a predetermined number of words from the start while truncating the rest. Hence, this dataset was not considered to be suitable for the SC tasks for now. Thus, as a starting point, ROCStories was adopted.",
"cite_spans": [
{
"start": 132,
"end": 150,
"text": "(Fan et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Perturbed 1",
"sec_num": null
},
{
"text": "As noted in the Introduction, one of the authors of this study was a professional novelist. This work is a collaborative effort between researchers and a professional creative writer. More precisely, the first author of this paper is a professional Japanese novelist as well as a researcher in the field of story understanding and generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considerations by a Professional Writer",
"sec_num": "7"
},
{
"text": "In Section 6, the viewpoint of the researchers is discussed. In this section, the positioning and prospects of this study are discussed from the novelist's perspective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considerations by a Professional Writer",
"sec_num": "7"
},
{
"text": "In an experiment conducted separately from this study, four professional creative writers were asked to evaluate a creative writing support system. 11 The results of that experiment confirmed that there might be a negative perception of the system's ability to control the output if there are parameters with which the user is not familiar. Although it would be desirable for users to have the freedom to adjust the outcome, too many parameters make them lost. They do not know what to do, resulting in confusion on the user's part in using the system and in a negative impression.",
"cite_spans": [
{
"start": 148,
"end": 150,
"text": "11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Considerations by a Professional Writer",
"sec_num": "7"
},
{
"text": "As previously mentioned, our modified PPLM for controllable SC addressed in this study is difficult to adjust. Moreover, in its current state, users are required to understand what \"valence\" and \"arousal\" mean. We believe that treating both dimensions rather than one dimension (positivenegative) would be important for future directions in this area, but this idea is not yet widespread. Hence, it is difficult for this approach to provide professional writers with the desired results for now. At this point, there was concern that other professional writers would have a negative impression of the \"creative writing support system that controls the emotions of the generated text\" as a whole. That is why no human evaluation was conducted on this study, except by the novelist author.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considerations by a Professional Writer",
"sec_num": "7"
},
{
"text": "For practitioners, the extent to which AI could replace their own work is an important issue; there is also concern that it could trigger a sense of avoidance toward AI. Prudence is needed in conducting research, and professional evaluations, which are important topics of discussion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considerations by a Professional Writer",
"sec_num": "7"
},
{
"text": "Some professional novelists write from beginning to end in order, while others come up with certain parts but cannot come up with the correct sentences to fill in the gaps. SC is an important task in helping the latter. From the creative writer's perspective, it is helpful to have a system that understands the meaning of one's own writing and then fills in the missing parts. Furthermore, as the importance of the emotional arc in a story becomes increasingly apparent, a system that controls the output of the emotions desired by the user as well as an evaluation index that considers emotions would be helpful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Considerations by a Professional Writer",
"sec_num": "7"
},
{
"text": "In this study, the SC task was considered for various emotions. Previous studies on emotion-aware story generation have restricted emotions to one dimension (positive-negative) or categorical ones. Our aim was to control more diverse emotions, so the issue of two-dimensional control was addressed based on Russell's circumplex model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Our implementation made it possible to control SC using PPLM. This expands the application of PPLM, which was originally limited to the task of \"generating the continuation of a prompt.\" Although the goal of controlling emotions was accomplished, it was difficult to adjust the parameters. Whether this difficulty in coordination can be improved through innovative implementation or demands a completely different approach requires further examination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "The original story in this figure is from ROCStories (storyid: 0bb3f8b6-117c-45d0-861f-d9953ccc7ddb; storytitle: Dancing).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://pytorch.org/ 3 We used XLM-ProphetNet because only \"uncased\" models of ProphetNet were available for pretrained models. Hence, XLM-ProphetNet, specifically, \"microsoft/xprophetnet-large-wiki100-cased,\" which is a cased version, was used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We plan to make our code publicly available at https://github.com/mil-tokyo/ controllable-story-completion-pilot-study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Experimental Setup4.1 DatasetIn this pilot study, the proposed method was trained and evaluated using ROCStories(Mostafazadeh et al., 2016). As shown inTable 2, the dataset was randomly split in a ratio of 8:1:1 to obtain training, development, and test sets. One sentence was removed from the five-sentence story. The missing position k was randomly determined based on a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "There is no generic parameter for the \"summarization task\" for PEGASUS, so the parameter for summarization of the XSUM dataset was used.6 https://github.com/Tiiiger/bert_ score 7 https://github.com/google-research/ bleurt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/huggingface/ datasets 9 https://github.com/Tiiiger/bert_ score/blob/master/example/Demo.ipynb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The details of the human evaluation consist the part of the doctoral dissertation of the first author. The dissertation will be publicly available in the UTokyo Repository, https: //repository.dl.itc.u-tokyo.ac.jp/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Yusuke Mukuta for the helpful discussions. This work was partially supported by JST AIP Acceleration Research JPMJCR20U3, Moonshot R&D Grant Number JPMJPS2011, JSPS KAKENHI Grant Number JP19H01115, and JP20H05556 and Basic Research Grant (Super AI) of Institute for AI and Beyond of the University of Tokyo.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Emonet: Fine-grained emotion detection with gated recurrent neural networks",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Abdul",
"suffix": ""
},
{
"first": "-Mageed",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhammad Abdul-Mageed and Lyle Ungar. 2017. Emonet: Fine-grained emotion detection with gated recurrent neural networks. In Proceedings of the 55th",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "718--728",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 718-728, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Emotion Thesaurus: A Writer's Guide to Character Expression",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Ackerman",
"suffix": ""
},
{
"first": "Becca",
"middle": [],
"last": "Puglisi",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela Ackerman and Becca Puglisi. 2012. The Emo- tion Thesaurus: A Writer's Guide to Character Ex- pression. JADD Publishing.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Computer assisted modeling of affective tone in written documents",
"authors": [
{
"first": "C",
"middle": [
"W"
],
"last": "Anderson",
"suffix": ""
},
{
"first": "G",
"middle": [
"E"
],
"last": "Mcmaster",
"suffix": ""
}
],
"year": 1982,
"venue": "Computers and the Humanities",
"volume": "16",
"issue": "1",
"pages": "1--9",
"other_ids": {
"DOI": [
"10.1007/BF02259727"
]
},
"num": null,
"urls": [],
"raw_text": "C. W. Anderson and G. E. McMaster. 1982. Computer assisted modeling of affective tone in written docu- ments. Computers and the Humanities, 16(1):1-9.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with im- proved correlation with human judgments. In Pro- ceedings of the ACL Workshop on Intrinsic and Ex- trinsic Evaluation Measures for Machine Transla- tion and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Modeling protagonist emotions for emotion-aware storytelling",
"authors": [
{
"first": "Faeze",
"middle": [],
"last": "Brahman",
"suffix": ""
},
{
"first": "Snigdha",
"middle": [],
"last": "Chaturvedi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faeze Brahman and Snigdha Chaturvedi. 2020. Mod- eling protagonist emotions for emotion-aware story- telling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "my way of telling a story\": Persona based grounded story generation",
"authors": [
{
"first": "Khyathi",
"middle": [],
"last": "Chandu",
"suffix": ""
},
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Second Workshop on Storytelling",
"volume": "",
"issue": "",
"pages": "11--21",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3402"
]
},
"num": null,
"urls": [],
"raw_text": "Khyathi Chandu, Shrimai Prabhumoye, Ruslan Salakhutdinov, and Alan W Black. 2019. \"my way of telling a story\": Persona based grounded story generation. In Proceedings of the Second Workshop on Storytelling, pages 11-21, Florence, Italy. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Plug and play language models: A simple approach to controlled text generation",
"authors": [
{
"first": "Sumanth",
"middle": [],
"last": "Dathathri",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Madotto",
"suffix": ""
},
{
"first": "Janice",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jane",
"middle": [],
"last": "Hung",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Piero",
"middle": [],
"last": "Molino",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Yosinski",
"suffix": ""
},
{
"first": "Rosanne",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In International Conference on Learning Representa- tions.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Enabling language models to fill in the blanks",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Mina",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2492--2501",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.225"
]
},
"num": null,
"urls": [],
"raw_text": "Chris Donahue, Mina Lee, and Percy Liang. 2020. En- abling language models to fill in the blanks. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2492- 2501, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Hierarchical neural story generation",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "889--898",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Screenwriter's Workbook, Revised Edition",
"authors": [
{
"first": "Syd",
"middle": [],
"last": "Field",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Syd Field. 2006. The Screenwriter's Workbook, Revised Edition. Delta Trade Paperbacks.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A knowledge-enhanced pretraining model for commonsense story generation",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Zhihao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "93--108",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00302"
]
},
"num": null,
"urls": [],
"raw_text": "Jian Guan, Fei Huang, Zhihao Zhao, Xiaoyan Zhu, and Minlie Huang. 2020. A knowledge-enhanced pre- training model for commonsense story generation. Transactions of the Association for Computational Linguistics, 8:93-108.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Narrative universals, heroic tragi-comedy, and shakespeare's political ambivalence",
"authors": [
{
"first": "Hogan",
"middle": [],
"last": "Patrick Colm",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "33",
"issue": "",
"pages": "34--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Colm Hogan. 2006. Narrative universals, heroic tragi-comedy, and shakespeare's political ambiva- lence. College Literature, 33(1):34-66.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A passion for plot: Prolegomena to affective narratology. symplok\u0113",
"authors": [
{
"first": "Hogan",
"middle": [],
"last": "Patrick Colm",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "18",
"issue": "",
"pages": "65--81",
"other_ids": {
"DOI": [
"http://www.jstor.org/stable/10.5250/symploke.18.1-2.0065"
]
},
"num": null,
"urls": [],
"raw_text": "Patrick Colm Hogan. 2010. A passion for plot: Pro- legomena to affective narratology. symplok\u0113, 18(1- 2):65-81.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Description, explanation, and the meanings of \"narrative",
"authors": [
{
"first": "Hogan",
"middle": [],
"last": "Patrick Colm",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Colm Hogan. 2019. Description, explanation, and the meanings of \"narrative\". Evolutionary Stud- ies in Imaginative Culture, 3:45+. 1, 45, Critical essay.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "INSET: Sentence infilling with INter-SEntential transformer",
"authors": [
{
"first": "Yichen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Oussama",
"middle": [],
"last": "Elachqar",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2502--2515",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.226"
]
},
"num": null,
"urls": [],
"raw_text": "Yichen Huang, Yizhe Zhang, Oussama Elachqar, and Yu Cheng. 2020. INSET: Sentence infilling with INter-SEntential transformer. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 2502-2515, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Unsupervised hierarchical story infilling",
"authors": [
{
"first": "Daphne",
"middle": [],
"last": "Ippolito",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Eck",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Narrative Understanding",
"volume": "",
"issue": "",
"pages": "37--43",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2405"
]
},
"num": null,
"urls": [],
"raw_text": "Daphne Ippolito, David Grangier, Chris Callison-Burch, and Douglas Eck. 2019. Unsupervised hierarchical story infilling. In Proceedings of the First Work- shop on Narrative Understanding, pages 37-43, Min- neapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Emotions, music, and literature., Handbook of emotions",
"authors": [
{
"first": "R",
"middle": [
"N"
],
"last": "Johnson-Laird",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Oatley",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "102--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. N. Johnson-Laird and Keith Oatley. 2008. Emotions, music, and literature., Handbook of emotions, 3rd ed., pages 102-113. The Guilford Press, New York, NY, US.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Ctrl: A conditional transformer language model for controllable generation",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Nitish Shirish Keskar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Lav",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Varshney",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.05858"
]
},
"num": null,
"urls": [],
"raw_text": "Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for control- lable generation. arXiv preprint arXiv:1909.05858.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Who feels what and why? annotation of a literature corpus with semantic roles of emotions",
"authors": [
{
"first": "Evgeny",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1345--1359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evgeny Kim and Roman Klinger. 2018. Who feels what and why? annotation of a literature corpus with se- mantic roles of emotions. In Proceedings of the 27th International Conference on Computational Linguis- tics, pages 1345-1359, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "An analysis of emotion communication channels in fan-fiction: Towards emotional storytelling",
"authors": [
{
"first": "Evgeny",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Second Workshop on Storytelling",
"volume": "",
"issue": "",
"pages": "56--64",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3406"
]
},
"num": null,
"urls": [],
"raw_text": "Evgeny Kim and Roman Klinger. 2019a. An analysis of emotion communication channels in fan-fiction: Towards emotional storytelling. In Proceedings of the Second Workshop on Storytelling, pages 56-64, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Frowning Frodo, wincing Leia, and a seriously great friendship: Learning to classify emotional relationships of fictional characters",
"authors": [
{
"first": "Evgeny",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "647--653",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1067"
]
},
"num": null,
"urls": [],
"raw_text": "Evgeny Kim and Roman Klinger. 2019b. Frowning Frodo, wincing Leia, and a seriously great friend- ship: Learning to classify emotional relationships of fictional characters. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 647-653, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Shafiq Joty, richard socher, and Nazneen Rajani. 2021. Gedi: Generative discriminator guided sequence generation",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Akhilesh",
"middle": [],
"last": "Deepak Gotmare",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, richard socher, and Nazneen Rajani. 2021. Gedi: Generative discrimina- tor guided sequence generation.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and com- prehension. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Confer- ence on Learning Representations.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Carolina Islas Sedano, Helmut Hlavacs, and Calkin Suero Montero",
"authors": [
{
"first": "Artur",
"middle": [],
"last": "Lugmayr",
"suffix": ""
},
{
"first": "Erkki",
"middle": [],
"last": "Sutinen",
"suffix": ""
},
{
"first": "Jarkko",
"middle": [],
"last": "Suhonen",
"suffix": ""
}
],
"year": 2017,
"venue": "Multimedia Tools and Applications",
"volume": "76",
"issue": "",
"pages": "15707--15733",
"other_ids": {
"DOI": [
"10.1007/s11042-016-3865-5"
]
},
"num": null,
"urls": [],
"raw_text": "Artur Lugmayr, Erkki Sutinen, Jarkko Suhonen, Car- olina Islas Sedano, Helmut Hlavacs, and Calkin Suero Montero. 2017. Serious storytelling -a first definition and review. Multimedia Tools and Appli- cations, 76:15707-15733.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning to control the fine-grained sentiment for story ending generation",
"authors": [
{
"first": "Fuli",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Damai",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tianyu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1603"
]
},
"num": null,
"urls": [],
"raw_text": "Fuli Luo, Damai Dai, Pengcheng Yang, Tianyu Liu, Baobao Chang, Zhifang Sui, and Xu Sun. 2019. Learning to control the fine-grained sentiment for story ending generation. In Proceedings of the 57th",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "6020--6026",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 6020-6026, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Obtaining reliable human ratings of valence, arousal, and dominance for",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "20",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1017"
]
},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad. 2018. Obtaining reliable human rat- ings of valence, arousal, and dominance for 20,000",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "English words",
"authors": [],
"year": null,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "174--184",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1017"
]
},
"num": null,
"urls": [],
"raw_text": "English words. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 174-184, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Finding and generating a missing part for story completion",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Mori",
"suffix": ""
},
{
"first": "Hiroaki",
"middle": [],
"last": "Yamane",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Mukuta",
"suffix": ""
},
{
"first": "Tatsuya",
"middle": [],
"last": "Harada",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the The 4th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature",
"volume": "",
"issue": "",
"pages": "156--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Mori, Hiroaki Yamane, Yusuke Mukuta, and Tatsuya Harada. 2020. Finding and generating a missing part for story completion. In Proceedings of the The 4th Joint SIGHUM Workshop on Com- putational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 156-166, Online. International Committee on Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A corpus and cloze evaluation for deeper understanding of commonsense stories",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Pushmeet",
"middle": [],
"last": "Kohli",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "839--849",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839-849, San Diego, California. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Plug & play generative networks: Conditional iterative generation of images in latent space",
"authors": [
{
"first": "Anh",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Clune",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Dosovitskiy",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Yosinski",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anh Nguyen, Jeff Clune, Yoshua Bengio, Alexey Doso- vitskiy, and Jason Yosinski. 2017. Plug & play gen- erative networks: Conditional iterative generation of images in latent space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition. IEEE.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Introduction: morsels and modules: on embodying cognition in shakespeare's plays (1)",
"authors": [
{
"first": "Lalita",
"middle": [],
"last": "Pandit",
"suffix": ""
},
{
"first": "Patrick Colm",
"middle": [],
"last": "Hogan",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "33",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lalita Pandit and Patrick Colm Hogan. 2006. Introduc- tion: morsels and modules: on embodying cognition in shakespeare's plays (1). College Literature, 33:1+. 1, Article.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "BLEU: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Com- putational Linguistics, pages 311-318, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelz- imer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training",
"authors": [
{
"first": "Weizhen",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Yeyun",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Dayiheng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Jiusheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ruofei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. 2020. Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing, Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Back to the future: Unsupervised backprop-based decoding for counterfactual and abductive commonsense reasoning",
"authors": [
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "West",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Jena",
"middle": [
"D"
],
"last": "Hwang",
"suffix": ""
},
{
"first": "Ronan",
"middle": [
"Le"
],
"last": "Bras",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "794--805",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.58"
]
},
"num": null,
"urls": [],
"raw_text": "Lianhui Qin, Vered Shwartz, Peter West, Chandra Bha- gavatula, Jena D. Hwang, Ronan Le Bras, Antoine Bosselut, and Yejin Choi. 2020. Back to the future: Unsupervised backprop-based decoding for counter- factual and abductive commonsense reasoning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 794-805, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "140",
"pages": "1--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Plotmachines: Outlineconditioned generation with dynamic plot state tracking",
"authors": [
{
"first": "Asli",
"middle": [],
"last": "Hannah Rashkin",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hannah Rashkin, Asli Celikyilmaz, Yejin Choi, and Jianfeng Gao. 2020. Plotmachines: Outline- conditioned generation with dynamic plot state track- ing.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "The emotional arcs of stories are dominated by six basic shapes",
"authors": [
{
"first": "Andrew",
"middle": [
"J"
],
"last": "Reagan",
"suffix": ""
},
{
"first": "Lewis",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Dilan",
"middle": [],
"last": "Kiley",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"M"
],
"last": "Danforth",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"Sheridan"
],
"last": "Dodds",
"suffix": ""
}
],
"year": 2016,
"venue": "EPJ Data Science",
"volume": "5",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1140/epjds/s13688-016-0093-1"
]
},
"num": null,
"urls": [],
"raw_text": "Andrew J. Reagan, Lewis Mitchell, Dilan Kiley, Christo- pher M. Danforth, and Peter Sheridan Dodds. 2016. The emotional arcs of stories are dominated by six basic shapes. EPJ Data Science, 5(1):31.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "A circumplex model of affect",
"authors": [
{
"first": "James",
"middle": [
"A"
],
"last": "Russell",
"suffix": ""
}
],
"year": 1980,
"venue": "Journal of personality and social psychology",
"volume": "39",
"issue": "",
"pages": "1161--1178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James A. Russell. 1980. A circumplex model of af- fect. Journal of personality and social psychology, 39:1161-1178.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "BLEURT: Learning robust metrics for text generation",
"authors": [
{
"first": "Thibault",
"middle": [],
"last": "Sellam",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7881--7892",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.704"
]
},
"num": null,
"urls": [],
"raw_text": "Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text genera- tion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881-7892, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Megatron-lm: Training multi-billion parameter language models using model parallelism",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Shoeybi",
"suffix": ""
},
{
"first": "Mostofa",
"middle": [],
"last": "Patwary",
"suffix": ""
},
{
"first": "Raul",
"middle": [],
"last": "Puri",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Legresley",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Casper",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Catanzaro",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catan- zaro. 2020. Megatron-lm: Training multi-billion parameter language models using model parallelism.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "SAVE THE CAT! The Last Book on Screenwriting You'll Ever Need",
"authors": [
{
"first": "Blake",
"middle": [],
"last": "Snyder",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blake Snyder. 2005. SAVE THE CAT! The Last Book on Screenwriting You'll Ever Need. Michael Wiese Productions.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Learning to identify emotions in text",
"authors": [
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 ACM Symposium on Applied Computing, SAC '08",
"volume": "",
"issue": "",
"pages": "1556--1560",
"other_ids": {
"DOI": [
"10.1145/1363686.1364052"
]
},
"num": null,
"urls": [],
"raw_text": "Carlo Strapparava and Rada Mihalcea. 2008. Learning to identify emotions in text. In Proceedings of the 2008 ACM Symposium on Applied Computing, SAC '08, pages 1556-1560, New York, NY, USA. ACM.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Kurt vonnegut on the shapes of stories",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Vonnegut",
"suffix": ""
}
],
"year": 1995,
"venue": "watch?v=oP3c1h8v2ZQ. Video. Accessed: October 17",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kurt Vonnegut. 1995. Kurt vonnegut on the shapes of stories. https://www.youtube.com/ watch?v=oP3c1h8v2ZQ. Video. Accessed: Oc- tober 17, 2020.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Narrative interpolation for generating and understanding stories",
"authors": [
{
"first": "Su",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su Wang, Greg Durrett, and Katrin Erk. 2020. Narra- tive interpolation for generating and understanding stories.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "T-CVAE: Transformer-based conditioned variational autoencoder for story completion",
"authors": [
{
"first": "Tianming",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "5233--5239",
"other_ids": {
"DOI": [
"10.24963/ijcai.2019/727"
]
},
"num": null,
"urls": [],
"raw_text": "Tianming Wang and Xiaojun Wan. 2019. T-CVAE: Transformer-based conditioned variational autoen- coder for story completion. In Proceedings of the Twenty-Eighth International Joint Conference on Ar- tificial Intelligence, pages 5233-5239. International Joint Conferences on Artificial Intelligence Organi- zation.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Megatron-cntrl: Controllable story generation with external knowledge using large-scale language models",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Mostofa",
"middle": [],
"last": "Patwary",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Shoeybi",
"suffix": ""
},
{
"first": "Raul",
"middle": [],
"last": "Puri",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Anima",
"middle": [],
"last": "Anandkumar",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Catanzaro",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Raul Puri, Pascale Fung, Anima Anandkumar, and Bryan Catanzaro. 2020. Megatron-cntrl: Controllable story generation with external knowledge using large-scale language models. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing, Punta Cana, Dominican Republic. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Systematic evaluation of a framework for unsupervised emotion recognition for narrative text",
"authors": [
{
"first": "Samira",
"middle": [],
"last": "Zad",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Finlayson",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events",
"volume": "",
"issue": "",
"pages": "26--37",
"other_ids": {
"DOI": [
"10.18653/v1/2020.nuse-1.4"
]
},
"num": null,
"urls": [],
"raw_text": "Samira Zad and Mark Finlayson. 2020. Systematic evaluation of a framework for unsupervised emotion recognition for narrative text. In Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events, pages 26-37, Online. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Or Sharir, and Barak Peleg. 2020. Technical report: Auxiliary tuning and its application to conditional text generation",
"authors": [
{
"first": "Yoel",
"middle": [],
"last": "Zeldes",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Padnos",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoel Zeldes, Dan Padnos, Or Sharir, and Barak Peleg. 2020. Technical report: Auxiliary tuning and its application to conditional text generation.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Side-tuning: A baseline for network adaptation via additive side networks",
"authors": [
{
"first": "Jeffrey",
"middle": [
"O"
],
"last": "Zhang",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Sax",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zamir",
"suffix": ""
},
{
"first": "Leonidas",
"middle": [],
"last": "Guibas",
"suffix": ""
},
{
"first": "Jitendra",
"middle": [],
"last": "Malik",
"suffix": ""
}
],
"year": 2020,
"venue": "Computer Vision -ECCV 2020",
"volume": "",
"issue": "",
"pages": "698--714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey O. Zhang, Alexander Sax, Amir Zamir, Leonidas Guibas, and Jitendra Malik. 2020a. Side-tuning: A baseline for network adaptation via additive side net- works. In Computer Vision -ECCV 2020, pages 698-714, Cham. Springer International Publishing.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization",
"authors": [
{
"first": "Jingqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 37th International Conference on Machine Learning",
"volume": "119",
"issue": "",
"pages": "11328--11339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter Liu. 2020b. PEGASUS: Pre-training with ex- tracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328-11339. PMLR.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Bertscore: Evaluating text generation with bert",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "*",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "*",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "*",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- uating text generation with bert. In International Conference on Learning Representations.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">(excited)</td></tr><tr><td>\u2461</td><td/><td/><td/><td>Astonished</td><td>Excited</td></tr><tr><td/><td/><td colspan=\"2\">Frustrated Angry Afraid Alarmed Tense</td><td>Aroused</td><td>Happy Delighted</td></tr><tr><td/><td/><td colspan=\"2\">Annoyed</td><td/></tr><tr><td/><td/><td>Distressed</td><td/><td/><td>Glad</td></tr><tr><td/><td/><td/><td/><td/><td>Pleased</td></tr><tr><td>\u2460</td><td>(negative)</td><td>Miserable</td><td/><td/><td>Content</td><td>(positive)</td></tr><tr><td/><td/><td>Depressed Sad</td><td/><td colspan=\"2\">Satisfied At ease Serene Calm</td></tr><tr><td/><td/><td>Gloomy</td><td/><td/><td>Relaxed</td></tr><tr><td/><td/><td>Bored</td><td>Droopy</td><td>Sleepy Tired</td></tr><tr><td/><td/><td/><td colspan=\"2\">(calm)</td></tr><tr><td/><td colspan=\"2\">High Arousal, Negative Valence</td><td colspan=\"3\">High Arousal, Positive Valence</td></tr></table>",
"type_str": "table",
"text": "",
"num": null,
"html": null
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"text": "Details of pre-trained models. The Seq2SeqLM in this study consists of encoders and decoders, both having the same number of layers, as indicated in the table for each.",
"num": null,
"html": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"text": "Overview of the dataset used.",
"num": null,
"html": null
},
"TABREF6": {
"content": "<table><tr><td/><td>BLEU</td><td>generated length</td><td>runtime</td><td>samples/sec</td></tr><tr><td>BART base w/ specific param BART large w/ specific param PEGASUS large w/ specific param T5 base w/ specific param T5 large w/ specific param XLM-ProphetNet large w/ specific param</td><td>5.3528 7.3907 5.4014 4.3901 6.2494 0.1163</td><td>14.5 15.0 13.6 14.9 14.7 10.8</td><td>344.5440 546.4531 890.2809 595.7259 1031.0659 960.6619</td><td>-0.003 -0.002 -0.001 -0.002 -0.001 -0.001</td></tr><tr><td>BART base BART large 20220410_003_pegasus_large T5 base T5 large XLM-ProphetNet large 10</td><td>5.3528 7.3907 5.4014 2.3308 2.3327 0.0716</td><td>14.5 15.0 13.6 13.8 13.6 9.0</td><td>352.5765 556.1080 893.2609 487.8538 866.5806 11589.1036</td><td>-0.003 -0.002 -0.001 -0.002 -0.001 -0.000</td></tr></table>",
"type_str": "table",
"text": "The result of no-emotion-aware Seq2SeqLMs evaluated with automatic evaluation metrics.",
"num": null,
"html": null
}
}
}
}