{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:05:52.021100Z"
},
"title": "Plug-and-Blend: A Framework for Controllable Story Generation with Blended Control Codes",
"authors": [
{
"first": "Zhiyu",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Georgia Institute of Technology North Ave NW",
"location": {
"postCode": "30332",
"settlement": "Atlanta",
"region": "GA"
}
},
"email": "zhiyulin@gatech.edu"
},
{
"first": "Mark",
"middle": [
"O"
],
"last": "Riedl",
"suffix": "",
"affiliation": {},
"email": "riedl@cc.gatech.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe a Plug-and-Play controllable language generation framework, Plug-and-Blend, that allows a human user to input multiple control codes (topics). In the context of automated story generation, this allows a human user loose or fine grained control of the topics that will appear in the generated story, and can even allow for overlapping, blended topics. We show that our framework, working with different generation models, controls the generation towards given continuous-weighted control codes while keeping the generated sentences fluent, demonstrating strong blending capability. Blending generative model Planner Decoder Generative Language Model Control Model John realized that basketballs fall to the ground like apples Apple Line Context Control codes 3 John was playing basketball 70% sports 30% science",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe a Plug-and-Play controllable language generation framework, Plug-and-Blend, that allows a human user to input multiple control codes (topics). In the context of automated story generation, this allows a human user loose or fine grained control of the topics that will appear in the generated story, and can even allow for overlapping, blended topics. We show that our framework, working with different generation models, controls the generation towards given continuous-weighted control codes while keeping the generated sentences fluent, demonstrating strong blending capability. Blending generative model Planner Decoder Generative Language Model Control Model John realized that basketballs fall to the ground like apples Apple Line Context Control codes 3 John was playing basketball 70% sports 30% science",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent advancement in very large pre-trained neural language models (e.g. (Radford et al., 2019; Brown et al., 2020) ) have enabled a new generation of applications that make use of the text generation capability they provide, ranging from autocompletion of e-mails to solving complicated math equations. However these very large pre-trained neural language models are also difficult to control beyond providing a prompt for a generated continuation. This makes very large language models ill-suited for co-creative tasks wherein a human works with a language model in an iterative fashion to produce novel content, such as stories or poems. Co-creative tasks require an ability to not only prompt the language model but to guide the generation with, for example, style, context, or topic constraints.",
"cite_spans": [
{
"start": 74,
"end": 96,
"text": "(Radford et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 97,
"end": 116,
"text": "Brown et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Conditional generation is a family of text generation methods that attempt to provide controllability by either directly modifying the model to accept control signals or posing constraints in the generation process. Conditional text generation techniques add an extra input feature (Ficler and Goldberg, 2017) and fine-tuning with additional information embedded (Fang et al., 2021; Hosseini-Asl et al., 2020; Keskar et al., 2019; Khalifa et al., 2020; Hu et al., 2017; Wu et al., 2020; Ficler and Goldberg, 2017; Chan et al., 2020) , or by sideloading additional discriminators along with a pre-trained model, without changing base model parameters holisticly (Dathathri et al., 2020; Madotto et al., 2020; Duan et al., 2020; Mai et al., 2020) .",
"cite_spans": [
{
"start": 282,
"end": 309,
"text": "(Ficler and Goldberg, 2017)",
"ref_id": "BIBREF6"
},
{
"start": 363,
"end": 382,
"text": "(Fang et al., 2021;",
"ref_id": "BIBREF5"
},
{
"start": 383,
"end": 409,
"text": "Hosseini-Asl et al., 2020;",
"ref_id": "BIBREF8"
},
{
"start": 410,
"end": 430,
"text": "Keskar et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 431,
"end": 452,
"text": "Khalifa et al., 2020;",
"ref_id": null
},
{
"start": 453,
"end": 469,
"text": "Hu et al., 2017;",
"ref_id": "BIBREF9"
},
{
"start": 470,
"end": 486,
"text": "Wu et al., 2020;",
"ref_id": "BIBREF8"
},
{
"start": 487,
"end": 513,
"text": "Ficler and Goldberg, 2017;",
"ref_id": "BIBREF6"
},
{
"start": 514,
"end": 532,
"text": "Chan et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 661,
"end": 685,
"text": "(Dathathri et al., 2020;",
"ref_id": "BIBREF3"
},
{
"start": 686,
"end": 707,
"text": "Madotto et al., 2020;",
"ref_id": "BIBREF3"
},
{
"start": 708,
"end": 726,
"text": "Duan et al., 2020;",
"ref_id": "BIBREF4"
},
{
"start": 727,
"end": 744,
"text": "Mai et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We seek \"plug-and-play\" approaches to controllable text generation wherein new language models can be slotted into existing generative systems; new language models are being developed and it becomes intractable to update and retrain controlled generation architectures. Plug-and-play techniques such as (Krause et al., 2020; Pascual et al., 2020) aim to only intervene with the outputs-a vector of logits-of a generative language model. This becomes especially important as the latest iteration of very large pre-trained language models such as GPT-3 (Brown et al., 2020) restrict access to the hidden states and layer weights of models. As language models improve, they can be easily incorporated into existing, controllable generation frameworks.",
"cite_spans": [
{
"start": 303,
"end": 324,
"text": "(Krause et al., 2020;",
"ref_id": null
},
{
"start": 325,
"end": 346,
"text": "Pascual et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 551,
"end": 571,
"text": "(Brown et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present Plug-and-Blend 1 , an efficient plugand-play generative framework for controllable text generation that (a) works with the logit outputs of any language model; (b) facilitates fine control of generated sentences by allowing continuous bias towards specific control codes; and (c) allows multiple control codes representing style and topic constraints to be provided in overlapping contexts. These control codes can be blended together to generate content that meets multiple style or topic constraints. We describe that these key capabilities empower latent space walking in the hyperspace of generated sentences, and show a simple content planning technique that utilizes this feature to generate paragraphs regarding user intentions in a co-authoring. We present our work in the context 1 Code available at https://github.com/ xxbidiao/plug-and-blend 10 sentence story Topic: sports, lines 1-5 Topic: science, lines 5-10 User Figure 1 : Illustration of overall architecture of our framework of automated story generation wherein a human author provides a prompt as well as a high-level control specification for topics.",
"cite_spans": [
{
"start": 800,
"end": 801,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 939,
"end": 947,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Researchers aim for \"plug-and-play\" (PnP) frameworks (Dathathri et al., 2020) which can be used along an existing generative LM (referred to as the \"base LM\") with minimum or no interference between the PnP components and the base LM.",
"cite_spans": [
{
"start": 53,
"end": 77,
"text": "(Dathathri et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Plug-and-Play Conditional Generation",
"sec_num": "2.1"
},
{
"text": "Comparing to non-plug-and-play methods (\"white-box\" approaches), these frameworks can be roughly classified into three categories. Graybox approaches access and modify some non-inputoutput layer computations, usually the hidden representation, hence \"plugging\" an additional model in the middle of the base LM (Dathathri et al., 2020; Madotto et al., 2020; Duan et al., 2020; Mai et al., 2020) . Black-box approaches including \"Prompt Engineering\" that aim to change the prompts fed into the base LM at inference time (Wallace et al., 2019; Li and Liang, 2021) . Guided generation targets at building a controllable \"guiding\" model that shifts the output from base LM at inference time (Krause et al., 2020; Pascual et al., 2020) .",
"cite_spans": [
{
"start": 310,
"end": 334,
"text": "(Dathathri et al., 2020;",
"ref_id": "BIBREF3"
},
{
"start": 335,
"end": 356,
"text": "Madotto et al., 2020;",
"ref_id": "BIBREF3"
},
{
"start": 357,
"end": 375,
"text": "Duan et al., 2020;",
"ref_id": "BIBREF4"
},
{
"start": 376,
"end": 393,
"text": "Mai et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 518,
"end": 540,
"text": "(Wallace et al., 2019;",
"ref_id": "BIBREF20"
},
{
"start": 541,
"end": 560,
"text": "Li and Liang, 2021)",
"ref_id": "BIBREF13"
},
{
"start": 686,
"end": 707,
"text": "(Krause et al., 2020;",
"ref_id": null
},
{
"start": 708,
"end": 729,
"text": "Pascual et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Plug-and-Play Conditional Generation",
"sec_num": "2.1"
},
{
"text": "The generation model we propose is an extension of GeDi (Krause et al., 2020) . Adding to the complete decoupling of generation and controlling, we enhanced it with additional capabilities to support multi-topic generation with continuous weighting, supporting the downstreaming applications while keeping its capability to transfer to different base LMs.",
"cite_spans": [
{
"start": 56,
"end": 77,
"text": "(Krause et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Plug-and-Play Conditional Generation",
"sec_num": "2.1"
},
{
"text": "Neural story generation systems train or fine-tune a language model on story data. Sampling from a language model trained on story data tends to result in text output that looks like stories as well. However, sampling from P \u03b8 (x t |x <t ) (See Section 3) is uncontrolled in the sense that one does not have any influence over the output after the initial context prompt.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Controllable Story Generation",
"sec_num": "2.2"
},
{
"text": "A number of story generation systems have attempted to condition the generation with some form of high-level plan. Storytelling systems such as (Akoury et al., 2020; Yao et al., 2019) embeds topic constraints directly into the model. These system extract a set of topics from a dataset that must be incorporated into the story. PlotMachines (Rashkin et al., 2020) allows a human user to specify topics that can be incorporated into a story in any order. generate a story by interpolating between a start event and an end event in a slot filling fashion, targeted the same goal. Our work differs in two ways. First, we allow blending of topics such that a single line in a story can meet more than one topic provided by a human user. Second, we have developed a black-box plug-and-play system that works with different LMs.",
"cite_spans": [
{
"start": 144,
"end": 165,
"text": "(Akoury et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 166,
"end": 183,
"text": "Yao et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Controllable Story Generation",
"sec_num": "2.2"
},
{
"text": "Generative Language Models (LMs), specifically continuation models, take a context (\"prompt\") and generate a continuation by predicting the next tokens. This is achieved by optimizing the model parameters \u03b8 that best estimates the probability density of a sequence of word tokens x 1:T = {x 1 , . . . , x T } represented as an auto-regressive factorization",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P \u03b8 (x 1:T ) = T t=1 P \u03b8 (x t | x <t ) .",
"eq_num": "(1)"
}
],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "By iteratively predicting a distribution on the next token given the previous tokens, a continuation can be generated by repeatedly sampling",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "P \u03b8 (x t | x <t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "and attach the selected token back to the \"previous\" tokens for the next step. Sequences generated this way are not controlled; To control the generated sequence, an attribute represented as a class variable (Keskar et al., 2019) that could describe sentiment or topics can be introduced to equation 1to form a Class-Conditional Language Model (CC-LM):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P \u03b8 (x 1:T | c) = T t=1 P \u03b8 (x t | x <t , c)",
"eq_num": "(2)"
}
],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "where c represents the class variable, or \"control code\", that describes an attribute of the sequence x 1:T . However, since c and x 1:T are entangled in equation 2, naively optimizing P \u03b8 requires a new CC-LM to be trained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "To decouple the conditional generation component, c, from the unconditional part, P LM (x 1:T ), (Krause et al., 2020) proposed the GeDi framework and an algorithm to enable a separate controlling model to guide the generation process of a base language model. Instead of tackling P \u03b8 (x 1:T | c) directly, they train a contrastive discriminator model on the side to estimate",
"cite_spans": [
{
"start": 97,
"end": 118,
"text": "(Krause et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "P \u03b8 (c | x 1:t ) = \u03b1P (c) t j=1 P \u03b8 (x j | x <j , c) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "where \u03b1 is the normalization constant \u03b1 = 1/( c \u2208{c,c}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": ", and c and c are contrastive control codes (c and not-c). At the decoding stage of the generation process, one can guide the generation by using P \u03b8 (c | x 1:t ) as a posterior to the output probability distribution of the base LM:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "P (x t | x <t , c) \u221d P LM (x t | x <t ) P \u03b8 (c | x t , x <t ) \u03c9 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "where \u03c9 is a parameter for control strength, with larger values biasing generation more strongly towards c. CC-LMs trained this way do not require access to any internal data of the base LM, and works independently of it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
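{
"text": "To make the guided decoding step of Equation (4) concrete, the following is a minimal sketch in Python, assuming PyTorch tensors; the function name and inputs are illustrative assumptions, not the original implementation:\n\nimport torch\n\ndef guided_step(base_logits, disc_log_probs, omega):\n    # base_logits: [vocab] next-token logits from the base LM.\n    # disc_log_probs: [vocab] log P_theta(c | x_t, x_<t) for every candidate token x_t.\n    # omega: control strength; larger values bias generation more strongly towards c.\n    log_p = torch.log_softmax(base_logits, dim=-1) + omega * disc_log_probs\n    return torch.argmax(log_p)  # greedy decoding over the combined distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},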
{
"text": "Our Plug-and-Blend framework consists of two components (See figure 1): (1) a blending generative Model that is responsible for plug-and-play controlled continuations using the control specifications; and (2) a planner that plans and assigns control specifications based on control sketches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Plug-and-Blend Framework",
"sec_num": "4"
},
{
"text": "A control sketch is a high-level specification of what topics should be present in the story and what portions of the story each topic should approximately appear in. This provides a human co-creator the ability to guide the generator loosely, with a broad range per topic, or tightly, with a narrow range per topic. We envision a co-creative loop wherein the human user provides a control sketch and iteratively updates the control sketch based on generation results, refining the topics and refining the ranges for the topics. The user interface for eliciting control sketches from a human is outside the scope of this paper and experiments about the co-creative loop are left for future work. The next sections provide the algorithmic support for control sketches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Plug-and-Blend Framework",
"sec_num": "4"
},
{
"text": "The blending generative model generates the sentence continuation. It consists of two parts, a (1) plug-and-play language model and (2) a control model. Given a prompt x <t , the plug-andplay language model produces a vector of logits P LM (x t | x <t ). The control model biases the output of the language model toward particular tokens associated with the topics of the control codes c \u2208 C based on the desired strengths of each topic \u03c9 * c\u2208C \u2208 \u2126. Together the two models iteratively find the best token x t that reflects both natural language composition and control bias presented by c and \u03c9. A larger \u03c9 * c means more steering towards the topic represented by control code c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Blending Generative Model",
"sec_num": "4.1"
},
{
"text": "Inspired by the application of generative adversarial networks to latent space walking, we treat P \u03b8 (c | x t , x <t ) (described in section 3) as a heuristic of direction that increases P (x t | x <t , c) in a |V |-dimensional latent space, where V is the language model's vocabulary. For example, consider two different control codes c 1 and c 2 instantiating equation (4). To apply both control codes in the generation process, we use the heuristic",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Blending Generative Model",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (x t | x <t , c 1 , c 2 ) \u221d P LM (x t | x <t ) \u00d7 P \u03b8 (c 1 | x t , x <t ) \u03c9 1 P \u03b8 (c 2 | x t , x <t ) \u03c9 2",
"eq_num": "(5)"
}
],
"section": "Blending Generative Model",
"sec_num": "4.1"
},
{
"text": "to combine the effect of both posterior distributions into one universal posterior. \u03c9 1 and \u03c9 2 in this case represents control strength for each control code, c 1 and c 2 respectively, and can be different, enabling continuous blending between topics. This process can be repeated with a set of control codes C = {c 1 , . . . , c n } with weights \u2126 = {\u03c9 1 , . . . , \u03c9 n }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Blending Generative Model",
"sec_num": "4.1"
},
{
"text": "Formally, at the decoding stage of the generation process, a control model compute controlled probability using the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Blending Generative Model",
"sec_num": "4.1"
},
{
"text": "P (x t | x <t , C) = P LM (x t | x <t ) c * \u2208C P \u03b8 (c * | x t , x <t ) \u03c9 * c (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Blending Generative Model",
"sec_num": "4.1"
},
{
"text": "where the control strengths of individual control codes are normalized with c \u03c9 * c = \u03c9, where \u03c9 is total control strength. 2 This can be efficiently computed by batching input sequences appended by different control codes, with little overhead compared to the original GeDi (Krause et al., 2020) framework.",
"cite_spans": [
{
"start": 275,
"end": 296,
"text": "(Krause et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Blending Generative Model",
"sec_num": "4.1"
},
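{
"text": "A minimal sketch of the blended decoding step in Equation (6), assuming the per-code discriminator log-posteriors have been computed in one batched forward pass; all names are illustrative, not the original implementation:\n\nimport torch\n\ndef blended_step(base_logits, disc_log_probs, weights, total_strength=1.0):\n    # base_logits: [vocab] next-token logits from the base LM.\n    # disc_log_probs: [num_codes, vocab] log P_theta(c* | x_t, x_<t), one row per control code.\n    # weights: [num_codes] raw per-code strengths, normalized below so that they\n    # sum to the total control strength, as described in the text.\n    w = total_strength * weights / weights.sum()\n    log_p = torch.log_softmax(base_logits, dim=-1) + (w[:, None] * disc_log_probs).sum(dim=0)\n    return torch.argmax(log_p)  # greedy decoding over the blended distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Blending Generative Model",
"sec_num": "4.1"
},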
{
"text": "The human user provides a high-level control sketch of the story, consisting of the number of sentences, N , a set of topics, C, and a range of lines to which to apply the topic, r := (s, e) where s \u2264 e. See figure 2 for example sketches. Sketches can have their range r overlap such that multiple topics can be applied to the same lines of the story.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Planner",
"sec_num": "4.2"
},
{
"text": "Given the control sketch, the planner produces a control configuration C n , \u2126 n for each sentence position n = {0, . . . , N \u2212 1}. The control configuration for each sentence is passed to the blending generative model along with previous generated sentences as prompt.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Planner",
"sec_num": "4.2"
},
{
"text": "We interpret a control sketch as story arc on a specific topic, which typically contains a transition, an engagement and a phase-out, the planner should give highest control strength to the midpoint of the area, m := (s + e)/2, and lower strength towards the start and end of the span of the area; We capture this as a Gaussian distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Planner",
"sec_num": "4.2"
},
{
"text": "Formally, the following equation translates the sketch into a control configuration for each position n \u2208 N :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Planner",
"sec_num": "4.2"
},
{
"text": "\u03c9 + c,n = f (N (m, (\u03c3/(e \u2212 s + ) 2 ))(n \u2212 m) (7) where f (\u2022) indicates probability density function,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Planner",
"sec_num": "4.2"
},
{
"text": "is an infinitesimal, and \u03c3 is a tunable parameter representing overall transition smoothness, where higher \u03c3 grants smoother transitions in the cost of reduced topic engagement for midpoint. Since there can be multiple control sketches and they can be of the same control code, we apply each individual sketch in the order they are presented and normalize after each application so that \u03a3 n \u03c9 c,n = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Planner",
"sec_num": "4.2"
},
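{
"text": "A minimal sketch of this planner heuristic, following the reconstructed Equation (7); the function name, the choice to center the Gaussian on the midpoint, and the normalization order are illustrative assumptions:\n\nimport numpy as np\nfrom scipy.stats import norm\n\nEPS = 1e-6  # the infinitesimal epsilon in Equation (7)\n\ndef plan_weights(sketches, num_lines, sigma=1.0):\n    # sketches: list of (code, start, end) tuples; returns {code: per-line weights}.\n    weights = {code: np.zeros(num_lines) for code, _, _ in sketches}\n    n = np.arange(num_lines)\n    for code, s, e in sketches:  # apply sketches in the order they are presented\n        m = (s + e) / 2.0  # the midpoint receives the peak control strength\n        std = sigma / (e - s + EPS)  # transition smoothness, per Equation (7)\n        weights[code] += norm(loc=0.0, scale=std).pdf(n - m)\n        weights[code] /= weights[code].sum()  # normalize after each application\n    return weights",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Planner",
"sec_num": "4.2"
},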
{
"text": "For our experiments, we use the GPT2-large model fine-tuned on ROCStories (Mostafazadeh et al., 2016) as our base language model. Fine-tuning GPT2 on ROCStories results in a model that generates short stories about common everyday situations. We pair the language model with a pretrained GeDi (which in turn is based on GPT-2-medium) trained on AG-news 3 as the guiding model. Across all setups, at generation time, we use greedy decoding with repetition penalty described in Keskar et al. 2019, and only use the first sentence generated as the output, discarding every token after it if any.",
"cite_spans": [
{
"start": 74,
"end": 101,
"text": "(Mostafazadeh et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
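{
"text": "As a sketch of these generation settings using the transformers library (greedy decoding with a repetition penalty, keeping only the first generated sentence); the \"gpt2-large\" checkpoint name is a stand-in for the ROCStories-fine-tuned model:\n\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\n\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2-large\")\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2-large\")  # placeholder checkpoint\n\ndef generate_line(prompt):\n    ids = tokenizer(prompt, return_tensors=\"pt\").input_ids\n    out = model.generate(ids, do_sample=False, max_new_tokens=40,\n                         repetition_penalty=1.2,  # value suggested by Keskar et al. (2019)\n                         pad_token_id=tokenizer.eos_token_id)\n    text = tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=True)\n    return text.split(\".\")[0] + \".\"  # keep only the first sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},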
{
"text": "Since there is no ground truth for any generated sequence, metrics such as BLEU and other n-grambased metrics are not applicable. This poses a unique challenge in evaluating our system, limiting us to unsupervised metrics. In this section, we report evaluation of our blending generative model from two aspects:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "\u2022 Fluency: measuring how our generated sequence forms natural language; and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "\u2022 Control fidelity: measuring how our generated sequence respects the requested control codes and strength.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "To evaluate fluency of sequences generated by our blending generation model, we use perplexity of base language model. The intuition is that if generated sentences have low average perplexity when evaluated by the base LM then they are consistent with sentences we would find in the English language, as represented by the data used to train the base LM. This in turn results in fluent-appearing sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Blending Fluency",
"sec_num": "5.1"
},
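{
"text": "A minimal sketch of computing per-sentence perplexity under the base LM with the transformers library; the checkpoint name is illustrative, and the actual evaluation uses the fine-tuned base LM described in Section 5:\n\nimport torch\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\n\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2-large\")\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2-large\").eval()\n\ndef perplexity(sentence):\n    # Perplexity = exp(mean token-level negative log-likelihood under the base LM).\n    ids = tokenizer(sentence, return_tensors=\"pt\").input_ids\n    with torch.no_grad():\n        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens\n    return torch.exp(loss).item()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Blending Fluency",
"sec_num": "5.1"
},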
{
"text": "To generate sequences from our model, we used 100 sentences from a held-out evaluation set of ROCStories not seen at fine-tuning time. ROC-Stories contains five-sentence stories; we always pick the first sentence. That sentence becomes our prompt and is paired with all possible combinations of two topic choices chosen from \"Business\", \"Science\", \"Sports\", or \"World\". These are the topics that the GeDi model are optimized for. Our control sketch gives equal blending weighting for all topics. We vary the control strength using the following Figure 2 : Perplexity (lower is better) of generated sequences with 2 topics. Baseline performance set at 1x of (Krause et al., 2020)-suggested control strength. increments: [0, 0.5, 1, 1.5, 2, 3, 4]x, where 0 represents an uncontrolled base LM and 4x represents 400% of the control strength hyperparameter used by Krause et al. (2020) . Figure 2 shows the average perplexity of generated sequences, measured by the Base LM. We observe that average perplexity increases with stronger control, signaling a departure of generated sequences from what the base LM would generate, and a potential decrease in fluency. This is to be expected as the control is biasing the generated text more and more toward the use of words that are consistent with a particular topic and away from general word frequency. While perplexity increase is more or less linear in the range of 0 to 2x strength, once above 2x strength, it can be better described as exponential, hinting a stabler capability to generate fluent sentences in the region of 0 to 2x control strength.",
"cite_spans": [
{
"start": 860,
"end": 880,
"text": "Krause et al. (2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 545,
"end": 553,
"text": "Figure 2",
"ref_id": null
},
{
"start": 883,
"end": 891,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Blending Fluency",
"sec_num": "5.1"
},
{
"text": "Control fidelity is how well the generator responds to multiple control codes applied at once (see Krause et al. (2020) for experiments applying one control code at a time; we do not replicate them in this paper). For story generation, multiple control codes can be applied to the same sentence in a story at different weights. We perform experiments in a latent space walking setting, to measure content changes of generated sentences under the same prompt, same control codes but different relative control strength, in an unsupervised way.",
"cite_spans": [
{
"start": 99,
"end": 119,
"text": "Krause et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Control Fidelity",
"sec_num": "5.2"
},
{
"text": "Given a particular prompt line in a story and two control topics c 1 and c 2 , we re-generate the same line multiple times under different control strengths for each topic. Specifically we set \u03c9 c 1 to 0%, 25%, 50%, 75% or 100% and \u03c9 c 2 = 1 \u2212 \u03c9 c 1 to represent a range of different possible blends of topics in the same line. See table 1 for an example. Since we know the control parameters used to generate these sentences, in which c 1 receives more and more control strength relative to c 2 , we expect to see sentences that are increasingly about topic c 1 and decreasingly about topic c 2 . These sentences do not comprise a story sequence, but are different alternative sentences for the same line in a story under different topic control specifications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control Fidelity",
"sec_num": "5.2"
},
{
"text": "To determine whether a given generated sentence was representative of a topic, we score each generated sentence with an off-the-shelf BART-based zero-shot classifier (Wolf et al., 2020) 4 with c 1 and c 2 , in raw text form, as possible classes. We then compare the order of the sentences as determined by the classifier to the ground truth order of increasing control strength of c 1 . We report the correlation of order between these two sequences using Kendall's \u03c4 -a metric. A perfectly strictly increasing classifier score will grant a \u03c4 -a score of 1 for a sequence. If the sentences have some reordering based on classification score, \u03c4 -a is reduced. A score of 0 indicates a random ordering and and a score of \u22121 indicates a sequence that is exactly in opposite order. Table 1 shows the classifier scores for the possible next sentences under different control strengths; the classifier scores are not monotonically decreasing, resulting in a \u03c4 -a score of 0.8. Figure 3 shows a heat-map of the average \u03c4 -a score of sequences of sentences generated with different control code pairs and different total control strength (percentages). For each combination of parameters, 100 sequences of 5 sentences are generated and evaluated. Comparing to the baseline, which is the evaluation metric applied to orderrandomized stories in ROCStories dataset, we observe universal statistical significance (p < .01) in improvement in \u03c4 -a metric. That is, without a control bias, rank ordering is random. As we increase the total control strength, the rank order of generated sentences more closely matches the ground truth order.",
"cite_spans": [
{
"start": 166,
"end": 185,
"text": "(Wolf et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 778,
"end": 785,
"text": "Table 1",
"ref_id": null
},
{
"start": 971,
"end": 979,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Control Fidelity",
"sec_num": "5.2"
},
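{
"text": "A minimal sketch of this scoring procedure, using the Hugging Face zero-shot classification pipeline and a direct pairwise computation of Kendall's \u03c4-a; the helper names are illustrative:\n\nfrom itertools import combinations\nfrom transformers import pipeline\n\nclassifier = pipeline(\"zero-shot-classification\")  # BART-based by default\n\ndef topic_score(sentence, c1, c2):\n    # Probability mass the classifier assigns to topic c1 versus c2.\n    out = classifier(sentence, candidate_labels=[c1, c2])\n    return dict(zip(out[\"labels\"], out[\"scores\"]))[c1]\n\ndef kendall_tau_a(scores):\n    # (concordant - discordant) pairs divided by the total number of pairs.\n    pairs = list(combinations(range(len(scores)), 2))\n    concordant = sum(1 for i, j in pairs if scores[j] > scores[i])\n    discordant = sum(1 for i, j in pairs if scores[j] < scores[i])\n    return (concordant - discordant) / len(pairs)\n\ndef control_fidelity(sentences, c1, c2):\n    # sentences must be ordered by increasing requested strength of c1.\n    return kendall_tau_a([topic_score(s, c1, c2) for s in sentences])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control Fidelity",
"sec_num": "5.2"
},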
{
"text": "Some topic combinations (For example, Science-Sports) work better than others (For example, Science-World); the \"World\" category appears to include a lot of overlapping vocabulary usage with Prompt: The people gathered to protest the court's ruling last week. c1 = Sports c2 = Business Generated Sentence Classifier score \u03c9c 1 \u03c9c 2 c1 c2 100% 0% Coach Leeman was in a wheelchair and had been taken to hospital for treatment. 86% 14%",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control Fidelity",
"sec_num": "5.2"
},
{
"text": "[Table 1: An example sequence of sentences generated for evaluation of control fidelity. The first two columns indicate the requested control strengths for the two topics, Sports and Business. Each generated sentence results from the prompt and the control weights (all numbers are 2x the default control strength). The last two columns indicate the probability that each line is either Sports or Business based on a BART-based topic classifier. We expect to see the classifier score for c_1 decrease as the classifier score for c_2 increases.]\nPrompt: The people gathered to protest the court's ruling last week. (c_1 = Sports, c_2 = Business)\n\u03c9_{c_1} | \u03c9_{c_2} | Generated Sentence | c_1 score | c_2 score\n100% | 0% | Coach Leeman was in a wheelchair and had been taken to hospital for treatment. | 86% | 14%\n75% | 25% | Coach Reebok was one of them. | 65% | 35%\n50% | 50% | The players were joined by a few of them. | 84% | 16%\n25% | 75% | The company that owns the team was fined $1,000 for violating a rule prohibiting employees from using their own equipment. | 37% | 63%\n0% | 100% | Bankruptcy Judge William H. said that the bank had failed to pay its creditors and was in default on $1 billion of loans it owed them. | 24% | 76%\nComparing column 1 with column 4, Kendall's \u03c4-a = 0.8 for this generated sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control Fidelity",
"sec_num": "5.2"
},
{
"text": "the other categories. Note that a perfect Kendall's \u03c4 -a of 1.0 is likely impossible because our zeroshot topic classifier will introduce some noise to the ranking. However, the results show us that the plug-and-blend technique (a) significantly increases the likelihood that topics will be incorporated into sentences, and (b) is sensitive to blended topics. Figure 4 shows the same experiment as above, but with a non-fine-tuned version of GPT2-large. This shows that the plug-and-blend technique works on language models that haven't been finetuned on ROCStories. The prompts are still selected from ROCStories, however, for comparison, but are not as representative of the untuned model. In this condition, the text generated will not read as sentences in stories. We observe similar improvements over the baseline, demonstrating the ability of our method in keeping the strong adaptation capability.",
"cite_spans": [],
"ref_spans": [
{
"start": 360,
"end": 368,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "24% 76%",
"sec_num": null
},
{
"text": "In this section, we qualitatively demonstrate the capability of our pipeline by analyzing the generated paragraphs using simulated user inputs described as sets of control sketches. Table 2 (left column) shows three sets of control sketches with overlapping topic ranges. For example, sketch 1 requests a 10-line story that covers the topic of sports for the first 6 lines and covers the topic of science for the last 6 lines (topics overlap in the middle). For each control sketch we generate 10-line stories (N = 10) using the hyper-parameter \u03c3 = 1 (see Equation 7). We use a neutral prompt consisting of only the word \"Recently\" as the con-text to generate the first line or if the generator ever generates an empty line. The remainder of lines use up to 2 sentences generated for the previous context. Table 2 (right column) shows the generated stories for each control sketch. We bold the sentence where it is most clear that the topic has changed. Figure 5 shows how the heuristic transforms each control sketch into bias weights. The figure shows \u03c9 c 1 for c 1 = Sports showing how the planner decreases the probability density bias for the topic (the probability density for the second topic, \u03c9 c 2 , is the mirror image).",
"cite_spans": [],
"ref_spans": [
{
"start": 182,
"end": 189,
"text": "Table 2",
"ref_id": null
},
{
"start": 806,
"end": 828,
"text": "Table 2 (right column)",
"ref_id": null
},
{
"start": 954,
"end": 962,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Planner Experiments",
"sec_num": "5.3"
},
{
"text": "With slight differences in the input control sketches, we observe very different generated stories, with the transition between sports and science happening later. One can see from Figure 5 why this would be the case: the probability density for the first topic becomes increasingly stronger for the first lines of the story as the control sketch requests the second topic later.",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 189,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Planner Experiments",
"sec_num": "5.3"
},
{
"text": "Because each sentence is biased by the previous sentences in addition to the control sketch, the sentence where the topic appears to switch often comes later than the point of earliest topic overlap. The requirement that each sentence continue the previous context creates a sense of momentum from the previous context and thus from the previous topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Planner Experiments",
"sec_num": "5.3"
},
{
"text": "Incoherent transitions may still happen. In the story in Table 2 for sketch 3 shows one such incoherent transition due to the generation of an endof-text token. Our implementation uses the initial prompt in this case, causing a portion of the story to not be contextualized by the earlier story sentences. Our ROCStories-tuned language model, based on 5-sentence stories, tends to predict end-of-text earlier than models trained on longer stories.",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 64,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Planner Experiments",
"sec_num": "5.3"
},
{
"text": "Our experiments suggest that there is a trade-off between control fidelity and fluency. As Figures 2 and 3 show, a higher total control strength results in overall better \u03c4 -a scores, meaning more sensitivity and ability to correctly differentiate between topic blends, but worse perplexity, risking less fluent language. In practice, an iterative deepening algorithm where multiple control strengths are used to generate multiple candidate sentences per line, can be used. Control strength modifiers of 1x, 2x, 3x, 4x, etc. can be tried and the best generated sentence, as measured by perplexity (or any other task-specific metric), is selected. This can, just like how multiple control codes are handled, be implemented very efficiently. The current planner is heuristic. Empirically we find the heuristic to create good blends. We envision a planner that can be parameterized and learn from demonstrations. Reinforcement learning, in which the context and control sketches work as world states, can choose control configurations as actions. Feedback (reward) from the user would be necessary. This would incorporate the plug-andblend technique into a human-in-the-loop creative Figure 5 : Control strength generated by the planner for the first control code used for each control sketch in Table 2 . The control strength for the second control code is the mirror of each. process wherein the generator learns blending preferences from the human creator (Guzdial et al., 2018) .",
"cite_spans": [
{
"start": 1457,
"end": 1479,
"text": "(Guzdial et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 91,
"end": 107,
"text": "Figures 2 and 3",
"ref_id": "FIGREF1"
},
{
"start": 1182,
"end": 1190,
"text": "Figure 5",
"ref_id": null
},
{
"start": 1294,
"end": 1301,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
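{
"text": "A sketch of the iterative-deepening selection described above, assuming a generate(prompt, strength) function like the blended decoder of Section 4.1 and the perplexity scorer of Section 5.1; both are passed in here and are illustrative:\n\ndef best_line(generate, perplexity, prompt, strengths=(1.0, 2.0, 3.0, 4.0)):\n    # Generate one candidate per control strength modifier and keep the most\n    # fluent one, as measured by base-LM perplexity (lower is better).\n    candidates = [generate(prompt, strength=s) for s in strengths]\n    return min(candidates, key=perplexity)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},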
{
"text": "In this paper we have made no assumptions about how control sketches are acquired from the human user. We envision a processes whereby the human co-creator has an idea of what the final product-in this case a story-should be and can express that vision at the level of how topics manifest through the story. Supporting a co-creative",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "This is not the only way to formalize this heuristic; We found this to be effective and efficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://groups.di.unipi.it/~gulli/AG_ corpus_of_news_articles.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "pipeline(\"zero-shot-classifier\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Generated Paragraph Sketch 1 c1 = Sports, {0 \u2192 5} c2 = Science, {4 \u2192 10} Jackie Robinson was playing in the NBA. He had just been drafted by the Cleveland Cavaliers. He decided to play for a team that was trying to win a championship. \"I'm gonna go out there and beat them,\" he said \"They're going to be great.\" \"Yeah, they're good,\" he said \"they'll be great.\" He added that his wife had been working on the project for about a year. He was happy to see her progress. She was able to get some work done and she started getting more comfortable with it. She had a lot of fun doing it! She also got her own computer, which is now in the living room.Sketch 2 c1 = Sports, 0 \u2192 5 c2 = Science, 5 \u2192 10 Jackie Robinson was playing in the NBA. He had just played basketball for a few years. He was looking forward to his first game of the season. He decided to play with his friends and play against them in the court. He got a lot of feedback from everyone who played against him, including some that were very excited about it! I was really happy when I saw how he played. I also had to admit that my favorite player was the guy who beat me in the finals. The computer game Super Mario 64 is a great game, but it's not perfect. I played it on my laptop and found that I couldn't play it properly because of some bugs. The problem was that the graphics were bad, so I had to use an emulator instead of playing the game.Sketch 3 c1 = Sports, 0 \u2192 5 c2 = Science, 6 \u2192 10 Jackie Robinson was playing in the NBA. He had just played basketball for a few years. He was looking forward to his first game of the season. He decided to play with his friends and play against them in the court. He had a lot of fun playing against them, but he didn't want to lose any time. So he played with his friends for about an hour before going home and playing again. He was very happy when they got home and started playing again! I think it's a good idea to have some fun with your kids, especially if you're not too busy. I'm sure that you'll enjoy this post as much as I did! my daughter was diagnosed with a rare form of cancer. human-AI interaction, the human user can update the control sketch and re-generate parts (or all) of the story by changing the range of topics or choosing different topics. The control model will need to support different topics at different levels of granularity; currently the control model only supports four topics, which is sufficient for conducting experiments to characterize the plug-and-blend technique but not for full co-creativity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control Sketches",
"sec_num": null
},
{
"text": "In this paper, we present Plug-and-Blend, a plugand-play framework that enhances a base LM, enables controllable generation with continuousweighted control codes, along with capability of generating paragraphs based on control sketches, all without access to internal knowledge of this base LM. These capabilities will fuel a new generation of controllable generation applications with the key assets of decoupling between the controllable component and the generative component, and easiness of adapting to new advancements in the field of generative LMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "This material is based upon work supported by the Office of Naval Research (ONR) under Grant #N00014-14-1-0003.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "STO-RIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation",
"authors": [
{
"first": "Nader",
"middle": [],
"last": "Akoury",
"suffix": ""
},
{
"first": "Shufan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Whiting",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Hood",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6470--6484",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.525"
]
},
"num": null,
"urls": [],
"raw_text": "Nader Akoury, Shufan Wang, Josh Whiting, Stephen Hood, Nanyun Peng, and Mohit Iyyer. 2020. STO- RIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6470-6484, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "CoCon: A Self-Supervised Approach for Controlled Text Generation",
"authors": [
{
"first": "Alvin",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Yew-Soon",
"middle": [],
"last": "Ong",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Pung",
"suffix": ""
},
{
"first": "Aston",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.03535[cs].ArXiv:2006.03535"
]
},
"num": null,
"urls": [],
"raw_text": "Alvin Chan, Yew-Soon Ong, Bill Pung, Aston Zhang, and Jie Fu. 2020. CoCon: A Self- Supervised Approach for Controlled Text Genera- tion. arXiv:2006.03535 [cs]. ArXiv: 2006.03535.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Plug and Play Language Models: A Simple Approach to Controlled Text Generation",
"authors": [
{
"first": "Sumanth",
"middle": [],
"last": "Dathathri",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Madotto",
"suffix": ""
},
{
"first": "Janice",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jane",
"middle": [],
"last": "Hung",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Piero",
"middle": [],
"last": "Molino",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Yosinski",
"suffix": ""
},
{
"first": "Rosanne",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and Play Language Mod- els: A Simple Approach to Controlled Text Genera- tion. International Conference on Learning Repre- sentations, (2020). ArXiv: 1912.02164.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jiaxin",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "Jialong",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Chenliang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "253--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Duan, Canwen Xu, Jiaxin Pei, Jialong Han, and Chenliang Li. 2020. Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto- Encoders. Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, (2020):253-262. ArXiv: 1911.03882.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Transformerbased Conditional Variational Autoencoder for Controllable Story Generation",
"authors": [
{
"first": "Le",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Chaochun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Liefeng",
"middle": [],
"last": "Bo",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Changyou",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2101.00828"
]
},
"num": null,
"urls": [],
"raw_text": "Le Fang, Tao Zeng, Chaochun Liu, Liefeng Bo, Wen Dong, and Changyou Chen. 2021. Transformer- based Conditional Variational Autoencoder for Con- trollable Story Generation. arXiv:2101.00828 [cs].",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Controlling Linguistic Style Aspects in Neural Language Generation",
"authors": [
{
"first": "Jessica",
"middle": [],
"last": "Ficler",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Workshop on Stylistic Variation",
"volume": "",
"issue": "",
"pages": "94--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jessica Ficler and Yoav Goldberg. 2017. Controlling Linguistic Style Aspects in Neural Language Gen- eration. Proceedings of the Workshop on Stylistic Variation, (2017):94-104. ArXiv: 1707.02633.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Co-Creative Level Design via Machine Learning",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Guzdial",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Riedl",
"suffix": ""
}
],
"year": 2018,
"venue": "Fifth Experimental AI in Games Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Guzdial, Nicholas Liao, and Mark Riedl. 2018. Co-Creative Level Design via Machine Learn- ing. Fifth Experimental AI in Games Workshop. ArXiv: 1809.09420.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Simple Language Model for Task-Oriented Dialogue",
"authors": [
{
"first": "Ehsan",
"middle": [],
"last": "Hosseini-Asl",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "Chien-Sheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Semih",
"middle": [],
"last": "Yavuz",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A Simple Language Model for Task-Oriented Dialogue. Ad- vances in Neural Information Processing Systems, 33.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Toward Controlled Generation of Text",
"authors": [
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1587--1596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward Con- trolled Generation of Text. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Re- search, pages 1587-1596, International Convention Centre, Sydney, Australia. PMLR.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "CTRL: A Conditional Transformer Language Model for Controllable Generation",
"authors": [
{
"first": "Nitish",
"middle": [
"Shirish"
],
"last": "Keskar",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "McCann",
"suffix": ""
},
{
"first": "Lav",
"middle": [
"R"
],
"last": "Varshney",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.05858"
]
},
"num": null,
"urls": [],
"raw_text": "Nitish Shirish Keskar, Bryan McCann, Lav R. Varsh- ney, Caiming Xiong, and Richard Socher. 2019. CTRL: A Conditional Transformer Language Model for Controllable Generation. arXiv:1909.05858 [cs].",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Hady Elsahar, and Marc Dymetman. 2020. A Distributional Approach to Controlled Text Generation",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Khalifa",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2012.11635[cs].ArXiv:2012.11635"
]
},
"num": null,
"urls": [],
"raw_text": "Muhammad Khalifa, Hady Elsahar, and Marc Dymet- man. 2020. A Distributional Approach to Controlled Text Generation. arXiv:2012.11635 [cs]. ArXiv: 2012.11635.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Shafiq Joty, Richard Socher, and Nazneen Fatema Rajani. 2020. GeDi: Generative Discriminator Guided Sequence Generation",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Akhilesh",
"middle": [],
"last": "Deepak Gotmare",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Mc-Cann",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.06367[cs].ArXiv:2009.06367"
]
},
"num": null,
"urls": [],
"raw_text": "Ben Krause, Akhilesh Deepak Gotmare, Bryan Mc- Cann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, and Nazneen Fatema Rajani. 2020. GeDi: Generative Discriminator Guided Sequence Genera- tion. arXiv:2009.06367 [cs]. ArXiv: 2009.06367.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Prefix-Tuning: Optimizing Continuous Prompts for Generation",
"authors": [
{
"first": "Xiang",
"middle": [
"Lisa"
],
"last": "Li",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2101.00190[cs].ArXiv:2101.00190"
]
},
"num": null,
"urls": [],
"raw_text": "Xiang Lisa Li and Percy Liang. 2021. Prefix-Tuning: Optimizing Continuous Prompts for Generation. arXiv:2101.00190 [cs]. ArXiv: 2101.00190.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Sumanth Dathathri, and Pascale Fung. 2020. Plug-and-Play Conversational Models",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Madotto",
"suffix": ""
},
{
"first": "Etsuko",
"middle": [],
"last": "Ishii",
"suffix": ""
},
{
"first": "Zhaojiang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.04344"
]
},
"num": null,
"urls": [],
"raw_text": "Andrea Madotto, Etsuko Ishii, Zhaojiang Lin, Sumanth Dathathri, and Pascale Fung. 2020. Plug-and-Play Conversational Models. arXiv:2010.04344 [cs].",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Plug and Play Autoencoders for Conditional Text Generation",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Mai",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Pappas",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Montero",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6076--6092",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.491"
]
},
"num": null,
"urls": [],
"raw_text": "Florian Mai, Nikolaos Pappas, Ivan Montero, Noah A. Smith, and James Henderson. 2020. Plug and Play Autoencoders for Conditional Text Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6076-6092, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Pushmeet",
"middle": [],
"last": "Kohli",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "839--849",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A Cor- pus and Evaluation Framework for Deeper Under- standing of Commonsense Stories. Proceedings of the 2016 Conference of the North {A}merican Chap- ter of the Association for Computational Linguis- tics: Human Language Technologies, pages 839- 849. ArXiv: 1604.01696.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Directed Beam Search: Plug-and-Play Lexically Constrained Language Generation",
"authors": [
{
"first": "Damian",
"middle": [],
"last": "Pascual",
"suffix": ""
},
{
"first": "Beni",
"middle": [],
"last": "Egressy",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Bolli",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Wattenhofer",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2012.15416"
]
},
"num": null,
"urls": [],
"raw_text": "Damian Pascual, Beni Egressy, Florian Bolli, and Roger Wattenhofer. 2020. Directed Beam Search: Plug-and-Play Lexically Constrained Language Generation. arXiv:2012.15416 [cs].",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Language Models are Unsupervised Multitask Learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Lan- guage Models are Unsupervised Multitask Learners. page 24.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "PlotMachines: Outlineconditioned generation with dynamic plot state tracking",
"authors": [
{
"first": "Asli",
"middle": [],
"last": "Hannah Rashkin",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4274--4295",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.349"
]
},
"num": null,
"urls": [],
"raw_text": "Hannah Rashkin, Asli Celikyilmaz, Yejin Choi, and Jianfeng Gao. 2020. PlotMachines: Outline- conditioned generation with dynamic plot state tracking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 4274-4295, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Universal Adversarial Triggers for Attacking and Analyzing NLP",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Kandpal",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2153--2162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gard- ner, and Sameer Singh. 2019. Universal Adver- sarial Triggers for Attacking and Analyzing NLP. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), (2019):2153- 2162. ArXiv: 1908.07125.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Narrative Interpolation for Generating and Understanding Stories",
"authors": [
{
"first": "Su",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.07466"
]
},
"num": null,
"urls": [],
"raw_text": "Su Wang, Greg Durrett, and Katrin Erk. 2020. Nar- rative Interpolation for Generating and Understand- ing Stories. arXiv:2008.07466 [cs].",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "2020. HuggingFace's Transformers: State-of-the-art Natural Language Processing",
"authors": [
{
"first": "Sylvain",
"middle": [],
"last": "Teven Le Scao",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Lhoest",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. HuggingFace's Transformers: State-of-the-art Nat- ural Language Processing. arXiv:1910.03771 [cs].",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Hannaneh Hajishirzi, Mari Ostendorf, and Bill Dolan. 2020. A Controllable Model of Grounded Response Generation",
"authors": [
{
"first": "Zeqiu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Rik",
"middle": [],
"last": "Koncel-Kedziorski",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00613[cs].ArXiv:2005.00613"
]
},
"num": null,
"urls": [],
"raw_text": "Zeqiu Wu, Michel Galley, Chris Brockett, Yizhe Zhang, Xiang Gao, Chris Quirk, Rik Koncel-Kedziorski, Jianfeng Gao, Hannaneh Hajishirzi, Mari Ostendorf, and Bill Dolan. 2020. A Controllable Model of Grounded Response Generation. arXiv:2005.00613 [cs]. ArXiv: 2005.00613.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Plan-And-Write: Towards Better Automatic Storytelling",
"authors": [
{
"first": "Lili",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "7378--7385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2019. Plan- And-Write: Towards Better Automatic Storytelling. Proceedings of the AAAI Conference on Artificial In- telligence, 33(1):7378-7385. ArXiv: 1811.05701.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "(a) Baseline on order-shuffled stories in ROCStories dataset. (b) Total control strength 1x. (c) Total control strength 2x. (d) Total control strength 4x.",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "average \u03c4 -a (higher meaning better control fidelity) under different Total control strength for the tuned model with topics: (c1) Business, (c2) Science, (c3) Sports, (c4) World, comparing to uncontrolled baseline. Heat map strength is given as percentages (\u2212100% . . . 100%).(a) Perplexity of generated sequences. (b) Total control strength 1x. (c) Total control strength 2x. (d) Total control strength 4x.",
"num": null
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "Experiment results for the untuned model. Refer to Figure 3a for baseline comparison.",
"num": null
}
}
}
}