{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:05:52.021100Z" }, "title": "Plug-and-Blend: A Framework for Controllable Story Generation with Blended Control Codes", "authors": [ { "first": "Zhiyu", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "Georgia Institute of Technology North Ave NW", "location": { "postCode": "30332", "settlement": "Atlanta", "region": "GA" } }, "email": "zhiyulin@gatech.edu" }, { "first": "Mark", "middle": [ "O" ], "last": "Riedl", "suffix": "", "affiliation": {}, "email": "riedl@cc.gatech.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe a Plug-and-Play controllable language generation framework, Plug-and-Blend, that allows a human user to input multiple control codes (topics). In the context of automated story generation, this allows a human user loose or fine-grained control of the topics that will appear in the generated story, and can even allow for overlapping, blended topics. We show that our framework, working with different generation models, controls the generation towards given continuous-weighted control codes while keeping the generated sentences fluent, demonstrating strong blending capability.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We describe a Plug-and-Play controllable language generation framework, Plug-and-Blend, that allows a human user to input multiple control codes (topics). In the context of automated story generation, this allows a human user loose or fine-grained control of the topics that will appear in the generated story, and can even allow for overlapping, blended topics. We show that our framework, working with different generation models, controls the generation towards given continuous-weighted control codes while keeping the generated sentences fluent, demonstrating strong blending capability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recent advancement in very large pre-trained neural language models (e.g. (Radford et al., 2019; Brown et al., 2020) ) has enabled a new generation of applications that make use of the text generation capability they provide, ranging from autocompletion of e-mails to solving complicated math equations. However, these very large pre-trained neural language models are also difficult to control beyond providing a prompt for a generated continuation. This makes very large language models ill-suited for co-creative tasks wherein a human works with a language model in an iterative fashion to produce novel content, such as stories or poems. 
Co-creative tasks require the ability not only to prompt the language model but also to guide the generation with, for example, style, context, or topic constraints.", "cite_spans": [ { "start": 74, "end": 96, "text": "(Radford et al., 2019;", "ref_id": "BIBREF18" }, { "start": 97, "end": 116, "text": "Brown et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Conditional generation is a family of text generation methods that attempt to provide controllability by either directly modifying the model to accept control signals or posing constraints on the generation process. Conditional text generation techniques add an extra input feature (Ficler and Goldberg, 2017) and fine-tune the model with this additional information embedded (Fang et al., 2021; Hosseini-Asl et al., 2020; Keskar et al., 2019; Khalifa et al., 2020; Hu et al., 2017; Wu et al., 2020; Ficler and Goldberg, 2017; Chan et al., 2020) , or sideload additional discriminators alongside a pre-trained model without changing the base model parameters wholesale (Dathathri et al., 2020; Madotto et al., 2020; Duan et al., 2020; Mai et al., 2020) .", "cite_spans": [ { "start": 282, "end": 309, "text": "(Ficler and Goldberg, 2017)", "ref_id": "BIBREF6" }, { "start": 363, "end": 382, "text": "(Fang et al., 2021;", "ref_id": "BIBREF5" }, { "start": 383, "end": 409, "text": "Hosseini-Asl et al., 2020;", "ref_id": "BIBREF8" }, { "start": 410, "end": 430, "text": "Keskar et al., 2019;", "ref_id": "BIBREF10" }, { "start": 431, "end": 452, "text": "Khalifa et al., 2020;", "ref_id": null }, { "start": 453, "end": 469, "text": "Hu et al., 2017;", "ref_id": "BIBREF9" }, { "start": 470, "end": 486, "text": "Wu et al., 2020;", "ref_id": "BIBREF8" }, { "start": 487, "end": 513, "text": "Ficler and Goldberg, 2017;", "ref_id": "BIBREF6" }, { "start": 514, "end": 532, "text": "Chan et al., 2020)", "ref_id": "BIBREF2" }, { "start": 661, "end": 685, "text": "(Dathathri et al., 2020;", "ref_id": "BIBREF3" }, { "start": 686, "end": 707, "text": "Madotto et al., 2020;", "ref_id": "BIBREF3" }, { "start": 708, "end": 726, "text": "Duan et al., 2020;", "ref_id": "BIBREF4" }, { "start": 727, "end": 744, "text": "Mai et al., 2020)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We seek \"plug-and-play\" approaches to controllable text generation wherein new language models can be slotted into existing generative systems; as new language models continue to be developed, it becomes intractable to update and retrain controlled generation architectures for each of them. Plug-and-play techniques such as (Krause et al., 2020; Pascual et al., 2020) aim to intervene only on the outputs (a vector of logits) of a generative language model. This becomes especially important as the latest iteration of very large pre-trained language models such as GPT-3 (Brown et al., 2020) restricts access to the hidden states and layer weights of the models. 
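As a rough illustration of what intervening only on the output logits looks like, the following sketch blends the base LM's next-token distribution with per-topic guidance signals whose strengths are continuously weighted control codes, in the spirit of GeDi (Krause et al., 2020); the callables base_log_probs and guided_log_probs, the guidance_strength parameter, and the difference-based guidance signal are illustrative assumptions rather than the exact formulation of any cited system.

```python
# A minimal sketch of logit-level, plug-and-play guidance with blended,
# continuously weighted control codes. Assumptions (not from the paper): the base
# LM and a topic-conditioned guiding LM are exposed as callables that return
# next-token log-probabilities over a shared vocabulary.

import numpy as np

def blended_next_token_probs(prefix_ids, base_log_probs, guided_log_probs,
                             weights, guidance_strength=10.0):
    # weights: continuous blend of control codes, e.g. {'sports': 0.7, 'science': 0.3}
    base = np.asarray(base_log_probs(prefix_ids), dtype=np.float64)
    scores = base.copy()
    for code, weight in weights.items():
        topical = np.asarray(guided_log_probs(prefix_ids, code), dtype=np.float64)
        # (topical - base) acts as an approximate per-token topic signal; the
        # user-chosen weight scales how strongly each control code biases decoding.
        scores += guidance_strength * weight * (topical - base)
    scores -= scores.max()        # numerical stability before exponentiation
    probs = np.exp(scores)
    return probs / probs.sum()    # next-token distribution to sample from
```

Sampling each next token from the returned distribution and appending it to the prefix yields a continuation biased towards the weighted mix of topics, while the base LM itself is never modified. 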
As language models improve, they can be easily incorporated into existing controllable generation frameworks.", "cite_spans": [ { "start": 303, "end": 324, "text": "(Krause et al., 2020;", "ref_id": null }, { "start": 325, "end": 346, "text": "Pascual et al., 2020)", "ref_id": "BIBREF17" }, { "start": 551, "end": 571, "text": "(Brown et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We present Plug-and-Blend 1 , an efficient plug-and-play generative framework for controllable text generation that (a) works with the logit outputs of any language model; (b) facilitates fine control of generated sentences by allowing continuous bias towards specific control codes; and (c) allows multiple control codes representing style and topic constraints to be provided in overlapping contexts. These control codes can be blended together to generate content that meets multiple style or topic constraints. We show that these key capabilities enable latent-space walking in the space of generated sentences, and present a simple content planning technique that uses this feature to generate paragraphs reflecting user intentions in a co-authoring setting (sketched below). We present our work in the context of automated story generation, wherein a human author provides a prompt as well as a high-level control specification for topics. (1: Code available at https://github.com/xxbidiao/plug-and-blend) (Figure 1 : Illustration of the overall architecture of our framework; the user specifies topics for a 10-sentence story, e.g., \"sports\" for lines 1-5 and \"science\" for lines 5-10.)
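As a rough illustration of how such a per-line topic specification can be turned into the blend weights consumed by the control model, the sketch below assumes a simple normalized-overlap scheme; the function plan_blend_weights and the equal-sharing rule for overlapping lines are illustrative assumptions, not necessarily the exact planner used in this work.

```python
# Illustrative sketch (assumed scheme): turn user-specified topic spans over
# story lines into per-line blend weights for the control model.

def plan_blend_weights(num_lines, topic_spans):
    # topic_spans: inclusive line ranges per topic, e.g. {'sports': (1, 5), 'science': (5, 10)}
    plan = []
    for line in range(1, num_lines + 1):
        active = {topic: 1.0 for topic, (lo, hi) in topic_spans.items() if lo <= line <= hi}
        total = sum(active.values()) or 1.0
        # Overlapping topics share the control budget, e.g. line 5 -> 50% sports, 50% science.
        plan.append({topic: w / total for topic, w in active.items()})
    return plan

# Example from Figure 1: a 10-sentence story, 'sports' on lines 1-5, 'science' on lines 5-10.
weights_per_line = plan_blend_weights(10, {'sports': (1, 5), 'science': (5, 10)})
```

Each per-line weight dictionary can then be handed, together with the story context generated so far, to a blended decoding step such as the one sketched above, so that every new sentence is biased towards its intended mix of topics.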
", "cite_spans": [ { "start": 800, "end": 801, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 939, "end": 947, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Researchers aim for \"plug-and-play\" (PnP) frameworks (Dathathri et al., 2020) which can be used alongside an existing generative LM (referred to as the \"base LM\") with minimal or no interference between the PnP components and the base LM.", "cite_spans": [ { "start": 53, "end": 77, "text": "(Dathathri et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Plug-and-Play Conditional Generation", "sec_num": "2.1" }, { "text": "Compared to non-plug-and-play methods (\"white-box\" approaches), these frameworks can be roughly classified into three categories. Gray-box approaches access and modify some non-input-output layer computations, usually the hidden representations, hence \"plugging\" an additional model into the middle of the base LM (Dathathri et al., 2020; Madotto et al., 2020; Duan et al., 2020; Mai et al., 2020) . Black-box approaches, including \"Prompt Engineering\", aim to change the prompts fed into the base LM at inference time (Wallace et al., 2019; Li and Liang, 2021) . Guided generation aims to build a controllable \"guiding\" model that shifts the output of the base LM at inference time (Krause et al., 2020; Pascual et al., 2020) .", "cite_spans": [ { "start": 310, "end": 334, "text": "(Dathathri et al., 2020;", "ref_id": "BIBREF3" }, { "start": 335, "end": 356, "text": "Madotto et al., 2020;", "ref_id": "BIBREF3" }, { "start": 357, "end": 375, "text": "Duan et al., 2020;", "ref_id": "BIBREF4" }, { "start": 376, "end": 393, "text": "Mai et al., 2020)", "ref_id": "BIBREF15" }, { "start": 518, "end": 540, "text": "(Wallace et al., 2019;", "ref_id": "BIBREF20" }, { "start": 541, "end": 560, "text": "Li and Liang, 2021)", "ref_id": "BIBREF13" }, { "start": 686, "end": 707, "text": "(Krause et al., 2020;", "ref_id": null }, { "start": 708, "end": 729, "text": "Pascual et al., 2020)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Plug-and-Play Conditional Generation", "sec_num": "2.1" }, { "text": "The generation model we propose is an extension of GeDi (Krause et al., 2020) . In addition to completely decoupling generation from control, we enhance it with additional capabilities that support multi-topic generation with continuous weighting, supporting downstream applications while preserving its ability to transfer to different base LMs.", "cite_spans": [ { "start": 56, "end": 77, "text": "(Krause et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Plug-and-Play Conditional Generation", "sec_num": "2.1" }, { "text": "Neural story generation systems train or fine-tune a language model on story data. Sampling from a language model trained on story data tends to result in text output that looks like stories as well. However, sampling from P \u03b8 (x t |x