{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:24:02.357102Z" }, "title": "MIXINGBOARD: a Knowledgeable Stylized Integrated Text Generation Platform", "authors": [ { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Research", "location": { "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Research", "location": { "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "mgalley@microsoft.com" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Research", "location": { "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present MIXINGBOARD, a platform for quickly building demos with a focus on knowledge grounded stylized text generation. We unify existing text generation algorithms in a shared codebase and further adapt earlier algorithms for constrained generation. To borrow advantages from different models, we implement strategies for cross-model integration, from the token probability level to the latent space level. An interface to external knowledge is provided via a module that retrieves onthe-fly relevant knowledge from passages on the web or any document collection. A user interface for local development, remote webpage access, and a RESTful API are provided to make it simple for users to build their own demos 1 .", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We present MIXINGBOARD, a platform for quickly building demos with a focus on knowledge grounded stylized text generation. We unify existing text generation algorithms in a shared codebase and further adapt earlier algorithms for constrained generation. To borrow advantages from different models, we implement strategies for cross-model integration, from the token probability level to the latent space level. An interface to external knowledge is provided via a module that retrieves onthe-fly relevant knowledge from passages on the web or any document collection. A user interface for local development, remote webpage access, and a RESTful API are provided to make it simple for users to build their own demos 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural text generation algorithms have seen great improvements over the past several years (Radford et al., 2019; Gao et al., 2019a) . However each algorithm and neural model usually focuses on a specific task and may differ significantly from each other in terms of architecture, implementation, interface, and training domains. It is challenging to unify these algorithms theoretically, but a framework to organically integrate multiple algorithms and components can benefit the community in several ways, as it provides (1) a shared codebase to reproduce and compare the state-of-the-art algorithms from different groups without time consuming trial and errors, (2) a platform to experiment the cross-model integration of these algorithms, and (3) a framework to build demo quickly upon these components. 
This framework can be built upon existing deep learning libraries (Paszke et al., 2019; Abadi et al., 2015) and neural NLP toolkits (HuggingFace, 2019; Gardner et al., 2018; Hu et al., 2018; Ott et al., 2019; Shiv et al., 2019; Miller et al., 2017),2 as illustrated in Fig. 1. (Footnote 1: Source code available at github.com/microsoft/MixingBoard. Figure 1: MIXINGBOARD is designed as a platform to organically and quickly integrate separate NLP algorithms into compelling demos.)", "cite_spans": [ { "start": 91, "end": 113, "text": "(Radford et al., 2019;", "ref_id": "BIBREF18" }, { "start": 114, "end": 132, "text": "Gao et al., 2019a)", "ref_id": "BIBREF4" }, { "start": 874, "end": 895, "text": "(Paszke et al., 2019;", "ref_id": "BIBREF15" }, { "start": 1090, "end": 1109, "text": "Abadi et al., 2015)", "ref_id": "BIBREF0" }, { "start": 1134, "end": 1154, "text": "(HuggingFace, 2019;", "ref_id": null }, { "start": 1155, "end": 1176, "text": "Gardner et al., 2018;", "ref_id": "BIBREF7" }, { "start": 1177, "end": 1193, "text": "Hu et al., 2018;", "ref_id": "BIBREF10" }, { "start": 1194, "end": 1211, "text": "Ott et al., 2019;", "ref_id": "BIBREF14" }, { "start": 1212, "end": 1230, "text": "Shiv et al., 2019;", "ref_id": "BIBREF20" }, { "start": 1231, "end": 1251, "text": "Miller et al., 2017)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 958, "end": 966, "text": "Figure 1", "ref_id": null }, { "start": 1274, "end": 1280, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Such integration poses several challenges. First, engineering effort is needed to unify the interfaces of different implementations. Second, a top-level manager needs to be designed to utilize the different models together. Finally, different models are trained on different data and differ in performance. Cross-model integration, instead of calling each isolated model individually, can potentially improve the overall performance. In this work, we unify models of different implementations in a single codebase, implement demos as top-level managers that access the different models, and provide strategies to allow more organic integration across the models, including token probability interpolation, cross-model scoring, latent interpolation, and unified hypothesis ranking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This work also aims to promote the development of grounded text generation. Existing work on knowledge grounded text generation (Prabhumoye et al., 2019; Qin et al., 2019) usually assumes the knowledge passage is given. In practice, however, this is rarely true. We provide a component to retrieve knowledge passages on the fly from the web or customized documents, allowing engineers and researchers to test existing or new generation models. Keyphrase constrained generation (Hokamp and Liu, 2017) is, broadly speaking, another type of grounded generation. Similarly, the keyphrases need to be provided to apply such constraints.
We provide tools to extract constraints from a knowledge passage or a stylized corpus.", "cite_spans": [ { "start": 151, "end": 176, "text": "(Prabhumoye et al., 2019;", "ref_id": "BIBREF16" }, { "start": 177, "end": 194, "text": "Qin et al., 2019;", "ref_id": "BIBREF17" }, { "start": 486, "end": 508, "text": "(Hokamp and Liu, 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Finally, a friendly user interface is a component usually lacking in implementations of neural models, yet it is necessary for a demo-centric framework. We provide scripts to build a local terminal demo, a webpage demo, and a RESTful API demo.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our goal is to build a framework that allows users to quickly build text generation demos using existing modeling techniques. This design allows the framework to be almost agnostic to the ongoing development of text generation techniques (Gao et al., 2019a). Instead, we focus on the organic integration of models and on the interfaces for the final demo/app. From a top-down view, our design targets two markets, text processing assistants and conversational AI, as illustrated in Fig. 2. Two demos are presented as examples in these domains: document auto-completion and Sherlock Holmes. We further break these demos down into several tasks, designed to be shared across different demos. We also designed several strategies to integrate multiple models to generate text. These strategies allow each model to plug in without heavy constraints on the architecture of the models, as detailed in Section 4.", "cite_spans": [ { "start": 242, "end": 261, "text": "(Gao et al., 2019a)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 491, "end": 497, "text": "Fig. 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Design", "sec_num": "2" }, { "text": "As the goal is not another deep learning NLP toolkit, we rely on existing ones (HuggingFace, 2019; Paszke et al., 2019; Gardner et al., 2018) and on online API services: Bing Web Search, provided in Azure Cognitive Services,3 and TagME.4 Similarly, most tasks use existing algorithms: language modeling (Radford et al., 2019), knowledge grounded generation (Qin et al., 2019; Prabhumoye et al., 2019) or span retrieval (Seo et al., 2016; Devlin et al., 2018), style transfer (Gao et al., 2019c,b), and constrained generation (Hokamp and Liu, 2017).", "cite_spans": [ { "start": 99, "end": 119, "text": "Paszke et al., 2019;", "ref_id": "BIBREF15" }, { "start": 120, "end": 141, "text": "Gardner et al., 2018)", "ref_id": "BIBREF7" }, { "start": 231, "end": 232, "text": "4", "ref_id": null }, { "start": 304, "end": 325, "text": "Radford et al., 2019)", "ref_id": "BIBREF18" }, { "start": 358, "end": 376, "text": "(Qin et al., 2019;", "ref_id": "BIBREF17" }, { "start": 377, "end": 401, "text": "Prabhumoye et al., 2019)", "ref_id": "BIBREF16" }, { "start": 420, "end": 438, "text": "(Seo et al., 2016;", "ref_id": "BIBREF19" }, { "start": 439, "end": 459, "text": "Devlin et al., 2018)", "ref_id": "BIBREF2" }, { "start": 477, "end": 498, "text": "(Gao et al., 2019c,b)", "ref_id": null }, { "start": 526, "end": 548, "text": "(Hokamp and Liu, 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Design", "sec_num": "2" }, { "text": "We use the following free-text, unstructured text sources to retrieve relevant knowledge passages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge passage retrieval", "sec_num": "3.1" }, { "text": "\u2022 Search engine. Free-text \"knowledge\" is retrieved from the following sources: 1) text snippets from (customized) webpage search; 2) text snippets from (customized) news search; 3) user-provided documents. \u2022 Specialized websites. For certain preferred domains, e.g., wikipedia.org, we further download the whole webpage (rather than just the text snippet returned by the search engine) to obtain more text snippets. \u2022 Users can also provide a customized knowledge base, like a README file, which can be updated on the fly to let the agent use the latest knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge passage retrieval", "sec_num": "3.1" }, { "text": "Users may select one or multiple of the sources listed above to obtain knowledge passage candidates. The text snippets are then ranked by relevance as well as diversity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge passage retrieval", "sec_num": "3.1" },
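{ "text": "To make the retrieval interface concrete, below is a minimal sketch of fetching text snippets from the Bing Web Search API mentioned above; this is an illustration rather than the MIXINGBOARD implementation, and the endpoint, header name, and response fields should be checked against the current Azure documentation:\n\nimport requests\n\ndef retrieve_snippets(query, api_key, count=5):\n    # Query the Bing Web Search v7 endpoint for a batch of results.\n    resp = requests.get(\n        'https://api.bing.microsoft.com/v7.0/search',\n        headers={'Ocp-Apim-Subscription-Key': api_key},\n        params={'q': query, 'count': count},\n    )\n    resp.raise_for_status()\n    # Each result carries a short text snippet usable as a knowledge passage.\n    pages = resp.json().get('webPages', {}).get('value', [])\n    return [(p['name'], p['url'], p['snippet']) for p in pages]\n\nThe returned snippets would then be passed to the ranking step (relevance and diversity) described above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge passage retrieval", "sec_num": "3.1" },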
{ "text": "We provide a component to retrieve synonyms of a given target style for a query word. This component is useful for the style transfer module (Section 3.4) as well as the constrained generation module (Section 3.7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylized synonym", "sec_num": "3.2" }, { "text": "The similarity based on word2vec, sim_word2vec, is defined as the cosine similarity between the vectors of the two words. The similarity based on a human-edited dictionary, sim_dict, is defined as 1 if the candidate word is in the synonym list of the query word, and 0 otherwise. The final similarity between the two words is defined as the weighted average of these two similarities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylized synonym", "sec_num": "3.2" }, { "text": "sim = (1 \u2212 w_dict) sim_word2vec + w_dict sim_dict", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylized synonym", "sec_num": "3.2" }, { "text": "We only choose candidate words with a similarity higher than a certain threshold as synonyms of the query word. We then calculate the style score of these synonyms using a style classifier; we provide a logistic regression model taking a 1-gram multi-hot vector as features, trained on a non-stylized corpus vs. a stylized corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylized synonym", "sec_num": "3.2" },
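{ "text": "A minimal sketch of this synonym scoring follows; the helper inputs (vocab_vectors, dict_synonyms, style_scorer) and the default weight and threshold are illustrative assumptions, not the exact interface of the codebase:\n\nimport numpy as np\n\ndef stylized_synonyms(query, vocab_vectors, dict_synonyms, style_scorer,\n                      w_dict=0.5, threshold=0.6):\n    # vocab_vectors: word -> word2vec vector; dict_synonyms: word -> set of\n    # dictionary synonyms; style_scorer: word -> style score in [0, 1].\n    q = vocab_vectors[query]\n    results = []\n    for cand, v in vocab_vectors.items():\n        if cand == query:\n            continue\n        sim_w2v = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))\n        sim_dict = 1.0 if cand in dict_synonyms.get(query, set()) else 0.0\n        sim = (1 - w_dict) * sim_w2v + w_dict * sim_dict\n        if sim >= threshold:  # keep only candidates above the threshold\n            results.append((cand, sim, style_scorer(cand)))\n    # rank the surviving synonyms by style intensity\n    return sorted(results, key=lambda r: r[2], reverse=True)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylized synonym", "sec_num": "3.2" },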
{ "text": "Given two latent vectors, z_a and z_b, we expect the result decoded from the interpolated vector z_i = u z_a + (1 \u2212 u) z_b to retain the desired features of both z_a and z_b. However, this requires a smooth, interpolatable latent space. For this purpose, we apply SpaceFusion (Gao et al., 2019b) and StyleFusion (Gao et al., 2019c) to learn such a latent space. Latent interpolation is then used to transfer style, apply soft constraints, and interpolate hypotheses obtained using different models.", "cite_spans": [ { "start": 289, "end": 308, "text": "(Gao et al., 2019b)", "ref_id": null }, { "start": 325, "end": 344, "text": "(Gao et al., 2019c)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Latent interpolation", "sec_num": "3.3" },
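{ "text": "The interpolation itself is a single line; a minimal PyTorch sketch follows (the decoder that maps each interpolated vector back to text is not shown, and the dimensionality is illustrative):\n\nimport torch\n\ndef interpolate_latent(z_a, z_b, u):\n    # z_i = u * z_a + (1 - u) * z_b; only meaningful if the latent space\n    # is smooth, e.g., learned with SpaceFusion or StyleFusion.\n    return u * z_a + (1 - u) * z_b\n\n# sweep u to trade off the features of the two sources, then decode each z_i\nz_a, z_b = torch.randn(128), torch.randn(128)\ncandidates = [interpolate_latent(z_a, z_b, u) for u in torch.linspace(0, 1, 5)]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent interpolation", "sec_num": "3.3" },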
{ "text": "Gao et al. (2019c) proposed StyleFusion to generate stylized responses for a given conversation context by structuring a shared latent space for non-stylized conversation data and stylized samples. We extend it to a style transfer method, i.e., modifying the style of an input sentence while maintaining its content, via latent interpolation (see Section 3.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylized generation", "sec_num": "3.4" }, { "text": "\u2022 Soft-edit refers to a two-step algorithm: 1) edit the input sentence by replacing each word with a synonym of the target style (e.g., \"cookie\" replaced by \"biscuit\" if the target style is British), if one exists; 2) since the edited sentence from step 1 may not be fluent, apply latent interpolation between the input sentence and the edited sentence to seek a sentence that is both stylized and fluent. \u2022 Soft-retrieval refers to a similar two-step algorithm, but step 1) retrieves a \"similar\" sentence from a stylized corpus, and step 2) then applies the interpolation. One example is given in Fig. 5: the hypothesis \"he was once a schoolmaster in the north of england\" is retrieved given the DialoGPT hypothesis \"he's a professor at the university of london\".", "cite_spans": [], "ref_spans": [ { "start": 609, "end": 615, "text": "Fig. 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Stylized generation", "sec_num": "3.4" }, { "text": "This task generates a set of candidate responses conditioned on the conversation history, or a follow-up sentence conditioned on the existing text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditioned text generation", "sec_num": "3.5" }, { "text": "\u2022 GPT-2 (Radford et al., 2019) is a transformer-based (Vaswani et al., 2017) text generation model. \u2022 DialoGPT (Zhang et al., 2019) is a large-scale pre-trained conversation model obtained by training GPT-2 (Radford et al., 2019) on Reddit comment data. \u2022 SpaceFusion (Gao et al., 2019b) is a regularized multi-task learning framework proposed to learn a smooth and interpolatable latent space.", "cite_spans": [ { "start": 8, "end": 29, "text": "(Radford et al., 2019)", "ref_id": "BIBREF18" }, { "start": 49, "end": 70, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF21" }, { "start": 111, "end": 131, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF22" }, { "start": 252, "end": 270, "text": "(Gao et al., 2019b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conditioned text generation", "sec_num": "3.5" },
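{ "text": "As an illustration of this step, the following sketch samples several candidate continuations with the pre-trained GPT-2 checkpoint distributed through the HuggingFace transformer library cited in Section 2; the model choice and sampling parameters are illustrative, not the demo's exact settings:\n\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\n\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\n\ncontext = 'The detective walked into the foggy London street.'\ninput_ids = tokenizer.encode(context, return_tensors='pt')\n\n# sample several candidate hypotheses for later ranking\noutputs = model.generate(\n    input_ids,\n    max_length=input_ids.shape[1] + 30,\n    do_sample=True,\n    top_k=50,\n    num_return_sequences=3,\n    pad_token_id=tokenizer.eos_token_id,\n)\nhypotheses = [tokenizer.decode(o[input_ids.shape[1]:], skip_special_tokens=True)\n              for o in outputs]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditioned text generation", "sec_num": "3.5" },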
{ "text": "We consider the following methods to consume the retrieved knowledge passages and relevant long-form text on the fly as a source of external knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge grounded generation", "sec_num": "3.6" }, { "text": "\u2022 Machine reading comprehension. In the codebase, we fine-tuned BERT on SQuAD. \u2022 Content transfer is a task proposed in (Prabhumoye et al., 2019), designed to generate, given a context, a sentence using knowledge from an external article. We implemented this algorithm in the codebase. \u2022 Knowledge grounded response generation is a task first proposed in (Ghazvininejad et al., 2018) and later extended in the Dialog System Technology Challenge 7 (DSTC7) (Galley et al., 2019). We implemented the CMR (Conversation with on-demand Machine Reading) algorithm proposed in (Qin et al., 2019).", "cite_spans": [ { "start": 120, "end": 145, "text": "(Prabhumoye et al., 2019)", "ref_id": "BIBREF16" }, { "start": 356, "end": 384, "text": "(Ghazvininejad et al., 2018)", "ref_id": "BIBREF8" }, { "start": 455, "end": 476, "text": "(Galley et al., 2019)", "ref_id": "BIBREF3" }, { "start": 545, "end": 563, "text": "(Qin et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge grounded generation", "sec_num": "3.6" }, { "text": "Besides grounded generation, it is also useful to apply constraints at the decoding stage that encourage the generated hypotheses to contain the desired phrases. We provide the following two ways to obtain constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained generation", "sec_num": "3.7" }, { "text": "\u2022 Key phrases extracted from the knowledge passage. We use the PKE package (Boudin, 2016) to identify the keywords. \u2022 In some cases, users may want to use a stylized version of the topic phrases, or phrases of a desired style, as the constraints. We use the stylized synonym algorithm introduced in Section 3.2 to provide such stylized constraints.", "cite_spans": [ { "start": 75, "end": 89, "text": "(Boudin, 2016)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Constrained generation", "sec_num": "3.7" }, { "text": "With the constraints obtained above, we provide the following two ways to apply them during decoding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained generation", "sec_num": "3.7" }, { "text": "\u2022 Hard constraints are applied via Grid Beam Search (GBS) (Hokamp and Liu, 2017), a lexically constrained decoding algorithm that can be applied to almost any model at the decoding stage and generates hypotheses that contain the desired phrases (i.e., the constraints). We implemented GBS to provide hard constrained decoding. \u2022 Soft constraints refer to the case where the generation is likely, but not guaranteed, to satisfy the constraints (e.g., to include given keywords in the hypothesis). We provide an adapted version of SpaceFusion (Gao et al., 2019b) for this purpose. Gao et al. (2019b) proposed to align the latent space of a Sequence-to-Sequence (S2S) model and that of an Autoencoder (AE) model to improve dialogue generation performance. Inspired by this work, we propose to replace the S2S model with a keywords-to-sequence model, which takes a multi-hot encoding of the keywords x identified from sentence y, as illustrated in Fig. 3. During training, we simply choose the top-k rarest words (rareness measured by inverse document frequency) as the keywords, where k is randomly drawn from a uniform distribution, k \u223c U(1, K).", "cite_spans": [ { "start": 56, "end": 78, "text": "(Hokamp and Liu, 2017)", "ref_id": "BIBREF9" }, { "start": 530, "end": 549, "text": "(Gao et al., 2019b)", "ref_id": null }, { "start": 568, "end": 586, "text": "Gao et al. (2019b)", "ref_id": null } ], "ref_spans": [ { "start": 926, "end": 932, "text": "Fig. 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Constrained generation", "sec_num": "3.7" },
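{ "text": "A minimal sketch of this keyword-selection scheme for one training sentence follows; the whitespace tokenization and the document-frequency table are simplifying assumptions:\n\nimport math\nimport random\nfrom collections import Counter\n\ndef training_keywords(sentence, doc_freq, num_docs, K=4):\n    # score each word by inverse document frequency (rarer = higher)\n    words = sentence.lower().split()\n    idf = {w: math.log(num_docs / (1 + doc_freq.get(w, 0))) for w in set(words)}\n    # draw k ~ U(1, K) and keep the top-k rarest words as the constraints\n    k = random.randint(1, K)\n    return sorted(idf, key=idf.get, reverse=True)[:k]\n\n# doc_freq would be counted over the training corpus, e.g.:\ncorpus = ['the detective solved the case', 'a quiet case of stolen letters']\ndoc_freq = Counter(w for doc in corpus for w in set(doc.split()))\nprint(training_keywords('the detective solved the case', doc_freq, len(corpus)))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained generation", "sec_num": "3.7" },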
{ "text": "Multiple models may be called for the same query and return different responses. We propose the following ways to organically integrate multiple models, as illustrated in Fig. 4. Users can apply these strategies with customized models.", "cite_spans": [], "ref_spans": [ { "start": 172, "end": 178, "text": "Fig. 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Cross-model integration", "sec_num": "4" }, { "text": "\u2022 Token probability interpolation refers to predicting the next token using a (weighted) average of the token probability distributions from two or more models at the same time step, given the same context and incomplete hypothesis. It has previously been proposed to bridge a conversation model and a stylized language model (Niu and Bansal, 2018). This technique does not require the models to share a latent space, but the vocabulary must be shared across the models. \u2022 Latent interpolation refers to the technique introduced in Section 3.3. It provides a way to interpolate texts in the latent space. Unlike the token-level strategy introduced above, this technique works at the latent level and ingests information from the whole sentence. However, if the two candidates are too dissimilar, the interpolation may produce undesired outputs. The soft constraint algorithm introduced in Section 3.7 is one option to apply such interpolation. \u2022 Cross-model pruning refers to pruning the hypothesis candidates (which can be incomplete hypotheses, e.g., during beam search) based not just on the joint token probability, but also on the probability evaluated by a secondary model. This strategy requires neither a shared vocabulary nor a shared latent space. Interpolating two models trained on dissimilar domains may be risky, but the cross-model pruning strategy is safer, as the secondary model is used only roughly as a discriminator rather than as a generator. \u2022 Unified hypothesis ranking is the final step, which gathers the hypotheses generated by each single model and those from the integration of multiple models using the above strategies. We consider the following criteria for hypothesis ranking: 1) likelihood, measured by the conditional token probability given the context; 2) informativeness, measured by the average inverse document frequency (IDF) of the tokens in the hypothesis; 3) repetition penalty, measured by the ratio of the number of unique n-grams to the total number of n-grams; and 4) style intensity, measured by a style classifier, if style is considered.", "cite_spans": [ { "start": 331, "end": 353, "text": "(Niu and Bansal, 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-model integration", "sec_num": "4" },
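{ "text": "As a concrete sketch of the first strategy, the following mixes the next-token distributions of several models that share a vocabulary; it assumes HuggingFace-style models whose forward pass returns logits, and uses greedy selection for brevity (beam search also applies):\n\nimport torch\nimport torch.nn.functional as F\n\ndef interpolated_next_token(models, weights, input_ids):\n    # average the next-token distributions of models sharing a vocabulary\n    probs = None\n    for model, w in zip(models, weights):\n        logits = model(input_ids).logits[:, -1, :]  # last-position logits\n        p = w * F.softmax(logits, dim=-1)\n        probs = p if probs is None else probs + p\n    return torch.argmax(probs, dim=-1)  # greedy pick of the next token", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-model integration", "sec_num": "4" },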
{ "text": "This demo is a step towards a virtual version of Sherlock Holmes, able to chat in Sherlock Holmes style, with Sherlock Holmes background knowledge in mind. As an extended version of the one introduced by Gao et al. (2019c), the current demo is grounded on knowledge and coupled with more advanced language modeling (Zhang et al., 2019). It is designed to integrate the following components: open-domain conversation, stylized response generation, knowledge-grounded conversation, and question answering. Specifically, for a given query, the following steps are executed:", "cite_spans": [ { "start": 204, "end": 222, "text": "Gao et al. (2019c)", "ref_id": "BIBREF6" }, { "start": 315, "end": 335, "text": "(Zhang et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Virtual Sherlock Holmes", "sec_num": "5.1" }, { "text": "\u2022 Call DialoGPT and StyleFusion (Gao et al., 2019c) to get a set of hypotheses. \u2022 Call the knowledge passage selection module to get a set of candidate passages. Then feed these passages to the span selection algorithm (BERT-based MRC (Devlin et al., 2018)) and CMR (Qin et al., 2019) to get a set of knowledge grounded responses. \u2022 Optionally, use the cross-model integration strategies, such as interpolating the token probabilities of DialoGPT and CMR. \u2022 Based on TF-IDF similarity, the best answer is retrieved from a user-provided corpus of question-answer pairs. If the similarity is lower than a certain threshold, the retrieved result is not returned. \u2022 Apply the style transfer module to obtain stylized versions of the hypotheses obtained in the steps above. \u2022 Feed all hypotheses to the unified ranker and return the top ones.", "cite_spans": [ { "start": 33, "end": 52, "text": "(Gao et al., 2019c)", "ref_id": "BIBREF6" }, { "start": 236, "end": 257, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF2" }, { "start": 268, "end": 286, "text": "(Qin et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Virtual Sherlock Holmes", "sec_num": "5.1" }, { "text": "This demo is designed as a writing assistant that suggests the next sentence given the context. The assistant is expected to be knowledgeable (able to retrieve relevant knowledge passages from the web or a given unstructured text source) and stylized (if a target style is specified). For a given query, the following steps are executed:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document auto-completion assistant", "sec_num": "5.2" }, { "text": "\u2022 Call the language model GPT-2 (Radford et al., 2019) to get a set of hypotheses. \u2022 Call the knowledge passage selection module to get a set of candidate passages. Then feed these passages to the content transfer algorithm (Prabhumoye et al., 2019) to get a set of knowledge grounded responses. \u2022 Optionally, use the cross-model integration strategies, such as latent interpolation, to interpolate hypotheses from the above models. \u2022 Apply the style transfer module to obtain stylized versions of the hypotheses obtained in the steps above. \u2022 Feed all hypotheses to the unified ranker and return the top ones.", "cite_spans": [ { "start": 32, "end": 54, "text": "(Radford et al., 2019)", "ref_id": "BIBREF18" }, { "start": 224, "end": 249, "text": "(Prabhumoye et al., 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Document auto-completion assistant", "sec_num": "5.2" }, { "text": "We provide the following three ways to access the demos introduced above: for local developers, for remote human users, and as an interface for other programs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User interface", "sec_num": "6" }, { "text": "\u2022 Command line interface is provided for local interaction. This is designed for developers to test the codebase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User interface", "sec_num": "6" }, { "text": "\u2022 Webpage interface is implemented using the Flask toolkit.5 A graphical interface is provided as an HTML webpage for remote access by human users. As illustrated in Fig. 5, the Sherlock Holmes webpage consists of an input panel where the user can provide context and control the style, a hypothesis list which specifies the model and scores of the ranked hypotheses, and a knowledge passage list showing the retrieved knowledge passages. Another example is given in Fig. 6 for the document auto-completion demo, where multiple options for the knowledge passage are given.", "cite_spans": [], "ref_spans": [ { "start": 164, "end": 170, "text": "Fig. 5", "ref_id": "FIGREF3" }, { "start": 506, "end": 512, "text": "Fig. 6", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "User interface", "sec_num": "6" }, { "text": "\u2022 RESTful API is implemented using the Flask-RESTful toolkit.6 A JSON object is returned for each remote request. This interface is designed to allow remote access by other programs. One example is to host this RESTful API on a dedicated GPU machine, so that the webpage interface can be hosted on another, less powerful machine that sends requests through the RESTful API.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User interface", "sec_num": "6" },
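{ "text": "A minimal sketch of such a RESTful endpoint using Flask-RESTful follows; generate_hypotheses is a hypothetical stand-in for the demo's top-level model manager, and the route and payload schema are illustrative:\n\nfrom flask import Flask, request\nfrom flask_restful import Api, Resource\n\napp = Flask(__name__)\napi = Api(app)\n\nclass Generate(Resource):\n    def post(self):\n        # parse the JSON request and run the (hypothetical) model manager\n        context = request.get_json().get('context', '')\n        return {'hypotheses': generate_hypotheses(context)}  # sent back as JSON\n\napi.add_resource(Generate, '/generate')\n\nif __name__ == '__main__':\n    # e.g., hosted on a dedicated GPU machine, queried by the webpage frontend\n    app.run(host='0.0.0.0', port=5000)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User interface", "sec_num": "6" },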
{ "text": "MIXINGBOARD is a new open-source platform that organically integrates multiple state-of-the-art NLP algorithms to quickly build demos with user-friendly interfaces. We unified these NLP algorithms in a single codebase, implemented demos as top-level managers to access different models, and provide strategies to allow more organic integration across the models. We provide a component to retrieve knowledge passages on the fly from the web or customized documents for grounded text generation. For future work, we plan to keep adding state-of-the-art algorithms, reduce latency, and fine-tune the implemented models on larger and/or more comprehensive corpora to improve performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "2 Although multiple libraries and toolkits are mentioned in Fig. 1, the current implementation is primarily based on PyTorch models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "3 https://azure.microsoft.com/en-us/services/cognitive-services/ 4 https://tagme.d4science.org/tagme/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "5 https://flask.palletsprojects.com/en/1.1.x/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "6 https://flask-restful.readthedocs.io/en/latest/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", "authors": [ { "first": "Mart\u00edn", "middle": [], "last": "Abadi", "suffix": "" }, { "first": "Ashish", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Barham", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Brevdo", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Citro", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Devin", "suffix": "" }, { "first": "Sanjay", "middle": [], "last": "Ghemawat", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Harp", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Irving", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Isard", "suffix": "" }, { "first": "Yangqing", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Jozefowicz", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Manjunath", "middle": [], "last": "Kudlur", "suffix": "" }, { "first": "Josh", "middle": [], "last": "Levenberg", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Wicke", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xiaoqiang", "middle": [], "last": "Zheng", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "pke: an open source python-based keyphrase extraction toolkit", "authors": [ { "first": "Florian", "middle": [], "last": "Boudin", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "69--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Florian Boudin. 2016. pke: an open source python-based keyphrase extraction toolkit. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations, pages 69-73, Osaka, Japan.", "links": null },
"BIBREF2": { "ref_id": "b2", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Grounded response generation task at DSTC7", "authors": [ { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2019, "venue": "AAAI Dialog System Technology Challenges (DSTC7) Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Galley, Chris Brockett, Xiang Gao, Jianfeng Gao, and Bill Dolan. 2019. Grounded response generation task at DSTC7. In AAAI Dialog System Technology Challenges (DSTC7) Workshop.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Neural approaches to conversational AI", "authors": [ { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Lihong", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "Foundations and Trends in Information Retrieval", "volume": "13", "issue": "", "pages": "127--298", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianfeng Gao, Michel Galley, and Lihong Li. 2019a. Neural approaches to conversational AI. Foundations and Trends in Information Retrieval, 13(2-3):127-298.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Jointly optimizing diversity and relevance in neural response generation", "authors": [ { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Sungjin", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2019, "venue": "NAACL-HLT 2019", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019b. Jointly optimizing diversity and relevance in neural response generation. NAACL-HLT 2019.", "links": null },
"BIBREF6": { "ref_id": "b6", "title": "Structuring latent spaces for stylized response generation", "authors": [ { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Sungjin", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.05361" ] }, "num": null, "urls": [], "raw_text": "Xiang Gao, Yizhe Zhang, Sungjin Lee, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2019c. Structuring latent spaces for stylized response generation. arXiv preprint arXiv:1909.05361.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Allennlp: A deep semantic natural language processing platform", "authors": [ { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Grus", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Oyvind", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "Pradeep", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "Nelson", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Schmitz", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.07640" ] }, "num": null, "urls": [], "raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. arXiv preprint arXiv:1803.07640.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A knowledge-grounded neural conversation model", "authors": [ { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Conference on Artificial Intelligence.", "links": null },
"BIBREF9": { "ref_id": "b9", "title": "Lexically constrained decoding for sequence generation using grid beam search", "authors": [ { "first": "Chris", "middle": [], "last": "Hokamp", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.07138" ] }, "num": null, "urls": [], "raw_text": "Chris Hokamp and Qun Liu. 2017. Lexically constrained decoding for sequence generation using grid beam search. arXiv preprint arXiv:1704.07138.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Texar: A modularized, versatile, and extensible toolkit for text generation", "authors": [ { "first": "Zhiting", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Haoran", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Tiancheng", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Junxian", "middle": [], "last": "He", "suffix": "" }, { "first": "Wentao", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lianhui", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Di", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1809.00794" ] }, "num": null, "urls": [], "raw_text": "Zhiting Hu, Haoran Shi, Zichao Yang, Bowen Tan, Tiancheng Zhao, Junxian He, Wentao Wang, Lianhui Qin, Di Wang, et al. 2018. Texar: A modularized, versatile, and extensible toolkit for text generation. arXiv preprint arXiv:1809.00794.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "PyTorch transformer repository", "authors": [ { "first": "", "middle": [], "last": "Huggingface", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "HuggingFace. 2019. PyTorch transformer repository.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Parlai: A dialog research software platform", "authors": [ { "first": "Alexander", "middle": [ "H" ], "last": "Miller", "suffix": "" }, { "first": "Will", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.06476" ] }, "num": null, "urls": [], "raw_text": "Alexander H Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, and Jason Weston. 2017. Parlai: A dialog research software platform. arXiv preprint arXiv:1705.06476.", "links": null },
"BIBREF13": { "ref_id": "b13", "title": "Polite dialogue generation without parallel data", "authors": [ { "first": "Tong", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association of Computational Linguistics", "volume": "6", "issue": "", "pages": "373--389", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tong Niu and Mohit Bansal. 2018. Polite dialogue generation without parallel data. Transactions of the Association of Computational Linguistics, 6:373-389.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of NAACL-HLT 2019: Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "8024--8035", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024-8035.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Towards content transfer through grounded text generation", "authors": [ { "first": "Shrimai", "middle": [], "last": "Prabhumoye", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" } ], "year": 2019, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shrimai Prabhumoye, Chris Quirk, and Michel Galley. 2019. Towards content transfer through grounded text generation. In Proc. of NAACL.", "links": null },
"BIBREF17": { "ref_id": "b17", "title": "Conversing by reading: Contentful neural conversation with on-demand machine reading", "authors": [ { "first": "Lianhui", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, and Jianfeng Gao. 2019. Conversing by reading: Contentful neural conversation with on-demand machine reading. In Proc. of ACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI Blog", "volume": "", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bidirectional attention flow for machine comprehension", "authors": [ { "first": "Minjoon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Aniruddha", "middle": [], "last": "Kembhavi", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.01603" ] }, "num": null, "urls": [], "raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603.", "links": null },
"BIBREF20": { "ref_id": "b20", "title": "Microsoft icecaps: An open-source toolkit for conversation modeling", "authors": [ { "first": "Vighnesh", "middle": [ "Leonardo" ], "last": "Shiv", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Anshuman", "middle": [], "last": "Suri", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Khuram", "middle": [], "last": "Shahid", "suffix": "" }, { "first": "Nithya", "middle": [], "last": "Govindarajan", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "123--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vighnesh Leonardo Shiv, Chris Quirk, Anshuman Suri, Xiang Gao, Khuram Shahid, Nithya Govindarajan, Yizhe Zhang, Jianfeng Gao, Michel Galley, Chris Brockett, et al. 2019. Microsoft icecaps: An open-source toolkit for conversation modeling. In Proceedings of the 57th Conference of the Association for Computational Linguistics: System Demonstrations, pages 123-128.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.", "links": null },
"BIBREF22": { "ref_id": "b22", "title": "DialoGPT: Large-scale generative pre-training for conversational response generation", "authors": [ { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Siqi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Yen-Chun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.00536" ] }, "num": null, "urls": [], "raw_text": "Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. DialoGPT: Large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "The architecture of MIXINGBOARD, consisting of layers from basic tools, algorithms, and tasks to integrated demos, with the market taken into consideration.", "num": null, "uris": null }, "FIGREF1": { "type_str": "figure", "text": "A soft keywords constrained generation model based on SpaceFusion (Gao et al., 2019b).", "num": null, "uris": null }, "FIGREF2": { "type_str": "figure", "text": "An example flow chart showing the integration of two models at different stages (blue boxes).", "num": null, "uris": null }, "FIGREF3": { "type_str": "figure", "text": "Sherlock Holmes webpage demo with a Wikipedia knowledge example.", "num": null, "uris": null }, "FIGREF4": { "type_str": "figure", "text": "Document auto-completion webpage demo with a user-input knowledge passage.", "num": null, "uris": null } } } }