{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:38:36.896929Z"
},
"title": "Interactive Task Learning from GUI-Grounded Natural Language Instructions and Demonstrations",
"authors": [
{
"first": "Toby Jia-Jun",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "tobyli@cs.cmu.edu"
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "tom.mitchell@cs.cmu.edu"
},
{
"first": "Brad",
"middle": [
"A"
],
"last": "Myers",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We show SUGILITE, an intelligent task automation agent that can learn new tasks and relevant associated concepts interactively from the user's natural language instructions and demonstrations, using the graphical user interfaces (GUIs) of third-party mobile apps. This system provides several interesting features: (1) it allows users to teach new task procedures and concepts through verbal instructions together with demonstration of the steps of a script using GUIs; (2) it supports users in clarifying their intents for demonstrated actions using GUI-grounded verbal instructions; (3) it infers parameters of tasks and their possible values in utterances using the hierarchical structures of the underlying app GUIs; and (4) it generalizes taught concepts to different contexts and task domains. We describe the architecture of the SUGILITE system, explain the design and implementation of its key features, and show a prototype in the form of a conversational assistant on Android.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We show SUGILITE, an intelligent task automation agent that can learn new tasks and relevant associated concepts interactively from the user's natural language instructions and demonstrations, using the graphical user interfaces (GUIs) of third-party mobile apps. This system provides several interesting features: (1) it allows users to teach new task procedures and concepts through verbal instructions together with demonstration of the steps of a script using GUIs; (2) it supports users in clarifying their intents for demonstrated actions using GUI-grounded verbal instructions; (3) it infers parameters of tasks and their possible values in utterances using the hierarchical structures of the underlying app GUIs; and (4) it generalizes taught concepts to different contexts and task domains. We describe the architecture of the SUGILITE system, explain the design and implementation of its key features, and show a prototype in the form of a conversational assistant on Android.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Interactive task learning (ITL) is an emerging research topic that focuses on enabling task automation agents to learn new tasks and their corresponding relevant concepts through natural interaction with human users (Laird et al., 2017) . This topic is also known as end user development (EUD) for task automation (Ko et al., 2011; Myers et al., 2017) . Work in this domain includes both physical agents (e.g., robots) that learn tasks that might involve sensing and manipulating objects in the real world (Chai et al., 2018; Argall et al., 2009) , as well as software agents that learn how to perform tasks through software interfaces (Azaria et al., 2016; Allen et al., 2007; Leshed et al., 2008) . This paper focuses on the latter category.",
"cite_spans": [
{
"start": 216,
"end": 236,
"text": "(Laird et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 314,
"end": 331,
"text": "(Ko et al., 2011;",
"ref_id": "BIBREF17"
},
{
"start": 332,
"end": 351,
"text": "Myers et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 506,
"end": 525,
"text": "(Chai et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 526,
"end": 546,
"text": "Argall et al., 2009)",
"ref_id": "BIBREF1"
},
{
"start": 636,
"end": 657,
"text": "(Azaria et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 658,
"end": 677,
"text": "Allen et al., 2007;",
"ref_id": "BIBREF0"
},
{
"start": 678,
"end": 698,
"text": "Leshed et al., 2008)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A particularly useful application of ITL is for conversational virtual assistants (e.g., Apple Siri, Google Assistant, Amazon Alexa). These systems have been widely adopted by end users to perform tasks in a variety of domains through natural language conversation. However, a key limitation of these systems is that their task fulfillment and language understanding capabilities are limited to a small set of pre-programmed tasks (Li et al., 2018b; . This limited support is not adequate for the diverse \"long-tail\" of user needs and preferences (Li et al., 2017a) . Although some software agents provide APIs to enable thirdparty developers to develop new \"skills\" for them, this requires significant programming expertise and relevant APIs, and therefore is not usable by the vast majority of end users.",
"cite_spans": [
{
"start": 431,
"end": 449,
"text": "(Li et al., 2018b;",
"ref_id": null
},
{
"start": 547,
"end": 565,
"text": "(Li et al., 2017a)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Natural language instructions play a key role in some ITL systems for virtual assistants, because this modality represents an natural way for humans to teach new tasks (often to other humans), and has a low learning barrier compared to existing textual or visual programming languages for task automation. Some prior systems (Azaria et al., 2016; Le et al., 2013; Srivastava et al., 2017 relied solely natural language instruction, while others (Allen et al., 2007; Kirk and Laird, 2019; Sereshkeh et al., 2020) also used demonstrations of direct manipulations to supplement the natural language instructions. We surveyed the prior work, and identified the following five key design challenges: 1. Usability: The system should be usable for users without significant programming expertise. It should be easy and intuitive to use with a low learning barrier. This requires careful design of the dialog flow to best match the user's natural model of task instruction.",
"cite_spans": [
{
"start": 325,
"end": 346,
"text": "(Azaria et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 347,
"end": 363,
"text": "Le et al., 2013;",
"ref_id": "BIBREF20"
},
{
"start": 364,
"end": 387,
"text": "Srivastava et al., 2017",
"ref_id": "BIBREF45"
},
{
"start": 445,
"end": 465,
"text": "(Allen et al., 2007;",
"ref_id": "BIBREF0"
},
{
"start": 466,
"end": 487,
"text": "Kirk and Laird, 2019;",
"ref_id": "BIBREF16"
},
{
"start": 488,
"end": 511,
"text": "Sereshkeh et al., 2020)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The system should handle a Figure 1 : An example dialog structure while SUGILITE learns a new task that contains a conditional and new concepts. The numbers indicate the sequence of the utterances. The screenshot on the right shows the conversational interface during these steps.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 35,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Applicability:",
"sec_num": "2."
},
{
"text": "wide range of common and long-tail tasks across different domains. Many existing systems can only work with pre-specified task domains Azaria et al., 2016; Gulwani and Marron, 2014) , or services that provide open API access to their functionalities (Campagna et al., 2017; Le et al., 2013) . This limits the applicability of those systems to a smaller subset of tasks.",
"cite_spans": [
{
"start": 135,
"end": 155,
"text": "Azaria et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 156,
"end": 181,
"text": "Gulwani and Marron, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 250,
"end": 273,
"text": "(Campagna et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 274,
"end": 290,
"text": "Le et al., 2013)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Applicability:",
"sec_num": "2."
},
{
"text": "The same problem also applies to the language understanding capability of the system. It should be able to understand, ground, and act upon instructions in different task domains (e.g., different phone apps) without requiring pre-built parsers for each domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applicability:",
"sec_num": "2."
},
{
"text": "The system should learn generalized procedures and concepts to handle new task contexts that go beyond the example context used for instruction. This includes inferring parameters of tasks, allowing the use of different parameter values, and adapting learned concepts to new task domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizability:",
"sec_num": "3."
},
{
"text": "The system should be sufficiently expressive to allow users to specify flexible rules, conditions, and other control structures that reflect their intentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Flexibility:",
"sec_num": "4."
},
{
"text": "The system should be resilient to minor changes in target applications, and be able to recover from errors caused by previously unseen or unexpected situations, possibly with some help from the user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness:",
"sec_num": "5."
},
{
"text": "To address these challenges, we present the prototype of a new task automation agent named SUGILITE 12 . This prototype integrates and implements the results from several of our prior research works (Li et al., 2017a (Li et al., , 2018a (Li et al., , 2017b Li and Riva, 2018; Li et al., 2019) , and we are current preparing for a field deployment study with this prototype. The implementation of our system is also open-sourced on GitHub 3 . The high-level approach used in SUGILITE is to combine conversational natural language instructions with demonstrations on mobile app GUIs, and to use each of the two modalities to disambiguate, ground, and supplement the user's inputs from the other modality through mixed-initiative interactions.",
"cite_spans": [
{
"start": 199,
"end": 216,
"text": "(Li et al., 2017a",
"ref_id": "BIBREF23"
},
{
"start": 217,
"end": 236,
"text": "(Li et al., , 2018a",
"ref_id": "BIBREF25"
},
{
"start": 237,
"end": 256,
"text": "(Li et al., , 2017b",
"ref_id": "BIBREF28"
},
{
"start": 257,
"end": 275,
"text": "Li and Riva, 2018;",
"ref_id": "BIBREF30"
},
{
"start": 276,
"end": 292,
"text": "Li et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness:",
"sec_num": "5."
},
{
"text": "This section explains how SUGILITE learns new tasks and concepts from the multi-modal interactive instructions from the users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "The user starts with speaking a command. The command can describe either an action (e.g., \"check the weather\") or an automation rule with a condition (e.g., \"If it is hot, order a cup of Iced Cappuccino\"). Suppose that the agent has no prior knowledge in any of the involved task domains, then it will recursively resolve the unknown concepts and procedures used in the command. Although it does not know these concepts, it can recognize the structure of the command (e.g., conditional), and parse each part of the command into the corresponding typed resolve functions, as shown in Figure 1 . SUG-ILITE uses a grammar-based executable semantic parsing architecture (Liang, 2016) ; therefore its conversation flow operates on the recursive execution of the resolve functions. Since the resolve functions are typed, the agent can generate prompts based on their types (e.g., \"How do I tell whether. . . \" for resolveBool and \"How do I find out the value for. . . \" for resolveValue).",
"cite_spans": [
{
"start": 666,
"end": 679,
"text": "(Liang, 2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 583,
"end": 591,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
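{
"text": "As a minimal sketch of this mechanism (in Python, with assumed names; resolveBool and resolveValue are from the paper, but the dispatch code below is illustrative rather than SUGILITE's actual implementation), the typed resolve functions can drive prompt selection as follows:\n\nfrom dataclasses import dataclass\n\n@dataclass\nclass Resolve:\n    kind: str  # 'bool', 'value', or 'procedure'\n    name: str  # the unknown concept or procedure, e.g. 'it is hot'\n\n# prompt templates keyed by the type of the unresolved item\nPROMPTS = {\n    'bool': 'How do I tell whether {name}?',\n    'value': 'How do I find out the value for {name}?',\n    'procedure': 'How do I {name}?',\n}\n\ndef prompt_for(r: Resolve) -> str:\n    return PROMPTS[r.kind].format(name=r.name)\n\n# prompt_for(Resolve('bool', 'it is hot')) -> 'How do I tell whether it is hot?'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},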
{
"text": "When the SUGILITE agent reaches the resolve function for a value query or a procedure, it asks the users if they can demonstrate them. The users can then demonstrate how they would normally look up the value, or perform the procedure manually with existing mobile apps on the phone by direct manipulation (Figure 2a ). For any ambiguous demonstrated action, the user verbally explains the intent behind the action through multi-turn conversations with the help from an interaction proxy overlay that guides the user to focus on providing more effective input (see Figure 2bcde , more details in Section 3.2). When the user demonstrates a value query (e.g., finding out the value of the temperature), SUGILITE highlights the GUI elements showing values with the compatible types (see Figure 3 ) to assist the user in finding the appropriate GUI element during the demonstration.",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 315,
"text": "(Figure 2a",
"ref_id": "FIGREF0"
},
{
"start": 564,
"end": 576,
"text": "Figure 2bcde",
"ref_id": "FIGREF0"
},
{
"start": 783,
"end": 791,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "All user-instructed value concepts, Boolean concepts, and procedures automatically get generalized by SUGILITE. The procedures are parameterized so that they can be reused with different parameter values in the future. For example, for Utterance 8 in Figure 1 , the user does not need to demonstrate again since the system can invoke the newlylearned order Starbucks function with a different parameter value (details in Section 3.3). The learned concepts and value queries are also generalized so that the system recognizes the different definitions of concepts like \"hot\" and value queries like \"temperature\" in different contexts (details in Section 3.4).",
"cite_spans": [],
"ref_spans": [
{
"start": 251,
"end": 259,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2"
},
{
"text": "Language Instructions SUGILITE allows users to use demonstrations to teach the agent any unknown procedures and concepts in their natural language instructions. As discussed earlier, a major challenge in ITL is that understanding natural language instructions and carrying out the tasks accordingly require having knowledge in the specific task domains. Our use of programming by demonstration (PBD) is an effective way to address this \"out-of-domain\" problem in both the task-fulfillment and the natural language understanding processes (Li et al., 2018b) . In SUGILITE, procedural actions are represented as sequences of GUI operations, and declarative con- cepts can be represented as references to GUI contents. This approach supports ITL for a wide range of tasks -virtually anything that can be performed with one or more existing third-party mobile apps. Our prior study (Li et al., 2019) also found that the availability of app GUI references can result in end users providing clearer natural language commands. In one study where we asked participants to instruct an intelligent agent to complete everyday computing tasks in natural language, the participants who saw screenshots of relevant apps used fewer unclear, vague, or ambiguous concepts in their verbal instructions than those who did not see the screenshots. Details of the study design and the results can be found in Li et al. (2019) .",
"cite_spans": [
{
"start": 538,
"end": 556,
"text": "(Li et al., 2018b)",
"ref_id": null
},
{
"start": 878,
"end": 895,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 1388,
"end": 1404,
"text": "Li et al. (2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using Demonstrations in Natural",
"sec_num": "3.1"
},
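{
"text": "A minimal sketch of this representation, under assumed names (the dataclasses below are illustrative, not SUGILITE's actual data model): a learned procedure is a sequence of GUI operations, and each operation refers to on-screen content through a data description rather than a hard-coded value.\n\nfrom dataclasses import dataclass, field\nfrom typing import List\n\n@dataclass\nclass GuiOperation:\n    action: str         # e.g. 'click' or 'set_text'\n    target_query: str   # data description of the target GUI element\n    argument: str = ''  # e.g. text typed into a search box\n\n@dataclass\nclass Procedure:\n    name: str\n    steps: List[GuiOperation] = field(default_factory=list)\n\n# ordering one specific drink; later parameterized over the menu item\norder = Procedure('order_Starbucks', [\n    GuiOperation('click', '(join hasText \"Iced Cappuccino\")'),\n])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Demonstrations in Natural Language Instructions",
"sec_num": "3.1"
},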
{
"text": "A major limitation of demonstrations is that they are too literal, and are therefore brittle to any changes in the task context. They encapsulate what the user did, but not why the user did it. When the context changes, the agent often may not know what to do, due to this lack of understanding of the user intents behind their demonstrated actions. This is known as the data description problem in the PBD community, and it is regarded as a key problem in PBD research (Cypher and Halbert, 1993; Lieberman, 2001 ). For example, just looking at the action shown in Figure 2a , one cannot tell if the user meant \"the restaurant with the most reviews\", \"the promoted restaurant\", \"the restaurant with 1,000 bonus points\", \"the cheapest Steakhouse\", or any other criteria, so the system cannot generate a description for this action that accurately reflects the user's intent. A prior approach is to ask for multiple examples from the users (McDaniel and Myers, 1999) , but this is often not feasible due to the user's inability to come up with useful and complete examples, and the amount of examples required for complex tasks (Myers and McDaniel, 2001; Lee et al., 2017 ).",
"cite_spans": [
{
"start": 470,
"end": 496,
"text": "(Cypher and Halbert, 1993;",
"ref_id": "BIBREF10"
},
{
"start": 497,
"end": 512,
"text": "Lieberman, 2001",
"ref_id": "BIBREF33"
},
{
"start": 938,
"end": 964,
"text": "(McDaniel and Myers, 1999)",
"ref_id": "BIBREF37"
},
{
"start": 1126,
"end": 1152,
"text": "(Myers and McDaniel, 2001;",
"ref_id": "BIBREF39"
},
{
"start": 1153,
"end": 1169,
"text": "Lee et al., 2017",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 565,
"end": 574,
"text": "Figure 2a",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Spoken Intent Clarification for Demonstrated Actions",
"sec_num": "3.2"
},
{
"text": "SUGILITE's approach is to ask users to verbally explain their intent for the demonstrated actions using speech. Our formative study (Li et al., 2018a) found that end users were able to provide useful and generalizable explanations for the intents of their demonstrated actions. They also commonly used in their utterances semantic references to GUI contents (e.g., \"the close by restaurant\" for an entry showing the text \"596 ft\") and implicit spatial references (e.g., \"the score for Lakers\" for a text object that contains a numeric value and is right-aligned to another text object \"Lakers\").",
"cite_spans": [
{
"start": 132,
"end": 150,
"text": "(Li et al., 2018a)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spoken Intent Clarification for Demonstrated Actions",
"sec_num": "3.2"
},
{
"text": "Based on these findings, we designed and implemented a multi-modal mixed-initiative intent clarification mechanism for demonstrated actions. As shown in Figure 2 , the user describes their intention in natural language, and iteratively refines the descriptions to remove ambiguity with the help of an interactive overlay (Figure 2d ). The overlay highlights the result from executing the current data description query, and helps the user focus on explaining the key differences between the target object (highlighted in red) and the false positives (highlighted in yellow) of the query.",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 161,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 321,
"end": 331,
"text": "(Figure 2d",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Spoken Intent Clarification for Demonstrated Actions",
"sec_num": "3.2"
},
{
"text": "To ground the user's natural language explanations about GUI elements, SUGILITE represents each GUI screen as a UI snapshot graph. This graph captures the GUI elements' text labels, metainformation (including screen position, type, and package name), and the spatial (e.g., nextTo), hierarchical (e.g., hasChild), and semantic relations (e.g., containsPrice) among them (Figure 4) . A semantic parser translates the user's explanation into a graph query on the UI snapshot graph, executes it on the graph, and verifies if the result matches the correct entity that the user originally demonstrated. The goal of this process is to generate a query that uniquely matches the target UI element and also reflects the user's underlying intent.",
"cite_spans": [],
"ref_spans": [
{
"start": 370,
"end": 380,
"text": "(Figure 4)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Spoken Intent Clarification for Demonstrated Actions",
"sec_num": "3.2"
},
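{
"text": "A toy example of this graph-query process (illustrative Python under assumed relation names, not SUGILITE's actual implementation):\n\n# nodes are GUI elements; edges are (subject, relation, object) triples\nnodes = {'e1': {'text': 'Lakers'}, 'e2': {'text': '102'}}\nedges = [('e2', 'rightAlignedTo', 'e1')]  # a spatial relation\n\ndef related(rel, obj):\n    # all subjects s such that (s, rel, obj) is in the graph\n    return {s for s, r, o in edges if r == rel and o == obj}\n\n# 'the score for Lakers' ~ (join rightAlignedTo <the 'Lakers' element>)\nresult = related('rightAlignedTo', 'e1')\nassert result == {'e2'}  # matches the element the user demonstrated",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spoken Intent Clarification for Demonstrated Actions",
"sec_num": "3.2"
},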
{
"text": "Our semantic parser uses a Floating Parser ar- chitecture (Pasupat and Liang, 2015) and is implemented with the SEMPRE framework (Berant et al., 2013) . We represent UI snapshot graph queries in a simple but flexible LISP-like query language (Sexpressions) that can represent joins, conjunctions, superlatives and their compositions, constructed by the following 7 grammar rules:",
"cite_spans": [
{
"start": 58,
"end": 83,
"text": "(Pasupat and Liang, 2015)",
"ref_id": "BIBREF43"
},
{
"start": 129,
"end": 150,
"text": "(Berant et al., 2013)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spoken Intent Clarification for Demonstrated Actions",
"sec_num": "3.2"
},
{
"text": "E \u2192 e; E \u2192 S; S \u2192 (join r E); S \u2192 (and S S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spoken Intent Clarification for Demonstrated Actions",
"sec_num": "3.2"
},
{
"text": "T \u2192 (ARG MAX r S); T \u2192 (ARG MIN r S); Q \u2192 S | T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spoken Intent Clarification for Demonstrated Actions",
"sec_num": "3.2"
},
{
"text": "where Q is the root non-terminal of the query expression, e is a terminal that represents a UI object entity, r is a terminal that represents a relation, and the rest of the non-terminals are used for intermediate derivations. SUGILITE's language forms a subset of a more general formalism known as Lambda Dependency-based Compositional Semantics , which is a notationally simpler alternative to lambda calculus which is particularly well-suited for expressing queries over knowledge graphs. More technical details and the user evaluation are discussed in Li et al. (2018a) .",
"cite_spans": [
{
"start": 556,
"end": 573,
"text": "Li et al. (2018a)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spoken Intent Clarification for Demonstrated Actions",
"sec_num": "3.2"
},
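{
"text": "A minimal evaluator sketch for this query language (illustrative Python; queries are assumed here to be nested tuples, which is not SUGILITE's actual representation):\n\ndef evaluate(q, graph):\n    if isinstance(q, str):   # E -> e: a UI object entity\n        return {q}\n    op = q[0]\n    if op == 'join':         # S -> (join r E)\n        _, rel, sub = q\n        objs = evaluate(sub, graph)\n        return {s for s, r, o in graph if r == rel and o in objs}\n    if op == 'and':          # S -> (and S S)\n        return evaluate(q[1], graph) & evaluate(q[2], graph)\n    raise ValueError('unknown operator: ' + op)\n\ngraph = [('e2', 'hasText', '596 ft'), ('e2', 'isA', 'entry')]\nq = ('and', ('join', 'hasText', '596 ft'), ('join', 'isA', 'entry'))\n# evaluate(q, graph) -> {'e2'}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spoken Intent Clarification for Demonstrated Actions",
"sec_num": "3.2"
},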
{
"text": "Another way SUGILITE leverages GUI groundings in the natural language instructions is to infer task parameters and their possible values. This allows the agent to learn generalized procedures (e.g., to order any kind of beverage from Starbucks) from a demonstration of a specific instance of the task (e.g., ordering an iced cappuccino). SUGILITE achieves this by comparing the user utterance (e.g., \"order a cup of iced cappuccino\") against the data descriptions of the target UI elements (e.g., click on the menu item that has the text \"Iced Cappuccino\") and the arguments (e.g., put \"Iced Cappuccino\" into a search box) of the demonstrated actions for matches. This process grounds different parts in the utterances to specific actions in the demonstrated procedure. It then analyzes the hierarchical structure of GUI at the time of demonstration, and looks for alternative GUI elements that are in parallel to the original target GUI element structurally. In this way, it extracts the other possible values for the identified parameter, such as the names of all the other drinks displayed in the same menu as \"Iced Cappuccino\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Parameterization through GUI Grounding",
"sec_num": "3.3"
},
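{
"text": "The matching idea can be sketched as follows (illustrative Python with assumed names; the real system matches against data descriptions and GUI structure rather than plain strings):\n\ndef infer_parameter(utterance, action_argument, sibling_elements):\n    # a phrase in the utterance matching a demonstrated action's argument\n    # is treated as a task parameter; structurally parallel GUI elements\n    # (e.g., the other items on the same menu) supply its other values\n    if action_argument.lower() in utterance.lower():\n        return {'parameter': action_argument,\n                'possible_values': sibling_elements}\n    return None\n\nprint(infer_parameter(\n    'order a cup of iced cappuccino',\n    'Iced Cappuccino',\n    ['Latte', 'Iced Cappuccino', 'Caramel Macchiato']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Parameterization through GUI Grounding",
"sec_num": "3.3"
},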
{
"text": "The extracted sets of possible parameter values are also used for disambiguating the procedures to invoke, such as invoking the order Starbucks procedure for the command \"order a cup of latte\", but invoking the order PapaJohns procedure for the command \"order a cheese pizza.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Parameterization through GUI Grounding",
"sec_num": "3.3"
},
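{
"text": "A sketch of this disambiguation step (illustrative Python; the procedure names and value sets are assumptions): the agent picks the procedure whose known parameter values appear in the command.\n\nPROCEDURES = {\n    'order_Starbucks': {'latte', 'iced cappuccino'},\n    'order_PapaJohns': {'cheese pizza', 'pepperoni pizza'},\n}\n\ndef dispatch(command):\n    # return the first procedure whose parameter values match the command\n    for proc, values in PROCEDURES.items():\n        for v in values:\n            if v in command.lower():\n                return proc, v\n    return None\n\n# dispatch('order a cup of latte') -> ('order_Starbucks', 'latte')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Parameterization through GUI Grounding",
"sec_num": "3.3"
},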
{
"text": "In addition to the procedures, SUGILITE also automatically generalizes the learned concepts in order to reuse parts of existing concepts as much as possible to avoid requiring users to perform redundant demonstrations (Li et al., 2019) .",
"cite_spans": [
{
"start": 218,
"end": 235,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizing the Learned Concepts",
"sec_num": "3.4"
},
{
"text": "For Boolean concepts, SUGILITE assumes that the Boolean operation and the types of the arguments stay the same, but the arguments themselves may differ. For example, the concept \"hot\" used in Figure 1 can be generalize to \"arg0 is greater than arg1\" where arg0 and arg1 can be value queries or constant values of the temperature type. This allows the various constant thresholds of temperature, or dynamic queries for temperatures depending on the specific task context. This mechanism allows concepts to be used across different contexts (e.g., determining whether to order iced coffee vs. whether to open the window) task domains (e.g., \"the weather is hot\" vs. \"the oven is hot\").",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 200,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generalizing the Learned Concepts",
"sec_num": "3.4"
},
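{
"text": "As a sketch of the generalized form (illustrative Python; the representation is an assumption): the comparison and the temperature type are kept, while arg0 and arg1 are rebound per task context.\n\ndef concept_hot(arg0, arg1):\n    # arg0, arg1: temperature-typed values or queries resolved at runtime\n    return arg0 > arg1\n\nconcept_hot(85, 70)    # 'the weather is hot': outside temp vs. a threshold\nconcept_hot(450, 400)  # 'the oven is hot': oven temp vs. a different threshold",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalizing the Learned Concepts",
"sec_num": "3.4"
},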
{
"text": "Similarly, named value queries (resolved from resolveValue such as \"temperature\" in Figure 1) can be generalized to have different implementations depending on the task domain. In \"the temperature outside\", query Temperature() can invoke the weather app, whereas in \"the temperature of the oven\" it can invoke the smart oven app to look up the current temperature of the oven (Li et al., 2017b) .",
"cite_spans": [
{
"start": 376,
"end": 394,
"text": "(Li et al., 2017b)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 84,
"end": 93,
"text": "Figure 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Generalizing the Learned Concepts",
"sec_num": "3.4"
},
{
"text": "We conducted several lab user studies to evaluate the usability, efficiency and effectiveness of SUG-ILITE. The results of these study showed that end users without significant programming expertise were able to successfully teach the agent the procedures of performing common tasks (e.g., ordering pizza, requesting Uber, checking sports score, ordering coffee) (Li et al., 2017a) , conditional rules for triggering the tasks (Li et al., 2019) , and concepts relevant to the tasks (e.g., the weather is hot, the traffic is heavy) (Li et al., 2019) using SUG-ILITE. The users were also able to clarify their intents when ambiguities arise (Li et al., 2018a) . Most of our participants found SUGILITE easy and natural to use (Li et al., 2017a (Li et al., , 2018a (Li et al., , 2019 . Efficiency wise, teaching a task usually took the user 3-6 times longer than how long it took to perform the task manually in our studies (Li et al., 2017a) , which indicates that teaching a task using SUG-ILITE can save time for many repetitive tasks.",
"cite_spans": [
{
"start": 363,
"end": 381,
"text": "(Li et al., 2017a)",
"ref_id": "BIBREF23"
},
{
"start": 427,
"end": 444,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 531,
"end": 548,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 639,
"end": 657,
"text": "(Li et al., 2018a)",
"ref_id": "BIBREF25"
},
{
"start": 724,
"end": 741,
"text": "(Li et al., 2017a",
"ref_id": "BIBREF23"
},
{
"start": 742,
"end": 761,
"text": "(Li et al., , 2018a",
"ref_id": "BIBREF25"
},
{
"start": 762,
"end": 780,
"text": "(Li et al., , 2019",
"ref_id": "BIBREF29"
},
{
"start": 921,
"end": 939,
"text": "(Li et al., 2017a)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "5 Discussion and Future Work 5.1 Using GUIs for Language Grounding SUGILITE illustrates the great promise of using GUIs as a resource for grounding and understanding natural language instructions in ITL. The GUIs encapsulate rich knowledge about the flows of the underlying tasks and the properties and relations of relevant entities, so they can be used to bootstrap the domain-specific knowledge needed by ITL systems that rely on natural language instructions for learning. Users are also familiar with GUIs, which makes GUIs the ideal medium to which users can refer during task instructions. A major challenge in natural language instruction is that the users do not know what concepts or knowledge the agent already knows so that they can use it in their instructions (Li et al., 2019) . Therefore, they often introduce additional unknown concepts that are either unnecessary or entirely beyond the capability of the agent (e.g., explaining \"hot\" as \"when I'm sweating\" when teaching the agent to \"open the window when it is hot\"). By using the app GUIs as the medium, the system can effectively constrain the users to refer to things that can be found out from some app GUIs (e.g., \"hot\" can mean \"the temperature is high\"), which mostly overlaps with the \"capability ceiling\" of smartphone-based agents, and allows the users to define new concepts for the agent by referring to app GUIs (Li et al., 2017a (Li et al., , 2019 .",
"cite_spans": [
{
"start": 774,
"end": 791,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 1395,
"end": 1412,
"text": "(Li et al., 2017a",
"ref_id": "BIBREF23"
},
{
"start": 1413,
"end": 1431,
"text": "(Li et al., , 2019",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "The current version of SUGILITE uses a grammarbased executable semantic parser to understand the users' natural language explanations of their intents for the demonstrated actions. While this approach comes with many benefits, such as only requiring a small amount of training data and not relying on any domain knowledge, it has rigid patterns and therefore sometimes encounters problems with the flexible structures and varied expressions in the user utterances. We are looking at alternative approaches for parsing natural language instructions into our domainspecific language (DSL) for representing data description queries and task execution procedures. A promising strategy is to take advantage of the abstract syntax tree (AST) structure in our DSL for constructing a neural parser Yin and Neubig, 2017) , which reduces the amount of training data needed and enforces the wellformedness of the output code.",
"cite_spans": [
{
"start": 790,
"end": 811,
"text": "Yin and Neubig, 2017)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "More Robust Natural Language Understanding",
"sec_num": "5.2"
},
{
"text": "The current model also only uses the semantic information from the local user instructions and their corresponding app GUIs. Another promising approach to enable more robust natural language understanding is to leverage the pre-trained generalpurpose language models (e.g., BERT (Devlin et al., 2018) ) to encode the user instructions and the information extracted from app GUIs.",
"cite_spans": [
{
"start": 279,
"end": 300,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "More Robust Natural Language Understanding",
"sec_num": "5.2"
},
{
"text": "An interesting future direction is to better extract semantics from app GUIs so that the user can focus on high-level task specifications and personal preferences without dealing with low-level mundane details (e.g., \"buy 2 burgers\" means setting the value of the textbox below the text \"quantity\" and next to the text \"Burger\" to be \"2\"). Some works have made early progress in this domain (Liu et al., 2018b; Deka et al., 2016; thanks to the availability of large datasets of GUIs like RICO (Deka et al., 2017) . Recent reinforcement learning-based approaches and semantic parsing techniques have also shown promising results in learning models for navigating through GUIs for user-specified task objectives (Liu et al., 2018a; Pasupat et al., 2018) . For ITL, an interesting future challenge is to combine these user-independent domain-agnostic machine-learned models with the user's personalized instructions for a specific task. This will likely require a new kind of mixedinitiative instruction (Horvitz, 1999) where the agent is more proactive in guiding the user and takes more initiative in the dialog. This could be supported by improved background knowledge and task models, and more flexible dialog frameworks that can handle the continuous refinement and uncertainty inherent in natural language interaction, and the variations in user goals. Collecting and aggregating personal task instructions across many users also introduce important concerns on user privacy, as discussed in .",
"cite_spans": [
{
"start": 391,
"end": 410,
"text": "(Liu et al., 2018b;",
"ref_id": "BIBREF35"
},
{
"start": 411,
"end": 429,
"text": "Deka et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 493,
"end": 512,
"text": "(Deka et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 710,
"end": 729,
"text": "(Liu et al., 2018a;",
"ref_id": "BIBREF34"
},
{
"start": 730,
"end": 751,
"text": "Pasupat et al., 2018)",
"ref_id": "BIBREF42"
},
{
"start": 1001,
"end": 1016,
"text": "(Horvitz, 1999)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Task Semantics from GUIs",
"sec_num": "5.3"
},
{
"text": "Conversational Learning SUGILITE combines speech and direct manipulation to enable a \"speak and point\" interaction style, which has been studied since early interactive systems like Put-That-There (Bolt, 1980) . As described in Section 3.2, a key pattern used in SUGILITE's multi-modal interface is mutual disambiguation (Oviatt, 1999) where it utilizes inputs in complementary modalities to infer robust and generalizable scripts that can accurately represent user intentions. We are currently exploring other ways of using multi-modal interactions to supplement natural language instructions in ITL. A promising direction is to use GUI references to help with repairing conversational breakdowns (Beneteau et al., 2019; Ashktorab et al., 2019; caused by incorrect semantic parsing, intent classification, or entity recognition. Since GUIs encapsulate rich semantic information about the users' intents, the task flows, and the task constraints, we can potentially ask the users to point to the relevant GUI screens as a part of the error handling process, explaining the errors with references to the GUIs, and helping the system recover from the breakdowns.",
"cite_spans": [
{
"start": 197,
"end": 209,
"text": "(Bolt, 1980)",
"ref_id": "BIBREF6"
},
{
"start": 321,
"end": 335,
"text": "(Oviatt, 1999)",
"ref_id": "BIBREF41"
},
{
"start": 698,
"end": 721,
"text": "(Beneteau et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 722,
"end": 745,
"text": "Ashktorab et al., 2019;",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Modal Interactions in",
"sec_num": "5.4"
},
{
"text": "We described SUGILITE, a task automation agent that can learn new tasks and relevant concepts interactively from users through their GUI-grounded natural language instructions and demonstrations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This system provides capabilities such as intent clarification, task parameterization, and concept generalization. SUGILITE shows the promise of using app GUIs for grounding natural language instructions, and the effectiveness of resolving unknown concepts, ambiguities, and vagueness in natural language instructions using a mixed-initiative multi-modal approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Sugilite is a gemstone, and here stands for Smartphone Users Generating Intelligent Likeable Interfaces Through Examples.2 A demo video is available at https://www.youtube.com/ watch?v=tdHEk-GeaqE 3 https://github.com/tobyli/Sugilite development",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported in part by Verizon and Oath through the InMind project, a J.P. Morgan Faculty Research Award, NSF grant IIS-1814472, and AFOSR grant FA95501710218. Any opinions, findings or recommendations expressed here are those of the authors and do not necessarily reflect views of the sponsors. We thank Amos Azaria, Igor Labutov, Xiaohan Nancy Li, Xiaoyi Zhang, Jingya Chen, and Marissa Radensky for their contributions to the development of this system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "PLOW: A Collaborative Task Learning Agent",
"authors": [
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Ferguson",
"suffix": ""
},
{
"first": "Lucian",
"middle": [],
"last": "Galescu",
"suffix": ""
},
{
"first": "Hyuckchul",
"middle": [],
"last": "Jung",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Swift",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Taysom",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 22nd National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1514--1519",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Allen, Nathanael Chambers, George Ferguson, Lucian Galescu, Hyuckchul Jung, Mary Swift, and William Taysom. 2007. PLOW: A Collaborative Task Learning Agent. In Proceedings of the 22nd National Conference on Artificial Intelligence -Vol- ume 2, AAAI'07, pages 1514-1519, Vancouver, British Columbia, Canada. AAAI Press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Survey of Robot Learning from Demonstration",
"authors": [
{
"first": "Brenna",
"middle": [
"D"
],
"last": "Argall",
"suffix": ""
},
{
"first": "Sonia",
"middle": [],
"last": "Chernova",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Veloso",
"suffix": ""
},
{
"first": "Brett",
"middle": [],
"last": "Browning",
"suffix": ""
}
],
"year": 2009,
"venue": "Robot. Auton. Syst",
"volume": "57",
"issue": "5",
"pages": "469--483",
"other_ids": {
"DOI": [
"10.1016/j.robot.2008.10.024"
]
},
"num": null,
"urls": [],
"raw_text": "Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. 2009. A Survey of Robot Learning from Demonstration. Robot. Auton. Syst., 57(5):469-483.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Resilient chatbots: Repair strategy preferences for conversational breakdowns",
"authors": [
{
"first": "Zahra",
"middle": [],
"last": "Ashktorab",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Justin",
"middle": [
"D"
],
"last": "Weisz",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zahra Ashktorab, Mohit Jain, Q Vera Liao, and Justin D Weisz. 2019. Resilient chatbots: Repair strategy preferences for conversational breakdowns. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, page 254. ACM.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Instructable Intelligent Personal Agent",
"authors": [
{
"first": "Amos",
"middle": [],
"last": "Azaria",
"suffix": ""
},
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. The 30th AAAI Conference on Artificial Intelligence (AAAI)",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amos Azaria, Jayant Krishnamurthy, and Tom M. Mitchell. 2016. Instructable Intelligent Personal Agent. In Proc. The 30th AAAI Conference on Ar- tificial Intelligence (AAAI), volume 4.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Communication breakdowns between families and alexa",
"authors": [
{
"first": "Erin",
"middle": [],
"last": "Beneteau",
"suffix": ""
},
{
"first": "Olivia",
"middle": [
"K"
],
"last": "Richards",
"suffix": ""
},
{
"first": "Mingrui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Julie",
"middle": [
"A"
],
"last": "Kientz",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Yip",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Hiniker",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erin Beneteau, Olivia K Richards, Mingrui Zhang, Julie A Kientz, Jason Yip, and Alexis Hiniker. 2019. Communication breakdowns between families and alexa. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1- 13.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Semantic parsing on freebase from question-answer pairs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Frostig",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1533--1544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1533-1544.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Put-that-there\": Voice and Gesture at the Graphics Interface",
"authors": [
{
"first": "Richard",
"middle": [
"A"
],
"last": "Bolt",
"suffix": ""
}
],
"year": 1980,
"venue": "Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '80",
"volume": "",
"issue": "",
"pages": "262--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard A. Bolt. 1980. \"Put-that-there\": Voice and Gesture at the Graphics Interface. In Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '80, pages 262-270, New York, NY, USA. ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Almond: The architecture of an open, crowdsourced, privacy-preserving, programmable virtual assistant",
"authors": [
{
"first": "Giovanni",
"middle": [],
"last": "Campagna",
"suffix": ""
},
{
"first": "Rakesh",
"middle": [],
"last": "Ramesh",
"suffix": ""
},
{
"first": "Silei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Fischer",
"suffix": ""
},
{
"first": "Monica",
"middle": [
"S"
],
"last": "Lam",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "341--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giovanni Campagna, Rakesh Ramesh, Silei Xu, Michael Fischer, and Monica S. Lam. 2017. Al- mond: The architecture of an open, crowdsourced, privacy-preserving, programmable virtual assistant. In Proceedings of the 26th International Conference on World Wide Web, pages 341-350.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Language to action: Towards interactive task learning with physical agents",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Joyce",
"suffix": ""
},
{
"first": "Qiaozi",
"middle": [],
"last": "Chai",
"suffix": ""
},
{
"first": "Lanbo",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Shaohua",
"middle": [],
"last": "She",
"suffix": ""
},
{
"first": "Sari",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Guangyue",
"middle": [],
"last": "Saba-Sadiya",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2018,
"venue": "In IJCAI",
"volume": "",
"issue": "",
"pages": "2--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joyce Y Chai, Qiaozi Gao, Lanbo She, Shaohua Yang, Sari Saba-Sadiya, and Guangyue Xu. 2018. Lan- guage to action: Towards interactive task learning with physical agents. In IJCAI, pages 2-9.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Unblind your apps: Predicting naturallanguage labels for mobile gui components by deep learning",
"authors": [
{
"first": "Jieshan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chunyang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhenchang",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Xiwei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Liming",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Guoqiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jinshui",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 42nd International Conference on Software Engineering, ICSE '20",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieshan Chen, Chunyang Chen, Zhenchang Xing, Xi- wei Xu, Liming Zhu, Guoqiang Li, and Jinshui Wang. 2020. Unblind your apps: Predicting natural- language labels for mobile gui components by deep learning. In Proceedings of the 42nd International Conference on Software Engineering, ICSE '20.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Watch what I do: programming by demonstration",
"authors": [
{
"first": "Allen",
"middle": [],
"last": "Cypher",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"Conrad"
],
"last": "Halbert",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allen Cypher and Daniel Conrad Halbert. 1993. Watch what I do: programming by demonstration. MIT press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Rico: A Mobile App Dataset for Building Data-Driven Design Applications",
"authors": [
{
"first": "Biplab",
"middle": [],
"last": "Deka",
"suffix": ""
},
{
"first": "Zifeng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chad",
"middle": [],
"last": "Franzen",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Hibschman",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Afergan",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Nichols",
"suffix": ""
},
{
"first": "Ranjitha",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, UIST '17",
"volume": "",
"issue": "",
"pages": "845--854",
"other_ids": {
"DOI": [
"10.1145/3126594.3126651"
]
},
"num": null,
"urls": [],
"raw_text": "Biplab Deka, Zifeng Huang, Chad Franzen, Joshua Hi- bschman, Daniel Afergan, Yang Li, Jeffrey Nichols, and Ranjitha Kumar. 2017. Rico: A Mobile App Dataset for Building Data-Driven Design Applica- tions. In Proceedings of the 30th Annual ACM Sym- posium on User Interface Software and Technology, UIST '17, pages 845-854, New York, NY, USA. ACM.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "ERICA: Interaction Mining Mobile Apps",
"authors": [
{
"first": "Biplab",
"middle": [],
"last": "Deka",
"suffix": ""
},
{
"first": "Zifeng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ranjitha",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 29th Annual Symposium on User Interface Software and Technology, UIST '16",
"volume": "",
"issue": "",
"pages": "767--776",
"other_ids": {
"DOI": [
"10.1145/2984511.2984581"
]
},
"num": null,
"urls": [],
"raw_text": "Biplab Deka, Zifeng Huang, and Ranjitha Kumar. 2016. ERICA: Interaction Mining Mobile Apps. In Pro- ceedings of the 29th Annual Symposium on User In- terface Software and Technology, UIST '16, pages 767-776, New York, NY, USA. ACM.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Nlyze: Interactive programming by natural language for spreadsheet data analysis and manipulation",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Gulwani",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Marron",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 ACM SIGMOD international conference on Management of data",
"volume": "",
"issue": "",
"pages": "803--814",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumit Gulwani and Mark Marron. 2014. Nlyze: Inter- active programming by natural language for spread- sheet data analysis and manipulation. In Proceed- ings of the 2014 ACM SIGMOD international con- ference on Management of data, pages 803-814.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Principles of mixed-initiative user interfaces",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Horvitz",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the SIGCHI conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "159--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Horvitz. 1999. Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI con- ference on Human Factors in Computing Systems, pages 159-166. ACM.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning hierarchical symbolic representations to support interactive task learning and knowledge transfer",
"authors": [
{
"first": "James",
"middle": [
"R"
],
"last": "Kirk",
"suffix": ""
},
{
"first": "John",
"middle": [
"E"
],
"last": "Laird",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19",
"volume": "",
"issue": "",
"pages": "6095--6102",
"other_ids": {
"DOI": [
"10.24963/ijcai.2019/844"
]
},
"num": null,
"urls": [],
"raw_text": "James R. Kirk and John E. Laird. 2019. Learn- ing hierarchical symbolic representations to sup- port interactive task learning and knowledge transfer. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI- 19, pages 6095-6102. International Joint Confer- ences on Artificial Intelligence Organization.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The State of the Art in End-user Software Engineering",
"authors": [
{
"first": "Amy",
"middle": [
"J"
],
"last": "Ko",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Abraham",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Beckwith",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Blackwell",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Burnett",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Erwig",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Scaffidi",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Lawrance",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Lieberman",
"suffix": ""
},
{
"first": "Brad",
"middle": [],
"last": "Myers",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Beth"
],
"last": "Rosson",
"suffix": ""
},
{
"first": "Gregg",
"middle": [],
"last": "Rothermel",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Shaw",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Wiedenbeck",
"suffix": ""
}
],
"year": 2011,
"venue": "ACM Comput. Surv",
"volume": "43",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/1922649.1922658"
]
},
"num": null,
"urls": [],
"raw_text": "Amy J. Ko, Robin Abraham, Laura Beckwith, Alan Blackwell, Margaret Burnett, Martin Erwig, Chris Scaffidi, Joseph Lawrance, Henry Lieberman, Brad Myers, Mary Beth Rosson, Gregg Rothermel, Mary Shaw, and Susan Wiedenbeck. 2011. The State of the Art in End-user Software Engineering. ACM Comput. Surv., 43(3):21:1-21:44.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Lia: A natural language programmable personal assistant",
"authors": [
{
"first": "Igor",
"middle": [],
"last": "Labutov",
"suffix": ""
},
{
"first": "Shashank",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "145--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Igor Labutov, Shashank Srivastava, and Tom Mitchell. 2018. Lia: A natural language programmable per- sonal assistant. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 145- 150. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Interactive task learning",
"authors": [
{
"first": "E",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Laird",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Gluck",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kenneth",
"suffix": ""
},
{
"first": "Odest Chadwicke",
"middle": [],
"last": "Forbus",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Jenkins",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Lebiere",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Salvucci",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Scheutz",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Thomaz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Trafton",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE Intelligent Systems",
"volume": "32",
"issue": "4",
"pages": "6--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John E Laird, Kevin Gluck, John Anderson, Ken- neth D Forbus, Odest Chadwicke Jenkins, Christian Lebiere, Dario Salvucci, Matthias Scheutz, Andrea Thomaz, Greg Trafton, et al. 2017. Interactive task learning. IEEE Intelligent Systems, 32(4):6-21.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "SmartSynth: Synthesizing Smartphone Automation Scripts from Natural Language",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Vu Le",
"suffix": ""
},
{
"first": "Zhendong",
"middle": [],
"last": "Gulwani",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceeding of the 11th Annual International Conference on Mobile Systems, Applications, and Services, MobiSys '13",
"volume": "",
"issue": "",
"pages": "193--206",
"other_ids": {
"DOI": [
"10.1145/2462456.2464443"
]
},
"num": null,
"urls": [],
"raw_text": "Vu Le, Sumit Gulwani, and Zhendong Su. 2013. SmartSynth: Synthesizing Smartphone Automation Scripts from Natural Language. In Proceeding of the 11th Annual International Conference on Mobile Systems, Applications, and Services, MobiSys '13, pages 193-206, New York, NY, USA. ACM.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Towards Understanding Human Mistakes of Programming by Example: An Online User Study",
"authors": [
{
"first": "Tak Yeon",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Casey",
"middle": [],
"last": "Dugan",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"B"
],
"last": "Bederson",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 22nd International Conference on Intelligent User Interfaces, IUI '17",
"volume": "",
"issue": "",
"pages": "257--261",
"other_ids": {
"DOI": [
"10.1145/3025171.3025203"
]
},
"num": null,
"urls": [],
"raw_text": "Tak Yeon Lee, Casey Dugan, and Benjamin B. Bed- erson. 2017. Towards Understanding Human Mis- takes of Programming by Example: An Online User Study. In Proceedings of the 22nd International Conference on Intelligent User Interfaces, IUI '17, pages 257-261, New York, NY, USA. ACM.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "CoScripter: Automating & Sharing How-to Knowledge in the Enterprise",
"authors": [
{
"first": "Gilly",
"middle": [],
"last": "Leshed",
"suffix": ""
},
{
"first": "Eben",
"middle": [
"M"
],
"last": "Haber",
"suffix": ""
},
{
"first": "Tara",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "Tessa",
"middle": [],
"last": "Lau",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08",
"volume": "",
"issue": "",
"pages": "1719--1728",
"other_ids": {
"DOI": [
"10.1145/1357054.1357323"
]
},
"num": null,
"urls": [],
"raw_text": "Gilly Leshed, Eben M. Haber, Tara Matthews, and Tessa Lau. 2008. CoScripter: Automating & Shar- ing How-to Knowledge in the Enterprise. In Pro- ceedings of the SIGCHI Conference on Human Fac- tors in Computing Systems, CHI '08, pages 1719- 1728, New York, NY, USA. ACM.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "SUGILITE: Creating Multimodal Smartphone Automation by Demonstration",
"authors": [
{
"first": "Toby Jia-Jun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Amos",
"middle": [],
"last": "Azaria",
"suffix": ""
},
{
"first": "Brad",
"middle": [
"A"
],
"last": "Myers",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17",
"volume": "",
"issue": "",
"pages": "6038--6049",
"other_ids": {
"DOI": [
"10.1145/3025453.3025483"
]
},
"num": null,
"urls": [],
"raw_text": "Toby Jia-Jun Li, Amos Azaria, and Brad A. Myers. 2017a. SUGILITE: Creating Multimodal Smart- phone Automation by Demonstration. In Proceed- ings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, pages 6038-6049, New York, NY, USA. ACM.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Privacypreserving script sharing in gui-based programmingby-demonstration systems",
"authors": [
{
"first": "Toby Jia-Jun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jingya",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Brandon",
"middle": [],
"last": "Canfield",
"suffix": ""
},
{
"first": "Brad",
"middle": [
"A"
],
"last": "Myers",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. ACM Hum.-Comput. Interact",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3392869"
]
},
"num": null,
"urls": [],
"raw_text": "Toby Jia-Jun Li, Jingya Chen, Brandon Can- field, and Brad A. Myers. 2020. Privacy- preserving script sharing in gui-based programming- by-demonstration systems. Proc. ACM Hum.- Comput. Interact., 4(CSCW).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "APPINITE: A Multi-Modal Interface for Specifying Data Descriptions in Programming by Demonstration Using Verbal Instructions",
"authors": [
{
"first": "Toby Jia-Jun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Labutov",
"suffix": ""
},
{
"first": "Xiaohan",
"middle": [
"Nancy"
],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaoyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wenze",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Brad",
"middle": [
"A"
],
"last": "Myers",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 IEEE Symposium on Visual Languages and Human-Centric Computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toby Jia-Jun Li, Igor Labutov, Xiaohan Nancy Li, Xiaoyi Zhang, Wenze Shi, Tom M. Mitchell, and Brad A. Myers. 2018a. APPINITE: A Multi-Modal Interface for Specifying Data Descriptions in Pro- gramming by Demonstration Using Verbal Instruc- tions. In Proceedings of the 2018 IEEE Symposium on Visual Languages and Human-Centric Comput- ing (VL/HCC 2018).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Teaching Agents When They Fail: End User Development in Goal-oriented Conversational Agents",
"authors": [
{
"first": "",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2018,
"venue": "Studies in Conversational UX Design",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell. 2018b. Teaching Agents When They Fail: End User Development in Goal-oriented Conversa- tional Agents. In Studies in Conversational UX De- sign. Springer.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Programming IoT Devices by Demonstration Using Mobile Apps",
"authors": [
{
"first": "Toby Jia-Jun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuanchun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Fanglin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Brad",
"middle": [
"A"
],
"last": "Myers",
"suffix": ""
}
],
"year": 2017,
"venue": "End-User Development",
"volume": "",
"issue": "",
"pages": "3--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toby Jia-Jun Li, Yuanchun Li, Fanglin Chen, and Brad A. Myers. 2017b. Programming IoT Devices by Demonstration Using Mobile Apps. In End-User Development, pages 3-17, Cham. Springer Interna- tional Publishing.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "PUMICE: A Multi-Modal Agent that Learns Concepts and Conditionals from Natural Language and Demonstrations",
"authors": [
{
"first": "Toby Jia-Jun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Marissa",
"middle": [],
"last": "Radensky",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Kirielle",
"middle": [],
"last": "Singarajah",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Brad",
"middle": [
"A"
],
"last": "Myers",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST 2019), UIST 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"http://doi.acm.org/10.1145/3332165.3347899"
]
},
"num": null,
"urls": [],
"raw_text": "Toby Jia-Jun Li, Marissa Radensky, Justin Jia, Kirielle Singarajah, Tom M. Mitchell, and Brad A. Myers. 2019. PUMICE: A Multi-Modal Agent that Learns Concepts and Conditionals from Natural Language and Demonstrations. In Proceedings of the 32nd An- nual ACM Symposium on User Interface Software and Technology (UIST 2019), UIST 2019. ACM.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "KITE: Building conversational bots from mobile apps",
"authors": [
{
"first": "Toby Jia-Jun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Oriana",
"middle": [],
"last": "Riva",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 16th ACM International Conference on Mobile Systems, Applications, and Services",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toby Jia-Jun Li and Oriana Riva. 2018. KITE: Build- ing conversational bots from mobile apps. In Pro- ceedings of the 16th ACM International Conference on Mobile Systems, Applications, and Services (Mo- biSys 2018). ACM.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning executable semantic parsers for natural language understanding",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Communications of the ACM",
"volume": "59",
"issue": "9",
"pages": "68--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang. 2016. Learning executable semantic parsers for natural language understanding. Commu- nications of the ACM, 59(9):68-76.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Learning dependency-based compositional semantics",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "39",
"issue": "2",
"pages": "389--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Michael I. Jordan, and Dan Klein. 2013. Learning dependency-based compositional seman- tics. Computational Linguistics, 39(2):389-446.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Your wish is my command: Programming by example",
"authors": [
{
"first": "Henry",
"middle": [],
"last": "Lieberman",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henry Lieberman. 2001. Your wish is my command: Programming by example. Morgan Kaufmann.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Reinforcement learning on web interfaces using workflow-guided exploration",
"authors": [
{
"first": "Evan",
"middle": [
"Zheran"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Guu",
"suffix": ""
},
{
"first": "Panupong",
"middle": [],
"last": "Pasupat",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, and Percy Liang. 2018a. Reinforcement learning on web interfaces using workflow-guided exploration. In International Conference on Learning Representa- tions.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Learning design semantics for mobile apps",
"authors": [
{
"first": "Thomas",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Craft",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Situ",
"suffix": ""
},
{
"first": "Ersin",
"middle": [],
"last": "Yumer",
"suffix": ""
},
{
"first": "Radomir",
"middle": [],
"last": "Mech",
"suffix": ""
},
{
"first": "Ranjitha",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2018,
"venue": "The 31st",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas F Liu, Mark Craft, Jason Situ, Ersin Yumer, Radomir Mech, and Ranjitha Kumar. 2018b. Learn- ing design semantics for mobile apps. In The 31st",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Annual ACM Symposium on User Interface Software and Technology",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "569--579",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual ACM Symposium on User Interface Software and Technology, pages 569-579. ACM.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Getting More out of Programming-by-demonstration",
"authors": [
{
"first": "Richard",
"middle": [
"G"
],
"last": "McDaniel",
"suffix": ""
},
{
"first": "Brad",
"middle": [
"A"
],
"last": "Myers",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '99",
"volume": "",
"issue": "",
"pages": "442--449",
"other_ids": {
"DOI": [
"10.1145/302979.303127"
]
},
"num": null,
"urls": [],
"raw_text": "Richard G. McDaniel and Brad A. Myers. 1999. Get- ting More out of Programming-by-demonstration. In Proceedings of the SIGCHI Conference on Hu- man Factors in Computing Systems, CHI '99, pages 442-449, New York, NY, USA. ACM.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Making End User Development More Natural",
"authors": [
{
"first": "Brad",
"middle": [
"A"
],
"last": "Myers",
"suffix": ""
},
{
"first": "Amy",
"middle": [
"J"
],
"last": "Ko",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Scaffidi",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Oney",
"suffix": ""
},
{
"first": "Youngseok",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Kerry",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Beth"
],
"last": "Kery",
"suffix": ""
},
{
"first": "Toby Jia-Jun",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "New Perspectives in End-User Development",
"volume": "",
"issue": "",
"pages": "1--22",
"other_ids": {
"DOI": [
"10.1007/978-3-319-60291-2_1"
]
},
"num": null,
"urls": [],
"raw_text": "Brad A. Myers, Amy J. Ko, Chris Scaffidi, Stephen Oney, YoungSeok Yoon, Kerry Chang, Mary Beth Kery, and Toby Jia-Jun Li. 2017. Making End User Development More Natural. In New Perspectives in End-User Development, pages 1-22. Springer, Cham.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Sometimes you need a little intelligence, sometimes you need a lot. Your Wish is My Command: Programming by Example",
"authors": [
{
"first": "Brad",
"middle": [
"A"
],
"last": "Myers",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Mcdaniel",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "45--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brad A. Myers and Richard McDaniel. 2001. Some- times you need a little intelligence, sometimes you need a lot. Your Wish is My Command: Program- ming by Example. San Francisco, CA: Morgan Kauf- mann Publishers, pages 45-60.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Patterns for how users overcome obstacles in voice user interfaces",
"authors": [
{
"first": "Chelsea",
"middle": [],
"last": "Myers",
"suffix": ""
},
{
"first": "Anushay",
"middle": [],
"last": "Furqan",
"suffix": ""
},
{
"first": "Jessica",
"middle": [],
"last": "Nebolsky",
"suffix": ""
},
{
"first": "Karina",
"middle": [],
"last": "Caro",
"suffix": ""
},
{
"first": "Jichen",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelsea Myers, Anushay Furqan, Jessica Nebolsky, Ka- rina Caro, and Jichen Zhu. 2018. Patterns for how users overcome obstacles in voice user interfaces. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pages 1-7.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Mutual disambiguation of recognition errors in a multimodel architecture",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Oviatt",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the SIGCHI conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "576--583",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Oviatt. 1999. Mutual disambiguation of recog- nition errors in a multimodel architecture. In Pro- ceedings of the SIGCHI conference on Human Fac- tors in Computing Systems, pages 576-583. ACM.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Mapping natural language commands to web elements",
"authors": [
{
"first": "Panupong",
"middle": [],
"last": "Pasupat",
"suffix": ""
},
{
"first": "Tian-Shun",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Guu",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4970--4976",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1540"
]
},
"num": null,
"urls": [],
"raw_text": "Panupong Pasupat, Tian-Shun Jiang, Evan Liu, Kelvin Guu, and Percy Liang. 2018. Mapping natural lan- guage commands to web elements. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4970-4976, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Compositional Semantic Parsing on Semi-Structured Tables",
"authors": [
{
"first": "Panupong",
"middle": [],
"last": "Pasupat",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Panupong Pasupat and Percy Liang. 2015. Composi- tional Semantic Parsing on Semi-Structured Tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing. ArXiv: 1508.00305.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Vasta: a vision and language-assisted smartphone task automation system",
"authors": [
{
"first": "Alborz Rezazadeh",
"middle": [],
"last": "Sereshkeh",
"suffix": ""
},
{
"first": "Gary",
"middle": [],
"last": "Leung",
"suffix": ""
},
{
"first": "Krish",
"middle": [],
"last": "Perumal",
"suffix": ""
},
{
"first": "Caleb",
"middle": [],
"last": "Phillips",
"suffix": ""
},
{
"first": "Minfan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Afsaneh",
"middle": [],
"last": "Fazly",
"suffix": ""
},
{
"first": "Iqbal",
"middle": [],
"last": "Mohomed",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 25th International Conference on Intelligent User Interfaces",
"volume": "",
"issue": "",
"pages": "22--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alborz Rezazadeh Sereshkeh, Gary Leung, Krish Pe- rumal, Caleb Phillips, Minfan Zhang, Afsaneh Fa- zly, and Iqbal Mohomed. 2020. Vasta: a vision and language-assisted smartphone task automation sys- tem. In Proceedings of the 25th International Con- ference on Intelligent User Interfaces, pages 22-32.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Joint concept learning and semantic parsing from natural language explanations",
"authors": [
{
"first": "Shashank",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Labutov",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1527--1536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2017. Joint concept learning and semantic parsing from natural language explanations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1527-1536.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Zero-shot learning of classifiers from natural language quantification",
"authors": [
{
"first": "Shashank",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Labutov",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "306--316",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1029"
]
},
"num": null,
"urls": [],
"raw_text": "Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2018. Zero-shot learning of classifiers from natu- ral language quantification. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 306-316, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Incorporating external knowledge through pre-training for natural language to code generation",
"authors": [
{
"first": "Frank",
"middle": [
"F"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhengbao",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Bogdan",
"middle": [],
"last": "Vasilescu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.09015"
]
},
"num": null,
"urls": [],
"raw_text": "Frank F Xu, Zhengbao Jiang, Pengcheng Yin, Bogdan Vasilescu, and Graham Neubig. 2020. Incorporat- ing external knowledge through pre-training for nat- ural language to code generation. arXiv preprint arXiv:2004.09015.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "A syntactic neural model for general-purpose code generation",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "440--450",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1041"
]
},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 440-450, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The screenshots of SUGILITE's demonstration mechanism and its multi-modal mixed-initiative intent clarification process for the demonstrated actions.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "The user teaches the value concept \"commute time\" by demonstrating querying the value in Google Maps. SUGILITE highlights all the duration values on the Google Maps GUI.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "SUGILITE's instruction parsing and grounding process for intent clarifications illustrated on an example UI snapshot graph constructed from a simplified GUI snippet.",
"type_str": "figure",
"uris": null,
"num": null
}
}
}
}