{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:12:33.703231Z" }, "title": "Learning Adaptive Language Interfaces through Decomposition", "authors": [ { "first": "Siddharth", "middle": [], "last": "Karamcheti", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "skaramcheti@cs.stanford.edu" }, { "first": "Dorsa", "middle": [], "last": "Sadigh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "pliang@cs.stanford.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Our goal is to create an interactive natural language interface that efficiently and reliably learns from users to complete tasks in simulated robotics settings. We introduce a neural semantic parsing system that learns new high-level abstractions through decomposition: users interactively teach the system by breaking down high-level utterances describing novel behavior into low-level steps that it can understand. Unfortunately, existing methods either rely on grammars which parse sentences with limited flexibility, or neural sequence-to-sequence models that do not learn efficiently or reliably from individual examples. Our approach bridges this gap, demonstrating the flexibility of modern neural systems, as well as the one-shot reliable generalization of grammar-based methods. Our crowdsourced interactive experiments suggest that over time, users complete complex tasks more efficiently while using our system by leveraging what they just taught. At the same time, getting users to trust the system enough to be incentivized to teach high-level utterances is still an ongoing challenge. We end with a discussion of some of the obstacles we need to overcome to fully realize the potential of the interactive paradigm.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Our goal is to create an interactive natural language interface that efficiently and reliably learns from users to complete tasks in simulated robotics settings. We introduce a neural semantic parsing system that learns new high-level abstractions through decomposition: users interactively teach the system by breaking down high-level utterances describing novel behavior into low-level steps that it can understand. Unfortunately, existing methods either rely on grammars which parse sentences with limited flexibility, or neural sequence-to-sequence models that do not learn efficiently or reliably from individual examples. Our approach bridges this gap, demonstrating the flexibility of modern neural systems, as well as the one-shot reliable generalization of grammar-based methods. Our crowdsourced interactive experiments suggest that over time, users complete complex tasks more efficiently while using our system by leveraging what they just taught. At the same time, getting users to trust the system enough to be incentivized to teach high-level utterances is still an ongoing challenge. 
We end with a discussion of some of the obstacles we need to overcome to fully realize the potential of the interactive paradigm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "As robots are deployed in collaborative applications like healthcare and household assistance (Scassellati et al., 2012; Knepper et al., 2013), there is a growing need for reliable human-robot communication. One such communication modality that is both user-friendly and versatile is natural language; to this end, we focus on robust natural language interfaces (NLIs) that can map utterances to executable behavior (Tellex et al., 2011; Artzi and Zettlemoyer, 2013; Thomason et al., 2015; Shridhar et al., 2020).", "cite_spans": [ { "start": 94, "end": 120, "text": "(Scassellati et al., 2012;", "ref_id": "BIBREF26" }, { "start": 121, "end": 142, "text": "Knepper et al., 2013)", "ref_id": "BIBREF15" }, { "start": 417, "end": 438, "text": "(Tellex et al., 2011;", "ref_id": "BIBREF29" }, { "start": 439, "end": 467, "text": "Artzi and Zettlemoyer, 2013;", "ref_id": "BIBREF2" }, { "start": 468, "end": 490, "text": "Thomason et al., 2015;", "ref_id": "BIBREF31" }, { "start": 491, "end": 513, "text": "Shridhar et al., 2020)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[Figure 1 graphic: panels labeled Interaction, Teaching, Online Learning, and Historical Interaction Data (Single-User); the user decomposes \"Wash the coffee mug\" into \"Go to the mug and pick it up\" and \"Go to the sink and put it inside\"; users decompose high-level utterances into spans of low-level actions.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Figure 1: In our proposed framework, users interact with a simulated robot to complete tasks. Central to our approach is learning by decomposition: users teach the system to understand novel high-level utterances by breaking them down into utterances that the system can understand and execute. Using these decompositions, we update a semantic parser online, allowing our system to adapt to users as they complete more tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Most existing work on NLIs (and AI systems more broadly) falls into a static train-then-deploy paradigm: models are first trained on large datasets of (language, action) pairs and then deployed, with the hope they will reliably generalize to new utterances. Yet, what happens when such models make mistakes or are faced with types of utterances unseen at training -for example, providing a household robot with a novel utterance like \"wash the coffee mug\"? Such static systems will fail with no way to recover, burdening the user to find alternate utterances to accomplish the task (or give up). Instead, we argue that NLIs need to be dynamic and adaptive, learning interactively from user feedback to index and perform more complicated behaviors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "[Figure 2 example: High-Level Task 1: Clean & Place (Mug, CounterTop). \"Wash the coffee mug\" -> I'm sorry -I don't understand! \"Go to the mug and pick it up\" -> GOTO Mug; PICKUP Mug. \"Go to the sink and put it inside\" -> GOTO Sink; PUT Mug Sink. \"Turn on the faucet\" -> TOGGLE Faucet. \"Turn it off\" -> TOGGLE Faucet. \"Pick up the mug\" -> PICKUP Mug. \"Place it on the counter\" -> I'm sorry -I don't understand! \"Go to the counter\" -> GOTO CounterTop. \"Put the mug on the counter\" -> PUT Mug CounterTop. High-Level Task 2: Clean & Place (Tomato, DiningTable). \"Clean and put the tomato on the table\" -> I'm sorry -I don't understand! \"Wash the tomato\" -> GOTO Tomato; PICKUP Tomato; GOTO Sink; PUT Tomato Sink; TOGGLE Faucet; TOGGLE Faucet. \"Pick up the tomato\" -> PICKUP Tomato. \"Place the tomato on the table\" -> GOTO DiningTable; PUT Tomato DiningTable.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Figure 2: One-shot generalization example: When the system fails to understand an utterance (e.g. \"wash the coffee mug\", \"place it on the counter\"), the user teaches the system by decomposing it into other utterances the system can understand (illustrated by brackets above), which eventually get mapped to low-level actions that are executed. This induced mapping of high-level utterance to low-level actions forms an example that we use to update our semantic parser online. Because our semantic parser is capable of reliable one-shot generalization, users can leverage these decompositions when completing the next task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Teaching", "sec_num": null }, { "text": "In this work, we explore building NLIs for simulated robotics that learn from real humans. Inspired by prior work, we leverage the idea of learning from decomposition to learn new abstractions. Just like how a human interactively teaches a new task to a friend by breaking it down, users interactively teach our system by simplifying utterances that the system cannot understand (e.g. \"wash the coffee mug\") into lower-level utterances that it can (e.g. \"go to the coffee mug and pick it up\", \"go to the sink and put it inside\", etc. -see Figure 1).", "cite_spans": [], "ref_spans": [ { "start": 529, "end": 537, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Teaching", "sec_num": null }, { "text": "To map language to executable behavior, Thomason et al. (2019) built adaptive NLIs that leverage grammar-based parsers that allow reliable one-shot generalization but lack lexical flexibility. For example, a grammar-based system that understands how to \"wash the coffee mug\" may not generalize to \"clean the mug.\" Meanwhile, recent semantic parsers are based primarily on neural sequence-to-sequence models (Dong and Lapata, 2016; Jia and Liang, 2016; Guu et al., 2017). While these models excel from a lexical flexibility perspective, they lack the ability to perform reliable one-shot generalization: it is difficult to train them to generalize from individual examples (Koehn and Knowles, 2017).", "cite_spans": [ { "start": 40, "end": 62, "text": "Thomason et al. 
(2019)", "ref_id": "BIBREF30" }, { "start": 411, "end": 434, "text": "(Dong and Lapata, 2016;", "ref_id": "BIBREF7" }, { "start": 435, "end": 455, "text": "Jia and Liang, 2016;", "ref_id": "BIBREF12" }, { "start": 456, "end": 473, "text": "Guu et al., 2017)", "ref_id": "BIBREF10" }, { "start": 677, "end": 702, "text": "(Koehn and Knowles, 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Teaching", "sec_num": null }, { "text": "In this paper we propose a new interactive NLI that is lexically flexible and can reliably and efficiently perform one-shot generalization. We introduce a novel exemplar-based neural network semantic parser that first abstracts away entities (e.g. \"wash the coffee mug\" \u2192 \"wash the \"), allowing for generalization to previously taught utterances with novel object combinations. Our parser then retrieves the corresponding \"lifted\" utterance and respective program (exemplar) from the training examples based on a learned metric (implemented as a neural network), giving us the lexical flexibility of sequence-to-sequence models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Teaching", "sec_num": null }, { "text": "We demonstrate the efficacy of our learning from decomposition framework through a set of humanin-the-loop experiments where crowdworkers use our NLI to solve a suite of simulated robotics tasks in household environments. Crucially, after completing a task, we update the semantic parser so that users can immediately reuse what they taught. We show that over time, users are able to complete complex tasks (requiring several steps) more efficiently with our exemplar-based method compared to a neural sequence-to-sequence baseline. However, for more straightforward tasks that can be completed in fewer steps, we see similar performance to the baseline. We end with an error analysis and discussion of user trust and incentives in the context of building interactive semantic parsing systems, paving the way for future work that better realizes the potential of the interactive paradigm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Teaching", "sec_num": null }, { "text": "User sessions are broken up into a sequence of episodes (individual tasks), each comprised of two phases: 1) Interaction, where the user provides utterances to the system to accomplish the task, and 2) Teaching, where the user teaches the system to understand novel utterances (Figures 1 and 2 ", "cite_spans": [], "ref_spans": [ { "start": 277, "end": 293, "text": "(Figures 1 and 2", "ref_id": null } ], "eq_spans": [], "section": "Learning from Decomposition", "sec_num": "2" }, { "text": "During interaction, the user attempts to complete a task by producing a sequence of user utterances u 1 , u 2 , . . . with the corresponding system responses p 1 , p 2 , . . . (including the NOT-SURE action) that are executed in the environment (the NOT-SURE action executes to an error message \"I'm sorry -I don't understand!\"). For example, in Figure 1 , the user first says the novel utterance \"wash the coffee mug,\" and the system returns NOT-SURE. The user follows up with \"go to the mug and pick it up,\" which the system maps to the program GOTO Mug; PICKUP Mug. This continues until the user has completed the task. 
{ "text": "The goal of teaching is to convert the sequence of utterance-action pairs (u_i, p_i) into a set of valid training examples for updating the system. To do this, the system presents the user with each u_i where p_i is NOT-SURE, and asks the user to select the corresponding contiguous sequence of actions p_{i+1}, ..., p_j. To facilitate comprehension, we show users (programmatically generated) human-readable representations of each action p -e.g. \"go to the mug\" for a program p = GOTO Mug. For example, the user maps \"wash the coffee mug\" to the sequence GOTO Mug; PICKUP Mug; ...; TOGGLE Faucet (see Figure 1 for the full decomposition). Similarly, the user maps \"place it on the counter\" to GOTO CounterTop; PUT Mug CounterTop. The resulting examples (u_i, p\u0302_i = p_{i+1} ... p_j) are used to update the system (details in Section 3.2.2). We update every time a user completes a task and teaches new examples -this allows users to access what they have taught immediately, during the following task.", "cite_spans": [], "ref_spans": [ { "start": 606, "end": 614, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Teaching", "sec_num": "2.2" }, { "text": "This example illustrates two desiderata for our framework, both of which are key to trust: 1) the ability to identify novel types of utterances (when to output NOT-SURE), as well as 2) the ability to perform one-shot generalization. Knowing when to output NOT-SURE is key to trust during inference: signaling to users what the system knows, so that the simulated robot does not take undesired actions (like dropping your coffee mug on the floor). Performing one-shot generalization is key to trust during learning: users need to rely on the system remembering what has been taught so they can more efficiently complete future tasks. For example, when the user is completing the next task (second half of Figure 1), they should be able to rely on the system understanding \"wash the tomato\" and \"place the tomato on the table,\" even though these refer to different objects than in the taught examples. Section 3 discusses how we enable one-shot generalization in further detail.", "cite_spans": [], "ref_spans": [ { "start": 704, "end": 712, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Desiderata", "sec_num": "2.3" }, { "text": "Sequence-to-sequence models fail. We found modern neural sequence-to-sequence models to be a poor fit in our setting. The biggest problem was their inability to handle novel utterances. Anecdotally, we found that when given the novel utterance \"wash the coffee mug,\" a neural sequence-to-sequence system trained on the seed set of utterances in Table 1 returned the program OPEN Mug, which does not even execute. These problems are exacerbated by the lack of training data; a single user's interaction only creates a handful of new examples, contraindicating the use of data-hungry sequence-to-sequence models (Koehn and Knowles, 2017).", "cite_spans": [ { "start": 610, "end": 635, "text": "(Koehn and Knowles, 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 346, "end": 353, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Desiderata", "sec_num": "2.3" }, 
{ "text": "To enable one-shot generalization, our parser operates over lifted versions of utterances and programs -versions that abstract out explicit references to objects (allowing for automatic generalization to new combinations of objects unseen during training). We now describe our semantic parser, which maps a user utterance u and environment state s to the corresponding program p that best reflects the meaning of the user's utterance. In this work, a state s consists of a set of objects where each object is defined by a fixed set of features (e.g. visibility, toggle status, etc.). We define a program p as a sequence of primitive actions, where each action consists of a template (from Table 1) with arguments corresponding to object types. We conclude with a description of how we retrain our semantic parser using the newly taught examples from the teaching phase (Section 2.2).", "cite_spans": [], "ref_spans": [ { "start": 689, "end": 696, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Desiderata", "sec_num": "2.3" }, { "text": "Our semantic parser (Figure 3) takes an utterance u and first abstracts out entities (Section 3.1.1), creating object references and lifted utterances. We parse these into object types (Section 3.1.1) and lifted programs (Section 3.1.2), which are combined (Section 3.1.3) and fed to a reranker that additionally uses the state s (Section 3.1.4) to identify the program p* to execute.", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 29, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": "3.1" }, { "text": "We define an entity abstractor that maps an utterance u (e.g. \"wash the coffee mug\") to a lifted utterance f (e.g. \"wash the <obj>\") and a list of object references O (e.g. [\"coffee mug\"]). The entity resolver maps each object reference o \u2208 O (e.g. \"coffee mug\") to a grounded object type g (e.g. Mug), resulting in a new list G. To do this, we exploit a set of \"typical names\" (e.g. Mug = {\"coffee mug\", \"mug\", \"cup\"}) that we define a priori, looking up the object type with the given name. However, if there are multiple types that share the given name (e.g. in our dataset, \"table\" is a typical name for DiningTable, CoffeeTable, and SideTable), we use the current state s to disambiguate: we fetch all the matching items in s and return the physically closest one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Abstraction & Resolution", "sec_num": "3.1.1" }, 
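{ "text": "As an illustration of this step, the following minimal Python sketch implements an abstractor and resolver over a hand-written lexicon of typical names and a toy state format (both assumed for the example, not our exact data structures):

```python
TYPICAL_NAMES = {
    \"Mug\": [\"coffee mug\", \"mug\", \"cup\"],
    \"DiningTable\": [\"dining table\", \"table\"],
    \"CoffeeTable\": [\"coffee table\", \"table\"],
}

def abstract_entities(utterance):
    # Replace known object names with a placeholder; match longer names
    # first so \"coffee mug\" wins over \"mug\". (A full implementation would
    # also preserve the left-to-right order of the references.)
    lifted, refs = utterance, []
    names = sorted({n for ns in TYPICAL_NAMES.values() for n in ns}, key=len, reverse=True)
    for name in names:
        while name in lifted:
            lifted = lifted.replace(name, \"<obj>\", 1)
            refs.append(name)
    return lifted, refs

def resolve(ref, state):
    # Map a reference to a grounded type; if several types share the name
    # (e.g. \"table\"), pick the physically closest object in the state.
    candidates = [t for t, names in TYPICAL_NAMES.items() if ref in names]
    present = [t for t in candidates if t in state[\"objects\"]]
    return min(present, key=lambda t: state[\"distance\"][t])

lifted, refs = abstract_entities(\"wash the coffee mug\")
# lifted == \"wash the <obj>\", refs == [\"coffee mug\"]
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Abstraction & Resolution", "sec_num": null }, 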
{ "text": "Central to our approach is the exemplar-based semantic parser that maps a lifted utterance f to a set of lifted programs Q. To do this, we learn a classifier p_\u03b8 that takes two lifted utterances (f, f') and predicts the probability that they have the same lifted program (q = q'). We take Q to be the programs corresponding to the highest-probability f' under p_\u03b8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Parsing", "sec_num": "3.1.2" }, { "text": "Figure 3: Semantic parsing pipeline (shown for \"Wash the coffee mug\", which is abstracted into the lifted utterance \"Wash the <obj>\" with object reference \"coffee mug\"). First, entities are extracted and the corresponding outputs -the lifted utterance and object references -are parsed into programs and grounded object types. These are combined and re-ranked to identify the program to execute.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Parsing", "sec_num": null }, { "text": "Embedding Utterances. We first embed each utterance with an embedding function \u03c6, implemented as a neural network that first uses GloVe (Pennington et al., 2014) to embed the words in f, followed by position encoding similar to that used in Vaswani et al. (2017) and a nonlinear transform. The resulting embeddings are summed and fed into a two-layer MLP to create the utterance embedding \u03c6(f). The classifier p_\u03b8 outputs \u03c3(a \u00b7 cos-sim(\u03c6(f), \u03c6(f')) + b), where cos-sim is cosine similarity, a and b are learned scalars, and \u03c3 is the sigmoid function. We train p_\u03b8 with a binary cross-entropy objective on a training set of (lifted utterance, lifted program) pairs:", "cite_spans": [ { "start": 136, "end": 161, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF25" }, { "start": 240, "end": 261, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Parsing", "sec_num": "3.1.2" }, { "text": "{(f_i, f_j, 1[q_i = q_j]) : i, j \u2208 [n]}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Parsing", "sec_num": "3.1.2" }, 
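{ "text": "The following PyTorch sketch shows the shape of this classifier; it is a simplification in which a learned linear layer stands in for the position encoding, GloVe vectors are assumed to be preloaded into an embedding table, and the initial values of a and b are arbitrary:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UtteranceEmbedder(nn.Module):
    # phi(f): GloVe lookup -> (simplified) position transform -> sum -> 2-layer MLP.
    def __init__(self, glove_weights, hidden=128):
        super().__init__()
        dim = glove_weights.size(1)
        self.embed = nn.Embedding.from_pretrained(glove_weights, freeze=True)
        self.pos = nn.Linear(dim, dim)  # stand-in for position encoding + nonlinearity
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    def forward(self, token_ids):  # token_ids: (seq_len,)
        words = torch.tanh(self.pos(self.embed(token_ids)))
        return self.mlp(words.sum(dim=0))  # (hidden,)

class PairClassifier(nn.Module):
    # p_theta(f, f') = sigmoid(a * cos-sim(phi(f), phi(f')) + b).
    def __init__(self, embedder):
        super().__init__()
        self.embedder = embedder
        self.a = nn.Parameter(torch.tensor(5.0))
        self.b = nn.Parameter(torch.tensor(0.0))

    def forward(self, ids_f, ids_g):
        cos = F.cosine_similarity(self.embedder(ids_f), self.embedder(ids_g), dim=0)
        return torch.sigmoid(self.a * cos + self.b)

# Training pairs are labeled 1 iff the two lifted programs match:
# loss = F.binary_cross_entropy(model(ids_i, ids_j), label)  # label in {0., 1.}
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Parsing", "sec_num": null }, 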
{ "text": "Efficient Inference. We now describe how we use p_\u03b8 for inference given a new lifted utterance f. Unfortunately, na\u00efve application of p_\u03b8 for a new f requires pairwise comparison with every training example. We streamline this by using the structure of our embedding space -as the classifier outputs the scaled cosine similarity between two utterances, we store the embeddings \u03c6(f_i) for each training utterance (f_i, q_i) in our dataset, then use an approximate nearest neighbors algorithm to find the set of utterances that are \"close enough\"; we use the corresponding lifted programs to form the output set Q. We formalize what it means for an utterance to be \"close enough\" in the following paragraph. We note that this procedure is similar to COSINEBERT (Mussman et al., 2020), a model used for active learning on pairwise language tasks.", "cite_spans": [ { "start": 767, "end": 789, "text": "(Mussman et al., 2020)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Parsing", "sec_num": "3.1.2" }, { "text": "Setting a Threshold. One of the desiderata of our system is returning NOT-SURE for utterances it is not confident about. To do this, we set a threshold \u03c4 such that if ||\u03c6(f) \u2212 \u03c6(f')||_2 \u2265 \u03c4 for the nearest training utterance f', we return NOT-SURE. Note that this is equivalent to thresholding the probability output by p_\u03b8, which is monotonic in the cosine distance as defined above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Parsing", "sec_num": "3.1.2" }, { "text": "We set this threshold using a held-out validation set of (utterance, program) pairs (defined based solely on the seed examples in Table 1). For each utterance f in the validation set, we set \u03c4 such that 90% of the programs corresponding to utterances within \u03c4 of \u03c6(f) are correct. Given an utterance f at test time, we return the set of lifted programs Q corresponding to all lifted utterances within \u03c4 of \u03c6(f) (all lifted utterances \"close enough\" to f).", "cite_spans": [], "ref_spans": [ { "start": 130, "end": 137, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Semantic Parsing", "sec_num": "3.1.2" }, 
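{ "text": "A minimal sketch of this retrieval-plus-threshold logic using the annoy library over unit-normalized embeddings (the candidate count and sweep granularity are illustrative choices, not ours):

```python
import numpy as np
from annoy import AnnoyIndex

def build_index(embeddings):
    # Euclidean distance over unit-normalized vectors is monotonic in cosine similarity.
    index = AnnoyIndex(embeddings.shape[1], \"euclidean\")
    for i, e in enumerate(embeddings):
        index.add_item(i, e / np.linalg.norm(e))
    index.build(10)  # number of trees
    return index

def retrieve_programs(index, programs, query_emb, tau, k=20):
    # Return the lifted programs of all training utterances within tau, else NOT-SURE.
    q = query_emb / np.linalg.norm(query_emb)
    ids, dists = index.get_nns_by_vector(q, k, include_distances=True)
    hits = {programs[i] for i, d in zip(ids, dists) if d <= tau}
    return hits if hits else {\"NOT-SURE\"}

def calibrate_tau(index, programs, val_embs, val_programs, precision=0.90, lower_bound=0.15):
    # Pick the largest tau (at least the lower bound beta) whose retrieved
    # programs are >= 90% correct on the held-out validation pairs.
    best = lower_bound
    for tau in np.linspace(lower_bound, 2.0, 50):
        correct = total = 0
        for e, gold in zip(val_embs, val_programs):
            q = e / np.linalg.norm(e)
            ids, dists = index.get_nns_by_vector(q, 20, include_distances=True)
            hits = [i for i, d in zip(ids, dists) if d <= tau]
            correct += sum(programs[i] == gold for i in hits)
            total += len(hits)
        if total > 0 and correct / total >= precision:
            best = tau
    return best
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Parsing", "sec_num": null }, 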
{ "text": "Handling Compositionality. For multi-action utterances (e.g. \"go to the apple and pick it up\") we heuristically split on the keyword \"and,\" resulting in multiple substrings. We parse each substring, obtaining subsets of lifted programs, and take the cross-product of these subsets as the final set Q. We acknowledge that this is not a perfect heuristic; in future work we hope to explore more general extensions that allow us to efficiently interpret utterances that have been composed in this way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Parsing", "sec_num": "3.1.2" }, { "text": "Implementation Details. When identifying the threshold \u03c4, we define a hyperparameter lower bound \u03b2; this lower bound ensures that our semantic parser isn't overly conservative (returning NOT-SURE despite being moderately confident about the set of candidate programs). We find a value \u03b2 = 0.15 works well for our experiments. We use Spotify's annoy library as our approximate nearest neighbors store for fast lookups. We initialize our exemplar-based parser with seed examples (utterances mapped to programs) that cover the set of actions. Table 1 shows these actions, and a subset of the utterances used for training -our full dataset consists of only 44 examples (minor variations of the trigger words in the table). This is similar to prior work that defines a set of canonical utterances (Wang et al., 2015), or a core grammar. We strip stop words (the, up, down, on, off, of, in, to, then, a, an, back, front, out, from, with, inside, outside, below, above, top) from f prior to feeding it to our parser, to make our model more robust to minor lexical variation.", "cite_spans": [ { "start": 793, "end": 812, "text": "(Wang et al., 2015)", "ref_id": "BIBREF35" } ], "ref_spans": [ { "start": 541, "end": 548, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Semantic Parsing", "sec_num": "3.1.2" }, { "text": "We combine each lifted program q \u2208 Q with the grounded object types G to form a set of grounded programs P = {p_1, ..., p_k}. In general, given a lifted program q that takes a sequence of arguments (e.g. PUT <obj1> <obj2>) and a list of object types (e.g. G = [Mug, DiningTable]), we simply substitute the object types into the program, replacing each argument in the lifted program. This results in a final grounded program (e.g. p = PUT Mug DiningTable).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination", "sec_num": "3.1.3" }, 
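{ "text": "Both the \"and\"-splitting heuristic and the substitution step of Section 3.1.3 are small string manipulations; here is a sketch under our placeholder convention, where parse_single is an assumed handle to the single-span parser:

```python
from itertools import product

def parse_compositional(utterance, parse_single):
    # Split on \"and\", parse each span to a set of lifted programs,
    # and take the cross-product of the candidate sets as Q.
    spans = [s.strip() for s in utterance.split(\" and \")]
    candidate_sets = [parse_single(s) for s in spans]
    return {\"; \".join(combo) for combo in product(*candidate_sets)}

def ground(lifted_program, object_types):
    # Substitute grounded object types for the placeholder arguments:
    # ground(\"GOTO <obj2>; PUT <obj1> <obj2>\", [\"Mug\", \"Sink\"])
    #   -> \"GOTO Sink; PUT Mug Sink\"
    for i, obj in enumerate(object_types, start=1):
        lifted_program = lifted_program.replace(f\"<obj{i}>\", obj)
    return lifted_program
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combination", "sec_num": null }, 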
{ "text": "The semantic parser, entity resolver, and combination step produce a set of grounded programs P. The reranker takes the original utterance u, the current state s, and this set of grounded programs P, and chooses a single candidate p* \u2208 P to execute.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reranking", "sec_num": "3.1.4" }, { "text": "As a first step, we discard candidate programs that fail to execute in our simulator: for example, PICKUP Mug is discarded if the robot is already holding an object. Then we use a neural network to produce a score for each p_i \u2208 P. This network separately embeds the utterance, state, and each candidate program, feeding the concatenated embeddings to a two-layer MLP to produce a real-valued score for each p_i. In our work, the state s is retrieved dynamically based on the grounded objects G returned by the entity resolver; the state is made up of hand-coded features corresponding to attributes like visibility, toggle status, and whether an object can be picked up, amongst others. We use a similar scheme as the semantic parser (Section 3.1.2) to encode utterances and candidate programs (embed, position encode, and sum), and a simple linear transformation to encode the bag-of-features representing the state s.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reranking", "sec_num": "3.1.4" }, { "text": "The highest-scoring candidate p* \u2208 P is executed. The reranker is trained via the process described in Section 3.2.3, only after new examples are taught by users during the teaching phase following each task they are asked to complete.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reranking", "sec_num": "3.1.4" }, 
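{ "text": "A sketch of the scoring network (dimensions are placeholders, and the utterance/program encoders are assumed to reuse the embed-position-sum scheme of Section 3.1.2):

```python
import torch
import torch.nn as nn

class Reranker(nn.Module):
    # Score each executable candidate program given the utterance and state.
    def __init__(self, utt_dim, prog_dim, state_dim, hidden=64):
        super().__init__()
        self.state_proj = nn.Linear(state_dim, hidden)  # linear encoder for the bag-of-features
        self.mlp = nn.Sequential(
            nn.Linear(utt_dim + prog_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, utt_emb, prog_embs, state_feats):
        # utt_emb: (utt_dim,), prog_embs: (k, prog_dim), state_feats: (state_dim,)
        s = self.state_proj(state_feats)
        k = prog_embs.size(0)
        joint = torch.cat([utt_emb.expand(k, -1), prog_embs, s.expand(k, -1)], dim=-1)
        return self.mlp(joint).squeeze(-1)  # (k,) real-valued scores

# Training (Section 3.2.3): cross-entropy against the executed program p*:
# loss = nn.functional.cross_entropy(scores.unsqueeze(0), torch.tensor([gold_idx]))
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reranking", "sec_num": null }, 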
{ "text": "In the following subsections, we discuss how to retrain our semantic parser and reranker to achieve the second of our two desiderata: reliable and efficient one-shot generalization. As input to the retraining procedure, we take the dataset D\u0302 = {(\u00fb_i, p\u0302_i)} of newly taught examples from the teaching phase (Section 2.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Retraining from User Feedback", "sec_num": "3.2" }, { "text": "Retraining the exemplar-based semantic parser requires converting our grounded dataset D\u0302 to pairs of lifted utterances and programs. Consider the grounded example (\"Place the tomato on the table\", GOTO DiningTable; PUT Tomato DiningTable); we want to map this to its lifted form (\"Place the <obj1> on the <obj2>\", GOTO <obj2>; PUT <obj1> <obj2>). To do this, we use the entity abstractor and resolver (from Section 3.1.1) to factor out object references.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating Lifted Examples", "sec_num": "3.2.1" }, { "text": "Concretely, using the entity abstractor on the above example leaves us with f\u0302 = \"Place the <obj1> on the <obj2>\", and references \u00d4 = [\"tomato\", \"dining table\"], which the entity resolver maps to \u011c = [Tomato, DiningTable]. We replace any element of \u011c that occurs in the original program with the corresponding placeholder token to create the lifted program (q\u0302 = GOTO <obj2>; PUT <obj1> <obj2>). Applying this procedure to each example in D\u0302 gives us our lifted examples (f\u0302, q\u0302).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Creating Lifted Examples", "sec_num": "3.2.1" }, { "text": "Updating the semantic parser requires optimizing the binary cross-entropy objective from Section 3.1.2 using these lifted examples (f\u0302, q\u0302). As we train our parser from pairs of examples, and there are far more negative examples (pairs with different programs) than positives, we over-sample positive examples so that batches have an equal number of positives and negatives. We train on the entire history of data for the given user, re-creating the nearest neighbors store with embeddings of each training utterance f_i. After this step, we recalibrate the nearest neighbors threshold using the procedure in Section 3.1.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Updating the Semantic Parser", "sec_num": "3.2.2" }, 
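{ "text": "A sketch of this balanced pair construction (the batch size and sampling scheme are illustrative choices):

```python
import random

def make_pair_batches(lifted_examples, batch_size=32):
    # lifted_examples: list of (lifted_utterance, lifted_program) pairs.
    # Positives (same program) are far rarer than negatives, so we
    # over-sample them to keep each batch balanced.
    positives, negatives = [], []
    for i, (f_i, q_i) in enumerate(lifted_examples):
        for f_j, q_j in lifted_examples[i + 1:]:
            pair = (f_i, f_j, 1.0 if q_i == q_j else 0.0)
            (positives if q_i == q_j else negatives).append(pair)
    random.shuffle(negatives)
    half, batches = batch_size // 2, []
    for k in range(0, len(negatives), half):
        neg = negatives[k:k + half]
        pos = random.choices(positives, k=len(neg)) if positives else []
        batch = pos + neg
        random.shuffle(batch)
        batches.append(batch)
    return batches
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Updating the Semantic Parser", "sec_num": null }, 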
{ "text": "After updating the semantic parser, we re-parse each utterance in our dataset to define our retraining dataset of (\u00fb_i, P_i, \u015d_i) tuples. We use the program p* that was actually executed for utterance \u00fb_i in state \u015d_i as the \"gold\" label for the reranker. We train the reranker by maximizing the log-likelihood (minimizing the cross-entropy loss) of this candidate p* amongst the others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Updating the Reranker", "sec_num": "3.2.3" }, { "text": "We evaluate our approach with a set of human-in-the-loop experiments where crowdworkers are tasked with solving a series of simulated robotics tasks. Users interact with our system over 5 episodes (where each episode consists of a single task), teaching our system new examples after successfully completing each one. Each user has their own individual semantic parser and re-ranker (models are not shared across users), with both components updating online after each teaching phase, prior to the start of the next task. The time required to update the two models (including rebuilding the nearest neighbors store) after each teaching phase varies with task complexity, but ranges from 28 to 63 seconds on an Amazon EC2 t2.medium instance (2 CPUs, 4 GiB RAM, no GPU).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Environment and Tasks. Our experiments take place in simulated household environments, with users completing structured, everyday tasks. We create a 2D web client inspired by the AI2-THOR Simulation Environment (Kolve et al., 2017) that removes the 3D rendering and spatial layout, but preserves the object types, attributes, and relations.", "cite_spans": [ { "start": 211, "end": 231, "text": "(Kolve et al., 2017)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "We borrow our tasks from the ALFRED Dataset (Shridhar et al., 2020), which defines 7 task types: 1) Pick and Place, 2) Pick Two Objects and Place, 3) Look at Object in Light, 4) Nested Pick and Place, 5) Pick, Clean, and Place, 6) Pick, Heat, and Place, and 7) Pick, Cool, and Place.", "cite_spans": [ { "start": 44, "end": 67, "text": "(Shridhar et al., 2020)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "Interactive User Studies. We run our interactive user studies via Amazon Mechanical Turk (AMT). Each user is assigned one of the 7 task types, and is asked to complete 5 tasks of that type in a row. We recruited 20 workers per approach. Workers were paid $5, with an average completion time of 23 minutes. We limit our AMT studies to workers with an approval rating \u2265 98%, location = US, and a total number of completed HITs > 5000.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "Baseline. We compare our approach with a neural sequence-to-sequence model with attention, similar to Jia and Liang (2016). To improve reliability, if the user enters an utterance that can be handled by a simple grammar that covers the core utterances from Table 1, we return the resulting program; otherwise, we invoke the sequence-to-sequence model. We find the inclusion of such a grammar necessary to prevent users from getting stuck. We refer to this combination of a neural sequence-to-sequence model with a grammar as \"seq2seq-grammar\", whereas we refer to our system as \"exemplar-based\". We keep the learning by decomposition framework identical for both our system and the sequence-to-sequence system -in other words, we simply swap out our exemplar-based neural parser described in Section 3.1.2 for the seq2seq-grammar model.", "cite_spans": [ { "start": 101, "end": 121, "text": "Jia and Liang (2016)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 257, "end": 264, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "Metrics. We define three evaluation metrics:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "1. Total number of examples taught: the number of unique (utterance, program) pairs that the users teach the system across each teaching phase (as described in Section 2.2). This number starts at 44, the number of unique seed examples from Table 1. Higher is better -this metric indicates whether users are engaging with the system to teach high-level abstractions; a flat curve means that the users have finished teaching and are exploiting the examples they have previously taught.", "cite_spans": [], "ref_spans": [ { "start": 240, "end": 247, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "2. Per-turn program complexity: the number of actions generated per utterance. For example, an utterance that generates the program GOTO Mug; PICKUP Mug; GOTO Sink; PUT Mug Sink has a complexity of 4 -one for each primitive (NOT-SURE counts as 0). We expect a steep upward trend in this metric over time as users teach and reuse progressively more complex examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "3. Normalized episode length: the number of language utterances the user provided, divided by the number of primitive actions required to solve the task. This is the end-to-end metric we seek to optimize -values less than 1 indicate that users are able to tap into what they have taught to complete tasks in fewer steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, 
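{ "text": "For concreteness, the two learned-behavior metrics reduce to the following computations (a small sketch; program strings follow the notation above):

```python
def per_turn_complexity(programs):
    # Mean number of primitive actions per utterance; NOT-SURE counts as 0.
    counts = [0 if p == \"NOT-SURE\" else len(p.split(\";\")) for p in programs]
    return sum(counts) / len(counts)

def normalized_episode_length(num_utterances, num_required_primitives):
    # Values below 1 mean the user needed fewer utterances than primitives.
    return num_utterances / num_required_primitives

per_turn_complexity([\"GOTO Mug; PICKUP Mug; GOTO Sink; PUT Mug Sink\", \"NOT-SURE\"])  # -> 2.0
```", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": null }, 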
{ "text": "Full Results: 20 Users x 7 Tasks. Figure 4 presents graphs of the three metrics over the 5 episodes for each of the 20 users, split across the 7 different tasks. Error bars denote estimated standard deviation across all 20 users. Users of both our exemplar-based system and the sequence-to-sequence baseline teach a moderate number of new examples over time, with an upward trend in per-turn program complexity as they complete more tasks. Finally, we see a decreasing trend in the normalized episode length, with the mean value of our system dipping slightly below a value of 1 after completing 5 instances.", "cite_spans": [], "ref_spans": [ { "start": 34, "end": 42, "text": "Figure 4", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Case Study: Pick, Cool, and Place. Figure 5, on the other hand, presents graphs of the 3 metrics across 3 users for the Pick, Cool, and Place task, one of the more complex tasks in our suite, requiring at least 12 primitive utterances to complete successfully (compared to tasks like Pick and Place that only require 4). Here we see large gaps between our system and the sequence-to-sequence baseline -not only do users of our system teach significantly more high-level examples, but they have a much higher per-turn program complexity after 5 episodes compared to the baseline. Finally, we see that after 5 episodes, the normalized episode length is around 0.2, indicating that users of our system are able to complete this complex task in roughly one-fifth of the steps otherwise required.", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 43, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Figure 5: Results for the Pick, Cool, and Place task across 3 users (subset of the original 20). This task is complex, requiring at least 12 primitives to complete. Notice how the number of defined examples and per-turn program complexity are much higher for our method, and that the normalized episode length is lower.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "Are users re-using high-level abstractions?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "The general results in Figure 4 indicate that while users are teaching the system new abstractions, they are unfortunately not re-using them effectively. The normalized episode length plot shows that both systems converge to 1, indicating that users are defaulting to the primitive actions, rather than trying to teach higher-level examples. One possible explanation is that for simpler tasks (e.g. Pick and Place), it is easier and faster to provide low-level utterances (those in Table 1) than to teach new examples. Defaulting to low-level utterances also explains the lack of a significant gap between the sequence-to-sequence model and our model -given low-level utterances, the grammar does the heavy lifting (in other words, we would not be invoking the sequence-to-sequence model at all). Indeed, across all 20 users for the seq2seq-grammar model, 89.9% of successfully parsed utterances (713 out of 793 total) were handled by the grammar, with only 10.1% handled by the seq2seq model (80 of 793 total).", "cite_spans": [], "ref_spans": [ { "start": 23, "end": 31, "text": "Figure 4", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "However, this trend doesn't hold for more complex tasks. Figure 5 shows that users are teaching and reusing a significant number of examples, completing tasks extremely efficiently. One hypothesis is that task complexity correlates with abstraction reuse (and thus with the ease with which users solve tasks); while the Pick, Cool, and Place results (Figure 5) support this, we would require future experiments with a larger number of users before we can draw meaningful conclusions.", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 70, "text": "Figure 5", "ref_id": null }, { "start": 365, "end": 373, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "We build on a long tradition of learning semantic parsers for mapping language to executable programs (Zelle and Mooney, 1996; Collins, 2005, 2007; Liang et al., 2011), with a focus on using context and learning from interaction.", "cite_spans": [ { "start": 102, "end": 126, "text": "(Zelle and Mooney, 1996;", "ref_id": "BIBREF39" }, { "start": 127, "end": 147, "text": "Collins, 2005, 2007;", "ref_id": null }, { "start": 148, "end": 167, "text": "Liang et al., 2011)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Contextual Semantic Parsing. In many settings, successfully parsing an utterance requires reasoning about both linguistic and environment context. Artzi and Zettlemoyer (2013) developed a model for parsing instructions in the SAIL Navigation dataset (MacMahon et al., 2006; Chen and Mooney, 2011) that leverages the environment context. Later, Long et al. (2016) introduced the SCONE dataset, which requires models that can reason over both types of context. More recently, the large-scale Conversational Text-to-SQL (CoSQL) dataset was introduced, requiring systems to jointly reason over dialogue history and databases to parse user queries to SQL. We handle both linguistic context and environment context in our work by decoupling semantic parsing from grounding; our lifted semantic parser handles linguistic context, while our entity resolver and reranker handle environment context.", "cite_spans": [ { "start": 147, "end": 175, "text": "Artzi and Zettlemoyer (2013)", "ref_id": "BIBREF2" }, { "start": 250, "end": 273, "text": "(MacMahon et al., 2006;", "ref_id": "BIBREF22" }, { "start": 274, "end": 296, "text": "Chen and Mooney, 2011)", "ref_id": "BIBREF5" }, { "start": 344, "end": 362, "text": "Long et al. (2016)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Learning from Interaction. Closest to our work is Voxelurn, and its close predecessor SHRDLURN (Wang et al., 2016).", "cite_spans": [ { "start": 96, "end": 115, "text": "(Wang et al., 2016)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Voxelurn defined an open-ended environment where the goal was to build arbitrary voxel structures using language instructions. We take inspiration from its teaching procedure, where users decompose high-level utterances into low-level actions in the context of a grammar-based parser. Other work uses alternative modes of interaction to teach new behaviors. Srivastava et al. (2017) used natural language explanations to teach new concepts. Relatedly, Labutov et al. (2018) introduced LIA, a programmable personal assistant that learned from user-provided condition-action rules. Furthermore, Weigelt et al. 
(2020) introduced an approach for teaching systems new programmatic functions from language that explicitly reasons about whether utterances contain \"teaching intents,\" a mechanism similar to our procedure for returning NOT-SURE. Once these \"teaching intents\" have been identified, they are parsed into corresponding code blocks that can then be executed. Other work leverages conversations to learn new concepts, generating queries for users to respond to (Artzi and Zettlemoyer, 2011; Thomason et al., 2019). Notably, Thomason et al. (2019) used this conversational structure in a robotics setting similar to ours, but focused on learning new percepts, rather than structural abstractions. Yao et al. (2019) defined a similar conversational system for Text-to-SQL models that decides when intervention is needed, and generates a clarification question accordingly.", "cite_spans": [ { "start": 356, "end": 380, "text": "Srivastava et al. (2017)", "ref_id": "BIBREF28" }, { "start": 450, "end": 471, "text": "Labutov et al. (2018)", "ref_id": "BIBREF18" }, { "start": 591, "end": 612, "text": "Weigelt et al. (2020)", "ref_id": "BIBREF36" }, { "start": 1070, "end": 1099, "text": "(Artzi and Zettlemoyer, 2011;", "ref_id": "BIBREF1" }, { "start": 1100, "end": 1122, "text": "Thomason et al., 2019)", "ref_id": "BIBREF30" }, { "start": 1134, "end": 1156, "text": "Thomason et al. (2019)", "ref_id": "BIBREF30" }, { "start": 1306, "end": 1323, "text": "Yao et al. (2019)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "General Instruction Following. Other work looks at instruction following for robotics tasks outside the semantic parsing paradigm, for example by mapping language directly to sequences of actions (Anderson et al., 2018; Fried et al., 2018; Shridhar et al., 2020), mapping language to representations of reward functions, or learning language-conditioned policies via reinforcement learning (Hermann et al., 2017; Chaplot et al., 2018).", "cite_spans": [ { "start": 196, "end": 219, "text": "(Anderson et al., 2018;", "ref_id": "BIBREF0" }, { "start": 220, "end": 239, "text": "Fried et al., 2018;", "ref_id": "BIBREF8" }, { "start": 240, "end": 262, "text": "Shridhar et al., 2020)", "ref_id": "BIBREF27" }, { "start": 391, "end": 413, "text": "(Hermann et al., 2017;", "ref_id": "BIBREF11" }, { "start": 414, "end": 435, "text": "Chaplot et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "Towards More Complex Settings. Our analysis in Section 4.2 suggests that situating our system in a more complex setting might allow us to truly see the benefits of learning by decomposition. One such setting is Voxelurn, with its open-ended tasks that allow for the definition of multiple different high-level abstractions with compositional richness. In contrast, the tasks in this work are linear, with similar sequences of primitives used to accomplish each high-level task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion & Lessons Learned", "sec_num": "6" }, { "text": "Future work should use this insight to identify environments that are more complex and open-ended, where users are naturally incentivized to teach the system new abstractions that build atop each other, facilitating more complex behaviors. 
In robotics, this might translate to building systems for cooking, perhaps taking inspiration from EPIC Kitchens (Damen et al., 2018), where the set of high-level objectives (general recipes to follow, kitchen behaviors to imitate) is much larger, but where individual subtasks (low-level abstractions like slicing a vegetable or stirring a pot) are very common and generalizable. Other settings might include open-ended building tasks, either in the real world (Knepper et al., 2013; Lee et al., 2019), or in virtual worlds like Minecraft (Johnson et al., 2016; Gray et al., 2019).", "cite_spans": [ { "start": 365, "end": 385, "text": "(Damen et al., 2018)", "ref_id": "BIBREF6" }, { "start": 713, "end": 735, "text": "(Knepper et al., 2013;", "ref_id": "BIBREF15" }, { "start": 736, "end": 753, "text": "Lee et al., 2019)", "ref_id": "BIBREF19" }, { "start": 792, "end": 814, "text": "(Johnson et al., 2016;", "ref_id": "BIBREF13" }, { "start": 815, "end": 833, "text": "Gray et al., 2019)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion & Lessons Learned", "sec_num": "6" }, { "text": "On Trusting Interactive Learning. Users have an implicit expectation that after providing just a single example -say to \"wash the coffee mug\" -the system will know how to \"wash the tomato\" or even \"clean the plate\" immediately. However, existing machine learning is not built with such extreme data efficiency in mind; especially for harder types of generalization (e.g. to \"clean the plate\"), we cannot guarantee learning this in a single step. While in this work we show reliable one-shot generalization across objects in a simplified setting, the real world is much more complex, and different entities merit different behaviors. For example, consider generalizing from \"wash the spoon\" to \"wash the table\"; a system like ours will try to apply the program taught in the first context (going to the sink, placing the object inside, etc.) to the second, leading to complete failure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion & Lessons Learned", "sec_num": "6" }, { "text": "Part of the problem is a lack of transparency; after teaching an example, it is hard for a user to understand what the system knows. This impacts trust, and as a result, when the system makes a mistake interpreting a high-level utterance, users back off to using utterances they are confident the system will understand (mirroring our observed results). This suggests future work in building more reliable methods for one-shot generalization and interpretability, providing users with a clear picture of what the model has learned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion & Lessons Learned", "sec_num": "6" }, { "text": "Semantic Parsing. To address the above desiderata (identifying when to output NOT-SURE, and one-shot generalization), we incorporate two key insights into our approach. To identify when to output NOT-SURE, we look at the distances between a new utterance and the utterances in our training set, similar to the exemplar-based approach of Papernot and McDaniel (2018): if an utterance is \"close enough\" to a training utterance, we return the corresponding program; otherwise we return NOT-SURE. To enable one-shot generalization, we operate over lifted utterances and programs (Section 3.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by NSF Award Grant no. 2006388 and the Future of Life Institute. S.K. 
is supported by the Open Philanthropy Project AI Fellowship. We thank Robin Jia, Michael Xie, John Hewitt, and Chris Potts for helpful feedback during the initial stages of this work. We finally thank the anonymous reviewers for their helpful comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments", "authors": [ { "first": "P", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Q", "middle": [], "last": "Wu", "suffix": "" }, { "first": "D", "middle": [], "last": "Teney", "suffix": "" }, { "first": "J", "middle": [], "last": "Bruce", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "N", "middle": [], "last": "S\u00fcnderhauf", "suffix": "" }, { "first": "I", "middle": [], "last": "Reid", "suffix": "" }, { "first": "S", "middle": [], "last": "Gould", "suffix": "" }, { "first": "A", "middle": [], "last": "Van Den Hengel", "suffix": "" } ], "year": 2018, "venue": "Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Anderson, Q. Wu, D. Teney, J. Bruce, M. Johnson, N. S\u00fcnderhauf, I. Reid, S. Gould, and A. van den Hengel. 2018. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bootstrapping semantic parsers from conversations", "authors": [ { "first": "Y", "middle": [], "last": "Artzi", "suffix": "" }, { "first": "L", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2011, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "421--432", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Artzi and L. Zettlemoyer. 2011. Bootstrapping semantic parsers from conversations. In Empirical Methods in Natural Language Processing (EMNLP), pages 421-432.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Weakly supervised learning of semantic parsers for mapping instructions to actions", "authors": [ { "first": "Y", "middle": [], "last": "Artzi", "suffix": "" }, { "first": "L", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2013, "venue": "Transactions of the Association for Computational Linguistics (TACL)", "volume": "1", "issue": "", "pages": "49--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Artzi and L. Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. 
Transactions of the Association for Computational Linguistics (TACL), 1:49-62.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Accurately and efficiently interpreting human-robot instructions of varying granularities", "authors": [ { "first": "D", "middle": [], "last": "Arumugam", "suffix": "" }, { "first": "S", "middle": [], "last": "Karamcheti", "suffix": "" }, { "first": "N", "middle": [], "last": "Gopalan", "suffix": "" }, { "first": "L", "middle": [ "L S" ], "last": "Wong", "suffix": "" }, { "first": "S", "middle": [], "last": "Tellex", "suffix": "" } ], "year": 2017, "venue": "Robotics: Science and Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Arumugam, S. Karamcheti, N. Gopalan, L. L. S. Wong, and S. Tellex. 2017. Accurately and efficiently interpreting human-robot instructions of varying granularities. In Robotics: Science and Systems (RSS).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Gated-attention architectures for task-oriented language grounding", "authors": [ { "first": "D", "middle": [ "S" ], "last": "Chaplot", "suffix": "" }, { "first": "K", "middle": [ "M" ], "last": "Sathyendra", "suffix": "" }, { "first": "R", "middle": [ "K" ], "last": "Pasumarthi", "suffix": "" }, { "first": "D", "middle": [], "last": "Rajagopal", "suffix": "" }, { "first": "R", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2018, "venue": "Association for the Advancement of Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. S. Chaplot, K. M. Sathyendra, R. K. Pasumarthi, D. Rajagopal, and R. Salakhutdinov. 2018. Gated-attention architectures for task-oriented language grounding. In Association for the Advancement of Artificial Intelligence (AAAI).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learning to interpret natural language navigation instructions from observations", "authors": [ { "first": "D", "middle": [ "L" ], "last": "Chen", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 2011, "venue": "Association for the Advancement of Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "859--865", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. L. Chen and R. J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. 
In Association for the Advancement of Artificial Intelligence (AAAI), pages 859-865.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Scaling egocentric vision: The EPIC-KITCHENS dataset", "authors": [ { "first": "D", "middle": [], "last": "Damen", "suffix": "" }, { "first": "H", "middle": [], "last": "Doughty", "suffix": "" }, { "first": "G", "middle": [ "M" ], "last": "Farinella", "suffix": "" }, { "first": "S", "middle": [], "last": "Fidler", "suffix": "" }, { "first": "A", "middle": [], "last": "Furnari", "suffix": "" }, { "first": "E", "middle": [], "last": "Kazakos", "suffix": "" }, { "first": "D", "middle": [], "last": "Moltisanti", "suffix": "" }, { "first": "J", "middle": [], "last": "Munro", "suffix": "" }, { "first": "T", "middle": [], "last": "Perrett", "suffix": "" }, { "first": "W", "middle": [], "last": "Price", "suffix": "" }, { "first": "M", "middle": [], "last": "Wray", "suffix": "" } ], "year": 2018, "venue": "European Conference on Computer Vision (ECCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Damen, H. Doughty, G. M. Farinella, S. Fidler, A. Furnari, E. Kazakos, D. Moltisanti, J. Munro, T. Perrett, W. Price, and M. Wray. 2018. Scaling egocentric vision: The EPIC-KITCHENS dataset. In European Conference on Computer Vision (ECCV).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Language to logical form with neural attention", "authors": [ { "first": "L", "middle": [], "last": "Dong", "suffix": "" }, { "first": "M", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Dong and M. Lapata. 2016. Language to logical form with neural attention. In Association for Computational Linguistics (ACL).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Speaker-follower models for vision-and-language navigation", "authors": [ { "first": "D", "middle": [], "last": "Fried", "suffix": "" }, { "first": "R", "middle": [], "last": "Hu", "suffix": "" }, { "first": "V", "middle": [], "last": "Cirik", "suffix": "" }, { "first": "A", "middle": [], "last": "Rohrbach", "suffix": "" }, { "first": "J", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "L", "middle": [], "last": "Morency", "suffix": "" }, { "first": "T", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "K", "middle": [], "last": "Saenko", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" }, { "first": "T", "middle": [], "last": "Darrell", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems (NeurIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Fried, R. Hu, V. Cirik, A. Rohrbach, J. Andreas, L. Morency, T. Berg-Kirkpatrick, K. Saenko, D. Klein, and T. Darrell. 2018. Speaker-follower models for vision-and-language navigation. 
In Advances in Neural Information Processing Systems (NeurIPS).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "CraftAssist: A framework for dialogue-enabled interactive agents", "authors": [ { "first": "J", "middle": [], "last": "Gray", "suffix": "" }, { "first": "K", "middle": [], "last": "Srinet", "suffix": "" }, { "first": "Y", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "H", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Z", "middle": [], "last": "Chen", "suffix": "" }, { "first": "D", "middle": [], "last": "Guo", "suffix": "" }, { "first": "S", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "C", "middle": [ "L" ], "last": "Zitnick", "suffix": "" }, { "first": "A", "middle": [], "last": "Szlam", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.08584" ] }, "num": null, "urls": [], "raw_text": "J. Gray, K. Srinet, Y. Jernite, H. Yu, Z. Chen, D. Guo, S. Goyal, C. L. Zitnick, and A. Szlam. 2019. CraftAssist: A framework for dialogue-enabled interactive agents. arXiv preprint arXiv:1907.08584.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "From language to programs: Bridging reinforcement learning and maximum marginal likelihood", "authors": [ { "first": "K", "middle": [], "last": "Guu", "suffix": "" }, { "first": "P", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "E", "middle": [ "Z" ], "last": "Liu", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2017, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Guu, P. Pasupat, E. Z. Liu, and P. Liang. 2017. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. In Association for Computational Linguistics (ACL).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Grounded language learning in a simulated 3D world", "authors": [ { "first": "K", "middle": [ "M" ], "last": "Hermann", "suffix": "" }, { "first": "F", "middle": [], "last": "Hill", "suffix": "" }, { "first": "S", "middle": [], "last": "Green", "suffix": "" }, { "first": "F", "middle": [], "last": "Wang", "suffix": "" }, { "first": "R", "middle": [], "last": "Faulkner", "suffix": "" }, { "first": "H", "middle": [], "last": "Soyer", "suffix": "" }, { "first": "D", "middle": [], "last": "Szepesvari", "suffix": "" }, { "first": "W", "middle": [], "last": "Czarnecki", "suffix": "" }, { "first": "M", "middle": [], "last": "Jaderberg", "suffix": "" }, { "first": "D", "middle": [], "last": "Teplyashin", "suffix": "" }, { "first": "M", "middle": [], "last": "Wainwright", "suffix": "" }, { "first": "C", "middle": [], "last": "Apps", "suffix": "" }, { "first": "D", "middle": [], "last": "Hassabis", "suffix": "" }, { "first": "P", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.06551" ] }, "num": null, "urls": [], "raw_text": "K. M. Hermann, F. Hill, S. Green, F. Wang, R. Faulkner, H. Soyer, D. Szepesvari, W. Czarnecki, M. Jaderberg, D. Teplyashin, M. Wainwright, C. Apps, D. Hassabis, and P. Blunsom. 2017. Grounded language learning in a simulated 3D world.
arXiv preprint arXiv:1706.06551.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Data recombination for neural semantic parsing", "authors": [ { "first": "R", "middle": [], "last": "Jia", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Jia and P. Liang. 2016. Data recombination for neural semantic parsing. In Association for Computational Linguistics (ACL).", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The Malmo platform for artificial intelligence experimentation", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "K", "middle": [], "last": "Hofmann", "suffix": "" }, { "first": "T", "middle": [], "last": "Hutton", "suffix": "" }, { "first": "D", "middle": [], "last": "Bignell", "suffix": "" } ], "year": 2016, "venue": "International Joint Conference on Artificial Intelligence (IJCAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Johnson, K. Hofmann, T. Hutton, and D. Bignell. 2016. The Malmo platform for artificial intelligence experimentation. In International Joint Conference on Artificial Intelligence (IJCAI).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A tale of two DRAGGNs: A hybrid approach for interpreting action-oriented and goal-oriented instructions", "authors": [ { "first": "S", "middle": [], "last": "Karamcheti", "suffix": "" }, { "first": "E", "middle": [ "C" ], "last": "Williams", "suffix": "" }, { "first": "D", "middle": [], "last": "Arumugam", "suffix": "" }, { "first": "M", "middle": [], "last": "Rhee", "suffix": "" }, { "first": "N", "middle": [], "last": "Gopalan", "suffix": "" }, { "first": "L", "middle": [ "L S" ], "last": "Wong", "suffix": "" }, { "first": "S", "middle": [], "last": "Tellex", "suffix": "" } ], "year": 2017, "venue": "First Workshop on Language Grounding for Robotics @ ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Karamcheti, E. C. Williams, D. Arumugam, M. Rhee, N. Gopalan, L. L. S. Wong, and S. Tellex. 2017. A tale of two DRAGGNs: A hybrid approach for interpreting action-oriented and goal-oriented instructions. In First Workshop on Language Grounding for Robotics @ ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "IkeaBot: An autonomous multi-robot coordinated furniture assembly system", "authors": [ { "first": "R", "middle": [ "A" ], "last": "Knepper", "suffix": "" }, { "first": "T", "middle": [], "last": "Layton", "suffix": "" }, { "first": "J", "middle": [], "last": "Romanishin", "suffix": "" }, { "first": "D", "middle": [], "last": "Rus", "suffix": "" } ], "year": 2013, "venue": "International Conference on Robotics and Automation (ICRA)", "volume": "", "issue": "", "pages": "855--862", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. A. Knepper, T. Layton, J. Romanishin, and D. Rus. 2013. IkeaBot: An autonomous multi-robot coordinated furniture assembly system.
In International Conference on Robotics and Automation (ICRA), pages 855-862.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Six challenges for neural machine translation", "authors": [ { "first": "P", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "R", "middle": [], "last": "Knowles", "suffix": "" } ], "year": 2017, "venue": "NMT@ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Koehn and R. Knowles. 2017. Six challenges for neural machine translation. In NMT@ACL.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "AI2-THOR: An interactive 3D environment for visual AI", "authors": [ { "first": "E", "middle": [], "last": "Kolve", "suffix": "" }, { "first": "R", "middle": [], "last": "Mottaghi", "suffix": "" }, { "first": "D", "middle": [], "last": "Gordon", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "A", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "A", "middle": [], "last": "Farhadi", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1712.05474" ] }, "num": null, "urls": [], "raw_text": "E. Kolve, R. Mottaghi, D. Gordon, Y. Zhu, A. Gupta, and A. Farhadi. 2017. AI2-THOR: An interactive 3D environment for visual AI. arXiv preprint arXiv:1712.05474.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "LIA: A natural language programmable personal assistant", "authors": [ { "first": "I", "middle": [], "last": "Labutov", "suffix": "" }, { "first": "S", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "T", "middle": [ "M" ], "last": "Mitchell", "suffix": "" } ], "year": 2018, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "I. Labutov, S. Srivastava, and T. M. Mitchell. 2018. LIA: A natural language programmable personal assistant. In Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "IKEA furniture assembly environment for long-horizon complex manipulation tasks", "authors": [ { "first": "Y", "middle": [], "last": "Lee", "suffix": "" }, { "first": "E", "middle": [ "S" ], "last": "Hu", "suffix": "" }, { "first": "Z", "middle": [], "last": "Yang", "suffix": "" }, { "first": "A", "middle": [], "last": "Yin", "suffix": "" }, { "first": "J", "middle": [ "J" ], "last": "Lim", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.07246" ] }, "num": null, "urls": [], "raw_text": "Y. Lee, E. S. Hu, Z. Yang, A. Yin, and J. J. Lim. 2019. IKEA furniture assembly environment for long-horizon complex manipulation tasks. arXiv preprint arXiv:1911.07246.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Learning dependency-based compositional semantics", "authors": [ { "first": "P", "middle": [], "last": "Liang", "suffix": "" }, { "first": "M", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2011, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "590--599", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics.
In Association for Computational Linguistics (ACL), pages 590-599.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Simpler context-dependent logical forms via model projections", "authors": [ { "first": "R", "middle": [], "last": "Long", "suffix": "" }, { "first": "P", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Long, P. Pasupat, and P. Liang. 2016. Simpler context-dependent logical forms via model projections. In Association for Computational Linguistics (ACL).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Walk the talk: Connecting language, knowledge, and action in route instructions", "authors": [ { "first": "M", "middle": [], "last": "Macmahon", "suffix": "" }, { "first": "B", "middle": [], "last": "Stankiewicz", "suffix": "" }, { "first": "B", "middle": [], "last": "Kuipers", "suffix": "" } ], "year": 2006, "venue": "National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. MacMahon, B. Stankiewicz, and B. Kuipers. 2006. Walk the talk: Connecting language, knowledge, and action in route instructions. In National Conference on Artificial Intelligence.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "On the importance of adaptive data collection for extremely imbalanced pairwise tasks", "authors": [ { "first": "S", "middle": [], "last": "Mussman", "suffix": "" }, { "first": "R", "middle": [], "last": "Jia", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2020, "venue": "Findings of Empirical Methods in Natural Language Processing (Findings of EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Mussman, R. Jia, and P. Liang. 2020. On the importance of adaptive data collection for extremely imbalanced pairwise tasks. In Findings of Empirical Methods in Natural Language Processing (Findings of EMNLP).", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning", "authors": [ { "first": "N", "middle": [], "last": "Papernot", "suffix": "" }, { "first": "P", "middle": [], "last": "Mcdaniel", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.04765" ] }, "num": null, "urls": [], "raw_text": "N. Papernot and P. McDaniel. 2018. Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning. arXiv preprint arXiv:1803.04765.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "GloVe: Global vectors for word representation", "authors": [ { "first": "J", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Pennington, R. Socher, and C. D. Manning. 2014. GloVe: Global vectors for word representation.
In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Robots for use in autism research", "authors": [ { "first": "B", "middle": [], "last": "Scassellati", "suffix": "" }, { "first": "H", "middle": [], "last": "Admoni", "suffix": "" }, { "first": "M", "middle": [], "last": "Mataric", "suffix": "" } ], "year": 2012, "venue": "Annual Review of Biomedical Engineering", "volume": "14", "issue": "", "pages": "275--294", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Scassellati, H. Admoni, and M. Mataric. 2012. Robots for use in autism research. Annual Review of Biomedical Engineering, 14:275-294.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "ALFRED: A benchmark for interpreting grounded instructions for everyday tasks", "authors": [ { "first": "M", "middle": [], "last": "Shridhar", "suffix": "" }, { "first": "J", "middle": [], "last": "Thomason", "suffix": "" }, { "first": "D", "middle": [], "last": "Gordon", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bisk", "suffix": "" }, { "first": "W", "middle": [], "last": "Han", "suffix": "" }, { "first": "R", "middle": [], "last": "Mottaghi", "suffix": "" }, { "first": "L", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "D", "middle": [], "last": "Fox", "suffix": "" } ], "year": 2020, "venue": "Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Shridhar, J. Thomason, D. Gordon, Y. Bisk, W. Han, R. Mottaghi, L. Zettlemoyer, and D. Fox. 2020. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. In Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Joint concept learning and semantic parsing from natural language explanations", "authors": [ { "first": "S", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "I", "middle": [], "last": "Labutov", "suffix": "" }, { "first": "T", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2017, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1528--1537", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Srivastava, I. Labutov, and T. Mitchell. 2017. Joint concept learning and semantic parsing from natural language explanations. In Empirical Methods in Natural Language Processing (EMNLP), pages 1528-1537.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Understanding natural language commands for robotic navigation and mobile manipulation", "authors": [ { "first": "S", "middle": [], "last": "Tellex", "suffix": "" }, { "first": "T", "middle": [], "last": "Kollar", "suffix": "" }, { "first": "S", "middle": [], "last": "Dickerson", "suffix": "" }, { "first": "M", "middle": [ "R" ], "last": "Walter", "suffix": "" }, { "first": "A", "middle": [ "G" ], "last": "Banerjee", "suffix": "" }, { "first": "S", "middle": [ "J" ], "last": "Teller", "suffix": "" }, { "first": "N", "middle": [], "last": "Roy", "suffix": "" } ], "year": 2011, "venue": "Association for the Advancement of Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Tellex, T. Kollar, S. Dickerson, M. R. Walter, A. G. Banerjee, S. J. Teller, and N. Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation.
In Association for the Advancement of Artificial Intelligence (AAAI).", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Improving grounded natural language understanding through human-robot dialog", "authors": [ { "first": "J", "middle": [], "last": "Thomason", "suffix": "" }, { "first": "A", "middle": [], "last": "Padmakumar", "suffix": "" }, { "first": "J", "middle": [], "last": "Sinapov", "suffix": "" }, { "first": "N", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Y", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "H", "middle": [], "last": "Yedidsion", "suffix": "" }, { "first": "J", "middle": [ "W" ], "last": "Hart", "suffix": "" }, { "first": "P", "middle": [], "last": "Stone", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 2019, "venue": "International Conference on Robotics and Automation (ICRA)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Thomason, A. Padmakumar, J. Sinapov, N. Walker, Y. Jiang, H. Yedidsion, J. W. Hart, P. Stone, and R. J. Mooney. 2019. Improving grounded natural language understanding through human-robot dialog. In International Conference on Robotics and Automation (ICRA).", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Learning to interpret natural language commands through human-robot dialog", "authors": [ { "first": "J", "middle": [], "last": "Thomason", "suffix": "" }, { "first": "S", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Mooney", "suffix": "" }, { "first": "P", "middle": [], "last": "Stone", "suffix": "" } ], "year": 2015, "venue": "International Joint Conference on Artificial Intelligence (IJCAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Thomason, S. Zhang, R. J. Mooney, and P. Stone. 2015. Learning to interpret natural language commands through human-robot dialog. In International Joint Conference on Artificial Intelligence (IJCAI).", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Attention is all you need", "authors": [ { "first": "A", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "N", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "N", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "J", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "L", "middle": [], "last": "Jones", "suffix": "" }, { "first": "A", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "L", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "I", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.03762" ] }, "num": null, "urls": [], "raw_text": "A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. 2017. Attention is all you need.
arXiv preprint arXiv:1706.03762.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Naturalizing a programming language via interactive learning", "authors": [ { "first": "S", "middle": [ "I" ], "last": "Wang", "suffix": "" }, { "first": "S", "middle": [], "last": "Ginn", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. I. Wang, S. Ginn, P. Liang, and C. D. Manning. 2017. Naturalizing a programming language via interactive learning. In Association for Computational Linguistics (ACL).", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Learning language games through interaction", "authors": [ { "first": "S", "middle": [ "I" ], "last": "Wang", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" }, { "first": "C", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2016, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. I. Wang, P. Liang, and C. Manning. 2016. Learning language games through interaction. In Association for Computational Linguistics (ACL).", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Building a semantic parser overnight", "authors": [ { "first": "Y", "middle": [], "last": "Wang", "suffix": "" }, { "first": "J", "middle": [], "last": "Berant", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2015, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Wang, J. Berant, and P. Liang. 2015. Building a semantic parser overnight. In Association for Computational Linguistics (ACL).", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Programming in natural language with fuse: Synthesizing methods from spoken utterances using deep natural language understanding", "authors": [ { "first": "S", "middle": [], "last": "Weigelt", "suffix": "" }, { "first": "V", "middle": [], "last": "Steurer", "suffix": "" }, { "first": "T", "middle": [], "last": "Hey", "suffix": "" }, { "first": "W", "middle": [], "last": "Tichy", "suffix": "" } ], "year": 2020, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Weigelt, V. Steurer, T. Hey, and W. Tichy. 2020. Programming in natural language with fuse: Synthesizing methods from spoken utterances using deep natural language understanding. In Association for Computational Linguistics (ACL).", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Model-based interactive semantic parsing: A unified framework and a text-to-SQL case study", "authors": [ { "first": "Z", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Y", "middle": [], "last": "Su", "suffix": "" }, { "first": "H", "middle": [], "last": "Sun", "suffix": "" }, { "first": "W", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2019, "venue": "Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Yao, Y. Su, H. Sun, and W. Yih. 2019.
Model-based interactive semantic parsing: A unified framework and a text-to-SQL case study. In Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "CoSQL: A conversational text-to-SQL challenge towards cross-domain natural language interfaces to databases", "authors": [ { "first": "T", "middle": [], "last": "Yu", "suffix": "" }, { "first": "R", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "H", "middle": [ "Y" ], "last": "Er", "suffix": "" }, { "first": "S", "middle": [], "last": "Li", "suffix": "" }, { "first": "E", "middle": [], "last": "Xue", "suffix": "" }, { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "X", "middle": [ "V" ], "last": "Lin", "suffix": "" }, { "first": "Y", "middle": [ "C" ], "last": "Tan", "suffix": "" }, { "first": "T", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Z", "middle": [], "last": "Li", "suffix": "" }, { "first": "Y", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "M", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "S", "middle": [], "last": "Shim", "suffix": "" }, { "first": "T", "middle": [], "last": "Chen", "suffix": "" }, { "first": "A", "middle": [ "R" ], "last": "Fabbri", "suffix": "" }, { "first": "Z", "middle": [], "last": "Li", "suffix": "" }, { "first": "L", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "S", "middle": [], "last": "Dixit", "suffix": "" }, { "first": "V", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "C", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "W", "middle": [ "S" ], "last": "Lasecki", "suffix": "" }, { "first": "D", "middle": [ "R" ], "last": "Radev", "suffix": "" } ], "year": 2019, "venue": "Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Yu, R. Zhang, H. Y. Er, S. Li, E. Xue, B. Pang, X. V. Lin, Y. C. Tan, T. Shi, Z. Li, Y. Jiang, M. Yasunaga, S. Shim, T. Chen, A. R. Fabbri, Z. Li, L. Chen, Y. Zhang, S. Dixit, V. Zhang, C. Xiong, R. Socher, W. S. Lasecki, and D. R. Radev. 2019. CoSQL: A conversational text-to-SQL challenge towards cross-domain natural language interfaces to databases. In Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Learning to parse database queries using inductive logic programming", "authors": [ { "first": "M", "middle": [], "last": "Zelle", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 1996, "venue": "Association for the Advancement of Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "1050--1055", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic programming.
In Association for the Advancement of Artificial Intelligence (AAAI), pages 1050-1055.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars", "authors": [ { "first": "L", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2005, "venue": "Uncertainty in Artificial Intelligence (UAI)", "volume": "", "issue": "", "pages": "658--666", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658-666.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Online learning of relaxed CCG grammars for parsing to logical form", "authors": [ { "first": "L", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2007, "venue": "Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL)", "volume": "", "issue": "", "pages": "678--687", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), pages 678-687.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Complete set of results across 20 users with 7 different task types. Each user is given a single task type, and asked to complete 5 different episodes, with different combinations of environments and objects. The graph on the left shows the number of examples taught over 5 episodes. The graph in the middle shows the per-turn program complexity (number of primitives per language utterance) over time. The last graph shows the normalized episode length (# utterances to solve task / number of actions required).", "num": null, "type_str": "figure", "uris": null }, "TABREF2": { "num": null, "content": "
List of primitive programmatic actions and seed utterances used to initialize our semantic parser. Note that the utterances are lifted; they do not include references to concrete objects. This enables one-shot generalization to unseen object combinations.
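To make the lifted representation concrete, here is a minimal Python sketch of one way such a seed lexicon could look; the template syntax, the specific primitive names, and the instantiate helper are illustrative assumptions, not the system's actual implementation.

```python
# Hypothetical seed lexicon: lifted (utterance template, program template)
# pairs that never mention concrete objects (illustrative assumption, not
# the actual data structure used by the parser).
SEED_LEXICON = {
    "go to the {obj}": "GOTO({obj})",
    "pick up the {obj}": "PICKUP({obj})",
    "put the {obj} in the {recep}": "PUT({obj}, {recep})",
    "toggle the {obj}": "TOGGLE({obj})",
}

def instantiate(utterance_template, **bindings):
    """Ground a lifted entry with concrete objects. Because the seed pair
    is lifted, a single entry covers any object binding, which is what
    yields one-shot generalization to unseen object combinations."""
    program_template = SEED_LEXICON[utterance_template]
    return (utterance_template.format(**bindings),
            program_template.format(**bindings))

# The same lifted entry grounds to an object pair never seen during seeding:
utterance, program = instantiate("put the {obj} in the {recep}",
                                 obj="Mug", recep="Sink")
print(utterance)  # -> put the Mug in the Sink
print(program)    # -> PUT(Mug, Sink)
```

Under this assumed representation, teaching a new high-level utterance by decomposition would amount to adding one more lifted entry whose program chains existing primitives.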
", "text": "", "html": null, "type_str": "table" } } } }