{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:33:49.735211Z" }, "title": "", "authors": [], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "Motivation: A key aspect of human learning is the ability to learn continuously from various sources of feedback. In contrast, much of the recent success of deep learning for NLP relies on large datasets and extensive compute resources to train and fine-tune models, which then remain fixed. This leaves a research gap for systems that adapt to the changing needs of individual users or allow users to continually correct errors as they emerge. Learning from user interaction is crucial for tasks that require a high grade of personalization and for rapidly changing or complex, multi-step tasks where collecting and annotating large datasets is not feasible, but an informed user can provide guidance. What is interactive NLP?: Interactive Learning for NLP means training, fine-tuning or otherwise adapting an NLP model to inputs from a human user or teacher. Relevant approaches range from active learning with a human in the loop, to training with implicit user feedback (e.g. clicks), dialogue systems that adapt to user utterances and training with new forms of human input. Interactive learning is the converse of learning from datasets collected offline with no human input during the training process. Goals: The goal of this workshop was to bring together researchers to:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Message from the General Chair", "sec_num": null }, { "text": "\u2022 Develop novel methods for interactive machine learning of NLP models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Message from the General Chair", "sec_num": null }, { "text": "\u2022 Discuss how to evaluate interactive NLP systems, including models for realistic user simulation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Message from the General Chair", "sec_num": null }, { "text": "\u2022 Identify scenarios involving natural language where interactive learning is beneficial.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Message from the General Chair", "sec_num": null }, { "text": "Previous work has been split across different tracks and task-focused workshops, making it hard to disentangle applications from broadly-applicable methodologies or establish common practices for evaluating interactive learning systems. We aimed to bring together researchers to share insights on interactive learning from a wide range of NLP-related fields, including, but not limited to, dialogue systems, question answering, summarization, and educational applications. 
Concerning methodology, we encouraged submissions investigating various dimensions of interactive learning, such as (but not restricted to):

• Interactive machine learning methods: the wide range of topics discussed above, from active learning with a user to methods that extract, interpret, and aggregate user feedback or preferences from complex interactions, such as natural language instructions.
• User effort: the amount of user effort required for different types of feedback; explicit labels require more user effort than feedback deduced from user interaction (e.g., clicks, view time); how users cope with the system misinterpreting instructions.
• Feedback types: different types of feedback require different techniques to incorporate them into a model. For example, explicit labels allow us to train a model directly, while user instructions require interpretation.

A major bottleneck for interactive learning approaches is their evaluation, including a lack of suitable datasets. We therefore encouraged submissions covering research into the following:

• Evaluation methods: approaches to assessing interactive methods, such as low-effort, easily reproducible studies with real-world users and simulated user models for automated evaluation.
• Reproducibility: procedures for documenting user evaluations and ensuring they are reproducible.
• Data: novel datasets for training and evaluating interactive models.

To investigate scenarios where interactive learning is effective, we invited submissions that present empirical results for applications of interactive methods.

Organizing Committee

• Kianté Brantley (kdbrant@cs.umd.edu) is a fourth-year PhD student in Computer Science at the University of Maryland, College Park, advised by Hal Daumé III. His research interest is in designing algorithms that efficiently integrate domain knowledge into sequential decision-making problems (e.g., reinforcement learning, imitation learning, and structured prediction for natural language processing).

• Soham Dan (sohamdan@seas.upenn.edu) is a PhD student at the University of Pennsylvania working with Dan Roth on natural language understanding, specifically in the context of grounded domains. His research involves concept learning, interactive learning, and semantic parsing of instructions.
• Iryna Gurevych (gurevych@ukp.informatik.tu-darmstadt.de) is a full professor in the Department of Computer Science at the Technical University of Darmstadt. She has published on interactive methods for NLP in various domains, such as language learning, text summarization, and entity linking.

• Ji-Ung Lee (lee@ukp.informatik.tu-darmstadt.de) is a PhD student at the Technical University of Darmstadt. His research focuses on effective model training from user feedback in low-data scenarios, coupled with providing users with instances that fit their needs.

• Filip Radlinski (filiprad@google.com) is a Research Scientist at Google, UK. His research focuses on improving conversational search and recommendation through better understanding and modeling of user interests via natural language, improved transparency of conversational systems, and human-centered evaluation and personalization of information retrieval and recommendation tasks. He received his PhD from Cornell University.

• Hinrich Schütze (hinrichacl@cis.lmu.de) is a full professor at Ludwig Maximilian University of Munich and chair of computational linguistics. His research covers deep learning for NLP, semantics in NLP and linguistics, and information retrieval. He has co-organized several workshops, including two SCLeM workshops (Subword and Character-Level Models in NLP) at EMNLP and NAACL, and a Dagstuhl seminar entitled "From Characters to Understanding Natural Language".

• Edwin Simpson (edwin.simpson@bristol.ac.uk) is a lecturer at the University of Bristol working on interactive learning for NLP and machine learning for crowdsourced annotation, with an interest in Bayesian methods for handling uncertainty.

• Lili Yu (liliyu@fb.com) is a research scientist on the Facebook Language Research team. Her research interests lie in summarization, conversational AI, learning from user feedback, and knowledge representation and grounding.

HILDIF: Interactive Debugging of NLI Models Using Influence Functions