{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:10:58.120104Z" }, "title": "", "authors": [], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "The growth in computational power and the rise of Deep Neural Networks (DNNs) have revolutionized the field of Natural Language Processing (NLP). The ability to collect massive datasets with the capacity to train big models on powerful GPUs, has yielded NLP-based technology that was beyond imagination only a few years ago.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Unfortunately, this technology is still limited to a handful of resource rich languages and domains. This is because most NLP algorithms rely on the fundamental assumption that the training and the test sets are drawn from the same underlying distribution. When the train and test distributions do not match, a phenomenon known as domain shift, such models are likely to encounter performance drops.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Despite the growing availability of heterogeneous data, many NLP domains still lack the amounts of labeled data required to feed data-hungry neural models, and in some domains and languages even unlabeled data is scarce. As a result, the problem of domain adaptation, training an algorithm on annotated data from one or more source domains, and applying it to other target domains, is a fundamental challenge that has to be solved in order to make NLP technology available for most world languages and textual domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Domain Adaptation (DA) is hence the focus of this workshop. 
In particular, the topics of the workshop include, but are not restricted to:

• Novel DA algorithms, for existing as well as new setups.

• Extending DA research to new domains and tasks through both novel datasets and algorithmic approaches.

• Proposing novel zero-shot and few-shot algorithms and discussing their relevance for DA.

• Exploring the similarities and differences between algorithmic approaches to DA, cross-lingual learning, and cross-task learning.

• A conceptual discussion of the definitions of fundamental concepts such as domain and transfer, as well as zero-shot and few-shot learning.

• Introducing and exploring novel or under-explored DA setups, aiming towards realistic and applicable ones (e.g., one-to-many DA, many-to-many DA, and DA when the target domain is unknown while training on the source domain).

Adapt-NLP would not have been possible without the dedication of its program committee. We would like to thank them for their invaluable effort in providing timely and high-quality reviews on short notice. We are also grateful to our invited speakers and panelists for contributing to our program.

Table of Contents

Multidomain Pretrained Language Models for Green NLP
    Antonios Maronikolakis and Hinrich Schütze . . . . . . . . . . . . . . . . 1

Pseudo-Label Guided Unsupervised Domain Adaptation of Contextual Embeddings
    Tianyu Chen, Shaohan Huang, Furu Wei and Jianxin Li . . . . . . . . . . . 9