|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:43:36.161688Z" |
|
}, |
|
"title": "Keynote Talk: What kinds of questions have we been asking? A taxonomy for QA/RC benchmarks", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Lalor", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Rogers", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Lora", |
|
"middle": [], |
|
"last": "Aroyo", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{

"first": "Tongshuang",

"middle": [],

"last": "Wu",

"suffix": "",

"affiliation": {},

"email": ""

},
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This talk provides an overview of the current landscape of resources for Question Answering and Reading comprehension, highlighting the current lacunae for future work. I will also present a new taxonomy of \"skills\" targeted by QA/RC datasets and discuss various ways in which questions may be unanswerable.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This talk provides an overview of the current landscape of resources for Question Answering and Reading comprehension, highlighting the current lacunae for future work. I will also present a new taxonomy of \"skills\" targeted by QA/RC datasets and discuss various ways in which questions may be unanswerable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "This volume contains papers from the First Workshop on Dynamic Adversarial Data Collection (DADC), held at NAACL 2022. Dynamic Adversarial Data Collection (DADC) has been gaining traction in the community as a promising approach to improving data collection practices, model evaluation and performance. DADC allows us to collect human-written data dynamically with models in the loop. Humans can be tasked with finding adversarial examples that fool current state-of-the-art models (SOTA), for example, or they can cooperate with models to find interesting examples. This offers two benefits: it allows us to gauge how good contemporary SOTA methods really are; and it yields data that may be used to train even stronger models by specifically targeting their current weaknesses. The first workshop on DADC and corresponding shared task focus on three currently under-explored themes: i) understanding how humans can be incentivized to creatively identify and target model weaknesses to increase their chances of fooling the model; ii) how humans and machines can cooperate to produce the most useful data; and iii) how the interaction between humans and machines can further drive performance improvements, both from the perspectives of traditional evaluation metrics as well as those of robustness and fairness. Abstract: Dynamic and/or adversarial data collection can be quite useful as a way of collecting training data for machine-learning models, identifying the conditions under which these models fail, and conducting online head-to-head comparisons between models. However, it is essentially impossible to use these practices to build usable static benchmark datasets for use in evaluating or comparing future new models. I defend this point using a mix of conceptual and empirical points, focusing on the claims (i) that adversarial data collection can skew the distribution of phenomena such as to make it unrepresentative of the intended task, and (ii) that adversarial data collection can arbitrarily shift the rankings of models on its resulting test sets to disfavor systems that are qualitatively similar to the current state of the art.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preface", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Bio: Sam Bowman is an Assistant Professor at New York University and a Visiting Researcher (Sabbatical) at Anthropic. His research interests include the study of artificial neural network models for natural language understanding, with a focus on building high-quality training and evaluation data, applying these models to scientific questions in syntax and semantics, and contributing to work on language model alignment and control.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preface", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Abstract: The efficacy of machine learning (ML) models depends on both algorithms and data. Training data defines what we want our models to learn, and testing data provides the means by which their empirical progress is measured. Benchmark datasets define the entire world within which models exist and operate, yet research continues to focus on critiquing and improving the algorithmic aspect of the models rather than critiquing and improving the data with which our models operate. If \"data is the new oil,\" we are still missing work on the refineries by which the data itself could be optimized for more effective use. In this talk, I will discuss data excellence and lessons learned from software engineering to achieve the scare and rigor in assessing data quality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preface", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Bio: Lora Aroyo is Research Scientist at Google Research, NYC, where she works on research for Data Excellence by specifically focussing on metrics and strategies to measure quality of human-labeled data in a reliable and transparent way. Lora is an active member of the Human Computation, User Modeling and Semantic Web communities. She is president of the User Modeling community UM Inc, which serves as a steering committee for the ACM Conference Series \"User Modeling, Adaptation and Personalization\" (UMAP) sponsored by SIGCHI and SIGWEB. She is also a member of the ACM SIGCHI conferences board. Prior to joining Google, Lora was a computer science professor at the VU University Amsterdam.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preface", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "x Keynote Talk: Model-in-the-loop Data Collection: What Roles does the Model Play?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preface", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Tongshuang Wu Carnegie Mellon University Abstract: Assistive models have been shown useful for supporting humans in creating challenging datasets, but how exactly do they help? In this talk, I will discuss different roles of assistive models in counterfactual data collection (i.e., perturbing existing text inputs to gain insight into task model decision boundaries), and the characteristics associated with these roles. I will use three examples (CheckList, Polyjuice, Tailor) to demonstrate how our objectives shift when we perturb texts for evaluation, explanation, and improvement, and how that change the corresponding assistive models from enhancing human goals (requiring model controllability) to competing with human bias (requiring careful data reranking). I will conclude by exploring additional roles that these models can play to become more effective.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preface", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Bio: Sherry Tongshuang Wu is an Assistant Professor at the Human Computer Interaction Institute at Carnegie Mellon University (CMU HCII), holding a courtesy appointment at the Language Technolgoy Institute (CMU LTI). Sherry's research lies at the intersection of Human-Computer Interaction (HCI) and Natural Language Processing (NLP). She aims to understand and support people coping with imperfect AI models, both when the model is under active development, and after it is deployed for end users.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preface", |
|
"sec_num": null |
|
},

{

"text": "Program (Thursday, July 14, 2022): 09:00-09:10 Opening Remarks; 09:10-09:25 Collaborative Progress: MLCommons Introduction; 09:25-10:00 Invited Talk 1: Anna Rogers; 10:00-10:35 Invited Talk 2: Jordan Boyd-Graber; 10:35-10:50 Break; 10:50-11:10 Best Paper Talk: Overconfidence in the Face of Ambiguity with Adversarial Data (Margaret Li and Julian Michael); 11:10-11:45 Invited Talk 3: Sam Bowman; 11:45-12:20 Invited Talk 4: Lora Aroyo; 12:20-13:20 Lunch; 13:20-13:55 Invited Talk 5: Sherry Tongshuang Wu; 13:55-14:55 Panel: The Future of Data Collection; 14:55-15:10 Break; 15:10-15:20 Introduction to the DADC Shared Task: Max Bartolo; 15:20-15:40 Shared Task Winners' Presentations; 15:40-16:55 Poster Session",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Program",

"sec_num": null

}
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "GreaseVision: Rewriting the Rules of the Interface Siddhartha Datta, Konrad Kollnig and", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "GreaseVision: Rewriting the Rules of the Interface Siddhartha Datta, Konrad Kollnig and Nigel Shadbolt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Posthoc Verification and the Fallibility of the Ground Truth Yifan Ding", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Posthoc Verification and the Fallibility of the Ground Truth Yifan Ding, Nicholas Botzer and Tim Weninger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "30 longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks", |
|
"authors": [ |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [ |
|
"." |
|
], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trina", |
|
"middle": [], |
|
"last": ". ; Venelin Kovatchev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chatterjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Venkata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jifan", |
|
"middle": [], |
|
"last": "Govindarajan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriella", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anubrata", |
|
"middle": [], |
|
"last": "Chronis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Overconfidence in the Face of Ambiguity with Adversarial Data Margaret", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Overconfidence in the Face of Ambiguity with Adversarial Data Margaret Li and Julian Michael . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks. Venelin Kovatchev, Trina Chatterjee, Venkata S Govindarajan, Jifan Chen, Eunsol Choi, Gabriella Chronis, Anubrata Das, Katrin Erk, Matthew Lease, Junyi Jessy Li, Yating Wu and Kyle Mahowald 41", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Collecting high-quality adversarial data for machine reading comprehension tasks with humans and models in the loop", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"Romero" |
|
], |
|
"last": "Damian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Magdalena", |
|
"middle": [], |
|
"last": "Diaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Anio\u0142", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": ".", |
|
"middle": [ |
|
"." |
|
], |
|
"last": "John Culnan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Collecting high-quality adversarial data for machine reading comprehension tasks with humans and models in the loop Damian Y. Romero Diaz, Magdalena Anio\u0142 and John Culnan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Multilingual NLU Benchmarks Ruixiang Cui, Daniel Hershcovich and", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Generalized Quantifiers as a Source of Error in Multilingual NLU Benchmarks Ruixiang Cui, Daniel Hershcovich and Anders S\u00f8gaard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Adversarially Constructed Evaluation Sets Are More Challenging, but May Not Be Fair Jason Phang", |
|
"authors": [ |
|
{ |
|
"first": "Angelica", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": ".", |
|
"middle": [ |
|
"." |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adversarially Constructed Evaluation Sets Are More Challenging, but May Not Be Fair Jason Phang, Angelica Chen, William Huang and Samuel R. Bowman . . . . . . . . . . . . . . . . . . . . . . 62", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Best Paper Talk: Overconfidence in the Face of Ambiguity with", |
|
"authors": [ |
|
{ |
|
"first": "Program", |
|
"middle": [], |
|
"last": "Thursday", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2022, |
|
"venue": "40 Shared Task Winners' Presentations", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "40--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Program Thursday, July 14, 2022 09:00 -09:10 Opening Remarks 09:10 -09:25 Collaborative Progress: ML Commons Introduction 09:25 -10:00 Invited Talk 1: Anna Rogers 10:00 -10:35 Invited Talk 2: Jordan Boyd-Graber 10:35 -10:50 Break 10:50 -11:10 Best Paper Talk: Overconfidence in the Face of Ambiguity with Adversarial Data Margaret Li and Julian Michael 11:10 -11:45 Invited Talk 3: Sam Bowman 11:45 -12:20 Invited Talk 4: Lora Aroyo 12:20 -13:20 Lunch 13:20 -13:55 Invited Talk 5: Sherry Tongshuang Wu 13:55 -14:55 Panel: The Future of Data Collection 14:55 -15:10 Break 15:10 -15:20 Introduction to the DADC Shared Task: Max Bartolo 15:20 -15:40 Shared Task Winners' Presentations 15:40 -16:55 Poster Session", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": {} |
|
} |
|
} |