{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:12:35.133162Z" }, "title": "", "authors": [ { "first": "Jo\u00e3o", "middle": [], "last": "Sedoc", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Abeer", "middle": [], "last": "Aldayel", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Aditya", "middle": [], "last": "Bhargava", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Federico", "middle": [], "last": "Bianchi", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Luis", "middle": [], "last": "Fernando", "suffix": "", "affiliation": {}, "email": "" }, { "first": "D", "middle": [ "'" ], "last": "Haro", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Kenton", "middle": [], "last": "Murray", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Zachary", "middle": [], "last": "Lipton", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "Publication of negative results is difficult in most fields, and the current focus on benchmarkdriven performance improvement exacerbates this situation and implicitly discourages hypothesis-driven research. As a result, the development of NLP models often devolves into a product of tinkering and tweaking, rather than science. Furthermore, it increases the time, effort, and carbon emissions spent on developing and tuning models, as the researchers have little opportunity to learn from what has already been tried and failed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Historically, this tendency is hard to combat. ACL 2010 invited negative results as a special type of research paper submissions 1 , but received too few submissions and did not continue with it. The Journal for Interesting Negative Results in NLP and ML 2 has only produced one issue in 2008.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "However, the tide may be turning. Despite the pandemic, the second iteration of the Workshop on Insights from Negative Results attracted 39 submissions and 14 presentation requests for papers accepted to \"Findings of EMNLP\". NeurIPS 2021 also accepted the second iteration of \"I (Still) Can't Believe It's Not Better\" 3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "The workshop maintained roughly the same focus, welcoming many kinds of negative results with the hope that they could yield useful insights and provide a much-needed reality check on the successes of deep learning models in NLP. 
In particular, we solicited the following types of contributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "\u2022 broadly applicable recommendations for training/fine-tuning, especially if the X that didn't work is something that many practitioners would think reasonable to try, and if the demonstration of X's failure is accompanied by some explanation/hypothesis;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "\u2022 ablation studies of components in previously proposed models, showing that their contributions are different from what was initially reported;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "\u2022 datasets or probing tasks showing that previous approaches do not generalize to other domains or language phenomena;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "\u2022 trivial baselines that work suspiciously well for a given task/dataset;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "\u2022 cross-lingual studies showing that a technique X is only successful for a certain language or language family;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "\u2022 experiments on the (in)stability of previously published results due to hardware, random initializations, preprocessing pipeline components, etc.;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "\u2022 theoretical arguments and/or proofs for why X should not be expected to work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In terms of topics, 19 papers from our submission pool discussed \"great ideas that didn't work\", 11 dealt with issues of generalizability, 3 were on the topic of \"right for the wrong reasons\", 2 focused on reproducibility issues, and 4 addressed other relevant topics. Some submissions fit into more than one category. We accepted 20 short papers (51.2% acceptance rate) and granted 4 presentation requests for Findings papers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "We hope the workshop will continue to contribute to the many reality-check discussions on progress in NLP. If we do not talk about things that do not work, it is harder to see what the biggest problems are and where community effort is most needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Does Commonsense help in detecting Sarcasm? Somnath Basu Roy Chowdhury and", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Does Commonsense help in detecting Sarcasm? Somnath Basu Roy Chowdhury and Snigdha Chaturvedi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
9", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "BERT Cannot Align Characters Antonis Maronikolakis, Philipp Dufter and", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "BERT Cannot Align Characters Antonis Maronikolakis, Philipp Dufter and Hinrich Sch\u00fctze . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Two Heads are Better than One? Verification of Ensemble Effect in Neural Machine Translation chanjun park", "authors": [ { "first": "Sungjin", "middle": [], "last": "Park", "suffix": "" }, { "first": "Seolhwa", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Taesun", "middle": [], "last": "Whang", "suffix": "" }, { "first": "Heuiseok", "middle": [ ". . . . . . . . . . . . ." ], "last": "Lim", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Two Heads are Better than One? Verification of Ensemble Effect in Neural Machine Translation chanjun park, Sungjin Park, Seolhwa Lee, Taesun Whang and Heuiseok Lim . . . . . . . . . . . . . . . . . 23", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Are BERTs Sensitive to Native Interference in L2 Production? Zixin Tang, Prasenjit Mitra and", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Are BERTs Sensitive to Native Interference in L2 Production? Zixin Tang, Prasenjit Mitra and David Reitter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Comparing Euclidean and Hyperbolic Embeddings on the WordNet Nouns Hypernymy Graph Sameer", "authors": [ { "first": "Adrian", "middle": [], "last": "Bansal", "suffix": "" }, { "first": ".", "middle": [ "." ], "last": "Benton", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Comparing Euclidean and Hyperbolic Embeddings on the WordNet Nouns Hypernymy Graph Sameer Bansal and Adrian Benton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "When does Further Pre-training MLM Help? An Empirical Study on Task-Oriented Dialog Pre-training Qi Zhu", "authors": [ { "first": "Yuxian", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Lingxiao", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Li", "suffix": "" }, { "first": "L", "middle": [ "I" ], "last": "Cheng", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Peng", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "When does Further Pre-training MLM Help? 
An Empirical Study on Task-Oriented Dialog Pre-training Qi Zhu, Yuxian Gu, Lingxiao Luo, Bing Li, Cheng LI, Wei Peng, Minlie Huang and Xiaoyan Zhu 54", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "On the Difficulty of Segmenting Words with Attention", "authors": [ { "first": "Ramon", "middle": [], "last": "Sanabria", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": ".", "middle": [ "." ], "last": "", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "On the Difficulty of Segmenting Words with Attention Ramon Sanabria, Hao Tang and Sharon Goldwater . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The Highs and Lows of Simple Lexical Domain Adaptation Approaches for Neural Machine Translation Nikolay Bogoychev and", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "The Highs and Lows of Simple Lexical Domain Adaptation Approaches for Neural Machine Translation Nikolay Bogoychev and Pinzhen Chen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Backtranslation in Neural Morphological Inflection Ling Liu and Mans Hulden", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Backtranslation in Neural Morphological Inflection Ling Liu and Mans Hulden . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning Data Augmentation Schedules for Natural Language Processing Daphn\u00e9 Chopard, Matthias", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Learning Data Augmentation Schedules for Natural Language Processing Daphn\u00e9 Chopard, Matthias S. Treder and Irena Spasi\u0107 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An Investigation into the Contribution of Locally Aggregated Descriptors to Figurative Language Identification Sina Mahdipour Saravani, Ritwik Banerjee and Indrakshi Ray", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "An Investigation into the Contribution of Locally Aggregated Descriptors to Figurative Language Iden- tification Sina Mahdipour Saravani, Ritwik Banerjee and Indrakshi Ray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
103", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Fewshot NLI", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few- shot NLI", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "NLI: Ways (Not) To Go Beyond Simple Heuristics Prajjwal Bhargava, Aleksandr Drozd and", "authors": [ { "first": "Yangqiaoyu", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Chenhao", "middle": [ ". . ." ], "last": "Tan", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yangqiaoyu Zhou and Chenhao Tan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 vii Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics Prajjwal Bhargava, Aleksandr Drozd and Anna Rogers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Challenging the Semi-Supervised VAE Framework for Text Classification Ghazi Felhi", "authors": [ { "first": "Joseph", "middle": [], "last": "Le Roux", "suffix": "" }, { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" }, { "first": ".", "middle": [ "." ], "last": "", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Challenging the Semi-Supervised VAE Framework for Text Classification Ghazi Felhi, Joseph Le Roux and Djam\u00e9 Seddah . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Invited talk: Rachael Tatman (Rasa) Chatbots can be good: What we learn from unhappy users 18:00-18:15 Closing remarks The program is subject to change, please check the EMNLP 2021 conference website for the final program and schedule in different time zones", "authors": [ { "first": "Michael", "middle": [], "last": "Fromm", "suffix": "" }, { "first": "Evgeniy", "middle": [], "last": "Faerman", "suffix": "" }, { "first": "Thomas", "middle": [ ". . . . . . . . . . . . . . . . . . . ." ], "last": "Seidl", "suffix": "" } ], "year": 2021, "venue": "Active Learning for Argument Strength Estimation Nataliia Kees", "volume": "8", "issue": "", "pages": "0--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Active Learning for Argument Strength Estimation Nataliia Kees, Michael Fromm, Evgeniy Faerman and Thomas Seidl . . . . . . . . . . . . . . . . . . . . . . . . 144 viii Program Wednesday, November 10, 2021 8:45-9:00 Opening remarks 9:00-10:00 Invited talk: Noah Smith (University of Washington / Allen Institute for AI) What Makes a Result Negative? 
10:00-11:15 Poster session 1 11:15-11:30 Social break / coffee time 11:30-12:30 Invited talk: Bonnie Webber (University of Edinburgh) The Reviewers and the Reviewed: Institutional Memory and Institutional Incentives 12:30-13:00 Oral presentation session 1 13:00-14:00 Lunch break 14:00-15:00 Invited talk: Zachary Lipton (Carnegie Mellon University) Some Results on Label Shift and Label Noise 15:00-16:15 Poster session 2 16:15-16:30 Social break / coffee time 16:30-17:00 Oral presentation session 2 17:00-18:00 Invited talk: Rachael Tatman (Rasa) Chatbots can be good: What we learn from unhappy users 18:00-18:15 Closing remarks The program is subject to change; please check the EMNLP 2021 conference website for the final program and schedule in different time zones. The program will also be available at https://insights-workshop.github.io. All times above are specified in Atlantic Standard Time (GMT-4).", "links": null } }, "ref_entries": {} } }