{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:09:30.720120Z" }, "title": "", "authors": [ { "first": "Afra", "middle": [], "last": "Alishahi", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Yonatan", "middle": [], "last": "Belinkov", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Grzegorz", "middle": [], "last": "Chrupa\u0142a", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Dieuwke", "middle": [], "last": "Hupkes", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Yuval", "middle": [], "last": "Pinter", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Samira", "middle": [], "last": "Abnar", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Fraser -Lmu", "middle": [], "last": "Alexander", "suffix": "", "affiliation": {}, "email": "" }, { "first": "", "middle": [], "last": "Munich", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "BlackboxNLP is the workshop on analyzing and interpreting neural networks for NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In the last few years, neural networks have rapidly become a central component in NLP systems. The improvement in accuracy and performance brought by the introduction of neural networks has typically come at the cost of our understanding of the system: How do we assess what the representations and computations are that the network learns? 
The goal of this workshop is to bring together people who are attempting to peek inside the neural network black box, taking inspiration from machine learning, psychology, linguistics, and neuroscience.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In this third edition of the workshop, hosted by the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), we accepted 31 archival papers and 9 extended abstracts. The workshop also provided a platform for authors of EMNLP-Findings papers to present their work as a poster at the workshop. Lastly, for the first time, BlackboxNLP included a shared interpretation mission. One paper submitted to this mission, presenting the interpretability library diagNNose, was selected for a demo presentation and is included as the last paper in these proceedings (submission number 70). BlackboxNLP would not have been possible without the dedication of its program committee. We would like to thank them for their invaluable effort in providing timely, high-quality reviews on short notice. We are also grateful to our invited speakers for contributing to our program. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null } ], "back_matter": [ { "text": "The programme of BlackboxNLP 2020 consists of three keynote presentations, six selected oral presentations, one demo paper, and two poster sessions. Due to the virtual nature of the conference, these activities are distributed over three blocks, such that every activity occurs twice and is accessible from any time zone. The full programme of the workshop can be found at https://blackboxnlp.github.io. The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? 
Jasmijn Bastings and Katja Filippova The shared task paper selected to give a demo presentation is: diagNNose: A Library for Neural Activation Analysis Jaap Jumelet All other papers in these proceedings, as well as the nine accepted abstracts, are presented at the poster sessions of the conference. A selection of related EMNLP-Findings papers is also presented at the poster sessions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conference Program", "sec_num": null } ], "bib_entries": {}, "ref_entries": { "TABREF1": { "num": null, "text": "BERTering RAMS: What and How Much does BERT Already Know About Event Arguments? -A Study on the RAMS Dataset Varun Gangal and Eduard Hovy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1", "type_str": "table", "content": "