{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:12:45.920073Z" }, "title": "Clustering Examples in Multi-Dataset NLP Benchmarks with Item Response Theory", "authors": [ { "first": "Pedro", "middle": [], "last": "Rodriguez", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Phu", "middle": [ "Mon" ], "last": "Htut", "suffix": "", "affiliation": {}, "email": "" }, { "first": "John", "middle": [ "P" ], "last": "Lalor", "suffix": "", "affiliation": {}, "email": "john.lalor@nd.edu" }, { "first": "Joao", "middle": [], "last": "Sedoc", "suffix": "", "affiliation": {}, "email": "jsedoc@stern.nyu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In natural language processing, multi-dataset benchmarks for common tasks (e.g., Su-perGLUE for natural language inference and MRQA for question answering) have risen in importance. Invariably, tasks and individual examples vary in difficulty. Recent analysis methods infer properties of examples such as difficulty. In particular, Item Response Theory (IRT) jointly infers example and model properties from the output of benchmark tasks (i.e., scores for each model-example pair). Therefore, it seems sensible that methods like IRT should be able to detect differences between datasets in a task. This work shows that current IRT models are not as good at identifying differences as we would expect, explain why this is difficult, and outline future directions that incorporate more (textual) signal from examples.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "In natural language processing, multi-dataset benchmarks for common tasks (e.g., Su-perGLUE for natural language inference and MRQA for question answering) have risen in importance. Invariably, tasks and individual examples vary in difficulty. Recent analysis methods infer properties of examples such as difficulty. In particular, Item Response Theory (IRT) jointly infers example and model properties from the output of benchmark tasks (i.e., scores for each model-example pair). Therefore, it seems sensible that methods like IRT should be able to detect differences between datasets in a task. This work shows that current IRT models are not as good at identifying differences as we would expect, explain why this is difficult, and outline future directions that incorporate more (textual) signal from examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Understanding and describing the data in natural language processing (NLP) benchmarks is crucial to ensuring their validity and reliability (Ferraro et al., 2015; Gebru et al., 2018; Bender and Friedman, 2018) . This is even more important as multi-dataset task benchmarks have-for better or worse-become the norm (Raji et al., 2021) . For example, SuperGLUE incorporates eight natural language inference (NLI) datasets (Wang et al., 2019) , and MRQA incorporates twelve question answering (QA) datasets (Fisch et al., 2019) . To better understand benchmark data, there are methods for analyzing examples in isolation (Lalor et al., 2018) , characterizing a dataset's data distribution (Swayamdipta et al., 2020) , using individual models to glean insight about datasets and examples (Feng et al., 2018) , and using many models to do the same (Rodriguez et al., 2021; Vania et al., 2021) . 
This paper investigates how effectively one method-Item Response Theory (IRT)-gives insight into multi-dataset benchmarks.", "cite_spans": [ { "start": 140, "end": 162, "text": "(Ferraro et al., 2015;", "ref_id": "BIBREF6" }, { "start": 163, "end": 182, "text": "Gebru et al., 2018;", "ref_id": null }, { "start": 183, "end": 209, "text": "Bender and Friedman, 2018)", "ref_id": "BIBREF1" }, { "start": 314, "end": 333, "text": "(Raji et al., 2021)", "ref_id": null }, { "start": 420, "end": 439, "text": "(Wang et al., 2019)", "ref_id": "BIBREF33" }, { "start": 504, "end": 524, "text": "(Fisch et al., 2019)", "ref_id": "BIBREF7" }, { "start": 618, "end": 638, "text": "(Lalor et al., 2018)", "ref_id": "BIBREF13" }, { "start": 686, "end": 712, "text": "(Swayamdipta et al., 2020)", "ref_id": "BIBREF31" }, { "start": 784, "end": 803, "text": "(Feng et al., 2018)", "ref_id": "BIBREF5" }, { "start": 843, "end": 867, "text": "(Rodriguez et al., 2021;", "ref_id": "BIBREF27" }, { "start": 868, "end": 887, "text": "Vania et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Outside of NLP, IRT provides insight into educational test questions (Lord et al., 1968; Baker, 2001) and political ideologies of legislators (Poole and Rosenthal, 2017) . In NLP, IRT is used to identify helpful training examples (Lalor and Yu, 2020), detect errors in evaluation examples (Rodriguez et al., 2021) , and estimate the future utility of examples in benchmarks (Vania et al., 2021) . The goal of this paper is to identify the characteristics of multidataset benchmarks that IRT methods focus on. Are certain datasets easier than others? Can clustering highlight dataset or example properties?", "cite_spans": [ { "start": 69, "end": 88, "text": "(Lord et al., 1968;", "ref_id": "BIBREF17" }, { "start": 89, "end": 101, "text": "Baker, 2001)", "ref_id": "BIBREF0" }, { "start": 142, "end": 169, "text": "(Poole and Rosenthal, 2017)", "ref_id": "BIBREF23" }, { "start": 289, "end": 313, "text": "(Rodriguez et al., 2021)", "ref_id": "BIBREF27" }, { "start": 374, "end": 394, "text": "(Vania et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We hypothesize that examples from similar datasets will cluster together as they should have similar IRT characteristics (such as difficulty level) compared to examples from other datasets. However, we do not see any distinct dataset-based clusters in our results. Instead, we find that IRT characteristics tend to group the examples of similar labels in the same clusters, suggesting that some label types are more difficult or more discriminating regardless of the datasets they belong to. In the rest of this paper, we describe IRT methods for benchmark analysis ( \u00a72), our clustering methods ( \u00a73), and our experimental results ( \u00a74). 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we adapt IRT methods to explain why benchmarks examples are difficult, rather than solely assigning them difficulty values. 
This section describes the IRT models in our experiments and the test-bed we use in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "IRT for Benchmark Analysis", "sec_num": "2" }, { "text": "IRT is a probabilistic framework that models the likelihood that subject j (e.g., a model) answers test item i (e.g., a sentiment prediction) correctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Item Response Theory Models", "sec_num": "2.1" }, { "text": "Sentiment 24,620 Amazon reviews (Zhang et al., 2015) , Yelp reviews, * SST-3 (Socher et al., 2013) , and Dynasent Rounds 1 & 2 NLI 63,018 ANLI rounds one through three (Nie et al., 2020) , HANS (McCoy et al., 2019) , MNLI matched & MNLI mismatched (Williams et al., 2018) , SNLI (Bowman et al., 2015) , and Winogender (Rudinger et al., 2018) * https://www.yelp.com/dataset Table 1 : Details of the datasets used in our experiments.", "cite_spans": [ { "start": 32, "end": 52, "text": "(Zhang et al., 2015)", "ref_id": "BIBREF35" }, { "start": 77, "end": 98, "text": "(Socher et al., 2013)", "ref_id": "BIBREF29" }, { "start": 168, "end": 186, "text": "(Nie et al., 2020)", "ref_id": "BIBREF21" }, { "start": 189, "end": 214, "text": "HANS (McCoy et al., 2019)", "ref_id": null }, { "start": 248, "end": 271, "text": "(Williams et al., 2018)", "ref_id": "BIBREF34" }, { "start": 274, "end": 300, "text": "SNLI (Bowman et al., 2015)", "ref_id": null } ], "ref_spans": [ { "start": 373, "end": 380, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Task N Datasets", "sec_num": null }, { "text": "Likelihood of correct answer for subject j on item i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task N Datasets", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u23df \u23de\u23de \u23df p(y ij = 1| \u03b3 i , \u03b2 i , \u03bb i , \u03b8 j ) = \u03bb i 1 + e \u2212 \u03b3 i ( \u03b8 j \u2212 \u03b2 i )", "eq_num": "(1)" } ], "section": "Task N Datasets", "sec_num": null }, { "text": "Discriminability of item i Ability of subject j Difficulty of item i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task N Datasets", "sec_num": null }, { "text": "The likelihood of a correct response (Equation 1) is modeled as a relationship between the difficulty (\u03b2 i ) of an item, its discriminability (\u03b3 i ), its feasibility (\u03bb i ), and the subject's ability (\u03b8 j ). Typically, \u03b8 j and \u03b2 i are unconstrained, \u03bb i is between zero and one, and \u03b3 i is non-negative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task N Datasets", "sec_num": null }, { "text": "This model is a four parameter (4PL) IRT model (Equation 1) and while complex, easily simplifies to simpler models. 2 For example, when \u03bb i = 1 and \u03b3 i = 1 this is a 1PL model. In this case, the difference between subject ability and item difficulty (\u03b8 j \u2212 \u03b2 i ) determines the likelihood of a correct answer: as subject ability increases, the likelihood of a correct response increases. When only \u03bb i = 1, this is a 2PL model as in topic modeling experiments ( \u00a74.2). IRT parameters can also be multidimensional. In two experimental setups ( \u00a74.1 and \u00a7A), we use a 2PL model (\u03bb i = 1) where \u03b3 i , \u03b2 i , and \u03b8 j are multidimensional. 
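To make Equation 1 concrete, p(y_ij = 1 | gamma_i, beta_i, lambda_i, theta_j) = lambda_i / (1 + exp(-gamma_i (theta_j - beta_i))), the following minimal sketch (illustrative only, not the py-irt implementation; the function and variable names are ours) computes the 4PL response probability and exposes its 2PL and 1PL special cases:

```python
import numpy as np

def p_correct(theta, beta, gamma=1.0, lam=1.0):
    """4PL IRT (Equation 1): probability that a subject with ability theta
    answers an item with difficulty beta, discriminability gamma, and
    feasibility lam correctly. With lam = 1 this reduces to a 2PL model;
    with lam = 1 and gamma = 1 it reduces to a 1PL model. The
    multidimensional variants used later make theta, beta, and gamma
    vector-valued (not shown here)."""
    return lam / (1.0 + np.exp(-gamma * (theta - beta)))

# A strong subject (theta = 1.5) on an easy item (beta = -0.5):
print(p_correct(theta=1.5, beta=-0.5, gamma=1.2, lam=0.95))  # ~0.87
```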
We fit all models with py-irt (Lalor and Rodriguez, 2022).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task N Datasets", "sec_num": null }, { "text": "Ideally, IRT methods should generalize across multiple datasets, tasks, and models. To accomplish this while minimizing engineering overhead, we use data from dynabench.org )-a dynamic benchmark of multiple tasks, datasets, and model submissions ( each task, there are seven models: a majority baseline (always positive), ALBERT (Lan et al., 2020) , BERT (Devlin et al., 2019) , DeBERTa (He et al., 2020) , FastText , RoBERTa (Liu et al., 2019) , and T5 (Raffel et al., 2020) . In experiments, IRT infers parameters from the subject-item (i.e., modelexample) matrix where entries are one if the subject answered the item correctly and zero otherwise.", "cite_spans": [ { "start": 329, "end": 347, "text": "(Lan et al., 2020)", "ref_id": "BIBREF15" }, { "start": 355, "end": 376, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 387, "end": 404, "text": "(He et al., 2020)", "ref_id": "BIBREF9" }, { "start": 426, "end": 444, "text": "(Liu et al., 2019)", "ref_id": "BIBREF16" }, { "start": 454, "end": 475, "text": "(Raffel et al., 2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Benchmark Data", "sec_num": "2.2" }, { "text": "IRT analysis offers a way to assign properties like difficulty and discriminability to examples, but does little to explain why a particular example may be hard or easy. Next, we identify interpretable features that might explain IRT parameter values (e.g., label, topics, and embeddings).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Benchmark Data", "sec_num": "2.2" }, { "text": "This section explains the methods that our experiments ( \u00a74) use to interpret IRT parameters. These methods fall into two categories: (1) methods that correlate examples' IRT parameters with dataset or label features and (2) methods that correlate derived textual information with IRT parameters (e.g., topic models or embeddings).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpreting IRT Parameters", "sec_num": "3" }, { "text": "Intuitively, test instances-be they NLI examples or SAT questions-can be difficult along more than one dimension. An example might focus on testing commonsense reasoning instead of testing background knowledge. Therefore, it is sensible for IRT models to learn multidimensional parameters, but do different difficulty dimensions align with our intuitions on what might make examples easier or harder? To interpret evaluation data with multidimensional IRT, we: (1) train multidimensional IRT models, 4 (2) use t-SNE for dimensionality reduction (Poli\u010dar et al., 2019), (3) plot the resulting points in 2D space, and (4) color the points by characteristics of each example such as the classification label or source dataset ( \u00a74.1).", "cite_spans": [ { "start": 500, "end": 501, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Multidimensional IRT Clustering", "sec_num": "3.1" }, { "text": "Our next method is based on the intuition that textual information-in particular topical associations-affects example difficulty. If true, topical associations should correlate with IRT parameters. To test this, we fit a topic model to the five datasets in the Dynabench sentiment task (Table 1). 
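As an illustration of this procedure, the sketch below uses scikit-learn's LDA as a stand-in topic model (our experiments use MALLET, as described next) and assumes the example texts and their per-example IRT difficulties have already been loaded; all variable and function names are hypothetical:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def top_topic_per_example(texts, n_topics=5, seed=0):
    """Fit a topic model and return each example's highest-scoring topic."""
    counts = CountVectorizer(stop_words="english", min_df=5).fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    doc_topics = lda.fit_transform(counts)   # rows: topic proportions per example
    return doc_topics.argmax(axis=1)

# Joint view of topic assignment and difficulty, in the spirit of Figure 3:
# top = top_topic_per_example(texts)
# for k in range(5):
#     print(k, difficulty[top == k].mean(), (top == k).sum())
```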
To avoid having too many topics to interpret, we fit the model with five topics using the mallet software package (McCallum, 2002) . 5 We obtain IRT parameters from a one dimensional, 2PL IRT model (Equation 1). As with multidimensional IRT, we jointly visualize an interpretable feature (topic assignment) and IRT parameter values ( \u00a74.2).", "cite_spans": [ { "start": 411, "end": 427, "text": "(McCallum, 2002)", "ref_id": "BIBREF18" }, { "start": 430, "end": 431, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Topic Models", "sec_num": "3.2" }, { "text": "If textual information is correlated item difficulty, then transformer models like BERT should also be able to predict IRT parameters given the item text. We test this idea by fine-tuning a BERT model (Devlin et al., 2019) with regression heads to predict the difficulty and discriminability parameters of a 4PL IRT model (Equation 1). As with the multidimensional clustering method, we also visualize embeddings from BERT-base ( \u00a74.3). The goal of our visualizations is to test: (1) how BERT embeddings change with IRT fine-tuning and (2) whether clusters correspond to interpretable instance features (e.g., label or source dataset).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using BERT to Predict IRT Parameters", "sec_num": "3.3" }, { "text": "Next, we discuss what each interpretation method ( \u00a73) tells us about IRT parameter values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Using the subject-item response matrix from Dynabench, we fit a multidimensional 2PL model, cluster with t-SNE, and color the datapoints by either dataset name or the example label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multidimensional IRT Clustering", "sec_num": "4.1" }, { "text": "When we run t-SNE on the difficulty parameters of a 5-dimensional 2PL model for sentiment datasets and color-code by dataset, we do not observe any distinct dataset-based clusters (Figure 1a ). However, when we color-code by label, we observe more well-defined clusters, especially for the positive and negative labels (Figure 1b ). This result suggests that some label types are more difficult for models to learn or more discriminating among the models regardless of which dataset they belong to. While the lack of dataset-based clustering is a negative result, label-based trends indicate consistency among items with the same label in terms of learned IRT parameters. However, the lack of breadth within a label suggests that each label can only accurately estimate a narrow range of ability levels in models. 6", "cite_spans": [], "ref_spans": [ { "start": 180, "end": 190, "text": "(Figure 1a", "ref_id": null }, { "start": 319, "end": 329, "text": "(Figure 1b", "ref_id": null } ], "eq_spans": [], "section": "Multidimensional IRT Clustering", "sec_num": "4.1" }, { "text": "We first validate that the topics inferred by the topic model (Table 2) are reasonable through manual inspection. The topic model successfully identifies at least five distinct review themes: media (e.g., movies, music), hotels, books, products, and food. Having verified that the topic model is at least reasonable, we next inspect the relationship between the highest scoring topic per example and its difficulty ( Figure 3) . We see that certain topics are more prevalent at different levels of difficulty; however, there is no clear delineation between topics and difficulties. 
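One simple way to quantify this impression is a one-way variance decomposition of difficulty over top-topic assignments; the sketch below assumes per-example difficulty values and top-topic labels are available as arrays (hypothetical names, not part of our released code):

```python
import numpy as np

def difficulty_variance_explained_by_topic(difficulty, top_topic):
    """Eta-squared: fraction of difficulty variance explained by grouping
    examples by their top topic. Values near 0 mean topic membership
    tells us little about difficulty."""
    grand_mean = difficulty.mean()
    ss_total = np.sum((difficulty - grand_mean) ** 2)
    ss_between = 0.0
    for k in np.unique(top_topic):
        group = difficulty[top_topic == k]
        ss_between += len(group) * (group.mean() - grand_mean) ** 2
    return ss_between / ss_total

# eta2 = difficulty_variance_explained_by_topic(difficulty, top_topic)
```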
This suggests that at least this topic model alone does not fully explain difficulty. 7", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 71, "text": "(Table 2)", "ref_id": null }, { "start": 417, "end": 426, "text": "Figure 3)", "ref_id": null } ], "eq_spans": [], "section": "How Do Topics Relate to Item Difficulty?", "sec_num": "4.2" }, { "text": "Figure 2 compares t-SNE visualizations of embeddings from a normal BERT model as opposed to a BERT model that is fine-tuned to predict 4PL difficulty and discriminability parameters from the sentiment task. When points are color coded by label, the embeddings of the IRT fine-tuned BERT model clearly form label-based clusters. In contrast, we do not observe clear patterns or clusters for the embeddings of the vanilla BERT model. This indicates separation of labels by IRT parameters. 8 This suggests that IRT parameters are correlated with dataset labels, and the BERT embeddings learned on IRT parameters encode label properties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "How Does IRT Difficulty Influence BERT?", "sec_num": "4.3" }, { "text": "It is generally agreed that some datasets are more challenging than others. Therefore, items in the Figure 1 : t-SNE visualization of sentiment datasets on the 5-dimensional 2PL IRT difficulty parameter, colored by dataset (a) and by label (b). Coloring by dataset does not result in easily discernable clusters; coloring by label produces well separated clusters for positive and neutral labels. The negation cluster is distinct but has more intruders than other labels. This suggests example label is more correlated with difficulty than source dataset.", "cite_spans": [], "ref_spans": [ { "start": 100, "end": 108, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.4" }, { "text": "Topic ID Topic Words in Dynabench Sentiment Datasets 0 movie num good album music great film songs love time 1 num place time room back service people hotel didn good 2 book read story good books num reading great time characters 3 num product great good bought work time buy back price 4 num food good place great service ordered back time restaurant Table 2 : We train a five-topic, topic model on the Dynabench sentiment data (Table 1) . Topics correspond to five review themes: media, hotel, book, product, and food. Topic IDs and colors correspond to Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 352, "end": 359, "text": "Table 2", "ref_id": null }, { "start": 429, "end": 438, "text": "(Table 1)", "ref_id": "TABREF0" }, { "start": 556, "end": 564, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.4" }, { "text": "same dataset should have similar IRT characteristics. However, our results indicate that benchmark datasets display more depth than breadth in terms of example IRT parameters. For a multi-dataset task such as NLI, examples clustered by IRT parameters group according to shared labels, not shared datasets. While learned latent topics show some variation across IRT difficulty, it is not clearly evident that certain topics are more difficult than others. While we cannot conclude that certain topics or datasets are more difficult than others, our results suggest that certain labels are.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.4" }, { "text": "In this work, our expectation was that datasets would be separable by IRT-learned parameters. 
However, we found that clustering was more interpretable at the label level than the dataset level. Future work in IRT should better jointly model the characteristics of NLP data as opposed to our methods that train these components in isolation. For example, it may be that the signal provided by dataset properties is second order to labels and our methods may not effectively model this (potential) multi-level relationship. Multidimensional IRT models that encode relationships between difficulty dimensions ought to better fit the data (e.g., predicting sentiment of restaurant reviews should overlap with hotel reviews, as they both involve service). If these models succeed, they should aid the interpretation of benchmarks. Lastly, as models provide more information through initiatives like Model Cards (Mitchell et al., 2019) , IRT could jointly model these properties with latent ability parameters to glean insights into which differences in models yield empirical impacts. Figure 2 : Clustering results for the Dynasent datasets using a BERT embeddings from a BERT model used to predict IRT parameters. 2a: Cluster by labels using untrained BERT. 2b: Cluster by labels using trained BERT. Without fine-tuning, there are no clear patterns between BERT embeddings and label. However, fine-tuning to predict IRT parameters shows clear clustering patterns between embeddings and labels. This suggests that embeddings learned to predict IRT parameters can encode the properties of dataset labels. Example Topic Figure 3 : To observe the relationship between topics and IRT difficulty, we plot the un-normalized histogram of example difficulty (top) and the normalized difficulty partitioned by topic (bottom). Topic 4 in green (food reviews) is more prevalent with lower difficulty examples, while topic 1 in orange (hotel reviews) is more prevalent in higher difficulty examples.", "cite_spans": [ { "start": 906, "end": 929, "text": "(Mitchell et al., 2019)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 1080, "end": 1088, "text": "Figure 2", "ref_id": null }, { "start": 1613, "end": 1621, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "A Additional Visualizations", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "5" }, { "text": "In Figure 4a , we run t-SNE on the discriminability parameters of a 5-dimensional 2PL model learned for the Dynasent datasets and color-code by data set. We do not observe any distinct dataset-based clusters. We repeat the same visualizations using difficulty and discriminability parameters of a 3-dimensional 2PL model learned on Dynasent datasest (Figure 5a and 5c) , a 3-dimensional 2PL model learned on NLI datasets (Figure 7a and 7c) , and an 8-dimensional 2PL model learned on NLI datasets (Figure 6a and 6c) . 
In all these experiments, we do not observe any distinct dataset-based cluster.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 12, "text": "Figure 4a", "ref_id": null }, { "start": 350, "end": 368, "text": "(Figure 5a and 5c)", "ref_id": null }, { "start": 421, "end": 439, "text": "(Figure 7a and 7c)", "ref_id": null }, { "start": 497, "end": 515, "text": "(Figure 6a and 6c)", "ref_id": null } ], "eq_spans": [], "section": "A.1 Dataset Based Clustering", "sec_num": null }, { "text": "In Figure 4b , we run t-SNE on the discriminability parameters of a 5-dimensional 2PL model learned for the Dynasent datasets and color-code by dataset labels. We repeat the same visualizations using difficulty and discriminability parameters of a 3-dimensional 2PL model learned on Dynasent datasest (Figure 5b and 5d) , a 3-dimensional 2PL model learned on NLI datasets (Figure 7b and 7d) , and an 8-dimensional 2PL model learned on NLI datasets (Figure 6b and 6d) . In all these experiments, we observe clearer clusters compared to Section A.1. ", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 12, "text": "Figure 4b", "ref_id": null }, { "start": 301, "end": 319, "text": "(Figure 5b and 5d)", "ref_id": null }, { "start": 372, "end": 390, "text": "(Figure 7b and 7d)", "ref_id": null }, { "start": 448, "end": 466, "text": "(Figure 6b and 6d)", "ref_id": null } ], "eq_spans": [], "section": "A.2 Label Based Clustering", "sec_num": null }, { "text": "Code and data at www.pedro.ai/multidim-irt.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We set the dimension of the IRT model to the number of datasets per task (5 for sentiment and 8 for NLI), and the number of labels in each task (3 for both sentiment and NLI).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For model training, we use an optimization interval of 10 with 3,000 iterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We performed additional clustering analyses on the sentiment and NLI datasets, varying the IRT models learned and the IRT parameters used for clustering (Appendix A). In all cases we observed more well-defined label-based clusters than dataset-based clusters.7 We also replicate the plot with discriminability, but do not observe any visually discernible patterns.8 IRT-based distributions of examples(Figure 8in the appendices) show that there are clearer patterns with respect to IRT when we group the examples by their dataset labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Basics of Item Response Theory", "authors": [ { "first": "B", "middle": [], "last": "Frank", "suffix": "" }, { "first": "", "middle": [], "last": "Baker", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frank B Baker. 2001. The Basics of Item Response Theory. 
ERIC.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science", "authors": [ { "first": "M", "middle": [], "last": "Emily", "suffix": "" }, { "first": "Batya", "middle": [], "last": "Bender", "suffix": "" }, { "first": "", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "587--604", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily M Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "Gabor", "middle": [], "last": "Samuel R Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of Empirical Methods in Natural Lan- guage Processing. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Conference of the North American Chap- ter of the Association for Computational Linguistics. 
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Pathologies of neural models make interpretations difficult", "authors": [ { "first": "Eric", "middle": [], "last": "Shi Feng", "suffix": "" }, { "first": "Alvin", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "I", "middle": [ "I" ], "last": "Grissom", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Rodriguez", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" } ], "year": 2018, "venue": "Proceedings of Empirical Methods in Natural Language Processing. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretations difficult. In Proceedings of Empirical Methods in Natural Language Processing. Association for Com- putational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A survey of current datasets for vision and language research", "authors": [ { "first": "Francis", "middle": [], "last": "Ferraro", "suffix": "" }, { "first": "Nasrin", "middle": [], "last": "Mostafazadeh", "suffix": "" }, { "first": "Ting-Hao", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2015, "venue": "Proceedings of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francis Ferraro, Nasrin Mostafazadeh, Ting-Hao Huang, Lucy Vanderwende, Jacob Devlin, Michel Galley, and Margaret Mitchell. 2015. A survey of current datasets for vision and language research. In Pro- ceedings of Empirical Methods in Natural Language Processing. Association for Computational Linguis- tics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "MRQA 2019 shared task: Evaluating generalization in reading comprehension", "authors": [ { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Talmor", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Minjoon", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Machine Reading for Question Answering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In Proceedings of the 2nd Work- shop on Machine Reading for Question Answering. 
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "DeBERTa: Decoding-enhanced BERT with disentangled attention", "authors": [ { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. DeBERTa: Decoding-enhanced BERT with disentangled attention.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the European Chap- ter of the Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Dynabench: Rethinking benchmarking in NLP", "authors": [ { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Max", "middle": [], "last": "Bartolo", "suffix": "" }, { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Divyansh", "middle": [], "last": "Kaushik", "suffix": "" }, { "first": "Atticus", "middle": [], "last": "Geiger", "suffix": "" }, { "first": "Zhengxuan", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Bertie", "middle": [], "last": "Vidgen", "suffix": "" }, { "first": "Grusha", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Pratik", "middle": [], "last": "Ringshia", "suffix": "" }, { "first": "Zhiyi", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Tristan", "middle": [], "last": "Thrush", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "Pontus", "middle": [], "last": "Stenetorp", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" } ], "year": 2021, "venue": "Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vid- gen, Grusha Prasad, Amanpreet Singh, Pratik Ring- shia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. 2021. Dynabench: Rethinking benchmarking in NLP. 
In Conference of the North American Chapter of the As- sociation for Computational Linguistics. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "py-irt : A scalable item response theory library for python", "authors": [ { "first": "P", "middle": [], "last": "John", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Lalor", "suffix": "" }, { "first": "", "middle": [], "last": "Rodriguez", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2203.01282" ] }, "num": null, "urls": [], "raw_text": "John P. Lalor and Pedro Rodriguez. 2022. py-irt : A scal- able item response theory library for python. arXiv preprint arXiv:2203.01282.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Understanding deep learning performance through an examination of test set difficulty: A psychometric case study", "authors": [ { "first": "P", "middle": [], "last": "John", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Lalor", "suffix": "" }, { "first": "Tsendsuren", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Munkhdalai", "suffix": "" }, { "first": "", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John P Lalor, Hao Wu, Tsendsuren Munkhdalai, and Hong Yu. 2018. Understanding deep learning per- formance through an examination of test set diffi- culty: A psychometric case study. In Proceedings of Empirical Methods in Natural Language Processing. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Dynamic data selection for curriculum learning via ability estimation", "authors": [ { "first": "P", "middle": [], "last": "John", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Lalor", "suffix": "" }, { "first": "", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John P Lalor and Hong Yu. 2020. Dynamic data se- lection for curriculum learning via ability estimation. In Findings of the Association for Computational Linguistics: EMNLP. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "ALBERT: A lite BERT for self-supervised learning of language representations", "authors": [ { "first": "Zhenzhong", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Mingda", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. 
In Proceedings of the International Conference on Learning Repre- sentations.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "RoBERTa: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Statistical theories of mental test scores", "authors": [ { "first": "M", "middle": [], "last": "F M Lord", "suffix": "" }, { "first": "Allan", "middle": [], "last": "R Novick", "suffix": "" }, { "first": "", "middle": [], "last": "Birnbaum", "suffix": "" } ], "year": 1968, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F M Lord, M R Novick, and Allan Birnbaum. 1968. Statistical theories of mental test scores.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Mallet: A machine learning for language toolkit", "authors": [ { "first": "Andrew Kachites", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. http://mallet.cs.umass.edu.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference", "authors": [ { "first": "Tom", "middle": [], "last": "Mccoy", "suffix": "" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "" }, { "first": "Tal", "middle": [], "last": "Linzen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Association for Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuris- tics in natural language inference. In Proceedings of the Association for Computational Linguistics. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Model cards for model reporting", "authors": [ { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Simone", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zaldivar", "suffix": "" }, { "first": "Parker", "middle": [], "last": "Barnes", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vasserman", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Spitzer", "suffix": "" }, { "first": "Deborah", "middle": [], "last": "Inioluwa", "suffix": "" }, { "first": "Timnit", "middle": [], "last": "Raji", "suffix": "" }, { "first": "", "middle": [], "last": "Gebru", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Account- ability, and Transparency. Association for Comput- ing Machinery.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Adversarial NLI: A new benchmark for natural language understanding", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Association for Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversar- ial NLI: A new benchmark for natural language un- derstanding. In Proceedings of the Association for Computational Linguistics. Association for Compu- tational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "openTSNE: a modular python library for t-sne dimensionality reduction and embedding", "authors": [ { "first": "G", "middle": [], "last": "Pavlin", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Poli\u010dar", "suffix": "" }, { "first": "Bla\u017e", "middle": [], "last": "Stra\u017ear", "suffix": "" }, { "first": "", "middle": [], "last": "Zupan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pavlin G. Poli\u010dar, Martin Stra\u017ear, and Bla\u017e Zupan. 2019. 
openTSNE: a modular python library for t-sne dimen- sionality reduction and embedding.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Ideology & congress: A political economic history of roll call voting", "authors": [ { "first": "T", "middle": [], "last": "Keith", "suffix": "" }, { "first": "Howard", "middle": [], "last": "Poole", "suffix": "" }, { "first": "", "middle": [], "last": "Rosenthal", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keith T Poole and Howard Rosenthal. 2017. Ideology & congress: A political economic history of roll call voting, 2 edition. Routledge, London, England.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "DynaSent: A dynamic benchmark for sentiment analysis", "authors": [ { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" }, { "first": "Zhengxuan", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Atticus", "middle": [], "last": "Geiger", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Association for Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Potts, Zhengxuan Wu, Atticus Geiger, and Douwe Kiela. 2021. DynaSent: A dynamic bench- mark for sentiment analysis. In Proceedings of the Association for Computational Linguistics. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Exploring the limits of transfer learning with a unified Text-to-Text transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "140", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the lim- its of transfer learning with a unified Text-to-Text transformer. Journal of Machine Learning Research, 21(140):1-67.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "2021. AI and the everything in the whole wide world benchmark", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Inioluwa Deborah Raji", "suffix": "" }, { "first": "Amandalynne", "middle": [], "last": "Bender", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Paullada", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Denton", "suffix": "" }, { "first": "", "middle": [], "last": "Hanna", "suffix": "" } ], "year": null, "venue": "NeurIPS: Datasets and Benchmarks Track", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Inioluwa Deborah Raji, Emily M Bender, Amandalynne Paullada, Emily Denton, and Alex Hanna. 2021. 
AI and the everything in the whole wide world bench- mark. In NeurIPS: Datasets and Benchmarks Track.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Evaluation examples are not equally informative: How should that change NLP leaderboards?", "authors": [ { "first": "Pedro", "middle": [], "last": "Rodriguez", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Barrow", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Hoyle", "suffix": "" }, { "first": "John", "middle": [ "P" ], "last": "Lalor", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Association for Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedro Rodriguez, Joe Barrow, Alexander Hoyle, John P. Lalor, Robin Jia, and Jordan Boyd-Graber. 2021. Evaluation examples are not equally informative: How should that change NLP leaderboards? In Proceedings of the Association for Computational Linguistics. Association for Computational Linguis- tics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Gender bias in coreference resolution", "authors": [ { "first": "Rachel", "middle": [], "last": "Rudinger", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Naradowsky", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Leonard", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2018, "venue": "Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Conference of the North American Chapter of the Association for Computa- tional Linguistics. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Proceedings of Empirical Methods in Natural Language Processing", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of Empirical Methods in Natural Lan- guage Processing. 
Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Dataset cartography: Mapping and diagnosing datasets with training dynamics", "authors": [ { "first": "Swabha", "middle": [], "last": "Swayamdipta", "suffix": "" }, { "first": "Roy", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Lourie", "suffix": "" }, { "first": "Yizhong", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" }, { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A Smith, and Yejin Choi. 2020. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In Proceedings of Empirical Methods in Natural Lan- guage Processing. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Bowman. 2021. Comparing test sets with item response theory", "authors": [ { "first": "Clara", "middle": [], "last": "Vania", "suffix": "" }, { "first": "", "middle": [], "last": "Phu Mon", "suffix": "" }, { "first": "William", "middle": [], "last": "Htut", "suffix": "" }, { "first": "Dhara", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Richard", "middle": [ "Yuanzhe" ], "last": "Mungra", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Haokun", "middle": [], "last": "Phang", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Samuel", "middle": [ "R" ], "last": "Cho", "suffix": "" } ], "year": null, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clara Vania, Phu Mon Htut, William Huang, Dhara Mungra, Richard Yuanzhe Pang, Jason Phang, Haokun Liu, Kyunghyun Cho, and Samuel R. Bow- man. 2021. Comparing test sets with item response theory. In Proceedings of the Association for Compu- tational Linguistics. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "SuperGLUE: A stickier benchmark for General-Purpose language understanding systems", "authors": [ { "first": "Alex", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Yada", "middle": [], "last": "Pruksachatkun", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Amanpreet", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2019, "venue": "Proceedings of Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Aman- preet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. 
SuperGLUE: A stick- ier benchmark for General-Purpose language under- standing systems. In Proceedings of Advances in Neural Information Processing Systems.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A Broad-Coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A Broad-Coverage challenge corpus for sen- tence understanding through inference. In Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics. Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Character-level convolutional networks for text classification", "authors": [ { "first": "Xiang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Junbo", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2015, "venue": "Proceedings of Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. Proceedings of Advances in Neural Infor- mation Processing Systems.", "links": null } }, "ref_entries": { "FIGREF3": { "text": "T-SNE visualisation of the Dynasent datasets on the discriminability parameter of a 5-dimensional 2PL model: (a) marked by dataset, (b) marked by label.", "type_str": "figure", "num": null, "uris": null }, "FIGREF4": { "text": "T-SNE visualisation of the Dynasent datasets on the parameters of a 3-dimensional 2PL model: (a) Difficulty marked by dataset, (b) Difficulty marked by label, (c) Discriminability marked by dataset, (d) Discriminability marked by label. T-SNE visualisation of the NLI datasets on the parameters of a 8-dimensional 2PL model: (a) Difficulty marked by dataset, (b) Difficulty marked by label, (c) Discriminability marked by dataset, (d) Discriminability marked by label. T-SNE visualisation of the NLI datasets on the parameters of a 3-dimensional 2PL model: (a) Difficulty marked by dataset, (b) Difficulty marked by label, (c) Discriminability marked by dataset, (dDistributions of examples for the sentiment datasets (3PL model): (a) Diff by dataset, (b) Disc by dataset, (c) Diff by label, (b) Disc by label.", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "content": "
Task | N | Datasets
Sentiment | 24,620 | Amazon reviews (Zhang et al., 2015), Yelp reviews, SST-3 (Socher et al., 2013), DynaSent Rounds 1 & 2
NLI | 63,018 | ANLI rounds one through three (Nie et al., 2020), HANS (McCoy et al., 2019), MNLI matched & mismatched (Williams et al., 2018), SNLI (Bowman et al., 2015), Winogender (Rudinger et al., 2018)
", "text": "Details of the datasets used in our experiments.", "num": null, "type_str": "table", "html": null } } } }