% Note: Moved table to 01intro.tex for better placement. -SB

\section{Tasks}\label{sec:tasks}

GLUE is centered on nine English sentence understanding tasks, which cover a broad range of domains, data quantities, and difficulties.
As the goal of GLUE is to spur development of generalizable NLU systems, we design the benchmark such that good performance should require a model to share substantial knowledge (e.g., trained parameters) across all tasks, while still maintaining some task-specific components.
Though it is possible to train a single model for each task and evaluate the resulting set of models on this benchmark, 
we expect that our inclusion of several data-scarce tasks will ultimately render this approach uncompetitive.
We describe the tasks below and in \autoref{tab:tasks}.  Appendix \ref{sec:apdx_data} includes additional details. Unless otherwise mentioned, tasks are evaluated on accuracy and are balanced across classes.

\subsection{Single-Sentence Tasks}

\paragraph{CoLA}
The Corpus of Linguistic Acceptability \citep{warstadt2018neural}
%consists of examples of expert English sentence acceptability judgments drawn from 22 books and journal articles on linguistic theory.
consists of English acceptability judgments drawn from books and journal articles on linguistic theory.
Each example is a sequence of words annotated with whether it is a grammatical English sentence. 
%Superficially, this data is similar to our analysis data in that it is constructed to demonstrate potentially subtle and difficult contrasts.
%Judgments of this particular kind are the primary form of evidence in syntactic theory \citep{schutze-96}, so a machine learning system capable of predicting them reliably would offer potentially substantial evidence on questions of language learnability and innate bias. 
Following the authors, we use the Matthews correlation coefficient \citep{matthews1975comparison} as the evaluation metric, which measures performance on unbalanced binary classification and ranges from $-1$ to $1$, with $0$ being the performance of uninformed guessing.
We use the standard test set, for which we obtained private labels from the authors.
We report a single performance number on the combination of in- and out-of-domain sections of the test set.
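For reference, with $TP$, $TN$, $FP$, and $FN$ denoting the entries of the binary confusion matrix, the coefficient is
\[
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}.
\]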

\paragraph{SST-2}
The Stanford Sentiment Treebank \citep{socher2013recursive} consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. We use the two-way (\textit{positive}/\textit{negative}) class split, and use only sentence-level labels.

\subsection{Similarity and Paraphrase Tasks}

\paragraph{MRPC}
The Microsoft Research Paraphrase Corpus \citep{dolan2005automatically} is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent. Because the classes are imbalanced (68\% positive, 32\% negative), we follow common practice and report both accuracy and F1 score.

\paragraph{QQP}
The Quora Question Pairs\footnote{ \href{https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs}{\texttt{data.quora.com/\allowbreak First-\allowbreak Quora-\allowbreak Dataset-\allowbreak Release-Question-Pairs}}} dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent. As in MRPC, the class distribution in QQP is unbalanced (37\% positive, 63\% negative), so we report both accuracy and F1 score. We use the standard test set, for which we obtained private labels from the authors.

\paragraph{STS-B}
The Semantic Textual Similarity Benchmark \citep{cer2017semeval} is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. 
Each pair is human-annotated with a similarity score from 1 to 5; the task is to predict these scores.
Following common practice, we evaluate using Pearson and Spearman correlation coefficients.

\subsection{Inference Tasks}

\paragraph{MNLI}
%The Multi-Genre Natural Language Inference Corpus \citep{DBLP:journals/corr/WilliamsNB17} is a crowd-sourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis, contradicts the hypothesis, or neither (\textit{neutral}). The premise sentences are gathered from a diverse set of sources, including transcribed speech, popular fiction, and government reports. The test set is broken into two sections: \textit{matched}, which is drawn from the same sources as the training set, and \textit{mismatched}, which uses different sources and thus requires domain transfer. We use the standard test set, for which we obtained labels privately from the authors, and evaluate on both sections. 
The Multi-Genre Natural Language Inference Corpus \citep{DBLP:journals/corr/WilliamsNB17} is a crowd-sourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (\textit{entailment}), contradicts the hypothesis (\textit{contradiction}), or neither (\textit{neutral}). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. We use the standard test set, for which we obtained private labels from the authors, and evaluate on both the \textit{matched} (in-domain) and \textit{mismatched} (cross-domain) sections. We also use and recommend the SNLI corpus \citep{bowman2015large} as 550k examples of auxiliary training data. %\citep{chen2017recurrent,gong2018nli}.

\paragraph{QNLI}
The Stanford Question Answering Dataset (\citealt{rajpurkar2016squad}) is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). We convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
This process of recasting existing datasets into NLI is similar to methods introduced in \citet{white2017inference} and expanded upon in \citet{demszky2018transforming}.
We call the converted dataset QNLI (Question-answering NLI)\footnote{An earlier release of QNLI had an artifact where the task could be modeled and solved as an easier task than we describe here. We have since released an updated version of QNLI that removes this possibility.}.
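The conversion described above can be sketched in a few lines; the tokenization, overlap measure, and threshold below are illustrative choices of ours, not the exact GLUE pipeline.

```python
# Illustrative sketch of the QNLI conversion (not the exact GLUE pipeline):
# pair the question with every sentence in its paragraph, drop pairs with low
# lexical overlap, and label a pair positive iff the sentence contains the
# answer. The overlap measure and threshold are hypothetical choices.

def lexical_overlap(question, sentence):
    """Fraction of question tokens that also appear in the sentence."""
    q_tokens = set(question.lower().split())
    s_tokens = set(sentence.lower().split())
    return len(q_tokens & s_tokens) / max(len(q_tokens), 1)

def make_qnli_pairs(question, sentences, answer_idx, min_overlap=0.3):
    """Yield (question, sentence, label) triples, dropping low-overlap pairs."""
    for i, sentence in enumerate(sentences):
        if lexical_overlap(question, sentence) < min_overlap:
            continue
        label = "entailment" if i == answer_idx else "not_entailment"
        yield question, sentence, label
```

Filtering on overlap keeps plausible distractor sentences, so word matching alone is not enough to solve the resulting task.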

\paragraph{RTE}
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. We combine the data from RTE1 \citep{dagan2006pascal}, RTE2 \citep{bar2006second}, RTE3 \citep{giampiccolo2007third}, and RTE5 \citep{bentivogli2009fifth}.\footnote{RTE4 is not publicly available, while RTE6 and RTE7 do not fit the standard NLI task.} Examples are constructed based on news and Wikipedia text. 
For consistency, we convert all datasets to a two-class split: for three-class datasets, we collapse \textit{neutral} and \textit{contradiction} into \textit{not\_entailment}.
% The task is to predict if the premise entails the hypothesis. 

\begin{table*}[t]
\small
\centering
\begin{tabular}{ll}
\toprule
\textbf{Coarse-Grained Categories} & \textbf{Fine-Grained Categories} \\
\midrule
\multirow{2}{*}{Lexical Semantics} & Lexical Entailment, Morphological Negation, Factivity, \\ & Symmetry/Collectivity, Redundancy, Named Entities, Quantifiers \\
\midrule
\multirow{4}{*}{Predicate-Argument Structure} & Core Arguments, Prepositional Phrases, Ellipsis/Implicits, \\ & Anaphora/Coreference, Active/Passive, Nominalization, \\
& Genitives/Partitives, Datives, Relative Clauses, \\
& Coordination Scope, Intersectivity, Restrictivity \\
\midrule
\multirow{3}{*}{Logic} & Negation, Double Negation, Intervals/Numbers, Conjunction, Disjunction, \\ & Conditionals, Universal, Existential, Temporal, Upward Monotone, \\ & Downward Monotone, Non-Monotone \\
\midrule
Knowledge & Common Sense, World Knowledge\\

\bottomrule
\end{tabular}
\caption{The types of linguistic phenomena annotated in the diagnostic dataset, organized under four major categories. For a description of each phenomenon, see \autoref{sec:apdx_diagnostic}.}
\label{tab:analysis-categories}
\end{table*}

\paragraph{WNLI}
The Winograd Schema Challenge \citep{levesque2011winograd} is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. 
The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. 
To convert the problem into sentence pair classification, we construct sentence pairs by replacing the ambiguous pronoun with each possible referent.
%The task (a slight relaxation of the original Winograd Schema Challenge) is to predict if the sentence with the pronoun substituted is entailed by the original sentence.
The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence.
We use a small evaluation set consisting of new examples derived from fiction books\footnote{See similar examples at 
\href{https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html}{\tt cs.nyu.edu/\allowbreak faculty/\allowbreak davise/\allowbreak papers/\allowbreak WinogradSchemas/\allowbreak WS.html}} that was shared privately by the authors of the original corpus. 
While the included training set is balanced between two classes, %(\textit{entailment} and \textit{not\_entailment})
the test set is imbalanced between them (35\% entailment, 65\% not entailment). As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task.
We call the converted dataset WNLI (Winograd NLI).
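The substitution step above can be illustrated with a short sketch; the example sentence, candidate referents, and whitespace-based matching are simplifications of ours, not the pipeline actually used to build WNLI.

```python
# Hedged illustration of the WNLI conversion: substitute each candidate
# referent for the ambiguous pronoun to form one hypothesis per candidate.
# Matching on space-padded tokens avoids replacing "it" inside words like
# "fit"; a real pipeline would handle morphology and casing more carefully.

def make_wnli_pairs(sentence, pronoun, candidates, correct):
    """Return (premise, hypothesis, label) triples, one per candidate."""
    pairs = []
    for cand in candidates:
        hypothesis = sentence.replace(f" {pronoun} ", f" {cand} ", 1)
        label = "entailment" if cand == correct else "not_entailment"
        pairs.append((sentence, hypothesis, label))
    return pairs
```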

% analysis examples table
\begin{table*}[t]
\small
\centering
\begin{tabularx}{\textwidth}{XXXll}
\toprule
 \textbf{Tags} & \textbf{Sentence 1} & \textbf{Sentence 2} & \textbf{Fwd} & \textbf{Bwd} \\
\midrule
%\it Active/Passive (Predicate-Argument Structure) & Cape sparrows eat seeds, along with soft plant parts and insects. & Cape sparrows are eaten by seeds, along with soft plant parts and insects. & N & N \\ \midrule
%\it Symmetry/Collectivity (Lexical Semantics), Core Args (Predicate Argument Structure) & Tulsi Gabbard disagrees with Bernie Sanders on what is the best way to deal with Bashar al-Assad. & Tulsi Gabbard and Bernie Sanders disagree on what is the best way to deal with Bashar al-Assad. & E & E \\ \midrule
%\it Named Entities (Lexical Semantics), World Knowledge (Knowledge) & Musk decided to offer up his personal Tesla roadster. & Musk decided to offer up his personal car. & E & N \\ \midrule
\it Lexical Entailment (Lexical Semantics), Downward Monotone (Logic) & The timing of the meeting has not been set, according to a Starbucks spokesperson. & The timing of the meeting has not been considered, according to a Starbucks spokesperson. & N & E \\ \midrule
\it Universal Quantifiers (Logic) & Our deepest sympathies are with all those affected by this accident. & Our deepest sympathies are with a victim who was affected by this accident. & E & N \\ \midrule
\it Quantifiers (Lexical Semantics), Double Negation (Logic) & I have never seen a hummingbird not flying. & I have never seen a hummingbird. & N & E \\
\bottomrule
\end{tabularx}
\caption{Examples from the diagnostic set. \textit{Fwd} (resp. \textit{Bwd}) denotes the label when sentence 1 (resp. sentence 2) is the premise. Labels are \textit{entailment} (E), \textit{neutral} (N), or \textit{contradiction} (C).
Examples are tagged with the phenomena they demonstrate, and each phenomenon belongs to one of four broad categories (in parentheses).
%See \autoref{tab:analysis-categories} in Appendix \ref{sec:apdx_data} for a complete tag taxonomy.
}
\label{tab:analysis-examples}
\end{table*}



\subsection{Evaluation}
The GLUE benchmark follows the same evaluation model as SemEval and Kaggle. To evaluate a system on the benchmark, one must run the system on the provided test data for the tasks, then upload the results to the website \href{https://gluebenchmark.com}{\tt gluebenchmark.com} for scoring. 
%The site will then show an overall score for the main suite of tasks, and per-task scores on the main tasks and the diagnostic dataset. 
The benchmark site then shows per-task scores, as well as a macro-average of those scores to determine a system's position on the leaderboard.
For tasks with multiple metrics (e.g., accuracy and F1), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average.
The website also provides fine- and coarse-grained results on the diagnostic dataset. See Appendix \ref{sec:apdx_website} for details.
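The aggregation rule can be sketched in a few lines; the task names and numbers below are illustrative only.

```python
# Sketch of the leaderboard aggregation: each task's score is the unweighted
# mean of its metrics (e.g., accuracy and F1), and the overall benchmark
# score is the unweighted macro-average of the per-task scores.

def task_score(metrics):
    """Unweighted average of a task's metric values."""
    return sum(metrics) / len(metrics)

def glue_macro_average(per_task_metrics):
    """Macro-average over tasks of the per-task scores."""
    scores = [task_score(m) for m in per_task_metrics.values()]
    return sum(scores) / len(scores)
```

Because the average is unweighted, a data-scarce task like WNLI counts as much toward the overall score as a large task like MNLI.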


%\subsection{Data and Bias}
%We do not endorse the use of the task training sets for any specific \textit{non-research} use. They do not cover every dialect of English one may wish to handle, nor languages other than English. As all of them contain text or annotations that were collected in uncontrolled settings, they contain evidence of stereotypes and biases that one may not wish one's system to learn \citep{rudinger2017social}. 


\section{Diagnostic Dataset}

Drawing inspiration from the FraCaS suite \citep{cooper96fracas} and the recent Build-It-Break-It competition \citep{ettinger2017towards}, we include a small, manually curated test set for the analysis of system performance. While the main benchmark mostly reflects an application-driven distribution of examples, 
our diagnostic dataset highlights a pre-defined set of modeling-relevant phenomena. We show the full set of phenomena in \autoref{tab:analysis-categories}.

Each diagnostic example is an NLI sentence pair with tags for the phenomena demonstrated.
The NLI task is well-suited to this kind of analysis, as it can easily evaluate the full set of skills involved in (ungrounded) sentence understanding, from the resolution of syntactic ambiguity to pragmatic reasoning with world knowledge.
We ensure the data is reasonably diverse by producing examples for a variety of linguistic phenomena and basing our examples on naturally-occurring sentences from several domains (news, Reddit, Wikipedia, academic papers).
This approach differs from that of FraCaS, which was designed to test linguistic theories with a minimal and uniform set of examples.
A sample from our dataset is shown in \autoref{tab:analysis-examples}.


% \paragraph{Linguistic Phenomena}
%We tag every example with fine- and coarse-grained categories of the linguistic phenomena they involve (categories shown in \autoref{tab:analysis-categories} in the appendix).
% The coarse-grained categories are \textit{Lexical Semantics}, \textit{Predicate-Argument Structure}, \textit{Logic}, and \textit{Knowledge and Common Sense}.
% Each has several fine-grained subcategories.
%While each example was collected with a single phenomenon in mind, a single example can be tagged with more than one category.
% For example, to know that \textit{I like some dogs} entails \textit{I like some animals}, it is not sufficient to know that \textit{dog} lexically entails \textit{animal}; one must also know that \textit{dog/animal} appears in an upward monotone context in the sentence. This example would be classified under both \textit{Lexical Semantics \(>\) Lexical Entailment} and \textit{Logic \(>\) Upward Monotone}.
% SB: Helpful but unnecessary. Can add back in CR.

%\paragraph{Domains} 
%We construct sentence pairs based on text from four domains:
%News (articles linked from the front page%\footnote{\url{news.google.com}}
%),
%Reddit (threads linked from the Front Page%\footnote{\url{reddit.com}}
%),
%Wikipedia (Featured Articles%\footnote{\url{en.wikipedia.org/wiki/Wikipedia:Featured_articles}}
%), 
%and academic papers from recent ACL conferences. We include 100 sentence pairs constructed from each source and 150 artificially-constructed sentence pairs for 550 total.
% SB: Can move to appendix if needed.

\paragraph{Annotation Process} 
We begin with a target set of phenomena, based roughly on those used in the FraCaS suite \citep{cooper96fracas}.
We construct each example by locating a sentence that can be easily made to demonstrate a target phenomenon, and editing it in two ways to produce an appropriate sentence pair.
%In many cases, we make these modifications small in order to encourage high lexical and structural overlap within each sentence pair and limit superficial cues.
We make minimal modifications so as to maintain high lexical and structural overlap within each sentence pair and limit superficial cues.
We then label the inference relationships between the sentences, considering each sentence in turn as the premise, producing two labeled examples for each pair (1100 total).
Where possible, we produce several pairs with different labels for a single source sentence, to have minimal sets of sentence pairs that are lexically and structurally very similar but correspond to different entailment relationships.
%After an initial round of annotation, we revise to include phenomena we think can be located in the data and collapse categories that are difficult to differentiate.
%We then set a minimum number of examples for each fine-grained category in each domain and use these counts to guide data collection (making sure to surpass the minimum) to ensure a baseline amount of diversity in the dataset.
% After finalizing the categories, we gathered a minimum number of examples in each fine-grained category from each domain to ensure a baseline level of diversity.
The resulting labels are 42\% \textit{entailment}, 35\% \textit{neutral}, and 23\% \textit{contradiction}.

\paragraph{Evaluation}
Since the class distribution in the diagnostic set is not balanced, we use \(R_3\) \citep{gorodkin2004Rk}, a three-class generalization of the Matthews correlation coefficient, for evaluation.
% This coefficient was introduced by~\newcite{gorodkin2004Rk} as \(R_K\), a generalization of the Pearson correlation that works for \(K\) dimensions by averaging the square error from the mean value in each dimension, i.e., calculating the full covariance between the input and output. In the discrete case, it generalizes Matthews correlation, where a value of 1 means perfect correlation and 0 means random chance.
%\omer{Julian: can we define $R_3$ here *succinctly*? It's also OK to refer to give a brief description and then refer to some external source. We just want the paper to be as self-contained as possible.}
%\julian{Here's a full-ish description and citation for \(R_K\). we can still shorten it of course.}
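As a concrete reference, \(R_K\) can be computed from label counts alone; the following pure-Python sketch (our naming, not an official implementation) implements the standard multiclass generalization of the Matthews correlation.

```python
# Pure-Python sketch of Gorodkin's R_K (R_3 when there are three classes),
# the multiclass generalization of the Matthews correlation coefficient.
# c = number of correct predictions, s = number of samples, and t_k / p_k
# = true / predicted counts of class k; the denominator guard handles
# degenerate cases such as a constant predictor.
from collections import Counter

def r_k(y_true, y_pred):
    s = len(y_true)
    c = sum(t == p for t, p in zip(y_true, y_pred))
    t_counts = Counter(y_true)
    p_counts = Counter(y_pred)
    classes = set(y_true) | set(y_pred)
    cov_tp = c * s - sum(t_counts[k] * p_counts[k] for k in classes)
    cov_tt = s * s - sum(t_counts[k] ** 2 for k in classes)
    cov_pp = s * s - sum(p_counts[k] ** 2 for k in classes)
    denom = (cov_tt * cov_pp) ** 0.5
    return cov_tp / denom if denom else 0.0
```

As in the binary case, a score of 1 indicates perfect correlation and 0 indicates chance-level performance.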

% \paragraph{Validation}
In light of recent work showing that crowdsourced data often contains artifacts which can be exploited to perform well without solving the intended task 
\citep[][i.a.]{schwartz17cloze,poliak2018hypothesis},
%we perform an audit of our manually curated data as a sanity check.
we audit the data for such artifacts.
We reproduce the methodology of \citet{gurudipta18artifacts},
training two fastText classifiers \citep{joulin2016bag} to predict entailment labels on SNLI and MNLI using only the hypothesis as input. 
The models respectively get near-chance accuracies of 32.7\% and 36.4\% on our diagnostic data, showing that the data does not suffer from such artifacts. 
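The audit methodology can be illustrated with a toy hypothesis-only classifier; the actual study trains fastText models, so the unigram Naive Bayes stand-in and training examples below are ours, chosen only to show the idea of predicting the label from the hypothesis alone.

```python
# Toy stand-in for the hypothesis-only audit: a tiny unigram Naive Bayes
# classifier that sees only the hypothesis, never the premise. If such a
# model beats chance, the labels leak through hypothesis-side artifacts.
from collections import Counter, defaultdict
import math

class HypothesisOnlyNB:
    def fit(self, hypotheses, labels):
        self.label_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        for hyp, lab in zip(hypotheses, labels):
            self.word_counts[lab].update(hyp.lower().split())
        return self

    def predict(self, hypothesis):
        best, best_lp = None, -math.inf
        total = sum(self.label_counts.values())
        for lab, n in self.label_counts.items():
            vocab = self.word_counts[lab]
            denom = sum(vocab.values()) + len(vocab) + 1
            lp = math.log(n / total)
            for w in hypothesis.lower().split():
                lp += math.log((vocab[w] + 1) / denom)  # add-one smoothing
            if lp > best_lp:
                best, best_lp = lab, lp
        return best
```

On crowdsourced NLI data, cues like negation words in the hypothesis are known to correlate with \textit{contradiction}; near-chance accuracy of such a model on the diagnostic set is what indicates the absence of these artifacts.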

To establish human baseline performance on the diagnostic set, we have six NLP researchers annotate 50 sentence pairs (100 entailment examples) randomly sampled from the diagnostic set. Inter-annotator agreement is high, with a Fleiss's \(\kappa\) of 0.73.
The average \(R_3\) score among the annotators is 0.80, much higher than any of the baseline systems described in Section \ref{sec:baselines}. 
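For reference, Fleiss's \(\kappa\) compares the mean observed agreement \(\bar{P}\) across items with the agreement \(\bar{P}_e\) expected by chance:
\[
\kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}.
\]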

%We also evaluate state-of-the-art NLI models on the diagnostic dataset and find their overall performance to be rather weak, further suggesting that no easily-gameable artifacts present in existing training data are abundant in the diagnostic dataset (see Section~\ref{sec:experiments}).

%\paragraph{Intended Use}
%Because these analysis examples are hand-picked to address certain phenomena, we expect that they will not be representative of the distribution of language as a whole, even in the targeted domains. However, NLI is a task with no natural input distribution. We deliberately select sentences that we hope will be able to provide insight into what models are doing, what phenomena they catch on to, and where are they limited. This means that the raw performance numbers on the analysis set should be taken with a grain of salt. The set is provided not as a benchmark, but as an analysis tool to paint in broad strokes the kinds of phenomena a model may or may not capture, and to provide a set of examples that can serve for error analysis, qualitative model comparison, and development of adversarial examples that expose a model's weaknesses.