\begin{table*}[t]
\centering \small
\begin{tabular}{lrrlll}
 \toprule
\textbf{Corpus} & \textbf{$|$Train$|$} & \textbf{$|$Test$|$} & \textbf{Task} & \textbf{Metrics} & \textbf{Domain} \\
\midrule
\multicolumn{6}{c}{Single-Sentence Tasks}\\
\midrule
CoLA & 8.5k & \textbf{1k} & acceptability & Matthews corr. & misc. \\
SST-2 & 67k & 1.8k & sentiment & acc. & movie reviews \\
\midrule
\multicolumn{6}{c}{Similarity and Paraphrase Tasks}\\
\midrule
MRPC & 3.7k & 1.7k & paraphrase & acc./F1 & news \\
STS-B & 7k & 1.4k & sentence similarity & Pearson/Spearman corr. & misc. \\
QQP & 364k & \textbf{391k} & paraphrase & acc./F1 & social QA questions \\
\midrule
\multicolumn{6}{c}{Inference Tasks} \\
\midrule
MNLI & 393k & \textbf{20k} & NLI & matched acc./mismatched acc. & misc. \\
QNLI & 108k & 5.7k & QA/NLI & acc. & Wikipedia \\
RTE & 2.5k & 3k & NLI & acc. & misc. \\
WNLI & 634 & \textbf{146} & coreference/NLI & acc. & fiction books \\
\bottomrule
\end{tabular}
\caption{Task descriptions and statistics. All tasks are single sentence or sentence pair classification, except STS-B, which is a regression task. MNLI has three classes; all other classification tasks have two. Test sets shown in bold use labels that have never been made public in any form.
}
\label{tab:tasks}
\end{table*}

\section{Related Work}
\label{sec:related}
\citet{collobert2011natural} used a multi-task model with a shared sentence understanding component to jointly learn POS tagging, chunking, named entity recognition, and semantic role labeling.
More recent work has explored using labels from core NLP tasks to supervise training of lower levels of deep neural networks \citep{sogaard2016deep,hashimoto2016joint} and 
automatically learning cross-task sharing mechanisms for multi-task learning \citep{ruder2017sluice}.

Beyond multi-task learning, much work in developing general NLU systems has focused on sentence-to-vector encoders \citep[][i.a.]{pmlr-v32-le14,kiros2015skip}, leveraging unlabeled data \citep{hill2016learning,peters2018deep}, labeled data \citep{conneau2018senteval,mccann2017learned}, and combinations of these \citep{collobert2011natural,subramanian2018large}.
In this line of work, a standard evaluation practice has emerged, recently codified as SentEval \citep{DBLP:conf/emnlp/ConneauKSBB17,conneau2018senteval}.
Like GLUE, SentEval relies on a set of existing classification tasks involving either one or two sentences as inputs. Unlike GLUE, SentEval evaluates only sentence-to-vector encoders, making it well-suited for evaluating sentence representations \emph{in isolation}.
However, cross-sentence contextualization and alignment, such as that provided by soft attention mechanisms, are instrumental in achieving state-of-the-art performance on tasks such as machine translation \citep{bahdanau2014neural,vaswani2017attention}, question answering \citep{seo2016bidirectional}, and natural language inference \citep{rocktaschel2015reasoning}.
GLUE is designed to facilitate the development of these methods: It is model-agnostic, allowing for any kind of representation or contextualization, including models that use no vector or symbolic representations for sentences whatsoever.

GLUE also diverges from SentEval in the selection of evaluation tasks that are included in the suite. Many of the SentEval tasks are closely related to sentiment analysis, such as MR \citep{pang2005seeing}, SST \citep{socher2013recursive}, CR \citep{hu2004mining}, and SUBJ \citep{pang2004sentimental}. Other tasks are so close to being solved that evaluation on them is relatively uninformative, such as MPQA \citep{wiebe2005annotating} and TREC question classification \citep{voorhees1999trec}. In GLUE, we attempt to construct a benchmark that is both diverse and difficult.


\citet{McCann2018decaNLP} introduce decaNLP, which also scores NLP systems based on their performance on multiple datasets. Their benchmark recasts its ten evaluation tasks as question answering, using automatic transformations to convert tasks like summarization and text-to-SQL semantic parsing into that format. decaNLP lacks GLUE's leaderboard and error analysis toolkit, but more importantly, we see it as pursuing a more ambitious but less immediately practical goal: whereas GLUE rewards strong performance on a circumscribed set of tasks using methods like those currently applied to them, decaNLP rewards systems that make progress toward unifying all of NLU under the rubric of question answering.