

\section{Baselines}\label{sec:baselines}

For baselines, we evaluate a multi-task learning model trained on the GLUE tasks, as well as several variants based on recent pre-training methods.
We briefly describe them here. See Appendix \ref{sec:apdx_baselines} for details.
We implement our models in the AllenNLP library \citep{Gardner2017AllenNLPAD}.

\paragraph{Architecture}

Our simplest baseline architecture is based on sentence-to-vector encoders, and sets aside GLUE's ability to evaluate models with more complex structures.
Taking inspiration from \citet{DBLP:conf/emnlp/ConneauKSBB17}, the model uses a two-layer, 1500D (per direction) BiLSTM with max pooling and 300D GloVe word embeddings \citep[840B Common Crawl version;][]{pennington2014glove}.
For single-sentence tasks, we encode the sentence and pass the resulting vector to a classifier.
For sentence-pair tasks, we encode sentences independently to produce vectors $u, v$, and pass $[u; v; |u - v|; u * v]$ to a classifier.
The classifier is an MLP with a 512D hidden layer.
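The sentence-pair feature combination above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the paper's implementation; the vectors \texttt{u} and \texttt{v} here are hypothetical stand-ins for the max-pooled BiLSTM encodings.

```python
import numpy as np

def pair_features(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Combine two sentence vectors into the classifier input
    [u; v; |u - v|; u * v]: concatenation of both vectors, their
    elementwise absolute difference, and their elementwise product."""
    return np.concatenate([u, v, np.abs(u - v), u * v])

# Hypothetical 3D sentence encodings (real encodings are 3000D).
u = np.array([1.0, -2.0, 0.5])
v = np.array([0.5, 1.0, -1.0])
features = pair_features(u, v)  # length 4 * 3 = 12
```

The absolute-difference and product terms give the downstream MLP direct access to symmetric comparison signals that a plain concatenation of $u$ and $v$ would force it to learn from scratch.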

We also consider a variant of our model which, for sentence-pair tasks, uses an attention mechanism inspired by \citet{seo2016bidirectional} between all pairs of words, followed by a second BiLSTM with max pooling.
By explicitly modeling the interaction between sentences, these models fall outside the sentence-to-vector paradigm.

\paragraph{Pre-Training} We augment our base model with two recent methods for pre-training: ELMo and CoVe. 
We use existing trained models for both.

ELMo uses a pair of two-layer neural language models (one processing text forward, one backward) trained on the Billion Word Benchmark \citep{chelba2013one}.
Each word is represented by a contextual embedding, produced by taking a linear combination of the corresponding hidden states of each layer of the two models. 
We follow the authors' recommendations\footnote{\href{https://github.com/allenai/allennlp/blob/master/tutorials/how_to/elmo.md}{\tt github.com/\allowbreak allenai/\allowbreak allennlp/\allowbreak blob/\allowbreak master/\allowbreak tutorials/\allowbreak how\textunderscore to/\allowbreak elmo.md}} and use ELMo embeddings in place of any other embeddings.
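The linear combination of layer states can be sketched as follows. This is a simplified illustration of ELMo-style scalar mixing, not the AllenNLP implementation; the softmax-normalized weights and the scale $\gamma$ are learned jointly with the downstream task.

```python
import numpy as np

def elmo_mix(layer_states: np.ndarray, weights: np.ndarray,
             gamma: float = 1.0) -> np.ndarray:
    """Collapse per-layer hidden states of shape
    (num_layers, seq_len, dim) into one contextual embedding per
    word via a softmax-normalized linear combination, scaled by
    a learned scalar gamma."""
    w = np.exp(weights - weights.max())
    w = w / w.sum()                    # softmax over layers
    # Contract the layer axis: sum_k w[k] * layer_states[k]
    return gamma * np.tensordot(w, layer_states, axes=1)
```

With all mixing weights equal, the result reduces to $\gamma$ times the mean of the layer representations.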

CoVe \citep{mccann2017learned} uses a two-layer BiLSTM encoder originally trained for English-to-German translation. 
The CoVe vector of a word is the corresponding hidden state of the top-layer LSTM. 
As in the original work, we concatenate the CoVe vectors to the GloVe word embeddings. 

\paragraph{Training}

We train our models with the BiLSTM sentence encoder and post-attention BiLSTMs shared across tasks, and classifiers trained separately for each task.
For each training update, we sample a task to train on, with probability proportional to that task's number of training examples.
We train our models with Adam \citep{kingma2014adam} with initial learning rate $10^{-4}$ and batch size 128.
We use the macro-average score as the validation metric and stop training when the learning rate drops below $10^{-5}$ or performance does not improve after 5 validation checks.
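The size-proportional task sampling used for multi-task updates can be sketched as below. The task names and sizes are hypothetical placeholders, not the benchmark's actual dataset sizes.

```python
import random

def make_task_sampler(task_sizes: dict, seed: int = 0):
    """Return a function that samples a task for each training
    update with probability proportional to that task's number
    of training examples."""
    tasks = list(task_sizes)
    total = sum(task_sizes.values())
    probs = [task_sizes[t] / total for t in tasks]
    rng = random.Random(seed)

    def sample() -> str:
        return rng.choices(tasks, weights=probs, k=1)[0]

    return sample

# Hypothetical dataset sizes for illustration.
sample = make_task_sampler({"taskA": 100_000, "taskB": 2_000})
draws = [sample() for _ in range(1000)]
```

This sampling scheme means large tasks dominate the update schedule, while small tasks are still visited often enough for their classifiers to train.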

We also train a set of single-task models, which are configured and trained identically but share no parameters. While this setup is generally effective for the tasks under study, we do not tune model or training settings separately for each task, so as to allow fair comparison with the multi-task analogs; as a result, these single-task models do not generally represent the state of the art for each task.
 
\paragraph{Sentence Representation Models}

Finally, we evaluate the following trained sentence-to-vector encoder models using our benchmark: average bag-of-words using GloVe embeddings (CBoW), Skip-Thought \citep{kiros2015skip}, InferSent \citep{DBLP:conf/emnlp/ConneauKSBB17}, DisSent \citep{nie2017dissent}, and GenSen \citep{subramanian2018large}. 
See Appendix \ref{sec:apdx_baselines} for additional details.
For these models, we only train task-specific classifiers on the representations they produce. 