\section{Additional Baseline Details}\label{sec:apdx_baselines}

\subsection{Attention Mechanism}

For sentence-pair tasks, we also evaluate models that use bidirectional attention between the two sentences. By explicitly modeling the interaction between sentences, these models fall outside the sentence-to-vector paradigm.
We implement our attention mechanism as follows:
given two sequences of hidden states $u_1, u_2, \dots, u_M$ and $v_1, v_2, \dots, v_N$, we first compute the matrix $H$ where $H_{ij} = u_i \cdot v_j$.
For each $u_i$, we obtain attention weights $\alpha_{i}$ by taking a softmax over the $i^{th}$ row of $H$, and compute the corresponding context vector $\tilde{v}_i = \sum_j \alpha_{ij} v_j$ as the attention-weighted sum of the $v_j$.
We then run a second BiLSTM with max pooling over the sequence $[u_1; \tilde{v}_1], \dots, [u_M; \tilde{v}_M]$ to produce $u'$.
We process the $v_j$ vectors analogously to obtain $v'$.
Finally, we feed $[u'; v'; |u' - v'|; u' * v']$ into a classifier, where $*$ denotes elementwise multiplication and $[\cdot;\cdot]$ denotes concatenation.
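The attention step above can be sketched in NumPy as follows. This is a minimal illustration, not the paper's implementation: the BiLSTM encoders and max pooling are omitted, and all function names are ours.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(U, V):
    """Dot-product cross-attention between two sequences of hidden states.

    U: (M, d) hidden states u_1..u_M; V: (N, d) hidden states v_1..v_N.
    Returns (M, 2d): each u_i concatenated with its context vector.
    """
    H = U @ V.T                  # H_ij = u_i . v_j, shape (M, N)
    alpha = softmax(H, axis=1)   # attention weights, one row per u_i
    V_tilde = alpha @ V          # context vectors, shape (M, d)
    return np.concatenate([U, V_tilde], axis=1)

def pair_features(u, v):
    # Classifier input [u'; v'; |u'-v'|; u'*v'] for pooled vectors u', v',
    # where * is the elementwise product.
    return np.concatenate([u, v, np.abs(u - v), u * v])
```

In the full model, `cross_attend` would be applied symmetrically (attending from $u$ to $v$ and from $v$ to $u$), with each output sequence fed through its own BiLSTM-with-max-pooling before `pair_features`.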

\subsection{Training}

We train our models with the BiLSTM sentence encoder and post-attention BiLSTMs shared across tasks, and with a separate classifier trained for each task.
For each training update, we sample a task to train on with probability proportional to its number of training examples.
We scale each task's loss in inverse proportion to its number of training examples, which we found to improve overall performance.
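A minimal sketch of this sampling and loss-scaling scheme, under our own naming (the `1/N` scaling factor is one simple reading of "inversely proportional"; any constant multiple behaves equivalently):

```python
import random

def make_task_sampler(task_sizes, seed=0):
    """Return a sampler that picks a task with probability proportional
    to its number of training examples.

    task_sizes: dict mapping task name -> number of training examples.
    """
    rng = random.Random(seed)
    tasks = list(task_sizes)
    weights = [task_sizes[t] for t in tasks]
    return lambda: rng.choices(tasks, weights=weights, k=1)[0]

def loss_scale(task_sizes, task):
    # Scale each task's loss inversely to its dataset size, so that
    # high-resource tasks do not dominate the multi-task objective.
    return 1.0 / task_sizes[task]
```

Together, proportional sampling and inverse loss scaling mean each task contributes roughly equally to the expected gradient per epoch over the mixture.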
We train our models with Adam \citep{kingma2014adam} with initial learning rate $10^{-3}$, batch size 128, and gradient clipping.
We use the macro-average score over all tasks as our validation metric, and perform a validation check every 10k updates.
We divide the learning rate by 5 whenever validation performance does not improve. 
We stop training when the learning rate drops below $10^{-5}$ or performance does not improve after 5 validation checks. 
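The learning-rate schedule and stopping criterion above can be replayed as a short function over a sequence of validation scores. This is an illustrative sketch with our own names, assuming "does not improve" means the macro-average score fails to exceed the best seen so far:

```python
def lr_schedule(scores, lr0=1e-3, decay=5.0, min_lr=1e-5, patience=5):
    """Replay the validation-driven schedule over a list of metric scores.

    Divide the LR by `decay` whenever the validation score fails to
    improve; stop when the LR falls below `min_lr` or `patience` checks
    pass without improvement. Returns (final_lr, checks_run).
    """
    lr, best, bad = lr0, float("-inf"), 0
    for i, score in enumerate(scores, 1):
        if score > best:
            best, bad = score, 0
        else:
            bad += 1
            lr /= decay
        if lr < min_lr or bad >= patience:
            return lr, i
    return lr, len(scores)
```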

\subsection{Sentence Representation Models}

We evaluate the following sentence representation models:
\begin{enumerate}
\item CBoW, the average of the GloVe embeddings of the tokens in the sentence.
\item Skip-Thought \citep{kiros2015skip}, a sequence-to-sequence model trained to generate the previous and next sentences given the current sentence.
We use the original pre-trained model\footnote{\href{https://github.com/ryankiros/skip-thoughts}{\tt github.com/\allowbreak ryankiros/\allowbreak skip-thoughts}} trained on sequences of sentences from the Toronto Book Corpus (\citealt{zhu2015aligning}, TBC). 
\item InferSent \citep{DBLP:conf/emnlp/ConneauKSBB17}, a BiLSTM with max pooling trained on MNLI and SNLI.
\item DisSent \citep{nie2017dissent}, a BiLSTM with max-pooling trained to predict the discourse marker (\textit{because}, \textit{so}, etc.) relating two sentences on data derived from TBC. 
We use the variant trained for eight-way classification.
\item GenSen \citep{subramanian2018large}, a sequence-to-sequence model trained on a variety of supervised and unsupervised objectives. 
We use the variant of the model trained on both MNLI and SNLI, the Skip-Thought objective on TBC, and a constituency parsing objective on the Billion Word Benchmark.
\end{enumerate}
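The simplest baseline above, CBoW, admits a one-function sketch: average the GloVe vectors of the in-vocabulary tokens. The helper below is hypothetical and assumes embeddings have already been loaded into a token-to-vector dict; the zero-vector fallback for fully out-of-vocabulary sentences is our assumption, not a detail from the paper.

```python
import numpy as np

def cbow(tokens, embeddings, dim=300):
    """CBoW sentence representation: the average of the word vectors of
    the tokens in the sentence.

    embeddings: dict mapping token -> np.ndarray of shape (dim,).
    Tokens missing from the vocabulary are skipped; an empty or fully
    out-of-vocabulary sentence maps to the zero vector (our choice).
    """
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)
```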