\documentclass{article}
\pdfoutput=1
\usepackage{iclr2018_conference,times}
\usepackage{hyperref}
\usepackage{url}
\usepackage{graphicx}
\usepackage{amssymb,amsmath,bm}
\usepackage{textcomp}
\usepackage{tikz}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{booktabs}
\usepackage{lineno}
\usepackage{array}
\usepackage{etoolbox,siunitx}
\robustify\bfseries
\usetikzlibrary{arrows}
\usetikzlibrary{calc}

\DeclareSymbolFont{extraup}{U}{zavm}{m}{n}
\DeclareMathSymbol{\varheart}{\mathalpha}{extraup}{86}
\DeclareMathSymbol{\vardiamond}{\mathalpha}{extraup}{87}

\def\vec#1{\ensuremath{\bm{{#1}}}}
\def\mat#1{\vec{#1}}
\sloppy

\title{Twin Networks: Matching the Future \\ for Sequence Generation}

\author{Dmitriy Serdyuk,$^\textbf{*}$\,${}^\vardiamond$ Nan Rosemary Ke,${}^{\textbf{*}}\,^{\vardiamond\,\ddagger}$ Alessandro Sordoni$^\varheart$ \vspace{0.1cm}\\
\textbf{Adam Trischler,}$^\varheart$ \textbf{Chris Pal}$^\clubsuit{}^\vardiamond$ \textbf{\&} \textbf{Yoshua Bengio}$^{\P\,\vardiamond}$ \vspace{5mm} \\
$^\vardiamond$ Montreal Institute for Learning Algorithms (MILA), Canada \\
$^\varheart$ Microsoft Research, Canada \\
$^\clubsuit$ Ecole Polytechnique, Canada \\
$^\P$ CIFAR Senior Fellow \\
$^\ddagger$ Work done at Microsoft Research \\
$^\textbf{*}$ \textbf{Authors contributed equally} \\
\texttt{serdyuk@iro.umontreal.ca}, \texttt{rosemary.nan.ke@gmail.com} \vspace{1.5cm}
}

\newcommand{\fix}{\marginpar{FIX}}
\newcommand{\new}{\marginpar{NEW}}

\iclrfinalcopy

\begin{document}
\maketitle
\vspace*{-1.6cm}

\begin{abstract}
We propose a simple technique for encouraging generative RNNs to plan ahead. We train a ``backward'' recurrent network to generate a given sequence in reverse order, and we encourage states of the forward model to predict cotemporal states of the backward model. The backward network is used only during training, and plays no role during sampling or inference. We hypothesize that our approach eases modeling of long-term dependencies by implicitly forcing the forward states to hold information about the longer-term future (as contained in the backward states). We show empirically that our approach achieves a 9\% relative improvement on a speech recognition task and a significant improvement on a COCO caption generation task.
\end{abstract}

\section{Introduction}
\label{sec:intro}

Recurrent Neural Networks (RNNs) are the basis of state-of-the-art models for generating sequential data such as text and speech. RNNs are trained to generate sequences by predicting one output at a time given all previous ones, and excel at the task through their capacity to remember past information well beyond classical $n$-gram models~\citep{bengio1994learning,hochreiter1997long}. More recently, RNNs have also found success when applied to conditional generation tasks such as speech-to-text~\citep{NIPS2015_5847,chan2015listen}, image captioning~\citep{xu2015show} and machine translation~\citep{sutskever2014sequence,bahdanau2014neural}.

RNNs are usually trained by \emph{teacher forcing}: at each point in a given sequence, the RNN is optimized to predict the next token given all preceding tokens. This corresponds to optimizing one-step-ahead prediction. As there is no explicit bias toward planning in the training objective, the model may prefer to focus on the most recent tokens instead of capturing subtle long-term dependencies that could contribute to global coherence. Local correlations are usually stronger than long-term dependencies and thus end up dominating the learning signal.
The consequence is that samples from RNNs tend to exhibit local coherence but lack meaningful global structure. This difficulty in capturing long-term dependencies has been noted and discussed in several seminal works~\citep{hochreiter1991untersuchungen,bengio1994learning,hochreiter1997long,pascanu2013difficulty}. Recent efforts to address this problem have involved augmenting RNNs with external memory~\citep{dieng2016topicrnn,grave2016improving,gulcehre2017memory}, with unitary or hierarchical architectures~\citep{arjovsky2016unitary,serban2017hierarchical}, or with explicit planning mechanisms~\citep{tris2017}. Parallel efforts aim to prevent overfitting on strong local correlations by regularizing the states of the network, for example by applying dropout or by penalizing various statistics~\citep{moon2015rnndrop,zaremba2014recurrent,gal2016theoretically,krueger2016zoneout,merity2017regularizing}.

In this paper, we propose \emph{TwinNet},\footnote{The source code is available at \url{https://github.com/dmitriy-serdyuk/twin-net/}.} a simple method for regularizing a recurrent neural network that encourages modeling those aspects of the past that are predictive of the long-term future. Succinctly, this is achieved as follows: in parallel to the standard forward RNN, we run a ``twin'' backward RNN (with no parameter sharing) that predicts the sequence in reverse, and we encourage the hidden state of the forward network to be close to that of the backward network used to predict the same token. Intuitively, this forces the forward network to focus on the past information that is useful for predicting a specific token and that is \emph{also} present in, and useful to, the backward network, which comes from the future (Fig.~\ref{fig:twin}). In practice, our model introduces a regularization term to the training loss. This is distinct from other regularization methods that act on the hidden states either by injecting noise~\citep{krueger2016zoneout} or by penalizing their norm~\citep{krueger2015regularizing,merity2017regularizing}, because we formulate explicit auxiliary targets for the forward hidden states: namely, the backward hidden states. The activation regularizer (AR) proposed by~\cite{merity2017regularizing}, which penalizes the norm of the hidden states, is equivalent to the TwinNet approach with the backward states set to zero. Overall, our model is driven by the intuition (a) that the backward hidden states contain a summary of the future of the sequence, and (b) that in order to predict the future more accurately, the model will have to form a better representation of the past. We demonstrate the effectiveness of the TwinNet approach experimentally, through several conditional and unconditional generation tasks that include speech recognition, image captioning, language modelling, and sequential image generation.
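As a concrete illustration of the procedure just described, the following is a minimal PyTorch-style sketch of one evaluation of the training objective: a teacher-forced negative log-likelihood for each network, plus a penalty that pulls an affine function $g$ of each forward state toward the cotemporal backward state. It is a simplified, illustrative sketch rather than the implementation released with the paper; the names (\texttt{TwinNetSketch}, \texttt{affine\_g}, \texttt{twin\_weight}), the choice of GRU cells, and the explicit beginning- and end-of-sequence symbols are assumptions made for brevity, and masking of padded positions is omitted.

\begin{verbatim}
# Minimal, illustrative PyTorch sketch of the TwinNet objective.
# Names (TwinNetSketch, affine_g, twin_weight, bos_id/eos_id) are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwinNetSketch(nn.Module):
    def __init__(self, vocab_size, bos_id, eos_id, emb=128, hid=256):
        super().__init__()
        self.vocab_size, self.bos_id, self.eos_id = vocab_size, bos_id, eos_id
        self.embed = nn.Embedding(vocab_size, emb)
        self.fwd_rnn = nn.GRU(emb, hid, batch_first=True)  # forward reader
        self.bwd_rnn = nn.GRU(emb, hid, batch_first=True)  # backward reader (no sharing)
        self.fwd_out = nn.Linear(hid, vocab_size)
        self.bwd_out = nn.Linear(hid, vocab_size)
        self.affine_g = nn.Linear(hid, hid)                # learned metric g(.)

    def forward(self, x, twin_weight=1.0):
        # x: (batch, T) token ids for x_1 ... x_T.
        B, T = x.shape
        bos = torch.full((B, 1), self.bos_id, dtype=torch.long, device=x.device)
        eos = torch.full((B, 1), self.eos_id, dtype=torch.long, device=x.device)

        # Forward net reads [BOS, x_1, ..., x_{T-1}]; its state h^f_t predicts x_t.
        h_f, _ = self.fwd_rnn(self.embed(torch.cat([bos, x[:, :-1]], 1)))
        nll_f = F.cross_entropy(self.fwd_out(h_f).reshape(-1, self.vocab_size),
                                x.reshape(-1))

        # Backward net reads [EOS, x_T, ..., x_2] and predicts [x_T, ..., x_1],
        # i.e. it generates the sequence in reverse order.
        x_rev = torch.flip(x, [1])
        h_b_rev, _ = self.bwd_rnn(self.embed(torch.cat([eos, x_rev[:, :-1]], 1)))
        nll_b = F.cross_entropy(self.bwd_out(h_b_rev).reshape(-1, self.vocab_size),
                                x_rev.reshape(-1))

        # Twin penalty: match g(h^f_t) to the backward state that predicts the
        # same token x_t. The backward state is detached, so the backward
        # network is trained only by its own log-likelihood.
        h_b = torch.flip(h_b_rev, [1])
        twin = (self.affine_g(h_f) - h_b.detach()).pow(2).sum(-1).sqrt().mean()

        return nll_f + nll_b + twin_weight * twin
\end{verbatim}

Note that the backward states are detached when computing the penalty, so the backward network is trained only by maximizing its own log-likelihood, and both the backward network and $g$ can be discarded once training is complete.
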
To summarize, the contributions of this work are as follows:
\begin{itemize}
\item We introduce a simple method for training generative recurrent networks that regularizes the hidden states of the network to anticipate future states (see Section~\ref{sec:model});
\item We provide an extensive evaluation of the proposed model on multiple tasks and show that it helps training and regularization both for conditional generation (speech recognition, image captioning) and for the unconditional case (sequential MNIST, language modelling); see Section~\ref{sec:experiments};
\item For deeper analysis, we visualize the introduced cost and observe that it correlates negatively with word frequency (more surprising words have a higher cost).
\end{itemize}

\begin{figure}
\centering
\begin{tikzpicture}[->,thick]
\scriptsize
\tikzstyle{main}=[circle, minimum size = 7mm, thin, draw =black!80, node distance = 12mm]
\foreach \name in {1,...,4}
    \node[main, fill = white!100] (y\name) at (\name*1.5,3.5) {$x_\name$};
\foreach \name in {1,...,4}
    \node[main, fill = white!100] (hf\name) at (\name*1.5,1.5) {$h^f_\name$};
\foreach \name in {1,...,4}
    \node[main, fill = white!100,draw=orange] (hb\name) at (\name*1.5,0) {$h^b_\name$};
\foreach \h in {1,...,4} {
    \draw[<->,draw=orange,dashed] (hf\h) to [bend right=45] node[midway,left] {$L_\h$} (hb\h) {};
    \path (hf\h) edge [bend left] (y\h);
}
\foreach \current/\next in {1/2,2/3,3/4} {
    \path (hf\current) edge (hf\next);
    \path[draw=orange] (hb\next) edge (hb\current);
}
\foreach \h in {1,...,4} {
    \path (hb\h) edge [draw=orange,bend right] (y\h);
}
\end{tikzpicture}
\caption{The forward and the backward networks predict the sequence $s = \{x_1, ..., x_4\}$ independently. The penalty matches the forward hidden states (or a parametric function of them) to the backward hidden states. The forward network receives the gradient signal from the log-likelihood objective as well as from the penalty $L_t$ between states that predict the same token. The backward network is trained only by maximizing the data log-likelihood. During evaluation, the part of the network colored in orange is discarded. The cost $L_t$ is either the Euclidean distance or a learned metric $\|g(h_t^f) - h_t^b\|_2$ with an affine transformation $g$. Best viewed in color.}
\label{fig:twin}
\end{figure}
\pagebreak

\section{Model}
\label{sec:model}

Given a dataset of sequences $\mathcal{S} = \{s^1, \ldots, s^n\}$, where each $s^k = \{x_1, \ldots, x_{T_k}\}$ is an observed sequence of inputs $x_i \in \mathcal{X}$, we wish to estimate a density $p(s)$ by maximizing the log-likelihood of the observed data $\mathcal{L} = \sum_{i=1}^n \log p(s^i)$. Using the chain rule, the joint probability over a sequence $x_1, \ldots, x_T$ decomposes as:
\begin{equation}
p(x_1, \ldots, x_T) = p(x_1)\,p(x_2 | x_1) \cdots p(x_T | x_1, \ldots, x_{T-1}) = \prod_{t=1}^{T} p(x_t | x_{1}, \ldots, x_{t-1}).
\end{equation}
This particular decomposition of the joint probability has been widely used in language modeling~\citep{bengio2003neural,mikolov2010recurrent} and speech recognition~\citep{bahl1983maximum}. A recurrent neural network is a powerful architecture for approximating these conditional probabilities.
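For example, for a three-token sequence this gives $\log p(x_1, x_2, x_3) = \log p(x_1) + \log p(x_2 | x_1) + \log p(x_3 | x_1, x_2)$, so maximizing $\mathcal{L}$ amounts to maximizing each next-token conditional given its full prefix, i.e.\ the one-step-ahead (teacher forcing) objective discussed in Section~\ref{sec:intro}.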
At each step, the RNN updates a hidden state $h^f_t$, which iteratively summarizes the inputs seen up to time $t-1$:
\begin{equation}
h^f_t = \Phi_f(x_{t-1}, h_{t-1}^f),
\end{equation}
where $f$ symbolizes that the network reads the sequence in the forward direction, and $\Phi_f$ is typically a non-linear function, such as an LSTM cell~\citep{hochreiter1997long} or a GRU~\citep{cho2014learning}. Thus, $h^f_t$ forms a representation summarizing information about the sequence's past. The prediction of the next symbol $x_t$ is performed using another non-linear transformation on top of $h^f_t$,~i.e. $p_f(x_t|x_{