\documentclass[10pt,onecolumn,letterpaper]{article}

\usepackage{cvpr}
\usepackage{times}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{bbm}
\usepackage{enumitem, url}

\usepackage{color}

\newcommand{\task}{\mbox{Interactive Question Answering}} 
\newcommand{\taskshort}{\mbox{\sc IQA}}
\newcommand{\dataset}{\mbox{Interactive Question Answering Dataset}}
\newcommand{\datasetshort}{\mbox{\sc IQAdata}}
\newcommand{\model}{Hierarchical Interactive Memory Network}
\newcommand{\modelshort}{\mbox{\sc himn}}
\newcommand{\gru}{Egocentric Spatial GRU}
\newcommand{\grushort}{\mbox{esGRU}}


\providecommand{\todo}[1]{{\protect\color{red}{\bf [TODO: #1]}}}
\providecommand{\daniel}[1]{{\protect\color{blue}{\bf [Daniel: #1]}}}
\providecommand{\mohammad}[1]{{\protect\color{magenta}{\bf [Mohammad: #1]}}}


% Include other packages here, before hyperref.

% If you comment hyperref and then uncomment it, you should delete
% egpaper.aux before re-running latex.  (Or just hit 'q' on the first latex
% run, let it finish, and you should be clear).
\usepackage[pagebackref=true,breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}

% \cvprfinalcopy % *** Uncomment this line for the final submission

\def\cvprPaperID{1352} % *** Enter the CVPR Paper ID here
\def\httilde{\mbox{\tt\raisebox{-.5ex}{\symbol{126}}}}

% Pages are numbered in submission mode, and unnumbered in camera-ready
\ifcvprfinal\pagestyle{empty}\fi
\begin{document}

%%%%%%%%% TITLE
\title{Supplementary Material for IQA: Visual Question Answering in Interactive Environments}
\maketitle

\section{Using Real Object Detection}
\begin{table}[!htbp]
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l|l|l|l|}
 \hline
 \multicolumn{4}{|c|}{Accuracy of QA} \\
 \hline
       Model & Existence &Counting & Spatial Relationship\\
 \hline
Random & 50 & 25 & 50 \\
A3C with no object detections & 56.9 & 26.42 & 59.1 \\
\textbf{A3C with YOLO object detection} & \textbf{54.29} & \textbf{26.78} & \textbf{55.36} \\
A3C with oracle object detections & 59.5 & 27.1 & 66.2 \\
\textbf{HIMN with YOLO object detection} & \textbf{63.39} & \textbf{35.89} & \textbf{57.14} \\
HIMN with oracle object detection & 69.8 & 32.2 & 65.6 \\
HIMN with oracle object detection and navigator & 73.03 & 45.35 & 71.42 \\
 \hline
\end{tabular}
}
\end{center}
\caption{This table compares the test accuracy of question answering across models.}
\label{table:supp_accuracy}
\end{table}

\begin{table}[!htbp]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{ |l|l|l|l|l|l|l|l|l|  }
 \hline
 \multicolumn{9}{|c|}{Accuracy of QA Per Answer} \\
 \hline
       Model & Existence & Existence & Counting & Counting & Counting & Counting & Spatial & Spatial \\
       & (N) & (Y) & (0) & (1) & (2) & (3) & Relation (N) & Relation (Y) \\
 \hline
Always Answer Most Likely Value & 57 & 0 & 27 & 0 & 0 & 0 & 52 & 0 \\
A3C with no object detections & 99.69 & 0.83 & 20.81 & 33.04 & 26.53 & 26.32 & 55.02 & 64.94 \\
A3C with YOLO object detection & 84.91 & 14.05 & 41.22 & 35.4 & 42.07 & 44.81 & 39.72 & 42.77 \\
A3C with oracle object detections & 100 & 1.92 & 34.29 & 11.11 & 35.14 & 23.08 & 64.21 & 68.09 \\
HIMN & 92.41 & 40 & 35.14 & 33.93 & 32.17 & 27.52 & 64.44 & 66.67 \\
HIMN with YOLO object detection & 92.14 & 25.62 & 48.32 & 53.57 & 48.30 & 50.66 & 50.93 & 52.89 \\
HIMN with oracle navigator & 100 & 37.6 & 36.24 & 53.57 & 51.02 & 42.11 & 70.82 & 72.29  \\
 \hline
\end{tabular}
}
\end{center}
\caption{Results for each question category broken down by each possible answer.}
\label{table:supp_breakdown}
\end{table}

The modular nature of \modelshort\ allows us to easily swap controllers with different architectures in and out. We replace the ground-truth object sensing provided by the AI2-THOR framework with an object detection algorithm, YOLO V2. We fine-tune YOLO V2 \cite{redmon2016yolo9000} on the AI2-THOR training scenes to expose it to classes, such as bread and cabinet, that do not appear in the MSCOCO dataset. We estimate the depth of each detected object using the FCRN depth estimation network \cite{laina2016deeper} and project the probabilities of the detected objects onto the ground plane. These detection probabilities are incorporated into the spatial memory using a moving-average update rule.
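The moving-average update can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the blend weight \texttt{alpha} and the observation-masking scheme are assumptions.

```python
import numpy as np

def update_spatial_memory(memory, detections, mask, alpha=0.5):
    """Blend projected detection probabilities into the spatial memory.

    memory:     (H, W, C) array of per-cell class probabilities.
    detections: (H, W, C) array of YOLO class probabilities projected
                onto the ground plane via estimated depth.
    mask:       (H, W) boolean array marking cells observed this frame.
    alpha:      moving-average weight for the new observation (assumed value).
    """
    m = mask[..., None]  # broadcast the observation mask over channels
    # Observed cells move toward the new detections; unobserved cells keep
    # their previous values.
    return np.where(m, (1 - alpha) * memory + alpha * detections, memory)
```

Under this scheme, repeated observations of the same location smooth out single-frame detection noise, which is consistent with the robustness discussed below.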

To compare fairly against A3C, we also train the A3C model on YOLO object detection outputs rather than ground-truth detections. Table~\ref{table:supp_accuracy} (comparable to Table 2 in the paper) shows the performance of \modelshort\ with YOLO compared to A3C with YOLO. \modelshort\ significantly outperforms this baseline and also outperforms A3C with ground-truth object detections on two of the three question types. \modelshort\ is presumably able to learn robustness to detection noise because it encodes the YOLO object probability outputs directly into the spatial map, where they can be aggregated with past observations from the same locations.


We further break down the accuracies by answer to explore potential biases in the questions as well as in the models' behavior, shown in Table~\ref{table:supp_breakdown}. We find that the questions are mostly balanced, although Existence answers are slightly more likely to be false than true; this is a result of filtering out Existence questions where the object is placed in an unobservable location. We notice that the A3C models, rather than learning how to explore the environment, simply exploit this bias. Our model, on the other hand, remains quite likely to answer true Existence questions correctly.

\section{Network Architecture}
The full \modelshort\ network can be broken into sub-networks for navigation, planning, and answering. Their architectures are as follows: \\

\noindent \textbf{Navigation Network} \\
Inputs: 
\begin{itemize}[itemsep=0pt]
    \item Current image at $ 300 \times 300 \times 3 $ resolution
    \item Previous Action One-Hot Vector
    \item Destination
\end{itemize}
Layers:
\begin{itemize}[itemsep=0pt]
    \item Conv: $64 \times 7 \times 7$ kernels, stride 2, ELU activation
    \item Max Pool: $2 \times 2$, stride 2
    \item Conv: $128 \times 5 \times 5$ kernels, stride 1, ELU activation
    \item Max Pool: $2 \times 2$, stride 2
    \item Conv: $256 \times 3 \times 3$ kernels, stride 1, ELU activation
    \item Conv: $256 \times 3 \times 3$ kernels, stride 1, ELU activation
    \item Conv: $256 \times 3 \times 3$ kernels, stride 1, ELU activation
    \item Max Pool: $2 \times 2$, stride 2
    \item Conv: $512 \times 3 \times 3$ kernels, stride 1, ELU activation
    \item Conv: $512 \times 3 \times 3$ kernels, stride 1, ELU activation
    \item Conv: $512 \times 3 \times 3$ kernels, stride 1, ELU activation
    \item Max Pool: $2 \times 2$, stride 2
    \item FC1: Fully Connected on conv output: 1024 units, ELU activation
    \item FC2: Fully Connected on action one-hot: 32 units, ELU activation
    \item FC-Concat: Concatenate (FC1, FC2)
    \item GRU: 1024 units
    \item GRU-Concat: Concatenate(FC-Concat, GRU)
    \item Spatial GRU: $32 \times 5 \times 5$
    \item \textbf{Output} Path Weight: Conv: $1 \times 1 \times 1$, stride 1, activation = $\min(\max(1, 5e^{x}), 200)$
    \item Crop: Spatial GRU at Destination with $5 \times 5$ padding
    \item Conv: $32 \times 3 \times 3$ kernels, stride 1, ELU activation
    \item Conv: $32 \times 3 \times 3$ kernels, stride 1, ELU activation
    \item \textbf{Output} Terminal: Fully Connected, no activation.
\end{itemize}
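The spatial resolution entering FC1 can be traced through the conv/pool strides listed above. The sketch below assumes TensorFlow-style \texttt{SAME} padding (each layer maps size to $\lceil \mathrm{size} / \mathrm{stride} \rceil$); the padding scheme is an assumption, so the final size is illustrative.

```python
import math

# Strides of the navigation conv/pool stack, in order:
# conv7 s2, pool s2, conv5 s1, pool s2, conv3 s1 x3, pool s2,
# conv3 s1 x3, pool s2.  'SAME' padding is assumed throughout.
strides = [2, 2, 1, 2, 1, 1, 1, 2, 1, 1, 1, 2]

size = 300  # input images are 300 x 300
for s in strides:
    size = math.ceil(size / s)

print(size)  # final spatial resolution per side
```

Under that assumption the conv output is $10 \times 10 \times 512$, i.e.\ 51{,}200 features flattened into FC1.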

\noindent \textbf{Planner Network} \\
Inputs: 
\begin{itemize}[itemsep=0pt]
    \item Current image at $ 300 \times 300 \times 3 $ resolution
    \item Semantic map at $RoomW \times RoomH \times (NumClasses + 5)$
    \item Previous Action One-Hot Vector
    \item Question Encoding
\end{itemize}
Layers:
\begin{itemize}[itemsep=0pt]
    \item Conv: $32 \times 7 \times 7$ kernels, stride 2, ReLU activation
    \item Max Pool: $2 \times 2$, stride 2
    \item Conv: $64 \times 5 \times 5$ kernels, stride 1, ReLU activation
    \item Max Pool: $2 \times 2$, stride 2
    \item Conv: $128 \times 3 \times 3$ kernels, stride 1, ReLU activation
    \item Max Pool: $2 \times 2$, stride 2
    \item Conv: $256 \times 3 \times 3$ kernels, stride 1, ReLU activation
    \item FC1: Fully Connected on conv output: 1024 units, ELU activation
    \item FC2: Fully Connected on action one-hot: 32 units, ELU activation
    \item FC-Concat: Concatenate (FC1, FC2)
    \item GRU: 1024 units
    \item GRU-Concat: Concatenate(FC-Concat, GRU)
    \item FCQ: Fully connected on Question Encoding: 64 units, ELU activation
    \item Tile FCQ: $RoomW \times RoomH$
    \item Concatenate(Semantic Map, Tile FCQ)
    \item Conv: $64 \times 1 \times 1$ kernels, stride 1, ELU activation
    \item Conv: $64 \times 11 \times 11$ kernels, stride 1, ELU activation
    \item Conv: $128 \times 3 \times 3$ kernels, stride 1, ELU activation
    \item Conv: $256 \times 3 \times 3$ kernels, stride 1, ELU activation
    \item Semantic Features: Conv: $256 \times 3 \times 3$ kernels, stride 1, ELU activation
    \item \textbf{Output}: Viability: Conv: $1 \times 1 \times 1$, stride 1, no activation
    \item Crop: $1 \times 1$ at Semantic Features(current spatial location)
    \item Concatenate (GRU-Concat, Crop)
    \item \textbf{Output}: $V_{action}$: Fully Connected on Concatenate
    \item \textbf{Output}: $\pi_{action}$: Fully Connected on Concatenate: 6 units
\end{itemize}
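The planner's final fusion step, cropping the semantic features at the agent's current grid cell and feeding the concatenation to the value and policy heads, can be sketched in NumPy. All sizes and weights below are illustrative placeholders, not trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a 10x10 grid of 256-d semantic features, and a
# GRU-Concat vector of 1056 (FC-Concat) + 1024 (GRU) = 2080 units.
room_h, room_w, feat = 10, 10, 256
gru_dim, n_actions = 2080, 6

semantic_features = rng.standard_normal((room_h, room_w, feat))
gru_concat = rng.standard_normal(gru_dim)

# Crop the 1x1 semantic feature at the agent's current grid cell
r, c = 4, 7
crop = semantic_features[r, c]              # shape (256,)
fused = np.concatenate([gru_concat, crop])  # input to the V and pi heads

# Untrained placeholder weights for the two fully connected heads
W_v = rng.standard_normal((fused.size, 1))
W_pi = rng.standard_normal((fused.size, n_actions))
value = fused @ W_v      # scalar state value, shape (1,)
logits = fused @ W_pi    # one logit per action, shape (6,)
```

This combines the agent's recurrent state with map evidence local to its position before choosing among the six actions.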

\noindent \textbf{Answerer Network} \\
Inputs: 
\begin{itemize}[itemsep=0pt]
    \item Semantic map at $RoomW \times RoomH \times (NumClasses + 5)$
    \item Question Encoding
\end{itemize}
Layers:
\begin{itemize}[itemsep=0pt]
    \item Tile Question: $RoomW \times RoomH$
    \item Concatenate: (Semantic Map, Tile Question)
    \item Conv: $64 \times 1 \times 1$ kernels, stride 1, ELU activation
    \item Conv: $128 \times 3 \times 3$ kernels, stride 2, ELU activation
    \item Max Pool: $2 \times 2$, stride 2
    \item Conv: $128 \times 3 \times 3$ kernels, stride 2, ELU activation
    \item Max Pool: $2 \times 2$, stride 2
    \item Conv: $128 \times 3 \times 3$ kernels, stride 2, ELU activation
    \item Max Pool: $2 \times 2$, stride 2
    \item Spatial Sum
    \item Fully Connected: 128 units, ELU activation
    \item Fully Connected: 128 units, ELU activation
    \item \textbf{Output}: Answer: Fully Connected: NumAnswerClasses, no activation
    \item \textbf{Output}: $V_{answer}$: Fully Connected, no activation
    \item \textbf{Output}: $\pi_{answer}$: Fully Connected, no activation
\end{itemize}
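The answerer's tiling and spatial-sum steps can be sketched as follows. The grid size, class count, and question dimensionality are illustrative, and the intermediate conv stack is elided for brevity.

```python
import numpy as np

room_h, room_w = 20, 20   # illustrative room grid size
num_classes = 25          # hypothetical class count
q_dim = 64                # question encoding width

semantic_map = np.zeros((room_h, room_w, num_classes + 5))
question = np.ones(q_dim)

# Tile the question encoding across every map cell and concatenate,
# so each cell sees both its map channels and the question.
tiled_q = np.broadcast_to(question, (room_h, room_w, q_dim))
x = np.concatenate([semantic_map, tiled_q], axis=-1)

# After the conv stack (elided here), the spatial sum pools the whole
# grid into a single feature vector for the fully connected layers.
pooled = x.sum(axis=(0, 1))
```

The spatial sum makes the answer head invariant to where in the map the relevant evidence appears.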

\section{Training details}
To train the navigation network, we use the ADAM optimization algorithm with a learning rate of $10^{-4}$ and default TensorFlow constants for the remaining hyperparameters. We train with a batch size of 256 for 200,000 iterations. To train the planner and answerer, we use the RMSProp optimization algorithm with a learning rate of $10^{-3}$. We use the learning curriculum described in the paper for 5 million iterations of A3C (5 million distinct interactions with the environment).

{\small
\bibliographystyle{ieee}
\bibliography{00_references}
}

\end{document}


