\documentclass[letterpaper, 10pt]{article}

\usepackage{hyperref}
\usepackage[pdftex]{graphicx}
\usepackage{amsfonts,amsmath,amssymb,amsthm}
%\usepackage{minted}
\usepackage{fullpage}
\usepackage{booktabs}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{shortbold}
\usepackage{color}

\newcommand{\email}[1]{\href{mailto:#1}{\tt\small #1}}
\title{ {ACRL} Homework 2: Helicopter Control}
\author{%
    Sanjiban Choudhury \\
    \email{sanjibac@andrew.cmu.edu} \and
    Abhijeet Tallavajhula \\
    \email{atallav1@andrew.cmu.edu} \and
    Venkatraman Narayanan\\
    \email{venkatrn@andrew.cmu.edu}
}
\date{March 25, 2014}

\begin{document}
\maketitle

\section{Introduction}
This report summarizes our approaches to learning a hover controller for a helicopter
via policy search on the non-linear dynamics model.

\section{Linear Policy Class}
To start, we considered a linear policy class of the form $\Delta u = K\Delta x$. We used the cross-entropy method for black-box optimization of $K$. To evaluate each sample, we ran the simulator with noise and counted the number of timesteps the helicopter remained within a basin around the hover point. We used a population of 100 samples, selected 10 of them for the elite set, and added time-decreasing noise to the covariance matrix of the Gaussian we sample from.
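The cross-entropy loop described above can be sketched as follows. This is a minimal illustration, not our actual code: the function names, iteration count, and the scalar test objective are assumptions; in the report, \texttt{evaluate} would run the noisy simulator and return the number of timesteps in the hover basin.

```python
import numpy as np

def cross_entropy_search(evaluate, dim, iters=50, pop_size=100, elite_size=10,
                         noise0=1.0, seed=0):
    """Black-box search over a flat parameter vector with the cross-entropy
    method: sample a Gaussian population, keep the elite set, refit the
    Gaussian, and add time-decreasing noise to the covariance."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(dim)
    cov = np.eye(dim)
    for it in range(iters):
        samples = rng.multivariate_normal(mean, cov, size=pop_size)
        scores = np.array([evaluate(s) for s in samples])
        elite = samples[np.argsort(scores)[-elite_size:]]  # best elite_size samples
        mean = elite.mean(axis=0)
        cov = np.cov(elite, rowvar=False)
        # time-decreasing noise keeps the covariance from collapsing too early
        cov += (noise0 / (it + 1)) * np.eye(dim)
    return mean
```

With a simple quadratic objective in place of the simulator rollout, the mean converges to the optimum, which is the behavior the population-mean curves in the figures track.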
\begin{figure}[h]
    \begin{center}
        \includegraphics[width=0.4\textwidth]{./figures/ce_linear_history.png}
        \includegraphics[width=0.4\textwidth]{./figures/ce_linear_sample_run.png}
    \end{center}
    \caption{Learning a Linear Policy with the Cross-Entropy Method.}
    \label{fig:ce}
\end{figure}
The optimization found weights that work in the presence of noise and under initial perturbations. The evolution of the performance of the population mean is shown in Fig.~\ref{fig:ce}, along with the northing for a particular run with the best weights found.

\section{Non-Linear Policy Class}
We then explored a more complex non-linear policy class, using the policy form of the neural-network-based controller described in~\cite{ng2003autonomous}. Once again, we used the cross-entropy method to obtain the weights of the non-linear policy.
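Since the cross-entropy method operates on a flat parameter vector, each sample must be unpacked into network weights before a rollout. The sketch below uses a generic one-hidden-layer policy; the actual architecture of~\cite{ng2003autonomous} differs, so the layer sizes and the \texttt{tanh} form here are illustrative assumptions.

```python
import numpy as np

def neural_policy(dx, W1, b1, W2, b2):
    """Generic one-hidden-layer policy: du = W2 tanh(W1 dx + b1) + b2.
    A hedged stand-in for the neural-network policy class; the exact
    form in Ng et al. (2003) differs."""
    return W2 @ np.tanh(W1 @ dx + b1) + b2

def unpack(theta, n_in, n_hidden, n_out):
    """Reshape a flat cross-entropy sample into policy weights."""
    i = 0
    W1 = theta[i:i + n_hidden * n_in].reshape(n_hidden, n_in); i += n_hidden * n_in
    b1 = theta[i:i + n_hidden]; i += n_hidden
    W2 = theta[i:i + n_out * n_hidden].reshape(n_out, n_hidden); i += n_out * n_hidden
    b2 = theta[i:i + n_out]
    return W1, b1, W2, b2
```

The cross-entropy method itself is unchanged; only the dimensionality of the sampled vector and the rollout's policy evaluation differ from the linear case.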
\begin{figure}[h]
    \begin{center}
        \includegraphics[width=0.4\textwidth]{./figures/ce_neural_history.png}
        \includegraphics[width=0.4\textwidth]{./figures/ce_neural_sample_run.png}
    \end{center}
    \caption{Learning a Non-Linear Policy with a Neural Network.}
    \label{fig:nn}
\end{figure}
Fig.~\ref{fig:nn} shows the performance (number of timesteps the helicopter remains in hover) of the population mean over iterations of the cross-entropy method. Note that the convergence of the weights improved drastically compared to the simple linear policy class used earlier.

\section{Handling Latency}
We considered a fixed, known latency of 3 timesteps. We tried two approaches:
\begin{itemize}
\item Including a window of previous controls in the input to the non-linear controller described earlier. The performance of this controller oscillated between 170 and 320 timesteps.
\item Forward-simulating the current state through the known control history and passing the resulting state as input to the non-linear controller. This makes use of the exact helicopter model used in simulation, and was intended only as a sanity check. Surprisingly, the optimization was unable to find good weights, which suggests that the dynamics are sensitive to small changes in state: because of noise in the inputs, the forward-simulated old state does not match the true (unknown) current state.
\end{itemize}
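The forward-simulation check in the second item can be sketched as follows. The function names are illustrative; in our check, \texttt{step} was the exact one-step simulator dynamics, and the control queue held the last \texttt{latency} controls that had been issued but not yet felt by the plant.

```python
import numpy as np

def compensate_latency(x_old, control_queue, step):
    """Predict the current state by rolling the last observed state
    forward through the queued, not-yet-applied controls.
    `step(x, u)` is a one-step dynamics model (assumed name)."""
    x_pred = x_old
    for u in control_queue:
        x_pred = step(x_pred, u)
    return x_pred
```

Because the real plant is driven by noisy inputs, this noiseless rollout drifts from the true state, which is consistent with the failure we observed.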
We then considered a controller that is the sum of the steady-state LQR controller found earlier and the non-linear term. The idea is that the non-linear term compensates for the shortcomings of the baseline controller, similar to~\cite{johnson2004adaptive}. The optimization converges quickly, and the results are shown in Fig.~\ref{fig:lqr_nn}. Note that LQR alone is unable to handle a latency of 3.

The approach does not scale well, however: for a latency of 4, the best-performing controller remained in the hover basin for only 700 timesteps.
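The combined controller has a simple structure: the fixed LQR gain provides the baseline action, and only the weights of the additive non-linear term are optimized by the cross-entropy method. A minimal sketch, with names assumed for illustration:

```python
import numpy as np

def combined_control(dx, K_lqr, nn_term):
    """Baseline linear (LQR) action plus a learned non-linear correction.
    K_lqr is held fixed; only nn_term's weights are searched over."""
    return K_lqr @ dx + nn_term(dx)
```

Fixing the baseline shrinks the effective search space to the correction term, which is one plausible reason the optimization converges quickly.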

\begin{figure}[h]
    \begin{center}
        \includegraphics[width=0.4\textwidth]{./figures/ce_neural_latency_history.png}
        \includegraphics[width=0.4\textwidth]{./figures/ce_neural_latency_sample_run.png}
    \end{center}
    \caption{Learning a Non-Linear Policy with LQR + Neural Network.}
    \label{fig:lqr_nn}
\end{figure}

\bibliographystyle{plain}
\bibliography{hw2_bib}

\end{document}
