\documentclass[a4paper,12pt]{report}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{cite}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage[font={small,it}]{caption}

\begin{document}
\chapter{Experiment setup and results}
We consider a one-dimensional problem: a robot moving across a road crossing in which cars move at constant speed. There are some cars the robot needs to collide with (say, refuelling cars), but it should not collide with the others. At each time step, the robot can move left, move right, or stay in its position.
%attach figure
\begin{figure}[h!]
  
  \centering
    \includegraphics[scale=.5]{img1f.png}
    \caption{The robot (green), cars to be avoided (red), and objects to be collected (blue)}
  \label{gm}
\end{figure}
The state of the robot is defined by its location together with the current time step. The reward function is assumed to be a linear combination of features. The robot can see $H$ time steps into the future, which reduces the state space of the MDPs that are generated. We re-solve the MDP (i.e. re-evaluate the policy) at each time step and aggregate the results to obtain a policy for the complete MDP. The expert policy obtained this way may not be globally optimal.
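The receding-horizon scheme described above can be sketched as follows. This is a minimal illustration under simplifying assumptions (a stationary reward vector \texttt{R} and a transition tensor \texttt{P[a, s, s']}); in the actual experiments the state also encodes the time step, so the reward seen by the solver changes as the cars move.

```python
import numpy as np

def finite_horizon_vi(P, R, H):
    """Finite-horizon value iteration over a window of H steps.

    P: transition tensor of shape (A, S, S); R: per-state reward of shape (S,).
    Returns the greedy first-step policy and the H-step values.
    """
    A, S, _ = P.shape
    V = np.zeros(S)                  # value at the end of the horizon
    for _ in range(H):               # back up H steps
        Q = R + P @ V                # Q[a, s] = R[s] + sum_s' P[a, s, s'] V[s']
        V = Q.max(axis=0)
        pi = Q.argmax(axis=0)
    return pi, V

def receding_horizon_policy(P, R, H, s0, T):
    """Re-solve the H-step MDP at every time step and aggregate the chosen
    actions, as in the text. Deterministic-successor version for brevity."""
    actions, s = [], s0
    for _ in range(T):
        pi, _ = finite_horizon_vi(P, R, H)
        a = pi[s]
        actions.append(a)
        s = int(np.argmax(P[a, s]))  # follow the most likely successor
    return actions
```

With a time-dependent reward, \texttt{finite\_horizon\_vi} would be called with the reward slice for the current window, which is why the aggregated policy need not be globally optimal.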
%equation
We wish to determine the weight of each feature. We use three features for this problem. %TODO policy evaluation info
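The linear reward assumption can be written explicitly: if $\phi(s)$ is the vector of the three feature values for state $s$ and $w$ is the weight vector to be recovered, then
\[
R(s) = w^{\top}\phi(s) = \sum_{i=1}^{3} w_i\,\phi_i(s).
\]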


\section{Predefined weights}
For the first experiment, our features are as follows:
\begin{itemize} %reduce spacing
 \setlength{\itemsep}{.1pt}
 \item a collision with a car to be avoided gives a negative reward ($-1$),
 \item moving in either direction also gives a negative reward ($-1$),
 \item colliding with a refuelling car gives a positive reward ($+1$).
\end{itemize}

We set the weights to $[1000, 1, 1000]$, indicating a strong urge to avoid collisions with the cars carrying negative reward, a strong urge to collide with the refuelling cars, and little concern for the cost of movement.
\par
Our training set consists of 11 MDPs and the policies generated using the above weights; our validation set likewise consists of 11 MDPs with policies generated using the same weights. The number of states varies across MDPs (the number of locations ranges from 10 to 30 and the number of time steps from 30 to 90). The time horizon, $H$, is set to 5.\par
We consider two sets of transition probabilities: first a deterministic case, where an action always succeeds, and then a non-deterministic case, where an action succeeds with probability $0.8$ and each of the other two actions occurs with probability $0.1$. For the edge cases, the probability of an action leading to an invalid location is added to the probability of the intended action succeeding.
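The non-deterministic transition model, including the edge-case rule, can be sketched as below. This is one plausible reading of the rule in the text (any out-of-grid outcome has its probability mass folded into the intended action's clamped target); the function and parameter names are illustrative, not from the original implementation.

```python
import numpy as np

def transition_tensor(S, p_succeed=0.8, deterministic=False):
    """Build P[a, s, s'] for actions left(0), stay(1), right(2) on a line
    of S locations. Out-of-grid outcomes are folded into the intended
    action's (clamped) target state, as described in the text."""
    moves = [-1, 0, 1]
    p_other = (1.0 - p_succeed) / 2.0
    if deterministic:
        p_succeed, p_other = 1.0, 0.0
    P = np.zeros((3, S, S))
    for a in range(3):
        for s in range(S):
            intended = min(max(s + moves[a], 0), S - 1)
            for b, m in enumerate(moves):
                p = p_succeed if b == a else p_other
                t = s + m
                if 0 <= t < S:
                    P[a, s, t] += p
                else:
                    # invalid outcome: probability goes to the intended action
                    P[a, s, intended] += p
    return P
```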
\par
We consider three metrics to evaluate our results:
\begin{itemize}
 \setlength{\itemsep}{.1pt}
 \item Path match: a higher percentage indicates a closer match to the observed path.
 \item Policy match for the observed path: a higher percentage indicates a closer match to the expert policy.
 \item Average difference in features per time step: a lower number indicates a better match in the summed feature counts.
\end{itemize}
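The three metrics above are stated informally in the text; the following sketch shows one straightforward reading of them. The function names and the dictionary representation of the learned policy are illustrative assumptions.

```python
import numpy as np

def path_match(expert_path, learned_path):
    """Percentage of time steps at which the two paths visit the same state."""
    same = sum(a == b for a, b in zip(expert_path, learned_path))
    return 100.0 * same / len(expert_path)

def policy_match(expert_actions, learned_policy, expert_path):
    """Percentage of states on the observed path where the learned policy
    (a state -> action map) picks the expert's action."""
    same = sum(learned_policy[s] == a
               for s, a in zip(expert_path, expert_actions))
    return 100.0 * same / len(expert_actions)

def avg_feature_difference(expert_features, learned_features, T):
    """Absolute difference between the summed feature counts of the two
    paths, divided by the number of time steps T (one value per feature)."""
    return np.abs(np.asarray(expert_features, dtype=float)
                  - np.asarray(learned_features, dtype=float)) / T
```

Under this reading, the three values per algorithm in the tables below correspond to the per-feature output of the last function.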

\subsection{Results}
Some abbreviations used:\\
QP: The quadratic programming formulation from \cite{Abbeel:2004:ALV:1015330.1015430} \\%<placeholder Apprenticeship learning via IRL Ng Abbeel> \\
GA: Feature matching using Genetic Algorithms \\
ME: Maximum Entropy Method \cite{Ziebart_2008_6055}\\
MEN: Normalized maximum entropy method \\
BIRL: Standard Bayesian IRL \cite{Ramachandran07bayesianinverse}\\
BIRLI: Improved BIRL \\
\subsubsection{Deterministic actions}
\begin{center}
  \begin{tabular}{ | l | c | c | c | c |}
    \hline
    Algorithm & Weights & Path Match & Policy Match & Average Difference \\ \hline
    QP & 0.9264706 & 79.88 & 96.78 & 0.0356871 \\        
       & 0.0882353 & 	   &       & 0.1260655 \\
       & 0.6911765 & 	   &       & 0.0444611 \\ \hline
    GA & 0.5817141 & 95.84 & 99.46 & 0.0060606 \\        
       & 0.0237875 & 	   &       & 0.0286501 \\
       & 0.6006902 & 	   &       & 0.0060606 \\ \hline
    ME & 0.2099072 & 27.35 & 69.54 & 0.2191048 \\        
       & 0.7900928 & 	   &       & 0.9138373 \\
       & $1.341 \times 10^{-8}$ & 	   &       & 0.1451539 \\ \hline
    MEN & 0.4768660 & 97.66 & 99.86 & 0.0 \\        
       & 0.0165905 & 	   &       & 0.0165289 \\
       & 0.5065435 & 	   &       & 0.0 \\ \hline
    BIRL & 0.4368039 & 53.12 & 65.84 & 0.0451011 \\        
       & 0.0 & 	   &       & 0.4914036 \\
       & 0.5631961 & 	   &       & 0.0421774 \\ \hline 
    BIRLI & 0.4867780 & 94.99 & 99.06 & 0.0096970 \\        
       & 0.0248060 & 	   &       & 0.0431956 \\
       & 0.4884160 & 	   &       & 0.0096970 \\ \hline
  \end{tabular}
\end{center}
The normalized maximum entropy method performs best, closely followed by our genetic algorithm approach and our improved BIRL approach. BIRL with the standard probability definition performs poorly, along with the un-normalized maximum entropy method.
\subsubsection{Non-deterministic actions}
\begin{center}
  \begin{tabular}{ | l | c | c | c | c |}
    \hline
    Algorithm & Weights & Path Match & Policy Match & Average Difference \\ \hline
    QP & 0.5817141 & 55.98 & 84.69 & 0.0178033 \\        
       & 0.0237875 & 	   &       & 0.3870907 \\
       & 0.6006902 & 	   &       & 0.0153828 \\ \hline
    GA & 0.6532417 & 98.09 & 99.77 & 0.0062628 \\        
       & 0.0009902 & 	   &       & 0.0232378 \\
       & 0.6349674 & 	   &       & 0.0061540 \\ \hline
    ME & 0.9768493 & 63.23 & 87.79 & 0.0364202 \\        
       & $2.743 \times 10^{-8}$ & 	   &       & 0.1333707 \\
       & 0.0231507 & 	   &       & 0.0560314 \\ \hline
    MEN & 0.6362234 & 82.70 & 93.72 & 0.0305718 \\        
       & $6.943 \times 10^{-21}$ & 	   &       & 0.1098317 \\
       & 0.3637766 & 	   &       & 0.0229517 \\ \hline
    BIRL & 0.7904742 & 71.99 & 90.02 & 0.0302867 \\        
       & 0.0 & 	   &       & 0.1718813 \\
       & 0.2095258 & 	   &       & 0.0358048 \\ \hline 
    BIRLI & 0.4666431 & 88.25 & 95.63 & 0.0072912 \\        
       & 0.0 & 	   &       & 0.1137779 \\
       & 0.5333569 & 	   &       & 0.0078020 \\ \hline
  \end{tabular}
\end{center}
In this case, our GA approach performs best, followed by our improved BIRL approach and the normalized maximum entropy method. Because of the limitations of the QP approach in the batch setting, it performs worse than the other algorithms.

\section{Human observation}
We collected the trajectories of the robot controlled by humans for the same set of MDPs used earlier. We specified two sets of rules:\\
\textbf{Rule set 1}
\begin{itemize}
 \setlength{\itemsep}{.1pt}
 \item Try to avoid the red objects
 \item Try to collect blue objects
\end{itemize}
\textbf{Rule set 2}
\begin{itemize}
 \setlength{\itemsep}{.1pt}
 \item Try to hit the red objects 
 \item Try to minimize movement
 \item Try to avoid the blue objects 
\end{itemize}

A number of trajectories were collected for each MDP, and the one best fitting the rule set was selected as the expert trajectory for that MDP. For the second rule set, a change of sign for the first and third features is required for the maximum entropy method to work.
\par
The actions are considered to be deterministic. BIRL cannot be compared here since it requires the complete policy over the state space, whereas we only have the path taken by the robot.
We do not have a test set $<$yet$>$ as the training set itself was difficult to collect: it takes time for a human to learn the problem and act as an expert. However, there is little overfitting, so the metrics are evaluated on the training set. %TODO add extra table to appendix?
\subsection{Results}
\subsubsection{Rule set 1}
\begin{center}
  \begin{tabular}{ | l | c | c | c | c |}
    \hline
    Algorithm & Weights & Path Match & Policy Match & Average Difference \\ \hline
    QP & 0.0013509 & 32.70 & 69.19 & 0.0949127 \\        
       & 0.0006991 & 	   &       & 0.3808001 \\
       & 0.0401125 & 	   &       & 0.0648134 \\ \hline
    GA & 0.2666580 & 43.07 & 71.52 & 0.1064196 \\        
       & 0.0150257 & 	   &       & 0.2414331 \\
       & 0.3756208 & 	   &       & 0.0251347 \\ \hline
    ME & 0.2035503 & 14.81 & 63.83 & 0.1746338 \\        
       & 0.7964433 & 	   &       & 1.0918848 \\
       & 0.0000063 & 	   &       & 0.1411851 \\ \hline
    MEN & 0.2849231 & 34.01 & 71.04 & 0.0969916 \\        
       & 0.0658453 & 	   &       & 0.3327419 \\
       & 0.6492316 & 	   &       & 0.0613169 \\ \hline
  \end{tabular}
\end{center}
Based on the rule set, the first and third features should have nearly equal weights and the second feature should have a lower weight. The standard maximum entropy implementation performs the worst, with a very low weight assigned to the third feature. The performance of the other three algorithms is comparable, with the genetic algorithm achieving the highest path and policy match.
\subsubsection{Rule set 2}
\begin{center}
  \begin{tabular}{ | l | c | c | c | c |}
    \hline
    Algorithm & Weights & Path Match & Policy Match & Average Difference \\ \hline
    QP & 0.12 & 34.70 & 84.09 & 0.4549555 \\        
       & 0.1371429 & 	   &       & 0.4419955 \\
       & 0.9057143 & 	   &       & 0.0069930 \\ \hline
    GA & 0.3174619 & 36.23 & 86.80 & 0.4058479 \\        
       & 0.3135551 & 	   &       & 0.3916247 \\
       & 0.7662550 & 	   &       & 0.0069930 \\ \hline
    ME & $2.909 \times 10^{-17}$ & 19.41 & 81.32 & 0.6624998 \\
       & 1 & 	   &       & 0.5694686 \\
       & $6.377 \times 10^{-24}$ & 	   &       & 0.0335146 \\ \hline
    MEN & 0.0014155 & 19.41 & 81.32 & 0.6624998 \\        
       & 0.9985845 & 	   &       & 0.5694686 \\
       & $3.452 \times 10^{-21}$ & 	   &       & 0.0335146 \\ \hline
  \end{tabular}
\end{center}
From the weights, it seems that QP and GA assign almost equal weights to the first and second features and a higher weight to the third feature (indicating a preference for avoiding collisions with the blue objects). These weights agree with the rule set. The maximum entropy implementations, however, assign a very high weight to the second feature compared to the other two, and it seems that the resulting policy would simply minimize movement.
\bibliography{report1}{}
\bibliographystyle{plain}
\end{document}
