\documentclass[a4paper,12pt]{memoir}
\usepackage[utf8]{inputenc}
\usepackage[french,english]{babel}
\usepackage [vscale=0.76,includehead]{geometry}   

\usepackage{cite}

\usepackage{amsthm}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage[font={small,it}]{caption}
\usepackage{tabularx}
\usepackage[T1]{fontenc}
\newtheorem{theorem}{Theorem}
%\geometry{a4paper}                   % ... or a4paper or a5paper or ... 
%\geometry{landscape}                % Activate for for rotated page geometry
%\OnehalfSpacing
% \setSingleSpace{1.05}
%\usepackage[parfill]{parskip}    % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx}
\usepackage{amsmath}
%\usepackage{fullpage}
\usepackage{mathptmx} % font = times
\usepackage{helvet} % font sf = helvetica
%\usepackage[latin1]{inputenc}
%\usepackage{relsize}

%Style des têtes de section, headings, chapitre
\headstyles{komalike}
\nouppercaseheads
\chapterstyle{dash}
\makeevenhead{headings}{\sffamily\thepage}{}{\sffamily\leftmark} 
\makeoddhead{headings}{\sffamily\rightmark}{}{\sffamily\thepage}
\makeoddfoot{plain}{}{}{} % Pages chapitre. 
\makeheadrule{headings}{\textwidth}{\normalrulethickness}
%\renewcommand{\leftmark}{\thechapter ---}
\renewcommand{\chaptername}{\relax}
\renewcommand{\chaptitlefont}{ \sffamily\bfseries \LARGE}
\renewcommand{\chapnumfont}{ \sffamily\bfseries \LARGE}
\setsecnumdepth{subsection}
\setcounter{tocdepth}{3}

% Title page formatting -- do not change!
\pretitle{\HUGE\sffamily \bfseries\begin{center}} 
\posttitle{\end{center}}
\preauthor{\LARGE  \sffamily \bfseries\begin{center}}
\postauthor{\par\end{center}}

\newcommand{\jury}[1]{% 
\gdef\juryB{#1}} 
\newcommand{\juryB}{} 
\newcommand{\session}[1]{% 
\gdef\sessionB{#1}} 
\newcommand{\sessionB}{} 
\newcommand{\option}[1]{% 
\gdef\optionB{#1}} 
\newcommand{\optionB}{} 

\renewcommand{\maketitlehookd}{% 
\vfill{}  \large\par\noindent  
\begin{center}\juryB \bigskip\sessionB\end{center}
\vspace{-1.5cm}}
\renewcommand{\maketitlehooka}{% 
\vspace{-1.5cm}\noindent\includegraphics[height=14ex]{logoINP.png}\hfill\raisebox{2ex}{\includegraphics[height=7ex]{logoUJF.jpg}}\\
\bigskip
\begin{center} \large
Master of Science in Informatics at Grenoble \\
Master Math\'ematiques Informatique - sp\'ecialit\'e Informatique \\ 
option \optionB  \end{center}\vfill}
% End of title page formatting

\option{GVR}
\title{Lifelong feature-based strategy planning for dynamic environments}%\\\vspace{-1ex}\rule{10ex}{0.5pt} \\sub-title} 
\author{Nishant JAIN}
\date{24 June 2015} % Delete this line to display the current date
\jury{
Research project performed at INRIA Grenoble Rh\^{o}ne-Alpes\\\medskip
Under the supervision of:\\
Dr Alejandro-Dizan VASQUEZ, INRIA Grenoble Rh\^{o}ne-Alpes\\\medskip
Defended before a jury composed of:\\
Prof James CROWLEY\\
Dr Edmond BOYER\\
Dr Guillaume HUARD\\
Dr Dominique VAUFREYDAZ\\
Dr Jean-S\'{e}bastien FRANCO\\
Dr Anne SPALANZANI\\
}
\session{June\hfill 2015}


%%% BEGIN DOCUMENT
\begin{document}
\selectlanguage{english} % french si rapport en français
\frontmatter
\begin{titlingpage}
\maketitle
\end{titlingpage}

%\small
\setlength{\parskip}{-1pt plus 1pt}

\renewcommand{\abstracttextfont}{\normalfont}
\abstractintoc
\begin{abstract} 
In this dissertation we address the problem of motion planning in dynamic environments. We focus on learning from human demonstration, particularly using inverse reinforcement learning (IRL). We discuss different existing algorithms such as Maximum Entropy IRL and Bayesian IRL. We propose improvements to the existing algorithms as well as a new approach using genetic algorithms. The algorithms are extended to work in dynamic environments and also to learn a reward function from examples coming from similar environments, thus improving robustness. We have compared the proposed approaches against the existing methods. Our tests show that the latter yield unsatisfactory results. On the other hand, the proposed improvements and extensions, in general, show much better performance.
\\
\\
\textbf{Keywords}: Learning from demonstration, Inverse reinforcement learning, dynamic environment, motion planning
\end{abstract}
\abstractintoc
\renewcommand\abstractname{R\'esum\'e}
\begin{abstract} \selectlanguage{french}
Dans ce mémoire, nous traitons du problème de la planification de mouvements dans des environnements dynamiques. Plus particulièrement, nous nous intéressons à l'apprentissage par démonstration et en particulier à  l'apprentissage inverse par renforcement (IRL de par ses sigles en anglais). Nous discutons de plusieurs algorithmes existants tels que le Maximum Entropy IRL ou encore le Bayesian IRL. Nous proposons des améliorations pour certains d'entre eux ainsi qu'une nouvelle approche basée sur des algorithmes génétiques.  Nous étendons ensuite ces algorithmes au cas des environnements dynamiques, puis nous  proposons une extension pour apprendre une fonction de récompense à partir d'exemples obtenus dans des environnements similaires, améliorant ainsi la robustesse. Nous avons comparé nos approches aux méthodes existantes. Nos tests montrent que ces dernières produisent des résultats peu satisfaisants. Par contre, les améliorations et extensions proposées dans ce mémoire affichent en général une performance bien meilleure.
\\
\\
\textbf{Mots-clés}: apprentissage par démonstration, apprentissage par renforcement inverse, planification de mouvements, environnements dynamiques
\end{abstract}
\selectlanguage{english}% french si rapport en français



\cleardoublepage

\tableofcontents* % the asterisk means that the table of contents itself isn't put into the ToC
\normalsize

\mainmatter
\SingleSpace
\raggedbottom
\chapter{Introduction}
%$<$ Talk about learning from human demonstration in general, reinforcement learning, irl, motion planning in dynamic environments, cover state of the art for non irl stuff?$>$
Robots are becoming more and more common in everyday life, from recreational robots for the home to autonomous cars. Scientists around the world are trying to make robots smarter. Instead of programming robots for each and every task, the current goal is to make it possible for them to learn actions, just as humans do. One way to do so is to demonstrate an action and have the robot learn from this demonstration. This is called Robot Learning from Demonstration (LfD). LfD allows users to train robots according to their specific needs. 
\par
Many robotics applications assume that the robot takes an action based on its current world state \cite{Argall2009469}. The mapping from states to actions is called a \textit{policy}. Let the sequence of state-action pairs observed while a teacher demonstrates the desired behaviour to the robot be called an \textit{example}. LfD algorithms utilize a dataset of examples to learn a policy that reproduces the demonstrated behaviour. This approach differs from Reinforcement Learning \cite{Sutton:1998:IRL:551283}, which uses data acquired through exploration to derive the policy. Classical approaches to robot control model domain knowledge and derive policies mathematically. These approaches depend heavily on the accuracy of the model, and the models themselves are difficult to develop and rely on approximations to reduce computational complexity.
\par
Creating autonomous robots is one of the ultimate goals of robotics \cite{Latombe:1991:RMP:532147}. Such robots should be able to execute tasks given a high-level description, without human intervention. Motion planning is one of the problems that must be addressed to make this possible. It can be loosely stated as follows: given the initial state of the robot and a goal state, determine a sequence of motions such that the robot reaches the goal state while minimizing a cost function. Classical methods reduced the state space to make the use of search algorithms such as A* possible. A* works well for fully observable environments, but other algorithms such as D* \cite{Stentz93optimaland}, which are improvements on A*, were developed to work in partially observable and dynamic environments. Some algorithms also take into account the concept of inevitable collision states \cite{fraichard:inria-00182063}. An inevitable collision state is one from which, no matter what future trajectory the robot follows, a collision eventually occurs.
%\\
%$<$ Discuss traditional planning algorithms in detail?$>$
%A lot of classical planning techniques assumed that the robot had a complete and accurate model of the environment. 
\par
Another approach used in planning is Rapidly Exploring Random Trees (RRT) \cite{Lavalle98rapidly-exploringrandom}. RRTs were designed to handle nonholonomic constraints and high degrees of freedom. The tree is constructed incrementally from samples drawn randomly from the search space. The tree is biased to grow towards large unsearched areas of the search space.
\par
%Lifelong learning
Instead of focusing on learning single tasks, focusing on the multitude of control tasks that a robot may encounter in its lifetime gives it the opportunity to transfer knowledge between tasks or environments \cite{Thrun93lifelongrobot}. Task-independent knowledge can be used to learn control in new environments with reduced real-world experimentation. This is called lifelong learning, where the goal is to selectively transfer knowledge gained from previous tasks when learning a new task, so as to develop more accurate hypotheses or policies \cite{SSS135802}.
\par 
Reinforcement learning is based on the concept of learning from interaction with the environment. Reinforcement learning problems involve learning what to do so as to maximize a numerical reward \cite{Sutton:1998:IRL:551283}. The learner is not told which actions to take but instead must discover which actions yield the most reward by trying them out.
\par
However, when considering the behaviour of animals and humans, a prior hypothesis on the reward may turn out to be incorrect. In other cases, it may be difficult for the designer to model the reward at all. To model natural learning using reinforcement learning ideas, we must first solve another computational task, called inverse reinforcement learning \cite{Russell:1998:LAU:279943.279964}. Inverse reinforcement learning consists in learning the reward given some demonstrations of the task by an expert. The learner is not just replicating the observed trajectory but is also inferring the reason for the observed behaviour \cite{Lopes09}. The reward seems to be the most succinct and transferable definition of a task. Inverse reinforcement learning is thus one of the forms of LfD that incorporates lifelong learning. 
\par 
The inverse reinforcement learning problem is defined as follows:\\
\textbf{Given} 1) measurements of an agent's behaviour over time,
in a variety of circumstances; 2) measurements of the
sensory inputs to that agent; 3) a model of the physical
environment (including the agent's body).\\
\textbf{Determine} the reward function that the agent is optimizing.

\par 
Our major contributions are:
\begin{itemize}
 \setlength{\itemsep}{-2pt}
 \item Most of the current Inverse Reinforcement Learning algorithms are limited to static environments. We extend these to dynamic environments. 
 \item We discuss the flaws of the existing algorithms and propose novel approaches addressing these flaws.
 \item We provide a comparison of the different algorithms with our new approaches on a test problem.
\end{itemize} 
\par 
The dissertation is organized as follows: we present some preliminary concepts and the state of the art in Chapter \ref{ch:soa}. We propose new algorithms in Chapter \ref{ch:na} and compare them on our test problem in Chapter \ref{ch:es}. We conclude with our findings in Chapter \ref{ch:cf}.
%$<$ Discuss IRL algos$>$
%\\


%$<$ Restructure intro + IRL chapter to make things coherent$>$
%When considering path planning for robots, it is usually necessary to avoid collisions with obstacles. Inevitable collision avoidance \cite{fraichard:inria-00182063} is one such concept where the robot must avoid states for which no matter what the future trajectory followed by the system, a collision eventually occurs.  

%\chapter{Inverse Reinforcement Learning}\label{ch:soa}
\chapter{Prerequisite concepts and the state of the art}\label{ch:soa}

\section{Motion Planning}
The basic motion planning problem can be defined as follows \cite{Latombe:1991:RMP:532147}:
\begin{itemize} %reduce spacing
 \setlength{\itemsep}{-2pt}
 \item Let $\mathcal{A}$ be a single rigid object, the robot, moving in a Euclidean space $\mathcal{W}$, called the \textit{workspace}
 \item Let $\mathcal{B}_1,\cdots,\mathcal{B}_q$ be fixed rigid objects in $\mathcal{W}$ called \textit{obstacles}
 \item Assume that the geometry of the robot and the obstacles is accurately known and that there are no kinematic constraints limiting the motion of the robot
 \item The problem is: given initial and goal positions and orientations of $\mathcal{A}$ in $\mathcal{W}$, generate a path $\tau$ specifying a continuous sequence of positions and orientations of $\mathcal{A}$ that avoids contact with the obstacles $\mathcal{B}_i$, and report failure if no such path exists
\end{itemize}
This problem is generally transformed to the motion planning problem of a point by mapping the workspace to the configuration space $\mathcal{C}$. The configuration $q$ of $\mathcal{A}$ is a specification of the position and orientation of $\mathcal{A}$ in $\mathcal{W}$. The subset of $\mathcal{W}$ occupied by $\mathcal{A}$ at configuration $q$ is denoted by $\mathcal{A}(q)$. The space $\mathcal{C}$ is the set of all such configurations of $\mathcal{A}$. For every obstacle $\mathcal{B}_i$, the corresponding mapping in $\mathcal{C}$ is called a C-obstacle $\mathcal{CB}_i$.
\begin{equation*}
\mathcal{CB}_i=\{q \in \mathcal{C} : \mathcal{A}(q) \cap \mathcal{B}_i \ne \emptyset \}
\end{equation*}
The union of all the C-obstacles is called the C-obstacle region. The set
\begin{equation*}
\mathcal{C}_{free}=\mathcal{C}\backslash \bigcup_{i=1}^q \mathcal{CB}_i
\end{equation*}
is called the free space. This simplified problem can be solved by constructing a geometric path from the initial configuration to the goal configuration in $\mathcal{C}_{free}$.
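As a toy illustration (our own, with invented numbers, not part of the original formulation), consider a disc robot of radius $r$ moving among circular obstacles in a two-dimensional workspace. Growing each obstacle by $r$ reduces the robot to a point, and membership in $\mathcal{C}_{free}$ becomes a simple distance test:

```python
import math

def in_free_space(q, robot_radius, obstacles):
    """Return True if configuration q = (x, y) lies in C_free.

    Each circular obstacle B_i is grown by the robot radius, so the disc
    robot A can be treated as a point moving among the C-obstacles CB_i.
    obstacles : list of (cx, cy, radius) tuples.
    """
    x, y = q
    for cx, cy, r in obstacles:
        # q lies inside CB_i if it is closer to the centre than r + robot_radius.
        if math.hypot(x - cx, y - cy) <= r + robot_radius:
            return False
    return True

obstacles = [(5.0, 5.0, 1.0)]                       # one invented obstacle
print(in_free_space((0.0, 0.0), 0.5, obstacles))    # True: far from the obstacle
print(in_free_space((5.0, 4.0), 0.5, obstacles))    # False: inside the grown obstacle
```

A sampling- or search-based planner can then restrict itself to configurations for which this test succeeds.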
\par 
In the case of dynamic motion planning, some or all of the obstacles in the workspace are moving. The general approach is to add a dimension, time, to the robot's configuration space. The robot can then be mapped in this configuration-time space as a point moving among stationary obstacles.
\par 
Using classical techniques to solve the problem requires us to associate some sort of cost or reward with the motions executed by the robot, so as to generate an optimal path. As human-robot interaction gains significance, robots increasingly need to behave like humans, and it is difficult to model the corresponding costs correctly by hand. Instead of specifying these costs explicitly, learning them from previously recorded human actions provides a way to model them more accurately. The cost or reward function can generally be modeled as a function of the sensor data available to the robot, which is similar to how humans take actions based on their perceptions processed by the brain.
\par 
%TODO talk about features

\section{Markov Decision Processes}
The Inverse Reinforcement Learning problem was first addressed by Ng and Russell \cite{Ng00algorithmsfor} where the problem was modeled using Markov Decision Processes (MDPs). The reinforcement learning problem too is generally modeled using an MDP.  %mdp equation
\\
A finite MDP is a tuple $(S, A, \{P_{sa}\}, \gamma, R)$, where
\begin{itemize} %reduce spacing
 \setlength{\itemsep}{-2pt}
 \item $S$ is a finite set of $N$ states
 \item $A$ = $\{a_1, \cdots, a_k\}$ is a set of $k$ actions
 \item $P_{sa}(.)$ are the state transition probabilities upon taking action $a$ in state $s$
 \item $\gamma \in [0,1)$ is the discount factor
 \item $R : S \mapsto \Re$ is the reinforcement function or the reward function, bounded in absolute value by $R_{max}$
\end{itemize}
For simplicity, we denote rewards as $R(s)$ but we can easily extend it to $R : S \times A \mapsto \Re$, which is the notation we will be using later.
\\
A policy is defined as any map $\pi : S \mapsto A$, and the value function for a policy $\pi$ at any state $s_1$ is given by

\begin{align*}
 V^\pi(s_1)=E[R(s_1) + \gamma R(s_2) + \gamma^2 R(s_3) + \cdots |\pi]
\end{align*}

where the expectation is over the distribution of the state sequence $(s_1,s_2,\cdots)$ we pass through when we execute the policy $\pi$ starting from $s_1$. We also define the Q-function as

\begin{align*}
 Q^\pi(s,a)=R(s) + \gamma E_{s' \sim P_{sa}(.)}[V^\pi(s')]
\end{align*}

where the expectation is over $s'$ distributed according to $P_{sa}(.)$.\\
For discrete finite spaces, functions such as $R$ and $V$ can be represented as vectors indexed by state. The rewards can thus be written as an $N$ dimensional vector $\mathbf{R}$ for a state space of size $N$. Also, let $\mathbf{P}_a$ be the $N$ by $N$ matrix where each element $(i,j)$ gives the probability of moving to state $j$ from state $i$ on taking the action $a$. Let $\prec$ and $\preceq$ denote strict and non-strict vector inequality.
The goal of reinforcement learning is to find a policy $\pi$ such that $V^\pi(s)$ is maximized. Let us denote this policy as $\pi^*$.

%basic properties

Some additional results concerning MDPs \cite{Ng00algorithmsfor,Bertsekas:1996:NP:560669,Sutton:1998:IRL:551283} are also helpful:
\begin{theorem}[Bellman Equations]
 Let an MDP $M = (S, A, \{P_{sa}\}, \gamma, R)$ and a policy $\pi : S \mapsto A$ be given. Then for all $s \in S, a \in A, V^\pi$ and $Q^\pi$ satisfy
 \begin{equation}
  V^\pi(s) = R(s) + \gamma \sum\limits_{s'} P_{s\pi(s)}(s')V^\pi(s')
 \end{equation}
 \begin{equation}\label{eq:bl2}
  Q^\pi(s,a) = R(s) + \gamma \sum\limits_{s'} P_{sa}(s')V^\pi(s')
 \end{equation}
\end{theorem}
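The Bellman equation can be checked numerically on a small example. The sketch below (a hypothetical 2-state, 2-action MDP whose numbers are invented purely for illustration) evaluates $V^\pi$ by repeatedly applying the right-hand side of the equation, which converges because the update is a $\gamma$-contraction:

```python
def policy_evaluation(P, R, policy, gamma, iters=500):
    """Iteratively apply the Bellman equation
        V(s) = R(s) + gamma * sum_{s'} P(s, policy(s), s') * V(s')
    starting from V = 0; the map is a gamma-contraction, so it converges.

    P[s][a][s'] : transition probabilities; R[s] : reward; policy[s] : action.
    """
    n = len(R)
    V = [0.0] * n
    for _ in range(iters):
        V = [R[s] + gamma * sum(P[s][policy[s]][sp] * V[sp] for sp in range(n))
             for s in range(n)]
    return V

# Hypothetical 2-state, 2-action MDP (all numbers invented for illustration).
P = [
    [[0.9, 0.1], [0.2, 0.8]],   # transitions from state 0 under actions 0 and 1
    [[0.0, 1.0], [0.5, 0.5]],   # transitions from state 1 under actions 0 and 1
]
R = [0.0, 1.0]
gamma = 0.9
V = policy_evaluation(P, R, [1, 0], gamma)

# The converged values satisfy the Bellman equation for this policy.
for s, a in [(0, 1), (1, 0)]:
    rhs = R[s] + gamma * sum(P[s][a][sp] * V[sp] for sp in range(2))
    assert abs(V[s] - rhs) < 1e-6
```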

\begin{theorem}[Bellman Optimality]
 Let an MDP $M = (S, A, \{P_{sa}\}, \gamma, R)$ and a policy $\pi : S \mapsto A$ be given. Then $\pi$ is an optimal policy for $M$ if and only if, for all $s \in S$,
 \begin{equation}
  \pi(s) \in \arg \max\limits_{a \in A} Q^\pi(s,a)
 \end{equation}
\end{theorem}

\par 
Two commonly used algorithms to evaluate a policy for an MDP are Value Iteration (algorithm \ref{vi}) and Policy Iteration (algorithm \ref{pi}). 
\begin{algorithm}
 \caption{Value Iteration}\label{vi}
 \begin{algorithmic}[1]
  \Procedure{ValueIteration}{MDP $M$, Threshold $\theta$}
   \State Initialize array $V_0[s]$ randomly
   \State $k:=0$
   \Repeat
    \State $k:=k+1$
    \ForAll{$s \in S$}
     \State $V_k[s]= \max_a \sum_{s'} P(s,a,s')(R(s,a,s')+\gamma V_{k-1}[s'])$ 
    \EndFor
   \Until{$\forall s \: |V_k[s]-V_{k-1}[s]| < \theta$}
   \ForAll{$s \in S$}
    \State $\pi[s]= \arg \max_a \sum_{s'} P(s,a,s')(R(s,a,s')+\gamma V_k[s'])$ 
   \EndFor
  \\ \Return{$\pi, V_k$}
  \EndProcedure
 \end{algorithmic}
\end{algorithm}
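Algorithm \ref{vi} can be implemented directly. The sketch below (our own, on a hypothetical 2-state MDP with invented numbers; the reward $R(s,a,s')$ is taken to be a function of the arrival state purely for illustration) iterates until the largest change in $V$ falls below the threshold, then extracts the greedy policy:

```python
def value_iteration(P, R, gamma, theta=1e-9):
    """Value iteration as in Algorithm 1; the reward R(s, a, s') is taken
    to be R[s'] (collected on arrival), an illustrative choice."""
    n, k = len(P), len(P[0])

    def q(V, s, a):
        # Expected one-step return of taking action a in state s.
        return sum(P[s][a][sp] * (R[sp] + gamma * V[sp]) for sp in range(n))

    V = [0.0] * n
    while True:
        V_new = [max(q(V, s, a) for a in range(k)) for s in range(n)]
        delta = max(abs(V_new[s] - V[s]) for s in range(n))
        V = V_new
        if delta < theta:          # repeat until the largest update is small
            break
    # Extract the greedy policy with respect to the converged values.
    policy = [max(range(k), key=lambda a: q(V, s, a)) for s in range(n)]
    return policy, V

# Hypothetical 2-state, 2-action MDP (all numbers invented).
P = [
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.0, 1.0], [0.5, 0.5]],
]
R = [0.0, 1.0]
policy, V = value_iteration(P, R, 0.9)
print(policy)  # -> [1, 0]: head for the rewarding state, then stay there
```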

\begin{algorithm}
 \caption{Policy Iteration}\label{pi}
 \begin{algorithmic}[1]
  \Procedure{PolicyIteration}{MDP $M$}
   \State Initialize array $\pi[s]$ randomly
   \Repeat
    \State $noChange:=true$
    \State Solve $V[s]=\sum_{s'\in S}P(s,\pi[s],s')(R(s,\pi[s],s')+\gamma V[s'])$
    \ForAll{$s \in S$}
     \State $QBest:=V[s]$
     \ForAll{$a \in A$}
     \State $Qsa= \sum_{s'\in S}P(s,a,s')(R(s,a,s')+\gamma V[s'])$ 
     \If{$Qsa > QBest$}
      \State $\pi[s]=a$
      \State $QBest=Qsa$
      \State $noChange=false$
     \EndIf
     \EndFor
    \EndFor
   \Until{$noChange$}
   
    %\State $\pi[s]= \arg \max_a \sum_{s'} P(s,a,s')(R(s,a,s')+\gamma V_k[s'])$ 
   
  \\ \Return{$\pi$}
  \EndProcedure
 \end{algorithmic}
\end{algorithm}
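A corresponding sketch of Algorithm \ref{pi} (again our own, on a hypothetical 2-state MDP with invented numbers), with the evaluation step solved iteratively rather than as a linear system, which suffices for a small example:

```python
def policy_iteration(P, R, gamma, eval_iters=200):
    """Policy iteration mirroring Algorithm 2. The evaluation step applies
    V[s] = sum_s' P(s, pi[s], s') (R[s'] + gamma V[s']) repeatedly instead
    of solving the linear system (reward collected on arrival)."""
    n, k = len(P), len(P[0])
    pi = [0] * n
    while True:
        # Policy evaluation.
        V = [0.0] * n
        for _ in range(eval_iters):
            V = [sum(P[s][pi[s]][sp] * (R[sp] + gamma * V[sp]) for sp in range(n))
                 for s in range(n)]
        # Policy improvement.
        changed = False
        for s in range(n):
            q_best, a_best = V[s], pi[s]
            for a in range(k):
                q = sum(P[s][a][sp] * (R[sp] + gamma * V[sp]) for sp in range(n))
                if q > q_best + 1e-9:   # tolerance guards against round-off
                    q_best, a_best = q, a
            if a_best != pi[s]:
                pi[s], changed = a_best, True
        if not changed:
            return pi, V

# Hypothetical 2-state, 2-action MDP (numbers invented).
P = [
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.0, 1.0], [0.5, 0.5]],
]
pi, V = policy_iteration(P, [0.0, 1.0], 0.9)
print(pi)  # -> [1, 0]
```

On small problems, policy iteration typically needs far fewer improvement rounds than value iteration needs sweeps.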
%V def,Q def
%vector notations, strict inequality
%basic properties
The policies are assumed to be \textit{stationary}, i.e. the policy is independent of time and always defines the same action for a state.
%\\
%then come to IRL
\section{Inverse Reinforcement learning : state of the art}

\par In inverse reinforcement learning, the problem is to find the reward function $R$ given some observed behaviour.
\par 
For small finite MDPs, we may have the complete observed policy $\pi$ and we then wish to find the set of possible reward functions $R$ such that $\pi$ is an optimal policy in the MDP $(S, A, \{P_{sa}\}, \gamma, R)$.
%characteristics
\par Ng and Russell \cite{Ng00algorithmsfor} also characterized the set of solutions to the problem:
\begin{theorem}
 Let a finite state space $S$, a set of actions $A$ = $\{a_1, \cdots, a_k\}$, transition probability matrices \{$\mathbf{P}_a$\}, and a discount factor $\gamma \in [0,1)$ be given. Then the policy $\pi$ given by $\pi(s) \equiv a_1$ is optimal if and only if, for all $a = a_2, \cdots, a_k$, the reward $\mathbf{R}$ satisfies
 \begin{equation}\label{eq:css}
  (\mathbf{P}_{a_1}-\mathbf{P}_a)(\mathbf{I}-\gamma \mathbf{P}_{a_1})^{-1}\mathbf{R} \succeq 0
 \end{equation}

\end{theorem}
Note that $\mathbf{R}=\mathbf{0}$ is always a solution and there may be multiple solutions that satisfy (\ref{eq:css}). The IRL problem is thus ill-posed: there may be multiple reward functions for which a given policy is optimal, and there may be multiple policies that are optimal for a particular reward function.
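The condition in (\ref{eq:css}) is easy to check numerically. The sketch below (a hypothetical 2-state, 2-action MDP; all numbers invented) uses the closed-form $2\times 2$ matrix inverse so as to stay dependency-free:

```python
def irl_condition_holds(P_a1, P_a, R, gamma):
    """Check (P_a1 - P_a)(I - gamma P_a1)^{-1} R >= 0 (componentwise) for a
    2-state MDP, using the closed-form 2x2 inverse to avoid dependencies."""
    # M = I - gamma * P_a1
    m00 = 1 - gamma * P_a1[0][0]; m01 = -gamma * P_a1[0][1]
    m10 = -gamma * P_a1[1][0];    m11 = 1 - gamma * P_a1[1][1]
    det = m00 * m11 - m01 * m10
    # V = (I - gamma * P_a1)^{-1} R via the 2x2 inverse formula.
    V = [(m11 * R[0] - m01 * R[1]) / det,
         (-m10 * R[0] + m00 * R[1]) / det]
    # D = (P_a1 - P_a) V must be componentwise non-negative.
    D = [sum((P_a1[i][j] - P_a[i][j]) * V[j] for j in range(2)) for i in range(2)]
    return all(d >= -1e-12 for d in D)

# Invented example: under action a1 the agent drifts towards state 1.
P_a1 = [[0.2, 0.8], [0.0, 1.0]]
P_a2 = [[0.9, 0.1], [0.5, 0.5]]
print(irl_condition_holds(P_a1, P_a2, [0.0, 1.0], 0.9))  # True: a1 everywhere is optimal
print(irl_condition_holds(P_a1, P_a2, [1.0, 0.0], 0.9))  # False for the reversed reward
```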
\par 
In larger finite state MDPs or infinite state MDPs, finding a general solution can become difficult. Instead, it makes sense to define the reward in terms of the context of the agent, such as the distance to the nearest obstacle or the current velocity of the agent. This information computed from the state or actions of the agent can be used to define $d$ features. These features can then be used to determine $R$ over the complete state space.
A linear approximation of the reward function is commonly used: given a vector of features $\boldsymbol{\phi} : S \to [0,1]^d$,
\begin{equation}
 R(s)=w_1\phi_1(s)+w_2\phi_2(s)+\cdots +w_d\phi_d(s)
\end{equation}
Vasquez et al. \cite{vasquez2014inverse} provide a comparison of the features used for robot navigation in crowds. The selection of features is a crucial design choice for IRL.

\subsection{Inverse Reinforcement learning using linear programming (LP)}
%TODO connecting text etc
The IRL problem is thus transformed into determining the weight vector $\mathbf{w}$ instead of the reward function. When a weight vector is optimal, the trajectory taken by an agent following the induced policy accumulates the same feature sums as the expert demonstration. Using this approach, Ng and Russell formulated the problem as a linear program. Considering that we do not have the complete policy but only an observed sequence of states $(s_0, s_1, \cdots)$, we have
\begin{equation}
 V^\pi_i(s_0)=\phi_i(s_0)+\gamma \phi_i(s_1)+ \gamma^2 \phi_i(s_2)+\cdots
\end{equation}
$V^\pi(s_0)$ can thus be written as
\begin{equation}
 V^\pi(s_0)=w_1 V^\pi_1(s_0)+w_2 V^\pi_2(s_0)+\cdots +w_d V^\pi_d(s_0)
\end{equation}
To start the algorithm, we first estimate the values of the `expert' policy $\pi^*$ over $m$ Monte Carlo runs, as well as those of a base policy $\pi_1$, which is chosen randomly. We then run the algorithm for a large number of iterations. At each iteration $k$, we have the set of policies $\{\pi_1,\cdots,\pi_k\}$, and we want to find $\mathbf{w}$ so that the resulting reward satisfies
\begin{equation}
 V^{\pi^*}(s_0) \ge V^{\pi_i}(s_0), \: i=1,\cdots,k
\end{equation}
The optimization problem used is as follows:
\begin{align*}
 \max &\sum\limits_{i=1}^{k} p(V^{\pi^*}(s_0) - V^{\pi_i}(s_0)) \\
 s.t. &|w_i| \le 1, i=1, \cdots, d
\end{align*} 
where $p(x) = x$ if $x \ge 0$, and $p(x)=-2x$ otherwise.
At each iteration, the $\mathbf{w}$ obtained is used to generate a new policy $\pi_{k+1}$, which is added to the current set of policies.
\\A closer look at the optimization problem shows that it can be reduced to
\begin{align*}
 \max &\sum\limits_{i=1}^{d} c_i w_i \\
 s.t. \: &|w_i| \le 1, i=1, \cdots, d
\end{align*} 
which would result in $w_i = \pm 1$ depending on the sign of $c_i$. This algorithm requires Gaussian functions as features and performs poorly when the features are binary.
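This degeneracy is easy to see computationally: a linear objective over the box $|w_i| \le 1$ is always maximized at a vertex. A minimal sketch with invented coefficients:

```python
def maximize_linear_box(c):
    """Maximize sum_i c_i * w_i subject to |w_i| <= 1. A linear objective
    over the box [-1, 1]^d attains its maximum at a vertex, so each
    coordinate is pushed to +1 or -1 according to the sign of c_i."""
    return [1.0 if ci >= 0 else -1.0 for ci in c]

c = [0.3, -2.0, 0.0, 5.1]            # hypothetical coefficients
w = maximize_linear_box(c)
print(w)                             # -> [1.0, -1.0, 1.0, 1.0]
objective = sum(ci * wi for ci, wi in zip(c, w))
print(objective == sum(abs(ci) for ci in c))  # True: the optimum equals sum |c_i|
```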
\subsection{Inverse Reinforcement learning using quadratic programming (QP)}
%ng abbeel
Abbeel and Ng \cite{Abbeel:2004:ALV:1015330.1015430} worked further along the same strategy of matching features to determine the weights. The algorithm is similar to the previous one, with a change in the optimization problem, which now has a max-min formulation.
\begin{align}\label{eq:irl2a}
 \max\limits_{t,\mathbf{w}} \: &t\\
 s.t. \: &V^{\pi^*}(s_0) \ge V^{\pi_i}(s_0) + t, i=1, \cdots, k\\
 &|w_i| \le 1, i=1, \cdots, d
\end{align} 

where $t=\min\limits_{i \in \{1, \cdots, k\}} V^{\pi^*}(s_0) - V^{\pi_i}(s_0)$.\\
This problem is equivalent to finding the maximum-margin hyperplane separating two sets of points. To solve it with a generic quadratic programming solver, we can transform it into

\begin{align}\label{eq:irl2b}
 \min\limits_{\mathbf{w}} \: &\frac{1}{2} \|\mathbf{w}\|^2 \\
 s.t. \: &V^{\pi^*}(s_0) - V^{\pi_i}(s_0) \ge 1, i=1, \cdots, k
\end{align} 

The algorithm is repeated for a large number of iterations or until $t \le \epsilon$.
\begin{algorithm}
 \caption{IRL using quadratic programming}\label{al:irl2}
 \begin{algorithmic}[1]
  \State Randomly pick some policy $\pi_1$ and set $k=1$
  \Repeat
   \State Solve the optimization problem (equation \ref{eq:irl2b}) to get the optimal weight vector $\mathbf{w}$
   \State Generate the policy $\pi_{k+1}$ for $\mathbf{w}$
   \State $k:=k+1$
   \State $t=\min\limits_{i \in \{1, \cdots, k\}} V^{\pi^*}(s_0) - V^{\pi_i}(s_0)$
  \Until{$t \le \epsilon$}
 \end{algorithmic}
\end{algorithm}
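For intuition, with a single constraint the quadratic program can even be solved in closed form: minimizing $\frac{1}{2}\|\mathbf{w}\|^2$ subject to $\mathbf{w}\cdot\mathbf{d} \ge 1$, where $\mathbf{d}$ is the feature-expectation difference between $\pi^*$ and $\pi_1$, gives $\mathbf{w} = \mathbf{d}/\|\mathbf{d}\|^2$ by the KKT conditions. A sketch with made-up numbers:

```python
def max_margin_single(d):
    """Minimize 0.5 * ||w||^2 subject to w . d >= 1 for a single
    feature-difference vector d. The KKT conditions make the constraint
    active at the optimum, giving the closed form w = d / ||d||^2."""
    norm_sq = sum(di * di for di in d)
    return [di / norm_sq for di in d]

d = [3.0, 4.0]        # hypothetical feature-expectation difference
w = max_margin_single(d)
print(w)              # -> [0.12, 0.16]
margin = sum(wi * di for wi, di in zip(w, d))
print(round(margin, 10))  # -> 1.0, the constraint is tight
```

With several constraints, a generic QP solver takes the place of this closed form.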
\par The algorithm still has a few issues:
\begin{itemize}
 \setlength{\itemsep}{-2pt}
 \item If $t > \epsilon$ but not too large, the max-min formulation can prevent the algorithm from converging to a solution: it keeps trying to maximize $t$ even for a policy that comes close to the optimal one.
 \item In case of a batch setting, the algorithm is not clearly defined. The constraints can become inconsistent after a few iterations.
\end{itemize}
Maximum margin planning \cite{Ratliff06maximummargin} does provide a clear approach for the batch setting; however, even though it is similar in spirit to inverse reinforcement learning, its goal is to mimic the behaviour of the expert rather than to recover the underlying reward, which is why we do not compare against it here.

%\newpage
\subsection{Maximum Entropy Inverse Reinforcement Learning (ME)}
Ziebart et al. \cite{Ziebart_2008_6055,bziebart2008navigate} presented a probabilistic approach to IRL based on the principle of maximum entropy. Instead of dealing with policies, they consider a distribution over the entire class of possible behaviours. When the demonstrated behaviour is sub-optimal, many different distributions over paths match the feature counts (i.e. the sum of the feature vectors over a path), and any single one of them may favour some paths over others. This is avoided by the principle of maximum entropy, which chooses the distribution that exhibits no additional preferences beyond matching the features.\par
Consider a path $\zeta = (s_0, s_1, \cdots)$. For deterministic MDPs, the resulting distribution over paths is parameterized by the reward weights $\mathbf{w}$:
\begin{equation}
 P(\zeta_i | \mathbf{w}) = \frac{1}{Z(\mathbf{w})} e^{\mathbf{w}^\top F_{\zeta_i}}
\end{equation}
where $F_{\zeta} = \sum_{s_j \in \zeta} \boldsymbol{\phi}(s_j)$. The partition function, $Z(\mathbf{w})$, given the parameter weights, always converges for finite horizon problems and infinite horizon problems with discounted reward weights.
\par
For non-deterministic MDPs, the distribution of the paths is conditioned on the weights and the transition distribution, $\mathbf{P}$.
\begin{equation}
 P(\zeta | \mathbf{w},\mathbf{P}) \approx \frac{1}{Z(\mathbf{w},\mathbf{P})} e^{\mathbf{w}^\top F_{\zeta}} \prod_{s_{t+1}, a_t, s_t \in \zeta} P_{sa}(s_{t+1}|a_t, s_t)
\end{equation}
The probability of an action is weighted by the expected exponentiated rewards of all paths that begin with that action
\begin{equation}
 P(\mathrm{action} \: a | \mathbf{w},\mathbf{P}) \propto \sum_{\zeta : a \in \zeta_{t=0}} P(\zeta | \mathbf{w},\mathbf{P})
\end{equation}
The model is trained by finding the parameters that maximize the log probability of the observed behaviour.
\begin{equation}
 \mathbf{w}^* = \arg \max_{\mathbf{w}} L(\mathbf{w}) = \arg \max_{\mathbf{w}} \: \log \prod_{\mathrm{examples}} P(\zeta_i | \mathbf{w},\mathbf{P})
\end{equation} 
This function is convex for deterministic MDPs, and the optimal weights can be determined using gradient-based optimization techniques. The gradient is the difference between the empirical feature counts and the learner's expected feature counts, which can be expressed in terms of the expected action visitation frequencies $D_a$.
\begin{equation}\label{eq:grad}
 \nabla L(\mathbf{w}) = F_{\zeta^*} - \sum_\zeta P(\zeta | \mathbf{w},\mathbf{P}) F_\zeta = F_{\zeta^*} - \sum_a D_a \boldsymbol{\phi}(a)
\end{equation}
At the maximum, the feature expectations match, so that the learner performs equivalently to the expert regardless of the actual reward weights.
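The feature-matching behaviour can be reproduced on a toy example. The sketch below enumerates a handful of hypothetical paths (feature counts invented), uses plain gradient ascent in place of the exponentiated variant for simplicity, and recovers weights whose induced path distribution matches the demonstrated feature counts:

```python
import math

def path_distribution(w, F):
    """P(zeta | w) proportional to exp(w . F_zeta), over an enumerated set
    of paths, each summarized by its feature-count vector F_zeta."""
    scores = [math.exp(sum(wi * fi for wi, fi in zip(w, Fz))) for Fz in F]
    Z = sum(scores)                      # partition function Z(w)
    return [s / Z for s in scores]

def gradient(w, F, F_demo):
    """Gradient of the log-likelihood: F_demo - E_w[F_zeta]."""
    probs = path_distribution(w, F)
    expected = [sum(p * Fz[i] for p, Fz in zip(probs, F)) for i in range(len(w))]
    return [fd - fe for fd, fe in zip(F_demo, expected)]

# Three hypothetical paths with two feature counts each (numbers invented,
# e.g. path length and a count of near-obstacle cells).
F = [[3.0, 0.0], [2.0, 1.0], [1.0, 3.0]]
F_demo = [2.0, 1.2]              # average feature counts of the demonstrations

w = [0.0, 0.0]
for _ in range(20000):           # plain gradient ascent on the log-likelihood
    g = gradient(w, F, F_demo)
    w = [wi + 0.2 * gi for wi, gi in zip(w, g)]

probs = path_distribution(w, F)
expected = [sum(p * Fz[i] for p, Fz in zip(probs, F)) for i in range(2)]
print([round(e, 3) for e in expected])  # -> [2.0, 1.2]: matches F_demo
```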
\par
The expected state frequencies can be evaluated by a technique similar to the forward-backward algorithm for conditional random fields (algorithm \ref{mealgo}).
\begin{algorithm}
 \caption{Expected action frequency calculation}\label{mealgo}
 \begin{algorithmic}[1]
   \Statex \textbf{Backward pass}
   \State Set $Z_{s_i}=1$ for valid goal states, 0 otherwise
   \State Recursively compute for $N$ iterations
   \Statex $Z_{a_{i,j}}=\sum\limits_k P(s_k | s_i, a_{i,j}) e^{\mathbf{w}^\top \boldsymbol{\phi}(s_i,a_{i,j})} Z_{s_k}$
   \Statex $Z_{s_i}=\sum\limits_{a_{i,j}} Z_{a_{i,j}}$
   \Statex \textbf{Forward pass}
   \State Set $Z_{s_i}'=1$ for initial state, 0 otherwise
   \State Recursively compute for $N$ iterations
   \Statex $Z_{a_{i,j}}' = Z_{s_i}' e^{\mathbf{w}^\top \boldsymbol{\phi}(s_i,a_{i,j})}$
   \Statex $Z_{s_i}' = \sum\limits_{\mathrm{actions}\: a_{j,i} \:\mathrm{to}\: s_i} Z_{a_{j,i}}'$
   \Statex \textbf{Summing frequencies}
   \State $D_{a_{i,j}}=\frac{Z_{s_i}' e^{\mathbf{w}^\top \boldsymbol{\phi}(s_i,a_{i,j})} Z_{s_j}}{Z_{s_{\mathrm{initial}}}}$
 \end{algorithmic}
\end{algorithm}
\par An exponentiated gradient-based method is used for the optimization (algorithm \ref{mealgogd}).
\begin{algorithm}
 \caption{Learn reward weights from data}\label{mealgogd}
 \begin{algorithmic}[1]
  \Statex \textbf{Stochastic Exponentiated Gradient Ascent}
  \State Initialize random $\mathbf{w} \in \Re^d, \alpha > 0$
  \For {$t = 1$ to $T$}
  \State Compute $D_a$ for all actions $a$ using Algorithm \ref{mealgo}  
  \State Compute gradient $\nabla L(\mathbf{w})$ from $D_a$ using equation \ref{eq:grad}
  \State $\mathbf{w}=\mathbf{w} e^{\frac{\alpha}{t} \nabla L(\mathbf{w})}$
  \EndFor
 \end{algorithmic}
\end{algorithm}
This approach, too, has some major disadvantages. It requires the sign of each feature weight to be known beforehand. The authors are somewhat vague about the algorithm, and it requires extra steps to normalize the state-action frequencies, which we discuss in detail later. The algorithm also requires the feature counts (i.e. the sums of features over states/actions) to be of roughly the same magnitude; if they are not, the gradient ascent method does not converge to a solution.
%\newpage
\subsection{Bayesian Inverse Reinforcement Learning (SBIRL)}
Ramachandran and Amir \cite{Ramachandran07bayesianinverse} provided another method for IRL which, in theory, recovers a probability distribution over the reward. They consider the reward function $\textbf{R}$ to be drawn from a prior distribution $P_R$. A series of observations of the expert is given by $O = \{(s_1,a_1), (s_2,a_2), \cdots (s_k,a_k)\}$, where the agent takes action $a_i$ in state $s_i$. The agent is assumed to be attempting to maximize the total accumulated reward according to $\textbf{R}$. As before, the policy is assumed to be stationary, which yields the following independence assumption:
\begin{equation*}
P(O|\textbf{R})=P((s_1,a_1)|\textbf{R}) P((s_2,a_2)|\textbf{R}) \cdots P((s_k,a_k)|\textbf{R})
\end{equation*}
Since maximizing the accumulated reward is equivalent to finding the optimal policy for the reward function, the probability of choosing an action $a_i$ at state $s_i$ is higher if the action has a larger $Q^\pi(s,a)$. This is modeled by an exponential distribution for the likelihood of $(s_i,a_i)$ with $Q^\pi$ as the potential function
\begin{equation*}
P((s_i,a_i)|\textbf{R})=\frac{1}{Z_i}e^{\alpha Q^\pi(s_i,a_i,\textbf{R})}
\end{equation*}
The likelihood of the entire observation is 
\begin{equation*}
P(O|\textbf{R})=\frac{1}{Z}e^{\alpha E(O,\textbf{R})}
\end{equation*}
where $E(O,\textbf{R})=\sum_{i}Q^\pi(s_i,a_i,\textbf{R})$ and $Z$ is the normalizing constant. The posterior probability of the reward function can be computed using Bayes rule
\begin{align*}
P(\textbf{R}|O)=&\frac{P(O|\textbf{R}) P_R(\textbf{R})}{P(O)} \\
&=\frac{1}{Z'} e^{\alpha E(O,\textbf{R})} P_R(\textbf{R})
\end{align*}
Computing the normalizing factor $Z'$ is hard but the authors use just the ratio of the densities in their sampling algorithm.
\par
Since reward learning is an estimation task, the authors discussed two loss functions used for estimation problems, the linear and squared error loss functions
\begin{align*}
L_{linear}(\textbf{R},\textbf{\^{R}}) =& \| \textbf{R} - \textbf{\^{R}} \|_1 \\
L_{SE}(\textbf{R},\textbf{\^{R}}) =& \| \textbf{R} - \textbf{\^{R}} \|_2^2
\end{align*}
The authors show that when $\textbf{R}$ is drawn from the posterior distribution, the expected value of $L_{SE}(\textbf{R},\textbf{\^{R}})$ is minimized by setting $\textbf{\^{R}}$ to the mean of the posterior.
\par
Since the posterior distribution is complex and analytical derivation of the mean is hard, the authors generated samples from the distribution and returned the sample mean as the estimate of the true mean. They used a modified version of the MCMC GridWalk algorithm, which they call PolicyWalk, focusing on evaluating the Q values of a policy efficiently (algorithm \ref{birlo}).
\begin{algorithm}
 \caption{Bayesian IRL}\label{birlo}
 \begin{algorithmic}[1]
  \Procedure{PolicyWalk}{Distribution $P$, MDP $M$, Step size $\delta$}
   \State Pick a random reward vector $\textbf{R} \in \Re^{|S|}/\delta$
   \State $\pi:= \mathtt{PolicyIteration}(M,\textbf{R})$
   \Loop 
    \State Pick a reward vector $\mathbf{\tilde{R}}$ uniformly at random from the neighbours of $\mathbf{R}$ in $\Re^{|S|}/\delta$
    \State Compute $Q^\pi (s,a,\mathbf{\tilde{R}})$ for all $s \in S, a \in A$
    \If{$\exists (s,a) \in (S,A), Q^\pi (s,\pi(s),\mathbf{\tilde{R}}) < Q^\pi (s,a,\mathbf{\tilde{R}})$} 
     \State $\tilde{\pi}:= \mathtt{PolicyIteration}(M,\mathbf{\tilde{R}},\pi)$
     \State Set $\mathbf{R}:=\mathbf{\tilde{R}}$ and $\pi :=\tilde{\pi}$ with probability $\min\{1,\frac{P(\mathbf{\tilde{R}},\tilde{\pi})}{P(\mathbf{R},\pi)}\}$ 
    \Else
     \State Set $\mathbf{R}:=\mathbf{\tilde{R}}$ with probability $\min\{1,\frac{P(\mathbf{\tilde{R}},\pi)}{P(\mathbf{R},\pi)}\}$ 
    \EndIf
   \EndLoop \\
   \Return $\textbf{R}$
  \EndProcedure
 \end{algorithmic}
\end{algorithm}
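The heart of algorithm \ref{birlo} is a Metropolis random walk that only ever needs the \emph{ratio} of posterior densities, so the normalizing factor $Z'$ is never computed. A stripped-down Python sketch (the policy re-computation of the inner steps is omitted, and \texttt{log\_posterior} is a hypothetical stand-in for $\log P(\mathbf{R},\pi|O)$):

```python
import numpy as np

# Metropolis random walk on the delta-grid of reward vectors; only the
# ratio of posterior densities appears in the acceptance test.
def policy_walk(log_posterior, n_states, delta=0.1, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    R = rng.integers(-10, 11, size=n_states) * delta    # point on the grid
    for _ in range(n_iter):
        R_new = R.copy()
        i = rng.integers(n_states)
        R_new[i] += delta * rng.choice([-1.0, 1.0])     # uniform grid neighbour
        # accept with probability min(1, P(R_new) / P(R))
        if np.log(rng.random()) < log_posterior(R_new) - log_posterior(R):
            R = R_new
    return R
```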
\par
Note that this algorithm would increase the state space even further in the case of a dynamic environment. %TODO explain?
%\newpage
\subsection{Other Approaches}
A number of algorithms, like Bayesian IRL, require the complete expert policy instead of just the observed trajectory of the expert.
\subsubsection{Other Bayesian approaches}
Choi and Kim \cite{NIPS2011_4479} proposed using the maximum-a-posteriori (MAP) estimate instead of the posterior mean used by Bayesian IRL, since there are infinitely many reward functions that induce policies inconsistent with the way the agent behaves. These can pull the posterior mean towards a reward function whose optimal policy is inconsistent with the training data. The authors proposed a gradient method for MAP inference. As with Bayesian IRL, using the algorithm in dynamic environments would lead to a massive increase in state space, and an extension of the algorithm to a linear formulation of the reward function would be required.
\par 
Lopes et al. \cite{Lopes09} tried to improve Bayesian IRL using active sampling. Instead of random sampling data for learning from a predefined distribution, active learning selects potentially informative samples. The authors define the set $R_{sa}(p)$ as the set of reward functions $r$ such that $P(\pi_r(s)=a)=p$. For each pair $(s,a) \in S \times A$, the distribution $P(r|O)$ induces a distribution over the possible values $p$ for $P(\pi(s)=a)$. Using this distribution, the agent can query the demonstrator about the correct action in states where the uncertainty on the policy is large.
\subsubsection{IRL as supervised learning}
Klein et al. \cite{NIPS2012_4551} used structured classification for inverse reinforcement learning. In general, a multi-class classifier produces a decision rule which is equivalent to a policy if we consider the states as the input and the actions as the labels. A linearly parameterized score function $s_{\boldsymbol{\theta}} : \Re^{S \times A} \to \Re$ with parameter vector $\boldsymbol{\theta} \in \Re^d$ is defined as $s_{\boldsymbol{\theta}}(s,a)=\boldsymbol{\theta}^\top\psi(s,a)$ where $\psi(s,a)=E_{\pi_E}[\boldsymbol{\phi}(s,a)]$. The classification rule $\pi_c$ is defined as $\pi_c(s) \in \arg \max_a s_{\boldsymbol{\theta}}(s,a)$. Using a training set of state-action pairs sampled from the expert policy $\pi^*$, the classification algorithm computes a parameter vector $\boldsymbol{\theta}_c$. The reward function is then defined as $R^c(s,a)=\boldsymbol{\theta}_c^\top\boldsymbol{\phi}(s,a)$.
\par 
Klein et al. \cite{Klein13} improved on their previous work and proposed a cascaded supervised learning approach. They again used a multi-class classifier which learns a score function $q \in \Re^{S \times A}$ that rates the association of a given action $a \in A$ with a certain input $s \in S$, but in this case it need not be linearly parameterized. The classification rule $\pi_c$ selects one of the actions that achieves the highest score, i.e. $\pi_c(s) \in \arg \max_a q(s,a)$. Given a dataset $D_c = \{(s_i,a_i=\pi_*(s_i))_i\}$ of states and actions chosen by the expert on those states, the classifier is trained. Using the score function $q$ of the classifier, the Bellman equation (equation \ref{eq:bl2}) is inverted to give the reward function
\begin{equation}
R^C(s,a)=q(s,a)-\gamma \sum\limits_{s'}P(s'|s,a)q(s',\pi_c(s'))
\end{equation}
Thus the reward function $R^C$ can be approximated using the information gathered by interacting with the system. Unlike most of the approaches discussed earlier, this approach does not require us to solve the MDP (the reinforcement learning problem) repeatedly.
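For tabular problems the Bellman inversion above is a single vectorized computation. A Python sketch, assuming a classifier score table \texttt{q} of shape $(|S|,|A|)$ and a transition tensor \texttt{P} of shape $(|S|,|A|,|S'|)$ (these names are our own, not Klein et al.'s):

```python
import numpy as np

# Recover the reward R^C by inverting the Bellman equation; no repeated
# solving of the MDP is needed.
def reward_from_score(q, P, gamma=0.9):
    pi_c = np.argmax(q, axis=1)               # classification rule pi_c
    v = q[np.arange(q.shape[0]), pi_c]        # q(s', pi_c(s'))
    return q - gamma * (P @ v)                # R^C(s, a)
```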
%\newpage
\subsection{Summary}
Most of the algorithms that we just discussed solve an optimization problem. They try to minimize the difference in the sum of features between the trajectory of the expert and the trajectory dictated by an estimate of the weights. The first two iterative algorithms require us to solve the reinforcement learning problem using the weights of the current iteration. The maximum entropy method uses state or action frequency estimation instead of generating a policy. These algorithms can generate the reward function using features and require the trajectory traversed by the expert as input. The other algorithms either do not use features, specifying the reward explicitly for each state, or require the complete policy instead of the path traversed by the expert.
\begin{center}
  \begin{tabularx}{\textwidth}{ | p{2cm} | p{5.69cm} | p{5.69cm} |}
    \hline
    Algorithm & Advantages & Disadvantages \\ \hline
    LP & Low computational complexity and runtime & Requires complex features \\
       & & No batch definition \\ \hline
    QP & Low computational complexity and runtime & No batch definition \\    
	   & & Does not always converge to a solution \\ \hline 
    ME & Multi-objective optimization approach & Restrictions on features \\        
       & Does not require the solution of underlying MDP & Gradient based approach may not be globally optimal \\ \hline
    Bayesian approaches & Efficient computation & Require complete policy instead of trajectory\\        
       & & Limited to small state spaces \\ \hline
    Supervised learning approaches & Do not require the solution of the underlying MDP & Require complete policy instead of trajectory \\ \hline
  \end{tabularx}
\end{center}
%TODO table
%prob def
%\subsection{Priors} Move to implementation
%The authors discuss different priors (uniform, Gaussian, Beta).
\chapter{Proposed Learning Algorithms}\label{ch:na}
Each algorithm discussed earlier had at least one major flaw. We try to generalize Bayesian IRL and improve the maximum entropy IRL approach. We also propose another algorithm inspired by the approaches of Ng, Russell and Abbeel (LP, QP).
\section{Inverse Reinforcement Learning using Genetic Algorithms (GA)}
We see that all algorithms formulate the IRL problem as an optimization problem. The first two algorithms (LP, QP) that we have discussed reduced the objective function to a single dimension. The maximum entropy method does have the multi-objective formulation but in this case the gradient ascent method provides a solution that may not be globally optimal. We now formulate the problem as a multi-objective optimization problem. We continue to model it on feature matching.
\begin{align*}
\min_\mathbf{w} | F_{\zeta^*} - F_{\zeta_\mathbf{w}} |
\end{align*}
where $\zeta_\mathbf{w}$ is the path taken by the agent using the policy generated for the reward weights $\mathbf{w}$.
\par
A genetic algorithm (GA) is a randomized search algorithm which can generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover \cite{opac-b1081691}. Algorithm \ref{gag} describes a general genetic algorithm to search for the solution to an optimization problem.
\begin{algorithm}
\caption{Generic Genetic algorithm}\label{gag}
 \begin{algorithmic}[1]
  \State Define the fitness function based on the objective function of the optimization problem
  \State Initialize initial population of size $N$
  \Loop
   \State Evaluate the fitness of the population
   \State Assign a probability of selection based on the fitness to each individual
   \State Create $N/2$ pairs to undergo crossover and mutation to create `children'
   \State Select the next generation of individuals from the current population and the `children'
  \EndLoop
 \end{algorithmic}
\end{algorithm}
\par Generally, the population for the next generation is selected using a randomized procedure with the probability of selection proportional to the fitness of an individual. In elitist algorithms, this step is different: the individuals are selected based on a ranking of their fitness, so the fittest individuals always make it to the next generation.
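The steps of algorithm \ref{gag} can be sketched on real-valued individuals as follows (a toy Python illustration, not our Scilab implementation; it uses fitness-proportional selection, a convex-combination crossover and Gaussian mutation):

```python
import numpy as np

# Toy single-objective GA: selection probabilities proportional to shifted
# fitness, crossover by convex combination, Gaussian mutation.
def genetic_algorithm(fitness, d, N=40, generations=30, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(N, d))
    for _ in range(generations):
        f = np.array([fitness(x) for x in pop])
        p = f - f.min() + 1e-9                 # shift so probabilities are positive
        p /= p.sum()
        parents = pop[rng.choice(N, size=N, p=p)]
        r = rng.random((N // 2, 1))
        a, b = parents[: N // 2], parents[N // 2:]
        children = np.vstack([r * a + (1 - r) * b,      # each pair produces
                              (1 - r) * a + r * b])     # two children
        children += rng.normal(0.0, sigma, children.shape)  # mutation
        pop = children
    f = np.array([fitness(x) for x in pop])
    return pop[np.argmax(f)]
```

On a simple quadratic objective the population concentrates near the maximizer after a few dozen generations.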
\par 
In multi-objective optimization, multiple objective functions need to be optimized at the same time and in general some of these objectives conflict, i.e. not all objectives can attain their optimal value for the same input. This brings in the concept of pareto optimality or non-dominance. Consider a function $f : \Re^a \to \Re^b$. Let $Y = \{y : y=f(x), x \in \Re^a\}$. A solution $x \in \Re^a$ is said to be pareto optimal (considering minimization) if $\{y' \in Y: y' \prec y \}=\emptyset$ where $y=f(x)$. 
A multi-objective GA usually generates a set of solutions, some of which are pareto optimal. A single optimal solution can then be selected based on the performance on a cross-validation data set.
\par 
MOGA \cite{Fonseca93geneticalgorithms} and NSGA-II \cite{Deb00afast} are two popular multi-objective genetic algorithms. MOGA (algorithm \ref{moga}), uses a two step approach to fitness evaluation.
\begin{algorithm}
\caption{MOGA}\label{moga}
 \begin{algorithmic}[1]
  \State Initialize initial population of size $N$
  \Loop
   \State Evaluate the fitness of the population
   \State Evaluate the rank of the individual
   \State Evaluate the fitness of the population based on the rank
   \State Assign a probability of selection based on the fitness to each individual
   \State Create $N/2$ pairs to undergo crossover and mutation to create `children'
   \State Select the population from the current population and the `children'
  \EndLoop
 \end{algorithmic}
\end{algorithm}
\par The rank of the individual is evaluated based on the concept of non-dominance. Consider the fitness function $f : \Re^a \to \Re^b$. Let $Y = \{y : y=f(x), \forall x \in \Re^a\}$. The rank of an individual $x_i$ is defined as the number of individuals it is dominated by, i.e. $|D_i|$ where $D_i = \{x \in X: f(x) \prec f(x_i), x \ne x_i \}$, where $X$ is the population set. An individual belonging to the pareto front has a lower rank and a higher chance of selection.
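The rank computation of MOGA is a pairwise dominance count; as a Python sketch over an array \texttt{F} whose row $i$ is the objective vector $f(x_i)$:

```python
import numpy as np

# MOGA rank for minimization: rank(x_i) = |D_i|, the number of individuals
# whose objective vectors dominate f(x_i).
def moga_rank(F):
    leq = np.all(F[:, None, :] <= F[None, :, :], axis=2)
    lt = np.any(F[:, None, :] < F[None, :, :], axis=2)
    dominates = leq & lt              # dominates[i, j]: x_i dominates x_j
    return dominates.sum(axis=0)      # |D_j| for each individual
```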
\par 
NSGA-II on the other hand uses a similar ranking but also incorporates the distance between individuals in the function space. At each generation the offspring population is created and the two populations are combined into a new population of size $2N$. This population is classified into non-domination classes (based on the rank). The next population is then filled with points from the different non-domination fronts, one front at a time, starting with the first non-domination front, then the second, and so on. Since the combined population has size $2N$, not all fronts can be accommodated in the $N$ slots available for the new population; all fronts that cannot be accommodated are discarded. When the last admissible class is considered, it may contain more points than the remaining slots in the new population. In this case, the class is ranked again by the crowding distance, which is the sum of the distances in the function space to the closest individual in each direction; the higher the crowding distance, the better the individual is considered, and the remaining slots are filled according to this ranking. Note that the algorithm is elitist.
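The crowding distance within one non-domination class can be sketched as follows (Python, per-objective distances normalized by the objective's range, boundary points forced to infinity so they are always preferred):

```python
import numpy as np

# NSGA-II crowding distance over one class; F[i] is the objective vector
# of individual i.
def crowding_distance(F):
    n, m = F.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        span = F[order[-1], j] - F[order[0], j] or 1.0   # avoid divide-by-zero
        dist[order[0]] = dist[order[-1]] = np.inf        # boundary points
        for k in range(1, n - 1):
            dist[order[k]] += (F[order[k + 1], j] - F[order[k - 1], j]) / span
    return dist
```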
%\newpage
\section{Normalized Maximum Entropy IRL (MEN)}
We saw that maximum entropy IRL requires the sum of the different features to be of the same order. We can address this flaw using scaling or normalization. It is possible to generate random policies for a set of MDPs and evaluate the average sum of features $\phi_{i \: avg}$. We can then divide the respective component of the gradient by this value.
\begin{equation}\label{eq:nmme}
\nabla \tilde{L}(\mathbf{w})_i=\frac{\nabla L(\mathbf{w})_i}{\phi_{i \: avg}} \quad \forall i \in \{1, 2, 3\}
\end{equation}
%\newpage
\section{Bayesian Inverse Reinforcement Learning (BIRL, BIRLI)}
The standard Bayesian IRL approach was limited to small state spaces and required the reward to be defined explicitly for each state. Since we are using a linear formulation for the reward function, $R$, we need to modify algorithm \ref{birlo} to sample the distribution over $\mathbf{w}$ instead of $\mathbf{R}$. This uses the modification from \cite{6225241} removing the condition from step 7 of algorithm \ref{birlo}. We no longer have a grid since the weight vectors need to be normalized for comparison of two weight vectors. To compute a neighbour of a weight vector, we modify just one component of the vector by step size $\delta$ followed by normalization (algorithm \ref{birlm}).
\begin{algorithm}
 \caption{Modified Bayesian IRL}\label{birlm}
 \begin{algorithmic}[1]
  \Procedure{RandomWalk}{MDP $M$, Step size $\delta$}
   \State Pick a random weight vector $\mathbf{w} \in \Re^{d}$
   \State $\pi:= \mathtt{PolicyIteration}(M,\mathbf{w})$
   \State Compute $Q^\pi (s,a,\mathbf{w})$ for all $s \in S, a \in A$
   \Loop 
    \State Pick a weight vector $\mathbf{\tilde{w}}$ uniformly at random from the neighbours of $\mathbf{w}$
    \State Compute $Q^\pi (s,a,\mathbf{\tilde{w}})$ for all $s \in S, a \in A$
    \State $\tilde{\pi}:= \mathtt{PolicyIteration}(M,\mathbf{\tilde{w}},\pi)$
    \State Set $\mathbf{w}:=\mathbf{\tilde{w}}$ and $\pi :=\tilde{\pi}$ with probability $\min\{1,\frac{P(\mathbf{\tilde{w}}|O)}{P(\mathbf{w}|O)}\}$ 
   \EndLoop \\
   \Return $\mathbf{w}$
  \EndProcedure
 \end{algorithmic}
\end{algorithm}
\par In this case, the normalization constant $Z_i$ is also computed.
\begin{equation*}
P((s_i,a_i)|\mathbf{w})=\frac{e^{\alpha Q^\pi(s_i,a_i,\mathbf{w})}}{\sum\limits_{b \in A} e^{\alpha Q^\pi(s_i,b,\mathbf{w})}}
\end{equation*}
and we still assume the independence condition
\begin{equation}\label{eq:birls}
P(O|\mathbf{w})=P((s_1,a_1)|\mathbf{w}) P((s_2,a_2)|\mathbf{w}) \cdots P((s_k,a_k)|\mathbf{w})
\end{equation}
We refer to this algorithm as \textbf{BIRL}. However, since we only require the ratio of probabilities in step 9 of algorithm \ref{birlm}, there is another way we could define $P(O|\mathbf{w})$ 
\begin{equation}\label{eq:birli}
P(O|\mathbf{w}) \propto e^{-|\mathbf{w} ^\top (F_{O_\mathbf{w}} - F_O)|}
\end{equation}
where $O_\mathbf{w}$ is the sequence of state, action pairs for the policy generated using the weights $\mathbf{w}$. This probability measure depends only on the observed path and does not require the complete policy. We refer to the algorithm as \textbf{BIRLI} when using this probability measure.
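The two likelihoods can be written side by side in a few lines of Python (an illustration with our own variable names: \texttt{Q} is an $(|S|,|A|)$ table, \texttt{obs} a list of $(s,a)$ pairs, and \texttt{F\_w}, \texttt{F\_obs} are feature-count vectors):

```python
import numpy as np

# BIRL: per-pair softmax over Q values, with the normalizer Z_i computed.
def birl_log_likelihood(Q, obs, alpha=1.0):
    total = 0.0
    for s, a in obs:
        log_z = np.log(np.sum(np.exp(alpha * Q[s])))   # log Z_i
        total += alpha * Q[s, a] - log_z
    return total

# BIRLI (up to an additive constant): maximal, i.e. zero, exactly when the
# induced path matches the observed feature counts.
def birli_log_likelihood(w, F_w, F_obs):
    return -abs(np.dot(w, np.asarray(F_w) - np.asarray(F_obs)))
```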
\par 
Since the random walk can get stuck at a local optimum, these algorithms need to be executed several times, and the best result can be selected using cross validation.
\section{Summary}
Our genetic algorithm approach to IRL provides a globally optimal solution without reducing the optimization problem to a single dimension. It can be easily extended to allow training in a batch setting. The Bayesian IRL approach has been extended to work with a reward function defined in terms of features. We further improve it so that it works with just the observed trajectory instead of the complete expert policy. We also improved Maximum Entropy IRL so that it converges to a solution even when the sum of features over a trajectory may not be of the same magnitude.

\chapter{Experiment setup and results}\label{ch:es}
We proposed new approaches for IRL and now we wish to compare them. There are different platforms available for evaluating the performance of the algorithms, such as the grid world or the mountain-car problem from \cite{Ng00algorithmsfor}. But these test problems are limited to small state spaces and consider a static environment. The car driving simulator from \cite{Abbeel:2004:ALV:1015330.1015430} uses a larger state space but the authors treat the problem as one with a static environment. We consider a problem with a dynamic environment which is not limited to small state spaces and for which it is easy to collect observations.
\par 
We restrict the agent's motion to one dimension. Including the temporal dimension results in a two dimensional problem. It can be pictured as a robot moving across a road crossing starting from one side of the road. All cars move at the same constant speed. There are some cars that the robot needs to collide with (say refuelling cars) but it shouldn't collide with the other ones. The robot can either move left, right or stay in its position. 
%attach figure
\begin{figure}[h!]
  
  \centering
    \includegraphics[scale=.5]{img1f.png}
    \caption{Robot in green, red objects indicating cars to be avoided, blue indicating objects to be collected}
  \label{gm}
\end{figure}
The state of the robot at any given time is defined by its location $x$ as well as the time step $t$. The reward function is assumed to be a linear combination of features. The robot can see only a finite number of time steps into the future, which we call the time horizon, denoted by $H$ \cite{Henry10learningto}. This helps us reduce the state space of the MDPs. However, the expert policy for the MDP may not be globally optimal due to the limited time horizon.
%equation
%relmoved for now We re-evaluate the MDPs (i.e. re-evaluate a policy) at each time step and aggregate it to get the complete policy for the complete MDP. 
We wish to determine the weight for each feature. We use three binary features for the problem.
\begin{itemize} %reduce spacing
 \setlength{\itemsep}{-2pt}
 \item Collision with red object: 1 for a collision, 0 otherwise
 \item Movement: 1 for moving in either direction, 0 otherwise
 \item Collision with blue object: 1 for a collision, 0 otherwise 
\end{itemize}
\par
Our training set and test set each consist of trajectories for 11 MDPs. The number of states differs between MDPs: the number of locations $|X|$ is randomly chosen between 10 and 30, and the number of time steps is $3\times |X|$. The state space can be considered a $|X|\times3|X|$ matrix where each cell (or state) is either empty or contains a red or a blue object. These matrices are generated randomly based on adjustable probabilities for the red and blue objects. Note that once generated, the same set of matrices is used for each experiment. The time horizon $H$ is set to 5.%TODO Explain/table
\par 
\begin{figure}[h!]
  
  \centering
    \includegraphics[scale=.5]{img2f.png}
    \caption{The arrows indicate the invalid actions at the edges}
  \label{gmb}
\end{figure}
We consider two sets of transition probabilities, first a deterministic case where an action always succeeds and then a non-deterministic case where an action succeeds with a probability of $0.8$ and the other two actions have an equal probability of $0.1$. For the edge cases, the probability of an action resulting in an invalid action (as shown in figure \ref{gmb}) is added to the probability of the action succeeding.
\par
We consider three metrics to evaluate our results:

\begin{itemize}
 \setlength{\itemsep}{-2pt}
 \item Path match: the percentage of states $s \in S$ common to the path taken by the expert and the path taken by the agent. A higher percentage indicates a closer match to the observed path.
 \item Policy match for the observed path: the percentage match between the actions $a \in A$ taken by the expert and by the agent along the path taken by the expert. A higher percentage indicates a closer match to the policy.
 \item Average difference in features per time step: computed by summing the feature counts for the paths taken by the expert and the agent, taking the absolute value of the difference and dividing it by $|X|$. A lower value indicates a better match in the sum of features.
\end{itemize}
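The three metrics are straightforward to compute; a Python sketch under simplifying assumptions of our own (paths are lists of $(x,t)$ states, actions are compared position-wise along the expert path, feature counts are per-path sums):

```python
import numpy as np

# Percentage of states shared between the two paths.
def path_match(expert_path, agent_path):
    common = len(set(expert_path) & set(agent_path))
    return 100.0 * common / len(expert_path)

# Percentage of actions on the expert path on which expert and agent agree.
def policy_match(expert_actions, agent_actions):
    agree = sum(a == b for a, b in zip(expert_actions, agent_actions))
    return 100.0 * agree / len(expert_actions)

# Absolute difference of the summed feature counts, per time step.
def avg_feature_difference(F_expert, F_agent, n_locations):
    return np.abs(np.asarray(F_expert) - np.asarray(F_agent)) / n_locations
```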

\par
We perform two sets of experiments. In the first case, we choose a weight vector based on the way we want the agent to behave. We then generate a policy using value iteration with the reward function defined using this weight vector and use it as the policy of the expert. This set of experiments tests whether the algorithms discussed are able to recover the pre-defined weights. Since IRL deals with recovering the reward function in cases where it is difficult to define it accurately, we focus on tasks performed by humans for our second set of experiments. For the same setup, we let human subjects control the robot and we collected the trajectories taken by the agent. In this case, the algorithms need to find a reward function such that it can reproduce the original trajectories or act similarly.
\par Since we have a dynamic environment, a policy can't be directly computed using value iteration but requires a concatenation of policies computed for static environments. At each time step, a new MDP with the currently visible states (as defined by the time horizon) is defined and a policy is computed. The actions at the current time step for these states constitute the complete policy for our environment. This policy may not be globally optimal for the MDP as a whole, but given the time horizon, better behaviour cannot be achieved. 
\par 
For our second set of experiments, the features used to model the problem may be insufficient. The expert policy might not even be locally optimal for our model due to the limitations of human ability.
%\newpage
\section{Implementation details}
Our simulator uses keyboard input for the navigation of the robot. It can also be used to display trajectories recorded previously or generated from a policy.
\par 
The problem is modeled as an MDP $(S, A, \{P_{sa}\}, \gamma)$ with a missing reward function
\begin{itemize} %reduce spacing.1
 \setlength{\itemsep}{-2pt}
 \item $S$ is the finite set of $|X| \times |T|$ states where $X$ denotes the set of locations the agent can occupy and $T$ denotes the set of time steps. A state $s \in S$ is of the form $(x,t), x \in X, t \in T$.
 \item $A$ = $\{a_1, a_2, a_3\}$ where $a_1$ denotes moving left, $a_2$ denotes staying in the current location, $a_3$ denotes moving right. Note that action $a_1$ and $a_3$ can be invalid at the edges of the grid as shown in figure \ref{gmb}.
 \item The state transition probability, $P(s,a,s')$ varies based on whether we assume actions to be deterministic or not. When taking action $a_i$ in state $s: (x,t)$ and reaching state $s': (x',t+1)$, $x'=x+k-2, k \in \{1, 2, 3\}$.\\
 In the deterministic case, $P(s,a_i,s')=1$ when $k=i$, $0$ otherwise.\\
 In the non-deterministic case, $P(s,a_i,s')=0.8$ when $k=i$, $P(s,a_i,s')=0.1$ otherwise, except for the edge cases where $P(s,a_i,s')=0.9$ when $k=i$ for valid actions, and $P(s,a_i,s')=0.1$ otherwise.
 \item $\gamma \in [0,1)$ is the discount factor, we set it as $0.9$
\end{itemize}
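The transition model of the bullet list above can be sketched as follows (Python illustration; the treatment of an \emph{invalid intended} action is our own assumption here, folded into staying in place):

```python
# Returns {x_next: probability} for taking action a in location x.
# Actions: 1 = left, 2 = stay, 3 = right.
def transition_probs(x, a, n_locations, deterministic=False):
    moves = {1: -1, 2: 0, 3: 1}
    base = {k: (1.0 if k == a else 0.0) for k in moves} if deterministic \
        else {k: (0.8 if k == a else 0.1) for k in moves}
    probs = {}
    for k, p in base.items():
        if p == 0.0:
            continue
        nx = x + moves[k]
        if nx < 0 or nx >= n_locations:    # invalid move at an edge: fold its
            nx = min(max(x + moves[a], 0), n_locations - 1)  # mass into the
        probs[nx] = probs.get(nx, 0.0) + p                   # intended action
    return probs
```

At an edge the intended action then succeeds with probability $0.9$, as in figure \ref{gmb}.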
The reward function $R$ is a linear combination of three features
\begin{equation*}
R=w_1\phi_1 + w_2\phi_2 + w_3\phi_3
\end{equation*}
To generate a policy, we use value iteration (algorithm \ref{vi}).\\
The policy is re-generated at each time step $t$ and the final policy $\pi_f$ is computed as
\begin{equation*}
\pi_f(x,t)=\pi_t(x,t) \quad \forall x \in X, t\in T
\end{equation*}
where $\pi_t$ denotes the policy generated at time step $t$ for the MDP with time steps limited to $t$ and $t+H$.
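The concatenation of per-step policies can be sketched as follows (Python illustration; \texttt{solve\_window} is a hypothetical stand-in for value iteration on the reduced MDP, returning a dict \texttt{\{(x, t): action\}}):

```python
# Build pi_f by re-solving the MDP on the window [t, t+H] at every time
# step and keeping only the current step's actions.
def concatenated_policy(solve_window, n_locations, n_steps, H=5):
    pi_f = {}
    for t in range(n_steps):
        pi_t = solve_window(t, min(t + H, n_steps - 1))
        for x in range(n_locations):
            pi_f[(x, t)] = pi_t[(x, t)]    # pi_f(x, t) = pi_t(x, t)
    return pi_f
```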
\par 
We implemented the following algorithms in Scilab:
\subsection{QP: IRL using quadratic programming}
We first implemented the algorithm from \cite{Abbeel:2004:ALV:1015330.1015430} where we need to optimize the problem (equation \ref{eq:irl2a}). Since we have multiple MDPs for training, we need to modify the optimization problem
\begin{align*} %TODO notation
 \min\limits_{t,\mathbf{w}} \: &t\\
 s.t. \: &V_j^{\pi^*}(s_0) \ge V_j^{\pi_i}(s_0) + t, i=1, \cdots, k, \: j=1 \cdots N\\
 &|w_i| \le 1, i=1, \cdots, d
\end{align*}
where $N$ denotes the number of training MDPs and $V_j$ indicates the condition for the $j^{th}$ MDP. The extra conditions tend to make the problem infeasible quite quickly. We found it better to sum up these conditions. The problem now changes to
\begin{align*}
 \min\limits_{\mathbf{w}} \: &\frac{1}{2} \|\mathbf{w}\|^2 \\
 s.t. \: & \sum_{j=1}^{N} (V_j^{\pi^*}(s_0) - V_j^{\pi_i}(s_0)) \ge N, i=1, \cdots, k
\end{align*}
We use the standard Scilab function \texttt{qpsolve} to solve the problem.

\subsection{ME: Maximum Entropy Method for IRL}
Our implementation for the maximum entropy method requires some changes to incorporate our dynamic environment. Again, the MDP is reduced to a smaller MDP limited to the time period $t$ to $t+H$. We make changes to the way the expected action frequency is computed.

\begin{algorithm}
 \caption{Expected action frequency calculation}\label{mealgom}
 \begin{algorithmic}[1]
 \State Set $gZ_{s_i}'=1$ for the initial state, 0 otherwise
  \For{$t=1$ to $T$}
   \Statex \textbf{Backward pass}
   \State Set $Z_{s_i}=1$ for valid local goal states, 0 otherwise
   \State Recursively compute for $N$ iterations
   \Statex $Z_{a_{i,j}}=\sum\limits_k P(s_k | s_i, a_{i,j}) e^{\mathbf{w}^\top \boldsymbol{\phi}(s_i,a_{i,j})} Z_{s_k}$
   \Statex $Z_{s_i}=\sum\limits_{a_{i,j}} Z_{a_{i,j}}$
   \Statex \textbf{Forward pass}
   \State Set $Z_{s_i}'=gZ_{s_i}'$ for all states $s_i:(x,t_i), t_i=t$, 0 otherwise
   \State Recursively compute for $N$ iterations
   \Statex $Z_{a_{i,j}}' = Z_{s_i}' e^{\mathbf{w}^\top \boldsymbol{\phi}(s_i,a_{i,j})}$
   \Statex $Z_{s_i}' = \sum\limits_{actions \: a_{j,i} \: to \: s_i} Z_{a_{j,i}}'$
   \Statex \textbf{Concatenation}
   \State Set $gZ_{s_i}'=Z_{s_i}'$ and $gZ_{s_i}=Z_{s_i}$ for all states $s_i:(x,t_i), \forall t_i, t \le t_i \le t+H$
   \EndFor
   \Statex \textbf{Summing frequencies}
   \State $D_{a_{i,j}}=\frac{gZ_{s_i}' e^{\mathbf{w}^\top \boldsymbol{\phi}(s_i,a_{i,j})} gZ_{s_j}}{gZ_{s_{initial}}}$
 \end{algorithmic}
\end{algorithm}

We also need to normalize $D$ before computing the gradient: the action frequencies should sum to the length of a path minus one.
\par 
Our second implementation uses equation \ref{eq:nmme}. We will refer to it as \textbf{MEN}.

\subsection{GA: IRL using genetic algorithms}
As discussed earlier, we use a multi-objective genetic algorithm, implemented with the \texttt{optim\_moga} function from Scilab. We use a population size of 40 and run the algorithm for 10 generations. The parameters are not encoded (i.e. we use real values). For crossover, an extension of the convex combination is used, which combines two individuals in a ratio that may be negative.
\begin{align*}
r=& \text{random number} \\
c_1=& r \, i_1 + (1-r) \, i_2\\
c_2=& (1-r) \, i_1 + r \, i_2
\end{align*}
The mutation function adds a small perturbation $\pm \delta$ to an individual's components to get a mutated individual. The fitness function is defined as the sum of the absolute values of the differences in feature counts.
\begin{equation*}
f_{fitness}=\sum_{j=1}^{N} | F_{\zeta_j^*} - F_{\zeta_{j\:\mathbf{w}}} |
\end{equation*}
where $N$ denotes the number of training MDPs.
\par 
MOGA is a non-elitist algorithm and may not retain all the solutions from the pareto front between generations, in contrast to NSGA-II, which is elitist. Even though elitist algorithms such as NSGA-II generally outperform MOGA, we are dealing with a relatively small population size, and in our scenario MOGA outperforms NSGA-II.
\par 
In the case of NSGA-II, a low population size results in all the solutions belonging to the same class or rank. Two different sets of weights may generate the same policy and would have the same objective function values. Such weights would have a crowding distance of $0$ and would ultimately rank lower than some other set of weights of the same rank. NSGA-II thus shows a negative bias towards policies that may be generated by multiple sets of weights.
\par
The other reason could be that for the small population, an elitist algorithm would tend to retain a higher number of solutions from the pareto front thus reducing the exploration of the solution space.

\subsection{BIRL: Bayesian IRL}
In this case, the algorithm requires the complete expert policy instead of an observed path. We use equation \ref{eq:birls} for the standard implementation. Our implementation using equation \ref{eq:birli} is referred to as \textbf{BIRLI} and requires just the observed trajectory.
%\newpage
\section{Policies generated using predefined weights}
Since the maximum entropy method can only work with positive weights, we need to specify the sign in the features themselves. For the first set of experiments, our features are as follows:
\begin{itemize} %reduce spacing
 \setlength{\itemsep}{-2pt}
 \item a collision with a car gives us a negative reward (-1) 
 \item moving in either direction also gives us a negative reward (-1) 
 \item colliding with a refuelling car gives us a positive reward (1).
\end{itemize}

We set the weights to $[1000\quad 1\quad 1000]^\top$, indicating a strong preference for avoiding collisions with the cars carrying a negative reward and for seeking collisions with the refuelling cars, while caring little about movement. We generate policies for the MDPs using this weight vector, and these serve as the expert policies for their respective MDPs.


\subsection{Results}
\subsubsection{Deterministic actions}
\begin{center}
  \begin{tabular}{ | l | c | c | c | c |}
    \hline
    Algorithm & Weights & Path Match & Policy Match & Average Difference \\ \hline
    QP & 0.9264706 & 79.88 & 96.78 & 0.0356871 \\        
       & 0.0882353 & 	   &       & 0.1260655 \\
       & 0.6911765 & 	   &       & 0.0444611 \\ \hline
    GA & 0.5817141 & 95.84 & 99.46 & 0.0060606 \\        
       & 0.0237875 & 	   &       & 0.0286501 \\
       & 0.6006902 & 	   &       & 0.0060606 \\ \hline
    ME & 0.2099072 & 27.35 & 69.54 & 0.2191048 \\        
       & 0.7900928 & 	   &       & 0.9138373 \\
       & $1.341\times 10^{-8}$ & 	   &       & 0.1451539 \\ \hline
    \textbf{MEN} & 0.4768660 & \textbf{97.66} & \textbf{99.86} & \textbf{0.0} \\        
       & 0.0165905 & 	   &       & \textbf{0.0165289} \\
       & 0.5065435 & 	   &       & \textbf{0.0} \\ \hline
    BIRL & 0.4368039 & 53.12 & 65.84 & 0.0451011 \\        
       & 0.0 & 	   &       & 0.4914036 \\
       & 0.5631961 & 	   &       & 0.0421774 \\ \hline 
    BIRLI & 0.4867780 & 94.99 & 99.06 & 0.0096970 \\        
       & 0.0248060 & 	   &       & 0.0431956 \\
       & 0.4884160 & 	   &       & 0.0096970 \\ \hline
  \end{tabular}
\end{center}
Note that vectors (weights, average difference) are given in the order $[\mathit{feature}_1 \quad \mathit{feature}_2 \quad \mathit{feature}_3]^\top$.
\par 
The normalized maximum entropy method performs best, closely followed by our genetic algorithm approach and BIRLI. The standard probability definition for BIRL performs poorly, as does the un-normalized maximum entropy method.
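The evaluation columns of these tables can be sketched as follows. This is only our reading of the metrics (percentage agreement for path/policy match, per-feature means over the MDP set for the average difference); the function names are illustrative:

```python
import numpy as np

def policy_match(pi_expert, pi_learned):
    """Percentage of states on which the two tabular policies agree."""
    pi_expert = np.asarray(pi_expert)
    pi_learned = np.asarray(pi_learned)
    return 100.0 * np.mean(pi_expert == pi_learned)

def average_feature_difference(mu_expert, mu_learned):
    """Per-feature absolute difference between expert and learned
    feature sums, averaged over a set of MDPs.

    mu_expert, mu_learned: (num_mdps, num_features) arrays.
    Returns one value per feature, matching the table layout.
    """
    diff = np.abs(np.asarray(mu_expert) - np.asarray(mu_learned))
    return diff.mean(axis=0)
```

Path match is computed analogously to policy match, restricted to the states visited along the expert trajectory.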
\subsubsection{Non-deterministic actions}
\begin{center}
  \begin{tabular}{ | l | c | c | c | c |}
    \hline
    Algorithm & Weights & Path Match & Policy Match & Average Difference \\ \hline
    QP & 0.5817141 & 55.98 & 84.69 & 0.0178033 \\        
       & 0.0237875 & 	   &       & 0.3870907 \\
       & 0.6006902 & 	   &       & 0.0153828 \\ \hline
    \textbf{GA} & 0.6532417 & \textbf{98.09} & \textbf{99.77} & \textbf{0.0062628} \\        
       & 0.0009902 & 	   &       & \textbf{0.0232378} \\
       & 0.6349674 & 	   &       & \textbf{0.0061540} \\ \hline
    ME & 0.9768493 & 63.23 & 87.79 & 0.0364202 \\        
       & $2.743\times 10^{-8}$ & 	   &       & 0.1333707 \\
       & 0.0231507 & 	   &       & 0.0560314 \\ \hline
    MEN & 0.6362234 & 82.70 & 93.72 & 0.0305718 \\        
       & $6.943\times 10^{-21}$ & 	   &       & 0.1098317 \\
       & 0.3637766 & 	   &       & 0.0229517 \\ \hline
    BIRL & 0.7904742 & 71.99 & 90.02 & 0.0302867 \\        
       & 0.0 & 	   &       & 0.1718813 \\
       & 0.2095258 & 	   &       & 0.0358048 \\ \hline 
    BIRLI & 0.4666431 & 88.25 & 95.63 & 0.0072912 \\        
       & 0.0 & 	   &       & 0.1137779 \\
       & 0.5333569 & 	   &       & 0.0078020 \\ \hline
  \end{tabular}
\end{center}
In this case, our GA approach performed best, followed by BIRLI and the normalized maximum entropy method. Because of the limitations of the QP approach in the batch setting, it performs worse than the other algorithms.
%\newpage
\section{Trajectories of human controlled agents}
Our volunteers were members of the CHROMA team at INRIA Grenoble. We collected the trajectories of the robot controlled by our volunteers for the same set of MDPs used earlier (the training as well as the test set). We divided our volunteers into two groups and specified two sets of rules:\\
\textbf{Rule set 1}
\begin{itemize}
 \setlength{\itemsep}{-2pt}
 \item Try to avoid the red objects
 \item Try to collect blue objects
\end{itemize}
\textbf{Rule set 2}
\begin{itemize}
 \setlength{\itemsep}{-2pt}
 \item Try to hit the red objects 
 \item Try to minimize movement
 \item Try to avoid the blue objects 
\end{itemize}
The people in each group were asked to follow a particular rule set. A number of trajectories were collected for each MDP, and the one best fitting the rule set, evaluated using the sum of features over the trajectory, was selected as the expert trajectory for that MDP. For the second rule set, a change of sign for the first and third features is required for the maximum entropy method to work.
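The selection step can be sketched as follows. The text only states that the sum of features over the trajectory was used; scoring each trajectory with a rule-set weight vector (signs included) is one plausible reading, and the names here are ours:

```python
import numpy as np

def best_trajectory(trajectories, w):
    """Pick the trajectory whose weighted feature sum is highest.

    trajectories: list of trajectories, each a list of per-step
                  feature vectors
    w: weight vector encoding the rule set, e.g. for rule set 1 a
       negative weight on red collisions and a positive one on blue
    """
    scores = [np.dot(w, np.sum(traj, axis=0)) for traj in trajectories]
    return trajectories[int(np.argmax(scores))]
```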
\par
The actions are considered deterministic. BIRL cannot be compared here since it requires the complete policy over the state space, whereas we only have the path taken by the robot.
\subsection{Results}
\subsubsection{Rule set 1}
\begin{center}
  \begin{tabular}{ | l | c | c | c | c |}
    \hline
    Algorithm & Weights & Path Match & Policy Match & Average Difference \\ \hline
    QP & 0.0013509 & 39.86 & 76.57 & 0.1232874 \\        
       & 0.0006991 & 	   &       & 0.3835450 \\
       & 0.0401125 & 	   &       & 0.0740975 \\ \hline
    GA & 0.2666580 & \textbf{47.24} & 77.26 & 0.1027817 \\        
       & 0.0150257 & 	   &       & 0.2247087 \\
       & 0.3756208 & 	   &       & 0.0699799 \\ \hline
    ME & 0.2035503 & 25.48 & 68.78 & 0.1934495 \\        
       & 0.7964433 & 	   &       & 0.9488993 \\
       & 0.0000063 & 	   &       & 0.1281234 \\ \hline
    MEN & 0.2849231 & 43.87 & \textbf{77.46} & 0.1104534 \\        
       & 0.0658453 & 	   &       & 0.3742005 \\
       & 0.6492316 & 	   &       & 0.0540322 \\ \hline
    BIRLI & 0.3928222 & 32.74  & 53.94 & 0.1173798 \\        
          & 0 & 	   &       & 0.5801888 \\
          & 0.6071778 & 	   &       & 0.0593870 \\ \hline
  \end{tabular}
\end{center}
Based on the rule set, the first and third features should have nearly equal weights and the second feature should have a lower weight. The standard maximum entropy implementation performs the worst, with a very low weight assigned to the third feature. BIRLI assigns a weight of $0$ to the second feature, which agrees with the rule set, but otherwise performs badly on our metrics. The performance of the other three algorithms is comparable.
\par 
We can see that IRL is successful in recovering the reward function. Even though the rule set did not specify any restriction on movement, which would lead us to believe that the second weight should be $0$, humans unconsciously try to reduce movement, and some of the algorithms recover weights accordingly.
\subsubsection{Rule set 2}
\begin{center}
  \begin{tabular}{ | l | c | c | c | c |}
    \hline
    Algorithm & Weights & Path Match & Policy Match & Average Difference \\ \hline
    QP & 0.12 & 34.67 & 84.55 & 0.2105596 \\        
       & 0.1371429 & 	   &       & 0.2970270 \\
       & 0.9057143 & 	   &       & 0.0053476 \\ \hline
    GA & 0.3174619 & \textbf{35.76} & 84.54 & 0.3135418 \\        
       & 0.3135551 & 	   &       & 0.3143837 \\
       & 0.7662550 & 	   &       & 0.0053476 \\ \hline
    ME & $2.909\times 10^{-17}$ & 14.43 & \textbf{85.22} & 0.5744536 \\        
       & 1 & 	   &       & 0.4434879 \\
       & $6.377\times 10^{-24}$ & 	   &       & 0.0321774 \\ \hline
    MEN & 0.0014155 & 14.43 & \textbf{85.22} & 0.5744536 \\        
       & 0.9985845 & 	   &       & 0.4434879 \\
       & $3.452\times 10^{-21}$ & 	   &       & 0.0321774 \\ \hline
    BIRLI & 0.2159944 & 35.20 & 85.05 & \textbf{0.2082719} \\        
           & 0.2190940 & 	   &       & \textbf{0.2525666} \\
           & 0.5649116 & 	   &       & \textbf{0.0053476} \\ \hline
  \end{tabular}
\end{center}
From the weights, it seems that QP, GA and BIRLI assign almost equal weights to the first and second features and a higher weight to the third feature (indicating a preference for avoiding collisions with the blue objects). These weights agree with the rule set. The maximum entropy implementations, however, assign a very high weight to the second feature compared to the other two, and the resulting policy would seem to just minimize movement. GA maximized path match and policy match, while BIRLI minimized the average difference of features.
\section{Summary}
We conducted two sets of experiments. The first set determined whether the algorithms were able to recover predefined weights. The policies or trajectories of the expert were at least locally optimal, and our proposed approaches consistently outperformed the state of the art. The second set of experiments presented the real challenge: the algorithms were required to reproduce the expert's behaviour given just the observed trajectories, which in some cases were not even locally optimal. The results varied this time around, but our proposed approaches still produced results consistent with the rule sets used for the experiments.
\chapter{Conclusion and future work}\label{ch:cf}
We discussed a few popular IRL algorithms and compared them with our own variations on our test problem. Each algorithm works well in theory, but the results show that this does not always extend to the real world. Some of the algorithms do not work well when the expert policy is not optimal (as is the case with the human observations). Even when the policies are locally optimal (when generated with predefined weights), some algorithms, such as the standard maximum entropy method, perform poorly. Our global optimization approach using genetic algorithms performs consistently well across our different experiments. Our modified Bayesian IRL algorithm also outperformed the standard algorithm.
\par 
However, our approach using genetic algorithms is quite slow, since it repeatedly evaluates a policy for each reward function in the population. Our BIRLI approach also has a few flaws: it might converge to a solution with a non-zero weight only for the feature with the smallest difference of feature sums (equation \ref{eq:birli}). Using our performance metrics (path match or policy match) to define this probability could, however, lead to useful results for apprenticeship learning, where recovering the reward function is not the goal. Our normalized maximum entropy IRL approach looks promising as well.
\par 
Our toy problem was quite simple, and so were our features. Increasing the number of features introduces more human knowledge, but more features could cause over-fitting and generally require more training data. As the complexity of the underlying MDP increases, feature selection for IRL can become extremely difficult. Integrating the feature construction of Levine et al. \cite{NIPS2010_3918} into every algorithm could lead to interesting results. In their approach, all relevant features are enumerated instead of being specified manually: each iteration consists of an optimization step followed by a fitting step that determines the set of features to be used.
\par
We only considered a reward function that is linear in the features. This formulation can only work with a low number of features, and reducing the number of features used is another problem that warrants further work. Since a linear formulation can be insufficient to correctly represent the reward, more complex representations need to be studied. Levine et al. \cite{NIPS2011_4420} used Gaussian processes to model the reward as a non-linear function. Genetic Programming \cite{Poli:2008:FGG:1796422} and evolutionary neural networks \cite{ann} are two other approaches that could replace our linear formulation in keeping with our global optimization approach.
\par
IRL has been used for planning-based prediction \cite{Ziebart:2009:PPP:1732643.1732694} and activity forecasting \cite{Kitani:2012:AF:2404742.2404759}. The future locations of people or other dynamic obstacles can be predicted using the reward function obtained by IRL. Activity forecasting addresses the task of inferring future actions of people from noisy visual input (using the maximum entropy method); an extension to deal with noisy expert data would be required for the other algorithms discussed here. This direction has high potential for use in autonomous vehicles as well.
\par 
We discussed IRL algorithms for discrete spaces. Working with continuous spaces is difficult, since it generally increases the size and dimensionality of the underlying problem. Levine and Koltun \cite{2012-cioc} consider continuous state and action spaces and, using the maximum entropy concepts of \cite{Ziebart_2008_6055}, provide an algorithm that optimizes the approximate likelihood of the expert trajectories under a parameterized reward. They are limited to differentiable features, but further work in this direction could allow IRL to be applied to more complex, high-dimensional problems.
\par 
We considered just one robot moving in its environment. However, the use of inverse reinforcement learning to learn the interactions of multiple agents in the same environment is still in its infancy.
%\\$<$ future work $>$\\
%feature levels\\
%fewer features, instead of a lot of them (which introduce human knowledge)\\
%more complicated function\\
%application to real world problems, multiagent

\bibliography{report1}{}
\bibliographystyle{plain}
\end{document}     
