\documentclass[a4paper,10pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{cite}

% Title Page
\title{Motion Planning in Dynamic Environments using Inverse Reinforcement Learning}
\author{Nishant Jain}
\date{}

\begin{document}
\maketitle

%Determining actions of robots in dynamic environments is well studied. Recent research has shown the benefit of using Markov Decision Processes to model problems. Reinforcement learning deals with obtaining a policy that would maximize the accumulated reward over the MDP. Inverse reinforcement learning however tries to recover the reward associated with the states or actions. Learning the rewards is transferable to other MDPs and is thus much more beneficial.
The inverse reinforcement learning problem was formulated in \cite{Russell:1998:LAU:279943.279964} as follows:\\
\textbf{Given} 1) measurements of an agent's behaviour over time, in a variety of circumstances; 2) if needed, measurements of the sensory inputs to that agent; 3) if available, a model of the environment.\\
\textbf{Determine} the reward function being optimized.
\\
%general mdp definition
\par
We begin with a one-dimensional problem. Consider a robot moving across a road crossing. Cars move at constant speed. The robot must collide with some of the cars (say, refuelling cars) but must avoid colliding with the others. At each time step the robot can move left, move right, or stay in place.
%attach figure
\centerline{\includegraphics[scale=.5]{img1f.png}}\par
The state of the robot is defined by its location as well as the time step. The reward function is assumed to be a linear combination of features.
%equation
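Under this assumption (with $\phi$ and $w$ as illustrative symbols, not notation fixed by the cited papers), the reward can be written as
\[
R(s) \;=\; w^{\top}\phi(s) \;=\; \sum_{i} w_i\,\phi_i(s),
\]
where $\phi(s)$ is the feature vector of state $s$ and $w$ is the weight vector we wish to recover.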
We wish to determine the weight for each feature. We use three features for the problem: colliding with an ordinary car yields a negative reward ($-1$), moving in either direction yields a negative reward ($-1$), and colliding with a refuelling car yields a positive reward ($+1$).
\par
%cite russell,
\cite{Ng00algorithmsfor} gave the first algorithmic formulation of IRL. Since the state space is large, we try to recover the reward from sampled trajectories. We require a policy generated by an expert, from which we can compute the sum of features over each trajectory. Starting from randomly generated policies, we then search for weights under which the expert's trajectories outperform those of the candidate policies. This paper posed the optimization as a linear program.
% equations
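As a sketch of the trajectory-based linear program (symbols are ours, following the construction in \cite{Ng00algorithmsfor}): let $\hat{\mu}_i^{\pi}$ denote the empirical sum of feature $i$ along trajectories sampled under policy $\pi$, so that $\hat{V}^{\pi}(s_0) = \sum_i w_i\,\hat{\mu}_i^{\pi}$. The weights are then chosen to solve
\[
\max_{w}\;\sum_{k} p\Big(\hat{V}^{\pi^{*}}(s_0) - \hat{V}^{\pi_k}(s_0)\Big)
\quad \mbox{s.t.}\quad |w_i| \le 1,
\]
where $\pi^{*}$ is the expert policy, the $\pi_k$ are the candidate policies generated so far, and $p(x) = x$ for $x \ge 0$, $p(x) = 2x$ otherwise, penalizing candidate policies that outperform the expert.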
This optimization, however, tends to drive each weight to an extreme value of $1$ or $-1$. Real reward weights are rarely that simple, so this formulation fails for most problems.
\par
Abbeel and Ng \cite{Abbeel:2004:ALV:1015330.1015430} worked further along the same strategy of matching feature weights.
%eqns
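The matched quantity is the expected discounted feature count of a policy; a sketch of the max-margin step (notation ours) is
\[
\mu(\pi) \;=\; \mathrm{E}\Big[\sum_{t=0}^{\infty} \gamma^{t}\phi(s_t) \,\Big|\, \pi\Big],
\qquad
\max_{t,\;\|w\|_2 \le 1} \; t
\quad \mbox{s.t.}\quad w^{\top}\mu_E \;\ge\; w^{\top}\mu(\pi^{(j)}) + t \;\;\forall j,
\]
where $\mu_E$ is the expert's feature expectation and the $\pi^{(j)}$ are the policies found in earlier iterations; the algorithm terminates once the margin $t$ falls below a tolerance.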
In this case, however, if a policy close to the optimal one is found that nevertheless does not satisfy the termination condition, the max-min formulation can produce very poor results.
We thus choose a simpler method that again relies on feature matching: we search for weights whose induced policy accumulates a sum of features close to that of the optimal policy. We do so using a multi-objective genetic algorithm.
%give formulation
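One way to state the multi-objective problem (the objective vector is our own notation; the genetic algorithm's operators are standard) is
\[
\min_{w}\;\Big( \big|\hat{\mu}_1^{\pi_w} - \hat{\mu}_1^{E}\big|,\;\ldots,\;\big|\hat{\mu}_k^{\pi_w} - \hat{\mu}_k^{E}\big| \Big),
\]
where $\pi_w$ is an optimal policy under the reward $w^{\top}\phi$, $\hat{\mu}_i^{\pi_w}$ is the empirical sum of feature $i$ under $\pi_w$, and $\hat{\mu}_i^{E}$ is the corresponding sum along the expert's trajectories. The genetic algorithm evolves a population of weight vectors towards the Pareto front of these objectives.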
\par
Lastly, we compare it with maximum entropy IRL \cite{Ziebart_2008_6055}.
The gradient at each step is again just a difference of feature expectations. The paper, however, is vague about the feature details that determine the ratio of the probabilities of each action. The probabilities of reaching a state at time step $t$ must be normalized so that they sum to one. The method also requires that the sums of the individual features be comparable in scale; otherwise it converges to incorrect solutions.
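A sketch of the quantities involved (notation follows \cite{Ziebart_2008_6055} only loosely): trajectories $\zeta$ are assigned probability $P(\zeta \mid w) \propto \exp(w^{\top} f_{\zeta})$, with $f_{\zeta}$ the sum of features along $\zeta$, and the gradient of the log-likelihood is
\[
\nabla_{w} L \;=\; \tilde{f} \;-\; \sum_{s} D_{s}\,\phi(s),
\]
where $\tilde{f}$ is the empirical expert feature expectation and $D_s$ is the expected state visitation frequency, computed by dynamic programming over the MDP.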

\bibliography{report1}{}
\bibliographystyle{plain}
\end{document}