\documentclass{article}
\usepackage[margin=1in]{geometry}
\usepackage{pdfpages}
\begin{document}
\section{Who will be working on the project?}
Ryan Patrick, Taranjeet Singh Bhatia, and Hamid Izadinia

\section{What is the goal of your project?}
We want to evolve a controller for the Ms. Pac-Man agent that can perform at least as well as the current state-of-the-art controllers. While the controller proposed in \cite{tan2012} uses a genetic algorithm to evolve a neural network that controls the player's movement, we would like to remove the neural network and drive the player's movement solely from a path generated through evolution.

\section{What is the motivation for your project?}
Previous work has shown that an approach to player control that combines evolutionary computation and neural networks can outperform rule-based agents. We want to see if we can design a controller that removes the neural network component of that approach and achieves comparable or better results using only evolutionary computation. Removing the neural network may yield a simpler solution and clarify which element of the combined approach had the greatest effect on the outcome: the neural network or the evolution.

\section{How do you plan to achieve your goal?}
We plan to take a combination of an experimental and an applied approach. We will experiment with different elements of evolution and compare our performance against the large corpus of agents that have been made available for the Ms. Pac-Man vs Ghosts platform. Once our final parameters are set, we will run experiments to determine whether our agent's performance differs significantly from that of our opponents against a variety of intelligent ghost controllers.

\section{Please give a description of how you will represent the information that is to be learned?}
We are considering two representations. One represents the game environment as a graph, where junctions (areas where a decision about direction of movement must be made) are represented as nodes and the corridors between junctions are represented as edges. The weights of the edges are evolved in a way that would dictate which direction should be pursued. The actual pursuit of a direction is dictated by a standard pathfinding algorithm.
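The first representation can be illustrated with a short sketch. The function and graph layout below are hypothetical, not part of the actual framework: junctions are nodes, corridors are weighted edges (with weights supplied by the evolved genome), and a standard shortest-path search (here, Dijkstra's algorithm) selects the direction to pursue.

```python
# Illustrative sketch (hypothetical names): the maze as a weighted graph,
# with evolved edge weights steering a standard shortest-path search.
import heapq


def best_path(adj, start, goal):
    """Dijkstra's algorithm over evolved edge weights.

    adj: {node: [(neighbor, weight), ...]} -- weights come from the genome.
    Returns the lowest-cost node sequence from start to goal.
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk predecessors back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Because the evolution changes only the edge weights, the pathfinding component itself never needs to be retrained.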

The second method evolves the policy that is taken to move through the environment, based on the state of the game and the model of the world. The population of policies is evaluated and the best policy at a given time is pursued.
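The second representation follows a generic generational loop. The sketch below is a minimal illustration, assuming problem-specific callables (`evaluate`, `random_policy`, `mutate`) that we would supply; it simply keeps the top half of the population each generation and refills with mutants.

```python
import random


def evolve_policies(evaluate, random_policy, mutate, pop_size=20, generations=50):
    """Minimal evolutionary loop: truncation selection plus mutation.

    evaluate/random_policy/mutate are problem-specific callables
    (hypothetical here). Returns the best policy found.
    """
    population = [random_policy() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and keep the top half as the elite.
        scored = sorted(population, key=evaluate, reverse=True)
        elite = scored[: pop_size // 2]
        # Refill the population with mutated copies of elite members.
        population = elite + [
            mutate(random.choice(elite)) for _ in range(pop_size - len(elite))
        ]
    return max(population, key=evaluate)
```

In our setting, `evaluate` would play one or more games and return the in-game score, so the best policy at any given time is the one actually pursued.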

\subsection{Revision}
The first representation will have a fixed-length string based on the evolving rewards/penalties of pills, power pills, non-edible ghosts, and edible ghosts. A screenshot of the Ms. Pac-Man game is shown in Figure~\ref{fig:maze}, and a visualization of our graph nodes is shown in Figure~\ref{fig:nodes}. The weights of edges between nodes in the graph will be changed as the game progresses based on the evolving values of game elements, and the agent will move in a way that attempts to maximize the value of the path without encountering a non-edible ghost.
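As a hypothetical sketch of that fixed-length string: a four-gene genome assigns a value to each item type, and a corridor's traversal cost is its length minus the evolved value of what it contains, so a negative ghost gene raises the cost of corridors holding non-edible ghosts. The names and clamping constant below are illustrative assumptions, not final design decisions.

```python
# Hypothetical four-gene genome: one evolved value per maze item type.
GENE_NAMES = ("pill", "power_pill", "ghost", "edible_ghost")


def edge_cost(length, contents, genome):
    """Traversal cost of a corridor under an evolved genome.

    length:   corridor length in maze cells.
    contents: {item_name: count} of items currently in the corridor.
    genome:   sequence of four floats, ordered as GENE_NAMES.
    """
    value = sum(genome[GENE_NAMES.index(item)] * n
                for item, n in contents.items())
    # Lower cost = more attractive. Clamp at a small positive number so a
    # Dijkstra-style search over the weights remains well defined.
    return max(0.1, length - value)
```

Recomputing these costs as pills are eaten and ghosts move is what lets a fixed genome produce different behavior as the game progresses.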

\begin{figure}[htbp]
  \centering
  \includegraphics[height=0.25\textheight]{figures/maze}
  \caption{Screenshot of the Ms. Pac-Man Game}
  \label{fig:maze}
\end{figure}

\begin{figure}[htbp]
  \centering
  \includegraphics[height=0.25\textheight]{figures/nodes}
  \caption{Annotation of the nodes in our graph representation of the Ms. Pac-Man Game}
  \label{fig:nodes}
\end{figure}

\section{Please describe how you expect to evaluate how well your algorithm is working?}
We will compare the mean and standard deviation of performance between two Ms. Pac-Man controllers against a common subset of ghost controllers.

\subsection{Revision}
We will be using the in-game scores attained by our controller to judge its performance. Like the box packing problem or the Iterated Prisoner's Dilemma, Ms. Pac-Man has an objective measure that can be used to judge performance. We will take the means and standard deviations of those in-game scores when comparing our agent against others and when measuring our agent's performance against various ghost teams.
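To judge whether two controllers' scores differ significantly, one simple option (a sketch under the assumption of roughly normal score distributions, not a committed methodology) is to summarize each sample and compute Welch's $t$ statistic, which does not assume equal variances:

```python
from statistics import mean, stdev


def summarize(scores):
    """Mean and sample standard deviation of a controller's game scores."""
    return mean(scores), stdev(scores)


def welch_t(a, b):
    """Welch's t statistic comparing two controllers' score samples.

    Uses per-sample variances, so it does not assume the two
    controllers have equal score variance.
    """
    ma, mb = mean(a), mean(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)
```

A large $|t|$ across many games against a common set of ghost teams would indicate a real performance gap rather than noise.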

\section{What do you expect will be the contribution of your work? How will it extend current published work?}
We intend to evolve an agent that performs as well as agents controlled by evolved neural networks, but without the overhead of maintaining a neural network. If we can achieve such a result, it would suggest that a neural network adds unnecessary overhead when controlling the Ms. Pac-Man agent.

\bibliographystyle{annotate}
\nocite{*}
\bibliography{bibliography}

%\input{papers}

\end{document}
