\documentclass[12pt]{article}

\usepackage{graphicx}
\usepackage{listings}
\usepackage[utf8]{inputenc}

%\pagestyle{plain}
%\pagestyle{empty}
\pagestyle{headings}

\begin{document}

\title{ WS2 - Virtual City\\ Goal Oriented Planning }
\author{Jendrik Illner, Bart de Bree \\
NHTV University of Applied Sciences \\
Breda, Netherlands}
\date{20th December 2012}

\maketitle

\begin{abstract}
Simulating an independent agent in a virtual city is a tough problem.
Survival depends on the ability of an AI-agent to reason about its current state and its future needs while keeping track of how to reach its goals.
This paper presents an approach to this problem based on planning future actions, which allows the AI to survive easily. The presented approach allows for a clear separation between modules; the resulting system is therefore easy to extend and maintain.
\end{abstract}

\section{Introduction}
\begin{figure}
\centering
\includegraphics[width=13cm]{images/city.png}
\caption{Virtual City}
\label{virtual_city}
\end{figure}

In most open-world city games such as GTA4 \cite{GTA} and Mafia 2 \cite{MAF2}, pedestrians fill the streets to make the world seem more believable. Unfortunately, in most games these pedestrians do not follow any specific plans for their day. They just wander around until the player is too far away and then they suddenly disappear. Whenever the player returns to the same area, the same characters may appear again at new positions, having no memory of their previous life, or new ones may be generated at random. A careful player will notice this, breaking the immersion of a believable simulated world.

The continuing problems with AI in games are the reason more game-AI-specific research and books are being published, a recent example being \textit{Artificial Intelligence for Games} \cite{AIGAMES}.

In the following we introduce the setting and the problem we are trying to solve. We then present our solution to the problem.

\subsection{Setting}
The Virtual City simulation as depicted in Figure \ref{virtual_city} tries to simulate the life of individual believable AI agents which do not just wander through the world but are actually following an appropriate plan.
The simulation represents a city made up of a $7\times7$ grid. Each cell in the grid is a building.
The AI simulates the role of one inhabitant of the city as he travels between the buildings and performs actions in them. 

Each AI has the following statistics:
\begin{itemize}
	\item Rested
	
	Defines how well-rested the agent is; a low value means the agent is tired.
	
	\item Relaxed
	
	Specifies how relaxed the agent is.
	
	\item Sated
	
	Defines how hungry the agent is. A value of 1 means not hungry and tends towards 0 as the agent gets more hungry.
	
	\item Fitness
	
	Specifies how fit the agent is. 	
	
	\item Enjoyed
	
	Enjoyment defines how happy the agent is. Free-time activities help to increase enjoyment.
	
\end{itemize}
Each of these statistics lies in the range $[0,1]$. If any of them reaches zero the AI dies. 
The goal of the AI is to make sure that this does not happen. 
Other resources the AI has to keep track of are \textit{money} and \textit{groceries}.
There are many possible ways to increase the different statistics. Each action the AI can execute can only be executed at a given location; for example, cooking can only be done at home.
A number of preconditions have to be met before the AI can execute an action. To be able to cook, the agent needs to have groceries. These can be bought at a store, but only if the store is open and the agent has enough \textit{money} to buy them. 
The main challenge for the AI in this simulation is the large number of possible action sequences and the preconditions which have to be resolved before an action can be executed.

\subsection{Solution}
To solve the challenges the AI is faced with, we decided to implement our AI based on Goal-Oriented Action Planning \cite{JEFF03}.
We describe how we implemented the planner architecture and how we connected it to the game.
Our AI is separated into the following phases:
\begin{enumerate}
\item Goal choosing: We describe how the AI picks the goal it will try to achieve next.
\item	Plan Building: In this phase the planner builds a hierarchical graph of all possible plans to reach the given goal. 
We start at the goal and go backwards towards the position of the agent. 
Planning is done in abstract terms (\textit{GoTo Home}, \textit{Eat Snack}) instead of using specific positions.
\item Plan choosing: The planner returns a list of possible plans the AI could follow to reach the goal. The planner does not decide on the final plan. The higher-level agent AI system decides which plan to follow, because it has more information about the state of the agent and possibly about future plans.
\item Plan translation: In this phase the plan chosen in Phase 3 is translated from the abstract planning terms into the concrete actions the AI has to execute.
\end{enumerate}

The presented architecture allows for a clear separation between the different phases, creating an effective solution which also makes maintenance and the addition of future actions easy.

\section{Technique}

In the following section we will go into detail for the different phases of the AI pipeline. We will start in Section \ref{sec:goal_choosing} by discussing how the AI decides which goal to choose.

Then in Section \ref{sec:plan_building} we will show how the goal which should be satisfied gets translated into possible plans.

Section \ref{sec:plan_choosing} will present the technique used to choose which plan the AI should be executing.

Afterwards in Section \ref{sec:plan_translation} we will discuss how the plan will actually be executed.

\subsection {Choosing a goal}
\label{sec:goal_choosing}
The first phase of the AI algorithm is to decide which goal should be satisfied. A goal is defined by which attribute should be changed and what change is to be expected. For example one possible goal might be to increase the \textit{sated} value by 0.6.
For the decision we chose a simple approach. We check the current state of the agent and try to solve for the lowest statistic first. 
If no valid plan can be found for this goal, we try the second-lowest statistic. This can happen, for example, when the agent needs to buy groceries but the stores are closed. 
We continue this process until we have found a goal which can be satisfied.
We added one special condition to the goal selection: if the agent has less money than required to buy groceries, and all other statistics are above $0.8$, then the AI will try to earn some money. This gives the agent enough margin to buy groceries when they are required next.
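The goal-selection heuristic described above can be sketched as follows. This is a minimal illustrative sketch, not the actual implementation: the function name, the grocery price constant and the \texttt{plan\_exists} callback are our own assumptions.

```python
# Hypothetical sketch of the goal-selection heuristic described above.
GROCERY_PRICE = 10  # assumed cost of groceries

def choose_goal(stats, money, plan_exists):
    """stats: dict mapping statistic name -> value in [0, 1].
    plan_exists: callable telling us whether a valid plan can
    currently be built for a given goal (e.g. stores may be closed)."""
    # Special case: earn money first if we could not afford groceries
    # and every statistic is still comfortably high.
    if money < GROCERY_PRICE and all(v > 0.8 for v in stats.values()):
        return "earn_money"
    # Otherwise try statistics from lowest to highest until the
    # planner can actually produce a valid plan for one of them.
    for name in sorted(stats, key=stats.get):
        if plan_exists(name):
            return name
    return None  # no satisfiable goal found
```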

\subsection {Plan building}
\label{sec:plan_building}
The planner receives the goal from the previous phase. Given the goal to reach and the current state of the agent, the planner tries to find all plans that will reach the goal. The result is a set of $N \ge 0$ possible plans.
All the actions that the agent could possibly execute are contained in the action list. 
Each action is defined by a number of effects which change the state of the agent and the preconditions which have to be fulfilled before the action can be executed.

Effects are defined by which statistic they influence and by how much they change those statistics. Some examples are -10 \textit{money}, +0.1 \textit{sated}. 

Preconditions are defined similarly but they define what statistics the agent has to meet before being able to execute an action. Some examples for preconditions are that the player has to be at home and needs to have two groceries to be able to cook.
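As an illustration, an action with its effects and preconditions could be represented as follows. This is a hypothetical sketch; the field names and the example numbers are our own, not taken from the implementation.

```python
# Hypothetical representation of an action with its effects and
# preconditions, as described above.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    location: str                  # where the action may be executed
    effects: dict = field(default_factory=dict)        # statistic -> delta
    preconditions: dict = field(default_factory=dict)  # statistic -> minimum

    def is_executable(self, state):
        """An action is executable when every precondition is met."""
        return all(state.get(k, 0) >= v for k, v in self.preconditions.items())

    def apply(self, state):
        """Return the agent state after applying all effects."""
        new_state = dict(state)
        for k, delta in self.effects.items():
            new_state[k] = new_state.get(k, 0) + delta
        return new_state

# Example: cooking requires two groceries and consumes them.
cook = Action("Cook", location="home",
              effects={"sated": 0.4, "groceries": -2},
              preconditions={"groceries": 2})
```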

For each possible action we check whether it gets us closer to the current goal. The current goal is not fixed and may change during the planning phase. 
For example, the planning starts by finding actions which fulfill the \textit{sated} goal. Once we have found an action which gets us closer to the goal, we add it as a child of the current parent node in the planner graph. 
Each node in the graph contains a list of preconditions which still need to be met. When a new action is added to the graph we review the preconditions of the parent node. If these preconditions are not yet met we add them again to the list of preconditions of the newly added action.

When the goal has been fulfilled we enter the next level of recursion and start to meet all preconditions contained in the precondition list. This process continues until all preconditions have been met.
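The recursive backward search can be illustrated with a heavily simplified sketch. We ignore locations and partial progress towards a statistic, treat each open condition as closed by a single action, and use made-up action data; the actual planner additionally builds the full plan graph with per-node precondition lists.

```python
# Heavily simplified backward search: each open condition is closed by
# one action whose effects touch it; the action's own preconditions are
# then re-opened. Action data is illustrative only.
ACTIONS = [
    ("Cook",         {"sated": 0.6},   {"groceries": 2}),
    ("EatSnack",     {"sated": 0.2},   {"groceries": 1}),
    ("BuyGroceries", {"groceries": 2}, {"money": 10}),
    ("Work",         {"money": 50},    {}),
]

def build_plans(open_conds, state, depth=5):
    """Return all plans (action names in execution order) that close
    every condition in open_conds, given the agent's current state."""
    # Drop conditions already satisfied by the agent's current state.
    open_conds = {k: v for k, v in open_conds.items()
                  if state.get(k, 0) < v}
    if not open_conds:
        return [[]]
    if depth == 0:          # give up on overly deep action chains
        return []
    plans = []
    for name, effects, pre in ACTIONS:
        if any(k in effects for k in open_conds):
            remaining = {k: v for k, v in open_conds.items()
                         if k not in effects}
            remaining.update(pre)   # re-open this action's preconditions
            for sub in build_plans(remaining, state, depth - 1):
                plans.append(sub + [name])
    return plans
```

A broke, hungry agent then yields chains such as \textit{Work} $\rightarrow$ \textit{BuyGroceries} $\rightarrow$ \textit{Cook}, while an agent with money skips the \textit{Work} step.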

\begin{figure}
\centering
\includegraphics[width=13cm]{images/plan_graph.png}
\caption{A possible plan graph}
\label{possible_plan_graph}
\end{figure}

Figure \ref{possible_plan_graph} shows what a possible goal graph might look like after the planner phase has been completed.

\subsection {Choosing a plan}
\label{sec:plan_choosing}
The input for this stage is the action graph created during the previous phase. The graph is transformed into one action list per plan. These lists are created by finding each node which has no children and walking up until we reach the goal. This is necessary because the search starts at the goal, and the nodes in the graph are therefore in reverse execution order. 
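The leaf-to-root walk can be sketched as follows; this is a simplified illustration with our own data layout (explicit parent and children maps over action names), not the implementation's graph structure.

```python
# Sketch of turning the plan graph into per-plan action lists: start at
# each leaf (a node without children) and walk parent links up to the
# goal, which reverses the backward-search order into execution order.
def extract_plans(parent, children):
    """parent: dict node -> parent node (the goal node maps to None).
    children: dict node -> list of child nodes."""
    leaves = [n for n in parent if not children.get(n)]
    plans = []
    for leaf in leaves:
        plan, node = [], leaf
        while node is not None:
            plan.append(node)
            node = parent[node]
        plans.append(plan)
    return plans
```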
To decide which plan to use we use \textit{Utility theory} \cite{ZUB10}. This means that, to calculate the utility value, we start off with the current state of the agent. We then apply all effects of the actions to the world state and record those changes. The final world state is then converted into a single utility value and we choose the plan with the highest utility value for execution.
The utility value is calculated as the sum of all statistics clamped to the $[0,1]$ range and the amount of groceries normalized into the $[0,1]$ range. The money utility is calculated by $ U_m = \frac{M_r - M_s}{M_s} $.

$U_m$ is the resulting utility value and $M_r$ is the amount of money after the plan has been executed while $M_s$ is the amount of money before the plan was executed.
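The utility computation can be written out as a short sketch. The statistic names follow the paper, but \texttt{MAX\_GROCERIES} and the guard against zero starting money are our own assumptions.

```python
# Sketch of the plan-utility calculation described above.
MAX_GROCERIES = 10  # assumed normalization constant for groceries

def plan_utility(state_before, state_after):
    """Both states are dicts holding the five statistics plus
    'money' and 'groceries'."""
    clamp = lambda x: max(0.0, min(1.0, x))
    stats = ["rested", "relaxed", "sated", "fitness", "enjoyed"]
    # Sum of all statistics, each clamped to [0, 1].
    u = sum(clamp(state_after[s]) for s in stats)
    # Groceries normalized into [0, 1].
    u += clamp(state_after["groceries"] / MAX_GROCERIES)
    # Money utility: U_m = (M_r - M_s) / M_s.
    m_s, m_r = state_before["money"], state_after["money"]
    u += (m_r - m_s) / m_s if m_s else 0.0
    return u
```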

\subsection {Executing a plan}
\label{sec:plan_translation}
Now we will convert the chosen plan from the abstract planning representation into concrete actions which can be executed.
This phase is mostly straightforward. Each abstract action has a corresponding concrete action to be executed. The only element which cannot be converted directly is the \textit{GoTo} action. This action only contains the abstract location (home, store, work …) and we have to decide which of the possible buildings we actually want to go to.
The decision which building to go to is actually a \textit{Traveling Salesman}-style problem, since we want to visit a number of buildings with the least overall travel distance. But for the given problem the difference between the optimal path and a slightly less optimal one is small.
We therefore decided to always use the building nearest to the home. We arrived at this choice after first testing the building nearest to the agent's current location, which proved quite inefficient: the location nearest to the agent was in most cases further from the home, and since the agent eventually returns home, it had to walk unnecessary distances.
Once the target location has been found, then a path from the current location to the target location has to be found as well. Path-finding is done using the \textit{A* search algorithm} \cite{WIKIPEDIA_A_STAR}. 
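The building choice described above might look like the following hypothetical helper. The building-list layout and the Manhattan distance metric are our own assumptions; the simulation itself computes actual paths with A*.

```python
# Hypothetical translation of an abstract GoTo: among all buildings of
# the requested kind, pick the one nearest to the agent's home. Grid
# distance is approximated with the Manhattan metric here; the real
# simulation path-finds with A*.
def nearest_to_home(buildings, kind, home):
    """buildings: list of (kind, (x, y)) grid cells."""
    dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    candidates = [pos for k, pos in buildings if k == kind]
    return min(candidates, key=lambda pos: dist(pos, home), default=None)
```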

\section {Results}

\begin{figure}
\centering
\includegraphics[width=13cm]{images/statistics_over_time.png}
\caption{Tracking of statistics over $\sim$270 days}
\label{statistics_over_time_270}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=10cm]{images/perf_graph.png}
\caption{Performance in the large city layout with a varying number of agents}
\label{perf_graph}
\end{figure}

The resulting AI system shows great potential. We were able to achieve very good results for the three main requirements: survival, performance and maintainability. In the following we go into more detail on each of these.

\textbf{Survival.} 
We have tested the AI extensively and the agent never died during the maximum testing phase of 270 days. 
The worst-case scenario for the AI is that the agent gets hungry just as the shops close. It then has to wait for another 10 hours until the shops open again. But as can be seen from Figure \ref{statistics_over_time_270}, even in those cases the \textit{sated} statistic never dropped below 0.4.

\textbf{Performance.} 
The performance of the resulting system is very good. Our AI simulation runs in real-time at a constant 60 FPS with a single simulated agent. 
With 20 parallel agents we are still able to achieve a constant 60 FPS. Testing with even more agents in the small city is not possible because there are not enough houses for the agents.

We therefore created a new city layout, see Appendix \ref{appendix_city_layout}. In that city layout we are able to simulate 80 agents at approximately 30 FPS.
The results for the performance investigation can be seen in Figure \ref{perf_graph}.

\textbf{Maintainability.} 
The separation of the AI planning process into four distinct phases proved to be very beneficial since it clearly separated the tasks and we were able to replace, tune and optimize one stage without having to make changes to the other stages of the pipeline.
The Goal Oriented Action Planning approach is very helpful for the maintainability of the resulting system. It clearly separates actions from goals, and it is therefore possible to add and remove actions easily without having to add any new transitions between them.

\section {Related Work}
This work is based on the work done in the area of Goal Oriented Action Planning. Jeff Orkin popularized this approach by publishing the solutions developed for the problems faced during the development of FEAR \cite{JEFF03,JEFF06,JEFF05}.
Goal oriented action planning is heavily based on the results published by the MIT Synthetic Characters Group \cite{MIT01}. Their work provides a complete framework for AI decisions based on non-perfect knowledge, using the information collected from the environment of the agent.
An overview of different goal-oriented behavior techniques can be found in \cite{AIGAMES}.

The symbolic representation of the world state is based on ideas found in \cite{JEFF04}.

Other approaches to the presented problem can be found in techniques such as finite state machines (FSMs) \cite{FSM}, but these become hard to maintain once more than a small number of states and transitions have been added. Hierarchical finite state machines \cite{AIGAMES} improve upon FSMs by allowing the reuse of state-defined functionality across multiple hierarchical levels of the state machine. HFSMs still have the problem that a transition from one state to another has to be defined explicitly, which makes the addition of new actions unnecessarily complex.

\section{Conclusion and future work}
We were able to create a believable agent simulation in the time-frame we were given. 
Many improvements are still possible.
Our world-state tracking does not yet take the execution time of actions into consideration. If we tracked those timings more precisely, we could make the AI prefer actions which complete quickly, so that other actions can still be executed before their window of opportunity closes.
The decision process to choose the next most suitable building to go to might be improved by taking the next destination of the agent into consideration to minimize walking distances. 

Currently we have only implemented a subset of possible actions. Adding more actions will make the simulation more believable. For instance, the AI always eats at home, but it could also eat at a restaurant or a diner. If we added more of these actions, the daily routine of the agent would become more varied and interesting.

\bibliographystyle{plain}
\bibliography{content/bibliography}

\section*{Appendix}
\appendix

\section{Changes to the simulation world}
We implemented a number of changes to the simulation world. We added the ability to switch the city layout. This allowed us to experiment with different layouts for the city. In the new city layout the city is a lot larger and spans $13\times13$ tiles. 

The following new buildings have been added to the city as well.
\begin{itemize}
 \item NHTV
 \item Hospital
 \item Pub
 \item Small park
\end{itemize}

\label{appendix_city_layout}

\end{document}
