\chapter{Agent control}
\label{chap:agentControl}

\emph{Agents} are autonomous entities acting in the game environment. Their decisions are determined by their perception of the environment, and by their decision making mechanism.

The definition of an \emph{ideal rational agent} by Russell and Norvig~\cite{2003aima} states:

\begin{longtable}{|p{0.9\textwidth}}
``For each possible percept sequence, an \emph{ideal rational agent} should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.''
\end{longtable}

In our case, we defined the \emph{performance measure} in the traditional way: the best plan is the one containing the fewest actions (for more about the performance measure and results, see subsection~\ref{subs:suboptimalPlanning}).

Agents can be classified into three categories based on their decision making behavior: \emph{reactive agents}, \emph{deliberative agents}, and \emph{hybrid agents}. A reactive agent is the simplest type: it has no foresight, and in fact not even memory or an explicit goal set is strictly necessary. It processes environmental input from its sensors and produces a direct effect in reaction. Deliberative agents, on the other hand, necessarily require their own internal world state representation and clearly defined goals to achieve. They use this knowledge to construct multiple step plans to fulfill their goals. A hybrid agent is a mixture of the previous two types: it follows its own action sequence, but directly reacts to some external events without deliberation.

As will be seen in section~\ref{sec:whenTochangePlans}, our agents are hybrid ones, primarily working as deliberative agents, with occasional direct reactions for the sake of playability.

\begin{figure}[H]
	\begin{center}
	\includegraphics[width=100mm]{../img/agentArchitecture.eps}
	\end{center}	
	\caption{Agent model used in our program}
	\label{fig:agentArchitecture}
\end{figure}

\clearpage

Our agents lack learning. Their decision making is limited to the hard coded set of possible actions; but the combination of these effectors is limited only by the capabilities of the planning system used and the available knowledge of the environment.

In the following sections we detail the workings of our agent model, displayed in figure~\ref{fig:agentArchitecture}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Goals}

Our agents have primary and secondary desires.

The primary goal is to visit any active vending machine as soon as they see one.

The secondary goal sets were implemented in a very straightforward way; they are virtually the only difference between our two agent types (the burglar and the guards). The burglar's goal is to gather a selected item (a treasure) and then return to a predetermined room. The guards have a set of places to oversee; their goal is to visit each of these rooms once, then start the patrol again.

It is worth noting that in the final version guards have no desire to catch or follow the burglar. Interaction between agents is managed completely by the game environment.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Knowledge Base}

``A system is autonomous to the extent that its behavior is determined by its own experience.''~\cite{2003aima} Our agents are not completely autonomous. They start each level with some predetermined knowledge of the environment, defined in the map file. The flaws in these beliefs guarantee that the burglar will need the player's assistance to complete the level. However, as they explore the game world, the agents extend and update their initial beliefs and become more and more autonomous.

In our implementation the \emph{knowledge base} contains information about the locations of other known agents and objects. It also records which entity holds each item the agent has seen so far.

The known objects are grouped into two categories: the ones \emph{examined closely} and the ones \emph{seen from a distance} (explained in section~\ref{sec:sensors}). The importance of this distinction is that the agent can thereby judge how credible certain details of its beliefs are.
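A knowledge-base record of this kind could be represented along these lines. This is a minimal sketch under our own naming assumptions (the class, field, and enum names are illustrative, not the actual implementation):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Credibility(Enum):
    EXAMINED_CLOSELY = "examined closely"        # every detail known to be correct
    SEEN_FROM_DISTANCE = "seen from a distance"  # details may be incomplete

@dataclass
class BeliefEntry:
    """One knowledge-base record about an entity the agent has seen."""
    entity_id: str
    location: str                   # room where the entity was last seen
    holder: Optional[str] = None    # agent or container believed to hold it
    credibility: Credibility = Credibility.SEEN_FROM_DISTANCE

# Example: a key glimpsed in the hands of a guard across the room.
key = BeliefEntry("key_1", "room_3", holder="guard_2")
```

Tagging each entry with its credibility lets the agent later decide which premises of a plan are trustworthy and which may need re-examination.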

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Self awareness}

Our agents have very little self awareness. Their knowledge is limited to their position in the world layout and the contents of their inventory. In our kind of game this is perfectly sufficient, and if necessary, it could easily be extended.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Sensors}
\label{sec:sensors}

Our agents have two ways of perceiving their environment. The senses involuntarily update their knowledge base in each game turn (see subsection~\ref{subs:gameTurn}). One perception, which we shall call \emph{close examination}, grants the agent complete and correct knowledge of every detail of the object it is currently operating with, while the other, which we shall call \emph{looking around}, lets the agent notice an incomplete set of details of the surrounding objects. An object has to be in the same room as the agent to be perceived; there is no way for an agent to see anything beyond the borders of the currently occupied room.
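One sensing pass could be sketched as follows. The set of details visible from a distance and all names here are our illustration, not the actual code:

```python
# Assumed subset of object details that "looking around" can reveal.
VISIBLE_FROM_DISTANCE = {"type", "position", "open"}

def perceive(current_target, room_objects):
    """One involuntary sensing pass. `room_objects` maps object names to
    dicts of details; the object the agent currently operates with is
    examined closely (all details), everything else in the room is only
    seen from a distance (a partial view)."""
    beliefs = {}
    for name, details in room_objects.items():
        if name == current_target:
            beliefs[name] = (dict(details), "examined closely")
        else:
            partial = {k: v for k, v in details.items()
                       if k in VISIBLE_FROM_DISTANCE}
            beliefs[name] = (partial, "seen from a distance")
    return beliefs

room = {"chest_1": {"type": "container", "position": (2, 3),
                    "open": False, "locked": True}}
# operating with the chest reveals "locked"; merely looking around would not
```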

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Effectors}
\label{sec:effectors}

Interaction with the agent's environment consists of one of the following actions:
\begin{itemize}
  \item \emph{enter} -- enters a door leading into another room
  \item \emph{approach} -- moves near to a selected object, or agent
  \item \emph{lock} -- locks a door, or a container with a key
  \item \emph{unlock} -- unlocks a door, or a container with a key
  \item \emph{open} -- opens a door, or a container
  \item \emph{close} -- closes a door, or a container
  \item \emph{pick up} -- picks up an item from an opened container, or from another agent
  \item \emph{use} -- uses a vending machine
\end{itemize}
In each game turn exactly one of these actions is performed, with the exception of approach. That is always broken into sub-actions before execution based on the layout of the current room. This means that the rooms are divided into game positions and the agents move between such neighboring positions (see section~\ref{subs:gameTurn}).

Actions directly connected to an object can be completed successfully only if the agent is standing next to or on the position that the particular object occupies.

It must be noted that there is no explicit examination action. Sensors are invoked automatically after each step of the agent.
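The per-turn execution described above can be sketched as follows: exactly one atomic action per game turn, with \emph{approach} first expanded into single-turn moves between neighboring positions. Function and action names are illustrative assumptions, and `find_path` stands in for the underlying room-layout pathfinder:

```python
# Actions that take exactly one game turn as-is.
ATOMIC = {"enter", "lock", "unlock", "open", "close", "pick up", "use"}

def expand(action, find_path):
    """Turn one plan step into the per-turn steps the game executes.
    `find_path` returns the sequence of neighboring game positions
    leading to the target (e.g. produced by A* on the room layout)."""
    verb, target = action
    if verb == "approach":
        return [("move", pos) for pos in find_path(target)]
    assert verb in ATOMIC
    return [action]

# e.g. a path finder returning three intermediate positions yields
# three single-turn moves instead of one multi-turn action:
steps = expand(("approach", "door_1"), lambda t: ["p1", "p2", "p3"])
```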

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Environment}

In this section we classify our environment from the agent's perspective by the world definition found in~\cite{2003aima}.

To our agents our game world is \emph{inaccessible}, meaning their sensors do not provide full access to all the relevant data at any given moment. The agents need an internal representation of it, and memory to remember previously accessed details. The environment is also \emph{nondeterministic} and \emph{dynamic}; in other words, actions may fail and the world state may change without the contribution of the observing agent, for example by acts of other agents or of the player. Finally, the world is \emph{discrete}, which means that the game is broken into turns and each agent has a fixed set of possible actions in each of these turns until the level is won or lost.

As opposed to our program, most other games provide implicit layout knowledge to their agents. In our case the agents start out with the basic knowledge that some rooms make up the world, but they have no information about their positions or contents. They start with a belief just large enough to find some path (in the burglar's case a flawed one) to their intended destination, and the rest may be revealed only through exploration.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Decision making}
\label{sec:decisionMaking}

As we mentioned before, we set out to write a game with extensive use of planners. This choice determined the type of decision making process we ended up using in our program. Planning systems are capable of creating seemingly rational sequences of actions without the developer ever considering them. On the other hand, on-line use of planning during the actual gameplay is limited by real-time requirements (to read more about this problem and our proposed solutions see section~\ref{sec:timeRequirements}).

In the rest of this chapter we give detailed descriptions of the problems we faced while trying to match the needs of our agents with the problem solving capabilities our planners provide.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Translation to planning problems}

There are some differences between the game world perceived by the agent and the one forwarded to the planner. Figuring out this translation was a major challenge in the development process. Providing more data than absolutely required floods the problem file with irrelevant information that slows down the planning systems and reveals their shortcomings (see sections~\ref{sec:timeRequirements} and \ref{subs:suboptimalPlanning}).

In order to avoid or at least postpone these effects we made three major simplifications. 

We decided to plan on the level of significant positions (doors, containers, \dots); this means that the planner does not perceive distances as they are in the real world and may generate seemingly illogical behavior (see section~\ref{subs:planningGranularity}).

A similar simplification is connected to the agent's senses. We decided to omit viewing angles and viewing distances, and to limit the agent's sensors to the borders of the room it occupies.

To avoid the task of planning in an environment that supports durative actions we made another simplification: the planner sees the world as a set of static entities. If the agent finds an inconsistency between its beliefs and the nondeterministic world, it simply replans as described in section~\ref{sec:whenTochangePlans}. We favored this method because each of the planning problems is relatively simple and relatively fast to resolve; in addition, in our type of program agents are expected to frequently reconsider their plans, so the quality of the resulting action sequences is still acceptable.

\subsection{Planning for multiple agents}

At the beginning of our experiments with behavior generation it seemed to be a great idea to let the planner create action sequences for multiple agents in a single run. In the gameplay each agent is a different resource acting simultaneously, and we also needed to plan for agents whose goals are in conflict with each other. However, the classic planning systems could not cope with these requirements.

The planner was requested to create action sequences for our agents with plans meeting in a selected room, but our trials resulted in unconvincing behavior. With different strategies we received different, but equally flawed action sequences: for example a burglar patiently waiting for a guard in order to get caught, a guard who appears in the game area with perfect timing to surprise the burglar, or a burglar who for seemingly no reason turns around to walk to the nearest guard.

From these experiments we arrived at the conclusion that at least the two opposing agent types must be planned for separately. That way we can generate a seemingly rational action sequence for the burglar, and in a separate run we can plan for the guards with the knowledge of the burglar's course.

For the sake of simplicity, in the final version we completely abandoned the idea of simultaneous planning, and we plan for a single agent at a time. Cooperative guard planning may be an interesting future improvement to the program, but that would take us to the field of multi-agent systems.

\subsection{When to change plans}
\label{sec:whenTochangePlans}

The question of when the agents should change their plans was crucial in the development. There are two causes that may trigger a planning event: the agent's primary and secondary goals.

The primary goal is triggered when the agent discovers an active vending machine. This event forces it to immediately approach the object and deactivate it. In order to ensure this behavior we had to invalidate the agent's previous plan and generate a new one containing only the primary goal. Simply extending the original goal set frequently produces unacceptable behavior, such as the agent leaving the room and only later returning to the vending machine. After the primary desire is satisfied, agents return to the secondary ones.

While striving to reach its secondary goals the agent may choose to change plans in two situations: \emph{when it failed}, or \emph{when there is a better solution available}.

\emph{When the plan failed}: First we define a \emph{plan failure} as an instruction in the agent's action sequence that the agent is unable to execute (the executing function returns failure). It happens when the expected results of the action differ from the actual results. For example, opening an already opened door would not be a plan failure, while opening a locked one would. Presuming an optimal action sequence, such a failure renders continuing plan execution impossible -- all following actions would return failure. In practice our plans might be suboptimal and the failed action might have been unnecessary, but we have no simple way to tell whether that is so. In these cases we have no other choice but to terminate the old plan and generate a new one.

\emph{When there is a better solution available}: Such a situation may occur for three reasons: the original plan was not optimal, the agent gathered some additional knowledge, or the world changed in a positive manner.

Finding a shorter solution could be achieved by repeating the planning process with an additional parameter requiring the requested plan to be shorter than the original one. Unfortunately, not all of our planning systems support such parameters, so we did not implement this behavior.

From the agent's point of view, gathering additional data and a favorable external change in the world state are the same: both manifest as an update in the belief base. Comparing new information with the premises of our existing plan may reveal shortcuts; for example, finding an unlocked door might mean that we can skip the plan section that goes to retrieve the key that would unlock it. On the other hand, there may be shortcuts that can't be identified without replanning. One such situation is finding an opened door leading the agent directly to a room that it expected to reach by crossing several others. In this case there is no part of the original action sequence that we can simply skip; we need completely new actions -- namely approaching the door and entering it -- to replace the walk-around section.
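The simple kind of shortcut check just described -- dropping plan steps whose intended effect already holds -- could be sketched as follows. The predicate names are our own assumptions; skipping a whole detour (e.g. the trip that fetches the key) would additionally require dependency tracking, and shortcuts that need new actions still require full replanning:

```python
def prune_redundant(plan, beliefs):
    """Drop plan steps whose intended effect already holds according to the
    updated beliefs. `beliefs` maps (fact, target) pairs to booleans."""
    effects = {"open": "opened", "unlock": "unlocked",
               "close": "closed", "lock": "locked"}
    return [(verb, target) for verb, target in plan
            if not (verb in effects and beliefs.get((effects[verb], target)))]

plan = [("approach", "door_2"), ("unlock", "door_2"), ("open", "door_2")]
# the agent has since seen that door_2 is already unlocked:
pruned = prune_redundant(plan, {("unlocked", "door_2"): True})
# the now-redundant unlock step disappears; approach and open remain
```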

Our agents implement two of the above mentioned behaviors. They update their plans when failing to execute an action, and they are capable of replanning when new information becomes available to them. For reasons explained in section~\ref{sec:replanningAndLevelCreation} their default behavior is to follow their original plan until a failure occurs. On activating the alternative replanning rules, see section~\ref{sec:runningTheGame}.

\subsection{Plan execution and replanning}

An agent simply follows the actions of the computed plan and checks in every game turn whether it should generate a new one. The method used to decide whether the agent has to replan is described in Algorithm~\ref{agentDMS}.

\begin{algorithm}
  \caption{One game turn of a single agent}
  \label{agentDMS}
  \begin{algorithmic}[1]
  
    \REQUIRE $P$ --- plan that is being executed
    \REQUIRE $S_{agent}$ --- agent's prior knowledge of the level
    \REQUIRE $replanOnNewKnowledge$ --- flag to replan when receiving new information
    \ENSURE the agent executes a step and replans if necessary
    
    \STATE $actionResult \leftarrow$ executeNextActionFromPlan($P$)
    
    \STATE $S_{agent}' \leftarrow S_{agent}$  
    
    \STATE $S_{agent} \leftarrow$ updateBelief($S_{agent}$)
    
    \STATE $activatePrimaryGoal \leftarrow$ seesNewVendingMachine($S_{agent}$)
    
    \IF {$activatePrimaryGoal$}
      \STATE $P \leftarrow$ replanPrimaryGoal($S_{agent}$)
      \RETURN
    \ENDIF
    
    \IF {$actionResult = \mathrm{failed} \vee (replanOnNewKnowledge \wedge S_{agent}' \subset S_{agent})$}
      \STATE $P \leftarrow$ replanSecondaryGoal($S_{agent}$)
    \ENDIF
    
  \end{algorithmic}
\end{algorithm}


In Algorithm~\ref{agentDMS} the agent first attempts to execute an instruction from its list. If the plan is empty or the action fails, the agent will have to generate a new plan. After each executed step the agent explores the surrounding environment, and if it spots an active vending machine, it drops the action sequence and replans with the primary goal. Finally, if the user requires it, the agent replans upon finding new information.
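The same turn logic can be rendered as executable pseudocode. Every method called on `agent` below is an assumed helper mirroring a line of Algorithm~\ref{agentDMS}, not the actual interface of our implementation:

```python
def game_turn(agent, replan_on_new_knowledge):
    """One game turn of a single agent, mirroring Algorithm 1."""
    action_result = agent.execute_next_action()   # one plan step per turn
    old_beliefs = set(agent.beliefs)              # S'_agent <- S_agent
    agent.update_beliefs()                        # sensors fire involuntarily

    if agent.sees_new_vending_machine():          # primary goal fires
        agent.plan = agent.replan("primary")      # drop the old plan entirely
        return

    gained = old_beliefs < set(agent.beliefs)     # strict subset: new facts
    if action_result == "failed" or (replan_on_new_knowledge and gained):
        agent.plan = agent.replan("secondary")
```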

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Time requirements}
\label{sec:timeRequirements}

The length of time required to determine the next action is an important factor while selecting a controlling mechanism for the agents. What's probably even more important is the consistency of the time requirements.

From this point of view planning is unreliable. In our case an average planning run requires about 150~ms, but if the task is particularly difficult, planning continues well beyond that period. Sometimes it takes so long that for the sake of the player the planning process has to be canceled by the main program (we chose this terminating limit to be 8~s). In these worst case scenarios, where we have to cut the planner short, we do not receive any action sequence to guide the agent. We cannot even tell the difference between a problem that has no solution and a problem whose solution requires more time to be calculated. In these situations we presume that no solution exists and the agent gives up trying. If it is a guard, it becomes immobile for the rest of the level; if it is the burglar, the level is declared to be lost.
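Such a hard cut-off around an external planner process could be implemented along these lines. The command line shown is hypothetical (our actual planner invocations differ); the point is that a timeout and an unsolvable problem are indistinguishable from the outside:

```python
import subprocess

def run_planner(cmd, timeout_s=8.0):
    """Run an external planner with a hard cut-off. On timeout we cannot
    tell an unsolvable problem from a slow one, so both cases yield None
    and the agent gives up. `cmd` is the planner command line, e.g.
    ["planner", domain_file, problem_file] (hypothetical)."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=timeout_s)
    except subprocess.TimeoutExpired:
        return None                      # presume no solution exists
    return result.stdout                 # raw plan text, parsed elsewhere
```

A fast process returns its output; a slow one is cut short and treated as if the problem had no solution.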

On the other hand, planning is not a regularly repeating task, and each successful planning run produces a full sequence of actions to the end of the level. If nothing goes wrong in the execution and no new information is received, there is no further need to replan, only to execute the produced sequence.

In the rest of this section we present four methods we considered for coping with the time requirements of planning. All the described approaches are intended to deal with the replanning that occurs at plan failures.

Our idea for hiding, or at least shortening, the waiting period was to complete the replanning while the game was still executing the previous action sequence, up until the failure point (where the agent will need the new plan).

As a heuristic solution, it would be possible to compare the agent's freshly constructed plan (based on its current beliefs) to the current world state and find the first failure point in it. With that knowledge we could initiate an early replanning process, so when command execution finally reaches the failure, we would simply have to insert the new action sequence.
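The heuristic check just described could be sketched as a forward simulation of the plan against the current world state. The `preconditions` and `effects` helpers and the toy domain are our assumptions; delete effects are ignored, which is one more reason this check is only heuristic:

```python
def first_failure_point(plan, world, preconditions, effects):
    """Simulate the remaining plan against the *current* world state and
    return the index of the first action whose preconditions no longer
    hold, or None if the whole plan still runs through. `preconditions`
    and `effects` map an action to sets of facts."""
    state = set(world)
    for i, action in enumerate(plan):
        if not preconditions(action) <= state:
            return i                      # earliest point needing a new plan
        state |= effects(action)          # optimistic: no delete effects
    return None

# hypothetical toy domain: unlock a door, open it, enter
plan = [("unlock", "d"), ("open", "d"), ("enter", "d")]
pre = {("unlock", "d"): {"has key"}, ("open", "d"): {"unlocked d"},
       ("enter", "d"): {"open d"}}.get
eff = {("unlock", "d"): {"unlocked d"}, ("open", "d"): {"open d"},
       ("enter", "d"): set()}.get
# with the key the plan runs through; without it, it breaks at step 0
```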

In practice there are two problems with this method that impede its usability.

First, the agents continuously gather information throughout their journey. Premature planning might fail to take into consideration important data that the agent has not yet gathered.

Second, the environment changes dynamically; other agents and the player himself are actively updating the world state. In the most extreme cases even the not yet executed actions of the current agent may trigger replanning in other game characters, whose new action series will in turn affect the world state, and through that our agent's beliefs. This dynamic behavior may, and often does, postpone or expedite the failure in our agent's plan, or changes the action sequence the planner would have generated with the updated knowledge.

A more dependable solution would be to update our early plans as the environment changes. A relatively simple approach would be to apply the above described checking algorithm on each agent after every relevant change in the game world. We define a \emph{relevant change} as one where an operable object updates some property or an agent switches rooms. The weakness of this method is the relatively high frequency of such changes; for example, a given agent moves to a different room roughly every 5~s on an average level, and with more agents this frequency is even higher.

An improvement on the previous solutions would be to execute a whole world simulation with all the agents in the background. Using this method we could eliminate all the nondeterministic factors of the environment (changes caused by other agents), except for the player himself. We would have to repeat the simulation process only on the occasions when the player executes an action.

While implementing level generation features (see subsection~\ref{subs:leadingIntoTrap} in chapter~\ref{chap:creatingGameLevels}) we did implement such a background simulation, but we ended up abandoning it. It proved to be too expensive an overhead in our architecture. The root of the problem was that while a single replanning operation for a single agent needs to construct a PDDL file, run the external planner, and interpret its result once, a full-blown forward simulation contains multiple replannings for each of the agents on the map.

In general, when our planning pauses take about 150~ms they easily fit into our game turn refresh rate, so trying to hide them is unnecessary; the waiting is imperceptible anyway. On relatively complex levels (for example tutorial05) waiting for a new plan may be unpleasant. In the worst cases, where a single planning run approaches 8~s, it is questionable whether we are able to calculate far enough ahead in an environment where the primary source of replanning is the player himself.

With a world representation optimized to support such simulations, the last method would probably provide the most seamless planner integration into a real time game environment. Unfortunately, however, our game was not designed with that in mind.

In the current version, while the planner is working we simply pause the game, and no early planning is done. Waiting until the last possible moment with replanning has the additional benefit that we can be sure we have to plan only for a single agent, and that the knowledge base we are using is identical to the one the agent will have at the moment of failure.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Characteristics of our method}

\subsection{No planning in time}
\label{subs:planningInTime}

In our department there were experiments (for example~\cite{kucerova2010}) that included planning with durative events and timed literals. These requirements greatly burdened the planning process, and knowing that in our semi-realtime environment long waiting periods are not acceptable, we avoided any use of such planning structures.

\subsection{Suboptimal planning}
\label{subs:suboptimalPlanning}

As we mentioned before, we define the performance measure as the length of the action sequence.

By this measure the planners we are using do not produce optimal sequences. While the generated plan does solve the given problem, it may contain irrelevant actions and/or inefficient orderings of relevant actions. When solving increasingly complex problems, the algorithm-characteristic ``flaws'' of each produced plan also become more and more visible: a linear increase in the complexity of the \emph{world state} raises the number of possible solutions to explore exponentially. This is a fully anticipated experience; with improved planning algorithms and increasing processing power the ``flaws'' may become less severe, but without the guarantee of optimal planning they will never completely disappear.

On the other hand, if we define the \emph{performance measure} in a way that ``plans should be human-like'', these ``flaws'' could be called features. To the user it seems as if the agent were actively exploring its surroundings, even though no such thing is programmed into its behavior.

For example, in some levels agents open doors that they do not intend to go through, or in more alarming cases they even enter rooms just to return and continue in some other direction. In our rule set the first scenario carries no danger, as the guards cannot look through opened doors to discover the burglar; the second, however, may obviously result in a lost level. In practice the burglar has never entered a trap room in this manner, not even on complex levels; nevertheless the possibility is there and the players should be aware of it.

As future work, specific cases like the burglar's ``explorations'' may be eliminated by iterating through the planner output looking for loops in the agent's path where no ``pick up'' action occurs inside the loop.
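Such a post-processing pass could be sketched as follows. This is only an illustration of the idea, not implemented in our program; it assumes each ``enter'' step is annotated with the room it leads into:

```python
def strip_idle_loops(actions):
    """Cut out plan sections that re-enter a previously visited room with
    no "pick up" in between -- the pointless "exploration" loops."""
    rooms_seen = {}       # room -> index in `pruned` where we entered it
    pruned = []
    for verb, arg in actions:
        pruned.append((verb, arg))
        if verb == "pick up":
            rooms_seen.clear()            # the detour did useful work: keep it
        elif verb == "enter":
            if arg in rooms_seen:
                cut = rooms_seen[arg] + 1
                pruned = pruned[:cut]     # drop the idle loop we just closed
                # forget rooms whose entries were removed by the cut
                rooms_seen = {r: i for r, i in rooms_seen.items() if i < cut}
            else:
                rooms_seen[arg] = len(pruned) - 1
    return pruned
```

A loop through room B that returns to room A without picking anything up is removed, while a detour containing a ``pick up'' is left intact.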

\subsection{Planning granularity}
\label{subs:planningGranularity}

In our program we used single level planning that directly operates with objects, with underlying A* pathfinding between those objects. To keep the problem files relatively small we had to allow some imperfections in the agents' movement behavior.

The current implementation has two undesirable features: one on \emph{object level}, and another on \emph{room level}.

\emph{On object level}: To reduce the number of predicates in the planning problems we do not require the planner to generate exact paths between objects in the same room. We have a single action --- approach --- to reach any position. The planner has no knowledge of the room layout, so it generates the visiting order at random.

For this reason the players might sometimes notice that an agent that needs to visit multiple objects in the same room chooses an obviously inefficient visiting order. Instead of always moving to the closest one, it moves back and forth between several locations.

By separating the planning process into \emph{map level} and \emph{room level} planning we probably could have eliminated this behavior, but its occurrence is rare.

We have a seemingly similar problem with the sizes of rooms. The planner does not know their dimensions, and it always considers a path through a single room to be superior to one through multiple rooms, even if the actual path is longer. This issue could only be addressed by extending the planner domain or by breaking the rooms into smaller sections, but as visible from the example of the map-chess-board level, that would greatly affect the planner performance.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

