%File: formatting-instruction.tex
\documentclass[letterpaper]{article}
\usepackage{aaai}
\usepackage{times}
\usepackage{helvet}
\usepackage{courier}
\usepackage{amssymb}

\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{mdwlist}
\usepackage{paralist}

\frenchspacing

\pdfinfo{
/Title (Planning is the Game: Action Planning as a Design Tool and Game Mechanism)

/Author (Rudolf Kadlec, Csaba Toth, Martin Cerny, Roman Bartak, Cyril Brom)
/Subject (Proceedings of the Eighth Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment)
/Keywords (planning, level design, game concept)}
\setcounter{secnumdepth}{0}  

% The file aaai.sty is the style file for AAAI Press 
% proceedings, working notes, and technical reports.
%
\title{Planning is the Game: Action Planning as a Design Tool and Game Mechanism}

\author{Rudolf Kadlec \and Csaba T\'{o}th \and Martin \v{C}ern\'{y} \and  Roman Bart\'{a}k \and Cyril Brom\\
Charles University in Prague, Faculty of Mathematics and Physics\\
Malostranske nam. 25, Praha 1, 118 00, Czech Republic\\
\{rudolf.kadlec, toth.csaba.5000, cerny.m\}@gmail.com, bartak@ktiml.mff.cuni.cz, brom@ksvi.mff.cuni.cz\\
}

\begin{document}
\maketitle
\begin{abstract}
\begin{quote}


Recent developments in game AI have seen action planning and its derivatives being adapted for controlling agents in classical types of games, such as FPSs or RPGs.
As a complement, one can seek new types of gameplay elements inspired by planning. We propose and formally define a new game ``genre'' called anticipation games and demonstrate that planning can be used as their key concept both at design time and at run time. In an anticipation game, a human player observes one or more computer-controlled agents, tries to predict their actions, and indirectly helps them achieve their goal.
The paper describes an example prototype of an anticipation game we developed. The player helps a burglar steal an artifact from a museum guarded by guard agents. The burglar has incomplete knowledge of the environment and his plan will contain pitfalls.
The player has to identify these pitfalls by observing the burglar's behavior and change the environment so that the burglar re-plans and avoids the pitfalls. The game prototype is evaluated in a small-scale human-subject study, which suggests that the anticipation game concept is promising.

\end{quote}
\end{abstract}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Introduction}

%Our game uses planning in both level design and gameplay.

Planning technologies have attracted a lot of attention in the academic community and, more recently, also among game developers.
So far, research has focused mainly on adapting planning techniques to the needs of current game genres, be it at design time or during gameplay. The aim of the game industry and of many researchers is to replace or enrich traditional approaches based on scripting and reactive planning.
Besides tailoring planning for current games, we can attempt to create brand new types of gameplay elements that revolve around action planning and would be impossible with reactive decision making alone.

Interactive Storytelling (IS) may be considered a step in this direction. In IS, planning is often used as a technology that enables creating or maintaining a coherent story plot. However, the existence of a plan remains hidden from the user. In contrast to this approach, we propose a new type of game where the fact that we have a complete plan of the agent's actions plays a key role. We call these games \textit{anticipation games}. Imagine you play a game where the main agent has a mission that he must accomplish. He creates a plan for this mission, but due to incomplete knowledge of the environment there are some \textit{pitfalls} in his plan. The human player has more complete knowledge of the environment, thus when observing the execution of the agent's plan, she can anticipate these pitfalls (in the following text, agents will be referred to in the masculine, the player in the feminine). Once she identifies a pitfall, she can modify the environment so that the agent has to re-plan and the new plan avoids this pitfall. The player influences the agent only indirectly through changes of the environment.

 

Besides using planning as the key element of the gameplay, we also use it to assist game designers in assuring sufficient complexity of a level. In particular, we check several properties of a game level with the same planning algorithm that is employed to plan the agent's actions. Briefly speaking, the goal is to find a game level where the initial agent's plan contains a given number of pitfalls and these pitfalls can be avoided by the player's actions that force the agent to re-plan.

The goal of this paper is to formally define anticipation games, exemplify them on our implemented prototype, and describe the algorithms used at design time and run time.

The rest of the paper is organized as follows. In the next section we detail previous work related to the use of both planning and anticipation in games. Then, for explanatory purposes, we describe the gameplay of our burglar game. Next, we formally define the class of anticipation games and describe our game from a technical perspective as an instance of an anticipation game. Finally, we present a small evaluation of the concept of anticipation games, where we use the burglar game as an instrument.

\section{Related Work}

Considering planning in the context of computer games, a huge body of work focuses on the use of planners in the design and verification of levels, e.g.~\cite{pizzi2008automatic,li2010offline,porteous2009controlling}. We consider this an off-line use of planners.

On-line use of planning during actual gameplay is severely limited by the available computing time. With respect to commercial games, the most successful algorithm is GOAP~\cite{orkin2003applying,orkin2006three}, a simplified STRIPS-like planner~\cite{fikes1972strips}. So far 14 game titles have used this planner\footnote{For the list of commercial games using GOAP see http://web.media.mit.edu/$\sim$jorkin/goap.html, 23.11.2011}.
Custom HTN planners have also been used in commercial games~\cite{champandard2009killzone}.
Aside from planners created specifically for games, there have also been attempts to use off-the-shelf PDDL-compatible planners~\cite{bartheye2009real} or HTN planners~\cite{munoz2006coordinating} in games directly. PDDL is a modeling language used to describe planning domains in the International Planning Competition (IPC) and it has become a de-facto standard input language supported by many planners.

In line with IPC challenges, benchmark tasks motivated by the needs of FPS games have been proposed recently~\cite{vassos2011simplefps}.




The main differences of our game prototype compared to the above-mentioned systems are that 1) we use planning in \textit{both} level design and gameplay and 2) planning is not merely a decision-making algorithm hidden from the player; it forms a key part of the gameplay.

A different body of work related to planning in games comes from the IS field. Although the game prototype we propose is not directly related to IS, it may be extended with IS techniques. These possibilities, along with relevant references, will be discussed in the Future Work section.

The gameplay element of anticipating autonomous agents' actions and indirectly influencing them is found in many games. In From Dust~\cite{website:fromdust}, one of the game elements is that the player sees the planned movement of members of his tribe and alters the landscape so that they may safely travel to their destination. In Frozen Synapse~\cite{website:frozensynapse} the player has to anticipate enemy movement and create plans that are then resolved simultaneously with the enemy's plans. Quite a few other strategy games employ anticipation and indirect commands at various levels, for example Black And White~\cite{website:blackandwhite} or The Settlers~\cite{website:settlers2}.

A very different kind of anticipation is present in The Incredible Machine~\cite{website:tim} and its sequels. The player tries to alter a given setup of devices, items and animals so that upon simulating the system according to modified laws of physics it evolves to a specified goal condition (e.g., a ball reaches a designated place).

In all of the above-mentioned games, the player anticipates either a very simple behaviour or the actions of another player, which are, on the other hand, very hard to guess. Incorporating planning allows the agents in our game to exhibit more complicated behaviour which may be more fun and more challenging to foresee; thus the anticipation gameplay element can be brought to a new level and play a more central role.



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{figure*}[ht]
	\centerline{\includegraphics[width=1.6\columnwidth]{media/sc_34_numbered.png}}
\small{
	\caption{
	\textit{
		Locking a door causes the choice of another path, seen from the burglar's perspective.
		Visible objects are:
           	\begin{inparaenum}[\itshape 1\upshape)]
			\item {the \textit{burglar},}
			\item {a \textit{guard},}
			\item {a closed \textit{container},}
			\item {a closed \textit{door}, marked with a darker colour to symbolize that it is not fully consistent with the burglar's belief base,}
			\item {the level \textit{entrance},}
			\item {a \textit{camera}, marked with a deep dark colour to symbolize that it is completely unknown to the burglar,}
			\item {an \textit{intent line} showing the future path of the burglar in the game area, with arrows marking his direction,}
			\item {a \textit{container} holding the \textit{artifact}, with a textual description of what the burglar intends to do with it.}
		\end{inparaenum}
}
	}
	\label{fig:gameScreenshot34}
}
\end{figure*}

\section{Burglar Game Description}
For explanatory purposes, we will detail the mechanics of our game prototype first. Anticipation games in general will be defined later.

The overall situation in all game levels is that a \textit{burglar} -- a computer controlled agent -- tries to steal a valuable artifact from a secured museum. Apart from the burglar, there is one more class of active agents in the level, the \textit{guards}. The game world consists of interconnected \textit{rooms} of different sizes and shapes. 
The rooms can contain the following objects: \textit{cameras}; \textit{containers}, which can hold \textit{keys}; the \textit{artifact}, which is the target for the burglar; \textit{sleeping guards}, who can be tied down so that the burglar can use their uniforms to sneak under the cameras; and finally \textit{doors} between rooms. If the burglar or a guard has a proper key, they can both lock and unlock a door or a container.

The burglar is caught if he gets into the same room as a patrolling guard or an active camera. In the rest of the paper such places will be called \textit{trap rooms}. The complication is that the burglar knows only some of these dangers.

The human \textit{player} observes the game world from a bird's eye view. The player's goal is to change small details of the environment, such as locking doors and containers or even disabling cameras, to prevent the burglar from getting caught on his mission. The player wins when the burglar runs away with the artifact; she loses if the burglar gets caught or has no valid plan to reach his goal. The levels are designed in such a way that without the player's help the burglar will surely be caught.

While the player can alter the state of any object in the game, each interaction costs her a price in action points, the expenditure of which she should keep minimal. The player can also spend action points to take a look at the visualized plan of any agent's future actions; the more actions the player sees, the more she pays. It is important to highlight that though classical path planning is part of the problem, action planning is essential here, as the burglar needs to plan actions such as opening doors, stealing a uniform, etc.

At the beginning of the game the burglar is always positioned at the entrance to the game area, which is also the place to which he has to return.
Initially, the burglar knows the layout of the map, the exact location of the artifact and the positions of some traps. Based on this incomplete knowledge, the burglar makes a plan for stealing the artifact and then escaping from the museum. However, the design process guarantees that his plan will contain pitfalls. There will always be trap rooms on his way, and it is the player's task to make the burglar avoid them.
For instance, there are cameras in predefined rooms. When the player finds out that the burglar will enter a room with a camera, she can lock a door on the burglar's path so that he has to change his route and misses the trap room.

The burglar and the guards have their own belief base about the environment, which they use to plan future actions. Their knowledge may of course be wrong.
The belief base is updated whenever the agent finds an inconsistency with the world state. This may trigger re-planning and thus cause a change of the actual plan. Throughout the game, it is up to the player to predict the actions of the agents. However, if she decides to spend action points to take a look at the visualized plan of an agent, the view also highlights where the belief base of the agent differs from the real world state.

Figure \ref{fig:gameScreenshot34} shows two game situations. In the left image, the player locks a door on the burglar's path to prevent him from encountering the guard. When the burglar discovers that the door is closed, he re-plans. The right image captures this situation, and also points out that there is another locked door on the burglar's new path, of which he is not yet aware.

The difficulty of a level is given by the number of places where the player has to assist the burglar. The harder the level, the more trap rooms are placed in the world; on the other hand, there has to be some sequence of actions (including the player's actions) leading to a successful end. Hence, when designing a game level, planning techniques are valuable for verifying these properties.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


\section{Anticipation Game Definition}
We will now abstract from our game and formally define a class of games that we call anticipation games. We will do so by defining the properties of a game level of such games.

An anticipation game is a tuple:
\begin{gather}
\langle S, S^0_{real}, A_{player}, Agents, mainAgent, prohibited(S)\rangle  \nonumber
\end{gather}

where $S$ is a set of possible world states, $S^0_{real} \in S$ denotes the real world state at the beginning of the level, $A_{player}$ is a set of actions available to the human player, $Agents$ is a set of agents in the level, and $mainAgent \in Agents$ is the agent the player should help; the others are background agents. The predicate $prohibited(S)$ defines which states of the world contain a pitfall and are thus prohibited to the main agent. Each agent is specified by a tuple $\langle S^0_{agent}, A_{agent}, goal(S) \rangle$ and its decision making system (Alg.~\ref{alg:anticipationGameAgentDMS}). $S^0_{agent} \in S$ is the initial world state as known by the agent, $A_{agent}$ is the set of his possible actions and $goal(S)$ is a predicate that defines the states of the world the agent is trying to achieve. An action is a partial function $a: S' \subset S \rightarrow S$. The action is applicable in every state $s \in S'$ and the corresponding function value is the new state after applying the action. In each cycle the agent executes Alg.~\ref{alg:anticipationGameAgentDMS}, that is, he executes one action from his plan $P$. $P$ is a sequence of actions $a_i \in A_{agent}$, that is, $P = a_1, a_2, \dots, a_n$. Then he observes the new state of the world and updates his belief base accordingly. Finally, he decides whether he should re-plan. We will specify the particular implementations of the $updateBeliefBase$ and $shouldReplan$ functions used in our game prototype later, as they are not needed in the description of a general anticipation game. Note that $updateBeliefBase$ also models the agent's perception.

\begin{algorithm}
\caption{One step of an agent's decision making}
\label{alg:anticipationGameAgentDMS}
\begin{algorithmic}[1]

\REQUIRE $P$ --- plan that is being executed
\REQUIRE $G$ --- goal pursued by the agent
\REQUIRE $S^t_{agent}$ --- agent's believed world state at time $t$

\STATE $action \leftarrow$ getNextActionFromPlan($P$)
\IF {$action \neq undefined$}
	\STATE $S^{t+1}_{real} \leftarrow$ action($S^{t}_{real}$)
\ENDIF
\STATE $S^{t+1}_{agent} \leftarrow$ updateBeliefBase($S^{t+1}_{real}, S^t_{agent}$)
\IF {$action = undefined$ \OR shouldReplan($S^{t+1}_{agent}, P$)}
	\STATE $P \leftarrow$ plan($S^{t+1}_{agent}, G$)
\ENDIF
\end{algorithmic}
\end{algorithm}

Not all details of the real world state $S^{t}_{real}$ in time $t$ have to be perceivable by the agent (e.g. objects in different rooms). His internal believed world state $S^{t}_{agent}$ does not have to correspond to the real world state. 
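As a concrete, if simplified, illustration, one step of the agent's decision making from Alg.~\ref{alg:anticipationGameAgentDMS} can be sketched in Python. All names here -- the planner and the $updateBeliefBase$ / $shouldReplan$ callbacks, as well as the state representation -- are illustrative assumptions, not the prototype's actual API:

```python
# Sketch of one agent decision step (one pass of the algorithm above);
# illustrative only, not the prototype's implementation.
def agent_step(plan, goal, s_agent, s_real,
               planner, update_belief_base, should_replan):
    """Execute the next action, observe the world, and possibly re-plan."""
    action = plan[0] if plan else None
    if action is not None:
        s_real = action(s_real)      # apply a : S' subset-of S -> S
        plan = plan[1:]
    # perception: merge what the agent can see into his belief base
    s_agent = update_belief_base(s_real, s_agent)
    if action is None or should_replan(s_agent, plan):
        plan = planner(s_agent, goal)  # re-plan from the *believed* state
    return plan, s_agent, s_real
```

For instance, with states as integers, full observability and a trivial planner producing one-step moves, repeatedly calling `agent_step` drives the real state to the goal.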



Each level of an anticipation game has to be constructed so that there will be some pitfalls in the agent's initial plan. At the same time, the player should have the possibility to choose some actions whose outcome will make the agent re-plan and pick a new plan. We formalize this requirement by the following formula:

\begin{gather}
\exists \bar{A}  \subseteq A_{player} \exists t: P = plan(S^{0}_{mainAgent}, G) \wedge \nonumber \\
 numFlawsInPlan(P, S^0_{mainAgent}) = 0~\wedge  \label{eq:believable} \\ 
numFlawsInPlan(P, S^0_{real}) = n \wedge n \ge 1~\wedge \label{eq:numFlaws} \\ 
 \bar{A} \oplus S^0_{real} = S^1_{real} \wedge S^0_{real} \neq S^1_{real}~\wedge \label{eq:userAct} \\
shouldReplan(S^{t}_{mainAgent}, P)~\wedge \label{eq:replan} \\ 
\forall t' < t: \neg shouldReplan(S^{t'}_{mainAgent}, P)~\wedge \label{eq:minReplan} \\
P' = plan(S^{t}_{mainAgent}, G)~\wedge \label{eq:newPlan} \\
% \wedge \nonumber \\
numFlawsInPlan(P', S^t_{mainAgent}) = 0~\wedge \nonumber \\
numFlawsInPlan(P', S^t_{real}) = 0 \label{eq:ok}
\end{gather}

where $numFlawsInPlan(P, S^t) = |\{t' \in \mathbb{N}, t' \ge t:$ execution of plan $P$ in state $S^t$ leads the agent into a state $S^{t'}$ such that $prohibited(S^{t'})\}|$. Thus it returns the number of pitfalls in plan $P$ when executed from the state $S^t$.

The formula requires that there is an initial world state $S^0_{real}$ and its modified version known to the agent, $S^0_{mainAgent}$, such that a plan $P$ chosen by the agent seems to solve the task given the agent's initial knowledge (Cond.~\ref{eq:believable}) but in reality contains $n \ge 1$ pitfalls (Cond.~\ref{eq:numFlaws}). Moreover, there must be a set of the user's actions $\bar{A}$ whose application to the initial state results in a new, different state (Cond.~\ref{eq:userAct}); the $\oplus$ operator is used to apply the effects of actions on a world state. There must also be some time $t$ when the agent first re-plans due to an inconsistency between $S^t_{mainAgent}$ and $S^t_{real}$ (Cond.~\ref{eq:replan} and~\ref{eq:minReplan}). Eventually, after re-planning at time $t$, the agent creates a new plan $P'$ (Cond.~\ref{eq:newPlan}) that is without pitfalls both in $S^t_{mainAgent}$ and $S^t_{real}$ (Cond.~\ref{eq:ok}). Thus $P'$ can be followed by the agent without the player's intervention and it will result in a successful completion of the level.
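The $numFlawsInPlan$ function can be read operationally: simulate the plan from a given state and count the visited states that satisfy $prohibited$. The following minimal Python sketch assumes actions are deterministic functions on states; the representation is an assumption of this sketch, not the prototype's code:

```python
# Count the pitfalls hit when plan P is executed from `state`;
# illustrative sketch of numFlawsInPlan.
def num_flaws_in_plan(plan, state, prohibited):
    flaws = 0
    for action in plan:
        state = action(state)     # deterministic action application
        if prohibited(state):     # the agent would be caught here
            flaws += 1
    return flaws
```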




Note that the main agent does not know the plans of the background agents.
This definition can also lead to creating levels where the initial burglar's plan contains $n$ pitfalls but the level can be solved with just one user action.
We would like to overcome these limitations in future work, but there is a simple case where no further requirements are needed: if the number of pitfalls that may be resolved by a single action is bounded by a constant $k$, the minimal number of user actions is at least $n/k$.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Anticipation Game Level Design}

With the anticipation game definition provided in the previous section we can create a generic level design algorithm.
Suppose that a designer specifies $S^0_{agent}$, the main agent's initial knowledge of the level. We then want to construct a corresponding world state with pitfalls, that is, $S^0_{real}$.
Alg.~\ref{alg:levelDesignBrute} shows a brute-force solution of the game level design problem. First, at Line~\ref{ali:place}, we create a plan solving the level. Then we iterate over all combinations of plan steps where we could possibly place a pitfall (Line~\ref{ali:iter}); the $possiblePitfalls$ function returns such steps. Next, we modify the level so that there really will be pitfalls when the agent gets to these steps of the plan. At Line~\ref{ali:newPlan} we tell the agent where the pitfalls are (the agent knows the real state of the world $S_{real}$) and ask him to make a plan avoiding these pitfalls (this requirement is contained in the goal $G$). If there is such a plan, the last step is finding the human player's actions that will force the agent to re-plan and pick the plan $P'$; this is done by the $userReplanActions$ function. Note that the $possiblePitfalls$, $placePitfalls$ and $userReplanActions$ procedures are game specific.


\algsetup{indent=2em}
\begin{algorithm}
\caption{Anticipation game level design}
\label{alg:levelDesignBrute}
\begin{algorithmic}[1]

\REQUIRE $S^0_{agent}$ --- initial world state specified by the designer
\REQUIRE $n$ --- number of required pitfalls in the level

\STATE $P \leftarrow$ plan($S^0_{agent}, G$)	\label{ali:place}

\FORALL{$pitfalls \subseteq $ possiblePitfalls($P$) $ \wedge |pitfalls| = n$}		\label{ali:iter}
	\STATE $S_{real}\leftarrow$ placePitfalls($S^0_{agent}, P, pitfalls$) 		
	\IF{$\exists P': P' =$ plan($S_{real}, G$)  
	\AND 
	$\exists \bar{A}: \bar{A}$ = userReplanActions($S_{real}, P, P'$)}					\label{ali:newPlan}
			\RETURN $\langle S_{real}, \bar{A}\rangle$
		\ENDIF
\ENDFOR
\RETURN $null$

\end{algorithmic}
\end{algorithm}
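Under the same assumptions as before (the game-specific helpers are passed in as functions), Alg.~\ref{alg:levelDesignBrute} can be rendered as the following Python sketch; `itertools.combinations` enumerates the pitfall placements of size $n$:

```python
from itertools import combinations

# Brute-force level design (a sketch of the algorithm above). The helpers
# possible_pitfalls, place_pitfalls and user_replan_actions are game-specific
# and purely illustrative here.
def design_level(s0_agent, n, goal, plan_fn,
                 possible_pitfalls, place_pitfalls, user_replan_actions):
    p = plan_fn(s0_agent, goal)              # initial plan solving the level
    if p is None:
        return None
    for pitfalls in combinations(possible_pitfalls(p), n):
        s_real = place_pitfalls(s0_agent, p, pitfalls)
        p_new = plan_fn(s_real, goal)        # plan that avoids the pitfalls
        if p_new is None:
            continue
        actions = user_replan_actions(s_real, p, p_new)
        if actions is not None:              # player can force the re-plan
            return s_real, actions
    return None                              # no valid placement found
```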

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Anticipation Game Instance}

In the previous sections we introduced a formal definition of anticipation games. The decision-making and game-level-design algorithms were also described in an abstract way. Now we will describe our prototype game in terms of the previous definitions. A more detailed description can be found in the related thesis\footnote{downloadable from http://burglar-game.googlecode.com/files/thesis.pdf [23.07.2012.]}~\cite{toth2012}.

\subsubsection{Burglar's and Guards' Planning.}
In the on-line phase the burglar and the guards can perform the following actions: \textit{approach, open, close, lock, unlock, enter} and \textit{operate}. All these actions are atomic and take exactly one time unit to execute. Actions related to game objects can be performed only when the agent stands right next to the object. From the planner's point of view there is no difference between room sizes or distances between objects.

All the agents have the same planning domain. The only real difference between the two types of agents is in their goals. The guards' goals consist of a list of rooms that they need to visit, while the burglar has a single goal room and an artifact to gather. The guards repeat the same plan over and over -- once a guard visits all the rooms, he starts again -- unless a change of the environment, such as a locked door, forces him to re-plan. If a guard has no plan to achieve his goal, he remains still.
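The guards' cyclic patrolling can be sketched as follows; the helper below is hypothetical, not the prototype's code. Once every room on the patrol list has been visited, the visited set is reset and the round starts again:

```python
# Illustrative sketch of a guard's cyclic patrol goal selection
# (hypothetical helper, not the prototype's implementation).
def next_patrol_room(patrol_rooms, visited):
    """Return the next room to visit, restarting once a round is complete."""
    remaining = [r for r in patrol_rooms if r not in visited]
    if not remaining:          # full round finished -> start over
        visited.clear()
        remaining = list(patrol_rooms)
    return remaining[0]
```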

The $updateBeliefBase$ function from Alg.~\ref{alg:anticipationGameAgentDMS} is implemented in such a way that the burglar gets information about the presence of all objects in his current room. However, he recognizes some details of the objects only when he tries to use them (e.g. he realizes that a door is locked only when he tries to open it).

The $shouldReplan$ function returns true only if the agent comes upon an instruction in his plan that he is unable to execute (e.g. he finds a locked container that was supposed to be open, without the key to open it). An alternative approach would be to re-plan each time the agent finds an inconsistency between his internal believed world state and the real world state. However, this causes a much higher frequency of re-planning and thus leads to less predictable behavior. We tested this approach but finally decided to use the first method, where the agent re-plans only when the inconsistency causes a failure of his plan. Nevertheless, our \textit{post-hoc} evaluation has shown that the latter approach is closer to human behavior, which opens the door for future work.
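The two re-planning policies can be contrasted in a short sketch; the function and parameter names below are illustrative assumptions. The first, failure-triggered variant is the one the prototype uses; the second re-plans on any observed inconsistency and therefore fires much more often:

```python
# Re-plan only when the next planned action cannot be executed in the
# believed state (the policy the prototype uses); illustrative sketch.
def replan_on_failure(s_agent, plan, applicable):
    return bool(plan) and not applicable(plan[0], s_agent)

# Re-plan whenever the observable part of the world differs from the
# belief base (the rejected, more "nervous" alternative).
def replan_on_inconsistency(s_agent, s_real, observable):
    return observable(s_real) != observable(s_agent)
```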

When implementing the $plan$ function that calls external PDDL planners, we made the following observation: when the burglar knows that there is a pitfall in, e.g., $Room3$, the goal $G$ should contain the negative predicate $\neg visitedRoom(Room3)$. However, such negative predicates significantly slow down all the tested planners. It is far more efficient to emulate this requirement by removing $Room3$ from the planning domain and running the planner on this modified domain.
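The workaround amounts to a simple filter over the problem description before it is handed to the planner. A hedged sketch, with facts encoded as tuples (the data layout is ours, not the actual Planning4J interface):

```python
def remove_room(problem, room):
    """Drop a room and every fact mentioning it, instead of adding a
    negative goal predicate such as (not (visitedRoom Room3))."""
    return {
        "objects": [o for o in problem["objects"] if o != room],
        "init": [f for f in problem["init"] if room not in f],
        "goal": problem["goal"],
    }
```
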


\subsubsection{Level Design.}
In our design process, the designer specifies the map layout and possibly adds some objects to it; this becomes the main agent's prior knowledge $S^0_{agent}$. The map layout remains fixed, and the pitfalls placed on the burglar's initial path by the $placePitfalls$ function are \textit{cameras} or \textit{guards}. The general Alg.~\ref{alg:levelDesignBrute} can be simplified because the user actions $\bar{A}$ sought by the function $userReplanActions$ always exist: each pitfall is located in a room, and the player can lock the doors to that room. Thus for each pitfall there is a player action that forces the burglar to re-plan. The only open question is how to place the pitfalls so that a plan for the burglar still exists. We use some domain-specific information here; for example, we do not try to place pitfalls in rooms that the burglar is unable to avoid, such as graph chokepoints, or in rooms that hold objects necessary for completing the burglar's mission.
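The chokepoint test mentioned above can be implemented naively: a room is unavoidable if removing it disconnects the burglar's start room from the goal room. An illustrative sketch (not the actual implementation), with the room graph given as an adjacency dictionary:

```python
from collections import deque

def reachable(adjacency, start, goal, removed=None):
    """BFS over the room graph, optionally pretending one room is gone."""
    seen, queue = {start}, deque([start])
    while queue:
        room = queue.popleft()
        if room == goal:
            return True
        for nxt in adjacency.get(room, []):
            if nxt != removed and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def is_chokepoint(adjacency, room, start, goal):
    """A room where a pitfall must not be placed: removing it cuts off the goal."""
    return reachable(adjacency, start, goal) and not reachable(
        adjacency, start, goal, removed=room)
```
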

Note that generating plans for the agents and building the level is more complex than pathfinding. It is not enough to find the shortest route: the acting agents may have to pick up items, use objects, and lock and unlock doors; in doing so, they change the world state. In addition, to verify that a level has a valid solution, the program has to take into account both the agents' and the player's possible actions.

Even though the algorithm still has time complexity exponential in the number of rooms and objects, it proved usable on our small testing domains. Moreover, it can be executed offline, since it is run only at design time.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Implementation}
The game has been implemented as a Java application. It uses the external game engine Slick\footnote{Slick homepage: http://slick.cokeandcode.com [18.5.2012]}
and the external planner SGPlan 5.22~\cite{chen2006sgplan}.
SGPlan can be replaced with any other planner capable of solving problems in PDDL 2.2; we use the Planning4J\footnote{Planning4J homepage: http://code.google.com/p/planning4j/ [17.5.2012]} interface, which works with several different planners.
The navigation library is used to smooth the movement of the agents: while the planner uses high-level actions such as \textit{enter a room} or \textit{approach an object}, the particular path between two points in the environment is found with the classical A* algorithm.
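This two-layer scheme can be illustrated with the low-level half, a plain A* search with a Manhattan heuristic on a 4-connected grid; the grid encoding is our illustration, not the navigation library's actual API:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[y][x] == 1 means the cell is blocked.
    Positions are (x, y) tuples; returns the list of cells on the path."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), start, [start])]   # (g + h, position, path so far)
    seen = set()
    while frontier:
        _, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        x, y = pos
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                heapq.heappush(
                    frontier,
                    (len(path) + h((nx, ny)), (nx, ny), path + [(nx, ny)]))
    return None                               # no path exists
```
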
The game prototype is downloadable and open-source\footnote{Homepage of the game is http://burglar-game.googlecode.com [23.07.2012]}.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Domain Size and Performance}
The planning domain of the game agents has 12 predicates describing properties of the environment, 10 different operators and 20 types of objects.
We created a level with 28 rooms to test the performance of the planners.
When translated to the burglar's PDDL planning problem, this level had 74 PDDL objects and 167 initial facts. The exact numbers of facts and objects vary with the agent's belief base; the one used in this example had nearly flawless knowledge of the world. The resulting plan contained 78 grounded actions. On the test configuration (Intel Core i7 2GHz, 2GB RAM), creating such a plan takes about 300 ms, including the construction of the PDDL problem description from the internal Java representation and the parsing of the returned instruction list. Several PDDL planners were tested on this level: FF 2.3~\cite{hoffmann2001ff}, Metric-FF~\cite{hoffmann2003metricff}, SGPlan 5.22 and MIPS-XXL~\cite{edelkamp2008mipsxxl} all finished within half a second, whereas Blackbox~\cite{ijcai99blackbox}, HSP~\cite{bonnet98hsp}, LPG~\cite{gerevini2006lpg}, LPRPG~\cite{coles2008lprpg}, Marvin~\cite{coles2007marvin} and MaxPlan~\cite{xing2006maxplan} ended with an error or reported the problem unsolvable.

The hardest game level so far was a 10 by 10 room maze with all neighboring rooms interconnected; its problem definition contained 290 PDDL objects and 912 initial facts. In this level only FF 2.3, SGPlan 5.22 and Metric-FF found a solution within an 8-second limit; MIPS-XXL needed nearly 50 seconds and the other planners failed.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Evaluation}

In the experimental evaluation we wanted to learn two things about the gameplay of an anticipation game: first, whether players can play the game the way we expected (Task 1); second, what strategy humans use for re-planning (Task 2). This was done in a study of human players trying our game prototype.
The evaluation was performed with 20 college students, 13 of whom studied computer science. There were 16 men and 4 women, aged between 20 and 32.


In Task 1 we had four questions; we tested whether the players can: Q1) predict the burglar's path, Q2) identify the pitfalls, Q3) pick the action that makes the burglar re-plan and avoid the pitfall, and Q4) predict his new plan. In Task 2 we modified the game so that the players controlled the burglar directly and had access only to the information the burglar has. We wanted to know whether they would re-plan when an action from the initial plan fails (Strategy 1, S1), when new knowledge reveals a shorter plan (S2), or whether they would explore the environment after noticing a change but before re-planning (S3).




\subsubsection{Method.}
In the first part of the evaluation, the participants were introduced to the key concepts of the game by playing 3 tutorial levels\footnote{Levels 1, 2 and 4 as found in the downloadable distribution of the game.} that demonstrated all the concepts needed to solve the test levels. If they made a mistake, they were allowed to replay the level until they solved it.

In Task 1 they were presented with three previously unseen levels A, B and C printed on paper\footnote{The levels can be obtained at http://burglar-game.googlecode.com/files/AIIDE-12\_test\_levels.zip [23.07.2012.]} showing the same information as on a computer screen. They were asked to draw how they would solve each level with the minimum of penalty points. After drawing the solution they could run the simulation on a computer, and they then rated the level's difficulty on a five-point Likert item ranging from 1-\textit{easy} to 5-\textit{difficult}. After completing levels A, B and C they rated the overall enjoyment of the game. Answers to Q1-4 were obtained by measuring the percentage\footnote{We compared the plan drawn by a participant with the actual plan chosen by the burglar. If the plans diverged only in an insignificant detail (e.g., a different, but still minimal, path through a set of empty rooms), we counted the plan as guessed.} of participants that correctly drew the plans (Q1 and Q4) and marked the pitfalls (Q2) and the objects whose state has to be changed (Q3) (Fig.~\ref{fig:testLevel}).


In Task 2 we let the participants directly control the burglar in two different levels to find out how they behave in the burglar's place. Both levels were designed so that some initially unknown objects, doors, or even whole rooms made it possible to create a new, shorter plan once the human player perceived them (corresponding to S2). In the second level there were also many opportunities for exploratory behavior not following any plan (S3). Fig.~\ref{fig:replan} shows these two levels.


Participants were allowed to ask the experimenter questions about the mechanics of the game both during the tutorials and in the testing phase. Each participant had about one hour to complete the whole procedure.

\begin{figure}[ht]
	\centerline{\includegraphics[width=1\columnwidth]{media/pp_2_5_solved.png}}
		\small{
	\caption{
\textit{
		One of the evaluation levels with the solution drawn by one of the participants. The blue solid line shows the burglar's initial plan as drawn by the participant (Q1). The red dashed circles mark the pitfalls on the selected path (Q2). The yellow square marks the door that should be closed (Q3). The green dashed line shows the final path (Q4), where the burglar disguises himself as a guard, taking advantage of the previously unknown sleeping guard.
	}}}
	\label{fig:testLevel}
\end{figure}
\begin{figure}[ht]
	\centerline{\includegraphics[width=1\columnwidth]{media/rp_dark.png}}
		\small{
	\caption{\textit{
Two situations where human participants directly controlled the burglar.  Dark objects, doors and rooms were initially unknown to the player.
	}
}}
	\label{fig:replan}
\end{figure}


\subsubsection{Results.}
Results of Task 1 are summarized in Table~\ref{tab:evalRes}.


\begin{table}[h]
\centering
\begin{tabular}{ l | c c c c c}
	Level 	& Q1 		& Q2		& Q3			& Q4		& Avg. dif. rating\\ \hline
	A 	& 100\%	& 95\%	& 85\%		& 90\%	& 3\\
	B 	& 85\%	& 95\%	& 75\%		& 80\%	& 4\\
	C 	& 83\%	& 72\%	& 89\%		& 78\%	& 2.6\\
\end{tabular}
\small{
\caption{\textit{Percentages of participants who succeeded in the tasks designed to answer Q1-4, and the average difficulty rating. Level C was played by only 18 participants.}
}}
\label{tab:evalRes}
\end{table}

In Task 2 we found that in the first level (Fig.~\ref{fig:replan} left) all participants re-planned as soon as they realized there was a shorter plan (when they enter the room with a sleeping guard, they can take his uniform and sneak under the camera instead of avoiding the room), thus following S2. In the second level (Fig.~\ref{fig:replan} right), 12 players decided to explore previously unknown rooms, even though these could contain pitfalls; this corresponds to S3. The other 8 followed the plan based on their initial knowledge of the level and re-planned only when they were sure that the new path would be safe given their updated belief base (S2).




\subsubsection{Discussion.}
We see that even after a brief period of training the participants were able to play the game quite well: they could guess the burglar's plan, identify the pitfalls, fix them, and predict the new plan (see Table~\ref{tab:evalRes}). The concept of the game thus seems viable. It is also encouraging that most players rated the game as entertaining, with an average rating of 0.5 on a scale from $-2$ to $+2$.

The experiments in which the burglar was controlled by humans show that humans follow strategy S2 and, in some situations, S3; none of the participants used S1, the strategy implemented in our $shouldReplan$ procedure. This poses the question whether the burglar should re-plan the way a human does. Future research is needed to investigate this question.


\section{Future Work}
Since the whole game concept rests on the players' ability to predict the burglar's plans, we need the planner not only to produce \textit{some} plans but, arguably, plans that resemble those of humans. Current off-the-shelf planners are not optimized for this type of objective, but we can draw inspiration from the interactive storytelling field. For instance, there are works that extend planning algorithms to account for the intentions of agents~\cite{riedl2010narrative}, suspense~\cite{cheong2008narrative} or emotions~\cite{aylett2006affectively}. We believe these properties can make the plans more engaging for humans.
We can also change the re-planning strategy as suggested by our evaluation, or extend the definition to require the existence of harmful player actions that lead to capturing the burglar.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%



\section{Conclusion}


In this paper we formally defined the genre of anticipation games, which advocates a novel form of exploiting action planning in games, both at design time and at run time. We also presented a game prototype and a small-scale evaluation of the anticipation game concept. The concept can be useful to academia as a research platform, and although the game levels can only be of medium complexity given state-of-the-art planners, it can also be useful to industry, e.g., for creating specific game missions.

\section{Acknowledgment}
This work was partially supported by the student research grant GA UK  655012/2012/A-INF/MFF, by the SVV project number 265 314 and by the grants GACR 201/09/H057 and P103/10/1287.

\bibliography{./literature}
\bibliographystyle{aaai}
\end{document}

