%%
%% A Review of the Seventh International Planning Competition
%% AI Magazine
%% 

\documentclass[letterpaper]{article}
\usepackage{aaai}
\usepackage{graphicx}
\setlength{\textwidth}{5in}
\setlength{\oddsidemargin}{.75in}
\setlength{\evensidemargin}{.75in}
\usepackage{palatino}
\usepackage{helvet}
\usepackage{courier}
\usepackage{endnotes}
\usepackage{url}
\makeatletter
\def\url@leostyle{%
  \@ifundefined{selectfont}{\def\UrlFont{\sf}}{\def\UrlFont{\rmfamily}}}
\makeatother
\urlstyle{leo}
\usepackage{fixltx2e}
\usepackage[none]{hyphenat}
\usepackage{microtype}
\DisableLigatures{encoding = *, family = *}
\frenchspacing
\raggedright
\setlength{\parindent}{10pt}
\let\footnote=\endnote
\title{A Survey of the Seventh International Planning Competition}
\author{Amanda Coles, Andrew Coles, Angel Garc{\'i}a Olaya, Sergio Jim{\'e}nez, Carlos Linares L{\'o}pez\\ Scott Sanner, Sungwook Yoon}
\nocopyright


\begin{document}
\maketitle
\onecolumn

\begin{quote}
  In this article we review the 2011 International Planning Competition.  We give an overview of the history of the competition, discussing how it has developed since its first edition in 1998.  The 2011 competition was run in three separate tracks: the Deterministic (Classical) Track, the Learning Track, and the Uncertainty Track.  Each track posed its own distinct set of new challenges, and the participants rose to these admirably, with the results showing promising progress in each area.  The 2011 competition attracted a record number of participants, confirming its position as a central pillar of the international planning research community.
\end{quote}


Automated planning is the process of finding an ordered sequence of actions that, starting from a given initial state, reaches a state in which a set of objectives is achieved. Actions are usually expressed in terms of preconditions and effects: that is, the requirements a state must satisfy for the action to be applicable, and the changes the action makes when applied. Domain-independent planning relies on general problem-solving techniques to find an (approximately) optimal sequence of actions, and has been the focus of the International Planning Competitions (IPCs) over the years.

The first IPC was organised by Drew McDermott in 1998.  It has since been held roughly biennially, and remains a keystone of the worldwide planning research community: the seventh, and most recent, IPC took place in 2011.  The key contribution of the first competition was to establish a common standard language for defining planning problems --- the Planning Domain Definition Language (PDDL)~\cite{mcdermott.d:pddl} --- which has been developed and extended throughout the competition series.  Today, the extended PDDL is still widely used, and is key in allowing fair benchmarking of planners.  Participation has increased dramatically over the years and a growing number of tracks have formed, reflecting the broadening community --- see Figure~\ref{fig:competitionhistory} for details.  The three main tracks now operating are the Deterministic, Learning and Uncertainty Tracks.

The IPC has two main goals: to produce new benchmarks, and to gather and disseminate data about the current state-of-the-art. Entering a planner represents significant work, and the contribution of all participants in pushing planner development, along with the data gathered, is the most prized value of the competition. The impact of the IPC on the planning and scheduling community is broader than just determining a winner: the benchmark test sets are used for evaluating new ideas, and the most recent winner defines a state-of-the-art baseline against which new planners are compared.  Typically, entrants in the competition come from academia, though some industrial colleagues have been involved, and industrial sponsorship has been secured.  The independent assessment of available systems is also useful to potential users of planners outside the research community.

The competition is run by the organisers over a period of several months, with participants submitting their planning systems electronically.  The results of each edition of the competition are presented in a special session of the International Conference on Automated Planning and Scheduling (ICAPS)\footnote{Videos of the 2011 presentations are at {\scriptsize \url{http://videolectures.net/icaps2011_freiburg/}}}.  The IPC council, chaired by Lee McCluskey, oversees the competition series (and the knowledge engineering competition series, ICKEPS) and is seeking chairs for the next competition, expected to take place in 2013. More information about the competition can be found on the IPC\footnote{{\scriptsize \url{http://icaps-conference.org/index.php/Main/Competitions}}} website.


% Deterministic part
% ----------------------------------------------------------------------------
\section{Deterministic Track}
\label{sec:deterministic}

The deterministic part of the competition is the longest-running track.  Its focus is on the ability of planners to solve problems across a wide range of unseen domains: a challenging test of the ability of planners to succeed as domain-independent systems.  Several sub-tracks have developed over the years, with all tracks at the centre of Figure~\ref{fig:competitionhistory} being considered sub-tracks of the deterministic competition.  The 2011 competition saw the introduction of a new track for multi-core planners.  Another key contribution was the release of all the software used to run the competition\footnote{available at {\scriptsize \url{http://www.plg.inf.uc3m.es/ipc2011-deterministic/FrontPage/Software}}}, reducing the workload for future organisers.

The 2011 competition followed the successful 2008 competition, and was run
in a very similar way.  For 2011 we decided to keep the language the
same, without introducing extensions, as planners still need to `catch
up' with the currently available features. We also made use of the
plan validator VAL~\cite{howey.r.long.d.ea:val}. We maintained the
evaluation metrics introduced in IPC-2008, favouring quality and coverage
over problem-solving speed. Briefly, each planner is allowed 30 minutes
on each planning task, and receives a score between 0 and 1 for it. The
score is the ratio between the quality of the solution found (zero if
no solution is found) and the quality of the best solution found by any
entrant. The scores are summed across all problems for a given planner;
the winner and runner-up in each track are those with the highest
totals. Scores are not aggregated across tracks.
We included in the results a comparison to the winner of the last
competition to ensure progress is being made.
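As an illustration, the scoring scheme can be sketched as follows.  This is a hypothetical reconstruction, not the official competition scripts; the function and variable names are our own, and plan quality is taken to be the inverse of plan cost.

```python
def task_score(cost, best_cost):
    """Quality score in [0, 1] for one planning task.

    cost: cost of the plan this planner found (None if unsolved).
    best_cost: lowest cost achieved by any entrant on this task.
    With quality defined as the inverse of plan cost, the ratio of
    qualities equals best_cost / cost.
    """
    if cost is None:
        return 0.0  # unsolved tasks score zero
    return best_cost / cost


def total_score(results, best_costs):
    """Sum of per-task scores for one planner.

    results: maps task name -> plan cost found (or None if unsolved).
    best_costs: maps task name -> best cost found by any entrant.
    """
    return sum(task_score(results[t], best_costs[t]) for t in best_costs)
```

A planner that finds the best-known plan on a task scores 1 for it; one whose plan costs twice the best scores 0.5; an unsolved task scores 0.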

The 2011 competition was extremely popular: a record 55 entrants took part in the deterministic track alone, almost eight times as many as in the first competition and three times as many as in the sixth, showing significant growth in community involvement.  A summary of each of the sub-tracks follows.

\begin{figure}[h]
 \includegraphics[width=\columnwidth]{competitionhistory}
 \caption{The History of the International Planning Competition}
 \label{fig:competitionhistory}
\end{figure}

\subsection{Satisficing Track} 

{\sc Lama} won the satisficing track for the second competition in succession, in its new incarnation {\sc Lama-2011} (Richter, Westphal, Helmert \& R\"oger).  {\sc Lama} follows in a long history of successful planners using forward-chaining search --- including previous winners {\sc Hsp} (Bonet \& Geffner) in 1998, {\sc FF} (Hoffmann) in 2000 and {\sc Fast Downward} (Helmert \& Richter) in 2004 --- with further guidance obtained from landmarks (facts that must be true in any solution plan).  Interestingly, the only non-forward-search planner to win this track was {\sc Lpg} (Gerevini \& Serina) in 2002, using stochastic local search.  A number of other interesting techniques have been seen over the years, including the use of pattern databases, and planning as satisfiability.  Nine of the 27 planners entered in 2011 outperformed the 2008 winner, {\sc Lama-2008} (Richter \& Westphal), showing good progress in the state-of-the-art.

\subsection{Multi-Core Track} 

With the advent of affordable parallel computers, we wanted to ask: can planners using multiple cores simultaneously perform better than those using the single core allowed in the classical track?  The winner of the multi-core track was {\sc ArvandHerd} (Nakhost, Mueller, Schaeffer, Sturtevant \& Valenzano); but it did not outperform the classical-track winner, {\sc Lama-2011}.  This is not so concerning, however: the history of the IPC shows that classical planners are highly engineered in terms of data structures, and are difficult to beat in the first editions of new tracks.

\subsection{Temporal Track} 

Since the introduction of PDDL 2.1 in 2003, only a subset of the available temporal planners have been able to reason with the full temporal semantics of the language.  For the 2011 temporal track, we therefore included a special class of temporal problems that exhibit required concurrency~\cite{mausamTime}: that is, no solution exists unless the planner can execute two actions concurrently.  The most successful planners in this track were the winner, {\sc Daeyahsp} (Dr\'eo, Schoenauer, Sav\'eant \& Vidal), and the joint runners up: {\sc Yahsp2-mt} (Vidal), which performed best on the standard temporal problems, and {\sc Popf2} (Coles, Coles, Fox \& Long), which was the only planner to solve problems in all domains with required concurrency.

\subsection{Optimal Track} 

As planning technology develops, writing planners that find optimal, as opposed to merely satisficing, solutions becomes more feasible.  {\sc Fast Downward Stone Soup 1} (Helmert, Hoffmann, Karpas, Keyder, Nissim, Richter, R\"oger, Seipp \& Westphal) won the 2011 competition, outperforming the new version of the 2008 winner, {\sc Gamer} (Edelkamp \& Kissmann). {\sc Fast Downward Stone Soup} is portfolio-based, in contrast to the symbolic search using BDDs of {\sc Gamer}. The major shift towards forward search and away from planning as satisfiability in the two most recent competitions can be attributed to a change in the definition of optimality: the last two competitions have required a lowest-cost plan, whereas previous editions required a solution with the minimum number of actions. The former is much less amenable to a planning-as-satisfiability approach.


~\nocite{long.d.fox.m:3rd} %Third IPC
~\nocite{fox.m.long.d:pddl2}%PDDL2.1
~\nocite{hoffmann.j.edelkamp.s:deterministic}% IPC4
~\nocite{gerevini.ae.haslum.p.ea:deterministic} %IPC5
~\nocite{gerevini.a.long.d:plan} %pddl3


% Learning part
% ----------------------------------------------------------------------------
\section{Learning Track}

Efficient domain-independent search is a major challenge for AI. Using a single solver for many different problems significantly reduces human effort; the trade-off is that domain-specific systems, whilst time-consuming to write, are generally much more efficient.  Creating a system that can automatically learn to solve problems more efficiently is a promising approach for combining the advantages of both types of system.  This is the inspiration for research in learning for planning, a topic widely explored since the 1970s.  The first IPC learning track, in 2008~\cite{ipclearningtrack08}, was an important milestone for research in learning for planning, providing a platform for fair comparison.  The track comprises two phases: a {\em learning} phase, where the planners, given training problems, learn domain-specific knowledge; and an {\em evaluation} phase, where the planners exploit this knowledge in solving a set of unseen problems.

We took much inspiration from the 2008 learning track in organising its 2011 successor; however, in light of lessons learnt, we made several changes to the running of the competition.  A somewhat controversial outcome of the first learning track was that the best-performing planners in the {\em evaluation} phase were not those that improved the most through learning: indeed, the winner showed little improvement, and several planners performed worse after learning.  {\sc Obtusewedge} (Yoon, Fern \& Givan), awarded best learner in 2008, was one of the few planners to improve.  A major innovation in 2011 was to use Pareto dominance as the metric for determining competition winners: a planner must both perform better than its competitor {\em and} have improved more by learning in order to be considered `better'.  We further extended the scope for learning by allowing a longer learning period and by providing problem generators, allowing an unrestricted number of training problems.
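The Pareto-dominance criterion can be sketched as follows.  This is an illustrative reconstruction, not the official evaluation code: it assumes each planner is summarised by a pair of numbers, its post-learning score and its improvement from learning.

```python
def dominates(a, b):
    """True if planner summary a Pareto-dominates summary b.

    a, b: tuples (score_after_learning, improvement_from_learning).
    a dominates b if it is at least as good on every criterion and
    strictly better on at least one.
    """
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))
```

Under this criterion, a planner with a high score but no improvement does not dominate one that scored slightly lower but improved substantially; neither dominates the other, and both remain on the Pareto front.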

A total of eight systems participated, broadly falling into two categories: parameter tuners, which learn to adjust the parameters of planners (or portfolios) for best performance; and knowledge learners, which learn heuristics or policies for the given domain.  The competition made use of many previous planning benchmarks, generating larger, more challenging instances, and introduced two new domains that are challenging for commonly used delete-relaxation heuristics.  These were the Spanner domain, in which delete-relaxation planners tend to head towards dead ends, challenging planners to learn to avoid them; and the Barman domain, in which the delete relaxation misses relevant knowledge about the state of limited resources.

The results of the 2011 competition painted a much more positive
picture of learning in planning than those of its predecessor.  Out of
eight participants, six improved performance with learning in seven of
the nine domains.  Further, four of the competitors outperformed the
deterministic track winner, {\sc Lama-2011} (Richter, Westphal, Helmert \& R{\"o}ger), demonstrating that
learning can improve upon the state-of-the-art.  The winner, {\sc
  PbP2} (Gerevini, Saetti \& Vallati), uses statistical learning to define the time slots dedicated
to each planner in its portfolio.  The runner up, {\sc FD-Autotune} (Fawcett, Helmert, Hoos, Karpas, R{\"o}ger \& Seipp),
learns the best set of parameters for the popular planner {\sc Fast-Downward} (Helmert).  The most successful group of planners were parameter-tuning systems; this result reveals a major open challenge for the learning-in-planning community: making planners that learn knowledge from the domain (e.g. macro-action, heuristic or policy learners) competitive with the state-of-the-art.


% Uncertainty part
% ----------------------------------------------------------------------------
\section{Uncertainty Track}

The uncertainty part of the IPC was initiated in 2004 by Michael
Littman and H{\aa}kan Younes with the introduction of PPDDL, the
probabilistic extension of PDDL~\cite{ippc04}.  PPDDL extends PDDL
with stochastic action effects, allowing a variety of Markov Decision
Processes (MDPs) to be encoded in a relational PDDL-like manner.  The
2006 competition (Givan \& Bonet) added a track for Conformant
planning (i.e., non-observable non-deterministic domains) and the 2008
competition (Bryce \& Buffet) added a track for fully-observable
non-deterministic (FOND) domains.  In the 2011 competition, we dropped
the Conformant and FOND tracks due to lack of interest, but added a
partially observable MDP (POMDP) track.  We also made a major change of
language from PPDDL to RDDL~\cite{rddl} (while providing automated
translations from RDDL to ground PPDDL and factored MDPs and POMDPs),
which allowed modeling a variety of new problems with stochasticity,
concurrency, and complex reward and transition structure not jointly
representable in lifted PPDDL.  The 2011 competition saw five MDP and
six POMDP planner entrants.

Previous competitions saw the emergence of {\sc
FF-Replan}~\cite{ffreplan} --- which replanned on unexpected outcomes
in a determinised translation of PPDDL --- as an influential and
top-performing planner.  With our language change from PPDDL to RDDL
in 2011 and our variety of new problem domains, planners based largely
on the UCT Monte Carlo tree search algorithm~\cite{uct} placed first
in both the MDP and POMDP tracks in the 2011 competition.  For the MDP
track, the winner was {\sc PROST} (Keller \& Eyerich), which used UCT
in combination with determinisation techniques to initialise
heuristics; the runner up was {\sc Glutton} (Kolobov, Dai, Mausam \&
Weld), which used an iterative deepening version of RTDP~\cite{rtdp}
with sampled Bellman backups.  For the POMDP track, the winner was
{\sc POMDPX NUS} (Wu, Lee \& Hsu), which used a Point-based Value
Iteration (PBVI) technique~\cite{sarsop} for smaller problems, but a
POMDP-variant of UCT~\cite{pomcp} for larger problems; the runner up
was {\sc KAIST AILAB} (Kim, Lee \& Kim), which used a symbolic variant
of PBVI~\cite{sim} with a number of enhancements.

Evaluation for the 2004, 2006, and 2008 competitions relied on
analysis of one or more of the following metrics: (1) average action
cost to reach the goal, (2) average number of time steps to reach the
goal, (3) percent of runs ending in a goal state, and (4) average
wall-clock planning time per problem instance.  Because a lack of
planner attempts on some harder domains made it difficult to aggregate
average performance results on these metrics, we introduced a purely
reward-based evaluation approach in 2011.  For every problem instance
of every domain, a planner was assigned a normalised $[0,1]$ score,
with the lower bound determined by the maximum average performance of
a noop and a random policy, and the upper bound determined by the best
competitor.  Any planner not competing, or underperforming the lower
bound, was assigned a score of 0, and all normalised instance scores
were averaged to arrive at a single final score for each planner.
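This evaluation can be sketched as follows.  The sketch assumes a linear normalisation between the two bounds; the function and parameter names are ours, and this is not the official competition code.

```python
def normalised_score(avg_reward, baseline, best):
    """Normalised [0, 1] score for one problem instance.

    avg_reward: planner's average reward (None if it did not compete).
    baseline: max of the average rewards of the noop and random policies.
    best: best average reward achieved by any competitor.
    """
    if avg_reward is None or avg_reward <= baseline:
        return 0.0  # absent, or underperforming the lower bound
    if best <= baseline:
        return 0.0  # degenerate case: no competitor beat the baseline
    return min(1.0, (avg_reward - baseline) / (best - baseline))


def final_score(instance_scores):
    """The planner's final score is the average over all instances."""
    return sum(instance_scores) / len(instance_scores)
```

This scheme rewards beating the trivial baselines rather than merely reaching the goal, which makes scores comparable across instances with very different reward scales.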

A recurring debate at each competition is whether problem domains have
reflected the full spectrum of probabilistic planning 
(e.g.,~\cite{probvsreplan}).  This issue partially motivated our change
from PPDDL to RDDL in 2011 in order to model stochastic domains like
multi-intersection traffic control and multi-elevator control that
could not be modeled in lifted PPDDL.  How the language and domain
choice for the 2013 IPC shapes up remains to be seen; however, given
the profound influence the uncertainty track of the IPC has had on the
direction of planning under uncertainty research in the past seven
years, we believe it is imperative that the competition domains in
2013 are chosen to ensure the greatest relevance to end applications
of interest to the planning under uncertainty community.


% Acknowledgements
% ----------------------------------------------------------------------------
\section{Acknowledgements}

The deterministic and learning parts have been sponsored by Decide
Soluciones, iActive, the University Carlos III de Madrid and
ICAPS. The hardware platform used during the competition was funded by
the Spanish Science Ministry under project MICIIN TIN2008-06701-C03-03.

The uncertainty part of the IPC was supported by NICTA, PARC, and an
Amazon EC2 grant.  NICTA is funded by the Australian Government as
represented by the Department of Broadband, Communications and the
Digital Economy and the Australian Research Council through the ICT
Centre of Excellence program.


% Bibliography
% ----------------------------------------------------------------------------
\bibliographystyle{aaai}
\begin{thebibliography}{}

\bibitem[\protect\citeauthoryear{Barto, Bradtke, and Singh}{1995}]{rtdp}
Barto, A.~G.; Bradtke, S.~J.; and Singh, S.~P.
\newblock 1995.
\newblock Learning to act using real-time dynamic programming.
\newblock {\em Artificial Intelligence} 72:81--138.

\bibitem[\protect\citeauthoryear{Cushing \bgroup et al.\egroup
  }{2007}]{mausamTime}
Cushing, W.; Kambhampati, S.; Mausam; and Weld, D.
\newblock 2007.
\newblock When is temporal planning {\em really} temporal?
\newblock In {\em Proceedings of the Twentieth International Joint Conference
  on Artificial Intelligence (IJCAI-07)},  1852--1859.

\bibitem[\protect\citeauthoryear{Fern, Khardon, and
  Tadepalli}{2011}]{ipclearningtrack08}
Fern, A.; Khardon, R.; and Tadepalli, P.
\newblock 2011.
\newblock The first learning track of the international planning competition.
\newblock {\em Machine Learning} 84:81--107.

\bibitem[\protect\citeauthoryear{Fox and Long}{2003}]{fox.m.long.d:pddl2}
Fox, M., and Long, D.
\newblock 2003.
\newblock {PDDL}2.1: An extension to {PDDL} for expressing temporal planning
  domains.
\newblock {\em Journal of Artificial Intelligence Research} 20:61--124.

\bibitem[\protect\citeauthoryear{Gerevini and
  Long}{2005}]{gerevini.a.long.d:plan}
Gerevini, A., and Long, D.
\newblock 2005.
\newblock Plan constraints and preferences in {PDDL}3.
\newblock Technical report, Department of Electronics for Automation,
  University of Brescia, Italy.

\bibitem[\protect\citeauthoryear{Gerevini \bgroup et al.\egroup
  }{2009}]{gerevini.ae.haslum.p.ea:deterministic}
Gerevini, A.~E.; Haslum, P.; Long, D.; Saetti, A.; and Dimopoulos, Y.
\newblock 2009.
\newblock Deterministic planning in the fifth international planning
  competition: {PDDL}3 and experimental evaluation of the planners.
\newblock {\em Artificial Intelligence} 173:619--668.

\bibitem[\protect\citeauthoryear{Hoffmann and
  Edelkamp}{2005}]{hoffmann.j.edelkamp.s:deterministic}
Hoffmann, J., and Edelkamp, S.
\newblock 2005.
\newblock The deterministic part of {IPC}-4: An overview.
\newblock {\em Journal of Artificial Intelligence Research} 24:519--579.

\bibitem[\protect\citeauthoryear{Howey, Long, and
  Fox}{2004}]{howey.r.long.d.ea:val}
Howey, R.; Long, D.; and Fox, M.
\newblock 2004.
\newblock {VAL}: Automatic plan validation, continuous effects and mixed
  initiative planning using {PDDL}.
\newblock In {\em The Sixteenth IEEE International Conference on Tools with
  Artificial Intelligence (ICTAI-2004)},  294--301.

\bibitem[\protect\citeauthoryear{Kocsis and Szepesv{\'a}ri}{2006}]{uct}
Kocsis, L., and Szepesv{\'a}ri, C.
\newblock 2006.
\newblock Bandit based {M}onte-{C}arlo planning.
\newblock In {\em Proceedings of the 17th European Conference on Machine
  Learning ({ECML}-06)},  282--293.

\bibitem[\protect\citeauthoryear{Kurniawati, Hsu, and Lee}{2008}]{sarsop}
Kurniawati, H.; Hsu, D.; and Lee, W.~S.
\newblock 2008.
\newblock {SARSOP}: Efficient point-based {POMDP} planning by approximating
  optimally reachable belief spaces.
\newblock In {\em Proceedings of Robotics: Science and Systems IV}.

\bibitem[\protect\citeauthoryear{Little and Thi{\'e}baux}{2007}]{probvsreplan}
Little, I., and Thi{\'e}baux, S.
\newblock 2007.
\newblock Probabilistic planning vs. replanning.
\newblock In {\em ICAPS Workshop on IPC: Past, Present and Future}.

\bibitem[\protect\citeauthoryear{Long and Fox}{2003}]{long.d.fox.m:3rd}
Long, D., and Fox, M.
\newblock 2003.
\newblock The 3rd international planning competition: Results and analysis.
\newblock {\em Journal of Artificial Intelligence Research} 20:1--59.

\bibitem[\protect\citeauthoryear{McDermott}{1998}]{mcdermott.d:pddl}
McDermott, D.
\newblock 1998.
\newblock {PDDL} --- the planning domain definition language.
\newblock Technical Report CVC TR-98-003/DCS TR-1165, Yale Center for
  Computational Vision and Control.

\bibitem[\protect\citeauthoryear{Sanner}{2010}]{rddl}
Sanner, S.
\newblock 2010.
\newblock Relational dynamic influence diagram language ({RDDL}): Language
  description.
\newblock \url{http://users.cecs.anu.edu.au/~ssanner/IPPC_2011/RDDL.pdf}.

\bibitem[\protect\citeauthoryear{Silver and Veness}{2010}]{pomcp}
Silver, D., and Veness, J.
\newblock 2010.
\newblock {M}onte-{C}arlo planning in large {POMDPs}.
\newblock In {\em Proceedings of 24th Conference on Neural Information
  Processing Systems ({NIPS}-10)},  2164--2172.

\bibitem[\protect\citeauthoryear{Sim \bgroup et al.\egroup }{2008}]{sim}
Sim, H.~S.; Kim, K.-E.; Kim, J.~H.; Chang, D.-S.; and Koo, M.-W.
\newblock 2008.
\newblock Symbolic heuristic search value iteration for factored {POMDP}s.
\newblock In {\em Proceedings of the 23rd National Conference on Artificial
  Intelligence ({AAAI}-08)},  1088--1093.

\bibitem[\protect\citeauthoryear{Yoon, Fern, and Givan}{2007}]{ffreplan}
Yoon, S.; Fern, A.; and Givan, R.
\newblock 2007.
\newblock {FF}-replan: A baseline for probabilistic planning.
\newblock In {\em Proceedings of the 17th International Conference on Automated
  Planning and Scheduling ({ICAPS}-07)},  352--359.

\bibitem[\protect\citeauthoryear{Younes \bgroup et al.\egroup }{2005}]{ippc04}
Younes, H. L.~S.; Littman, M.~L.; Weissman, D.; and Asmuth, J.
\newblock 2005.
\newblock The first probabilistic track of the international planning
  competition.
\newblock {\em Journal of Artificial Intelligence Research} 24:851--887.

\end{thebibliography}


% Short bio
% ----------------------------------------------------------------------------
\section{Short Bio}
\label{sec:bio}

Amanda Coles holds an EPSRC research fellowship, is based in the Department of Informatics, King's College London, and was a co-organizer of the IPC-2011 Learning Track.  Her research interests include planning with preferences, and with time and resources.  She has co-authored several planners, including MARVIN, a macro-learning planner that competed in IPC-2004, and more recently LPRPG-P, a planner handling preferences.  She is an author or co-author of 30 publications on AI planning.

Andrew Coles is a lecturer in the Department of Informatics, King's College London.  His research interests include planning with rich domain models, and learning and inference for planning.  He has over 30 publications in the area of AI planning, and has co-written several expressive temporal planners, including CRIKEY3 and POPF; the latter has become the basis of application-focused industrial work.  He was a co-organizer of the Learning Track of IPC-2011.

Angel Garc\'ia Olaya is an associate professor in the Planning and
Learning Group (PLG) of Universidad Carlos III de Madrid, Spain.  His
research interests are planning with soft-goals and planning and
execution for robotics and real-time environments. He was a co-organizer of the deterministic track of the IPC-2011.

Sergio Jim\'enez is a research assistant at the PLG of Universidad Carlos III de Madrid, Spain.  His research interest is learning for planning. Sergio was co-organizer of the Learning Track of the IPC 2011.

Carlos Linares L\'opez is an associate professor at the PLG of Universidad Carlos III de Madrid, Spain.  His research interests are domain-dependent problem solving with heuristic search techniques and domain-independent automated planning. He was co-organizer of the deterministic track of the IPC-2011.

Scott Sanner is a Senior Researcher at NICTA and an Adjunct Research Fellow at the Australian National University.  He was a co-organizer of the Uncertainty Track of the IPC-2011.

Sungwook Yoon is a Research Scientist at Palo Alto Research Center. He was a co-organizer of the Uncertainty Track of the IPC-2011.


\end{document}
