The Polyathlon problem domain from the RL competition ``challenges agents with a series of unknown domains''~\cite{rlcompetition2014Polyathlon}. Prior knowledge about the specifics of the environments is limited to a few general statements:

\begin{itemize}
    \item ``Observation Space: 6 dimensional, continuous valued in $[0,1]$''
    \item ``Action Space: 6 discrete actions''
    \item ``Rewards: Reward range (maybe loose) will be provided by the task spec''
    \item  ``All problems are episodic''
    \item ``Most problems are roughly Markov''
    \item ``Problems may have stochastic state transitions and reward functions''
    \item ``Some MDPs WILL have redundant actions and unnecessary observations''
    \item ``Some observations may be warped in weird ways''
\end{itemize}
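Taken together, these statements pin down the agent's interface: it receives a 6-dimensional observation in $[0,1]^6$ and must return one of 6 discrete actions. A minimal baseline sketch of such an interface (the class and method names here are illustrative, not part of the competition API):

```python
import random

NUM_ACTIONS = 6   # "Action Space: 6 discrete actions"
OBS_DIM = 6       # "Observation Space: 6 dimensional, continuous valued in [0,1]"

class RandomPolyathlonAgent:
    """Baseline agent that ignores observations and acts uniformly at random."""

    def start_episode(self):
        # All problems are episodic, so the agent is reset between episodes.
        pass

    def act(self, observation):
        assert len(observation) == OBS_DIM
        assert all(0.0 <= x <= 1.0 for x in observation)
        return random.randrange(NUM_ACTIONS)

    def observe_reward(self, reward):
        # A learning agent would update its value estimates here.
        pass
```

Any learning agent for this domain must fit this shape; a uniform-random agent also provides a natural lower bound on performance.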

Due to the limited a priori knowledge, agents are required to learn online while exploring each environment. Since it is generally desirable that an agent converge to a strategy as quickly as possible, the agent's performance can be assessed both by the total reward obtained and by the speed at which it converges to a policy.
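The second criterion needs an operational definition. One crude heuristic, sketched below, is to record per-episode returns and report the first episode after which a moving average of the returns stays close to its final value; the window size and tolerance are arbitrary choices, not part of the competition specification:

```python
def episodes_to_convergence(returns, window=10, tolerance=0.05):
    """First episode index after which the moving average of per-episode
    returns stays within `tolerance` of its final value, or None if the
    agent never settles. A crude convergence heuristic; `window` and
    `tolerance` are arbitrary."""
    if len(returns) < window:
        return None
    means = [sum(returns[i:i + window]) / window
             for i in range(len(returns) - window + 1)]
    final = means[-1]
    for i in range(len(means)):
        if all(abs(m - final) <= tolerance for m in means[i:]):
            return i
    return None
```

Total reward is then simply `sum(returns)`, so the two assessment criteria can be computed from the same per-episode log.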

The facts that the environments are not known and that observations can be unreliable emphasize the importance of generating meaningful, perturbation-resilient features. It is not clear that this constitutes a partially observable Markov decision process (POMDP); the problem is arguably the opposite (\emph{more} than the relevant state is observable), and it may be better described as a noisy environment, but POMDP algorithms may nevertheless make better sense of the observations. A further challenge is to detect and exploit similarities or differences between domains to enable transfer learning across domains, which might boost an agent's learning rate on successive domains. In \cite{bellemare2013arcade}, several algorithms are proposed that were designed specifically for transfer learning with large observation spaces.
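Two of the stated perturbations suggest simple countermeasures: unnecessary observation dimensions tend to be (near-)constant and can be filtered by sample variance, and any strictly monotone ``warping'' of a dimension is undone by mapping values through their empirical CDF, since ranks are invariant under monotone transforms. The following sketch illustrates both ideas; the variance threshold is an arbitrary choice, and neither heuristic is prescribed by the competition:

```python
import bisect

def variance_filter(history, threshold=1e-3):
    """Indices of observation dimensions whose sample variance exceeds
    `threshold`; near-constant dimensions are likely unnecessary.
    `history` is a list of observation vectors."""
    dims = len(history[0])
    keep = []
    for d in range(dims):
        vals = [obs[d] for obs in history]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        if var > threshold:
            keep.append(d)
    return keep

def rank_feature(sorted_vals, x):
    """Empirical-CDF value of x given previously observed sorted values.
    Invariant under any strictly increasing warp of the raw observation."""
    return bisect.bisect_left(sorted_vals, x) / len(sorted_vals)
```

Both transforms require only a running history of observations, so they can be applied online while the agent explores.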

Despite the challenges inherent in the Polyathlon domain, the tasks themselves are fairly simple, and there are some comforting factors. Firstly, the state space is low-dimensional, so computational complexity will not be a major concern. Secondly, the action space is small and discrete, so no special measures are needed for the issues that come with infinite or continuous action spaces. Thirdly, all problems are guaranteed to be episodic and at least ``roughly'' Markovian.
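These properties make even a simple tabular method feasible: a coarse discretisation of $[0,1]^6$ with, say, 5 bins per dimension yields at most $5^6 = 15625$ states, each with 6 action values. A sketch of tabular Q-learning on such a discretisation follows; the bin count and learning parameters are illustrative choices, not prescribed by the competition:

```python
import random
from collections import defaultdict

NUM_ACTIONS = 6
BINS = 5  # 5 bins per dimension -> at most 5**6 = 15625 states

def discretise(observation, bins=BINS):
    """Map a continuous observation in [0,1]^6 to a tuple of bin indices."""
    return tuple(min(int(x * bins), bins - 1) for x in observation)

class TabularQAgent:
    def __init__(self, alpha=0.1, gamma=0.99, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0] * NUM_ACTIONS)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, observation):
        s = discretise(observation)
        if random.random() < self.epsilon:
            return random.randrange(NUM_ACTIONS)  # epsilon-greedy exploration
        return max(range(NUM_ACTIONS), key=lambda a: self.q[s][a])

    def update(self, obs, action, reward, next_obs, done):
        s, s2 = discretise(obs), discretise(next_obs)
        target = reward + (0.0 if done else self.gamma * max(self.q[s2]))
        self.q[s][action] += self.alpha * (target - self.q[s][action])
```

Because all episodes terminate, the `done` flag bounds every backup, and the table grows only with the states actually visited.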

Stochastic state transitions and reward functions can also cause problems. For example, in a non-stationary environment (adversarial multi-agent environments such as the Gridworld problem in the Polyathlon being a common case), it may be difficult to capitalise on eligibility traces, since state transition probabilities may have changed since the states were last visited~\cite{geist2010revisiting}.
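The difficulty can be seen in the mechanics of the trace update itself: a trace spreads each temporal-difference error back over recently visited state-action pairs, implicitly trusting that the transitions that led there still hold. A minimal sketch of a tabular, accumulating-trace backup (all parameter values are illustrative) makes this explicit:

```python
def td_lambda_update(q, traces, s, a, td_error, gamma=0.99, lam=0.9, alpha=0.1):
    """One Sarsa(lambda)-style backup over dicts of Q-values and traces.
    In a non-stationary environment, `lam` (and hence the effective trace
    length) may need to be reduced, since older transitions become
    unreliable evidence for credit assignment."""
    traces[(s, a)] = traces.get((s, a), 0.0) + 1.0  # accumulating trace
    for key, e in list(traces.items()):
        q[key] = q.get(key, 0.0) + alpha * td_error * e  # spread the TD error
        traces[key] = gamma * lam * e                    # decay the trace
        if traces[key] < 1e-4:
            del traces[key]  # drop negligible traces to bound the dict
    return q, traces
```

Shortening the traces (smaller `lam`) trades slower credit propagation for robustness against transition probabilities that drift between visits.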