\section{Conclusion}
\label{sec:conclusion}

In this report we have argued that a good representation of the actions and the required information is essential, since learning directly on the raw input/output space is infeasible. Our carefully designed and analyzed input and output representations, combined with a simple rule-based strategy, were able to win this year's domination game, showing that they suffice for good results. Despite this, learning a better strategy with NEAT remained problematic. We argued that this is caused mainly by the large amount of time it takes to evaluate a single game. The input and output space we used was very lean, yet, as discussed, still powerful enough in principle to express a better strategy than our hand-defined rules. The main conclusion we can draw with respect to learning is therefore that learning in the domination game, even in our much simplified version, is either genuinely difficult or requires substantial computational power.

To conclude, we think that despite our disappointing learning results, the greatest challenge for AI at large remains that of learning effective representations.

%perhaps there is a future for the automatic generation of well defined input and output sequences. Perhaps it is possible to make a program that abstracts over the raw input and output space in such a way that it finds representations like the ones we created automatically. Chains of actions and relations between them and the environment could be learned for this, possibly reducing the computational need. At the very least, the representation of the problem seems to be the key here.
