\section{Evaluation and Conclusion}
\label{conclusion}

Overall, our agents behave as we expected. With few exceptions, the driver
agents plan their paths well: whether speeding or not, they find the shortest
path. Our experiments show, however, that our Q-learning algorithm plans paths poorly
on graphs containing loops. We can mitigate this shortcoming by encoding
Q-values that better rank the candidate paths for each driver.
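The proposed mitigation can be sketched as a simple ranking step: score each candidate path by the sum of learned Q-values along its edges and prefer the highest-scoring path. The edge-keyed Q-table and helper below are illustrative assumptions, not our actual implementation.

```python
def rank_paths_by_q(candidate_paths, q_values):
    """Rank candidate paths by the total Q-value of their edges.

    `q_values` maps an edge (u, v) to its learned Q-value; higher is better.
    Edges missing from the table contribute a neutral score of 0.0.
    This is an illustrative sketch, not the paper's exact method.
    """
    def path_score(path):
        edges = zip(path, path[1:])
        return sum(q_values.get(edge, 0.0) for edge in edges)

    return sorted(candidate_paths, key=path_score, reverse=True)
```

Ranking paths this way lets a driver discard loopy detours whose edges have accumulated low Q-values, rather than relying on the raw policy alone.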

Another shortcoming of our work is the scalability of our EWA algorithm. Because it relies on breadth-first
search, route planning becomes expensive in graphs with a high diversity of paths. Moreover, each agent
must plan its own path, which multiplies the planning cost. In the worst case, every driver in our multi-agent system has a distinct source and destination, so planning requires running breadth-first search $m$ times, where $m$ is the number of drivers.
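The worst-case cost above can be sketched as follows: one breadth-first search per driver trip. The adjacency-dict graph representation and the trip list are illustrative assumptions, not our simulation's data structures.

```python
from collections import deque

def bfs_shortest_path(graph, source, dest):
    """Breadth-first search for a shortest path (fewest edges).

    `graph` is an adjacency dict mapping a node to its list of neighbors.
    Returns the path as a list of nodes, or None if dest is unreachable.
    """
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == dest:
            # Reconstruct the path by walking parent links back to the source.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr in graph.get(node, []):
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    return None

def plan_all_routes(graph, trips):
    """Worst case: each of the m drivers has a distinct (source, dest) pair,
    so BFS runs once per driver, i.e. m times in total."""
    return [bfs_shortest_path(graph, s, d) for s, d in trips]
```

Since each BFS visits $O(|V| + |E|)$ nodes and edges, the total worst-case planning cost grows as $O(m(|V| + |E|))$, which matches the scalability concern raised above.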

Despite these shortcomings, we managed to model and build a full simulation involving
police and driver agents. Our algorithms give consistent results across many scenarios,
and our agents learn to adjust their strategies in response to the opponent's strategy.
Although our results are not conclusive, we can already see that deploying police
at strategic points of a map does deter drivers from speeding there. Hypothetically, our
project could be developed into a tool that helps police departments plan their deployment of resources to better
prevent drivers from speeding. A decrease in risky driving behavior would be a welcome contribution to public safety.
