\section{Related Work}
\label{litsurvey}

To the best of our knowledge, only one other study models
the interactions between drivers and cops in traffic violation patrolling. Kim et al. \cite{kim_policedriver_1997}
use system dynamics to investigate whether increased penalties help to decrease drivers' illegal
speeding. They model opponent strategies as mixed strategies. Similarly, we use both opponent
Q-learning and experience weighted attraction to assign probabilities expressing the preference
for certain strategies, which makes our strategies a form of mixed strategy. Our work
is also similar to \cite{kim_policedriver_1997} in that our Q-learner, like their system dynamics model, assumes only a population mixed
strategy. In a population mixed strategy, players' behavior is influenced by the proportion of drivers
receiving a ticket as well as the proportion of police patrolling. Thus, each individual agent follows a pure strategy, while
the group of agents uses aggregate statistics to introduce a stochastic choice \cite{kim_policedriver_1997}. Our work
differs from that of Kim et al. in two important ways. First, we simulate a full game: we not only model agent choices as functions of payoffs, but also model the environment in which the agents interact as a graph. Second, we propose a suite of unsupervised learners that learn to play according to other players' strategies in a game-theoretic sense.

In addition to the previously discussed work, there is a body of work
\cite{gottlob_robbersmarshalls_2003,pita_guards_2011,alpern_patrols_2009,jain_stackelberg_2008,lau_patrolga_2010,tamika_patrol_2011} that addresses the security deployment and patrolling problem, which can be modelled in ways similar to our problem.
This group of work addresses the problem of deploying patrolling or monitoring agents (police or guards) in strategic places in an effort to either respond to adversarial activity (terrorists, unlawful drivers, or robbers) or to prevent it. Gottlob et al. \cite{gottlob_robbersmarshalls_2003} explore the robbers-and-cops problem. They encode the problem as a graph in which robbers use the edges to run from one place to the next, while the police aim to sit at a node from which they can catch the robber. In our problem, the robber can be thought of as the driver, and the cops correspond to our police force; our agents likewise traverse a graph. These are thus closely related problems. The main difference between their problem and ours is that the robber, unlike our drivers, is not interested in path planning on the graph: robbers are only interested in choosing the next edge that increases the distance between themselves and the cops \cite{gottlob_robbersmarshalls_2003}. In our case, the driver has a choice to speed or not to speed, but whatever the choice, the driver always follows the optimal path that a path-planning algorithm would return. Our drivers are therefore more restricted in their movements, which means that a solution to robbers and cops may not always be feasible for our agents.
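The robber behaviour described above can be illustrated with a minimal one-step greedy sketch; the adjacency-list representation and the names \texttt{bfs\_distances} and \texttt{robber\_move} are our own choices for illustration, not an implementation from the cited work.

```python
from collections import deque

def bfs_distances(adj, source):
    """Breadth-first distances from `source` in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def robber_move(adj, robber, cops):
    """Greedy robber step: move to the neighbour (or stay put) that
    maximises the distance to the nearest cop."""
    dist_to_cop = {}
    for cop in cops:
        for node, d in bfs_distances(adj, cop).items():
            dist_to_cop[node] = min(d, dist_to_cop.get(node, d))
    candidates = [robber] + list(adj[robber])
    return max(candidates, key=lambda n: dist_to_cop.get(n, 0))
```

For example, on a path graph $0$--$1$--$2$--$3$ with a cop at node $0$, a robber at node $2$ moves to node $3$. Our drivers, by contrast, could not make this move unless it lay on their planned route.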

Pita et al., Alpern et al., and Jain et al. \cite{pita_guards_2011,alpern_patrols_2009,jain_stackelberg_2008} address another facet of the patrolling problem. They research how to randomize the scheduling and placement of defending agents (guards or police) in a strategic manner, such that the goals of the adversary are undermined. Pita et al. and Jain et al. \cite{pita_guards_2011,jain_stackelberg_2008} model the game of placing guards at strategic checkpoints at the Los Angeles International Airport as a Bayesian Stackelberg game. In a Stackelberg game the leader agent commits to a strategy first, and the follower agent then selfishly optimizes its own reward. In this game the guard commits to a strategy; the adversary, i.e., a terrorist, observes the leader's strategy and then seeks to optimize its own payoff by exploiting loopholes in that strategy. The Stackelberg solvers yield optimally randomized strategies for the guard agents, such that the adversaries cannot predict the guards' next move. Similarly, Alpern et al. show analytically that the guard agents can obtain feasibly randomized strategies by decomposing the complex space of pure strategies; ranking these pure strategies can turn them into mixed strategies. Our work is closely related to these works in that our police agents aim to choose nodes where they can catch as many speeding drivers as possible. The main difference is that our drivers are not free to explore the graph, and the nodes our police choose throughout a game are a response to the strategy choices of the drivers. Because of this coupling, we argue that the strategies of our agents are more tightly interdependent.
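Using notation of our own choosing (not that of the cited works), the leader--follower structure of such a Stackelberg game can be sketched as a bilevel optimization:

```latex
\begin{equation*}
x^{*} \in \operatorname*{arg\,max}_{x \in X} \; U_{\ell}\bigl(x, f(x)\bigr),
\qquad
f(x) \in \operatorname*{arg\,max}_{y \in Y} \; U_{f}(x, y),
\end{equation*}
```

where $x$ ranges over the leader's (guard's) mixed strategies $X$, $y$ over the follower's (adversary's) responses $Y$, and $U_{\ell}$ and $U_{f}$ are the respective payoff functions: the follower best-responds to the observed leader strategy, and the leader optimizes in anticipation of that response.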

From a methodological point of view, our work benefits from the wealth of literature that addresses opponent learning \cite{wu_opponent_2006,uther_adversarial_1997,zuckerman_adversarial_2007} and experience weighted attraction learning
\cite{camerer_experience-weighted_1999,ho_self_tuning_2007}. The design of our methods is heavily based on the descriptions offered in the works of Camerer \cite{camerer_experience-weighted_1999} and Wu \cite{wu_opponent_2006}. Descriptions of these methods are given in section~\ref{background}.
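For orientation, the attraction update at the core of experience weighted attraction learning, in the standard form of Camerer and Ho \cite{camerer_experience-weighted_1999}, is

```latex
\begin{align*}
N(t) &= \rho\, N(t-1) + 1,\\
A_i^j(t) &= \frac{\phi\, N(t-1)\, A_i^j(t-1)
  + \bigl[\delta + (1-\delta)\, I\bigl(s_i^j, s_i(t)\bigr)\bigr]\,
    \pi_i\bigl(s_i^j, s_{-i}(t)\bigr)}{N(t)},
\end{align*}
```

where $A_i^j(t)$ is player $i$'s attraction to strategy $s_i^j$ after period $t$, $I(\cdot,\cdot)$ indicates whether $s_i^j$ was actually played, $\pi_i$ is the payoff function, and $\phi$, $\rho$, $\delta$ are the decay, experience-growth, and imagination parameters; attractions are typically mapped to choice probabilities with a logit rule. The exact variant we use is described in section~\ref{background}.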



 





