\section{Model}
\label{model}

The overall environment is simulated as a weighted graph. Figure \ref{fig:graphmodel}
shows an example graph that our program uses. Between any two adjacent nodes there exist two edges:
the solid line in the graph shows the edge the driver agent follows if it chooses not to speed, and the dashed line shows the edge it follows if it decides to speed.
Agents visit nodes in the graph according to their own individual strategies.

\begin{figure}[h!]
  \centering
      \includegraphics[width=0.4\textwidth]{figures/graphmodel.png}
  \caption{Graph Model for the Environment }
  \label{fig:graphmodel}
\end{figure}
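As a minimal sketch of this environment, the paired speed/no-speed edges can be stored as a mapping from node pairs to the two edge weights. The structure and names below are illustrative, not our actual implementation:

```python
# Between each pair of adjacent nodes there are two parallel edges:
# one traversed at the speed limit ("normal") and one while speeding.
# Weights are illustrative travel costs.
GRAPH = {
    ("A", "B"): {"normal": 4.0, "speeding": 2.5},
    ("A", "C"): {"normal": 5.0, "speeding": 3.0},
    ("B", "D"): {"normal": 3.0, "speeding": 2.0},
    ("C", "D"): {"normal": 2.0, "speeding": 1.5},
}

def edge_weight(src, dst, speed_choice):
    """Return the travel cost of the chosen edge between two nodes."""
    return GRAPH[(src, dst)][speed_choice]
```

A driver strategy such as A-B-D is then a sequence of such edges, each with its own speed choice.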

The police agent interacts with the graph environment by visiting one
node at a time, switching its current position depending on the
attractiveness of other nodes. In figure \ref{fig:graphmodel}, a police agent's current strategy
could consist of any single node, e.g., node B. Like the driver agent, the police agent has
a transient behavior: it switches to other nodes over time as multiple games are played.
After choosing a node, the police agent faces two possible scenarios. It either
switches to another node with a higher profit, or it stays in the current node because it is
currently issuing tickets.
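The stay-or-switch rule above can be sketched as a small decision function; the profit estimates and names here are assumptions for illustration only:

```python
def next_police_node(current, profits, issuing_tickets):
    """Pick the police agent's next node: stay while tickets are being
    issued at the current node, otherwise move to the most profitable
    node seen so far. `profits` maps node -> estimated profit."""
    if issuing_tickets:
        return current
    best = max(profits, key=profits.get)
    # Switch only if another node is strictly more profitable.
    return best if profits[best] > profits[current] else current
```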

In contrast to the police agents, the driver agent interacts with the environment by planning a path from
source to destination. As an example, figure \ref{fig:graphmodel} shows that one possible driver agent strategy
is the path A-B-D, along with the choice to speed, or not to speed, along each edge.
Depending on the learner, the planning process is done either with a path planning
algorithm (EWA) or by taking the best decision at each node (Q-learner).
Details of these differences emerge in the next section. Any driver agent can experience one of four different scenarios:
getting a ticket for speeding, traveling at normal speed when police are present, traveling at normal speed
when police are absent, or traveling above the speed limit without receiving a ticket. Scenario four
yields the highest reward; scenario one the lowest.

We model our game as a Bayesian Stackelberg game. The police agent always commits to a strategy first,
and the driver agent chooses its best strategy in response to the police agent's move \cite{kim_policedriver_1997}.
In turn, the police agent may adjust its strategy in reaction to the action chosen by its follower. Our EWA learners differ from our Q-learning agents in the way they play their mixed strategies; the next section explains how.

\subsection{EWA  and Q-learning Police Agent Models}

The EWA police agents, like the driver agents, play a mixed strategy. This is important for EWA: if
the police agents always played pure strategies, the driver agent would be able to predict police locations with a high degree of accuracy \cite{gatti_patrolling_2008}.
The police agents' mixed strategy follows from the characteristics of experience-weighted learning. Using the EWA framework, the police agent keeps a history vector that tracks the number of tickets the agent has issued at each node of the graph. It also keeps a vector of attractions, which stores the attraction level of each potential strategy of the corresponding agent; an example of both vectors is given in table \ref{tab:ewa}. The information in these two vectors allows EWA to calculate the probability of choosing a given strategy.
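In the EWA framework, choice probabilities are typically derived from attractions through a logit response rule. A minimal sketch, assuming a response-sensitivity parameter `lam` (not calibrated for our game):

```python
import math

def ewa_choice_probabilities(attractions, lam=1.0):
    """Logit choice rule used in the EWA framework: the probability of
    a strategy is proportional to exp(lam * attraction). `attractions`
    maps strategy -> attraction level."""
    weights = {s: math.exp(lam * a) for s, a in attractions.items()}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}
```

Under this rule, strategies with higher attractions are chosen more often, but every strategy retains positive probability, which is what keeps the police agent's play mixed.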

\begin{table}
\centering
    \begin{tabular}{ | l | l | l | l |}
    \hline
     Strategy & History & Attraction & Probability   \\ \hline
     A & 10 & 0.85 & 0.37 \\ \hline
     B & 1.0 & 1.2 & 0.88  \\ \hline
    \end{tabular}
    \caption{Data that the agent uses for decision making}
    \label{tab:ewa}
\end{table}

A chosen police strategy in one game may receive a different probability in the subsequent game, because the value of the probability depends on the number of drivers that sped on a road where the police agent was deployed. The EWA police agents do not remain stationary at a node of the graph for several games unless that node is part of an equilibrium strategy for the agent. Note, however, that the probability of choosing a strategy also depends on the different EWA parameters as well as on the reward structure of the game \cite{camerer_experience-weighted_1999}.

For our police agent, we chose a fixed reward structure in which the agent is awarded five points for catching a speeding driver; five points are subtracted from the current payoff if a strategy leads to missing one. The current reward structure was set empirically. We leave the writing of learning algorithms that help calibrate both the rewards and the parameters, for the police agents as well as the driver agents, as future work. This calibration is required in order to have adequate sensitivity to the opponent's play.

$$
reward = \left\{ \begin{array}{rl}
  5 &\mbox{ if the agent catches a speeding driver} \\
 -5 &\mbox{ otherwise}
       \end{array} \right.
$$
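Each game's payoff then feeds into the EWA attraction update of Camerer and Ho \cite{camerer_experience-weighted_1999}. A sketch of one update step, with illustrative (uncalibrated) values for the decay parameters $\phi$, $\delta$, and $\rho$:

```python
def ewa_update(attraction, n_prev, payoff, chosen,
               phi=0.9, delta=0.5, rho=0.9):
    """One EWA attraction update. A chosen strategy receives the full
    payoff; an unchosen strategy receives only a delta-weighted share
    (the forgone payoff). n_prev is the experience weight N(t-1)."""
    n_new = rho * n_prev + 1.0
    weight = 1.0 if chosen else delta
    a_new = (phi * n_prev * attraction + weight * payoff) / n_new
    return a_new, n_new
```

This is how the history of tickets at a node raises or lowers that node's attraction, and with it the probability of the police agent returning there.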

The Q-learning police agents also play a mixed-strategy game. They differ from the EWA police agents in the way they compute their next state: instead of keeping a history vector, each police agent computes a value for its present state, as well as for all neighboring states, at each timestep of the simulation. This value is determined by the presence of police agents in the state, the number of speeding drivers seen in the last timestep, and the number of tickets given in the present state.

The presence of police in a state is noted so that no two police agents occupy the same state; the number of tickets given does not depend on the number of police in a state, but only on the presence or absence of police, so police agents should try to cover more roads in an attempt to maximize their gain. The number of speeding drivers, for the present and neighboring states, is collected in order to determine the states in which the police could issue more tickets, given the drivers' decisions to speed. The number of tickets for the present state indicates how profitable the police agent's present state is; if no tickets were given in the last timestep, this may indicate decreasing interest in speeding from drivers. Using this information, each police agent decides whether to stay in its current state or to move to another, attempting to maximize the number of tickets given to drivers.
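A minimal sketch of this state scoring, assuming hypothetical weights on the two counts (our actual weighting is part of the learned Q-values):

```python
def police_state_value(occupied, speeders_last_step, tickets_here):
    """Score a candidate state for a Q-learning police agent: states
    already occupied by another police agent are excluded, otherwise
    the value grows with recently observed speeders and with tickets
    issued there. The weights 1.0 and 2.0 are illustrative."""
    if occupied:
        return float("-inf")  # never share a state with another agent
    return 1.0 * speeders_last_step + 2.0 * tickets_here
```

The agent would evaluate its present state and every neighboring state with such a function, then stay or move to the highest-valued one.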

\subsection{EWA and Q-learning Driver Agent Models}

The EWA driver agent has a more complex strategy structure. In contrast to police agents, EWA driver agents have strategies that represent full paths from source to destination. To build the paths for a given driver agent, we use a breadth-first search that takes the agent's source and destination nodes in the graph and returns all of the paths between them. In turn, the agent takes each path and evaluates the weights of its edges to determine the cheapest way to arrive at the destination. Taking the example in figure \ref{fig:graphmodel}, if an agent specifies node A as its source and node D as its destination, the breadth-first search would return the routes A-B-D and A-C-D. The agent evaluates each edge as follows:
\begin{verbatim}
function evaluateDriverStrategy(strategy):
    for edge in strategy:
        if driver speeds and no police:
            award highest reward
        else if driver speeds and police present:
            award penalty
        else:  # driver does not speed
            award small reward
\end{verbatim}
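The path enumeration feeding this evaluation can be sketched as a breadth-first search over an adjacency list; the representation below is an assumption, not our exact data structure:

```python
from collections import deque

def all_paths(adjacency, source, destination):
    """Breadth-first enumeration of every simple path from source to
    destination, as used to build the EWA driver's strategy set.
    `adjacency` maps each node to its list of successor nodes."""
    paths = []
    queue = deque([[source]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            paths.append(path)
            continue
        for nxt in adjacency.get(node, []):
            if nxt not in path:  # keep paths simple (no cycles)
                queue.append(path + [nxt])
    return paths
```

On the example graph, `all_paths({"A": ["B", "C"], "B": ["D"], "C": ["D"]}, "A", "D")` yields the two routes A-B-D and A-C-D.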

The reward structure for an EWA driver agent is based on the weight of that section of road. This allows the driver agent to pick roads that are optimal even when it chooses not to speed on the road network. The reward structure is represented as follows:

$$
reward = \left\{ \begin{array}{rl}
 -\text{normalweight}(edge) &\mbox{ if caught} \\
  a \cdot \text{speedingweight}(edge) &\mbox{ if not caught} \\
  \text{invnormalweight}(edge) &\mbox{ if not speeding}
       \end{array} \right.
$$

In this function, normalweight is the weight of the edge if the driver drives lawfully.
The constant $a$ was added empirically to the equation; it is a positive number that
scales the reward upwards. It is particularly useful when the normal weight is very close to the speeding weight, as it allows the driver to pick the speeding strategy if no police are present. Without it, the driver would be more conservative and choose not to speed even when no police units are present, which is an economically suboptimal decision in our game.

The Q-learning driver agents, like their police counterparts, play a mixed-strategy game. These agents use an adaptation, of our own making, of the Opponent Q-learning algorithm. In this adaptation, a probability that indicates how safe it is to speed through a specific road without getting a ticket is used in place of the past locations of police agents. This probability is computed as follows:
\begin{eqnarray} 
&& P(i,j) = \frac{TG(i,j)}{D(i,j)}\times \frac{LT_{t-1}(i,j)}{t} \nonumber\\
&& SR(i,j) = 1 - P(i,j)
\label{eq:driverprob}
\end{eqnarray}

In equation \ref{eq:driverprob}, $TG(i,j)$ is the number of tickets given to drivers who sped from node $i$ to node $j$, and
$D(i,j)$ is the total number of drivers that traveled from node $i$ to node $j$. $LT_{t-1}(i,j)$ is the timestep of the last ticket given to drivers going from node $i$ to node $j$, $t$ is the current timestep, and $P(i,j)$ is the probability of getting a ticket when speeding from node $i$ to node $j$. Finally, $SR(i,j)$ is the probability of speeding from node $i$ to node $j$ without getting a ticket.
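Equation \ref{eq:driverprob} translates directly into code; the zero-count guard below is an assumption about how a road with no traffic history would be handled:

```python
def ticket_probability(tickets_given, drivers, last_ticket_step, t):
    """P(i,j): the fraction of past speeders ticketed on a road,
    scaled by how recently (relative to the current timestep t) the
    last ticket was given there."""
    if drivers == 0 or t == 0:
        return 0.0  # assumed default when no data exists for the road
    return (tickets_given / drivers) * (last_ticket_step / t)

def safe_speeding_probability(tickets_given, drivers, last_ticket_step, t):
    """SR(i,j) = 1 - P(i,j): chance of speeding without a ticket."""
    return 1.0 - ticket_probability(tickets_given, drivers,
                                    last_ticket_step, t)
```

For example, if 2 of 10 drivers were ticketed and the last ticket was at timestep 5 of 10, then $P(i,j) = 0.2 \times 0.5 = 0.1$ and $SR(i,j) = 0.9$.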

Besides the values that are specifically dependent on the actions of the police agents, the two Q-values associated with moving from one road to another also encode the rewards of the two possible options on each link: to speed or not to speed. For the Q-value in which the agent does not speed, the reward is computed as follows:

\begin{eqnarray} 
&& R(i,j) =1- \frac{C(i,j)}{\sum_{i,j} C(i,j)} \nonumber\\
&& RS(i,j) = R(i,j)+R(i,j)\times \frac{C(i,j)-CS(i,j)}{C(i,j)}
\label{eq:driverreward}
\end{eqnarray}

In equation \ref{eq:driverreward}, $C(i,j)$ is the cost, in time, of going from node $i$ to node $j$ without speeding, and
$\sum_{i,j} C(i,j)$ is the total cost over all the roads in the map. $R(i,j)$ is the reward gained by not speeding from node $i$ to node $j$. In computing $R(i,j)$, the driver assumes that transitioning to roads with the lowest cost is the best choice, as the next state may be the agent's final destination. To calculate the Q-value for the speeding action at a given node, the reward is computed using $RS(i,j)$, the reward gained by speeding from node $i$ to node $j$; here $CS(i,j)$ is the cost in time of going from node $i$ to node $j$ while driving above the speed limit. In computing $RS(i,j)$, the driver takes into account not only the cost of speeding on a link, but also how much better it is to speed compared to not speeding on that specific road.
These values are used in the Opponent Q-learning equation to help the driver agent decide which road to take and what speed choice to make. The two decisions can be made in one step because a Q-value for each speed choice on each road is kept. The $SR(i,j)$ probability is only considered when computing the Q-values associated with choosing to speed on each of the possible roads, as it does not concern the driver in the case of not speeding. In choosing its next action, each driver agent selects the action given by the maximum of all the Q-values calculated for the neighboring roads.
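The reward terms of equation \ref{eq:driverreward} and the final action selection can be sketched together; the dictionary-based representation of Q-values is an assumption for illustration:

```python
def no_speed_reward(cost, total_cost):
    """R(i,j): reward for lawful driving, higher for cheaper roads."""
    return 1.0 - cost / total_cost

def speed_reward(cost, speeding_cost, total_cost):
    """RS(i,j): the lawful reward plus a bonus proportional to the
    relative time saved by speeding on this specific road."""
    r = no_speed_reward(cost, total_cost)
    return r + r * (cost - speeding_cost) / cost

def choose_action(q_values, sr):
    """Pick the next (road, speed-choice) pair by maximizing Q-values.
    `q_values` maps (road, choice) -> Q; the SR probability discounts
    only the speeding Q-values, as in our Opponent Q-learning variant."""
    def effective(key):
        road, choice = key
        q = q_values[key]
        return q * sr.get(road, 1.0) if choice == "speed" else q
    return max(q_values, key=effective)
```

For instance, with $C(i,j) = 4$, $CS(i,j) = 2$, and a total map cost of 20, the lawful reward is $0.8$ and the speeding reward is $1.2$; a low $SR$ on that road can still steer the agent toward the lawful choice.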



