\documentclass{article}
\usepackage{fancyhdr,utopia,enumerate,fancyvrb,relsize,graphicx,amsmath}
\usepackage[hang, small,bf]{caption}
\usepackage[usenames,dvipsnames]{color}

\title{\textbf{Decision Making in Intelligent Systems}\\Assignment 1 - One Play Poker}
\author{Chaim Bastiaan [\#5742889] \& Kai Krabben [\#5743036]}
\date{\today}

\lhead{DMIS 1 - One Play Poker}
\rhead{Chaim Bastiaan \& Kai Krabben}
\pagestyle{fancy}
\parindent 0pt

\begin{document}
\maketitle

\section{Introduction}
In this report we present our implementation of value iteration to solve the finite Markov Decision Process of a game of One Play Poker. In this simplified version of the poker game: 
\begin{itemize}
	\item The deck consists of four suits of four cards each (Jack, Queen, King, Ace), resulting in a total of sixteen cards. 
	\item In total four cards are dealt, two to the player and two to the house.
\end{itemize}
At the start of the game both the player and the house put an ante (1 dollar) into the pot to enter the game. The cards are dealt in alternating rounds, starting with the player: in each round either the player or the house receives a card, after which the player decides whether or not to bet an extra dollar. The house always matches the player's bet. At the end of the game, either the player or the house wins the pot, depending on who has the best hand. Possible combinations in this game are flush, pair, high card and kicker, and hands are evaluated by standard poker rules. In case of a draw, the pot is split equally. \\

The next section provides an overview of the relevant theory for this project. Section \ref{application} describes how this theory translates to the problem of One Play Poker and section \ref{implementation} explains the key points and design choices of the implementation of this application. Section \ref{experiments} provides an overview of the experiments we ran and the conclusion is found in section \ref{conclusion}.

\section{Theory}\label{theory}
	\subsection{Markov Property}
	A state satisfies the Markov property if the conditional probability distribution of future states depends only on the present state and action; i.e., all relevant information about previous states is contained in the present state. Formally:
	\[P(s_{t+1}|s_t, a_t) = P(s_{t+1}|s_t, a_t, r_t, s_{t-1}, a_{t-1},...,r_1, s_0, a_0)\]
	Where $s_t$ is the state at time $t$, $a_t$ is the performed action at time $t$ and $r_t$ is the received reward at time $t$. 
	
	\subsection{Finite Markov Decision Process}
	A reinforcement learning task wherein the states satisfy the Markov property is called a \emph{Markov Decision Process} (MDP). An MDP wherein the sets of possible states and actions are finite is called a \emph{finite MDP}. To define a finite MDP, we need to give:
	\begin{itemize}
		\item the sets of all possible states and actions
		\item a transition model that defines the probability of moving from state $s$ with action $a$ to state $s'$.
		\item a reward model which returns the immediate reward for every state transition. 
	\end{itemize} 
	
	\subsection{Solving the finite MDP}
	The goal of solving a finite MDP is to find an optimal policy. For every state, this policy returns the action that maximizes the expected total future reward. One way to achieve this is value iteration, an iterative method to estimate the expected total reward for every state. The update formula for value iteration is as follows:
	\[V_{k+1}(s) \leftarrow \max_a \displaystyle\sum_{s'}\mathcal{P}_{ss'}^a[\mathcal{R}_{ss'}^a+\gamma V_k(s')]\]
In this formula, $V_k(s)$ is the estimated value for state $s$ after $k$ sweeps. $\mathcal{P}_{ss'}^a$ and $\mathcal{R}_{ss'}^a$ give respectively the probability of and the expected reward for the transition from $s$ to $s'$ under action $a$. After each sweep the estimated value moves closer to the real value of $s$, because the new estimate is a probability-weighted sum of the real immediate reward plus the discounted old estimate of the next state after one transition. After a certain number of sweeps, the estimated values hardly change within a sweep and we can use the value function to find an optimal policy. The optimal policy for a state (i.e. the action that maximizes the expected total future reward) is calculated as:
\[\pi(s) = \arg\max_a\displaystyle\sum_{s'}\mathcal{P}_{ss'}^a[\mathcal{R}_{ss'}^a+\gamma V_k(s')]\]
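The update and policy-extraction formulas above can be sketched in Python. This is a minimal illustration, not our actual implementation: all names are ours, and the transition and reward models are assumed to be available as functions \texttt{P(s, a, s2)} and \texttt{R(s, a, s2)}.

```python
def value_iteration(states, actions, P, R, gamma=0.9, theta=0.05):
    """Repeat the Bellman optimality update until the largest change
    in a sweep drops below theta."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # max over actions of the expected reward plus discounted value
            v_new = max(sum(P(s, a, s2) * (R(s, a, s2) + gamma * V[s2])
                            for s2 in states)
                        for a in actions)
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            return V

def greedy_policy(V, states, actions, P, R, gamma=0.9):
    """pi(s) = argmax_a sum_s' P_ss'^a [R_ss'^a + gamma V(s')]."""
    return {s: max(actions,
                   key=lambda a: sum(P(s, a, s2) * (R(s, a, s2) + gamma * V[s2])
                                     for s2 in states))
            for s in states}
```

Note that this sketch sums over all states $s'$; in practice only the states reachable from $s$ have non-zero transition probability.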

\section{Application}\label{application}
In the case of One Play Poker we define a state by four elements:
\begin{itemize}
	\item player: the cards in the hand of the player
	\item house: the cards in the hand of the house
	\item deck: the cards in the deck
	\item pot: the amount of dollars of the pot
\end{itemize}
The set of possible actions consists of betting and not betting (i.e. a bet of 1 or 0). The transition model of a state $s$ with action $a$ returns a uniform probability distribution over the resulting states, one for each card in the deck that can be drawn next, with the pot increased depending on $a$ (0 for not betting and 2 for betting, since the house always matches the bet). The immediate reward during the game is minus the bet. A positive reward is only received in the end states and is equal to the pot if the player wins, half the pot for a draw and 0 for a lost game (since the money bet was already counted as negative reward). The MDP is solved with value iteration and the optimal policy is stored to compare it with both a random and a handcrafted policy. 

\section{Implementation}\label{implementation}
We implemented the One Play Poker game in Python. This section provides a description of the key points and design choices.

\paragraph{Representing the game}
To represent the game, two classes were implemented:
\begin{itemize}
	\item class \texttt{Card} represents a single playing card by string fields \texttt{value} and \texttt{suit}. The class contains functions to output the card as a string and to check whether two \texttt{Card} objects represent the same playing card. 
	\item class \texttt{State} represents a state in the game by the fields \texttt{player}, \texttt{house}, \texttt{deck} and \texttt{pot}. The first three fields are lists of the \texttt{Card} objects in, respectively, the player's hand, the house's hand and the deck. \texttt{pot} is an integer for the number of dollars in the pot. The class contains functions to randomly draw a card, to output the state to the screen, and to convert the state either to a string or to a simple tuple version of \texttt{player}, \texttt{house} and \texttt{pot} that can be used as a key in the dictionary that stores the value function.
\end{itemize}
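A condensed sketch of the two classes described above (method names here are illustrative and may differ from the actual source):

```python
import random

class Card:
    def __init__(self, value, suit):
        self.value = value   # e.g. 'Jack', 'Queen', 'King', 'Ace'
        self.suit = suit     # e.g. 'Spades', 'Hearts', 'Clubs', 'Diamonds'

    def __str__(self):
        return f"{self.value} of {self.suit}"

    def __eq__(self, other):
        # two Card objects are equal if they represent the same playing card
        return self.value == other.value and self.suit == other.suit

class State:
    def __init__(self, player, house, deck, pot):
        self.player = player  # list of Card objects in the player's hand
        self.house = house    # list of Card objects in the house's hand
        self.deck = deck      # list of Card objects still in the deck
        self.pot = pot        # number of dollars in the pot

    def draw(self):
        """Remove and return a random card from the deck."""
        return self.deck.pop(random.randrange(len(self.deck)))

    def simple(self):
        """Hashable tuple version of the state, usable as a dictionary key."""
        return (tuple(str(c) for c in self.player),
                tuple(str(c) for c in self.house),
                self.pot)
```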

\paragraph{Actions}
The set of possible actions is a simple list $[0,1]$ representing the possible bets. For the generation of all states it is important to consider that a hand (card $x$, card $y$) is equal to the hand (card $y$, card $x$). 

\paragraph{States}
A full deck of sixteen cards is generated as the set of all possible combinations of the values (Jack, Queen, King and Ace) and suits (Spades, Hearts, Clubs and Diamonds). The set of all states as simple state objects is generated from a full deck as follows:
\begin{enumerate}
	\item take all sets of 1,2,3 or 4 cards from the deck.
	\item for each set $x$, generate all possible states from the possible distributions of these cards over the player's and house's hands, with all possible pot sizes in the set $\{2+2n \mid n\in\{0,\dots,|x|\}\}$ (the two antes plus up to one matched bet per card dealt)
	\item if the size of a hand is greater than 1, order the hand alphabetically by the string representations of the cards.
\end{enumerate}
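The enumeration above can be sketched as follows. The helper name is ours and, for brevity, cards are represented as plain strings; the set of pot sizes assumes the two antes plus up to one matched bet per card dealt.

```python
from itertools import combinations

VALUES = ['Jack', 'Queen', 'King', 'Ace']
SUITS = ['Spades', 'Hearts', 'Clubs', 'Diamonds']

def all_simple_states():
    """Enumerate all (player, house, pot) tuples after 1 to 4 cards.

    Cards are dealt alternately starting with the player, so after n
    cards the player holds ceil(n/2) of them and the house the rest.
    Hands are sorted so that (x, y) and (y, x) map to the same state.
    """
    deck = [f"{v} of {s}" for v in VALUES for s in SUITS]
    states = set()
    for n in range(1, 5):
        for cards in combinations(deck, n):
            k = (n + 1) // 2  # number of cards in the player's hand
            for player in combinations(cards, k):
                house = tuple(c for c in cards if c not in player)
                # pot: 2 dollars ante plus 2 per matched bet, one bet per card
                for pot in range(2, 2 * n + 3, 2):
                    states.add((tuple(sorted(player)),
                                tuple(sorted(house)), pot))
    return states
```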

\paragraph{Transition Model}
The transition model is a function of a state $s$ and action $a$ that returns a list of all possible next states together with the uniform probability of each of these state transitions:
\begin{enumerate}
	\item initialize the list of possible next states as an empty list
	\item for each card $c$ in the deck, generate the resulting state after drawing this card and add it to the list
	\item calculate the uniform state transition probability as $1/|\texttt{deck}_s|$
	\item return the list of possible next states and the transition probability
\end{enumerate}
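A sketch of this function, using the simple tuple representation of a state rather than the \texttt{State} class (all names are illustrative):

```python
def transitions(state, action):
    """Possible next states after betting `action` (0 or 1) and drawing
    a card, plus the uniform probability of each single transition.
    `state` is a (player, house, deck, pot) tuple of tuples here."""
    player, house, deck, pot = state
    pot += 2 * action  # the house always matches the player's bet
    # dealing alternates starting with the player, so the player draws
    # whenever both hands currently have the same size
    to_player = len(player) == len(house)
    next_states = []
    for card in deck:
        rest = tuple(c for c in deck if c != card)
        if to_player:
            next_states.append((tuple(sorted(player + (card,))),
                                house, rest, pot))
        else:
            next_states.append((player, tuple(sorted(house + (card,))),
                                rest, pot))
    return next_states, 1.0 / len(deck)
```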

\paragraph{Reward Model}
The reward model is a function of the result state $s$ reached by performing action $a$ and returns the immediate reward for this state transition:
\begin{enumerate}
	\item initialize reward to zero
	\item if a bet was placed, subtract the bet from the reward
	\item if the state is a terminal state, add the pot to the reward for a win and half the pot for a draw
	\item return the reward
\end{enumerate}
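A sketch of the reward model over the tuple representation of a state; \texttt{winner} stands in for a hand-evaluation helper (hypothetical here) that compares the two hands by standard poker rules:

```python
def reward(state, action, winner):
    """Immediate reward for arriving in `state` after betting `action`.
    `winner(player, house)` is assumed to return 'player', 'house' or
    'draw' according to standard poker rules (not shown here)."""
    player, house, deck, pot = state
    r = -action  # a bet is counted as negative reward immediately
    terminal = len(player) == 2 and len(house) == 2
    if terminal:
        outcome = winner(player, house)
        if outcome == 'player':
            r += pot       # the winner takes the whole pot
        elif outcome == 'draw':
            r += pot / 2   # the pot is split equally
    return r
```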

\paragraph{Value Iteration} 
We implemented value iteration quite straightforwardly from the algorithm in the lecture slides (see figure \ref{valueIteration}), with default values $\theta=0.05$ and $\gamma=0.9$. The value function $V$ is stored as a dictionary with the simple version of each state $s\in S$ as key and the current estimate of $V(s)$ as value. The optimal policy $P$ is generated after value iteration has completed and stored in a dictionary with the simple version of each state $s\in S$ as key and the optimal action $a$ for that state as value. The optimal policy is also written to two text files, one for the keys and one for the values, so that it can be saved and reused later.
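The export to the two text files can be sketched as follows (function and file names are illustrative, not the ones used by our program):

```python
from ast import literal_eval

def save_policy(policy, keys_file='policy_keys.txt',
                values_file='policy_values.txt'):
    """Store the policy dictionary in two text files, one line per state."""
    with open(keys_file, 'w') as kf, open(values_file, 'w') as vf:
        for state, action in policy.items():
            kf.write(repr(state) + '\n')  # simple state tuple as key
            vf.write(str(action) + '\n')  # optimal action (0 or 1)

def load_policy(keys_file='policy_keys.txt',
                values_file='policy_values.txt'):
    """Rebuild the policy dictionary from the two files."""
    with open(keys_file) as kf, open(values_file) as vf:
        return {literal_eval(k): int(v) for k, v in zip(kf, vf)}
```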

\paragraph{Usage}
How to....

\begin{figure}%
	\includegraphics[width=\columnwidth]{valueIteration.png}%
	\caption{Pseudo Code for Value Iteration}%
	\label{valueIteration}%
\end{figure}

\paragraph{Playing the game}
The program provides a function \texttt{newGame} to start a new game from an initial state; it calls the function \texttt{playRound}, which recursively plays one round of the game until the game ends. In each round the player's action can be provided by three different policies:
\begin{itemize}
	\item The optimal policy that resulted from value iteration.
	\item A random policy that bets at random.
	\item A handcrafted policy that we created ourselves. This handcrafted policy consists of the following:
	\begin{itemize}
		\item If one card has been drawn, bet if the card is a king or an ace.
		\item If two cards have been drawn, bet if your card is higher than the house's card.
		\item If three cards have been drawn, bet if you have a combination (pair or flush). 
	\end{itemize}
\end{itemize}
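The handcrafted rules above can be sketched as a single function. The representation and names are illustrative; the behaviour after the fourth card, which the rules do not cover, defaults here to not betting.

```python
# order used to compare single card values
RANK = {'Jack': 0, 'Queen': 1, 'King': 2, 'Ace': 3}

def handcrafted_policy(player, house):
    """Return the bet (0 or 1) according to the three rules above.
    Hands are lists of (value, suit) tuples."""
    drawn = len(player) + len(house)
    if drawn == 1:
        # rule 1: bet on a high first card
        return 1 if player[0][0] in ('King', 'Ace') else 0
    if drawn == 2:
        # rule 2: bet if our card beats the house's card
        return 1 if RANK[player[0][0]] > RANK[house[0][0]] else 0
    if drawn == 3:
        # rule 3: bet on a combination (pair or flush)
        values = [c[0] for c in player]
        suits = [c[1] for c in player]
        return 1 if values[0] == values[1] or suits[0] == suits[1] else 0
    return 0  # no rule given for the final round; default to no bet
```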

\section{Experiments and Results}\label{experiments}
The value iteration took four full sweeps. Value iteration plus generating and exporting the optimal policy took about fifteen to twenty minutes. After the process was done, we tested the policy against the random and handcrafted policies by running 50,000 games with each policy and calculating the mean total reward. The results are given in table \ref{results}. For each set of 50,000 games the same random seed is used, so that the same 50,000 games are played (the same cards are randomly drawn) and the results are directly comparable. From the results we can conclude that, without any prior knowledge of the game of One Play Poker, the policy generated through value iteration yields results that are above chance level and at least as good as the handcrafted policy that is based on our personal rational evaluation of the different states in the game. 
 
\begin{table}%
	\begin{tabular}{|c|c|c|c|}
		\hline
		\textbf{Policy} & Random & HandCrafted & Value Iteration \\ \hline
		\textbf{Mean total reward} & 0 & 0 & 0 \\ \hline
	\end{tabular}
	\caption{Mean total reward for 50,000 runs of One Play Poker for different policies}
	\label{results}
\end{table}

\section{Conclusion and Discussion}\label{conclusion}
One Play Poker is such a simplified version of poker that it is easy for humans to provide the best action for a given state, and even to give a rational explanation for this course of action. It is therefore remarkable that a reinforcement learning process without any prior knowledge of the game is able to perform at least as well as a human player, just by using value iteration. We expect that for more complicated games, value iteration will still be able to learn a nearly optimal policy where most humans would fail to do so. However, to extend to more complicated problems, some improvements to our program will have to be made to make it more efficient and less time-consuming. The most time can probably be saved by generating the transition and reward models once and storing them in a dictionary beforehand, instead of recalculating them for every state transition in every sweep of value iteration.

\end{document}