\documentclass[a4paper,11pt]{article}
\usepackage{amsmath}
\usepackage[pdftex]{graphicx}
\usepackage{soul, color}

\begin{document}

\begin{titlepage}
\begin{center}

\textsc{\Huge Tetris: Staying Alive}\\[0.5cm]
\textsc{\Large May 7th, 2012}\\[0.5cm]
\textsc{\Large Mark Larsen, Zachary O'Connor, Mark Swatosh}\\[0.25cm]
\includegraphics[width=3in]{./mm2.png}


\end{center}
\end{titlepage}
\tableofcontents
\newpage
\pagenumbering{arabic}


\section{Domain}
Tetris, from the perspective of artificial intelligence, is incredibly interesting. There is no way to ``solve'' Tetris; the game simply keeps going until the player loses (for more on the background of Tetris, see Mark Larsen's search algorithm paper). It is relatively easy to write a simple Tetris AI that performs mediocrely; the trick is figuring out which method will consistently stay alive the longest.

There were two very important pieces to our AI---the search algorithm and the heuristic function. Because a game of Tetris has theoretically infinite depth, we had to find a way to carve out a finite space to search. The heuristic also had to be able to evaluate non-goal states: because there is no ``edge cost'' in Tetris, the heuristic was all we could use to evaluate each state.

The heuristic is a huge part of creating an AI for Tetris. We had to define several attributes to measure how good a state is---for example, the height of the highest occupied row, the height of each column, and inaccessible empty areas (holes). Each attribute was given a weight, and we tweaked the weights until a good combination was found.
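An evaluation of this kind can be sketched as follows. This is a minimal reconstruction rather than our exact code: the attribute names and the weights shown are hypothetical, and the board is assumed to be a list of rows of 0/1 cells.

```python
# Sketch of a weighted board heuristic (lower score = better state).
# All weights here are hypothetical placeholders, not our tuned values.

def column_heights(board):
    """Height of each column: number of rows from the topmost filled cell down."""
    rows, cols = len(board), len(board[0])
    heights = []
    for c in range(cols):
        h = 0
        for r in range(rows):
            if board[r][c]:
                h = rows - r
                break
        heights.append(h)
    return heights

def count_holes(board):
    """Empty cells with at least one filled cell above them (inaccessible areas)."""
    rows, cols = len(board), len(board[0])
    holes = 0
    for c in range(cols):
        seen_block = False
        for r in range(rows):
            if board[r][c]:
                seen_block = True
            elif seen_block:
                holes += 1
    return holes

def evaluate(board, w_height=1.0, w_max=2.0, w_holes=4.0):
    """Weighted sum of penalties: tall stacks and buried holes are bad."""
    heights = column_heights(board)
    return (w_height * sum(heights)
            + w_max * max(heights)
            + w_holes * count_holes(board))
```

Tuning then amounts to adjusting the three weights until the AI's average survival improves.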

\section{Question}
It is important to keep in mind the perspective we had while devising this AI. The goal was not to create an AI that clears lines but, instead, one that tries to stay alive as long as possible. The longer the AI stays alive, the more lines it will eventually be able to clear.

So the question, then, was whether we could create an AI that stays alive as long as possible, given an arbitrary string of random pieces provided by the game---a very ambitious goal.

\section{Approach}
We adapted an open-source Tetris clone written in Python to include support for various AIs. We implemented a variety of algorithms, working our way up to depth-limited best-first search (DLBFS) with a depth of 2 combined with one level of Minimax. We ran each algorithm a large number of times (where feasible) to determine the average and maximum number of lines cleared, and these numbers were then used to compare the quality of the algorithms.

The first two AIs we wrote, greedy best-first search (GBFS) and DLBFS, are the obvious ones: they check every possible position of the first piece, and of the first and second pieces, respectively. The reasoning behind Minimax is less obvious. Minimax was created for adversarial situations, in which some opponent attempts to make the game as difficult as possible for the player. In random situations, however, such as the selection of tetrominoes, the piece generator is just as likely to give us a good piece as a bad one. The idea is that, by assuming the opponent is better than it actually is, Minimax would play more conservatively and essentially prepare for the worst, and that this would improve on standard GBFS and DLBFS.
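The contrast between the greedy search and the minimax variant can be sketched as follows. The helpers \texttt{placements} and \texttt{evaluate} are hypothetical stand-ins for our actual game code, and every piece is assumed to have at least one legal placement.

```python
# Sketch of GBFS versus GBFS + one level of minimax (Minimax1).
# `placements(board, piece)` is assumed to return the list of boards
# resulting from every legal placement of `piece`, and `evaluate(board)`
# scores a board (higher is better).

def gbfs_move(board, piece, placements, evaluate):
    """Greedy: take the placement of the current piece with the best score."""
    return max(placements(board, piece), key=evaluate)

def minimax1_move(board, piece, all_pieces, placements, evaluate):
    """Take the placement whose worst possible next piece still scores best."""
    def worst_case(next_board):
        # The adversary hands us whichever piece minimizes our best reply.
        return min(
            max(evaluate(b) for b in placements(next_board, p))
            for p in all_pieces
        )
    return max(placements(board, piece), key=worst_case)
```

DLBFS and Minimax2 follow the same pattern, with one more level of lookahead over the known next piece.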

%Franken pieces

Minimax takes between 10 and 40 seconds to determine where to place a piece. To combat this cost, we invented franken-pieces. The idea behind franken-pieces is that Minimax only cares about the worst-case scenario; thus, if we take the oddities of the normal pieces and combine them into new shapes, we obtain a smaller set of pieces that still captures the worst case for almost every node. In theory, this would let Minimax play almost as optimally while significantly reducing the branching factor.

\section{Method}
Our testing method consisted of running each algorithm multiple times and recording the number of lines cleared in each run. We determined that 100 games per AI would finish in a reasonable amount of time while giving us sufficient data on which to evaluate each AI.
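The harness for this can be sketched in a few lines; \texttt{play\_game} is a hypothetical stand-in for playing one full game of the adapted clone with a given AI and returning the number of lines cleared.

```python
import random
import statistics

def run_trials(play_game, games=100, seed=0):
    """Play `games` games and return (average, maximum) lines cleared.

    `play_game(rng)` is a hypothetical callable that plays one full game
    using the given random source and returns its line count."""
    rng = random.Random(seed)
    results = [play_game(rng) for _ in range(games)]
    return statistics.mean(results), max(results)
```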

For each AI, we recorded the average and best-case number of lines cleared. Given a perfect sequence of pieces, however, each AI can theoretically play forever, which makes the best case interesting but not particularly useful for comparing the algorithms. We did not record the worst case, although it too would have been interesting: there is a sequence of tetrominoes for which it has been mathematically proven that the player can clear only a single line before dying. We concluded that, since the extremes are highly influenced by randomness, the average number of lines cleared would be our statistic of choice for comparing the performance of our algorithms.

\subsection{Expectations}
We naturally expected that incorporating more information into our AI would yield better results. Thus DLBFS, which looks at both the current and next piece, should outperform GBFS, and similarly Minimax2 should outperform Minimax1. We also expected DLBFS to outperform Minimax1, because DLBFS uses the additional information of the next piece, while Minimax1 instead assumes the next piece is the worst one possible---often an incorrect assumption.

We were unsure whether Minimax1 would outperform GBFS (and, similarly, Minimax2 versus DLBFS), because we did not know how playing conservatively would affect the decisions made by the AI. Likewise, we did not know how FrankenMinimax would perform compared to the corresponding Minimax algorithm or to GBFS/DLBFS. We acknowledge that our particular choice of franken-pieces most likely had a significant impact on the performance of FrankenMinimax, and that further research could be done on creating and utilizing franken-pieces.



\section{Results}

Due to time constraints, we were not able to complete 100 runs for either Minimax2 or FrankenMinimax2. However, we did collect enough results to tentatively compare them to the other algorithms.

\subsection{Description of the Algorithms}

\begin{itemize}

\item Minimax1 is GBFS + 1 level of minimax

\item Minimax2 is DLBFS + 1 level of minimax

\item The Franken algorithms are the same algorithms, except that the pieces considered by minimax are the two ``franken-pieces'' we created plus the square piece.

\end{itemize}

\subsection{The Numbers}

\begin{tabular}{| l | c | c | c | c |} \hline
  {\bf Algorithm} & {\bf Games Run} & {\bf Avg Lines} & {\bf Max Lines} & {\bf Avg Move Time} \\ \hline
  GBFS            & 100 &  45 &  194 &  0.015 sec \\ \hline
  DLBFS           & 100 & 243 & 1318 &  0.27 sec \\ \hline\hline

  Minimax1        & 100 &  65 &  155 &  1.73 sec \\ \hline
  Minimax2        & 10  & 849 & 2168 & 35 sec \\ \hline\hline

  FrankenMinimax1 & 100 &  47 &  122 &  0.8 sec \\ \hline
  FrankenMinimax2 & 60  & 292 &  748 & 19 sec \\ \hline
\end{tabular}\hspace{-5 mm}

\subsection{Discussion}

At first glance, we can see that incorporating the next piece provided a very significant improvement to each algorithm: the average cases without the next piece were uniformly below 100 lines, and no algorithm ever cleared 200 lines without it. However, using the next piece also increased the average move time (and, we can assume, the search space) by a factor of about 20.

The incorporation of Minimax into GBFS provided a modest improvement to the average case. Although our results for it are limited, adding Minimax to DLBFS provided a huge performance increase, leaving all the other algorithms in the dust. The cost of a level of minimax, however, was an increase in move time by a factor of about 120, making even Minimax1 infeasible for a real-time game of Tetris.

A very interesting result is that FrankenMinimax improved the average-case performance of both GBFS and DLBFS. The improvement may be marginal, but it cannot be denied that the addition of franken-pieces adds some amount of useful information to the AI. The cost of adding franken-pieces was about half that of adding full minimax, increasing move time by a factor of about 60.

A final, very interesting observation is that, with the exception of Minimax2, adding a minimax component decreased the best case while increasing the average case. This evidence may reflect the effect of playing conservatively on AI performance. In retrospect, it would have been extremely interesting to see whether Minimax---even FrankenMinimax1---significantly increased the worst-case performance of these algorithms.

\subsection{Moving forward}

A future direction for research would be to write an ExpectiMinimax AI and compare its spread of results with that of Minimax. We would theorize that preparing for the average case, rather than the worst case, would significantly improve the best-case statistic and lower the worst-case statistic. It would be very interesting, however, to see the effect on the average-case statistic as compared to Minimax.
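Relative to one level of minimax, the change would be small: the adversary's minimum over next pieces becomes an average over equally likely next pieces. A sketch, using hypothetical \texttt{placements} and \texttt{evaluate} helpers standing in for the real game code:

```python
# Sketch of one level of ExpectiMinimax: rather than assuming the worst
# next piece, average the best reply over all equally likely pieces.
# `placements(board, piece)` is assumed to return the boards resulting
# from every legal placement of `piece`; `evaluate` scores a board
# (higher is better).

def expectiminimax1_move(board, piece, all_pieces, placements, evaluate):
    def expected_value(next_board):
        best_replies = [
            max(evaluate(b) for b in placements(next_board, p))
            for p in all_pieces
        ]
        return sum(best_replies) / len(best_replies)
    return max(placements(board, piece), key=expected_value)
```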

In this paper, we tweaked our heuristic weights by hand in an attempt to maximize performance. It would be interesting to design a genetic algorithm that tunes the heuristic automatically, based on the performance of an AI that uses it.
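One simple form of this idea can be sketched as follows; \texttt{fitness(weights)} is a hypothetical stand-in for playing several games with a heuristic using those weights and returning the average lines cleared, and the population size, weight range, and mutation scale are illustrative guesses.

```python
import random

def evolve(fitness, dims=3, pop_size=8, generations=20, seed=0):
    """Evolve a weight vector that maximizes `fitness` (a hypothetical
    stand-in for average lines cleared when playing with those weights)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 5) for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half; the best individuals survive unchanged.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            # Uniform crossover plus a small Gaussian mutation.
            children.append([rng.choice(pair) + rng.gauss(0, 0.1)
                             for pair in zip(a, b)])
        pop = survivors + children
    return max(pop, key=fitness)
```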

As mentioned in the Expectations section, our particular choice of franken-pieces may have strongly affected the performance of the FrankenMinimax algorithms. Maximizing the performance of FrankenMinimax requires a closer study of which shapes work best for the algorithm.

Before pursuing any of these research directions, however, it would most likely be necessary to rewrite the code from Python into another language, both to improve efficiency and to take better advantage of multithreading, which Python handles poorly.




\section{Lessons Learned}
\subsection{AI Related}
\begin{itemize}
\item {\bf Minimax can be used outside of purely adversarial situations}
\end{itemize}
\subsection{Life Lessons}
\begin{itemize}
\item Choose a language other than Python if multithreading is a possibility
\end{itemize}
\end{document}
