\documentclass[9pt]{article}
\usepackage{graphicx,algorithm,algorithmic,listings,a4wide,float,sidecap,multirow}
\usepackage[usenames]{color}
\definecolor{NavyBlue}{rgb}{0.08,0,0.6}

\begin{document}

\title{\color{NavyBlue}Virtual Pool}
\author{\color{NavyBlue}Abdulhamid Ghandour, Thomas John, Jaime Peretzman, Bharadwaj Vellore\\
  \color{NavyBlue}ag2672,tj2183,jp2642,vrb2102 @columbia.edu
}
\maketitle

\tableofcontents

\newpage
\section{\color{NavyBlue}Introduction}
Virtual pool is a pool- or billiards-like game played on an image of a pool table. Game play is based on a projected image of a pool-table-like surface with balls positioned on it. A player can then use a cue or cue-like object to `strike' a ball. The ball which was struck is then projected in the direction it was struck, and made to settle at a new final position, possibly following collisions with other balls on the table. After the balls on the table have settled into their new positions, the player can strike them once again. \newline

\noindent The detection of the `strike' is done using a camera which captures the projected image with the cue stick over it. The image is then processed to determine the direction and speed of the movement of the cue tip relative to the position of the balls. The data gathered from this processing stage is then used to compute the trajectory and distance of motion of the balls, and reposition the balls appropriately. As the balls move and are repositioned, new images of the table and the balls are redrawn and projected for the player to be able to admire his or her stroke and plan the next one.

\subsection{Gameplay}
The system should be started up with the camera pointing roughly in the direction of the monitor or screen used to display the pool table. When the system begins, it automatically starts to calibrate. This process involves a degree of human intervention. During the calibration process, the system directs the user to move the camera so that the image of the table is visible to the camera. The directions may be to move the camera right or left, up or down, and forwards or backwards. When the system is ready, it requires the user to wait briefly while it completes the calibration, and then the game begins. \newline

\noindent When the game is in progress, the user can employ keys on the board to trigger a variety of actions. Pressing Key 1 at any time will initiate recalibration of the system. This is particularly useful if a user accidentally disturbs the camera during play. When recalibration is requested, the state of the game is saved and restored afterwards, so the game can continue from where it was disrupted. \newline

\noindent The game is complete when all balls on the table are pocketed. The user is then required to press another key to begin a new game. In fact, at any time during a game, the user can employ this key to reset, and begin a new game. \newline

\noindent It goes without saying that mastering virtual pool requires practice! To help novices, the system provides a switch that the user can throw to turn on a crosshair on screen. This serves as a guide to the player on the position of the cue as he or she moves it. Experts can play without the crosshair displayed. \newline

\noindent Messages from the system to the user are displayed on the LCD screen on the board. Players' points are displayed on the seven-segment display system. Players always take alternate turns. A player who pockets the white cue ball incurs a penalty. When the white ball is pocketed, it is returned to the table and placed at a random new position which is guaranteed not to be occupied by another ball.\newline

\subsection{Game Configuration}
The following switches are available to the user to select a configuration of the game and to trigger events.

\begin{table}
\centering
\begin{tabular}{| c | l |}
\hline
Switch & Function \\ \hline \hline
Key 0 & Reset System \\
Key 1 & Calibrate System \\
Key 2 & Start new game \\
SW 10 & Turn ON/OFF crosshair \\
SW 11 & Turn ON/OFF striking colored balls \\
SW 9-0 & Green Threshold \\ \hline
\end{tabular}
\end{table}

\section{\color{NavyBlue}Design Overview}
The ``Virtual Pool'' or ``Interactive Projection Pool'' game system is built out of a combination of hardware and software components. The system is centred around a NIOS II processor\cite{altera:nios2hb}, a 32-bit general-purpose embedded processor. The NIOS II is a configurable soft-core processor, and in this case it is targeted at the Cyclone II\cite{altera:cyclone2dh} family FPGA from Altera. \newline

\noindent The system comprises a camera and a projection system connected to the Altera DE2 board, which carries the FPGA, memories and other peripherals for connectivity. The physical configuration of the board is illustrated in Figure~\ref{fig:board}, along with an equivalent block view.

\begin{figure}[h!]
  \centering
  \includegraphics[scale=0.3]{camera.eps}
  \caption{Board Level Connection}
  \label{fig:board}
\end{figure}
\begin{figure}[h!]
  \centering
  \includegraphics[scale=0.4]{Altera_peripherals.eps}
  \caption{Block View of Physical Interfaces}
  \label{fig:extIntfs}
\end{figure}

\subsection{High Level View}

\subsubsection{Basic Ideas}
The implementation of the cue-detection is based on the color scheme adopted for the projected image. The pool table is colored green and all the balls placed on the table are colors that have large green components. The module receiving pixel data from the camera (when the camera is pointed at the image of the pool table) expects to see an image which is largely green (within a threshold to allow for environmental noise). As the module scans the image, it is therefore able to identify the presence of objects between the camera and the table by identifying portions of the picture that are distinctly (based on a threshold) different from green. The module then applies a set of image processing algorithms to determine whether the obstacle resembles a cue, and if yes, the position of its tip. This result is then applied to determine whether the cue will impact or has impacted a ball drawn on the table, and what the consequent displacement of the ball is. \newline

\subsubsection{Block View}
The IPG architecture is based on a NIOS II/f processor and six custom-made peripherals. The processor and the six modules are interconnected through an Avalon data bus, as shown in Figure~\ref{fig:nios_diag}. The six hardware modules are the Camera I$^2$C Interface, the Vision System, the SRAM, the Sound Driver, the VGA Controller and the User Interface module.

\begin{figure}[h!]
  \centering
  \includegraphics[scale=0.3]{niosDiag.eps}
  \caption{Block Diagram View}
  \label{fig:nios_diag}
\end{figure}

\noindent The main tasks of the modules and the processor are as follows.

\begin{itemize}
 \item Camera I$^2$C Interface: This module communicates with the camera and enables a driver to customize the configuration of the camera as required. Parameters that are selectable include the frame rate, resolution and active pixel area.
 \item Vision System: This module consists of three submodules: the Camera Interface Pixel Processor, the Calibration System and the Recognition System. Together, these modules receive pixel data from the camera, select the required portion of the image, process the image and identify the cue-like objects in the image. 
 \item SRAM: The SRAM stores the instructions and data used in the software program that runs on the NIOS processor, and is accessed via a SRAM controller Avalon component.
 \item Sound Driver: This module implements the interface with an on-board DAC and helps generate the clatter associated with collisions among balls and between balls and the boundaries of the table.
 \item VGA controller: This module generates the sprites for the balls and the picture of the pool table, and controls the VGA display.
 \item User Interface: This module comprises all components that are required for the system to communicate with users, including the LCD display for sending messages to the user, the seven segment display for throwing up scores, and the switches and keys to receive configuration preferences and event triggers from the user.
 \item NIOS II/f Processor: This is the centre of the system. All peripheral control and communication, calibration, ball-dynamics simulation, including transfer of momentum on collision, acceleration and damping, and game logic happen in software running on the NIOS.
\end{itemize}

\subsection{Hardware-Software partitioning}
The modules listed in the earlier section are implemented in hardware, in software, or in a mixture of both. Presented here is a short summary of the division of labor. \newline

\noindent The interface to the camera is implemented as a simple piece of hardware that implements the I$^2$C physical layer, and a piece of software that uses the register interface exported by the hardware to implement the I$^2$C protocol. \newline

\noindent The vision system is mostly done in hardware. However, the front end of the system as a whole comprises several components that need to work in synchronization and exchange data. This synchronization happens via software. For instance, the calibration module within the front end identifies the active green area of the picture captured by the camera and communicates it to the software, which relays the information to the image cropper module (detailed later). Similarly, the vision processing algorithm communicates the position of the tip of the cue to software once every frame. \newline

\noindent The sound driver works almost entirely in hardware and the role of the software is restricted to requesting that the sound be played. \newline

\noindent The VGA controller is highly configurable and offers an extensive set of options that the software can choose from to format the image that is displayed. The options include the size of the pool table to be drawn on screen, the size of the margins around the pool table, the number, position and colours of the balls that are drawn on the table, and so on.

\subsection{System Configuration}
The NIOS II processor family uses a 32-bit RISC architecture. The instance used in this project is the NIOS II/f processor, clocked at 50 MHz and attached to a 4 KB instruction cache and a 2 KB data cache. The processor is also built with hardware multiplication and division units, along with dynamic branch prediction and barrel shifter logic. These last features are an important factor in being able to scale the system up to perform vector physics simulations smoothly, even for a large number of balls that suffer near-simultaneous collisions.

\section{\color{NavyBlue}Detailed Design}

Some of the significant challenges in the design of the system are the following:
\begin{itemize}
  \item The output of the camera has considerable noise, and filtering out the noise is important for correctly identifying obstacles in the camera's view.
  \item The camera and the display use different resolutions, and the span of the camera's view may be different from the size of the projected image. This implies that most algorithms running within the system are always dealing with two co-ordinate systems. This also imposes the need for additional error detection and correction schemes.
  \item Users are expected to employ objects that are discernibly cue-like when playing. However, the algorithm should also be robust enough to deal with scenarios where random objects appear before the camera. This is particularly necessary in order to be able to deal with users' hands being extended into the `playing field'.
  \item The simulation of the movement, collisions and deceleration of the balls involves a significant amount of non-trivial vector mathematics to be implemented.
\end{itemize}

\subsection{Camera Controller}
This section details the interfacing of the external camera with the FPGA. The camera used in this system has the Micron MT9M011 CMOS active-pixel digital image sensor\cite{micron:datasheet}, which is able to capture frames at SXGA, VGA and CIF resolutions at close-to-video refresh rates.

\subsubsection{Camera Physical Interface}
The camera, a TRDB-DC2 from Terasic\cite{terasic:manual}, interfaces with the board via a 40-pin flat cable as illustrated in Figure~\ref{fig:board}. The DE2 board provides two 40 pin expansion headers. Each header connects directly to 36 pins on the Cyclone-II FPGA. In this case, the GPIO\verb|_|1 slot is used for connecting the camera. Of the two sensors available in the MT9M011, sensor 1 is used. The signals corresponding to this sensor - serial control, clock and data - are carried on pins 1 to 18 of the 40-pin interface. Details of the pin specification can be obtained from \cite{terasic:manual}.

\subsubsection{Camera Register Configuration}
Table~\ref{tab:regconf} gives a full list of the registers available to be configured on the MT9M011 and the manner in which they are expected to be configured for the purposes of this application. This configuration is subject to change based on design choices, particularly the frame rate, resolution and colour-specific gains, which are expected to be tuned based on observations from initial tests. Hence some of these register values are left undefined. It may be noted that the configuration of these registers is controlled in software, which enables the application to use these settings flexibly. The hardware for the camera interface only provides the I$^2$C interface to send values to the camera hardware and receive values from it.

\begin{table*}[htbp]
\centering
\caption{TRDB-DC2 Register Settings}
\label{tab:regconf}
\begin{tabular*}{\textwidth}{| l | l | l | l | p{2.05in} |}
        \hline
        Register        & Offset& Default       & Configured    & Notes                                         \\ \hline
        Chip Version    & 0x00  & 0x1433        & -             & Read Only                                     \\
        Row Start       & 0x01  & 0x000C        & 0x00D5        & There are 8 dark rows and 4 rows skipped to allow for boundary effects \\
        Column Start    & 0x02  & 0x001E        & 0x0140        & There are 26 dark columns and 4 columns skipped to allow for boundary effects \\
        Row Width       & 0x03  & 0x0400        & 0x01E0        & 480 rows of active video \\
        Column Width    & 0x04  & 0x0500        & 0x0280        & 640 columns of active video pixels \\
        Horizontal Blanking B& 0x05 & 0x018C    & 0x00CA        & 202 (minimum permitted when using two ADCs) pixel horizontal blanking \\
        Vertical Blanking B & 0x06 & 0x0032     & 0x0019        & 25 row vertical blanking \\
        Horizontal Blanking A & 0x07 & 0x00C6   & 0x00C6        & Unused (Relevant only when context switching is employed)     \\
        Vertical Blanking A & 0x08 & 0x0019     & 0x0019        & Unused (Relevant only when context switching is employed)     \\
        Shutter Width   & 0x09  & 0x0432        & 0x022A        & Reduced to increase frame rate \\
        Row Speed       & 0x0A  & 0x0001        & 0x0001        & Unchanged \\
        Extra Delay     & 0x0B  & 0x0000        & 0x0000        & Unchanged  \\
        Shutter Delay   & 0x0C  & 0x0000        & 0x0000        & Unchanged  \\
        Reset           & 0x0D  & 0x0008        & 0x0008        & Unchanged \\
FRAME\verb|_|VALID Control & 0x1F & 0x0000      & 0x0000        & Unchanged \\
        Read Mode - Context B & 0x20 & 0x0020   & 0x0020        & Unchanged \\
        Read Mode - Context A & 0x21 & 0x040C   & 0x040C        & Unchanged \\
        Show Control    & 0x22  & 0x0129        & 0x0129        & Unchanged     \\
        Flash Control   & 0x23  & 0x0608        & 0x0608        & Unchanged     \\
        Green 1 Gain    & 0x2B  & 0x0020        & 0x0020        & Unchanged \\
        Blue Gain       & 0x2C  & 0x0020        & 0x0020        & Unchanged     \\
        Red Gain        & 0x2D  & 0x0020        & 0x0020        & Unchanged \\
        Green 2 Gain    & 0x2E  & 0x0020        & 0x0020        & Unchanged \\
        Global Gain     & 0x2F  & 0x0020        & 0x0020        & Unchanged \\
        Context Control & 0xC8  & 0x000B        & 0x000B        & Unchanged     \\ \hline
\end{tabular*}
\end{table*}

\subsubsection{Camera Control Module}
The camera control module is a combination of a hardware block and a software driver that work together to implement the I$^2$C-like protocol used to configure the registers of the camera. The hardware module simply implements the bit-level logic responsible for putting a `1' or a `0' on a pin, or reading data from it. The entire I$^2$C protocol is implemented in software. This includes controlling the clock that accompanies the data. \newline

\noindent The protocol for the camera control interface is simple. Handshaking during data transfer happens via a Start bit, a Stop bit and ACK/NACK bits. The camera control module behaves as the master and is responsible for generating the clock for all transactions with the camera. As master, it is also responsible for generating the Start and Stop bits. Start and Stop bits on the SDAT line are generated only when the clock is HIGH, while data bits are put on the SDAT line only when the clock is LOW. A Start bit involves a HIGH to LOW transition on SDAT while the clock is HIGH; a Stop bit involves a LOW to HIGH transition on SDAT while the clock is HIGH.

\paragraph{I$^2$C Interface}
The I$^2$C interface comprises two lines - a clock and a serial data line. Each write to a register in the sensor happens in the following steps:
\begin{itemize}
\item Send a START bit; this is done by first pulling the data line low and then pulling the clock line low.
\item Send the WRITE mode slave address (0xBA) with the SDATA being clocked by the SCLK line
\item Receive a single bit ACK
\item Send the register address (8 bits) on the SDATA line, again accompanied by the SCLK
\item Receive a single bit ACK
\item Send the MSB of the value to be written to the register on the SDATA line
\item Receive a single bit ACK
\item Send the LSB of the value to be written to the register on the SDATA line
\item Receive a single bit ACK
\item Send a STOP bit; this is done by pulling up the clock line and then pulling up the data line
\end{itemize}

\noindent This is implemented by having the software send a series of commands to hardware by setting registers corresponding to the data to be sent on the SCLK and SDAT lines. The register corresponding to SCLK is set to `0' to pull the SCLK line low and `1' to pull it high. In contrast, the SDAT line is used to write as well as read data. Whenever a read is being performed (for instance, to receive the acknowledge from the camera), the internal driver of the SDAT line needs to be tri-stated. To enable this, the register used to control the SDAT line carries an extra Enable bit that causes the SDAT line to be tri-stated when `0' and driven when `1'. When enabled, the value of the data line is controlled by another bit, just as in the case of the SCLK.
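The write sequence above can be sketched in software as follows. This is a host-testable model for illustration: set\verb|_|sclk, set\verb|_|sdat and release\verb|_|sdat are hypothetical helpers standing in for writes to the two memory-mapped registers, and the real driver would also insert the delays needed to meet the I$^2$C bus timing.

```c
#include <assert.h>
#include <stdint.h>

/* Software model of the two hardware registers. On the target these
 * would be memory-mapped Avalon registers; here they are plain
 * variables so the driver logic can be exercised on a host machine. */
static uint8_t sclk_reg;   /* offset 0, bit 0: SCLK level                       */
static uint8_t sdat_reg;   /* offset 1, bit 0: SDAT level, bit 1: output enable */

static void set_sclk(int v)    { sclk_reg = v & 1; }
static void set_sdat(int v)    { sdat_reg = 0x2 | (v & 1); }  /* driven        */
static void release_sdat(void) { sdat_reg = 0; }              /* tri-stated    */

/* Clock one data bit out: data changes only while SCLK is LOW. */
static void send_bit(int b)
{
    set_sclk(0);
    set_sdat(b);
    set_sclk(1);
    set_sclk(0);
}

/* Send one byte MSB first, then release SDAT for the ACK clock. */
static void send_byte(uint8_t byte)
{
    for (int i = 7; i >= 0; i--)
        send_bit((byte >> i) & 1);
    set_sclk(0);
    release_sdat();            /* slave drives ACK during the ninth clock */
    set_sclk(1);
    set_sclk(0);
}

/* Write a 16-bit value to a camera register, following the steps above. */
static void camera_write(uint8_t reg, uint16_t value)
{
    /* START: SDAT falls while SCLK is HIGH */
    set_sdat(1); set_sclk(1); set_sdat(0); set_sclk(0);
    send_byte(0xBA);           /* WRITE-mode slave address */
    send_byte(reg);
    send_byte(value >> 8);     /* MSB first */
    send_byte(value & 0xFF);
    /* STOP: SDAT rises while SCLK is HIGH */
    set_sdat(0); set_sclk(1); set_sdat(1);
}
```

After a complete transaction both lines are left high, which is the idle state of the bus.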

\subsubsection{Programming the camera interface}
The registers that the camera control interface exposes to software running on the NIOS are listed in Table~\ref{tab:ciregs}.

\begin{table}
\centering
\caption{Register Description for the I$^2$C Controller}
\label{tab:ciregs}
\begin{tabular}{| c |  c | l |}
\hline
Offset &  Bits & Function \\ \hline \hline
0 & 0 & Value to be output on SCLK line \\
\multirow{2}{*}{1} & 0 & Data to be output on the SDAT line \\
                            & 1 & Enable write on '1', Enable Read on '0' \\ \hline
\end{tabular}
\end{table}

\subsection{Pixel Processing Front End}
The system always functions in one of two modes - calibration and gameplay. Calibration mode always runs first, and may run again upon user request. Calibration is performed by drawing an image of the pool table on the display and then moving the camera until it is positioned such that the entire table lies within the view of the camera. To enable this, black colored margins are drawn around the table so that some basic pixel color recognition can be used to identify the objects that the camera is currently looking at, and therefore how the camera should be moved so that it can see more of the active green pixel area. \newline

\noindent Clearly, during and after calibration, the camera is positioned such that the image captured by the camera contains the entire pool table and then some. However, the margins should be clipped during game play so that they are not visible to the vision algorithm. To enable this, an image cropper component is used that crops the portion of the image that is guaranteed to contain only information about the green area on screen. \newline

\noindent To accommodate the calibration and game play requirements, the front end of the system has the following architecture. The interface to the camera is provided by a pixel processing component that receives the pixel data from the camera along with some synchronization signals. The component simply forwards all data. However, it transforms the synchronization signals such that they can be conveniently used by downstream components. Essentially, the frame-valid signal from the camera is transformed into an end-of-frame, and the line-valid signal is transformed into an end-of-line. Finally, this front end component generates an important signal called valid-green. This signal becomes `1' only when the pixel corresponding to a green in the Bayer pattern received from the camera is on the data lines. Every alternate sample is a green in the Bayer pattern, so the pixel processor generates a valid-green for every second pixel. The timing of these signals is indicated in Figure~\ref{fig:citiming}. \newline

\begin{figure}
\centering
\includegraphics[scale=0.5]{ci_pxl_timing_diag}
\caption{Pixel Processor Component Timing}
\label{fig:citiming}
\end{figure}
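The valid-green selection performed by the pixel processor can be modelled in software as follows. This is only a behavioural sketch for illustration (the real logic is hardware), and the assumption that the green samples sit at the even offsets of the row is ours.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Behavioural model of the front end for one row of Bayer samples:
 * every alternate sample is a green sample, so the model copies those
 * out (the hardware instead pulses valid-green for each of them) and
 * returns how many greens the row contained. */
static size_t extract_greens(const uint16_t *row, size_t n, uint16_t *greens)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        if ((i & 1) == 0)          /* valid-green on every second pixel */
            greens[count++] = row[i];
    return count;
}
```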

\noindent Figure~\ref{fig:calibFlow} and Figure~\ref{fig:gameFlow} indicate the data flow paths in the calibration and game play modes. In the calibration mode, the image cropper is configured by software to crop no part of the picture. At this time, the image cropper feeds the calibration module. Once calibration is complete, and the start and end co-ordinates of the pool table are determined, the cropper is configured to crop the image to roughly (there is some room left for errors and noise) these co-ordinates. At this time, the cropped image is fed to the vision algorithm. Clearly, the vision algorithm receives only those pixels that are green samples in the Bayer pattern, and since there is known to be no object with a low green component on the table, the algorithm can identify objects from their colour. \newline

\begin{figure}
\centering
\includegraphics[scale=0.5]{calib_flow}
\caption{Calibration Mode Data Flow}
\label{fig:calibFlow}
\end{figure}

\begin{figure}
\centering
\includegraphics[scale=0.5]{game_flow}
\caption{Gameplay Mode Data Flow}
\label{fig:gameFlow}
\end{figure}

\subsection{Calibration}
Due to the dependence of our system on the camera, it is important to properly guide the user in the correct positioning of the camera. The camera calibration algorithm guides the user until the camera is able to recognize the whole active area (pool table). The active area is completely within the camera's view when the algorithm:

\begin{itemize}
  \item Detects a minimum number of consecutive green pixels; a run longer than this threshold is considered a green row
  \item Recognizes a minimum number of consecutive green rows
  \item Distinguishes at least a non green row before and after the green rows
  \item	Identifies a minimum number of non green pixels before and after the green pixels
\end{itemize}

\noindent In order to keep track of these requirements, two signals have been created. The first is a three-bit signal called active\verb|_|row, which consists of green\verb|_|row, left\verb|_|column and right\verb|_|column. When the number of consecutive green pixels exceeds a minimum threshold, green\verb|_|row is set to `1'. At the same time, if a run of consecutive non-green pixels is detected, two scenarios are possible: if the threshold for the minimum number of consecutive green pixels has already been exceeded, the right\verb|_|column bit is set to `1'; otherwise the left\verb|_|column bit is set to `1'. The second signal, called changes\verb|_|sig, is a two-bit signal. When at least one non-green row is detected followed by a consecutive run of green rows, the first bit of changes\verb|_|sig is set to `1'. In the same way, when a non-green row is detected after a minimum number of green rows, the second bit is set to `1'. \newline

\noindent Using the five bits mentioned above, we can orient the user towards calibrating the camera. First, if the green\verb|_|row bit is set to `0', it is assumed that the user is not aiming the camera at the display. Consequently, the UI asks the user to move the camera towards the screen. Once green\verb|_|row is set to `1', the UI asks the user to move the camera depending on the other four bits. The different responses are summarized in the truth table in Table~\ref{tab:calibTT}. This continues until a successful calibration is achieved. After the calibration succeeds, the X and Y co-ordinates of the upper leftmost corner and lower rightmost corner of the identified green area are returned. The algorithm is therefore designed in such a way that a fixed green area can be displayed on the VGA, and the algorithm will find the proper co-ordinates to crop the received image. Because of this property, the pool table area becomes completely independent of the camera and can be positioned wherever desired. Also, in case the camera is disturbed during play, the user has the option to recalibrate the camera without losing the game status, including the position of the balls and player scores. \newline

\begin{table}[htbp]
\centering
\caption{Calibration Instruction Truth Table}
\label{tab:calibTT}
\begin{tabular}{| c | c | c |}
\hline
Active\verb|_|row & changes\verb|_|sig & Instruction \\ \hline
0XX & XX & Point the camera towards the display \\ \hline
1XX & 00 & Move the camera Backwards\\ \hline
100 & XX & Move the camera Backwards \\ \hline
110 & XX & Move the camera to the Right \\ \hline
101 & XX & Move the camera to the Left \\ \hline
1XX & 01 & Move the camera down \\ \hline
1XX & 10 & Move the camera up \\ \hline
\end{tabular}
\end{table}
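The truth table above can be transcribed directly into software. The following C sketch is for illustration only; the enum names and the assumed bit ordering of changes\verb|_|sig (first bit top edge found, second bit bottom edge found) are ours.

```c
#include <assert.h>

/* User instruction derived from the calibration status bits, as a
 * direct transcription of the truth table. active_row supplies
 * {green_row, left_column, right_column}; changes_sig supplies
 * {top_change, bottom_change}. Enum names are illustrative. */
enum calib_instr {
    POINT_AT_DISPLAY, MOVE_BACK, MOVE_RIGHT, MOVE_LEFT,
    MOVE_DOWN, MOVE_UP, CALIBRATED
};

static enum calib_instr calib_step(int green_row, int left_col, int right_col,
                                   int top_change, int bottom_change)
{
    if (!green_row)                    return POINT_AT_DISPLAY; /* 0XX / XX */
    if (!top_change && !bottom_change) return MOVE_BACK;        /* 1XX / 00 */
    if (!left_col && !right_col)       return MOVE_BACK;        /* 100 / XX */
    if (left_col && !right_col)        return MOVE_RIGHT;       /* 110 / XX */
    if (!left_col && right_col)        return MOVE_LEFT;        /* 101 / XX */
    if (!top_change && bottom_change)  return MOVE_DOWN;        /* 1XX / 01 */
    if (top_change && !bottom_change)  return MOVE_UP;          /* 1XX / 10 */
    return CALIBRATED;                 /* all conditions satisfied */
}
```

Note that the earlier rows of the table take priority over the later ones, which the order of the tests preserves.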
 
\subsection{Vision System}
The Vision System is the hardware block which processes input from the camera to identify the tip of the cue stick or the hand. During development of the system, two separate designs for the vision system were tested. The first design did not support use of the hand to play the game whereas the second design does, limited to certain orientations of the hand. The second design was integrated into the final system and is described here.

\subsubsection{Interface}
The interface signals to the vision system are shown in Figure~\ref{fig:visionBlock} and are described below.

\begin{figure}
  \centering
  \includegraphics[scale=0.5]{vision_system_block}
  \caption{Vision System Block Diagram}
  \label{fig:visionBlock}
\end{figure}

\begin{itemize}
\item Pixel\verb|_|Data: This input is the 10-bit color data from the camera.
\item Valid\verb|_|Green: The camera uses a Bayer color system, with every alternate pixel on Pixel\verb|_|Data being a green pixel color value. Given the different clock frequencies of the camera and the vision system, this translates to new green color data once every four vision system clock cycles. Further, the Pixel\verb|_|Data input is invalid during the blanking intervals of the camera. To indicate when the Pixel\verb|_|Data input has valid green data, the Valid\verb|_|Green signal is asserted for one clock cycle when there is new green data on the Pixel\verb|_|Data line.
\item End\verb|_|of\verb|_|Row and End\verb|_|Of\verb|_|Frame: The End\verb|_|of\verb|_|Row signal is asserted for a period of one clock cycle at the end of one row of pixel data. Similarly, End\verb|_|of\verb|_|Frame is asserted for a period of one clock at the end of each frame. End\verb|_|of\verb|_|Frame also serves as a reset for the Vision System and must be asserted during system startup.
\item Threshold: Threshold is a 10-bit color signal which indicates the threshold color value. Any pixel darker than this threshold is interpreted as part of the cue stick by the Vision System. The Threshold is wired to the switches on the board so that it can be adjusted.
\item X\verb|_|Out and Y\verb|_|Out: These are 16-bit output ports which provide the position of the tip of the cue stick or hand. Each period of logic `1' on Valid\verb|_|Green is interpreted as a new pixel in the row and therefore, the units of the X co-ordinate are the number of green pixels. Similarly, Y\verb|_|Out gives the number of rows, each delimited by a pulse on the End\verb|_|of\verb|_|Row input. The output registers X\verb|_|Out and Y\verb|_|Out are updated every time End\verb|_|of\verb|_|Frame is asserted, with the value computed during the frame.
\end{itemize}

The timing of these signals is illustrated in Figure~\ref{fig:visionTiming}.
\begin{figure}
  \centering
  \includegraphics[scale=0.5]{vision_system_io_timing_diag}
  \caption{Vision System IO Timing Diagram}
  \label{fig:visionTiming}
\end{figure}

\subsubsection{Operation}
\paragraph{Basic Concept}
The vision system looks at the green channel pixel data from the camera and compares it with the threshold value to obtain a binary image. By looking at this binary image, the vision system finds the extremities of the dark portion of the image, viz. the top most, left most, right most and bottom most dark pixel co-ordinates. Using this information, the vision system branches out into different cases, each taking care of a possible orientation of the cue stick or hand and finally outputs one of the four extremity co-ordinates. In certain cases, the vision system uses data about the width of the image a certain distance below or above the top or bottom extremity respectively to come to a decision about which of the four extremities is the tip. \newline
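The extremity search described above can be sketched in C as follows, including the tie-break rules described in this section (bottom-most pixel for the left and right extremities, right-most pixel for the top and bottom extremities). The structure names are illustrative; the real implementation is hardware that updates equivalent registers as pixels stream in.

```c
#include <assert.h>
#include <stdint.h>

/* Extremities of the dark region in a thresholded (binary) image.
 * img is row-major, one byte per pixel, 1 = darker than threshold. */
typedef struct { int x, y; } point;
typedef struct { point top, bottom, left, right; int found; } extremes;

static extremes scan(const uint8_t *img, int w, int h)
{
    extremes e = { {0, 0}, {0, 0}, {0, 0}, {0, 0}, 0 };
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            if (!img[y * w + x]) continue;
            if (!e.found) {
                e.top = e.bottom = e.left = e.right = (point){x, y};
                e.found = 1;
                continue;
            }
            /* top: first row found stays; right-most pixel in it wins */
            if (y <= e.top.y && (y < e.top.y || x > e.top.x)) e.top = (point){x, y};
            /* later hits in the row-major scan are lower/righter, so
             * >= / <= comparisons implement the tie-break rules */
            if (y >= e.bottom.y) e.bottom = (point){x, y};  /* right-most of last row */
            if (x <= e.left.x)   e.left   = (point){x, y};  /* bottom-most left-most  */
            if (x >= e.right.x)  e.right  = (point){x, y};  /* bottom-most right-most */
        }
    return e;
}
```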

\noindent It was realized early during the design phase that a sophisticated hand recognition algorithm with the ability to locate the index finger tip under all conditions is beyond the scope of this project. Therefore, certain heuristic assumptions were made regarding the possible orientations of the hand. These various orientations were divided into specific cases and conditions on the extremity co-ordinates and the widths mentioned above were developed for choosing between the different cases. \newline

\noindent The conditions are based on the idea of an extremity \emph{lying on an edge}. For example, when the left extremity is said to lie on an edge, it means the left most dark point in the image is on the left, top or bottom screen edge. It must be noted that when there are multiple points in the image which qualify for the left most (or right most) extremity, the bottom most amongst them is chosen. Similarly, for the top and bottom extremities, the right most is chosen. Another idea that is used is the concept of an \emph{entry edge}. For example, when the left extremity is on the left edge, the image is said to enter from the left. \newline

\noindent The various possible cases accounted for, the conditions for identifying a particular case and the resulting output co-ordinates are described below.

\paragraph{Bottom Left}
When the left and bottom extremities lie on an edge, the hand or cue stick is assumed to enter from the bottom left. The tip is either the top extremity or the right extremity and a decision has to be made between them for the cases shown in Figures~\ref{fig:bl1}, \ref{fig:bl2} and \ref{fig:bl3}.

\begin{SCfigure}
\centering
\includegraphics[width=0.4\textwidth]{BottomLeft1}
\caption{In this case, the top and right extremities are close to each other. Under such a condition, the right extremity is chosen as the output. This is the only case when a cue stick is used instead of a hand.}
\label{fig:bl1}
\end{SCfigure}

\begin{SCfigure}
\centering
\includegraphics[width=0.4\textwidth]{BottomLeft2}
\caption{When the width measured a certain distance below the top extremity as shown is greater than a threshold value, it is assumed that the top extremity is not the finger tip. The right extremity is output in this case.}
\label{fig:bl2}
\end{SCfigure}

\begin{SCfigure}
\centering
\includegraphics[width=0.4\textwidth]{BottomLeft3}
\caption{When the width measured a certain distance below the top extremity is less than the finger width threshold, the top extremity is assumed to be the finger tip.}
\label{fig:bl3}
\end{SCfigure}
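The three Bottom Left cases above amount to a small decision function, which might look as follows in C. The distance and width thresholds here are illustrative placeholders, not the values used in the hardware.

```c
#include <stdlib.h>

/* Sketch of the Bottom Left tip decision. Inputs: the top and right
 * extremities, and the blob width measured a fixed distance below the
 * top extremity. Thresholds are assumed values. */
#define NEAR_DIST     12   /* "close to each other" distance, pixels (assumed) */
#define FINGER_WIDTH  20   /* finger width threshold, pixels (assumed) */

typedef struct { int x, y; } Pt;

static Pt bottom_left_tip(Pt top, Pt right, int width_below_top)
{
    /* Case 1: top and right extremities nearly coincide -> cue stick; use right */
    if (abs(top.x - right.x) < NEAR_DIST && abs(top.y - right.y) < NEAR_DIST)
        return right;
    /* Case 2: blob too wide just below the top -> top is not a finger tip */
    if (width_below_top > FINGER_WIDTH)
        return right;
    /* Case 3: narrow below the top -> top extremity is the finger tip */
    return top;
}
```

The Bottom Right case mirrors this logic with left and right swapped.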

\paragraph{Bottom Right}
The ideas used for Bottom Left are mirrored and used for the Bottom Right case.

\paragraph{Top Left}
When the left and top extremities lie on an edge, the hand or cue stick is assumed to enter from the top left. The tip is either the bottom extremity or the right extremity and a decision has to be made between them for the cases shown in Figures~\ref{fig:tl1}, \ref{fig:tl2} and~\ref{fig:tl3}.

\begin{SCfigure}
\centering
\includegraphics[width=0.4\textwidth]{TopLeft1}
\caption{In this case, the bottom and right extremities are close to each other. Under such a condition, the right extremity is chosen as the output. This is the only case when a cue stick is used instead of a hand.}
\label{fig:tl1}
\end{SCfigure}

\begin{SCfigure}
\centering
\includegraphics[width=0.4\textwidth]{TopLeft2}
\caption{When the width measured a certain distance above the bottom extremity is greater than the finger width threshold, it is assumed that the bottom extremity is not the tip and the right extremity is output. It must be noted that varying results were obtained in this case. When the thumb projects downwards, towards the bottom, the finger width test gives incorrect results, as it detects the thumb as the index finger. Always using the right extremity as the tip in this case solves that problem, but causes problems when the wrist is bent downwards as shown in Figure~\ref{fig:tl3}. However, such an orientation is rarely encountered and can therefore be neglected.}
\label{fig:tl2}
\end{SCfigure}

\begin{SCfigure}
\centering
\includegraphics[width=0.4\textwidth]{TopLeft3}
\caption{When the width measured a certain distance above the bottom extremity is less than the finger width threshold, the bottom extremity is assumed to be the finger tip.}
\label{fig:tl3}
\end{SCfigure}

\paragraph{Top Right}
The ideas used for Top Left are mirrored and used for the Top Right case.

\paragraph{Left}
If the left extremity alone lies on an edge, it follows that the entry edge is the left edge. In such a case the right extremity is the tip. The opposite applies when the right extremity alone lies on an edge. Figure~\ref{fig:left} illustrates this.

\begin{SCfigure}
\centering
\includegraphics[width=0.4\textwidth]{Left}
\caption{The left extremity alone lies on an edge, so the entry edge is the left edge and the right extremity is the tip. The opposite applies when the right extremity alone lies on an edge.}
\label{fig:left}
\end{SCfigure}

\paragraph{Top}
If the top extremity alone lies on an edge, it follows that the entry edge is the top edge. In such a case the bottom extremity is the tip. The opposite applies when the bottom extremity alone lies on an edge. Figure~\ref{fig:top} demonstrates this case.

\begin{SCfigure}
\centering
\includegraphics[width=0.4\textwidth]{Top}
\caption{The top extremity alone lies on an edge, so the entry edge is the top edge and the bottom extremity is the tip. The opposite applies when the bottom extremity alone lies on an edge.}
\label{fig:top}
\end{SCfigure}

\subsubsection{Filtering}
Making decisions based on a single finger width measurement is inherently error-prone: depending on the hand orientation, the measured width may occasionally cross the finger width threshold incorrectly. To suppress such noise, a filtering scheme was implemented in software which locks onto a bounding box around the detected tip and discards occasional excursions outside this locked region. A four-point moving average filter is also applied in software to smooth the output of the vision system.
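The two filtering stages described above can be sketched as follows. The lock-box half-size and the use of the last locked position as the replacement for a discarded sample are our assumptions; only the four-tap averaging is stated explicitly in the text.

```c
#include <stdlib.h>

/* Sketch of the software filter: a lock box that discards isolated
 * excursions, followed by a 4-point moving average. */
#define LOCK_HALF  40   /* assumed half-size of the lock box, pixels */
#define TAPS        4

static int hist_x[TAPS], hist_y[TAPS], idx = 0, have_lock = 0;
static int lock_x, lock_y;

static void filter_tip(int in_x, int in_y, int *out_x, int *out_y)
{
    if (have_lock &&
        (abs(in_x - lock_x) > LOCK_HALF || abs(in_y - lock_y) > LOCK_HALF)) {
        in_x = lock_x;            /* excursion outside the lock box: drop it */
        in_y = lock_y;
    } else {
        lock_x = in_x; lock_y = in_y; have_lock = 1;
    }
    hist_x[idx] = in_x; hist_y[idx] = in_y;
    idx = (idx + 1) % TAPS;
    int sx = 0, sy = 0;
    for (int i = 0; i < TAPS; i++) { sx += hist_x[i]; sy += hist_y[i]; }
    *out_x = sx / TAPS;           /* moving average over the last 4 samples */
    *out_y = sy / TAPS;
}
```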

\subsubsection{Implementation}
The entire vision system was implemented in hardware. The information to be extracted from each frame of the camera input includes the top, bottom, left and right extremities, and the horizontal width a fixed distance below the top extremity and a fixed distance above the bottom extremity. This data extraction is performed on the fly as data comes in from the camera, which eliminates the need for a frame buffer. At the end of the frame, the extracted data is processed according to the conditions specified above.

\subsection{Ball Physics Simulation}

\subsubsection{Basics}
The simulation of the movement and collisions of the balls on the pool table is done in software running on the NIOS processor. Each ball is treated as an object with properties such as position co-ordinates (or a position vector), a velocity vector, a colour, and a visibility state. \newline
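A minimal sketch of such a ball object is given below. The field names are ours; the actual sources (see \texttt{ball.h} in the listings) define their own types, using fixed-point arithmetic for positions and velocities.

```c
/* Illustrative ball object, following the properties listed in the text. */
typedef struct {
    int x, y;        /* position in absolute screen co-ordinates */
    int vx, vy;      /* velocity: +vx is left-to-right, +vy is top-to-bottom */
    int colour;      /* index into the VGA controller's colour matrix */
    int visible;     /* cleared when the ball is pocketed */
} Ball;
```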

\noindent Velocity along the x direction is considered positive for a ball whose x co-ordinate is increasing; i.e.\ the ball is moving from left to right on screen. Similarly, velocity along the y direction is considered positive for a ball whose y co-ordinate is increasing; i.e.\ the ball is moving from top to bottom on screen. For the opposite direction of motion along either axis, the velocity component along that axis is considered negative. The position of the ball is maintained in absolute screen co-ordinates. \newline

\noindent Ball visibility helps in dealing with balls that have been pocketed and need not be considered for computations of motion and collision any further. Balls start out visible and are marked invisible as soon as they are pocketed. \newline

\noindent The game logic is handled entirely within a single loop that begins after all initializations and runs until all balls are pocketed or the user requests either that calibration be performed or that a new game be started. The loop maintains a notion of time and calculates all events over normalized timesteps. At the start of each iteration of the loop, the current time is regarded as 0 and the end of the timestep is regarded as 1. Then, given the positions and velocities of all visible balls at the current time, the times at which balls will suffer collisions are calculated. These might be collisions with other balls, collisions with the wall or collisions with the cue. \newline

\noindent To simplify the algorithms used in the implementation, the tip of the cue is regarded as a ball of infinitesimal size. At each iteration, the position of the cue as last recorded by the vision system hardware is retrieved. The distance that the cue has traversed since the last measurement is determined, and this distance is scaled to calculate a velocity for the cue. The cue thus has the same properties as a ball, and via this abstraction the same mathematical functions can calculate the impact of the cue on a ball as of balls on one another. \newline
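The cue-as-ball abstraction amounts to deriving a velocity from successive tip positions, roughly as below. The scale factor is an illustrative tuning constant, not a value from the sources.

```c
/* Sketch of turning successive cue-tip positions into a cue "ball" velocity. */
#define SCALE 1   /* assumed displacement-to-velocity scaling */

typedef struct { int x, y, vx, vy; } Cue;

static void update_cue(Cue *cue, int new_x, int new_y)
{
    cue->vx = (new_x - cue->x) * SCALE;   /* displacement since last frame, */
    cue->vy = (new_y - cue->y) * SCALE;   /* scaled into a per-step velocity */
    cue->x = new_x;
    cue->y = new_y;
}
```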

\subsubsection{Event Handling Loop}
\noindent When the time-to-next-collision has been calculated for all balls, the time interval before the earliest of these collisions is picked as the size of the next incremental time step. If there are no collisions scheduled to occur in the unit time step, the full time step is used. The game is then advanced by this time step and the process is repeated until a unit time step has elapsed. This constitutes one iteration. \newline
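The choice of the incremental time step can be sketched as follows; the function and variable names are illustrative.

```c
/* Sketch of picking the next incremental time step: the earliest predicted
 * collision within the current unit step wins; otherwise the remainder of
 * the unit step is used. `now` is the time already elapsed in this step. */
static double next_step(const double *collision_time, int n, double now)
{
    double step = 1.0 - now;              /* remainder of the unit time step */
    for (int i = 0; i < n; i++) {
        double dt = collision_time[i] - now;
        if (dt > 0.0 && dt < step)
            step = dt;                    /* earliest collision wins */
    }
    return step;
}
```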

\noindent When advancing the game by a timestep, new positions and velocities are calculated for all balls. If collisions are foreseen for some balls at the end of the step, the transfer of momentum is applied to update the velocities of the balls involved before the next time step begins; friction is then applied to all balls.


\noindent In summary, each iteration proceeds as follows:
\begin{itemize}
\item The cue is considered a special ball, whose position is obtained from the vision system.
\item Beginning with a unit time step, the time of the earliest collision is determined.
\item Time is advanced by the time of the earliest collision.
\item All balls are moved to new positions based on their current positions and velocities.
\item Transfer of momentum is applied to all balls that have undergone a collision.
\item Friction is applied to all balls.
\item Time is advanced and the process repeats.
\end{itemize}

\subsubsection{Collision Simulation}


\subsection{VGA Interface}
The VGA controller is an Avalon component that is responsible for displaying the pool table along with the borders and the seven balls. The balls are pre-drawn and are displayed as sprites. Each ball can take a colour from a defined colour matrix, in addition to an option of being invisible, as controlled by the software. This colour matrix contains the RGB values for seven different colours that are used throughout the game. Everything is built dynamically, so that the software sets all positions and values. One such item is the black border around the table boundaries: the software can send values through a single register setting the black areas at the top and bottom as well as at the sides. Next, the software can control the size of the pool table to be displayed by sending the horizontal and vertical start and end points of the pool table. Within this area, the VGA controller draws the table with yellow borders and yellow pockets, and sets the background of the table to yellow. The positions of the pockets are also dynamic, and the software can send their co-ordinates to the VGA controller in order to display them in the correct positions.

At this point calibration is ready to start, with the pool table and its black margin already displayed. The calibration step finds the area of the pool table drawn by the VGA controller, but in camera pixel co-ordinates. The mapping between the area displayed and the area seen by the camera yields the scaling coefficients to be used.
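Applying the calibration result amounts to a per-axis scale and offset from camera co-ordinates to screen co-ordinates, roughly as sketched below. The fixed-point representation of the scale (numerator over 256) is an assumption on our part, chosen to avoid floating point on the NIOS processor; the real representation lives in \texttt{calibration.c}.

```c
/* Sketch of mapping a camera-pixel co-ordinate into VGA co-ordinates using
 * the scale and offsets recovered by calibration. */
typedef struct {
    int cam_x0, cam_y0;      /* table origin as seen by the camera */
    int vga_x0, vga_y0;      /* table origin on screen */
    int num_x, num_y;        /* per-axis scale = num / 256 (assumed format) */
} Mapping;

static void cam_to_vga(const Mapping *m, int cx, int cy, int *vx, int *vy)
{
    *vx = m->vga_x0 + ((cx - m->cam_x0) * m->num_x) / 256;
    *vy = m->vga_y0 + ((cy - m->cam_y0) * m->num_y) / 256;
}
```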

For the balls in the game, the VGA controller supports up to seven balls, completely controlled by the software, which sends their co-ordinates on the screen, their colour, and whether they are to be displayed. For this, the VGA controller uses 21 registers to read the ball data and sets an internal flag after each register is read. Once all 21 registers have been read correctly, it sends the software a signal saying that it is now ready to take the new co-ordinates and colours of the balls.
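The software side of this handshake might look as follows. The register layout (three registers per ball: x, y, colour/visibility, matching the 7 x 3 = 21 registers above) and the location of the ready flag are hypothetical; the real base address and layout come from the generated system headers.

```c
#include <stdint.h>

#define N_BALLS 7
/* Would point at the VGA controller's memory-mapped register base. */
volatile uint32_t *vga_regs;

static void write_balls(const int x[], const int y[], const int colour[])
{
    for (int i = 0; i < N_BALLS; i++) {
        vga_regs[3*i + 0] = x[i];        /* each write sets an internal flag */
        vga_regs[3*i + 1] = y[i];
        vga_regs[3*i + 2] = colour[i];   /* colour index; visibility assumed here */
    }
    /* After all 21 writes, poll a (hypothetical) ready bit before sending
     * the next set of positions. */
    while (!(vga_regs[21] & 1))
        ;
}
```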

That is, the controller waits until all information about all balls has been received, waits until the end of the frame it is currently displaying, updates the position values in its registers, and then signals the software that it is ready for the next data. At the same time it starts displaying the new frame with the new ball positions. Since the square sprites around the balls would overlap when balls collide, only the pixels inside a circular region of each square sprite are read, making the displayed sprite circular. A process runs for every ball, indicating the location of the circular area on the screen where that ball is displayed.

In addition to this, a white cross hair is displayed to indicate the position of the cue tip; its position is also sent by the software after scaling and translating between camera and VGA co-ordinates. This cross hair has the highest priority over all other objects and is always drawn on top. It can be disabled by the software during game time.

\section{\color{Blue}Project Management}
\subsection{Versioning}
Configuration management for all project artefacts, code as well as documentation, is done online using Google Code. All users employ an SVN client to access the repository. The project can be accessed online at http://code.google.com/p/projection-billiards.

\begin{figure}[h!]
        \centering
        \includegraphics[scale=0.4]{directree.eps}
        \caption{Directory Tree Structure}
        \label{fig:directree}
\end{figure}
The code tree appears as indicated in Figure~\ref{fig:directree}. Test benches for the VHDL sources are included within the vhdsrc directory.

\section{\color{Blue}Glossary of Terms}
\begin{table}[h]
\begin{tabular}{ p{1in} p{4in} }
        ADC     & Analog to Digital Converter           \\
        FPGA    & Field Programmable Gate Array         \\
        GPIO    & General Purpose Input Output          \\
	I$^2$C  & Inter-Integrated Circuit               \\
        IC      & Integrated Circuit                    \\
        MMIO    & Memory Mapped Input Output            \\
	VGA     & Video Graphics Array                  \\
        VHDL    & VHSIC Hardware Description Language   \\
        VHSIC   & Very High Speed Integrated Circuit    \\
\end{tabular}
\end{table}

\section{\color{Blue}Source Code}


\clearpage
\newpage

\lstset{language=C,numbers=left,basicstyle=\scriptsize,frame=single,tabsize=2,showspaces=false,captionpos=b,numberstyle=\footnotesize,stepnumber=2,numbersep=5pt}
\newpage
\lstinputlisting{../../csrc/projectionbilliards/gameconfig.h}
\newpage
\lstinputlisting{../../csrc/projectionbilliards/types.h}
\newpage
\lstinputlisting{../../csrc/projectionbilliards/debug.h}
\newpage
\lstinputlisting{../../csrc/projectionbilliards/ball.h}
\newpage
\lstinputlisting{../../csrc/projectionbilliards/i2c.h}
\newpage
\lstinputlisting{../../csrc/projectionbilliards/fixedpoint.h}
\newpage
\lstinputlisting{../../csrc/projectionbilliards/calibration.h}
\newpage
\lstinputlisting{../../csrc/projectionbilliards/ui.h}
\newpage

\lstinputlisting{../../csrc/projectionbilliards/pool.c}
\newpage
\lstinputlisting{../../csrc/projectionbilliards/calibration.c}
\newpage
\lstinputlisting{../../csrc/projectionbilliards/i2c.c}
\newpage
\lstinputlisting{../../csrc/projectionbilliards/ball.c}
\newpage
\lstinputlisting{../../csrc/projectionbilliards/fixedpoint.c}
\newpage
\lstinputlisting{../../csrc/projectionbilliards/ui.c}

\lstset{language=VHDL,numbers=left,basicstyle=\scriptsize,frame=single,tabsize=2,showspaces=false,captionpos=b,numberstyle=\footnotesize,stepnumber=2,numbersep=5pt}

\newpage
\lstinputlisting{../../vhdsrc/niospool/i2c_controller.vhd}
\newpage
\lstinputlisting{../../vhdsrc/niospool/calibration.vhd}
\newpage
\lstinputlisting{../../vhdsrc/niospool/ci_pxl.vhd}
\newpage
\lstinputlisting{../../vhdsrc/niospool/imagecropper.vhd}
\newpage
\lstinputlisting{../../vhdsrc/niospool/vsystem.vhd}
\newpage
\lstinputlisting{../../vhdsrc/niospool/avalon_vision.vhd}
\newpage
\lstinputlisting{../../vhdsrc/niospool/vga_controller.vhd}
\newpage
\lstinputlisting{../../vhdsrc/niospool/soundcontrol.vhd}
\newpage
\lstinputlisting{../../vhdsrc/niospool/de2_wm8731_audio.vhd}
\newpage
\lstinputlisting{../../vhdsrc/niospool/uicontroller.vhd}
\newpage
\lstinputlisting{../../vhdsrc/niospool/niostop.vhd}

\end{document}
