%
%  Writeup 1
%
%  Created by Edmund Chu on 2011-03-31.
%
\documentclass[]{article}

% Use utf-8 encoding for foreign characters
\usepackage[utf8]{inputenc}

% Setup for fullpage use
\usepackage{fullpage}


% Surround parts of graphics with box
\usepackage{boxedminipage}

% Package for including code in the document
\usepackage{listings}


% This is now the recommended way for checking for PDFLaTeX:
\usepackage{ifpdf}

\usepackage{url}


\ifpdf
\usepackage[pdftex]{graphicx}
\else
\usepackage{graphicx}
\fi
\title{Writeup 1}
\author{Edmund Chu}
\date{April 1, 2011}

\begin{document}

\ifpdf
\DeclareGraphicsExtensions{.pdf,.png,.jpg}
\else
\DeclareGraphicsExtensions{.eps}
\fi

\maketitle

\section*{Introduction}

The goal of augmented reality (AR) is to superimpose virtual objects on the real world, such that these virtual objects appear to be a part of the real world. There are several approaches to AR. A location-based system, for example, relies on GPS positioning and directional orientation to determine what the user is looking at. Other systems rely on fiducial markers; when viewed through a camera, these markers are used by the system to determine where on-screen to draw the virtual object. The augmented reality system that I am implementing relies primarily on the latter approach. The system makes use of OpenCV\cite{opencv}, an open source computer vision library, in conjunction with OpenGL\cite{opengl}. Additionally, I am making use of the ArUco\cite{aruco} extension to OpenCV, which provides the basic marker detection functionality. In particular, I am investigating solutions to occlusion handling, in which a real object passing in front of a marker appears, on-screen, to obscure the virtual object, rather than causing the virtual object to disappear.

\section*{ArUco}

\subsection*{Marker codes}

In fiducial augmented reality systems, the locations of the markers are used to determine where to draw virtual objects. In some systems, a user may define any figure as a marker; in this AR system, however, the appearance of a marker is strictly defined. Each marker consists of a 5-by-5 grid surrounded by a black border, with each square representing one bit of information. Ten of these twenty-five bits encode data, while the rest are used for error checking, so $2^{10} = 1024$ unique markers are possible.

The markers employ a simplified version of a Hamming code\cite{hammingcode}. Each row in the grid contains five bits. The first bit is the parity bit of bits 3 and 5; unlike a standard Hamming code, however, this parity bit is inverted, so that a valid marker can never be a solid black box. This reduces the possibility of false positives. The next two bits encode data, and the remaining two bits are used for further error checking.
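As a sketch, a decoder for a single row of the grid might look like the following. It follows only the scheme described above; since the role of the two remaining check bits is not specified here, this sketch validates just the inverted parity bit and would accept some rows a full decoder would reject.

```python
def decode_row(row):
    """Decode one 5-bit marker row (0 = black, 1 = white).

    Returns the two data bits as a tuple, or None if the
    inverted-parity check fails.
    """
    assert len(row) == 5
    # The first bit must equal the INVERTED parity of bits 3 and 5,
    # so an all-black (all-zero) row can never be a valid codeword.
    expected = 1 - ((row[2] ^ row[4]) & 1)
    if row[0] != expected:
        return None          # parity failure: reject this candidate
    return row[1], row[2]    # the next two bits carry the data
```

Applied to all five rows of a detected grid, this yields the marker's 10 data bits.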

\subsection*{Marker recognition process}

The marker detection of ArUco operates in real time. The images that are passed into the system are retrieved from a camera using OpenCV calls; marker detection operates on each frame, searching for marker candidates and, where a marker is found, drawing a virtual object at its location.

In this AR system, the input image is preprocessed with smoothing algorithms; smoothing reduces some noise from the image and removes irrelevant details that may ``confuse'' the detector. Erosion and dilation operations are commonly used for these purposes. Erosion of an image isolates and reduces the size of bright regions, as well as reducing ``speckle noise.'' Dilation, on the other hand, causes bright regions to grow and join together. In this way, closely grouped and visually similar items are merged together into simpler shapes, while needless details are eliminated.
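A minimal pure-Python sketch of binary erosion and dilation with a 3-by-3 structuring element; in the actual system these operations are OpenCV calls working on full camera frames, so the border handling and element shape here are illustrative only.

```python
def erode(img):
    """Binary erosion, 3x3 square element: a pixel stays 1 only if
    every neighbour (including itself) is 1, so bright regions
    shrink and isolated speckle noise disappears."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(
                img[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2)))
    return out


def dilate(img):
    """Binary dilation: a pixel becomes 1 if ANY 3x3 neighbour is 1,
    so nearby bright regions grow and merge into simpler shapes."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = max(
                img[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2)))
    return out
```

A single bright pixel in a dark field is erased by one erosion, while dilation grows it into a 3-by-3 blob, which is exactly the speckle-removal and region-merging behaviour described above.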

An adaptive threshold is also applied to the image. This operation computes, for each pixel, a weighted average of the region around that pixel; the pixel is then accepted or rejected depending on whether its value exceeds that local average. This effectively turns the image into a black-and-white picture, which isolates the most important information in the image\cite{learningocv}.
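The idea can be sketched with a plain (unweighted) mean over each pixel's neighbourhood; OpenCV's adaptive threshold also offers a Gaussian-weighted window, which corresponds to the weighted average described above. The block size and offset below are illustrative placeholders.

```python
def adaptive_threshold(img, block=3, c=0):
    """Mean-based adaptive threshold: each pixel is compared against
    the average of the block x block window centred on it, minus an
    offset c, so the result tolerates uneven lighting across the
    image.  Output pixels are 0 or 255."""
    h, w = len(img), len(img[0])
    r = block // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = [img[ny][nx]
                   for ny in range(max(0, y - r), min(h, y + r + 1))
                   for nx in range(max(0, x - r), min(w, x + r + 1))]
            mean = sum(win) / len(win)
            out[y][x] = 255 if img[y][x] > mean - c else 0
    return out
```

Because each pixel is judged against its own neighbourhood rather than one global cutoff, a marker remains cleanly black-and-white even when one side of the frame is darker than the other.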

After preprocessing, the image's contours---the lists of points that define curves---are found using built-in OpenCV functions. These contours are culled, removing any curves that do not seem to be part of a four-sided figure, as well as any figures that are too small to be a marker.
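The culling step can be sketched as a simple filter over candidate polygons; the four-vertex requirement comes from the description above, while the minimum-area value is an arbitrary placeholder rather than the threshold ArUco actually uses.

```python
def keep_marker_candidates(polys, min_area=100.0):
    """Cull contour polygons: keep only four-sided figures whose
    (shoelace-formula) area is large enough to plausibly be a
    marker.  Each polygon is a list of (x, y) vertices."""
    def area(p):
        n = len(p)
        s = sum(p[i][0] * p[(i + 1) % n][1]
                - p[(i + 1) % n][0] * p[i][1]
                for i in range(n))
        return abs(s) / 2.0
    return [p for p in polys if len(p) == 4 and area(p) >= min_area]
```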

Once the on-screen coordinates of these four-sided candidates are known, we search for marker codes inside each candidate. Because the camera may not be looking straight on at the marker, a homographic transformation is applied to each candidate. This warps the image and removes perspective distortion, so that a frontal view is obtained. Each square in the grid is then classified as either black or white; once the appearance of the grid is known, the 10 bits of data can be extracted.
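Once a candidate has been warped to a frontal view, classifying each square reduces to a majority vote over the pixels in its cell. A sketch, assuming a square binary image with the black border already cropped away (an assumption; the real detector also samples the border cells to confirm they are black):

```python
def read_grid(warped, cells=5):
    """Classify each cell of a rectified candidate as black (0) or
    white (1) by majority vote over that cell's pixels.  `warped` is
    a square binary image (lists of 0/1) covering just the 5x5 code
    region; ties count as black."""
    n = len(warped)
    step = n // cells
    grid = []
    for gy in range(cells):
        row = []
        for gx in range(cells):
            ones = sum(warped[y][x]
                       for y in range(gy * step, (gy + 1) * step)
                       for x in range(gx * step, (gx + 1) * step))
            row.append(1 if ones * 2 > step * step else 0)
        grid.append(row)
    return grid
```

Voting over many pixels per cell, rather than sampling a single pixel, makes the readout robust to the residual noise that survives preprocessing.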

\subsection*{Analysis}

The AR method described above has several benefits. The first is that it is rotation- and scale-invariant: markers need not be a fixed size, nor at a specific rotation, to be detected. This results in a fairly robust, accurate detection system. The Hamming code-based markers also contribute to the accuracy of detection; because their appearance is strictly defined and distinctive, the marker detector is unlikely to find false positives.

However, the markers lack the ability to encode an especially large amount of data. Additionally, because a marker's appearance is strictly defined in this AR system, there is no ability to ``customize'' a marker so that it is printed with any symbol or glyph, making this method less useful for some common object recognition tasks, e.g. book cover recognition.

\section*{Next steps}

I am continuing to implement occlusion handling. I investigated a contour tracking-based solution using optical flow---a method of determining how a set of pixels moves from one frame to the next---to determine when an object moved in front of a stationary marker. A polygon was built from the contours of the moving object and used to create the occlusion mask, which would obstruct the virtual object. However, this method was unacceptably slow, as it operated on the entirety of every frame; additionally, noisy video streams (such as from a webcam) impaired the tracking of moving contours, even with smoothing and noise reduction applied. These problems could be handled, though, by tracking moving objects only within a region of interest, viz. the area in and around a marker. This would reduce the number of operations run per frame, possibly making the method suitable for real-time applications.

I am also investigating a solution based on background subtraction, which starts with a base background frame and looks for differences between that base frame and subsequent frames; a change from frame to frame indicates that something of interest is happening. By isolating this operation to the part of the screen that contains the marker, the general shape of the obscuring object can be found. However, in order to draw the virtual object in the proper position, I must assume that the marker stays still while an object obscures it.
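A minimal sketch of this idea, using per-pixel frame differencing restricted to a rectangular region of interest around the marker; the difference threshold is an arbitrary placeholder, and grayscale images are assumed.

```python
def occlusion_mask(background, frame, roi, thresh=30):
    """Background subtraction inside a region of interest.

    background, frame: grayscale images as lists of rows.
    roi: (x0, y0, x1, y1), the area in and around the marker.
    Returns a binary mask over the ROI in which 1 marks pixels that
    differ from the stored background by more than `thresh`, i.e.
    the rough shape of an occluding object.
    """
    x0, y0, x1, y1 = roi
    mask = []
    for y in range(y0, y1):
        mask.append([1 if abs(frame[y][x] - background[y][x]) > thresh
                     else 0
                     for x in range(x0, x1)])
    return mask
```

Restricting the subtraction to the ROI keeps the per-frame cost proportional to the marker region rather than the whole frame, mirroring the region-of-interest optimization proposed for the optical-flow approach above.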

\bibliographystyle{abbrv}
\bibliography{bib1}
\end{document}
