\documentclass{article}
\usepackage{fullpage}
\begin{document}
	\title{Assignment 2}
	\date{Computer Science 278}
	\author{Brett Harrison and Daniel Suo}
	\maketitle
	\section{Introduction}
		In this assignment, we implemented the algorithm from the paper \emph{Region filling and object removal by exemplar-based inpainting} by Criminisi et al. It is a relatively simple but powerful algorithm for filling in large missing regions of an image.
	\section{Problem Statement}
		Given an image, what is an effective way to remove objects or chunks from the image and then fill in the resulting gap with something reasonable? What kinds of images and missing parts are especially well-treated by the proposed algorithm?
	\section{Algorithm and Implementation}
		The algorithm is fairly straightforward. We describe it below, along with details of our implementation and the difficulties we ran into. At a high level, the algorithm first identifies the boundary of the area to be filled in. It then determines which small patch straddling the boundary (containing both black and non-black pixels) should be filled in first. Using the non-black pixels of that patch, the algorithm finds the patch in the source image that best matches it. After copying the image data from the source patch into the patch on the boundary, the algorithm repeats this process until no patches remain to be filled.
		\begin{enumerate}
			\item Read in image.
			\item Determine the portion of the image to be filled.
			\item Repeat until the target region is entirely filled in.
				\begin{enumerate}
					\item Identify the fill front or boundary of the area to be filled. If this front is null, exit the loop.
					\item[] We determined the fill front by finding all black pixels with non-black neighbors and stored the pixel coordinates in a list.
					\item Compute the priority of each patch.
					\item Find the patch on the boundary with the maximum priority.
					\item Find the exemplar patch that is closest to the patch on the boundary with maximum priority.
					\item Copy the image data from the exemplar patch to the maximum priority patch.
					\item Update the confidence matrix and repeat this loop.
				\end{enumerate}
		\end{enumerate}
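The fill-front step above can be sketched as follows (a minimal illustration in Python with NumPy; the function name and the boolean-mask representation are ours, since our actual implementation scans pixel colors directly and stores coordinates in a list):

```python
import numpy as np

def find_fill_front(mask):
    """Return (row, col) coordinates of fill-front pixels.

    `mask` is a 2-D boolean array: True marks pixels still to be filled
    ("black" in the text). A pixel is on the fill front if it is unfilled
    but has at least one filled 4-neighbor.
    """
    # Pad with True so pixels on the image border do not count the
    # outside of the image as a filled neighbor.
    padded = np.pad(mask, 1, constant_values=True)
    has_filled_neighbor = (
        ~padded[:-2, 1:-1] | ~padded[2:, 1:-1] |
        ~padded[1:-1, :-2] | ~padded[1:-1, 2:]
    )
    front = mask & has_filled_neighbor
    return list(zip(*np.nonzero(front)))
```

When the returned list is empty, the loop above exits.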
		
	\subsection{Computing the Data Term}
	The paper did not clearly explain how to compute the data term of the patch priority, which combines the normal to the fill front with the image gradient, so we explain below how we computed each part:
	\paragraph{Normal to the fill front} First, we created a binary representation of the image: pixel $(i,j)$ is black if and only if $(i,j)$ is black in the original image, and white otherwise. We then applied a Gaussian blur to this image. Finally, we used finite differences on the resulting greyscale values to compute the normal to the fill front.
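A sketch of this computation (NumPy; the kernel radius, $\sigma$, and function name are our assumptions, and we blur a float-valued mask rather than an actual image file):

```python
import numpy as np

def boundary_normal(mask, sigma=1.0):
    """Unit normal to the fill front from a blurred binary mask.

    `mask` is a 2-D float array with 1.0 for black (unfilled) pixels and
    0.0 otherwise. We blur with a separable Gaussian and take central
    differences; the gradient of the blurred mask points across the
    boundary, i.e. along the front normal.
    """
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    # Separable blur: convolve every row, then every column.
    blurred = np.apply_along_axis(np.convolve, 1, mask, kernel, mode="same")
    blurred = np.apply_along_axis(np.convolve, 0, blurred, kernel, mode="same")
    gy, gx = np.gradient(blurred)   # central finite differences
    norm = np.hypot(gy, gx)
    norm[norm == 0] = 1.0           # avoid 0/0 away from the front
    return gy / norm, gx / norm
```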
	\paragraph{Image gradient} First, for every pixel in the image that is non-black and is surrounded by non-black neighbors, we pre-computed and stored a finite difference gradient in the $L$ component. If the pixel was black or had any black neighbors, we stored $0$ for this value. Then, to calculate the gradient at pixel $(i,j)$ on the fill front, we took the patch centered at $(i,j)$ and returned the maximum-magnitude pre-computed gradient over all pixels in this patch.
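This lookup can be sketched as follows (NumPy; the function name and the $9 \times 9$ patch size implied by \texttt{half=4} are assumptions):

```python
import numpy as np

def patch_gradient(L, valid, center, half=4):
    """Maximum-magnitude pre-computed gradient in the patch around `center`.

    `L` is the lightness channel; `valid` is True where a pixel and all of
    its neighbors are non-black. Gradients outside `valid` are zeroed, as
    described in the text. `half=4` gives a 9x9 patch.
    """
    gy, gx = np.gradient(L)               # finite difference gradients
    gy = np.where(valid, gy, 0.0)
    gx = np.where(valid, gx, 0.0)
    i, j = center
    window = (slice(max(i - half, 0), i + half + 1),
              slice(max(j - half, 0), j + half + 1))
    mag = np.hypot(gy[window], gx[window])
    k = np.unravel_index(np.argmax(mag), mag.shape)
    return gy[window][k], gx[window][k]
```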
	\subsection{Finding the Exemplar}
	The CIE Lab transform we used returns a color $(L,a,b)$ with $L \in [0,100]$, $a \in [-1,1]$, and $b \in [-1,1]$, so the $L$ component dominates the Euclidean distance. Our initial results, in which we computed Euclidean distances in the $L$ component only, were poor: the chosen exemplar often had a completely different hue even though it had similar brightness. To fix this problem, we scaled $a$ and $b$ by $50$ and took the full three-dimensional Euclidean distance. This produced significantly better results, especially for the ``minor'' image of the boat in the water next to the city skyline.
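The scaled distance can be sketched as follows (the function name and the channel layout of the arrays are our assumptions):

```python
import numpy as np

def lab_distance(p, q, scale=50.0):
    """Euclidean distance between Lab colors/patches with a, b scaled up.

    `p` and `q` are arrays whose last axis is (L, a, b), with L in
    [0, 100] and a, b in [-1, 1]. Scaling a and b by 50 puts all three
    channels on a comparable range, so hue differences are no longer
    swamped by lightness.
    """
    w = np.array([1.0, scale, scale])
    d = (np.asarray(p, float) - np.asarray(q, float)) * w
    return float(np.sqrt((d ** 2).sum()))
```

For example, two colors with identical lightness but opposite $a$ values are now $50$ apart rather than $1$.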
	
	We used basic matrix operations to find the differences between all exemplar candidates and the maximum-priority patch on the boundary at once. While this significantly increased the speed of the algorithm, an environment such as Matlab or a language such as C is much better suited to large matrix operations than pure Python. We also tried a Python wrapper around Matlab, but this was too slow in practice.
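The batched comparison can be sketched as one broadcasted subtraction (NumPy; the function name, array shapes, and the sum-of-squared-differences metric standing in for the scaled Lab distance are our assumptions):

```python
import numpy as np

def best_exemplar(candidates, target, known):
    """Index of the candidate patch closest to the target patch.

    `candidates` has shape (n, h, w, 3), holding every source patch;
    `target` is one (h, w, 3) patch; `known` is an (h, w) boolean mask
    selecting the already-filled pixels of the target. Only known pixels
    enter the distance, and all n comparisons happen in one operation.
    """
    diff = (candidates - target) * known[..., None]
    ssd = (diff ** 2).sum(axis=(1, 2, 3))   # one score per candidate
    return int(np.argmin(ssd))
```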
	
	\section{Results}
	We implemented this algorithm in \texttt{Python} 2.6. In retrospect, this was a poor decision due to the slow runtime of our implementation. As a result, we were not able to test extensively on large images, but our results for smaller test images turned out very well. In the future, we would use a language better suited to large matrix operations, such as Matlab or C.
	
	\section{Conclusion}
	This algorithm worked extremely well when the area to be filled in was relatively small and the image had strong linear structures with sharp gradients. However, the algorithm had considerable trouble filling in large regions: small mistakes often propagated significantly when they should not have.
\end{document}