\documentclass[12pt,a4paper]{article}

\usepackage{algorithmic}
\usepackage{amsmath}
\usepackage{url}
\usepackage{graphicx}
\usepackage{appendix}

\setlength\pdfpagewidth{8.5in}
\setlength\pdfpageheight{11.0in} 
\setlength\textwidth{6.5in}
\setlength\textheight{9.0in}
\setlength\oddsidemargin{0.0in}
\setlength\evensidemargin{0.0in}
\setlength\topmargin{0.0in}
\setlength\headheight{0.0in}
\setlength\headsep{0.0in}

\begin{document}

\title{\textbf{Cracking Checksums on the Cloud}}
\author{Elizabeth Soechting, Vijay Chidambaram, Chitra Muthukrishnan \\\& Deepak Ramamurthi }
\maketitle

\section{Abstract}
We use Google App Engine to implement a compute-intensive application on a cloud computing platform. The challenge lies in the fact that App Engine was designed for low-latency applications, and it throws several hurdles in the path of anyone trying to use it for compute-intensive applications. We describe how we changed our approach as we encountered each restriction, and we evaluate our approaches.

\section{Introduction} \label{sec-intro}

Given the current furore about `computing on the cloud', this project was designed to investigate one of the most popular cloud computing platforms currently available, and to answer the following questions about the platform:
\begin{enumerate}
\item{How easy/hard is it to create an application on the cloud?}
\item{Can compute intensive applications run with relative ease on the cloud?}
\item{What are the restrictions and limitations that Google App Engine imposes?}
\item{What facilities can be added to make it more appealing for developers?}
\end{enumerate}

We started out designing our application with an ideal view of the cloud, and changed our approach several times in order to obtain an application that runs reasonably efficiently on App Engine. This included several changes to both our algorithm and our implementation. We evaluate the algorithms and report the results provided by each one, as well as the resources consumed by each approach.

The rest of the paper is organized as follows: We describe our application in Section \ref{sec-app}.  We provide background about Google App Engine (Section 
\ref{sec-overview}), describe the design of the system (Section
\ref{sec-design}), discuss experience with App Engine and some of the 
problems we faced when implementing the system (Section 
\ref{sec-diff}), and present an evaluation of our different approaches 
(Section \ref{sec-results}). We discuss ways to improve AppEngine in Section \ref{sec-improve} and conclude in Section \ref{sec-conc}.

\section{The Checksum Cracker Application} \label{sec-app}
Given two documents, our application attempts to make the second one have the same CRC-32 checksum as the first document. It appends printable characters to the second document to attempt to find a checksum that is identical or similar to the checksum of the first document.

We realized that it might not be possible to find documents with the exact same checksum. Therefore we defined two metrics to help us qualify our results:
\begin{enumerate}
\item{\bf{Bit Difference}:} The bit difference between two checksums is defined as the number of corresponding bits between the two checksums that are different.
\item{\bf{Bit Prefix}:} The bit prefix is the number of leading bits among the two checksums that are identical. For example, the bit prefix of \textbf{11}001 and \textbf{11}100 is 2. 
\end{enumerate}
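As a concrete illustration, both metrics can be computed from 32-bit CRC values in a few lines of Python using \texttt{zlib.crc32}; the helper names below are ours, not part of the application's actual code:

```python
import zlib

def bit_difference(c1, c2):
    """Number of corresponding bits that differ between two checksums."""
    return bin(c1 ^ c2).count("1")

def bit_prefix(c1, c2, width=32):
    """Number of leading bits (most significant first) that are identical."""
    xor = c1 ^ c2
    if xor == 0:
        return width
    return width - xor.bit_length()

# Example: compare the CRC-32 checksums of two (toy) documents.
c1 = zlib.crc32(b"original document") & 0xFFFFFFFF
c2 = zlib.crc32(b"modified document") & 0xFFFFFFFF
```

For the 5-bit example in the text, \texttt{bit\_prefix(0b11001, 0b11100, width=5)} yields 2, matching the definition above.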

\section{Google App Engine} \label{sec-overview}
Google App Engine \cite{appengine} provides a means to run applications on Google's 
infrastructure, also known as running an application in the cloud.
Developers can implement their application on App Engine in either Python
or Java. We chose Python for our implementation. Python has several
web framework APIs, as well as Google's own web framework API. Our experience with
App Engine is that it is easy to create and deploy a web application.

\subsection{Ease of Use}
We found App Engine easy and intuitive to use. Google provides an SDK that allows programmers to develop on their own machine, and then deploy the final product on the cloud. Developing on Google App Engine is, apart from some configuration files, almost as easy as developing locally. We estimate that a web application with simple functional logic can be developed and deployed on the cloud within a day.

\subsection{Scalability}
One major difficulty with web applications is scalability. App Engine provides
the means to scale automatically, without requiring developer or administrator
intervention. If your application receives more requests, App Engine
allocates more resources to it until you have reached your allowed quotas.

\subsection{App Engine Quotas and Restrictions}
App Engine is free to use as a platform provided the application does not
exceed its quotas. App Engine has two types of quotas: fixed quotas and billable
quotas. Billable quotas can be used to purchase more resources once you have
used your base amount.

\subsubsection{Restrictions}
We describe some of the restrictions that Google App Engine imposes for the free account\cite{limits}. There are other restrictions such as a restriction on the number of emails that can be sent from the application, but we omit those as they are not relevant to our application. These restrictions are for a 24 hour period:
\begin{enumerate}
\item{The application can only use up to 6.5 CPU hours}
\item{The application can only service 1 million requests}
\item{The application can only make 10 million storage API calls} 
\item{The application can only build 100 indexes (this limit applies across the entire lifetime of the application)}
\item{The application can only make 100 Task Queue API calls}
\end{enumerate}

Apart from these 24 hour restrictions, there are restrictions that apply on a per request basis. App Engine gives the application \textbf{30 seconds} to respond to an HTTP request. If this time limit is exceeded, App Engine throws a \texttt{DeadlineExceeded} error.

Applications can attempt to catch the \texttt{DeadlineExceeded} error and save work before the application thread gets terminated. However, App Engine gives less than 1 second before the thread is terminated. As we explain in other sections, this proved inadequate for saving work to the DataStore.

\section{Design and Implementation} \label{sec-design}
We describe the design of our application, with respect to how we search for a string to append to the second document, in order to make its checksum as close as possible to the checksum of the original document.

After describing our initial search algorithm, we discuss the hurdles we faced in implementing it. We also describe our modified search algorithm that provided the scalability and the results that we required, while consuming minimal resources.

\subsection{Sequential Search Algorithm}
We  determined that in order to efficiently explore the search space, we must break up the execution into small units of work.  We first created an algorithm to explore the search space in lexicographical ordering.
\\
\begin{algorithmic}
\FOR{$i = 1$ to $10$}
\STATE Generate the set of strings of length $i$ from the printable character set
\STATE Append each string to $doc2$ and compare its checksum to $doc1$'s
\STATE Save the best result found so far
\ENDFOR
\end{algorithmic}
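A minimal, self-contained Python sketch of this algorithm (using \texttt{zlib.crc32}; function and variable names are illustrative, not taken from our actual code) might look like:

```python
import itertools
import string
import zlib

def sequential_search(doc1, doc2, max_length):
    """Try every printable suffix up to max_length, in lexicographic-style
    order, and return the suffix whose checksum is closest to doc1's."""
    target = zlib.crc32(doc1) & 0xFFFFFFFF
    best_suffix, best_diff = None, 33  # 33 is worse than any 32-bit difference
    for length in range(1, max_length + 1):
        for chars in itertools.product(string.printable, repeat=length):
            suffix = "".join(chars)
            checksum = zlib.crc32(doc2 + suffix.encode()) & 0xFFFFFFFF
            diff = bin(target ^ checksum).count("1")  # bit difference
            if diff < best_diff:
                best_suffix, best_diff = suffix, diff
        # Save the best result found so far (e.g. to the DataStore).
    return best_suffix, best_diff
```

Note that the search space grows exponentially with the suffix length, which is what forced the work to be broken up into small units.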

\subsubsection{Discussion}
Note that the results are saved only after all the strings of each length are explored. This proved to be a problem later on, when there were too many strings of a given length (say, 5) to be explored within 30 seconds, so we never got to save the result of that computation.

\subsection{Implementation of Parallelism} 
We first implemented the algorithm outlined in the previous section using threads. We discuss the problems we faced with threads, and how we switched to using Task Queues.

\subsubsection{Threads}
In our first approach, we partitioned the search space and assigned a thread to perform the search for each partition. Each thread would be given a maximum
length for which to search and a character to use as the starting point
in the search (i.e. thread 1 starts at ``a'', thread 2 starts at ``b'', etc.). For example,
\\
\\
Thread 1, with starting letter \texttt{a}, searches \texttt{a,aa,ab,ac,ad..aaa,aab..}
\\
Thread 2, with starting letter \texttt{b}, searches \texttt{b,ba,bb....baa,bab ..}
\\
\\
When a thread had searched the entire space it was given, it returned with 
the best answer it had found. Using threads allowed us to break up the search
space into more manageable units of work. 
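This thread-per-partition scheme can be sketched as follows; for brevity the sketch uses a lowercase-only alphabet and toy documents, and all names are illustrative rather than taken from our actual code:

```python
import itertools
import string
import threading
import zlib

def search_partition(start_char, doc1, doc2, max_length, results):
    """Search every suffix beginning with start_char, up to max_length."""
    target = zlib.crc32(doc1) & 0xFFFFFFFF
    best = (33, "")  # (bit difference, suffix); 33 is worse than any result
    for length in range(max_length):
        for tail in itertools.product(string.ascii_lowercase, repeat=length):
            suffix = start_char + "".join(tail)
            checksum = zlib.crc32(doc2 + suffix.encode()) & 0xFFFFFFFF
            diff = bin(target ^ checksum).count("1")
            best = min(best, (diff, suffix))
    results[start_char] = best

results = {}
threads = [threading.Thread(target=search_partition,
                            args=(ch, b"doc one", b"doc two", 2, results))
           for ch in "ab"]  # thread 1 starts at "a", thread 2 at "b"
for t in threads:
    t.start()
for t in threads:
    t.join()
best_diff, best_suffix = min(results.values())
```

Each thread writes its partition's best answer into a shared dictionary, and the overall best is taken after all threads are joined.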

\subsubsection{Discussion}
The main problem with this approach is that all the threads \textbf{shared} the 30 second computation limit. This severely restricted the amount of computation that we were able to perform.

For strings with a small maximum length (generally 3), the threads were able to return an answer quickly. However, with larger maximum lengths (4 or 5), we were unable to 
obtain the final answer from each thread.

\subsubsection{Task Queues}
Our solution to this problem was to use the App Engine's \texttt{Task Queue API}
\cite{taskq}. Task queues allow the user to perform work outside of a 
request, as a background process. As with everything in App Engine, 
task queues are subject to certain limitations on time and on the number of tasks allowed. Each task is also required to conform to the 30 second time limit per request. In addition, a maximum
of 50 tasks can be launched per second, an
application is limited to 10 task queues, and only 100 tasks can be added in a batch at a
time. To use task queues, a configuration file must be provided specifying the
name of the queue, the maximum number of requests per second (the rate at 
which requests can be processed), and the bucket size of the queue.
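We do not reproduce our actual configuration here; a representative \texttt{queue.yaml} following App Engine's documented schema might look like the following (the queue name, rate, and bucket size are illustrative):

```yaml
queue:
- name: search-queue   # name of the task queue
  rate: 20/s           # maximum requests processed per second
  bucket_size: 40      # burst capacity of the queue
```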

\subsubsection{Discussion}
The use of task queues mostly solved our problem. Since each task can run for
30 seconds, we can increase the total amount of time that each request can 
spend to search for the optimal answer and still return the original request to
the user within 30 seconds. So instead of spawning a thread for each 
partition, we insert a task into our queue. App Engine schedules the task to 
execute as soon as possible assuming there are adequate resources.

\subsection{Random Search Algorithm}
We found that the sequential search algorithm was consuming too much CPU, and hence we came up with a randomized search algorithm.
\\
\begin{algorithmic}
\FOR{$i = 1$ to $\mathit{numberOfTrials}$} 
\STATE Pick a number $N$ randomly between 1 and $\mathit{maxLength}$
\STATE Generate a random string $S$ of length $N$ using printable characters
\STATE Append $S$ to $doc2$ and compare its checksum to $doc1$'s
\STATE Save the best result found so far
\ENDFOR
\end{algorithmic}
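A self-contained Python sketch of the randomized search (with illustrative names, and a \texttt{seed} parameter added here for reproducibility) might look like:

```python
import random
import string
import zlib

def random_search(doc1, doc2, max_length, number_of_trials, seed=None):
    """Sample random printable suffixes and keep the one whose checksum
    is closest to doc1's checksum."""
    rng = random.Random(seed)
    target = zlib.crc32(doc1) & 0xFFFFFFFF
    best_suffix, best_diff = None, 33  # worse than any 32-bit difference
    for _ in range(number_of_trials):
        n = rng.randint(1, max_length)
        suffix = "".join(rng.choice(string.printable) for _ in range(n))
        checksum = zlib.crc32(doc2 + suffix.encode()) & 0xFFFFFFFF
        diff = bin(target ^ checksum).count("1")
        if diff < best_diff:
            best_suffix, best_diff = suffix, diff
        # Periodically save the best result so far (e.g. to the DataStore).
    return best_suffix, best_diff
```

Unlike the sequential version, the cost per trial is bounded by \texttt{max\_length} regardless of how large the overall search space is.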

\subsubsection{Discussion}
The randomized algorithm consumes far fewer resources and provides results equivalent to those of the sequential search. The primary motivation for the randomized algorithm was the observation that the sequential order of the first algorithm was not useful in generating better quality results, and also that the sequential algorithm was limited in practice to strings of length 4.

\subsection{Tackling the DeadlineExceeded Error} \label{sec-thirty}
This was one of the major challenges of developing this application on the App Engine. Every thread/task has 30 seconds to run, after which the \texttt{DeadlineExceeded} error is thrown. Applications are given a chance to catch this error, perform some state-saving, and then get terminated.

There are a  number of ways of handling this problem:
\begin{enumerate}
\item{Introduce a timer into your task/thread. When the thread has executed for 28 seconds, save state and exit the thread.}
\item{Save state at intermediate points during the thread execution}
\item{Design the application so that the termination of a thread does not impede the flow of the application}
\end{enumerate}

The first option is complicated by the fact that saving program state into the DataStore is very slow. Hence we turned to options 2 and 3, which we incorporated into our application. We save state at multiple points in the application flow. For example, in the sequential algorithm, the results are saved after all the strings of a particular length have been examined; however, this proved too coarse-grained. In the randomized algorithm, we save state after a fixed number of trials have been executed.
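The combination of a timer and periodic saves can be sketched as follows; \texttt{do\_trial} and \texttt{save\_state} are stand-ins for the application's search step and DataStore write, and the constants are illustrative:

```python
import time

DEADLINE_MARGIN = 28     # save and exit well before the 30 second limit
CHECKPOINT_EVERY = 1000  # trials between intermediate saves

def run_task(trials, do_trial, save_state):
    """Run up to `trials` units of work, checkpointing periodically and
    bailing out before the request deadline. Returns trials completed."""
    start = time.time()
    for i in range(trials):
        do_trial(i)
        if (i + 1) % CHECKPOINT_EVERY == 0:
            save_state(i)                 # option 2: periodic save
        if time.time() - start > DEADLINE_MARGIN:
            save_state(i)                 # option 1: timer-based save
            return i + 1
    save_state(trials - 1)                # final save on normal completion
    return trials
```

Because the state is saved at every checkpoint, a terminated task still leaves valid intermediate results behind, which is option 3 in the list above.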

\subsubsection{Getting results to the user}
Another problem associated with the 30 second rule is how to get the final output to the user, given that the computation will take more than 30 seconds to perform. The reason that the computation takes more than 30 seconds to perform is that, even though we put tasks into the queue in one batch, App Engine may not execute them all at once. We observed that if there are more than 10 tasks in the queue, App Engine executes them in a phased manner, with only a subset of the tasks running at any point.

Our work-around for this problem was the following:
\begin{enumerate}
\item{When the user submits a request, a unique request ID is given to that request}
\item{The output page shown to the user automatically refreshes every \textbf{5 seconds}, pulling the latest results from the DataStore, using the requestID}
\item{The output shown to the user is sorted ascending on the bit difference and descending on the bit prefix.}
\end{enumerate}
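The sort order in the last step can be expressed with a single composite key, ascending on bit difference and descending on bit prefix; the tuple layout here is illustrative, not our actual DataStore schema:

```python
# Each stored result: (request_id, bit_difference, bit_prefix, suffix)
results = [
    ("r1", 3, 4, "ab!"),
    ("r1", 2, 11, "Qx"),
    ("r1", 2, 3, "zz9"),
]

# Sort ascending on bit difference, then descending on bit prefix,
# as done when rendering the auto-refreshing output page.
ranked = sorted(results, key=lambda r: (r[1], -r[2]))
```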

\section{Difficulties Encountered} \label{sec-diff}
In this project, we encountered several problems, most of which were due to
the limitations of App Engine.
As mentioned in Section \ref{sec-thirty}, the 30 second
timeout was very problematic because all work had to fit within that time frame. We
were able to allocate more time to a request by using the task queues, but 
sometimes the task queues reached the 30 second limit as well. It was difficult
to partition the search space such that a task could complete in under 30 
seconds and still not require us to launch more tasks than we were allowed by
App Engine. To mitigate the first problem of reaching the 30 second limit, we
chose to save our work periodically throughout the task. This meant that even 
if we were unable to search the entire space, we did have some intermediate
results which were still potentially valid results. We tried several different
search algorithms to try to split up the search space such that the 
computation could be completed within 30 seconds.

Another major problem was CPU quotas. App Engine only provides 6.5 CPU hours 
per day for free. This can be used up surprisingly quickly when you launch almost 100
tasks to search the combined space of all strings of length 5 or less over all
printable characters. In addition, if you exceed your quota for one day with
tasks still left in the task queue, those will execute when the quota is reset
the following day. We were able to side-step this problem by creating multiple
accounts on App Engine.

\section{Evaluation} \label{sec-results}
In our evaluation, we sought to answer the following questions:
\begin{enumerate}
\item{What is the CPU resource utilization of the various approaches?}
\item{In the randomized algorithm, do we get better results by increasing the maximum length of the string to be appended, or by increasing the number of trials?}
\item{What is the average time to produce the final output for the application?}
\end{enumerate}

\subsection{Defining our approaches}
 Each approach is defined by three parameters:
\begin{enumerate}
\item{\textbf{Algorithm}: Whether we used the sequential algorithm or the randomized algorithm }
\item{\textbf{Max Length}: Maximum Length of the string appended to document 2 in both the algorithms }
\item{\textbf{Number of trials}: This applies only to the randomized algorithm. It is the number of trials that the randomized algorithm runs; equivalently, the number of points in the random search space that are examined. }
\end{enumerate}

\begin{table}[tbp]
\caption{Comparison of results obtained from various approaches}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline 
\textbf{Approach} & \textbf{Max Length} & \textbf{Bit Diff} & \textbf{Bit Prefix} \\
\hline
Sequential & 5 & 3 & 4 \\
\hline
Random - 1000 trials & 5 & 4 & 3 \\
\hline
Random - 10000 trials & 5 & 2 & 11 \\
\hline
Random - 100000 trials & 5 & 2 & 3 \\
\hline
\hline
Random - 1000 trials & 10 & 4 & 5 \\
\hline
Random - 10000 trials & 10 & 1 & 0 \\
\hline
Random - 100000 trials & 10 & 2 & 15 \\
\hline
 \hline
Random - 1000 trials & 50 & 4 & 5 \\
\hline
Random - 10000 trials & 50 & 2 & 9 \\
\hline
Random - 100000 trials & 50 & 2 & 23 \\
\hline
 \hline
Random - 1000 trials & 100 & 4 & 18 \\
\hline
Random - 10000 trials & 100 & 3 & 16 \\
\hline
Random - 100000 trials & 100 & 2 & 15 \\
\hline
\end{tabular}
\end{center}
\label{tab:results}
\end{table}

\subsection{Quality of Results}
Table 1 lists the results obtained for the various approaches that we evaluated.
The results produced by the randomized version of the application are not as good as those produced by the sequential version when the number of trials for the randomized version is 1000. As the number of trials increases, the randomized version produces very good results.

Beyond a maximum length of 5, the sequential version of the application is not able to provide outputs within the 30 second limit. Hence results produced by the sequential version for maximum lengths greater than 5 are not shown in Table 1. Each result shown in the table is the best result for that particular approach, taken from running each approach 5 times.

The results for the sequential version are very stable and do not vary between runs, whereas the results for the random version vary widely between runs, as they depend on randomly chosen strings. However, in almost all trials, the randomized version was able to find a string that produced the same bit difference as shown in the table, though not the same bit prefix. The exception is the result for maximum length 10 with 10000 trials: the result of bit difference 1 and bit prefix 0 occurred only once.

The results enable us to answer one of our proposed questions: do results improve with an increase in the number of trials, or with an increase in maxlength? From Table 1, it can be observed that increasing the number of trials significantly improves the quality of results, while increasing the maxlength while keeping the number of trials steady does not yield an improvement.

\subsection{CPU Utilization}
Figure 1 shows the CPU utilization of each approach that we explored in our application. The term ``limit'' in the figure denotes the number of trials for the randomized version of the application.

\begin{figure}[tbp]
\begin{center}
\includegraphics[scale=0.7]{fig_cpu.png}
\caption{\textbf{CPU Utilization for various approaches}: The average number of CPU hours required to finish computation for 1 request for each approach. }
\label{fig:cpu}
\end{center}
\end{figure}

As can be observed from the figure, the randomized version consumes far less CPU than the sequential version. The results depicted in the figure are the average of 5 runs.

Since Google App Engine only allows 6.5 CPU hours per day, the sequential version can run only 
4 times per day. Contrast this with the randomized approach, where even the CPU-heavy 100000 trials version can run 30 times per day.

\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.7]{fig_timediff.png}
\caption{\textbf{Time to complete computation for various approaches}: The average number of seconds required to completely finish computation for 1 request for each approach. }
\label{fig:time}
\end{center}
\end{figure}

\subsection{Time for completion of computation}
We also measured the time for completion of the computation needed for a request. We measure this as the time difference between the points when the first result is stored in the DataStore, and the last result is stored in the DataStore.

Figure 2 depicts these results. For the sequential version, the time of completion of a request is around 270 seconds. Increasing the number of trials of the randomized version naturally increases computation time, and hence the time of completion.

By comparing the curves for the randomized version at different maxlengths, we observe that the higher the maxlength, the sharper the increase in time of completion as the number of trials increases. This is due to more computation being done in each trial as the maximum length increases.

\section{How can AppEngine be improved?}\label{sec-improve}
Though App Engine is easy and intuitive to use, it falls short in a number of aspects:
\begin{enumerate}
\item{\textbf{Debugging}: App Engine provides almost no debugging tools on the cloud. Any application which involves concurrency is hard to debug, especially when it is running on different machines that you do not control. More detailed debugging tools are needed for App Engine.}
\item{\textbf{More dynamic support for indexes}: While developing the application, we noted that App Engine did not automatically create indexes for our DataStore queries. We needed to specify them manually, and wait for some time while they were built. This process should be automated in the future.}
\end{enumerate}

\section{Conclusion}\label{sec-conc}
We used AppEngine to develop and deploy a compute intensive application to change a document so that its CRC-32 checksum matches that of another document. Based on our project, we can conclude the following:
\begin{enumerate}
\item{AppEngine is not designed for compute intensive applications. It is meant primarily for low-latency applications. Though it can be made to run compute intensive workloads, it is not easy or efficient to do this.}
\item{Despite Google's obsession with speed, the DataStore is very slow and a \texttt{Put} operation to the DataStore might take more than a second.}
\item{For our design, a randomized version of a brute-force attempt to produce documents that had identical checksums seemed to perform better, both in terms of results produced and resources consumed, than a sequential version.}
\end{enumerate}

We also determine that while it is fun and intuitive to develop simple applications on the App Engine, any application that has more complexity needs better tools in terms of debuggers and interactive consoles.

\bibliographystyle{abbrv}
\bibliography{project}

\end{document}