\documentclass[twocolumn,twoside]{IEEEtran}
\usepackage{amssymb,amsmath}
\usepackage[table]{xcolor}
\usepackage{boxedminipage}
\usepackage[pdftex]{graphicx}
\usepackage{epstopdf}
\usepackage{listings}
% \usepackage[plainpages=false]{hyperref}
\usepackage{ifpdf}
\usepackage{array}
\usepackage{url}
\usepackage{eso-pic}
\usepackage[english]{babel} 
\usepackage{makeidx}
\usepackage{natbib}
\usepackage{glossaries}
\usepackage{wrapfig}
\include{glossary}
\makeglossaries
\makeindex

\title{Report for IN4392 Cloud Computing - Cloud Resource Manager} 
\author{
    \IEEEauthorblockN{S.P. Hoogendijk\IEEEauthorrefmark{1} \and R.S. Plak\IEEEauthorrefmark{1}}
    \IEEEauthorblockA{\IEEEauthorrefmark{1}Delft University of Technology
    \\\{s.p.hoogendijk, r.s.plak\}@student.tudelft.nl
    \\ 1379046, 1358375}
    \thanks{We would like to thank  Dr.Ir. D.H.J. Epema\IEEEauthorrefmark{1} and Dr. A. Iosup\IEEEauthorrefmark{1} for their insightful classes and helpful information.}
}
\date{\today}
	
	
\pagestyle{empty}

\begin{document}
\maketitle 

\begin{abstract} 
When handling large text files, counting the occurrences of a regular expression can be very time-consuming. Processing parts of the text file on multiple machines (or virtual machines) can reduce the runtime significantly. However, dividing the input file over the different machines introduces considerable overhead, as the file has to be split and sent to the machines. The following question arises: \textit{``How large does the input file have to be for the time saved to outgrow the overhead of parallel processing?''}. In this paper, we apply simple regular expressions to several large text files. We try different numbers of virtual machines with different file sizes in order to arrive at a function that indicates whether it is worthwhile to use parallel processing.
\end{abstract}

\section{Introduction} % (fold)
\label{sec:introduction}
\PARstart{I}{n} order to test the advantages and disadvantages of using an \gls{iaas} based cloud, we created a program that analyses a text file. The text files we test are of different sizes, ranging from 1 \gls{mb} to 1 \gls{gb}. Though we run a trivial job on multiple \glspl{vm}, the type of job we present can be any job that needs to process a text file, or any file for that matter. The main characteristic is divide and conquer. The system divides a given file into several smaller pieces, which are handed to the available virtual machines that need to process them. Each virtual machine then processes its piece and reports its result back to the coordinating machine. 

%same systems
The system design is similar to a \gls{mr} framework, though it is far less advanced than those systems. The main similarity is that both a \gls{mr} system and our system decompose the workload by dividing the data and sending it to the \glspl{vm}. Thus the computation follows the data. 

%implementation
We implemented the following system. It is currently only available from the command line. The system can create, show and kill \glspl{vm}. If any \gls{vm} does not behave correctly, it is identified and removed. The actual functionality of the system is counting vowels. One provides an input file to the master, which divides the input file over the machines. When the transfer of a file part has finished, the respective \gls{vm} can start doing its work. We currently use a simple default tool available in CentOS called \texttt{wc}. This tool can be replaced by any functionality available from the command line on the target \gls{vm}. When a \gls{vm} finishes, it writes a temporary file to the master machine, which then reads it and can combine the files or draw conclusions from the result. In our case it reads the value in the file, which is the number of vowels counted, and adds up the numbers.
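The work each \gls{vm} performs is essentially a vowel count over its chunk of text. As an illustration, here is a minimal Java sketch of that computation; the deployed system shells out to a command-line tool on the \gls{vm} instead, and the class and method names here are ours, not taken from the source:

```java
// Sketch of the per-VM work: count the vowels in one chunk of text.
// Illustrative only; the real system runs a CentOS command-line tool.
public class VowelCount {
    public static long countVowels(String text) {
        long count = 0;
        for (char ch : text.toLowerCase().toCharArray()) {
            // Count the five vowel characters; everything else is skipped.
            if ("aeiou".indexOf(ch) >= 0) {
                count++;
            }
        }
        return count;
    }
}
```

Since the master only adds up the per-chunk counts, this single number is the only value a \gls{vm} needs to report back.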

%Remainder article
The rest of this paper is organized as follows. In section \ref{sec:background_information} we provide background information about \gls{iaas} based clouds, as well as more information about the system and its requirements. Next, in section \ref{sec:system_design}, we describe the system design, including a description of the system features and the architecture. That section is followed by the results of our experiments, including a description of the test environment and the actual results. In section \ref{sec:discussion} we discuss the results and the tradeoffs that are present when choosing an \gls{iaas} based cloud system. Finally, we conclude in section \ref{sec:conclusion}. The appendix contains an explanation of how to use the system. 
%remaining of page 1
% describe the problem, the existing systems and/or tools (related work), the system you are about to implement, and the structure of the remainder of the article; use one short paragraph for each
% section introduction (end)

\section{Background information} % (fold)
\label{sec:background_information}
\gls{iaas} based cloud computing offers a service where one can lease \glspl{vm} at will and use as many of them as needed. The user is charged for the time a \gls{vm} is running, even if the \gls{vm} is doing nothing. It is therefore useful to shut down a \gls{vm} when it is no longer in use, to reduce the service costs. A \gls{vm} is billed from startup until shutdown. This means that short-running machines have a relatively higher cost, because the startup/shutdown time is large compared to the actual processing time. 

The system we present uses an \gls{iaas} based cloud. Currently it is based on OpenNebula, but it can be adapted to suit other available cloud architectures. The system is written in Java and is only available from the command line. The system starts on one machine, which we call the \textit{coordinator}. When the system is started it is ready to receive commands. The most important command is \textit{count}, which takes a filename as an argument. The system reads this file, checks which \glspl{vm} are available and divides the file into equal parts, one for each available \gls{vm}. After the division, each chunk of the file is sent to a \gls{vm}, which then processes it. The result of each \gls{vm} is written back to the coordinator. The coordinator then takes the results and computes the final result.
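Dividing the file into equal parts for the available \glspl{vm} comes down to computing one byte range per machine. A minimal sketch of such a division, with hypothetical names not taken from our source:

```java
// Computes [start, end) byte offsets that split a file of fileSize bytes
// into one roughly equal chunk per VM; the last chunk absorbs any remainder
// left by the integer division.
public class ChunkPlanner {
    public static long[][] plan(long fileSize, int vms) {
        long chunkSize = fileSize / vms;
        long[][] ranges = new long[vms][2];
        for (int i = 0; i < vms; i++) {
            ranges[i][0] = i * chunkSize;
            ranges[i][1] = (i == vms - 1) ? fileSize : (i + 1) * chunkSize;
        }
        return ranges;
    }
}
```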

The system needs to meet five predefined requirements: automation, elasticity, performance, reliability and monitoring. Our system meets these five basic requirements in the following way.
\subsubsection{Automation}
Automation means that the system automates as much as possible. We achieve this by managing the \glspl{vm}, clearing dead \glspl{vm}, aggregating results and dividing the computation. 
\subsubsection{Elasticity}
Elasticity means that the number of available \glspl{vm} is adjusted dynamically to comply with the job demand. 
Currently it is possible to add or remove one or more \glspl{vm} before starting a job.
%Currently we have a scaling of machine based on the filesize. We empirically found that when a file is divided into pieces of around 30-50MB the overhead of the the \gls{iaas} based cloud is lowest. Therefore the system checks how many \glspl{vm} are available at the moment, and starts/closes the right amount of \glspl{vm} such that each machine can process a piece of 30-50MB in parallel. 
\subsubsection{Performance}
Performance means that each \gls{vm} has the same amount of work to do, improving the overall performance of the system. We achieve this by dividing the workload into equal pieces and assigning each piece to a \gls{vm} that then processes it. Since all available \glspl{vm} are identical, giving them an equal workload means they finish at around the same time, thus optimizing the performance. 

\subsubsection{Reliability}
Reliability means that when one or more \glspl{vm} fail to complete their respective part, the coordinator notices this and restarts the (sub)job. In our current application, the user gets an error message when something goes wrong with a machine, after which processing stops. The user needs to manually restart the system for the same job.
%In our current application, the coordinator notices that a job is not completed correctly and tries to restart the same job on the same virtual machines. If this still fails, it restarts the failing machine and tries again. If after several tries, for instance three, it is still not possible to finish the job, the machines produces an error to the user indicating this.

\subsubsection{Monitoring}
Monitoring means that the system reports the amount of time used, how far the current job has progressed, and other statistics. Our system does this by returning how long a job took to process on multiple machines. This excludes starting and stopping the \glspl{vm}. 
%Our system does this by returning results and processing time per machine. This way the user can see how far the job has been finished. When all jobs are finished, the total processing time of the job is returned. This excludes starting and stopping the \glspl{vm}.

%Half a page 
% describe the application (1 paragraph) and its requirements (1-3 paragraphs, summarized in a table if needed). 
% section background_information (end)

\section{System Design} % (fold)
\label{sec:system_design}

%1.5 page
% a. Resource Management Architecture: describe the design of your system, including the inter-operation of the provisioning, allocation, reliability, and monitoring components (which correspond to the homonym features required by the WantCloud CTO). 
% b. System Policies: describe the policies your system uses and supports. The latter may remain not implemented throughout your coursework, as long as you can explain how they can be supported in the future.
% c. (Optional, for bonus points, see Section F) Additional System Features: describe each additional feature of your system, one sub-section per feature.
% section system_design (end)
This section describes the system architecture. It explains the allocation, reliability and monitoring components of the system, as well as its programmatic possibilities.

\subsection{Global system idea}
In order to run jobs in parallel, the system uses multithreading. When launching multiple VMs, the system constructs a new thread for each VM. This way, the booting of the VMs can be monitored and faults can easily be tracked. Once the VMs are booted, they are continuously monitored. The actual vowel count is invoked by the user. New threads are created for each VM and the VMs begin their calculation on their node. When they are done, they send the output back to the monitor, which combines the subproblems into one final result.
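The thread-per-VM fan-out and the final combination step can be sketched as follows. This is a simplified illustration in modern Java (lambdas for brevity, whereas the actual system targets Java 1.6 and contacts real \glspl{vm} over SSH); all names are ours:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the one-thread-per-VM fan-out/fan-in described above.
public class FanOut {
    public static long countInParallel(List<String> chunks) {
        ExecutorService pool = Executors.newFixedThreadPool(chunks.size());
        try {
            List<Future<Long>> futures = new ArrayList<>();
            for (String chunk : chunks) {
                // One task per chunk, mirroring one VMExecutor per VM.
                futures.add(pool.submit(() -> countVowels(chunk)));
            }
            long total = 0;
            for (Future<Long> f : futures) {
                total += f.get(); // fan-in: combine the subresults
            }
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    static long countVowels(String s) {
        long n = 0;
        for (char c : s.toLowerCase().toCharArray()) {
            if ("aeiou".indexOf(c) >= 0) n++;
        }
        return n;
    }
}
```

Each task here stands in for one VMExecutor; the \texttt{get()} calls correspond to the monitor waiting for the \glspl{vm} to send their output back.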
\subsection{Components}
\begin{itemize}
\item \emph{VirtualMachine:} Represents a virtual machine. Has all attributes a virtual machine has in OpenNebula, keeps track of the status, IP, ID, etc., and does the actual calculation on the (pieces of) input.
\item \emph{VMExecutor:} Uses a virtual machine to calculate vowel counts over a piece of an input problem. This component is used for the multithreading. When calculating vowel counts, the Monitor constructs several VMExecutors for parallel processing.
\item \emph{Monitor:} The monitor class of the system. Keeps track of all virtual machines that are active. 
\item \emph{StartVM:} Translates user input into system actions. 
\end{itemize} 
When using the program, the user is continuously prompted for input. The user can specify actions to be performed. It is possible to ask the program to list all currently active (or failed) VMs, to boot extra virtual machines, and to shut down virtual machines. It is also possible to clear the failed virtual machines. When large text files reside on the same server the program is running on, the system can count the number of vowels present in a text file. The system uses all active VMs to calculate the number of vowels, and keeps track of which VMs are completely booted (and SSH-able), and which VMs are busy.
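The prompt loop described above can be sketched as a simple dispatch on the first word of each input line. This is a hedged illustration; the real StartVM class forwards these actions to the Monitor, and the returned strings here are ours:

```java
import java.util.Scanner;

// Minimal sketch of the interactive command loop (names and messages ours).
public class CommandLoop {
    public static String dispatch(String line) {
        String[] parts = line.trim().split("\\s+");
        switch (parts[0]) {
            case "show":   return "listing VMs";
            case "add":    return "adding " + parts[1] + " VM(s)";
            case "remove": return "removing " + parts[1] + " VM(s)";
            case "clear":  return "clearing dead VMs";
            case "count":  return "counting vowels in " + parts[1];
            default:       return "unknown command";
        }
    }

    public static void main(String[] args) {
        // Continuously prompt for input until stdin is closed.
        Scanner in = new Scanner(System.in);
        while (in.hasNextLine()) {
            System.out.println(dispatch(in.nextLine()));
        }
    }
}
```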

\section{Experimental results} % (fold)
\label{sec:experimental_results}
This section describes the experimental environment setup, including which system we used and the type of workload we put on the system. We also provide the results of the experiment. In the next section (\ref{sec:discussion}) we discuss the implications of the test results.
\subsection{Environment setup}
\label{sec:expsetup}
We tested our system on the fourth \gls{das} cluster, on which OpenNebula 3.4.1 is installed as an \gls{iaas} cloud provider. The \gls{das}4 cluster in Delft comprises 32 nodes, each having two quad-core processors running at 2.4GHz. The systems primarily used in the \gls{das}4 cluster are SuperMicro 2U-twins with Intel E5620 CPUs. In OpenNebula we started \glspl{vm} that required half the CPU power, having 400MB of memory and 1 virtual CPU. The driver used is the \gls{kvm} driver. 

For the implementation of our system we did not use any existing tools or libraries. The same holds for the monitoring of our system; we implemented all needed features ourselves. For conducting the experiments we did not use external tools either; we obtained the results from timing code in our Java source. We realize that using existing tools would improve accuracy and would include the startup and shutdown time of the \glspl{vm}. 
\glsreset{mb} % make clear once more that this is about bits, not bytes.
As experimental workload we use several files extracted from Wikipedia. These files are parts of HTML files with sizes of 1 \gls{mb}, 5 \gls{mb}, 10 \gls{mb}, 50 \gls{mb}, 100 \gls{mb}, 500 \gls{mb} and finally 1000 \gls{mb}. The \glspl{vm} are started before the tests and are shut down when we want to test with fewer machines. Thus the running times provided are purely the computational costs. We did this because it was much easier for us to measure the actual computational cost without including the startup and shutdown time of a \gls{vm}. We also argue that startup and shutdown times of homogeneous \glspl{vm} are fairly constant, so these fixed times can be added to the results of our experiments to get a good approximation of the running times including startup and shutdown. 

The tests were done in consecutive order on the same day. Therefore the test results might not be indicative of a longer run, because the \gls{das} cluster might have been under heavy use during our experiment, which would obviously have an impact on the final performance of the entire system. Then again, this might also happen in a real-life situation where there are multiple tenants who produce different loads. The \gls{das}4 is a special system in this respect because, next to providing an \gls{iaas} cloud, it is possible to use the cluster in another way that is not under the control of the hypervisor.
% a. Experimental setup: describe the working environments (DAS, Amazon EC2, etc.), the general workload and monitoring tools and libraries, other tools and libraries you have used to implement and deploy your system, other tools and libraries used to conduct your experiments. 

\subsection{Results}
The experiments we conducted use the Wikipedia files described in section \ref{sec:expsetup}. We refer to these files as the test files; when a file is named, its name equals its file size. 

Table \ref{table:test} lists the medians of the runtimes for all files. The coloured cells are those where the median is lowest. From these colours one could conclude that almost all files benefit from more \glspl{vm}. But when we look at the graphs in figures \ref{fig:alltest} and \ref{fig:smalltest}, this is not so obvious. Especially from figure \ref{fig:alltest} it seems that processing a file with more than 5 \glspl{vm} does not increase the performance much.

When we zoom in and only plot the four smallest test files, we see that this does not hold for the smaller files (fig. \ref{fig:smalltest}). The total processing time seems to be independent of the number of \glspl{vm} used. This is actually an interesting result, because it means there is very little overhead in using multiple machines, excluding the startup and shutdown time. We would have expected that with an increase in machines, there would be an increase in processing time for these smaller test files, mainly caused by network overhead.

The test file that actually behaves as we expected is the 50\gls{mb} file. The performance on this file increases with more machines and finally levels out. We would have expected this behaviour for all larger files, 100\gls{mb} and up. The reason for this expectation is that counting the vowels in a file on one machine is already fast. Executing it on more machines would only introduce extra overhead without performance gain. A performance gain would only be seen when the running time on a single machine exceeds the overhead introduced by dividing the file over multiple \glspl{vm}, and this longer single-machine running time comes with the larger files. 

The reasons for the actual results are not completely clear to us. We noticed during testing that the running times differ a lot between two consecutive runs of the same file on the same number of machines. For instance, the difference between the fastest and slowest running time for the 1000\gls{mb} file on 30 \glspl{vm} is 42.75 seconds. We suspect these significant differences are due to the load of the \gls{das}4 and to when the scheduler places our \glspl{vm} on the physical machines. We did not record the intermediate results of our machines and thus cannot check whether this delay was caused by one machine being scheduled late, or by all machines performing slowly. 

One thing we could have done to stabilize our results is increasing the number of tests done per \gls{vm} setting and per file. We currently ran only 10 tests for each file with a specific number of machines. Increasing this number to, for instance, 100 would increase the stability of the median and average. These stabilized results might provide better insight into where the most performance gain is achieved. We did not do these tests due to the time constraints of this assignment. 

The total runtime of all tests is 12390034 ms, which is around 3.5 hours. If we had kept all machines running during this period, we would not incur a penalty for starting up and shutting down \glspl{vm}. Since we tested with 30 \glspl{vm}, we would have been charged 4 hours per \gls{vm}. This results in a total of $30\cdot4=120$ charged hours. If we used the cheapest machine available on Amazon's EC2 cloud, the total cost would be $120 \cdot 0.10 = 12$ Euro. We assume that one \gls{vm} on Amazon EC2 costs 10 Euro cents per hour and that used time is rounded up in 1-hour increments.
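The charging model used in this calculation (round each \gls{vm}'s hours up to whole 1-hour increments, then multiply by the number of \glspl{vm} and the hourly price) can be written down as a small helper; names and rates are the assumptions stated above:

```java
public class Ec2Cost {
    // Amazon EC2-style charging as assumed in the text: hours are rounded
    // up to whole 1-hour increments per VM, then multiplied by the number
    // of VMs and the assumed hourly price (0.10 Euro).
    public static double chargedCost(long runtimeMs, int vms, double pricePerHour) {
        long chargedHoursPerVm = (long) Math.ceil(runtimeMs / 3600000.0);
        return chargedHoursPerVm * vms * pricePerHour;
    }
}
```

For our experiment, \texttt{chargedCost(12390034L, 30, 0.10)} reproduces the 12 Euro computed above.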
% b. Experiments: describe the experiments you have conducted to analyze each system feature, then analyze them; use one sub-section per experiment. For each experiment, describe the workload, present the operation of the system, and analyze the results. In the analysis, report: 
%1.5 page
	% i. Charged-time = time that would have been charged using the Amazon EC2 timing approach (1-hour increments) 
	% ii. Charged-cost = cost that would have been charged using the Amazon EC2 charging approach, assuming 10 Euro-cents/charged hour 
	% iii. Service metrics of the experiment, such as runtime and response time of the service, etc. 
	% iv. (optional) Usage metrics of the experiment, such as per-VM and overall system usage and activity.
% section experimental_results (end)
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{runtime-all}
\caption{This figure shows the runtime versus the number of \glspl{vm} used. The values are the same as those in table \ref{table:test}. From this graph it is clear that when more than 5 \glspl{vm} are used, the speedup is almost non-existent, even for larger jobs.}
\label{fig:alltest}
\includegraphics[width=0.5\textwidth]{runtime-small}
\caption{This figure shows the smaller jobs that we ran. From the figure it is clear that the more machines are used for the 50MB job, the faster the job finishes, while the effect of multiple machines on jobs smaller than 50MB is negligible. Noteworthy is the fact that multiple machines barely slow the process down, which one might otherwise expect.}
\label{fig:smalltest}
\end{figure}
\begin{table*}
	\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}|l|l|l|l|l|l|l|l|l|}
		\hline	
		\textbf{size/VMs}	& 1		& 2		& 5		& 10	& 15	& 20	& 25	& 30\\
		\hline
		1000MB	& 166.40& 130.25& 59.66	& 61.03	& 70.51	& 69.05	& \cellcolor{blue!25}31.45	& 85.79\\
		\hline
		500MB	& 73.46	& 61.48	& 38.08	& 36.07	& 40.02	& 39.01	& 41.23	& \cellcolor{blue!25}24.05\\
		\hline
		100MB	& 14.91	& 16.13	& 14.28	& 10.72	& 14.92	& 15.62	& \cellcolor{blue!25}2.93	& 3.40\\
		\hline
		50MB	& 9.26	& 8.71	& 7.85	& 7.89	& 6.28	& 5.03	& \cellcolor{blue!25}3.11	& 3.43\\
		\hline
		10MB	& 1.95	& 2.11	& 2.32	& 1.52	& 2.13	& 0.92	& \cellcolor{blue!25}0.82	& 1.51\\
		\hline
		5MB		& 1.22	& 1.17	& 1.54	& 0.87	& 0.82	& \cellcolor{blue!25}0.80	& 0.86	& 1.65\\
		\hline
		1MB		& \cellcolor{blue!25}0.63	& 0.77	& 0.93	& 0.96	& 1.31	& 0.67	& 0.69	& 7.56\\
		\hline
	\end{tabular*}
\caption{Test results of our system on the DAS4. The values shown are the median values of 10 runs, in seconds. Each column represents the number of \glspl{vm} used; each row represents the size of the input file. The light blue colour indicates the lowest value in each row.}
\label{table:test}
\end{table*}

\section{Discussion} % (fold)
\label{sec:discussion}
From our test results we can see that cloud computing on an \gls{iaas} based cloud has its advantages for the completion time, and thus the responsiveness, of computation- or data-intensive tasks. Though the results are interesting, there are several tradeoffs that need to be considered before adopting a public \gls{iaas} based cloud for an application. 

First of all there are the costs: are the costs of using an \gls{iaas} lower than those of using a cluster or private cloud? The answer to this question mainly depends on what the adopting company wants to achieve. An \gls{iaas} cloud has the advantage that the \glspl{vm} have high availability, but charging goes per \gls{vm}, so keeping a \gls{vm} active costs money. This is not the case with a private \gls{iaas} cloud, where keeping a \gls{vm} active costs no money; instead, keeping a physical machine active costs money in electricity and maintenance. Second, creating a private cloud requires a large upfront investment. 

Since the number of running \glspl{vm} is limited by the number of physical machines available, and a public \gls{iaas} cloud provider has a lot of physical machines, using a public \gls{iaas} gives access to a virtually unlimited number of \glspl{vm}. This is not the case in any other environment. Access to such a number of \glspl{vm} makes it much easier to scale the application when the load is higher. For a company that expects a load with many sudden increases, such a property would be highly desirable. This scaling is limited for a private cloud.

A disadvantage of a public cloud is security. Though we may expect a cloud service to be secure, it might not be secure enough for a company. A company cannot control the security of the entire system. It might not even get insight into the security of the system, because this information is sensitive and might jeopardize the entire cloud. 

If the company is planning on expanding, the ease of leasing extra \glspl{vm} is not the only positive feature of a public cloud. It also helps that the cloud provider probably has multiple data centres strategically placed around the world. This would improve the responsiveness of an internet-based service if virtual machines are leased near the users' location. So if a company decides to conquer the USA, it can give its American users the same response times it provides for its European users.

Developing an \gls{iaas} cloud based application has a higher cost than developing a single-machine application. This is mainly because development on an \gls{iaas} cloud requires higher-skilled programmers and architects, since it is a distributed system. 

To conclude this section: if a company is not planning on growing fast and expects a very stable load, the cloud is not for them; better alternatives are available that provide the same functionality at lower costs. If security is a big issue, as it is for banks, the cloud is also not suited for the company due to the lack of control over security. 

Let us assume that WantCloud requires an average of 5 machines throughout the year, and sometimes requires faster processing for larger jobs, which requires more \glspl{vm}. When running 5 machines continuously for a year, the total cost would be $hours \cdot machines \cdot price = 8765\cdot5\cdot0.10 = 4382.5$ Euro. In comparison, a server consuming 450 watts at a price of 15 cents per kWh would cost $\frac{8765 \cdot 450}{1000} \cdot 0.15 = 591.6$ Euro. Multiplying this value by 5 for the servers required results in $2958.2$ Euro on electricity alone. This excludes any extra costs for the infrastructure used, such as air-conditioning or network equipment. It also excludes the purchase costs of the servers and the potential loss of income due to the inability to scale and thus process the required jobs fast enough. 
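The back-of-the-envelope comparison above can be reproduced with two short formulas; the constants (8765 hours per year, 0.10 Euro per VM-hour, 450 W draw, 0.15 Euro per kWh) are the assumptions stated in the text:

```java
public class YearlyCost {
    static final double HOURS_PER_YEAR = 8765.0;

    // Leasing: hours per year * number of VMs * hourly price.
    public static double cloudCost(int vms, double pricePerHour) {
        return HOURS_PER_YEAR * vms * pricePerHour;
    }

    // Owning: electricity only, at the given wattage and price per kWh.
    public static double electricityCost(int servers, double watts, double pricePerKwh) {
        return HOURS_PER_YEAR * watts / 1000.0 * pricePerKwh * servers;
    }
}
```

With the assumed values, \texttt{cloudCost(5, 0.10)} gives the 4382.5 Euro and \texttt{electricityCost(5, 450, 0.15)} the roughly 2958.2 Euro quoted above.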

Thus we suggest that WantCloud BV start using the cloud, provided they do not require control over the security of the system. They would likely reduce costs compared to a self-owned system and would also be more flexible. If the company grows, it would be easier for their system to grow with them by leasing extra \glspl{vm}.
% Since it is not clear what WantCloud BV requires from the cloud, we are not able to give them advice whether to adopt a public \gls{iaas} based cloud or not. 


% 1 page
% summarize the main findings of your work and discuss the tradeoffs inherent in the design of cloud-computing-based applications. Should the WantCloud CTO use IaaS-based clouds? Among others, use extrapolation on the results, as reported in Section 6.b of the report, to discuss the charged time and charged cost reported in section for 100,000/1,000,000/10,000,000 users and for 1 day/1 month/1 year.
% section discussion (end)

\section{Conclusion} % (fold)
\label{sec:conclusion}
Using an \gls{iaas} has its tradeoffs. Whether to adopt the cloud depends on the requirements that are set. Most notable is the fact that if a guarantee is needed on the complete security of the system, the cloud cannot be used. Also, the job that needs to be done on the cloud needs to be of a significantly large size to justify using the cloud. If the job is small enough to run on a single machine, using the cloud will not be cheaper than using a normal virtual private server. 

Running an application in an \gls{iaas} based cloud means sharing time on the physical machine with other tenants. This affects the runtime of the application, so a stable runtime cannot be guaranteed. Multiple tenants are inherent to the cloud: they are why it exists and why it is cheaper than a private machine. 
%unknown, max half a apge?
% section conclusion (end)
\section{Appendix}
\subsection{Time spent}
\begin{tabular}{r|l}
	\textbf{type of time} & \textbf{time in hours} \\
	think-time  & 15	 \\
	dev-time  & 45 \\
	xp-time  & 20 \\
	analysis-time  & 5 \\
	write-time  & 10 \\
	wasted-time & 4 \\
	total-time  & 99 \\
\end{tabular}
\subsection{Operating instructions}
The program is a jar file, based on Java 1.6. This version of Java is installed on the \gls{das}4 cluster. The jar file is available from the repository as well as the source code. The repository can be found at \url{http://code.google.com/p/cloud-resource-manager/source/browse}. 

The jar file is started with the command \emph{java -jar mvm.jar}. The program then initializes and is ready to receive commands. The first command you might want to use is \emph{help}, which shows a help text. This text is rather self-explanatory, so you could get started without reading these instructions any further. The other available commands are:
\begin{tabular}{r|p{6.55cm}}
	\textbf{command} & \textbf{effect}\\
	show & Shows the available \glspl{vm} for processing \\
	add x & Adds x \glspl{vm} to the pool\\
	remove x & Removes x \glspl{vm} from the pool\\
	clear & Removes dead machines from the pool\\
	count `file' & Starts counting the number of vowels in the provided file. The number of machines used is the number of machines currently available in the pool.\\
\end{tabular}

Count is clearly the most important command of the application. The provided file does not need to be in the same directory, but if it is not, the full path should be given. If the file is in the same folder as the application, only the file name and extension are needed as an argument. 

The results are printed on the screen. The number of machines shown next to the result is the actual number of machines the job was run on. If more machines were available, this means that not all machines were ready to accept an SSH connection. 

\printglossary
\end{document}
