\documentclass[12pt,a4paper,titlepage]{article}

\hyphenation{analysis system systems}

\usepackage{graphicx}
\usepackage{float}
\usepackage{amssymb}
\usepackage{setspace}
\usepackage[numbers]{natbib}
\begin{document}

\bibliographystyle{plainnat}

\begin{titlepage}

\center{\Large Cork Institute of Technology}
\center{\small Department of Computing, Cork, Ireland}
\\[3cm]
{\sffamily
{\huge
\begin{doublespace}
Tenant Behavior-driven Scheduler in OpenStack Cloud
\\[0.5cm]
\end{doublespace}
}
{by\\[0.5cm]
\Large Vladislav Belogrudov}
}
\\[0.5cm]
vlad.belogrudov@gmail.com
\\[2cm]
{\large \today}
\\[2cm]
\begin{flushleft}
Research Project\\
M.Sc. in Cloud Computing\\
Supervisor: Dr. Paul Walsh\\
\end{flushleft}
\vfill
\begin{flushleft} \small
This report is submitted in partial fulfillment of the requirements for
the Degree of Master of Science in Cloud Computing at Cork Institute of
Technology. It represents substantially the result of my own work except where
explicitly indicated in the text. The report may be freely copied and distributed
provided the source is explicitly acknowledged.\\
\end{flushleft}


\end{titlepage}

\pagestyle{plain}
\pagenumbering{roman}
\setcounter{page}{2}

\section*{Abstract}
\addcontentsline{toc}{section}{Abstract}
This research project deals with the optimal placement and migration of virtual machines on physical servers in computing clouds. Many approaches to keeping host loads balanced and customers happy are reactive in nature, moving virtual machines from busy hosts to free ones only when necessary. This work attempts to design and implement a proactive scheduling facility for the computing cloud that is aware of virtual machine behavior over time. Many computer loads follow specific business cycles, and those cycles reflect human activities and the movement of the Earth around the Sun. Time series analysis is employed to capture the behavior patterns (load histories) of different virtual machines. Such patterns are then combined on physical hosts for better utilization and responsiveness.

The time series analysis covers only daily cycles, which is useful when virtual workloads are distributed throughout the day with some time shift, e.g. due to businesses operating in different time zones. The experiments show how the balancing algorithms chosen or designed in this research behave under different load distributions. Weaknesses and possibilities for further research are also discussed.

The feasibility study was carried out with the OpenStack framework, deployed on a powerful multi-core server as a virtual cloud (``cloud in a box'').

\newpage
\section*{Acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}
Many thanks to my current employer, EMC Corporation, for a friendly, innovative environment and for supporting my studies, without which this work would not have happened; to Dr. Paul Walsh for steering my research into something manageable and result-oriented and for much useful advice; and to my family for tolerating my absence on the late evenings my studies consumed.
 
\newpage
\section*{Vocabulary}
\addcontentsline{toc}{section}{Vocabulary}
\begin{description}
\item[Host] Physical server that runs virtual machines
\item[VM] Virtual Machine
\item[SLA] Service Level Agreement
\item[Hypervisor] Software that runs on a physical server and abstracts and shares its resources among virtual machines
\item[KVM] Kernel-based Virtual Machine, a full virtualization solution for Linux that uses hardware virtualization support
\item[QEMU] Popular open-source machine emulator that dynamically translates VM processor instructions into host instructions
\item[R] Project, language and environment for analyzing data sets, with many statistical and visualization features
\item[IaaS] Infrastructure as a Service, a cloud deployment model in which the provider offers virtual resources such as VMs, storage and networks
\item[SSE] Sum of Squared Errors, the quantity minimized in linear regression
\item[OLS] Ordinary Least Squares, the regression technique that minimizes the SSE (the term used in R)
\end{description}
\newpage
\tableofcontents

\newpage
\pagenumbering{arabic}
\onehalfspacing
\section{Introduction}

\subsection{Project Background}
Our computing history has seen many changes during the last century. Cloud computing, taken by many as a new buzzword, is really a repetition and evolution of earlier computing technologies. In the late 80s computers became cheap and powerful and the Internet became faster and more reliable, so the world moved from mainframes and big expensive servers to commodity PCs. Many predicted the end of life of mainframes: they could be replaced by a grid of inexpensive computers. In the last decade the economic aspects of the IT industry came to the fore; services needed to be more client-oriented, and specialization and trade, the main drivers of economics, introduced a new way of computing: in the cloud. Computing can now be outsourced, with benefits such as uncapped performance, better security and freeing resources to focus on key business activities. In simple words, cloud computing can be viewed as a business model built on three cornerstones: outsourcing, virtualization and client-server architecture. And mainframes are back again.

Big computing clouds like Amazon \cite{amazon13} and Rackspace \cite{rackspace13} give their clients a big advantage through pay-per-use and flexible, scalable resources. Virtualization makes those resources easy to shrink or grow on client demand. A big physical server can accommodate several dozen virtual machines. Because at any moment VMs require varying amounts of resources, and an average stand-alone physical server uses only 10--15\% of its resources, it is possible to over-provision virtual servers with more resources than the underlying hardware allows. This way every VM can use more CPU or memory when necessary, and the utilization of the hosting physical server goes up. The benefit appears only with the right set of virtual workloads, which is hard to predict. One way to obtain a good mix is to deploy a large hosting server and many small virtual machines for customers from different time zones and business cycles.

Nowadays more and more customers look at SLAs when choosing cloud providers, because they do not want their workloads to be disturbed by other cloud tenants. Typical hypervisors allow some level of control over virtual machines to guarantee minimum computing resources. There are two approaches: capped and share-based control. The capped approach allows a client to consume only a defined portion of the resources. This is a somewhat greedy method by which the cloud provider avoids spending more on clients that pay less. The other approach is based on shares: at any time a client is allowed to consume all resources, but if two or more clients compete for them, the resources are distributed in proportion to the shares the clients own. From the customer perspective this is better, but it still requires a good mix of tenants.

\subsection{Motivation and Objectives}

Almost any computer workload is periodic. This comes from the fact that computers are tools for many businesses, and businesses are run by people, for people. Human activities follow specific patterns defined by our nature: people sleep at night and work during the day, rest on weekends, take breaks for lunch, and have seasonal events and habits. Servers are loaded with tasks that repeat hourly, daily, weekly, and so on. Thanks to the Earth's daily rotation, daily periods are distinguished very well, and because of time shifts (different time zones) workloads with daily periods can be mixed very effectively.

To make a perfect mix of workloads (or simply of VMs in a computing cloud), it is first necessary to learn the past behavior of the workloads. Then there should be an algorithm or strategy to answer the combination and movement questions: which VMs go well with each other, and in which order to move them from one host to another. These are the main objectives of the research. Another objective is a feasibility study of the described techniques and designed algorithms in conjunction with a real cloud framework: can it work in a cloud, how much computing power is needed, and what is the performance of the scheduling components?

\subsection{Project Environment}

The project started with the simple idea of workload combination. The Internet has become fast enough to use remote computers without much performance degradation. This allows people from different countries and continents to share the computing power of their favorite clouds. Running virtual servers in the Amazon or Rackspace clouds became a matter of several minutes and a valid credit card. One popular choice for building one's own computing cloud is the OpenStack project \cite{openstack13}, and it has been chosen here for the evaluation of the researched methods. OpenStack is open source, very versatile, and abstracts itself from specific hypervisors, operating systems and tools. It follows a ``share nothing'' philosophy: any of its components can run anywhere, on any server, even in virtual machines (a virtual cloud). This research project builds a virtual cloud with the help of the VirtualBox software \cite{virtualbox13}, so that the ``physical'' hosts are themselves virtual machines (``virtual boxes'') running inside one more layer of virtualization. Hosts in OpenStack typically (and in this project) run Linux with the KVM \cite{kvm13} / QEMU \cite{qemu13} virtualization software and use open-source tools for inter-networking and other internal machinery.

For the experiments, several virtual machines are created on a multi-core ``hardware'' Linux server with the help of VirtualBox. These virtual machines act as the virtual ``physical'' hosts, or nodes, of OpenStack Compute. Each virtual host is pinned to specific cores, so that each node appears to have its own CPU. The overhead of doubled virtualization is quite big with this approach, but for the specific goals of this research it was of little importance. One of the nodes in the setup is the cloud controller, but it also runs compute workloads; the other nodes are pure compute nodes.

A few assumptions and restrictions have been set for the experiments. First, the environment is set up with enough memory for all nodes, so that there is no lack of that resource or competition for it between VMs. Second, all nodes are alike: their processor, memory and disk configurations are similar. Special CPU load software has been written for the VMs to allow consumption of CPU resources only; these ``loaders'' are configured with specified CPU load levels and periods. Another piece of software written in this project allows specific ``histories'' of all VMs to be injected into the developed system, making it easy to see how the developed algorithms and strategies work for different combinations of hosts, VMs and their loads. Last but not least, loads are restricted to either daily periods or random load generation; other periods and mixes of them are left for future research following this work.
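The CPU ``loader'' mentioned above can be sketched as a simple duty-cycle loop. This is an illustrative reconstruction, not the project's actual program, and the function names are invented:

```python
import time

def burn_cpu(load_pct, duration_s, slice_s=0.1):
    """Hold roughly load_pct% CPU for duration_s seconds by alternating
    a busy-spin and a sleep inside short fixed-length time slices."""
    busy_s = slice_s * load_pct / 100.0
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        spin_until = time.monotonic() + busy_s
        while time.monotonic() < spin_until:
            pass                                  # burn the busy part of the slice
        time.sleep(max(0.0, slice_s - busy_s))    # idle the rest of the slice

def run_pattern(pattern):
    """Replay a step-like load function given as (CPU %, seconds) pairs."""
    for load_pct, period_s in pattern:
        burn_cpu(load_pct, period_s)
```

A loader configured with, e.g., `run_pattern([(80, 8 * 3600), (10, 16 * 3600)])` would emulate a workload that is busy during working hours and mostly idle otherwise.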

A lot of tools and methods for machine learning and data analysis have been tried in this work. It was found that the best tool, both as a developer / researcher environment and as a controlling script on the controller node, was R \cite{r13}. This language has many powerful features that are hard to find in other languages or their libraries.

\subsection{Document Organization}

This report has the following structure. Section 2 gives a quick overview of relevant research works and methods. Section 3 discusses machine learning techniques that can be used to learn patterns in VM loads, with examples of how the chosen methods work on collected load histories. Section 4 defines methods of optimal VM load combination and takes a look at alternative ways. Section 5 describes the system architecture, i.e. how the decision-making components are integrated into OpenStack. Section 6 shows experimental results and findings. The conclusion in section 7 gives insights into the feasibility study results, the strengths and weaknesses of the chosen methods and software, and possibilities for future research. The last section consists of descriptions of the various software pieces developed in this project and a ``how to'' for building the virtual cloud.

\newpage
\section{Background Research}

There have been many attempts in the past to balance virtual workloads in order to optimize host performance and improve user experience. Many researchers choose statistical analysis or machine learning techniques to learn about workloads and to predict their future behavior. Machine learning is a subfield of artificial intelligence that deals with the study and construction of systems that can learn from data. It has become very popular in recent years because a lot of data is generated by devices and humans every moment, and advances in computing have enabled fast processing of those tremendous amounts.

In \cite{ghosh12} the researchers try to predict SLA breaches due to over-committed computing resources. Cloud users often request more resources than needed, and service providers respond by admitting more users than their physical servers can strictly fit. It was discovered that in an internal private cloud of 2193 virtual machines, 84\% hardly ever reached 20\% CPU utilization and only 0.7\% ever reached 100\%. To safely over-commit physical server resources, one has to be sure that at no time does the sum of VM resource usages exceed the actual capacity. Since it is not easy to predict aggregate usage, the researchers quantified the risk of SLA violation for a group of workloads by specifying a utilization threshold, and developed a method of mixing workloads by comparing the risks associated with different candidate workload groups. The computations over sampled usages include the mean and standard deviation; no usage patterns are taken into account for prediction. The case of full utilization of a physical server is also not considered, e.g. if a set of workloads often consumes 100\% of the server CPU, no VM re-arrangement is done.

The vector-based approach of \cite{mishra11} is somewhat similar to what this work tries to accomplish. The paper discusses existing methodologies for VM placement and resource usage balancing. Usually such problems are solved by defining a metric that is calculated for different VMs and hosts and then balancing loads based on the values of that metric. Two tasks need to be solved: correctly estimating the usage requirements of each VM, and placing the VMs into proper groups. The first task, historical data collection and analysis, is not considered by the researchers; they concentrate solely on combining the resources needed by the VMs. Two characteristics are taken into account: CPU and memory requirements. Their method of choosing the right host for a VM is a variation of the bin packing problem \cite{bin13}; however, it does not deal with the shapes of VM loads.

Many researchers have come up with their own formulae for VM placement decisions. In \cite{arzuaga10} system imbalance is defined as the ratio of the standard deviation to the mean of all server loads, and VM placement decisions follow a greedy approach: the busiest server is chosen and its tenants are iterated over to find transitions that minimize the metric. Sandpiper \cite{wood07} is a system for monitoring and detecting hot-spots on servers running the XEN hypervisor \cite{xen13}. It uses time series analysis to predict server overloads and to estimate peak CPU and network bandwidth. The overload metric (``Volume'') used in Sandpiper depends equally on CPU, memory and bandwidth. Migration decisions include the memory footprint of the VMs (``Size''), because live migration of a VM is itself a resource-heavy operation. The VM with the biggest Volume-to-Size Ratio (VSR) is considered a good candidate to migrate from the most overloaded host to the least overloaded one. \cite{wood08} provides more details on the time series and the prediction of resource utilization; the authors use linear regression techniques based on minimization of the Sum of Squared Errors (SSE).

Some works, like \cite{hu10}, take migration of VMs as the last resort for the resource balancing problem, because of the amounts of data transferred between hosts. A proper placement of new VMs with regard to server load history is considered a much better solution. The paper \cite{hu10} presents a VM placement scheduling strategy based on a genetic algorithm: a random search that resembles the law of evolution, producing ever better approximations of which VMs fit each other in groups.

The notion of daily periods and time zones of virtual workloads appears in \cite{zhang11}. Performance data of VMs is gathered for a few days back, and predictions are calculated for the next 24 hours. The researchers select a host for a VM with a known history so that the load is evenly distributed. The approach has some similarities with the research described in this report, although no attempt is made to find patterns or trends in the workloads, and no periods longer than a day (weeks, months) are considered.

A sophisticated VM placement algorithm is evaluated in \cite{ma12}. The researchers use the TOPSIS method (Technique for Order of Preference by Similarity to Ideal Solution \cite{topsis13}) to fit workloads optimally. This method yields a smaller number of VM movements and better utilization than the First Fit and First Fit Decreasing algorithms \cite{bin13}.

To mitigate host overload, two ways can be chosen: move one or more machines from the overloaded host to an under-loaded one, or exchange a VM on the overloaded host with a VM on an under-loaded one. Either way, a minimum of migrations is desired. Most frequently the first approach is chosen, and a common choice of VM to move is the largest one: it is assumed that this leads to fewer migrations and frees the overloaded host faster \cite{zhang12}. This approach is called Largest VM First. With advances in networking and computing, however, the footprint of the migrated VM seems to matter less. Another strategy would be to move the more ``silent'' machines, because it is easier to synchronize their memory between hosts; the performance degradation during migration will also be less noticeable to their users than it would be for more active workloads.

Modelling of virtual workloads with the help of different machine learning techniques is researched in \cite{kundu12}. Such models can be of great benefit both for users and for service providers. Three parameters are considered and analyzed: CPU, memory and disk I/O. No workload mixing strategies are discussed in this paper, but it would be quite interesting in the near future to see provisioning requests from cloud users that specify models instead of traditional server parameters.

It should be noted that migration needs diminish and utilization is smoother when physical servers are made much bigger than the hosted VMs. Nowadays many vendors have started to produce servers specifically for deployment as cloud hosts, with quite large numbers of CPUs and plenty of memory. But as the history of the last century has proven, whenever computers become more powerful the same happens to workloads: they grow in complexity and data volume.

From the research works discussed above, it can be summarized that there are several questions to answer for proper resource balancing of VMs and host overload mitigation. First, VM and host characteristics (parameters) are learned in order to describe overload conditions. Then a metric for calculating balanced conditions is chosen. With that metric, source (overloaded) and target (under-loaded) hosts are chosen and a number of VMs is selected for migration. Many researchers calculate balanced conditions using the means and deviations of observed parameters, while others employ some kind of resource utilization history. These solutions differ in one important point: resource balancing is performed either reactively, when overload has already happened, or proactively, predicting system behavior and doing one's best to avoid overload. The latter is usually accomplished with the help of machine learning techniques. To solve the combinatorial optimization of VM placement, either simple one-by-one migrations are performed, or the most optimal mix of VMs and hosts is found at once and then some rearrangement strategy is applied (like Largest VM First).

Resource distribution schemes sometimes differ from simply evenly balanced host loads. Some solutions also allow VMs to be grouped on a smaller number of hosts so that the freed hardware can be switched off to save energy. As soon as the running hosts no longer provide enough capacity, additional resources are woken up \cite{dpm13}.

None of the solutions tries to find patterns (trends and seasonal effects) in workload histories or to make predictions for periods longer than a day. How, and whether, this is possible are the questions to answer in this project.


\newpage
\section{Machine Learning Techniques}

In this work the machine learning technique called time series analysis is used. Time series analysis has long helped to understand the past and to predict the future in many areas, such as weather forecasting, airline or ground traffic management, government decisions, commerce, etc. When some value is measured sequentially in time, the measurements form a time series. The fixed intervals at which the measurements are performed are called sampling intervals. Time series allow the measured data to be modelled, i.e. reduced to some kind of formula that can advise on possible future values or approximate missing ones. A great tool for learning data is R, because it has built-in features for data analysis and visualization \cite{tsr09}. R is used throughout this research for data learning, visualization and decision making (as a stand-alone script).

The main features of time series are trends and seasonal variations. Seasons can be of different sizes, e.g. hours, days, weeks. Seasons represent repeating sets of values, while trends show how the form of the data, or of the repeating pattern, changes over time. Figure 1 shows the time series of typical airline bookings, and Figure 2 shows the components such a time series can consist of. Both pictures were plotted with a couple of simple commands in R (the passenger dataset is bundled with R for learning and demonstration purposes).

\begin{figure}[H]
\includegraphics[width=10cm]{pics/air}
\caption{Air passenger bookings in US}
\end{figure}

\begin{figure}[H]
\includegraphics[width=\linewidth]{pics/air_decomposed}
\caption{Decomposition of passenger time series in R}
\end{figure}

There are many models that can fit a time series. A simple additive model can be expressed as:
\begin{equation}
x_t = m_t + s_t + z_t
\end{equation} 
Here $x_t$ is observed series, $m_t$ is a trend, $s_t$ is a seasonal effect and $z_t$ is a random error.

Seasonal variations and trends can be expressed with different formulae. The time series analysis in this work only tries to find the seasonal effect in the CPU utilization rates of VMs. Only one season is considered, with a day-long period, and the sampling interval is one minute. One minute was chosen empirically: it is hard to gather statistics more frequently in an OpenStack cloud, while with too large an interval the modelling of the data can become inaccurate. The host CPU load is measured as a percentage (0--100) for each VM in the cloud and gathered over several days in order to find fitting model coefficients. In future work, with more and longer periods, more data will be gathered for the time series analysis. A sample host CPU load of a VM with a periodic history can be seen in Figure 3.

\begin{figure}[H]
\includegraphics[width=10cm]{pics/vmload1}
\caption{Host CPU utilization by a VM}
\end{figure}

This plot shows data sampled over a few days. It can be seen that most activity on this VM happened during the daytime, there was lower activity in the mornings and evenings, and no activity at night. For these time series a harmonic model has been chosen and evaluated:
\begin{equation}
s_t = A \sin \left( 2 \pi f t + \phi \right)
\end{equation} 
where $s_t$ is a sine with amplitude $A$, frequency $f$ and phase shift $\phi$. The frequency, in cycles per sampling interval, equals $1 / \left( 24 \times 60 \right)$ for a day-long season, since samples are taken once per minute. The sine formula can be transformed to exclude $\phi$:
\begin{equation}
A \sin \left( 2 \pi f t + \phi \right) = a_s \sin \left( 2 \pi f t \right) + a_c \cos \left( 2 \pi f t \right)
\end{equation} 
where $a_s = A \cos \left( \phi \right)$ and $a_c = A \sin \left( \phi \right)$. This form is preferred because it allows Ordinary Least Squares (OLS) regression to be applied to find the $a_s$ and $a_c$ parameters (OLS minimizes the SSE). A general seasonal model can be described as:
\begin{equation}
x_t = m_t + { \sum \limits_{i=1}^{ \left[ s / 2 \right] } \left( s_i \sin \left( 2 \pi i t / s \right) + c_i \cos \left( 2 \pi i t / s \right) \right) }   + z_t
\end{equation}
where $s$ is the period of the season. The more harmonics are taken into account, the more sine waves are summed and the more precise the obtained model is. In this research only one season / harmonic is used in the calculation, and the trend component is assumed to be a simple constant.
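As a concrete illustration of the fitting step (the project itself uses R; this is a plain-Python sketch under the stated assumptions: one-minute sampling, a constant trend, a single daily harmonic, and a history covering whole days, in which case the sine/cosine regressors are orthogonal and the OLS solution reduces to simple projections):

```python
import math
import random

S = 24 * 60          # samples per daily season (one-minute sampling interval)

def fit_daily_harmonic(x):
    """OLS fit of x_t = m + a_s*sin(2*pi*t/S) + a_c*cos(2*pi*t/S) + z_t.
    Over whole seasons the regressors are mutually orthogonal, so the
    least-squares coefficients are just scaled inner products."""
    n = len(x)
    m = sum(x) / n
    a_s = 2 / n * sum(v * math.sin(2 * math.pi * t / S) for t, v in enumerate(x))
    a_c = 2 / n * sum(v * math.cos(2 * math.pi * t / S) for t, v in enumerate(x))
    A = math.hypot(a_s, a_c)            # amplitude, as in eq. (5)
    phi = math.atan2(a_c, a_s)          # phase; atan2 keeps the right quadrant
    return m, a_s, a_c, A, phi

# three days of synthetic CPU load: 50% base, 30% daily swing, noise
rng = random.Random(1)
x = [50 + 30 * math.sin(2 * math.pi * t / S + 1.0) + rng.gauss(0, 5)
     for t in range(3 * S)]
m, a_s, a_c, A, phi = fit_daily_harmonic(x)     # m ~ 50, A ~ 30, phi ~ 1.0
```

For histories that do not cover whole days, or for models with more harmonics, a general least-squares solver (R's \texttt{lm}, for instance) is needed instead of the projection shortcut.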

Before applying OLS, the time series can be smoothed (or filtered) to identify patterns, if any. One method of filtering data is the centered moving average. Smoothing usually uses data from times both before and after the estimated point; as the name of the method suggests, the estimate is computed as an average of the neighboring values. The smoothed data for the sampled VM load is shown in Figure 4:

\begin{figure}[H]
\includegraphics[width=14cm]{pics/vmload2}
\caption{Host CPU utilization by a VM after smoothing}
\end{figure}
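The centered moving average used for the smoothing above can be sketched as follows; the window length here is an assumption, as the text does not state the one actually used:

```python
def centered_ma(x, k=30):
    """Centered moving average: each interior point becomes the mean of
    itself and its k neighbours on each side (a window of 2k+1 samples);
    the first and last k points keep their raw values, since no full
    symmetric window exists there."""
    out = list(x)
    for i in range(k, len(x) - k):
        window = x[i - k:i + k + 1]
        out[i] = sum(window) / len(window)
    return out
```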

Applying the OLS method gives coefficients for the trend constant, $a_s$ and $a_c$. From the latter two, the amplitude and phase shift are calculated:
\begin{equation}
A = \sqrt{  a_s^2 + a_c^2 }
\end{equation}
\begin{equation}
\phi = \arctan \left( a_c / a_s \right)
\end{equation}

After the coefficients are found, one can verify the fitness of the model to the data (Figure 5). It can happen that the fitted sine goes below zero or above 100\%. The latter is ignored, but models that go below zero are corrected so that their constant trends are not smaller than their season amplitudes (worst-case values are expected).

\begin{figure}[H]
\includegraphics[width=14cm]{pics/vmload3}
\caption{VM load model}
\end{figure}
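The zero-floor correction described before Figure 5 can be sketched as a one-line rule (a hypothetical helper, not the project's actual code):

```python
def correct_model(m, A):
    """If the fitted sine dips below 0% CPU (m - A < 0), raise the
    constant trend so the seasonal trough sits at zero; excursions
    above 100% are left alone, as described in the text."""
    return (max(m, A), A)
```

For example, a model with trend 10 and amplitude 30 would be corrected to trend 30, so its trough is exactly 0\%.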

Each VM in the system is thus identified by its name and three workload-model parameters. It is now possible to match and mix the models to solve the host load optimization problem. A single day is represented by 1440 samples, which R processes with OLS in a moment.

\newpage
\section{VM Placement Decisions}

For the balancing problem, the ``worst'' host must be identified. The host load can be modelled as the sum of the models of the VMs running on it. Summing sine and cosine waves of the same frequency (and without explicit phase shifts, since the phase was excluded in (3)) is a matter of simply adding the corresponding coefficients:
\begin{equation}
m_h = \sum \limits_{i=1}^{n_h} m_i
\end{equation}
\begin{equation}  
a_{s,h} = \sum \limits_{i=1}^{n_h} a_{s,i}
\end{equation}
\begin{equation}
a_{c,h} = \sum \limits_{i=1}^{n_h} a_{c,i}
\end{equation}
\begin{equation}
A_h = \sqrt{  a_{s,h}^2 + a_{c,h}^2 }
\end{equation}
where $h$ is a host identifier, $n_h$ is the number of VMs on the host, $A_h$ is the host load amplitude and $m_h$ is the constant from the trend component of the time series.

The badness of a host is calculated as the peak of its sine:
\begin{equation}
Peak_h = m_h + A_h  
\end{equation}
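Equations (7)--(11) amount to adding the per-VM coefficients term by term and computing one square root; a small sketch (the data layout is an assumption):

```python
import math

def host_peak(vm_models):
    """vm_models: one (m, a_s, a_c) triple per VM on the host.
    Same-frequency sine/cosine terms add coefficient-wise (eqs. 7-9);
    the peak is the constant trend plus the combined amplitude (10-11)."""
    m_h = sum(m for m, _, _ in vm_models)
    a_s_h = sum(a_s for _, a_s, _ in vm_models)
    a_c_h = sum(a_c for _, _, a_c in vm_models)
    return m_h + math.hypot(a_s_h, a_c_h)

# two VMs with opposite phases cancel each other's daily swing:
print(host_peak([(40, 20, 0), (40, -20, 0)]))   # 80.0
print(host_peak([(40, 20, 0), (40, 20, 0)]))    # 120.0
```

The two calls illustrate why time-shifted workloads mix well: the same pair of VMs yields a peak of 80\% in anti-phase but 120\% in phase.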

For each host its peak is calculated, and the maximum of the peaks points to the overloaded host. The filtered host load is not the same as the modelled one, because models are not bounded by the 100\% ceiling (Figures 6, 7 and 8). In fact, neither the filtered nor the modelled load fits the host data perfectly; the modelled load is nevertheless more useful because it helps to decide which host can become more overloaded than the others.

\begin{figure}[H]
\includegraphics[width=10cm]{pics/hostload}
\caption{Host CPU Load}
\end{figure}

\begin{figure}[H]
\includegraphics[width=10cm]{pics/hostload_smoothed}
\caption{Host CPU Load filtered}
\end{figure}

\begin{figure}[H]
\includegraphics[width=10cm]{pics/hostload_modelled}
\caption{Host CPU modelled}
\end{figure}

After finding the overloaded host, it is time to decide which VM to move and which host to choose as a new home for that VM. The algorithm deployed in this work iterates over all VMs on the overloaded host and checks them against the other hosts: what peak would another host have if the VM were moved there? Those migrations are chosen that minimize the maximum peak over all hosts. The iteration is repeated until no target host can be found, i.e. the system cannot be optimized further by offloading this particular host. The next step is to take the next-``worst'' host for further VM migrations. The whole process repeats until only one host is left, where the algorithm ends. Calculating the new peaks is quite easy and similar to (7), (8), (9) and (10); the only difference is that the coefficients and peaks can be stored per host, so there is no need to re-sum the coefficients of all the VMs of a host each time. The algorithm is described in the following listing:

\begin{verbatim}
Fill "hosts to optimize" list with all hosts
While length of "hosts to optimize" > 1
    Find overloaded host and max peak
    Foreach VM in overloaded host VMs
        Foreach host in other hosts
            If peak after migration is smaller than max peak
                then remember VM and target host
    If no VMs to migrate have been identified
        then remove overloaded host from "hosts to optimize"
    Else
        move VM
        recalculate peaks and other coefficients for hosts
\end{verbatim}

This way, workloads with time shifts are grouped together, while workloads with the same phase are separated. As a consequence, host loads become smoother, with smaller peaks. The approach definitely will not fit environments where all workloads are in the same time zone or have close sine phase shifts; it is assumed that a system with daily harmonic models is deployed in international clouds that are busy around the clock.

\newpage
\section{System Architecture}

The scheduling system consists of two parts: data acquisition, and data analysis / load balancing. The overall system architecture is shown in Figure 9.

\begin{figure}[H]
\includegraphics[width=\linewidth]{pics/architecture}
\caption{Scheduling system architecture}
\end{figure}

Both parts of the system are connected through a database, in this project MySQL. The data capturing facility is distributed among the physical machines and stores samples using a remote connection to the database. It consists solely of scripts that know how to discover which VMs run on their host and how to find information about resource usage at any given point in time. Data analysis runs on the cloud controller node, but it can actually run anywhere, on any computer with enough CPU and memory for the techniques described in the previous sections. The load balancer is coupled with the data analysis component; its task is to run the analysis at specific intervals, e.g. once in three days for daily loads, and to perform VM migrations in a way specific to the deployed hypervisor. The OpenStack computing cloud uses KVM / QEMU virtualization by default, so both the balancing and the data capturing components are tuned to this type of hypervisor. Currently only the KVM / QEMU solution is supported by the scheduler, but it is relatively easy to port it to other popular software. Data analysis is the biggest part in code size, complexity and processing power requirements; it is implemented as a platform-independent R script. During this research many other languages and libraries were considered, but R proved to be the most convenient, stable and powerful choice for such tasks.
 
The data acquisition scripts run on all computing nodes and use OpenStack-specific commands to retrieve information about which VMs run on which hosts. VMs appear on a host as normal OS processes. It takes some sophisticated logic to map VMs to processes and then to their CPU load values, because OpenStack abstracts itself from any platform-dependent software and therefore provides only the minimum of control and monitoring features supported by most of the underlying mechanics (hypervisor, OS, networking). The data capturing script uses libvirt (one more abstraction layer over virtualization solutions) to map VM names to Linux processes, and OS facilities to find out the CPU usage.

One optional action can be run on the database to keep the time series in good shape. The analyzer assumes that the time series have a fixed sampling interval of one minute, with accurate starts and ends of each interval. Currently this is ensured by the data acquisition scripts. It is also important to capture the CPU usage of all VMs on one host in a single shot, so that the sum of VM loads on a host at any given point in time does not exceed 100\%. This too is designed into the acquisition script. During the development phase, simulation software was written to generate VM histories and load them directly into the database; these simulators follow the same two mandatory rules.
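The two invariants, minute-aligned samples and a 100\% cap on the per-host sum, can be expressed compactly. This is a minimal illustration with hypothetical names, not the actual acquisition or cgeneration.pl code:

```python
def align_to_minute(ts_seconds):
    """Snap a Unix timestamp to the start of its one-minute sampling interval."""
    return ts_seconds - ts_seconds % 60

def cap_host_samples(samples):
    """Scale one host's simultaneous VM CPU samples so they sum to <= 100%.

    samples: {vm_name: cpu_percent} captured in a single shot on one host.
    """
    total = sum(samples.values())
    if total <= 100.0:
        return dict(samples)
    scale = 100.0 / total
    return {vm: cpu * scale for vm, cpu in samples.items()}
```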

Data analysis is the biggest part of the system and is run with parameters specifying the start and end of the period of interest, e.g. the last 3 days. Loading the entire history was ruled out for performance reasons: R takes up to 2 GB of RAM even for a relatively small cloud. The memory footprint does not grow linearly with the data size, however, and this topic deserves more detailed research of its own. Moderate hardware with a couple of CPU cores is enough to run the analysis, and the overall performance impact should be small because it runs only occasionally, with long intervals between calculations.
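As an illustration of the per-VM modelling step, a single-harmonic sine model with a known period can be fitted by ordinary least squares. The sketch below uses Python with NumPy, whereas the actual analyzer is the mover.r R script; the function name is hypothetical.

```python
import numpy as np

def fit_single_harmonic(t, load, period):
    """Least-squares fit of load(t) ~ c0 + a*sin(wt) + b*cos(wt), w = 2*pi/period.

    Returns (mean, amplitude, phase) of the single-harmonic model used
    to summarize a VM's business cycle (e.g. period = 1440 minutes for
    a daily load).
    """
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
    c0, a, b = np.linalg.lstsq(X, load, rcond=None)[0]
    return c0, np.hypot(a, b), np.arctan2(b, a)
```

Since a*sin(wt) + b*cos(wt) = A*sin(wt + phi) with A = sqrt(a^2 + b^2) and phi = atan2(b, a), the fit directly yields the amplitude and phase used later when host sine waves are summed.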

The load balancer runs the data analysis script and captures its decisions on VM migrations. It attempts the recommended migrations using OpenStack commands, and only when a migration succeeds is the database updated: migrated VMs need their host fields changed so that VM histories are preserved. Another role of the balancer can be keeping the database size manageable by deleting records that are too old or belong to no-longer-existing VMs.
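The migrate-then-update sequencing can be sketched as follows; the \emph{migrate} and \emph{update\_host} callbacks are hypothetical stand-ins for the OpenStack live-migration command and the MySQL UPDATE performed by vmbalance.sh.

```python
def apply_migrations(moves, migrate, update_host):
    """Apply recommended moves; update the stats database only on success.

    moves: list of (vm, target_host) pairs in priority order.
    migrate(vm, host) -> bool wraps the hypervisor migration command;
    update_host(vm, host) records the new placement in the database.
    """
    applied = []
    for vm, host in moves:
        if migrate(vm, host):          # only a successful live migration...
            update_host(vm, host)      # ...may change the recorded host,
            applied.append((vm, host)) # keeping VM histories consistent
    return applied
```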

\newpage
\section{Experimental Evaluation}

Two approaches were tried to simulate virtual workloads for the experiments. The first was a program that runs inside each VM and maintains a required level of CPU load; it can be given a step-like function consisting of (CPU, period) pairs. It worked well, keeping VMs and hosts busy, but suffered a small overhead from double virtualization, and gathering and evaluating statistics this way took a long time. The second approach was to develop workload data generators that follow defined step-like patterns or produce random values. Their advantage was a more realistic simulated history, thanks to a random factor added to the VM load history, and the ability to obtain several days of data in minutes. This considerably sped up algorithm evaluation and the development of the data analyzer component.
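A minimal sketch of such a generator, mirroring the (CPU, period) pairs fed to the in-VM loader, could look as follows. The function name and jitter model are illustrative, not the actual generator.pl implementation:

```python
import random

def generate_history(steps, minutes_per_step, jitter=5.0, seed=42):
    """Generate a step-like CPU history with random jitter.

    steps: target CPU percentages, each held for minutes_per_step
    one-minute samples; jitter adds the random factor that makes
    the simulated history more realistic.
    """
    rng = random.Random(seed)
    history = []
    for level in steps:
        for _ in range(minutes_per_step):
            sample = level + rng.uniform(-jitter, jitter)
            history.append(min(100.0, max(0.0, sample)))  # clamp to [0, 100]
    return history
```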

Several load sets were generated and evaluated. There were 3 hosts running 4 VMs each; of these 4 VMs, 3 were started or simulated with periodic CPU load characteristics and 1 ran a random load. The periodic VMs on each host were synchronized to a similar time shift, while the difference between hosts was set to 6 hours to simulate time zones. The results from the first load set, with a rather small average load per VM, showed good re-balancing capabilities of the implemented algorithm. Figure 10 shows CPU load data for all VMs (the first digit in a VM name corresponds to the host that started it). CPU loads per host are shown in Figure 11; after smoothing, the host loads look like Figure 12. First the data analyzer decomposes the per-VM loads (Figure 13). Then the host sine waves are built and their peaks calculated (Figure 14). The placement part of the algorithm produces recommended VM movements in priority order (Figures 15 and 16). The analyzer predicted balanced host loads as shown in Figure 17. Actual CPU load statistics after the VM migrations can be seen in Figures 18 and 19.
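The peak-driven placement step can be sketched as a greedy loop: repeatedly move one VM from the host with the highest modelled peak to the host with the lowest, as long as this lowers the overall maximum. This is a simplified Python illustration of the idea, not the actual mover.r logic; VM models are (mean, amplitude, phase) triples sharing one period.

```python
import numpy as np

def host_peak(vms, period=1440):
    """Peak of the summed single-harmonic models of a host's VMs."""
    t = np.arange(period)
    total = np.zeros(period)
    for mean, amp, phase in vms:
        total += mean + amp * np.sin(2 * np.pi * t / period + phase)
    return total.max()

def recommend_moves(placement, period=1440, max_moves=10):
    """Greedy re-balancing over {host: [(mean, amp, phase), ...]}.

    Mutates placement in place and returns the (src, dst) moves
    in priority order; stops when no move lowers the highest peak.
    """
    moves = []
    for _ in range(max_moves):
        peaks = {h: host_peak(v, period) for h, v in placement.items()}
        src = max(peaks, key=peaks.get)
        dst = min(peaks, key=peaks.get)
        if src == dst:
            break
        best = None
        for i, vm in enumerate(placement[src]):
            rest = placement[src][:i] + placement[src][i + 1:]
            new_max = max(host_peak(rest, period),
                          host_peak(placement[dst] + [vm], period),
                          *(p for h, p in peaks.items() if h not in (src, dst)))
            if new_max < peaks[src] and (best is None or new_max < best[0]):
                best = (new_max, i)
        if best is None:
            break
        placement[dst].append(placement[src].pop(best[1]))
        moves.append((src, dst))
    return moves
```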

\begin{figure}[H]
\includegraphics[width=\linewidth]{pics/vmloads}
\caption{CPU load statistics for all VMs in cloud}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/hostloads}
\caption{CPU load statistics for all hosts in cloud}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/hostloads_smoothed}
\caption{CPU load statistics for all hosts smoothed by moving averages}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/sinvms}
\caption{Modelled virtual loads}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/sinhosts}
\caption{Modelled host load peaks}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/moverr}
\caption{Recommended VM migrations}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/vmplaces}
\caption{Map of VM migrations}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/sinhosts_balanced}
\caption{Predicted host load peaks after VM migrations}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/hostloads_balanced}
\caption{CPU loads of hosts after VM migrations}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/hostloads_balanced_smoothed}
\caption{CPU loads of hosts before and after VM migrations, smoothed by moving averages}
\end{figure}

As the figures show, the system smoothed the overall load on the hosts' CPUs, reducing the amplitude of CPU load changes. After the re-balancing process finishes, newly gathered data allows the load distribution to be corrected further (especially if the shapes of some VM loads could not be captured accurately).

Balancing also works quite well when the VMs take only small shares of host CPU time. The statistics gathered for this scenario differ from the previous one only in the smaller virtual loads without periodic characteristics. Figures 20, 21, 22, 23, 24 and 25 give insight into how the balancing algorithm works in this case.

\begin{figure}[H]
\includegraphics[width=\linewidth]{pics/vmloads_small}
\caption{Statistics for all VMs in cloud - smaller overall load}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/sinhosts_small}
\caption{Modelled host load peaks}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/vmplaces_small}
\caption{Map of VM migrations}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/sinhosts_small_balanced}
\caption{Predicted host load peaks after VM migrations}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/hostloads_small}
\caption{CPU loads of hosts before and after VM migrations}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/hostloads_small_smoothed}
\caption{CPU loads of hosts before and after VM migrations, smoothed by moving averages}
\end{figure}

The third scenario shows an overloaded cloud where no good decisions could be found (Figures 26, 27 and 28). There were more VMs, and they competed for the hosts' CPUs very often.

\begin{figure}[H]
\includegraphics[width=\linewidth]{pics/vmloads_big}
\caption{Statistics for all VMs in cloud - very big overall load}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/hostloads_big}
\caption{CPU loads of hosts}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/sinhosts_big}
\caption{Modelled host load peaks}
\end{figure}

\newpage
\section{Conclusions}

The system developed in this project showed good results with various virtual loads. The algorithm, run as an R script, showed good performance and moderate memory consumption: it required up to 1 GB of RAM and ran on low-end CPUs such as the Intel i3. Compared to the works described in the background research section, the data analysis component could apply harmonic time series computation to large amounts of data over longer periods. The calculations behind the VM movement decisions were very simple, yet they balanced host loads properly.

During the experiments several limitations of the applied algorithm were discovered. Firstly, the placement of VMs was not always optimal; sometimes the system could not balance the virtual machines among hosts at all. The reasons were examined:

\begin{enumerate}
\item If hosts are under high load it is not possible to determine workload shapes properly. For example, a very interesting effect was observed on hosts where several periodic loads were combined with one constant load (with small random variation): after decomposition, the constant load appeared as a sine with a phase shift of 180 degrees from the others. In the simulated environment one could at first suspect that the random part of the constant workload was not really random and had some cycle of its own (Perl's rand() function provided the CPU load variations). In fact it happened because the total of VM loads on a host at any time was capped at 100\%, which pushed the constant load level down whenever the VMs competed. Once the other VMs entered their ``night'' state, that VM alone could obtain enough CPU, and so it too appeared as a sine wave model.
\item Some load sets cannot be balanced without more advanced techniques such as VM exchange. An optimization method such as TOPSIS \cite{topsis13} is required.
\item If hosts carry equal virtual loads shifted in time, no balancing happens, because moving any VM to another host makes the maximum of the peak loads bigger.
\end{enumerate}
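The third limitation can be checked numerically: with identical loads shifted by half a period, merging them onto one host cancels the sine components but adds the means, so the highest peak grows. A small sketch (period in minutes, VM models as (mean, amplitude, phase) triples):

```python
import numpy as np

def peak(vms, period=1440):
    """Peak of the summed sine models (mean, amplitude, phase) of one host."""
    t = np.arange(period)
    return sum(m + a * np.sin(2 * np.pi * t / period + p)
               for m, a, p in vms).max()

# Two hosts with identical loads shifted by half a period: peaks of 50 each.
before = max(peak([(30.0, 20.0, 0.0)]), peak([(30.0, 20.0, np.pi)]))
# Moving either VM onto the other host: the sines cancel, the means add,
# and the busiest host's peak rises from 50 to 60.
after = peak([(30.0, 20.0, 0.0), (30.0, 20.0, np.pi)])
```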

A simple improvement was proposed for situations where the shapes of the loads are hard to recognize, but it has not worked so far. The Linux \emph{uptime} command reports system load averages for the past 1, 5 and 15 minutes. A load average shows how many processes were queued ready to run: the bigger the value, the more loaded the host CPU is, and if two or more processes compete for one CPU unit at the same time the load average reaches 1 or more. So if a host is overloaded with virtual loads, it should be possible to determine what load level corresponds to a single virtual machine (one process in the host OS). Experiments showed different correlations on several CPUs, with no usable results in the end.
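The proposed measurement can be sketched as follows. Parsing the \emph{uptime} output is straightforward; the per-VM estimate is hypothetical and is exactly the quantity the experiments failed to calibrate reliably:

```python
def load_averages(uptime_line):
    """Extract the 1-, 5- and 15-minute load averages from `uptime` output
    (assumes the common Linux "load average:" wording)."""
    _, _, rest = uptime_line.partition("load average:")
    return tuple(float(x) for x in rest.replace(",", " ").split()[:3])

def per_vm_load(load1, vm_count):
    """Naive estimate of queue pressure attributable to one VM process
    (hypothetical; no stable correlation was found in practice)."""
    return load1 / vm_count if vm_count else 0.0
```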

In this work only a single load period was evaluated, and the VM load model had only one harmonic. This is a rather oversimplified solution (though it works even as is); deploying more periods and harmonics is desirable for real data centers and is left for future research.

\newpage
\section{Appendix}
\subsection{Virtual Cloud Setup}

The project used a virtual cloud to develop and test the components of the system. OpenStack has many deployment options, one of which is installing the hosts (controller and compute nodes) into virtual machines on a high-performance server or laptop. The Essex version of OpenStack was deployed with the help of VirtualBox (Folsom was released around the time the project reached its final state). The initial setup and configuration guide was found in one of the numerous blogs about OpenStack \cite{uksysadmin13} and later in a book \cite{jackson12} by Kevin Jackson, the author of that blog and an OpenStack developer. Several scripts were written to automate the installation and configuration of the virtual cloud, and quite a few problems had to be solved during installation on an Ubuntu 12.04 Linux server. The scheme of the virtual cloud deployed in a PC can be seen in Figures 29 and 30.

\begin{figure}[H]
\includegraphics[width=12cm]{pics/opstack_vbox_net1}
\caption{Virtual cloud in a PC}
\end{figure}

\begin{figure}[H]
\includegraphics[width=12cm]{pics/opstack_vbox_net2}
\caption{Network scheme of a Compute Node}
\end{figure}

For the experiments, 3 hosts (1 combined controller and compute node plus 2 pure compute nodes) were installed as VirtualBox machines on one server (an Intel i3 CPU with 8 GB RAM). Another setup was attempted on an enterprise server with many cores and plenty of memory: 8 virtual hosts with 4 dedicated cores and 8 GB RAM each. The latter setup was not successful due to the instability of the OpenStack Essex release - of the 20 virtual machines created, 3 to 5 hung on start with no errors reported at all. Many different VM images were tried, including custom-built ones and official ones from the Rackspace site, but this seemed to have no influence on the error rates. The smaller setup worked better, but with fewer VMs. Below are instructions supplementary to the manual cited above \cite{uksysadmin13}.

\subsubsection{Installation of Virtual Hosts}

It is important to choose networks that do not conflict with real ones: besides the real network of the hardware system, VirtualBox creates virtual interfaces. The installation guide suggests creating two new virtual interfaces with addresses that do not belong to any other network the system could connect to.

VirtualBox in this project was configured with two host-only networks. The first, 172.16.0.0/16, is the public network; the second, 172.17.0.0/16, is used for the OpenStack internal (inter-VM) network. Two virtual adapters are set up on the desktop/laptop machine with the addresses 172.16.0.254 and 172.17.0.254 (\emph{ifconfig} shows both).

Command for installing the cloud controller:
\begin{verbatim}
./OSinstall.sh -P eth1 -F 172.16.1.0/24 -p eth2 -f 172.17.1.0/24
\end{verbatim}

Command to install a compute node:
\begin{verbatim}
./OSinstall.sh -P eth1 -F 172.16.1.0/24 -p eth2 -f 172.17.1.0/24 -C 172.16.0.1 -T compute
\end{verbatim}

Starting the cloud nodes with CPU affinity (VirtualBox VMs):
\begin{verbatim}
taskset -c 0 VBoxHeadless -vrde off -startvm essex1 
taskset -c 1 VBoxHeadless -vrde off -startvm essex2 
taskset -c 2 VBoxHeadless -vrde off -startvm essex3 
\end{verbatim}

\subsubsection{Installation of Scheduling System}

The scheduler consists of a MySQL database, a Bash script that gathers CPU statistics, an R script that analyzes the statistics and produces re-balancing solutions, and a Bash script that runs the R script periodically and performs VM live migrations. All scripts are installed into the \emph{/usr/local/bin} folder.

The statistics database should be created on the controller node with the following commands:
\begin{verbatim}
mysql -uroot -p$OS_PASSWORD -e "DROP DATABASE vmstats; CREATE DATABASE vmstats;"
mysql -uroot -p$OS_PASSWORD -e "GRANT DELETE,INSERT,UPDATE,SELECT ON vmstats.*
  TO nova@'%' IDENTIFIED BY 'openstack';"
mysql -uroot -p$OS_PASSWORD vmstats -e "CREATE TABLE vmstats (
  vm CHAR(16), host CHAR(16), pcpu TINYINT,
  load1 FLOAT, load5 FLOAT, load15 FLOAT, mem SMALLINT,
  time TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  INDEX ixvm(vm), INDEX ixhost(host));"
\end{verbatim}

Statistics are collected in a distributed manner from each host. The controller node keeps the database open to the other hosts, and every host needs \emph{apt-get install mysql-client-core-5.5} to be able to access it. Each host runs the vmstat.sh script from \emph{/etc/rc.local}, which periodically queries the host CPU utilization.

A cron job is set up to run the balancing script vmbalance.sh at specified periods, e.g. once a week.

\subsection{Developed Software}

This research project can be accessed via Git at the Google Code site \cite{tbds13}:
\begin{verbatim}
git clone https://code.google.com/p/openstack-tbd-scheduler/ 
\end{verbatim}

The components of the scheduling system include the following programs:
\begin{itemize}
\item vmstat.sh - CPU load gatherer
\item mover.r - analyzer in R
\item vmbalance.sh - VM balancing script
\end{itemize}

There are a few additional scripts that helped to simulate either the virtual loads of VMs (keeping them busy) or the statistics gathered in the database by vmstat.sh:

\begin{itemize}
\item loader.sh - run on a VM to simulate a step-like CPU load
\item randomloader.sh - run on a VM to simulate a random CPU load
\item generator.pl - fills the database with simulated step-like CPU loads plus some randomness
\item rgenerator.pl - fills the database with simulated random CPU loads
\item cgeneration.pl - corrects database statistics after the (r)generator.pl scripts: enforces the 100\% bound on sums of CPU values
\end{itemize}

The R script that performs the online demo of the algorithm is stored as tbds.r.

\subsection{Data Sets}

This report references three data sets generated during the experiments, available as MySQL dumps and in CSV format:
\begin{itemize}
\item vmstats.sql and vmstats.csv - the first data set (moderate host CPU usage)
\item vmstats\_small.sql and vmstats\_small.csv - the second data set (small host CPU usage)
\item vmstats\_big.sql and vmstats\_big.csv - the third data set (high host CPU usage)
\end{itemize}

All these can be found in the research Git repository \cite{tbds13}.

\newpage
\addcontentsline{toc}{section}{References}
\bibliography{report}

\end{document}
