\documentclass[twocolumn]{sig-alternate}
%\usepackage{epsf}
\usepackage{epsfig, url, color, times, subfigure}
\usepackage{algpseudocode}
\usepackage{algorithmicx}
\usepackage[ruled]{algorithm}
%\usepackage{alg,alg2}
%\input{psfig.sty}

%\input{preamble-isca}
\setcounter{secnumdepth}{4}     
\pagestyle{empty}

\textheight 9.25 in         % 1in top and bottom margin
\textwidth 7in        % 1in left and right margin
\columnwidth 3.33 in

\oddsidemargin -0.2in      % Both side margins are now 1in
\evensidemargin -0.2in \topmargin -0.7 in

% The header goes .5in from top of the page and from the text.
\hyphenation{test-bed}
\hyphenation{well-provi-sioned}
\hyphenation{ass-ign}
\hyphenation{test-beds}
\hyphenation{ali-gned}
\begin{document}
\conferenceinfo{IMC'12,} {November 14--16, 2012, Boston, Massachusetts, USA.} 
\CopyrightYear{2012} 
\crdata{978-1-4503-XXXX-X/12/11} 
\clubpenalty=10000 
\widowpenalty = 10000

\bibliographystyle{plain}
\algnewcommand\algorithmicswitch{\textbf{switch}}
\algnewcommand\algorithmiccase{\textbf{case}}
\algnewcommand\algorithmicassert{\texttt{assert}}
\algnewcommand\Assert[1]{\State \algorithmicassert(#1)}%
% New "environments"
\algdef{SE}[SWITCH]{Switch}{EndSwitch}[1]{\algorithmicswitch\ #1\ }{\algorithmicend\ \algorithmicswitch}%
\algdef{SE}[CASE]{Case}{EndCase}[1]{\algorithmiccase\ #1}{\algorithmicend\ \algorithmiccase}%
\algtext*{EndSwitch}%
\algtext*{EndCase}%

\clearpage
\pagenumbering{arabic}
\numberofauthors{3}
\title{Reducing Allocation Errors in Network Testbeds}
\author{
\alignauthor
Jelena Mirkovic\\
       \affaddr{USC/ISI}\\
       \affaddr{4676 Admiralty Way, Ste 1001}\\
       \affaddr{Marina Del Rey, USA}\\
       \email{sunshine@isi.edu}
\alignauthor
Hao Shi\\
       \affaddr{USC/ISI}\\
       \affaddr{4676 Admiralty Way, Ste 1001}\\
       \affaddr{Marina Del Rey, USA}\\
       \email{shihao.edu}
\alignauthor
Alefiya Hussain\\
       \affaddr{USC/ISI}\\
       \affaddr{4676 Admiralty Way, Ste 1001}\\
       \affaddr{Marina Del Rey, USA}\\
       \email{alefiya@isi.edu}
}

\maketitle



\begin{abstract}


Network testbeds have become widely used in computer science,
both for evaluation of research technologies and for hands-on teaching.
This popularity naturally leads to oversubscription and resource allocation failures,
as limited testbed resources cannot meet the increasing demand.

This paper examines the causes of resource allocation failures on
the DeterLab testbed and finds three main culprits that create perceived
resource oversubscription, even when available nodes exist:
(1) overuse of mapping constraints by users,
(2) testbed software errors, and (3) suboptimal resource allocation.
We propose solutions that resolve these issues and reduce allocation
failures to 40\% of the baseline. In the remaining cases, real resource oversubscription
occurs, which calls for some form of fair sharing. We examine testbed
usage patterns and show that traditional fair-sharing techniques are not
suitable for network testbeds. We then propose two novel approaches -- Take-a-Break
and Borrow-and-Return -- that temporarily pause long-running experiments.
These approaches can reduce resource allocation failures to 25\% of the baseline
by gently prolonging 1\% of instances. While our investigation uses
DeterLab's data, our findings should apply to all testbeds that run Emulab
software.

\end{abstract}

% A category with the (minimum) three required fields
\category{C.2.1}{Computer Communication Networks}{Network Architecture and Design}{Distributed Networks}
%A category including the fourth, optional field follows...
\category{C.2.3}{Computer Communication Networks}{Network Operations}{Network Management}

\keywords{network testbeds, Emulab, resource allocation}

\section{Introduction}
\label{intro}

The last decade brought a major
change in experimentation practices in several areas of computer
science, as researchers migrate from simulation and theory to
network testbeds. Teachers are also shifting from
traditional lecture-oriented courses to more dynamic and realistic
teaching styles that incorporate testbed use for class demos or student
assignments.
These diverse groups of users each have important deadlines that they
hope to meet with the help of testbeds: conference submissions, demos,
class projects, etc.
%Yet, testbed usage today is
%amazingly organic and resource availability is highly unpredictable. 
%In this paper we focus on testbeds that allow users to obtain exclusive access
%to some portion of their resources. As user demand grows, these testbeds
%can and do experience overload that leads to allocation failures.

Current testbed resource allocation practices are not well aligned with these user needs.
Most testbeds
deploy no automated prioritization of allocation requests, serving them on a
first-come, first-served basis \cite{emulab,deterlab,schooner,planetlab,geni}, which
makes it impossible to guarantee availability during deadlines.
In this paper we focus on testbeds that allow users to obtain exclusive access
to some portion of their resources. As user demand grows, these testbeds
experience overload that leads to allocation failures.
Most testbeds further let users keep allocated resources for as long
as needed \cite{emulab,deterlab,schooner}. While there are idle detection mechanisms that attempt to
reclaim unused resources, users can
and do opt out of them. The two primary reasons users create long-running experiments are: (1) the lack of testbed mechanisms to
easily save disk state on multiple machines and (2) the lack of bounds on the
maximum wait time until resources become available again.
Jointly, testbed management and use practices create a climate where it is impossible
for an incoming user to have any assurance that testbed resources will be available
when she needs them.
Some users ask testbed staff to reserve resources for major deadlines. Since many
testbeds lack reservation mechanisms \cite{emulab, deterlab, schooner}, these reservations are made manually, by
staff pulling the requested machines out of the available pool. This is often done
early, so that staff can guarantee
availability, but it wastes resources and increases testbed overload.

Testbeds need better and more predictable allocation strategies
as they evolve from a novel to a mainstream experimentation platform.
Our main goal in this paper is to understand the reasons for resource
allocation failures and to propose changes in testbed operation that would
reduce these failures. We explain the resource allocation problem in network
testbeds in Section \ref{tmp}. We then examine the reasons for resource allocation
failures in the DeterLab testbed \cite{deterlab, acsac} over the eight years of its existence
(Section \ref{whyf}). DeterLab is a public testbed for security experimentation, hosted by USC/ISI and UC Berkeley.
It has around 350 general PCs and several tens of special-purpose nodes.
While it is built on Emulab technology, its focus is on cyber security research, testing and evaluation.
It provides resources, tools, and infrastructure for researchers to conduct rigorous, repeatable experiments
with new security technologies and test their effectiveness in a realistic environment.
DeterLab is used extensively both by security researchers and educators. It also experiences
intensive internal use for development of new testbed technologies.

\color{red}
We find that 80\% of failures occur due to a \textit{perceived} resource shortage, i.e.,
not because the testbed lacks enough nodes, but
because it lacks enough of the \textit{right} nodes that the user desires. Since user desires
and testbed allocation strategies jointly create the shortage of the nodes
in current demand, we next investigate how much relaxing user constraints (Section \ref{relaxing})
or improving the resource allocation strategy (Section \ref{improving}) helps reduce
allocation failures. Finally, we investigate whether a change in testbed resource allocation policy
would further improve resource allocation both in cases of perceived and in cases of
true resource shortage (Section \ref{policy}).
\color{black}

While we only
analyze DeterLab's dataset, this testbed's resource allocation algorithms and practices
derive from the Emulab software \cite{emulab}, which is used extensively
by 40+ testbeds around the world \cite{emuothers}. Our findings should thus apply to these testbeds as well.

%Testbeds respond to increased demand for resources by purchasing more 
%hardware, through federation \cite{fedd} or through virtualization. 
%These help but to a limited extent.
%Cooling requirements and weight limitations restrict how much hardware can be hosted
%in any facility. Federation does not guarantee availability although it improves it,
%and it is difficult for users to create experiments that are portable to multiple testbeds.
%Some experiments cannot be virtualized, and those that can experience reliability issues
%when too many experiments share the same physical nodes. 

The main contributions of our paper are:
\begin{itemize}
\item This is the first analysis of the causes of resource allocation failures in testbeds.
We find that 80\% of failures occur due to a perceived resource
shortage, when in fact there are sufficient nodes to host a user's request. Half
of these cases occur because of inefficient testbed software, while the rest
occur because of over-specification in users' resource requests.
\item We closely examine the resource allocation algorithm used in Emulab
testbeds -- \texttt{assign} \cite{assign} -- and show that it often performs suboptimally.
We propose an improved algorithm -- \texttt{assign+} -- that generates 20\% fewer allocation failures, while
running 10 times faster and preserving more of the limited resources, such as
interswitch bandwidth.
\item We propose improvements to the testbed resource allocation strategy, deploying migration
and relaxing user constraints, that reduce resource allocation
failures to 60\% of those generated by \texttt{assign}.
\item We identify and demonstrate the need for fair sharing and prioritization of user allocation
requests at times of overload. We propose two ways to modify the testbed resource allocation policy to achieve
these effects: Take-a-Break and Borrow-and-Return.
In both, resources from long-running experiments are reclaimed and offered to incoming ones; in
Take-a-Break they are held as long as needed, while in Borrow-and-Return they are returned
to the original experiment after 4 hours. We show that both approaches improve the fairness of
resource allocation, while reducing allocation failures to 25\% of those generated by \texttt{assign}.
\item We suggest five improvements to management software in testbeds that improve their operation and better align them with user needs. 
\end{itemize}


\section{Related Work}
\label{related}

The wide adoption of emulation testbeds
in the networking research community has spurred
studies on different approaches for designing and managing
them.
For example, the resource management mechanisms for Globus
and PlanetLab are compared and contrasted extensively by Ripeanu~\cite{GlobusPlanetLab}.
Additionally, Banik et al.~\cite{floorcontrol} conduct
 empirical evaluations for different protocols that can provide exclusive 
  access to shared resources on PlanetLab.
The StarBED project offers several unique solutions for emulation, including testbed
 configuration and experiment management mechanisms~\cite{simpleTestBed, StarBED2}.
 These works either evaluate the pros and cons of specific testbed management mechanisms or propose how to build
 testbeds, but do not investigate resource allocation algorithms.

The scheduling and resource management of testbeds have become increasingly challenging.
Hermenier and Ricci examine the topological requirements
 of experiments on the Emulab testbed \cite{emulab} over the last decade~\cite{Hermenier2012how}.
 They propose a way to build better testbeds by: (1) increasing the heterogeneity of node connectivity, (2) connecting 
 nodes to different switches to accommodate heterogeneous topologies without use of interswitch bandwidth, and 
(3) purchasing smaller and cheaper switches to save costs.
Our work is orthogonal to theirs and focuses on optimizing allocation software and policies, regardless of testbed
architecture. 
 Kim et al. characterize the PlanetLab testbed's \cite{planetlab} usage over the last decade~\cite{kim2011understanding}.
Their results indicate that bartering and central banking schemes for resource allocation 
 can handle only a small percentage of total scheduling requirements. 
 They do not propose better resource allocation algorithms, even though they identify  
  the factors that account for high resource contention or poor utilization. 

Yu et al.~\cite{yu2008rethinking} propose collecting allocation requests during a time window and then allocating testbed resources to satisfy the constraints of this request group. They employ a greedy algorithm to map nodes and path splitting to map links. They also perform online migration to change the route or splitting ratio of a virtual link, which re-balances the mapping of virtual topologies to maximize the chance of accepting future requests. Their methods treat the general mapping problem at a high level, but do not take into account the heterogeneity of testbed nodes. Moreover, queuing allocation requests in network testbeds would introduce potentially large delays that users would not tolerate.
Chowdhury et al.~\cite{chowdhury2009virtual} use mixed integer programming to solve the resource allocation problem, but their constraints are limited to CPU capacity and the distance between two testbed nodes. Lu et al.~\cite{lu2006efficient} develop a method for mapping virtual topologies onto a testbed in a cost-efficient way. They consider traffic-based constraints but do not consider node heterogeneity or node features.

\color{red}
In a broader setting, ISPs tend to address resource allocation problems by overprovisioning their resources (bandwidth). This solution
does not readily apply to network testbeds. First, testbeds have limits on how many machines they can host, stemming from the space, weight, cooling
and power capacity of the rooms that host them. Second, testbeds are hosted by academic institutions and funded through grants, which limits both
the human and financial resources for purchase and maintenance of hardware. Finally, testbed use exhibits heavy tails along many dimensions (see Section \ref{fairsharing}),
which prevents prediction of future resource needs.

Clusters and data centers face resource allocation issues similar to testbeds' \cite{drf, mesos, condor}. In \cite{drf}, Ghodsi et al. propose
dominant resource fairness (DRF) for resource allocation in data centers. This approach achieves fair allocation of heterogeneous
resources between users who prioritize them differently. Unfortunately, like other fair-sharing approaches, DRF is not readily applicable to
testbeds (see Section \ref{fairsharing} for more details), due to the interactive nature of experimentation and the different value of
long versus short experiments. In \cite{mesos}, Hindman et al. describe Mesos, a platform for sharing clusters by allowing multiple
resource allocation frameworks to run simultaneously. Mesos offers resource shares to the frameworks based on some institutional
policy, e.g., fair share, and the frameworks decide which offers to accept and which tasks to run on them. Some principles from \cite{mesos},
such as resource offers, may apply to testbeds, but they assume users far more sophisticated and informed about resource
allocation than testbeds currently
have. Condor \cite{condor} is a workload management system for compute-intensive jobs that aims to harness unused resources on
heterogeneous and distributed hardware and can migrate data and jobs as nodes become available. While some Condor ideas may
apply to network testbeds to achieve instance migration (see Section \ref{policy}), network resources are usually heavily customized
by users (e.g., specific OS images are loaded, OS and network configuration is changed), which prevents the fine-grain migration that
Condor excels at.
\color{black}

\section{Terminology}
\label{sec:terms}


%We define a \textit{network testbed} as a collection of computing and
%network resources that are shared by multiple users. 
%In this paper we are specifically focused on time-shared testbeds, where 
%users obtain exclusive control over some subset of nodes and the underlying 
%substrate by on-demand allocation mechanisms.
%Testbeds like the Utah Emulab~\cite{emulab},
% DeterLab~\cite{deterlab} and Schooner~\cite{schooner} fall in this category. 
% Due to limited resources these testbeds may
% experience overload conditions when demand exceeds the supply. This leads
% to failure of some attempted resource allocations, forcing their users to wait until
% resources become available. 
%Other network testbeds such as Planetlab~\cite{planetlab}, and GENI~\cite{geni}
% offer access to virtualized resources of remote machines, where
% a single physical node is shared by multiple users through a \textit{slice} interface.
%Slicing is usually implemented by means of
%\textit{virtualization}, a widely used technique in which a
%software layer multiplexes lower-level resources among higher-level
%software programs and systems. While a resource in such testbeds could 
%hypothetically be shared by an infinite number of users, overload conditions can 
%still occur when many users request the same resource. While their requests 
%are granted, oversubscribed resources
%lead to poor performance for everyone \cite{kim2011understanding}. 

% Something else needs to be said here about overload, errors, etc.
% We must say somewhere what goes into experiment definition and also
% how users can create custom images
%Also we must say somewhere what is DeterLab and what's its architecture, interswitch bw, routing, etc.
%Most network testbeds today deploy first-come first-served, unlimited-hold resource
%allocation policy, which we will call FCFS-UH for short. This means that resources are allocated on demand, if available, 
%and no fair sharing is enforced. While there are mechanisms in place to motivate
%users to release resources when idle they can easily be turned off by users leading
%to potentially unlimited hold on resources.
%We examine the effect such policies have on user behavior and resource allocation
%outcomes and we investigate alternative policies.


%While network testbeds were primarily constructed for research purpose, today they are used for four
%distinct  purposes:
%\begin{itemize}
%\item {\it Research} in networking and distributed systems for development and
% evaluation. Typically, in the research category, a group of users investigates a
% common research problem by accessing the testbed resources for
% hypothesis testing, deployment studies, or exploratory research.
% \item {\it Classes} in networking and distributed systems, to teach concepts about existing and new systems
% and technologies. In the more advanced courses, class projects may be
% of research nature,
%which blurs the line between class and research.
% \item {\it Internal} development and testing of new testbed technologies.
% \item {\it Scheduled demo activities}, to support focused evaluation in large government-funded  
% research programs over the course of several weeks.
%\end{itemize}
%Each of these user groups has important deadlines that may collide with those from other groups. 
%For example, research users have conference deadlines and demo deadlines to 
%research sponsors that cannot be moved. Class users have homework and project deadlines; while these may be
%adjusted by teachers this is not always possible. Internal users
%may urgently need to develop a new technology to patch a vulnerability in the testbed or may need
%to take a portion of testbed down to address an important issue. Scheduled demo activities
%often require many machines over the course of several weeks and cannot be postponed.
%Network testbed's FCFS-UH policy does nothing to prioritize needs of users with deadlines over other
%users or to prioritize use of one group over another. Such prioritization is today done manually by
%testbed owners and operation staff in a very ad-hoc manner. In this climate, it is possible that
%aggressive users may starve less aggressive ones through excessive resource allocation and prolonged
%holding of resources.
%
%\subsection{Terminology}
%\label{terminology}
%
We now introduce several terms related to network testbed use and illustrate them
  in Figure~\ref{termspic}.
  % and Figure~\ref{fig:user_proj_cat}.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in]{figs/termspic}
\caption{Terminology}
\label{termspic}
\end{center}
\end{figure}



An \textit{experiment} is a collection of inputs submitted by a user to the testbed
(one or more times) under the same identifier. These inputs describe the experimenter's needs,
such as the experiment topology, software to be installed on nodes, etc.
We say that each input represents a \textit{virtual topology}.
Experiments can be modified, e.g., by changing the number of nodes or their connectivity.
In Figure \ref{termspic} there are
 two experiments, A and B.

An \textit{instance} is an instantiation
 of the experiment on the physical resources of the testbed.
 We say that an instance has a \textit{duration} (how long resources were allocated to it),
 a \textit{size} (how many nodes were allocated) and a \textit{virtual topology} (how nodes were connected to each other, what types of nodes were requested, what OS, etc.).
The same experiment can result in multiple \underline{non-overlapping}
 instances, one for each resource allocation. In Figure
 \ref{termspic} there are five instances, three linked to experiment A and two linked to B.
Release of the resources back to the
 testbed, or instance modification, denotes the end of a particular instance.

A testbed \textit{project}
 is a collection of experiment definitions and
 authorized users working toward a common goal under a single \textit{head-PI}.
In Figure \ref{termspic} there is one project with two experiments and three users.
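To make the relationships between these terms concrete, they can be sketched as a small data model. This is purely illustrative; the class and field names are ours, not DeterLab's database schema:

```python
# Hypothetical data model for the terminology above; names are
# illustrative, not DeterLab's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Instance:
    start: float   # time resources were allocated
    end: float     # time resources were released
    size: int      # number of nodes allocated

    def duration(self) -> float:
        return self.end - self.start

@dataclass
class Experiment:
    identifier: str
    # Instances of one experiment never overlap in time.
    instances: List[Instance] = field(default_factory=list)

@dataclass
class Project:
    head_pi: str
    users: List[str]
    experiments: List[Experiment]
```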

An experiment can experience the following events: \\
\texttt{preload},  \texttt{start}, \texttt{swapin}, \texttt{swapout}, \texttt{swapmod} and \\ \texttt{destroy}.
\texttt{preload},  \texttt{start} and \texttt{destroy} can occur only once during an experiment's lifetime, while the others can occur multiple times.
Each event is processed by one or more testbed scripts and can result in a success or a failure.
Figure \ref{statetrans} shows the state diagram of an experiment, where state transitions occur on successful events.
A \texttt{preload} event stores the experiment's virtual topology on the testbed, but no resources are yet allocated
to the experiment -- the experiment exists in the \textit{defined} state. A \texttt{swapin} event leads to resource allocation, changing the experiment's state to \textit{allocated}.
 A \texttt{start} event is equivalent to a \texttt{preload} followed by a \texttt{swapin}.
 A \texttt{swapout} event releases resources from the experiment, changing its state to \textit{defined}.
A \texttt{swapmod} event can occur either in the \textit{defined} or in the \textit{allocated} state. It changes the experiment's
definition but does not lead to a state change. If a \texttt{swapmod} fails while the experiment is in the \textit{allocated} state,
the testbed software automatically generates a \texttt{swapout} event and reverts the experiment's state to \textit{defined}.
A \texttt{destroy} event removes an experiment's virtual topology and state from the
 testbed, but the history of its events remains.
Table \ref{frequency} shows the frequency of all experiment events in our dataset, which is described
in the following section.
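The state diagram in Figure \ref{statetrans} can be rendered as a small transition table. The sketch below is our own illustrative encoding of the rules above, not testbed code; in particular, treating \texttt{destroy} as legal only in the \textit{defined} state is our assumption:

```python
# A sketch of the experiment state machine described above. The
# transition table follows the text; allowing "destroy" only in the
# defined state is our assumption, not a documented rule.

TRANSITIONS = {
    ("nonexistent", "preload"): "defined",
    ("nonexistent", "start"):   "allocated",  # preload + swapin
    ("defined",     "swapin"):  "allocated",
    ("allocated",   "swapout"): "defined",
    ("defined",     "swapmod"): "defined",    # modifies definition only
    ("allocated",   "swapmod"): "allocated",
    ("defined",     "destroy"): "nonexistent",
}

def next_state(state, event, success=True):
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"illegal event {event} in state {state}")
    if not success:
        # A failed swapmod in the allocated state triggers an
        # automatic swapout, reverting the experiment to defined.
        if event == "swapmod" and state == "allocated":
            return "defined"
        return state  # other failures leave the state unchanged
    return TRANSITIONS[(state, event)]
```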


\begin{table}[htdp]
\begin{center}
\begin{tabular}{|c|c|c|}\hline
Event & Count & Frequency \\ \hline
preload & 10,472 & 4\% \\
start & 16,043 & 6.1\% \\
swapin & 101,275 & 38.6\% \\
swapmod & 36,819 & 14\% \\
swapout & 75,156 & 28.7\% \\
destroy & 22,575 & 8.6\% \\ 
total & 262,340 & 100\% \\ \hline
\end{tabular}
\caption{Frequency of experiment events in our data\label{frequency}}
\end{center}
\end{table}%

 
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in]{figs/statetrans}
\caption{Experiment state diagram}
\label{statetrans}
\end{center}
\end{figure}


\section{Data} \label{sec:data}

We analyze eight years of data about DeterLab's operation, collected from its inception in February 2004
until February 2012. As of February 2012, DeterLab had 154 active research projects (556 research users), 38 active class projects
(1,336 users) and 11 active internal projects (95 users). At the beginning of 2011 the testbed consisted of 346 general PCs and some
special-purpose hardware. Half of the nodes are located at USC/ISI, and the other half at UC Berkeley. Table \ref{nodetypes}
shows the features of DeterLab's PCs.

\begin{table}[b]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Type & Disk & CPU & Mem& Interf. & Count\\
& (GB) & (GHz) & (GB) & & \\
\hline
1        & 250  &2.133  &4      &4 & 63\\
2       & 250   &2.133  &4      &4 & 63\\
3        &72    &3      &2      &4 & 32\\
4        & 72   &3      &2      &5 & 32\\
5       & 36    &3      &2      &4 & 61\\
6       & 36    &3      &2      &5 & 60\\
7       & 36    &3      &2      &9 & 4\\
8        & 238  & 1.8 & 4 &     5 & 31\\\hline

\end{tabular}
\caption{DeterLab's node types -- Jan 2011 \label{nodetypes}}
\end{center}
\end{table}%
DeterLab runs Emulab's experiment control software \cite{emulab}, which means
that all testbed management events, such as node allocation, release, user account
creation, etc., are issued from one control node called \texttt{boss} and
recorded in a database there.
Additionally, some events create files in the file system on the \texttt{boss} node.
We analyze the portion of the database and file system state on this node that relates to resource allocations. The complete
list of our data is shown in Table \ref{data}. We have database records of testbed events and of any errors that
occurred during processing of these events. We further have files describing the virtual topology and the testbed state snapshot
that were given to the allocation software -- \texttt{assign} -- and the allocation log showing which physical nodes
were assigned to an experiment instance. \color{red}
The virtual topology encodes the user's desires about the nodes they want, their configuration and
connectivity. The testbed state snapshot lists the currently available nodes on the testbed, along with
their switch connectivity, supported operating system images, features and feature weights. Such
snapshots are created on each attempted \texttt{start}, \texttt{swapin} or \texttt{swapmod} operation.
\color{black}

In our investigation we found that both database and file system data can  
be inconsistent or missing. 
This can occur for several reasons:
\begin{enumerate}
\item Different scripts may handle the same event and may generate database entries. A script may behave in
an unexpected manner or overlook a corner case, leading to inconsistent information.
For example,
for a small number of experiments we found database entries
showing consecutive successful \texttt{swapin} events,
which is an impossible state transition. We believe this occurs because one script processes the event and records a success before
the event fully completes. In a small number of cases another script
 detects a problem near the end of resource allocation and reverts the experiment's state to \textit{defined} but does not
 update the database.

\item State transitions can be invoked manually by testbed operations staff, without generating recorded testbed events.
For example, we found a small number of experiments in the \textit{allocated} state according to the database,
while the file system state indicated that they had returned their resources to the testbed. This can occur when
testbed operations staff manually evict several or all experiments to troubleshoot a testbed problem.

\item Testbed policies and software evolve, which may lead to different recording of an event over time.
For example, in 2004--2006, when a user's request for experiment modification had a syntax error, this was recorded in the database.
This practice was abandoned in later software releases. Similarly, when a user's request
for experiment modification failed due to a temporary lack of testbed resources, the request and the testbed's state snapshot
were recorded in the file system on the \texttt{boss} node. This practice was stopped in early 2007, making it difficult
to understand and troubleshoot resource allocation errors.

\item In a small number of cases, the software generating unique identifiers for file names had low randomness,
leading to newer files overwriting older ones within the same experiment. As a result, file system state
is missing for some instances.

\end{enumerate}
During our analysis, we detect inconsistent entries and either correct or discard them. We also attempt to infer missing data wherever possible,
by combining the database and the file system information.
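As an illustration of the kind of cross-checking involved, the impossible consecutive-\texttt{swapin} pattern described above can be detected in a single pass over the event log. The record format here is a simplifying assumption, not our actual analysis code:

```python
# Illustrative sketch of one consistency check on the event log: a
# successful swapin must not directly follow another successful swapin
# for the same experiment. The (id, event, exit_code) record format is
# an assumption for illustration.

def find_inconsistencies(events):
    """events: list of (experiment_id, event_name, exit_code) in time
    order. Returns experiment ids showing consecutive successful
    swapin events, which is an impossible state transition."""
    last_ok = {}          # experiment id -> last successful event name
    bad = set()
    for exp, name, code in events:
        if code != 0:     # only successful events change state
            continue
        if name == "swapin" and last_ok.get(exp) == "swapin":
            bad.add(exp)  # impossible: allocated -> allocated
        last_ok[exp] = name
    return bad
```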

\begin{table*}[htdp]
\begin{center}
\begin{tabular}{|c|c|p{4 in}|}
\hline
Source & Data & Meaning \\ \hline
DB & \texttt{events} & Time, experiment, project, size, exit code for each event.\\
DB &  \texttt{errors} & Time, experiment, cause and error message for each error. \\
FS& \texttt{/usr/testbed/expinfo} & Virtual topology, testbed resource snapshot, and resource allocation log for all successful and for some unsuccessful resource allocation requests.  \\ \hline
\end{tabular}
\caption{High-level description of the DeterLab's data we analyzed \label{data}}
\end{center}
\end{table*}%

\textit{\textbf{Suggestion 1:} Testbeds need better software development practices that start from a system model and verify
 that the developed code matches the model, e.g., through model checking and unit testing.} \color{red}While it is impossible to eliminate
 all bugs in a large codebase, systematically tying code to requirements and models would help
eliminate inconsistencies in record-keeping and even facilitate automated detection and forensics of testbed problems.
\color{black}


%Say somewhere that we have regular PCs and other resources

\section{Testbed Mapping Problem}
\label{tmp}

We now explain some specifics of testbed operation related to resource allocation,
using Figure \ref{nmpillust} to illustrate them.
 Many of the concepts in this section were first introduced in \cite{assign}.

Over time, network testbeds acquire nodes of different hardware types, leading to heterogeneity.
Types can differ in the number of network interfaces, processor speed, memory speed, disk space, etc.
In Figure \ref{nmpillust} the drawing on the left shows a sample testbed architecture.
There are three hardware types: \textit{A}, \textit{B} and \textit{C}, with 2, 2 and 6 nodes respectively.
Each physical node is connected
to a switch. Because a single switch has a limited number of ports,
a testbed may have multiple switches connected by limited-bandwidth links, each hosting a subset of the
nodes. In Figure \ref{nmpillust} there are three switches -- \textit{s1}, \textit{s2} and \textit{s3} -- with interswitch links
shown as thick lines between them. Often nodes of the same type are connected to the same switch. Sometimes
it is beneficial to connect different node types to the same switch (e.g., nodes of type \textit{A} and \textit{B} are connected to
 \textit{s1}) or to connect some nodes to two different
switches (e.g., nodes \textit{C5} and \textit{C6} connect to \textit{s1} and \textit{s3}). DeterLab has instances of
all three node-to-switch connection types in its current architecture.

Users submit their experiment configuration requests to the testbed as a \textit{virtual topology}. 
One such topology is shown in the right drawing 
in  Figure \ref{nmpillust}. 
A resource allocation algorithm attempts to solve the \textit{testbed mapping problem} \cite{assign}.
It starts from the virtual topology and a snapshot of the 
testbed state and attempts to find the best selection of 
hardware that satisfies  
experimenter-imposed and testbed-imposed constraints. 

Testbed-imposed constraints consist of limitations on the available nodes of any given type, the number of node interfaces, and
the interswitch link bandwidth. 
Experimenter-imposed constraints are encoded in the virtual topology as \textit{desires} and consist of:
(1) \textbf{Node type constraints} --  a virtual node must be mapped to a specific hardware type,
(2) \textbf{OS constraints} -- a virtual node must run a specific OS,
(3) \textbf{Connectivity constraints} --  a virtual node must have a specific number of network interfaces and be connected to another node
by a link of a specific bandwidth.
Node type and OS constraints are encoded explicitly by annotating nodes in the virtual topology, and the connectivity constraints are 
implied in the topology's architecture. For example,  in Figure \ref{nmpillust}, explicit constraints request nodes
\textit{n1} and \textit{n2} to be of type \textit{pc} and run OS 1, while nodes  \textit{n3}, \textit{n4} and \textit{n5} should be of type
\textit{B} or \textit{C}. Implicit connectivity constraints require that \textit{n2} be mapped to a node with at least 3
network interfaces, \textit{n1} to a node with at least 2 interfaces, and the 
rest to nodes with at least 1 interface. Each link is required to have 1Gbit bandwidth. \color{red}This limits the number of virtual links that can be allocated
to an interswitch link, and in turn invalidates some mappings of virtual to physical nodes that would oversubscribe interswitch bandwidth. \color{black}
Emulab software further lets users specify \textbf{fixed mappings}: pairings of virtual nodes with specific physical nodes. This sometimes helps the \texttt{assign} algorithm find a solution that it would otherwise miss. We elaborate on the reasons for fixed mappings in the next section. 
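The three experimenter-imposed constraint checks can be sketched in a few lines of Python. This is an illustrative model, not DeterLab code; the field names and the interface count for \textit{A1} are assumptions.

```python
# Illustrative sketch (not DeterLab code) of the three experimenter-imposed
# constraint checks. Field names and A1's interface count are assumptions.

def can_host(virtual, physical):
    # Node type constraint: the requested type (or any vclass member)
    # must be among the types the physical node can satisfy.
    if virtual["types"] and not set(virtual["types"]) & set(physical["types"]):
        return False
    # OS constraint: the requested OS must be supported by this hardware.
    if virtual.get("os") and virtual["os"] not in physical["oses"]:
        return False
    # Connectivity constraint: enough network interfaces.
    return virtual["ifaces"] <= physical["ifaces"]

# Node n2 from the figure: type pc, OS 1, three network interfaces.
n2 = {"types": ["pc"], "os": "OS1", "ifaces": 3}
# Node A1 satisfies types A and pc; OS 1 and OS 2 run on type A.
a1 = {"types": ["A", "pc"], "oses": ["OS1", "OS2"], "ifaces": 4}
print(can_host(n2, a1))  # True
```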


The notion of the \textit{node type} \cite{assign} extends beyond simple hardware types in two ways.
First, a physical node can ``satisfy'' multiple node types and may host multiple instances of the same type. 
For example, node \textit{A1} (see annotations at the bottom left in Figure  \ref{nmpillust}) 
can host one virtual node of type \textit{A}, one virtual node of type \textit{pc}, two virtual nodes of type
\textit{delay}, or ten virtual nodes of type \textit{pcvm} (virtual machine installed on a physical node).  
Second, instead of a single type, a user can specify a \textit{vclass} -- a set of node types --
for any virtual node. In our example a user has asked for
nodes \textit{n3}, \textit{n4} and \textit{n5} to be either of type \textit{B} or of type \textit{C}. \textit{vclasses}
can be hard -- requiring that all nodes be assigned the same node type from the \textit{vclass} -- or soft -- allowing mixed type allocations from the same 
\textit{vclass}. In DeterLab's operation we have only encountered soft \textit{vclasses}.
Corresponding to experimenters' desires, physical nodes have \textit{features}. For example,
in Figure \ref{nmpillust} there are the following features: (1) OS 1 runs on types \textit{A} and \textit{B}, 
(2) OS 2 runs on  types \textit{A} and \textit{C}, (3) firewallable feature is supported by  types \textit{A} and \textit{B}, 
(4) hosts-netfpga feature is supported by  types \textit{B} and \textit{C}. Each feature is accompanied by a \textit{weight} that
is used during the resource allocation process to score and compare different solutions.

Testbeds create \textit{base} OS images for all their users, for popular OS types like Linux, Windows and FreeBSD. 
Over time the testbed staff creates newer versions of the base images, but the old ones remain on the testbed and continue to be used, we believe due to inertia.
Testbeds further allow users to create custom disk images as a way of 
saving experimental state between allocations. These images are rarely upgraded to new OS versions. As testbeds grow, old custom and base images 
cannot be supported by new hardware. Thus virtual topologies with such images can be 
allocated only on a portion of the testbed, and OS desires turn into mapping constraints.

\textit{\textbf{Suggestion 2:} Testbeds need mechanisms that either provide state saving without disk imaging, or help users to upgrade their
 custom images automatically to new OS versions. Experiment specifications (virtual topologies) should also be upgraded automatically to use newer
 base OS images. This would eliminate OS-based constraints and improve allocation success. }

\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=4in]{figs/nmpillust}
\caption{Illustration of the network testbed mapping problem}
\label{nmpillust}
\end{center}
\end{figure*}

An \textit{acceptable} solution to the testbed mapping problem
meets all experimenter-imposed and testbed-imposed constraints. 
We note that honoring an interswitch bandwidth constraint is a choice and not a must. Testbed software can 
allocate any number of virtual links onto the interswitch substrate, but if it oversubscribes this substrate and
experimenters generate full-bandwidth load on the virtual links, they may experience lower-than-expected performance. 
In our example in Figure \ref{nmpillust} it is possible to allocate links \textit{n2}-\textit{n4} and \textit{n2}-\textit{n5} on the same
1Gbit interswitch link, but if the experimenter sends 1Gbit of traffic on each of them \underline{at the same time}, half of the traffic will 
be dropped. There are two choices when evaluating whether the interswitch bandwidth constraint is met: (1) the evaluation can be done
only within the same experiment, assuming no other experiment uses the same interswitch link, or (2) the evaluation can take 
into account all experiments that use the same interswitch link. In practice, choice (1) is chosen because it improves the 
resource allocation success rate. The risk of violating experimenters' desires is minimal because the incidence
of multiple experiments using the same interswitch link and generating high traffic at the same time is low.
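A per-experiment interswitch bandwidth check (choice (1) above) can be sketched as follows. The mapping, switch assignment and capacities below are hypothetical stand-ins loosely modeled on the figure, not actual testbed data.

```python
# Illustrative sketch: per-experiment interswitch bandwidth check.
# The mapping, switch assignment and capacities are hypothetical.

def interswitch_ok(mapping, links, switch_of, capacity):
    """mapping: virtual node -> physical node; links: (v1, v2, bw) in Gbit;
    switch_of: physical node -> switch; capacity: switch pair -> Gbit."""
    used = {}
    for v1, v2, bw in links:
        s1, s2 = switch_of[mapping[v1]], switch_of[mapping[v2]]
        if s1 != s2:                       # only interswitch links count
            key = tuple(sorted((s1, s2)))
            used[key] = used.get(key, 0) + bw
    return all(used[k] <= capacity[k] for k in used)

# Two 1 Gbit virtual links sharing one 1 Gbit interswitch link
# oversubscribe it, so this mapping is rejected.
mapping = {"n2": "A2", "n4": "C1", "n5": "C2"}
switch_of = {"A2": "s1", "C1": "s2", "C2": "s2"}
links = [("n2", "n4", 1), ("n2", "n5", 1)]
print(interswitch_ok(mapping, links, switch_of, {("s1", "s2"): 1}))  # False
```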

The \textit{best} solution is the one that minimizes interswitch bandwidth consumption and minimizes unwanted features on selected
physical nodes -- features that are present on the nodes but were not desired by the experimenter. Doing so improves the 
chance of success for future allocations.
In the face of these constraints the testbed mapping problem becomes NP-hard, and the number of possible solutions is too 
large to be searched exhaustively for the best one.

\section{Why Allocations Fail}
\label{whyf}

A resource allocation may fail for a number of reasons, such as a syntax error in the user's request, a testbed software failure, or a policy violation by a user's request. In this paper we are only concerned with resource allocation failures that occur due to a temporary shortage of testbed resources. 
This means
that the same virtual topology would successfully allocate on an empty testbed. We will call these TEMP failures and 
classify them into the following categories:
\begin{enumerate}
\item \textbf{FIXED:}  The virtual topology specified a fixed mapping of some virtual nodes to specific
physical nodes, but the testbed could not obtain access to these nodes. 
\item \textbf{TYPE:} 
The virtual topology had a node type constraint that could not be met by the testbed.
\item \textbf{OS:} The virtual topology had an OS constraint that could not be met by the testbed.
\item \textbf{CONNECT:} The testbed could not find a node with sufficient network interfaces.
\item \textbf{INTERSWITCH:} The allocation algorithm found a solution but the projected interswitch bandwidth usage exceeded link capacity.
\item \textbf{TESTBED:} There is a problem in the testbed's software that only becomes evident during resource allocation.
One such problem occurs when the current allocation algorithm -- \texttt{assign} \cite{assign} -- fails to find a possible solution.
\end{enumerate}
\color{red}
Categories FIXED, TYPE, OS, CONNECT and INTERSWITCH stem directly from the way testbeds
address the network mapping problem (see the previous section) -- a failure to satisfy user or testbed constraints
will fall into one of these five categories. In our analysis of TEMP failures on DeterLab we further find that bugs in testbed
configuration and software occasionally lead to TEMP failures, \textit{even when there are available resources
to satisfy user and testbed constraints}. This leads us to create the TESTBED category. One could view TEMP failures that 
fall into the TESTBED category as false TEMP failures, since they do not occur due to a temporary resource shortage.
\color{black}

We first investigate why TEMP failures occurred historically on the DeterLab testbed. 
We start with the records from DeterLab's database, which contain the experiment identifier, the time and the alleged cause of each failure, as well as the error message generated by the testbed software. The database only has records for failures that occurred after April 13, 2006. There are 24,206 records, out of which
11,176 are TEMP failures. We use the error messages to classify these TEMP failures into the categories above. We find that 47.5\% are TYPE failures, 18.5\% are FIXED failures, 15.7\% are OS failures, 
3.8\% are CONNECT failures and only 0.5\% are INTERSWITCH failures. In 13.5\% of cases the error message indicates that mapping failed, but does not give the specific reason. Finally, there are
0.2\% of failures that occur due to a policy violation or a semantic problem in the experimenter's request but are misclassified as TEMP failures.

While the above analysis offers a glimpse into why specific allocations failed, we would like to know how many failures occur due to 
\textit{true overload} -- no available resources on the testbed -- 
and how many occur due to \textit{perceived overload} and could be eliminated either by relaxing experimenters' constraints or by improving testbed software. 
To answer these questions we need to match each TEMP failure to the virtual topology and the testbed state snapshot that were given to 
\texttt{assign}, so we can mine the desired and the available number of resources. 
We perform this matching in the following way:
\begin{enumerate}
\item We link each TEMP failure to a resource allocation log file showing details of the allocation process, by matching the time of the TEMP failure 
with the timestamp of the file.
\item From the log file we mine the file names of the virtual topology and the testbed state snapshots that were used by the resource allocation software, i.e., the \texttt{assign} algorithm. 
\item In 2007, the DeterLab testbed stopped saving the virtual topology and the testbed state files for failed allocations, so we must infer them from other data. 
To infer the virtual topology we identify the testbed event (\texttt{swapmod} or \texttt{swapin}) that led to the specific TEMP failure. 
For failures that occurred on a \texttt{swapin} event, we attempt to find a previous successful  \texttt{swapmod} or \texttt{swapin} of the same experiment and link it to a virtual topology using the same process from steps 1 and 2. We associate this topology with the TEMP failure. 
To infer the testbed state at the time of a TEMP failure, we process the testbed state snapshots chronologically up to the time of the failure 
and infer from those the physical node features and the testbed
architecture (connections and bandwidth between nodes and switches).  Let us call this the \textit{testbed configuration}.
We also take the last testbed snapshot created before the TEMP failure and extract the list of available 
nodes at the time.  We then process any \texttt{swapout} events between the time of the snapshot and the TEMP
failure, and add the released nodes to the available pool. This gives us the testbed state at the time of the TEMP failure.
We then combine all this information and generate the testbed snapshot in the format required by the \texttt{assign} algorithm.  
This inference process may result in an incorrect testbed state only if some of the available nodes become unavailable between
the last testbed snapshot before the TEMP failure and the failure itself. This can happen due to a hardware error or 
a manual reservation by the testbed staff, but such events are rare.
\end{enumerate}
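Step 3's reconstruction of the available node pool can be sketched as below. The event and snapshot representations are assumptions made for illustration, not DeterLab's actual formats.

```python
# Illustrative sketch of step 3: reconstructing the available node pool
# at the time of a TEMP failure from the last snapshot plus subsequent
# swapout events. The event format here is a hypothetical stand-in.

def available_at(snapshot_nodes, events, fail_time):
    """snapshot_nodes: nodes free in the last snapshot before the failure;
    events: (time, kind, released_nodes) tuples after that snapshot."""
    pool = set(snapshot_nodes)
    for time, kind, nodes in sorted(events):
        # swapouts between the snapshot and the failure free up nodes
        if kind == "swapout" and time < fail_time:
            pool |= set(nodes)
    return pool

snapshot = ["C3", "C4"]
events = [(10, "swapout", ["C5", "C6"]), (30, "swapout", ["A1"])]
print(sorted(available_at(snapshot, events, fail_time=20)))
# ['C3', 'C4', 'C5', 'C6']
```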

We were able to match 9,066 out of 11,176 TEMP failures in this manner -- they form the \textit{matched-failure} set that we analyze further. 
Figure \ref{seasonal} shows the count of TEMP failures per month in our data. There are definite seasonality effects, with peaks in April and 
November.
We focus only on demand and availability of general PC nodes, since these are requested by the majority of users.  
Only 1,679 TEMP errors, or 18.5\%, occur because of a true overload, meaning that fewer PCs were available than desired. 
This means that over 80\% of TEMP errors could \color{red}potentially\color{black} 
be reduced or eliminated by improving testbed software or by educating users how to minimize their use of constraints. 
To identify TESTBED errors we run both the \texttt{assign} and our \texttt{assign+} allocation algorithm, described in Section \ref{assign+}, 
on the remaining 7,387 pairs in the \textit{matched-failure} set. 

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in]{figs/errorspermonth}
\caption{TEMP failures per month}
\label{seasonal}
\end{center}
\end{figure}

\color{red}
We now discuss the reasons for fixed mappings in user topologies. In many cases fixed mappings are inserted not by a user but by 
the testbed software when a running instance is being modified, e.g., by adding or removing nodes. This enables the testbed to keep the 
currently allocated nodes associated with the instance and just drop a few (when removing nodes) or add a few more. However, 
if some of the allocated nodes become unresponsive, the entire resource allocation fails. We believe that this is an incorrect model, and that the 
testbed should fall back to the strategy of releasing all nodes and allocating from the entire available node pool. Another 
reason for fixed mappings arises when nodes of a given type differ based on their location, and a user prefers some 
locations over others. We argue that these cases would be better handled through node features or through creation of location-specific
node types, since fixed mappings allow users to select only one out of several possible node choices. 
\color{black}
We next modify the virtual topologies in the \textit{matched-failure} set to remove fixed mappings, because
they often seem to harm allocations, as evidenced by the high number of FIXED failures.
We find that \texttt{assign+} can successfully allocate resources in 1,392 cases, or 15.3\% of our \textit{matched-failure} set.
We further find that both \texttt{assign} and \texttt{assign+} succeed in 2,251 or 24.8\% of cases. 
This means that the original failure, recorded in the database, was likely a ``bad luck'' event for \texttt{assign}, due to its randomized search strategy. We 
explain the details of \texttt{assign} in Section \ref{ass}.
Finally, we find that in 456 cases, or 5\%, the allocation failed due to a spelling error in some switch names in the database. These
entries are used when testbed snapshots are created, and a spelling error leads to a disconnected testbed. We thus conclude that 
3,288 or 36.3\% of TEMP errors occur due to experimenters' constraints, 4,099 or 45.2\% occur due to testbed software and 
1,679 or 18.5\% occur due to true overload. There are thus three ways of addressing the allocation problem: (1) helping users understand 
and reduce the constraints on their topologies, (2) designing better resource allocation algorithms and (3) enforcing some 
fair sharing of resources.  We explore each of these strategies in the following sections. 

\textit{\textbf{Suggestion 3:} Testbeds should develop automated self-checking software that detects events such as spelling errors in the 
database records, real switch and node disconnections, etc., well before they lead to resource allocation failures}.

\section{Relaxing User Constraints}
\label{relaxing}

We now explore how much user constraints influence the allocability of instances on DeterLab.
We first match all the successfully allocated instances in our dataset with their virtual topology and the state of the empty testbed
as it existed at the time of their allocation. For each topology, 
we simulate the checks in the testbed mapping software for node type, OS and connectivity constraints on this empty testbed. 
We limit our checks to those 
nodes in the virtual topology that can be allocated on general PCs, and the testbed state only includes these PCs.
For each node we record the \textit{nodescore}, the percentage of the testbed that can satisfy this node's constraints.
For example, if a user asked for a node of type $A$ or $B$ with OS 1, and there are 30 nodes of type $A$, 30 of type $B$ 
and 20 of type $C$ in the testbed, with OS 1 running only on $A$ and $C$, the \textit{nodescore} for this node would be 
$30/80 = 0.375$ because it can only be allocated to nodes of type $A$. 
We then calculate the \textit{topscore} by averaging all the \textit{nodescores} in the virtual topology.  
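The worked example above can be reproduced with a short sketch. The helper names are ours, and the counts match the example rather than DeterLab's actual inventory.

```python
# Sketch of the nodescore/topscore computation from the example above:
# 30 type-A, 30 type-B and 20 type-C PCs; OS 1 runs only on A and C.
testbed = {"A": 30, "B": 30, "C": 20}
runs_os1 = {"A", "C"}

def nodescore(allowed_types, needs_os1):
    # fraction of the testbed that satisfies this node's constraints
    ok = [t for t in allowed_types if not needs_os1 or t in runs_os1]
    return sum(testbed[t] for t in ok) / sum(testbed.values())

def topscore(nodes):
    # average nodescore over all virtual nodes in the topology
    return sum(nodescore(t, o) for t, o in nodes) / len(nodes)

# Type A or B with OS 1 -> only type A qualifies: 30/80
print(nodescore({"A", "B"}, needs_os1=True))    # 0.375
# An unconstrained second node would score 1.0, so the topscore is:
print(topscore([({"A", "B"}, True), ({"A", "B", "C"}, False)]))  # 0.6875
```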

The red line in Figure \ref{scoretype} shows the cumulative distribution function (CDF) of the \textit{topscores}
of the original topologies: 30\% of topologies can allocate on less than 80\% of the testbed, 
20\% can allocate on less than half of the testbed and 10\% can allocate on less than 20\% of the testbed.
To identify the effect of the node type, OS and connectivity constraints on allocability
we modify the virtual topologies in the following ways: (1) \textbf{ALTTYPE:} We allow use of alternative node types that have similar or 
better hardware features than the user-specified node type; these are described in more detail in Section \ref{alttype};
(2)  \textbf{NOTYPE:} We completely remove the node type constraint; (3)  \textbf{NOOS-NOTYPE:} We remove both the node type and the OS constraints. 
The effect of these strategies on allocability is also shown in Figure \ref{scoretype}. Use of alternative types improves 
allocability, especially for those topologies that were previously severely restricted: 
now only 15\% of topologies allocate on less than half of the testbed and 
only 2\% allocate on less than 
20\% of the testbed. Removal of node type constraints has a profound effect: 
only 11\% of topologies now allocate on less than 80\% of the testbed, only 3\% on less than half of the testbed and
only 0.1\% on less than 20\% of the testbed. Finally, removing both node type and OS constraints leaves
only 0.3\% of topologies that allocate on less than 80\% of the testbed. 

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in]{figs/scoretype.pdf}
\caption{Topscore when we vary type restrictions}
\label{scoretype}
\end{center}
\end{figure}

We next explore the effect of: (1) \textbf{NOOS:} removing OS restrictions; (2)  \textbf{NOOS-ALTTYPE:} removing OS restrictions and 
using alternative node types. The effect of these strategies is shown in Figure \ref{scoreos}. Removal of OS constraints leads to 23\% of 
topologies that can allocate on less than 80\% of the testbed, 
17\% that can allocate on less than half of the testbed and 10\% that can allocate on less than 20\% of the testbed. If we add the use of alternative types,
21\% of 
topologies can allocate on less than 80\% of the testbed, 
12\% can allocate on less than half of the testbed and only 2\% can allocate on less than 20\% of the testbed. 

We do not explore how changing connectivity would influence allocability, because connectivity constraints only affect a small number of topologies
and lower allocability by a small value. This is reflected in the \textbf{NOOS-NOTYPE} line in Figure \ref{scoreos}, where it
departs from 1 to values of 0.9--1 for about 15\% of topologies.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in]{figs/scoreos.pdf}
\caption{Topscore when we vary OS restrictions}
\label{scoreos}
\end{center}
\end{figure}

\color{red}
Finally, we explore the effect of a virtual topology's effective size on resource allocation probability. Whenever this size exceeds the number of
available PCs in the testbed, the allocation will fail. Experimenters can usually scale their experiments up or down, but testbeds offer
no information about historical node availability that could help users weigh different size choices. 
For example, if an experimenter expects to use a testbed over a month to 
prepare a conference submission, knowing that, in the past month, 50 PCs were available on the testbed only 20\% of the time, while 30 
PCs were available 80\% of the time, would help them evaluate their options. 


\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in]{figs/available2011.pdf}
\caption{Available PCs in 2011}
\label{available2011}
\end{center}
\end{figure}

Figure \ref{available2011} shows the number of available PCs during 2011. This data is very bursty,
which means that past data may be a poor predictor of current availability. 
To quantify how well such predictions may work, we extract the historical availability of PCs
from all 7 years in our dataset, divided into 1-hour, 1-day, 1-week, 1-month and 3-month windows.
We then traverse all windows, using the cumulative probability distribution from the previous window to predict 
the availability of the same number of nodes in the current window. We assume that a prediction is accurate if the predicted
availability is higher than the actual value minus 5\%. For example, if 50 PCs were available 50\% of the time
in week 1, and are available 45\% or more of the time in week 2, we will say that the prediction was accurate. 
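The accuracy rule can be stated compactly as follows. This is a sketch: the representation of a window as a list of free-PC samples is our assumption.

```python
# Sketch of the prediction-accuracy rule above. A window is represented
# as a list of samples, each the number of free PCs at that moment.

def fraction_available(samples, n_pcs):
    # fraction of a window during which at least n_pcs PCs were free
    return sum(1 for s in samples if s >= n_pcs) / len(samples)

def prediction_accurate(prev_window, cur_window, n_pcs, slack=0.05):
    predicted = fraction_available(prev_window, n_pcs)
    actual = fraction_available(cur_window, n_pcs)
    # accurate if the prediction is no more than `slack` below the actual
    return predicted >= actual - slack

# Week 1: 50 PCs free in 50% of samples; week 2: 45% -> accurate.
week1 = [50] * 5 + [10] * 5
week2 = [50] * 9 + [10] * 11
print(prediction_accurate(week1, week2, n_pcs=50))  # True
```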

Figure \ref{accuracy} shows the prediction quality (how often our prediction is accurate) on the y-axis, 
for a given number of PCs on the x-axis.
We see that more than 80\% of hourly predictions are accurate for most PC values, which means that 
the testbed situation does not change drastically on an hourly basis. Prediction accuracy declines as the interval
grows larger and is generally U-shaped -- accuracy is high for very small or very large PC values and low
in between. Due to the bursty nature of PC availability, it rarely happens that the number of available PCs falls below 20--50 nodes, and it also 
rarely happens that there are more than 250--280 available nodes. Thus predictions based on prior availability
for these node counts often amount to ``available a high percentage of time'' for 20--50 nodes and ``never available'' for 
250--280 nodes, and are mostly correct. On the other hand, the availability of 100--200 PCs changes rapidly,
and our predictions are accurate only 50\% of the time. We conclude that the bursty nature of testbed usage prevents accurate availability predictions.
Instead, testbeds need to amend their allocation algorithms and policy mechanisms to improve the predictability of resource allocation.  

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in]{figs/accuracy.pdf}
\caption{Prediction accuracy}
\label{accuracy}
\end{center}
\end{figure}
\color{black}


This section has laid out strategies that users can deploy themselves to improve the allocability of their topologies. 
But how likely is this to happen, i.e., are
users flexible about their constraints? To answer this, we characterize the evolution of virtual topologies in our dataset by first pairing each failed allocation with the first following successful allocation in the same experiment and then comparing their topologies. 
We manage to pair 2,124 out of our 9,066 virtual topologies from the \textit{matched-failure} set.
In 956 of those pairs the topologies differ: in 639 cases a user has modified a node type or OS constraint, and in 322 cases a user has reduced the topology's size. 
We conclude that users naturally relax their constraints when faced with an allocation failure about half of the time. When we examine how long it takes a user to converge to a ``good'' set of constraints, we find that half of the users find the winning combination within one hour, 66\% within 4 hours, and 78\% within a day. But this distribution is heavy-tailed, with the tail going into year-long values, possibly due to users abandoning an experiment and returning to it much later.
\color{red}
Automated tools that identify and propose alternative constraints would shorten users' convergence to an allocable topology and improve the user experience. We believe that such an interactive dialogue with the user would work better than letting users specify how important certain constraints are to them, since it more actively engages the user and informs them about possible tradeoffs. 
\color{black}

\textit{\textbf{Suggestion 4:} Testbeds need tools that help users evaluate tradeoffs between different constraint sets and automatically suggest modifications that improve allocability. This could be done prior to the actual attempt to allocate resources. }


\section{Improving Resource Allocation}
\label{improving}

We now explain how the \texttt{assign} algorithm works and how we improve upon it in \texttt{assign+}. 

\subsection{\texttt{assign}}
\label{ass}

In \cite{assign} Ricci et al. propose and evaluate the \texttt{assign} algorithm as a solver for the testbed mapping problem. Because this problem is NP-hard, 
Ricci et al. propose to solve it using simulated annealing \cite{annealing} -- a heuristic that performs a cost-function-guided exploration of a solution space. 
Simulated annealing starts from a random solution and scores it using a custom \textit{cost function} that evaluates its quality. It then perturbs the solution using a \textit{generation function}
to create the next one. If this solution is better than the previous one, it is always accepted; otherwise it is accepted with some small probability, controlled by a \textit{temperature} parameter. This helps simulated annealing escape locally optimal solutions and find the global optimum. At the beginning of the search the temperature is set to a high value, leading to most solutions being accepted. Over time the temperature is lowered, following a custom \textit{cooling schedule}, making the algorithm converge to a single ``best'' solution. There is no guarantee that the algorithm will find the best solution, but it should find one that is much better than a random assignment and fairly close to the best one. Obviously, as the algorithm runs longer its chance of finding the global optimum increases, but so does the runtime. To guarantee time-bounded operation, \texttt{assign}'s runtime is limited, which may sometimes make it miss a possible solution. 

To condense the search space, Ricci et al. introduce the concept of \textit{pclasses} -- sets of nodes that have the same node types, features, network interfaces and switch connections. In Figure \ref{nmpillust} we identify four  \textit{pclasses}. Virtual nodes are then mapped to  \textit{pclasses}. 
The \texttt{assign} algorithm starts from the set of all \textit{pclasses} and precomputes for each virtual node a list of \textit{pclasses} that are acceptable candidates. It then moves all the virtual nodes into \textit{unassigned} list and, at each step, tries to map one node from this list to a \textit{pclass}. When all the nodes have been assigned, the algorithm tries in each step to remap one randomly selected virtual node to another \textit{pclass}. Each solution is scored by calculating a penalty for used interswitch bandwidth and for unwanted features. The actual scoring function is
quite complex, but it approximately adds up unwanted feature weights and fixed link penalties. 
A lower score denotes a better solution. In the end, \texttt{assign} selects the solution with the lowest score as the best one. 

\subsection{\texttt{assign+}}
\label{assign+}
\color{red}
In designing \texttt{assign+}, our main insight was to use expert knowledge of network testbed architecture to identify allocation strategies that minimize interswitch bandwidth. These strategies are deployed deterministically to generate candidate solutions, instead of exploring the entire space of possible allocations via simulated annealing, which significantly shortens the run time. We also recognized that allocating strongly connected node clusters in experiment topologies together preserves interswitch bandwidth and shortens the run time. In the long run these strategies also lead to a better distribution of instances over heterogeneous testbed resources.
\color{black}

Like \texttt{assign}, \texttt{assign+} generates  \textit{pclasses} and precomputes  for each virtual node a list of \textit{pclasses} that are acceptable candidates.  It then generates \textit{candidate lists}, aggregating virtual nodes that can be satisfied by the same candidate \textit{pclasses}. For example, in Figure \ref{nmpillust} \textit{n1} can be satisfied by \textit{pclass1} or \textit{pclass2},  \textit{n2} can be satisfied only by \textit{pclass2} because it requires three network interfaces, and \textit{n3}, \textit{n4} and \textit{n5} can be satisfied by \textit{pclass2}, \textit{pclass3} or \textit{pclass4}.  Each \textit{pclass} has a size which equals the number of currently available testbed nodes that belong to this class. 
Next, the program calls its \texttt{allocate} function five times, each time exploring a separate allocation strategy.

The main idea of the \texttt{allocate} function is to divide the virtual topology into several connected subgraphs and attempt to map each subgraph, or a portion of it, in one step if possible. Only if this fails does the function attempt to map individual virtual nodes. This reduces the number of allocation steps while minimizing interswitch bandwidth, because connected nodes are mapped in one step whenever possible. 

The \texttt{allocate} function first breaks the virtual topology into several connected partitions, attempting to minimize the number of cut edges. 
Our partitioning goal is to create many large partitions, preferably of similar sizes, where smaller partitions can be
subsets of larger ones. This gives us the flexibility to map these partitions to different-sized \textit{pclasses}. \color{red}
We achieve this goal by
traversing the topology from the edges to the center and forming parent-child relationships, so that nodes closer to the center become
parents of nodes farther away.
\color{black}

This is a known graph partitioning problem for which many solutions exist (e.g., \cite{kernighanlin}), but these either require the number of 
partitions to be known in advance -- whereas we want to keep this number flexible -- or are too complex for our needs. 
We instead employ the following heuristic. We start from the virtual nodes with the smallest degree, score them with the number 1, and initialize a round counter to 1. 
In each consecutive round, links directly connected to scored nodes are marked if the peer on the other side of the link is
either not yet scored or is scored with a higher number. The peer becomes a ``parent'' of the scored node if the scored node does not already have a parent. 
The process stops when all nodes in the virtual topology have been scored. We illustrate the scores for nodes in the virtual topology in Figure \ref{nmpillust}. 
Black nodes belong to one partition and white ones to the other. Node \textit{n2} is the parent of nodes \textit{n4} and \textit{n5}, and node \textit{n1} is the parent of node \textit{n3}.
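The scoring pass above can be sketched as follows. This is a minimal illustration in Python, not the testbed's code; the adjacency-dict representation and the function name are our own:

```python
def score_and_parent(topology):
    """topology: dict node -> set of neighbour nodes.
    Returns (score, parent): leaf-most nodes get score 1, scores grow
    toward the centre, and each node's first more-central peer becomes
    its parent."""
    degree = {n: len(peers) for n, peers in topology.items()}
    lo = min(degree.values())
    score = {n: 1 for n in topology if degree[n] == lo}  # smallest degree
    parent = {}
    rnd = 1
    while len(score) < len(topology):
        newly = {}
        for n, s in score.items():
            if s != rnd:
                continue                       # only this round's frontier
            for peer in topology[n]:
                if peer not in score:          # peer is closer to the centre
                    newly[peer] = rnd + 1
                if peer not in score or score[peer] > s:
                    parent.setdefault(n, peer) # first more-central peer wins
        if not newly:
            break                              # disconnected remainder
        score.update(newly)
        rnd += 1
    return score, parent
```

On a five-node chain a--b--c--d--e, the endpoints get score 1, their neighbors score 2, and the center score 3, with parents pointing toward the center, matching the parent-child relationships described above.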

Next, the \texttt{allocate} function traverses the candidate lists from the most to the least restricted, attempting to map each virtual node and, if possible, its children.  Let us call the virtual node that is currently being allocated the \textit{allocating} node.
The most restricted candidate list has the smallest number of \textit{pclasses}; in our example this is the list for node
\textit{n2}. The function calculates the number of virtual nodes that must be allocated to this list and the number of physical nodes available in it. If the first is larger than the second, 
the entire mapping fails. 
Otherwise, we calculate for each parent node in the candidate list two types of children pools: a \textit{minimum pool} and a \textit{maximum pool}. Both calculations 
include only those children that have not yet been allocated. 
The minimum pool relates to the candidate list and contains all children of the node that \underline{must} be allocated to this list.
The maximum pool relates to each \textit{pclass} in the candidate list and contains  
all children of the given parent node that \underline{can} be allocated to this \textit{pclass}. 
In our example, when we allocate \textit{n2}, its minimum pool would be empty because neither \textit{n4} nor \textit{n5} must be allocated to \textit{pclass2}, while its maximum pool for \textit{pclass2} would contain \textit{n4} and \textit{n5}.
The \texttt{allocate} function traverses each \textit{pclass} in the current candidate list in an order specific to the allocation strategy being explored, always from the most to the least desirable candidate. It first tries to allocate the allocating node together with its maximum pool. If no \textit{pclass} in the candidate list has sufficient resources, it tries the allocating node with its minimum pool. If this also fails, it tries to allocate the allocating node alone. If even this fails, the entire mapping fails.
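The three-step fallback can be sketched as follows. The function and parameter names are hypothetical; \texttt{pclasses} is assumed to be a list of (name, free node count) pairs already sorted in the strategy's preference order:

```python
def try_allocate(node, max_pool, min_pool, pclasses):
    """Try to place `node` together with its maximum pool, then its
    minimum pool, then alone. pclasses: list of (name, free_count) in
    preference order. Returns (pclass name, nodes placed) or None."""
    for pool in (max_pool, min_pool, []):      # most to least ambitious
        group = [node] + pool
        for name, free in pclasses:            # preference order
            if free >= len(group):
                return name, group             # place the whole group here
    return None                                # entire mapping fails
```

For example, placing \textit{n1} with maximum pool \{\textit{n4}, \textit{n5}\} against a \textit{pclass} with only 2 free nodes and one with 5 free nodes lands the whole group on the larger \textit{pclass}.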

There are five allocation strategies we pursue in the calls to the  \texttt{allocate} function: PART, SCORE, ISW, PREF and FRAG. \color{red}Each strategy uses 
knowledge of the network testbed architecture to generate candidate solutions intended to minimize interswitch bandwidth use. The success
of each strategy depends on the available resources and on the size and user-specified constraints of a given virtual topology.\color{black}
The first strategy -- PART -- minimizes partitions in the virtual topology by allocating \textit{pclasses} from largest to smallest size. 
The second -- SCORE -- minimizes the score of the allocation by allocating  \textit{pclasses} from those with the largest to those with the smallest score. 
We explore different ways to score a  \textit{pclass}, e.g., based on how many features it supports, based on how often it is requested, or a combination of both.
The next three allocation strategies score high those \textit{pclasses} that already host parents or children of the allocating node. The ISW strategy also scores high those 
\textit{pclasses} that have high-bandwidth interswitch links to other \textit{pclasses} hosting direct neighbors of the allocating node;
this only makes a difference when interswitch links have different capacities. 
ISW tries to minimize interswitch bandwidth by allocating \textit{pclasses} from the largest to the smallest score.
The PREF strategy also scores high those \textit{pclasses} that share a switch with other \textit{pclasses} hosting direct neighbors of the allocating node. 
The FRAG and PREF strategies further score high those \textit{pclasses} which host direct neighbors of the allocating node. 
The PREF strategy tries both to minimize interswitch bandwidth and to minimize partitions in the virtual topology, while the FRAG strategy tries to use the smallest number of \textit{pclasses}; both allocate from \textit{pclasses} with the largest to those with the smallest product of score and size.
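The per-strategy orderings reduce to sort keys over \textit{pclass} attributes. The sketch below is our reading of the description above; the dictionary fields are assumed, and for ISW the interswitch-bandwidth preference is assumed to be folded into the score field:

```python
# Higher key sorts first; "size" is the pclass's free node count and
# "score" is its strategy-specific score (assumed field names).
STRATEGY_KEY = {
    "PART":  lambda pc: pc["size"],               # largest pclass first
    "SCORE": lambda pc: pc["score"],              # highest score first
    "ISW":   lambda pc: pc["score"],              # score folds in interswitch bw
    "PREF":  lambda pc: pc["score"] * pc["size"], # product of score and size
    "FRAG":  lambda pc: pc["score"] * pc["size"],
}

def order_pclasses(pclasses, strategy):
    """Return pclasses from the most to the least desirable candidate."""
    return sorted(pclasses, key=STRATEGY_KEY[strategy], reverse=True)
```

Under PART a large low-score \textit{pclass} outranks a small high-score one, while under SCORE the ranking reverses; PREF and FRAG fall in between via the product.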

At the end, the \texttt{allocate} function records the candidate solution and then tries to further reduce the interswitch bandwidth cost by running the Kernighan--Lin graph partitioning algorithm \cite{kernighanlin} to exchange nodes between \textit{pclasses} where possible. Each exchange generates a new candidate solution; the algorithm stops when no further reduction of the interswitch bandwidth is possible. Each solution's score is the sum of the scores of all physical nodes in it. 
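A simplified greedy variant of this refinement pass is sketched below. It is not the full Kernighan--Lin algorithm (which uses gain-ordered tentative passes) but captures the behavior described here: keep swapping a pair of virtual nodes between \textit{pclasses} while the swap reduces interswitch bandwidth, recording each improvement as a candidate solution. The \texttt{isw\_bw} helper is an assumed callback computing the interswitch bandwidth of a placement:

```python
from itertools import combinations

def refine(placement, isw_bw):
    """placement: dict virtual node -> pclass. isw_bw(placement) returns
    the interswitch bandwidth. Returns all candidate solutions found,
    best last; stops when no swap reduces the bandwidth further."""
    solutions = [dict(placement)]
    improved = True
    while improved:
        improved = False
        for a, b in combinations(list(placement), 2):
            if placement[a] == placement[b]:
                continue                       # same pclass, nothing to swap
            placement[a], placement[b] = placement[b], placement[a]
            if isw_bw(placement) < isw_bw(solutions[-1]):
                solutions.append(dict(placement))  # new candidate solution
                improved = True
            else:                              # swap did not help; undo it
                placement[a], placement[b] = placement[b], placement[a]
    return solutions
```

Swapping preserves the node count in each \textit{pclass}, so capacity constraints established by \texttt{allocate} remain satisfied.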

After all calls to the  \texttt{allocate} function return, \texttt{assign+} chooses the best solution. This solution has the smallest interswitch bandwidth. If multiple such solutions exist, the one with the smallest score is selected. \color{red}Algorithms \ref{asalg} and \ref{asalg+} illustrate the designs and the differences between \texttt{assign} and \texttt{assign+}.
 
\begin{algorithm}
\caption{\texttt{assign} pseudocode. \label{asalg}}
\begin{algorithmic}[1]
\State generate pclasses
\State map each virtual node to candidate pclasses 
\State generate candidate lists for each virtual node
\Repeat
\State assign one virtual node to a pclass
\Until{$unassigned = \emptyset$}
\Repeat 
\State $solution$ = remap one virtual node to different pclass 
\State score $solution$; $solutions += solution$
\Until{sufficient iterations or average score low}
\State select the lowest scored solution
\end{algorithmic}
\end{algorithm}


\begin{algorithm}
\caption{\texttt{assign+} pseudocode. \label{asalg+}}
\begin{algorithmic}[1]
\State generate pclasses
\State map each virtual node to candidate pclasses 
\For{$strategy$ = (PART, SCORE, ISW, PREF, FRAG)} 
\State $solution$ = allocate($strategy$)
\State score $solution$; $solutions += solution$
\EndFor
\State select the lowest scored solution
\end{algorithmic}
\end{algorithm}

\color{black}

\subsection{Evaluation}

To compare the quality of found solutions and the runtime of  \texttt{assign} and \texttt{assign+}, we needed a testbed state and a set of resource allocation requests.
We reconstruct the state of the DeterLab testbed on January 1, 2011, using virtual topology and testbed state snapshot data from the filesystem. To make allocation challenging, we permanently remove 91 PC nodes from the available pool, leaving 255. While this may seem extreme, our analysis of testbed state 
over time indicates that this many or more PCs are often unavailable, either because nodes are reserved but not yet used or because of internal testbed development.
We seed the set of resource allocation requests with all successful and failed allocations on DeterLab in 2011. Each request contains the start and end time of the instance and its virtual topology file. For failed allocations, we generate their desired duration according to the duration distribution of successful allocations. Finally, we check that no overlapping instances belong to the same experiment; if any are found, we keep the first instance and remove the subsequent overlapping instances from the workload. 
We test this workload with both \texttt{assign} and \texttt{assign+} on an empty testbed and remove all instances (245 out of 18,729, or 1.3\%) that fail with both algorithms. 
We label this final simulation setup the ``2011 synthetic setup''. We then attempt to allocate all of the workload's instances and release them in the order dictated by their creation and end times, evolving the testbed state with each allocation and release. 
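The replay described above can be made concrete with a minimal event-driven loop. This is our own illustration, not the simulator's code; \texttt{allocate} and \texttt{release} are assumed callbacks that mutate the testbed state:

```python
import heapq

def replay(requests, allocate, release):
    """requests: list of (start, end, topo). Returns the failure count.
    Events are processed in time order; releases at time t are handled
    before allocations at time t so freed nodes are reusable."""
    ALLOC, RELEASE = 1, 0                  # release sorts first on ties
    events = [(start, ALLOC, i) for i, (start, _, _) in enumerate(requests)]
    heapq.heapify(events)
    failures = 0
    while events:
        t, kind, i = heapq.heappop(events)
        _, end, topo = requests[i]
        if kind == ALLOC:
            if allocate(topo):
                heapq.heappush(events, (end, RELEASE, i))  # schedule release
            else:
                failures += 1              # failed instances simply disappear
        else:
            release(topo)
    return failures
```

With a toy capacity model (e.g., a single pool of free PCs), an instance that arrives while the pool is exhausted counts as one failure, and the pool returns to its initial size once all instances are released.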
\begin{figure}[ht]
\begin{center}
\includegraphics[width=3in]{figs/errors.pdf}
\caption{Failure rates}
\label{err}
\end{center}
\end{figure}


Figure \ref{err} shows the allocation failure rate over time on this setup for both algorithms. Since \texttt{assign} employs a randomized search, 
we show the mean and standard deviation over 10 of its runs. 
We test several approaches to score calculation: (1) \texttt{assign+.1} uses an \texttt{assign}-like approach, where node types with more unwanted features are penalized more heavily; (2) \texttt{assign+.1m} is the same as \texttt{assign+.1} but with memory, so node types that are requested more often receive a higher penalty when allocated; and (3) \texttt{assign+.2m} scores nodes only by how often they are requested. 
In the case of \texttt{assign}, the failure rate starts small and increases to almost 6\% by the end of the simulation. Curves for the different flavors of \texttt{assign+} have a similar shape, but their failure rate is always below that of \texttt{assign}, reaching 4.7\% at the end. Overall, \texttt{assign+} produces 20\% fewer failed allocations than \texttt{assign}. The \texttt{assign}-like score calculation outperforms those where the penalty depends only on request popularity. In the rest of our evaluations we use only \texttt{assign+.1}, under the label \texttt{assign+}.
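The three scoring flavors can be summarized in a short sketch. The field names, and in particular the way \texttt{assign+.1m} combines the feature penalty with request popularity, are our assumptions from the description above, not the implemented formula:

```python
def node_type_score(node_type, flavour, wanted_features):
    """node_type: dict with a "features" set and a "request_count"
    (assumed fields). Higher score = less desirable to allocate."""
    unwanted = node_type["features"] - wanted_features   # set difference
    if flavour == "assign+.1":
        return len(unwanted)                   # penalize unwanted features
    if flavour == "assign+.1m":
        # assumed combination: feature penalty plus request popularity
        return len(unwanted) + node_type["request_count"]
    return node_type["request_count"]          # assign+.2m: popularity only
```

A node type with many features a request does not need thus scores high under \texttt{assign+.1}, steering such requests toward plainer hardware and preserving well-provisioned nodes.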

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in]{figs/runtime.pdf}
\caption{Runtime versus topology size}
\label{runtime}
\end{center}
\end{figure}

Figure \ref{runtime} shows the runtime of \texttt{assign} and \texttt{assign+} versus topology size, with the y-axis on a log scale. For both algorithms the runtime depends on the size and complexity of the virtual topology and on the number and diversity of available nodes on the testbed. We show the average runtime for each topology size, with error bars showing one standard deviation around the mean. \texttt{assign+} is consistently about 10 times faster than \texttt{assign}, thanks to its deterministic exploration of the search space. 

 




Figure \ref{bw} shows the interswitch bandwidth allocated by \texttt{assign} and \texttt{assign+} versus the topology size. We group allocations into bins based on the virtual topology size, with a step of 10, and show the mean, with error bars showing one standard deviation. Here too, \texttt{assign+} significantly outperforms \texttt{assign} at every topology size.  
On average, \texttt{assign+} allocates only 23\% of the interswitch bandwidth allocated by \texttt{assign}.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3.5in]{figs/bw.pdf}
\caption{Interswitch bandwidth vs topology size}
\label{bw}
\end{center}
\end{figure}

We further test the scalability of both algorithms by using Brite \cite{brite} to generate realistic, Internet-like topologies of larger sizes. These topologies are more complex and more connected than those found in most testbed experiments \cite{Hermenier2012how} and thus challenge allocation algorithms. We generate 10 topologies at each of the sizes 10, 20, 50, 100, 200, 500, 1,000, 2,000, 5,000 and 10,000 nodes. For each node we request the \texttt{pcvm} type, making it possible to allocate large topologies on our limited testbed architecture. Each allocation request runs on an empty testbed. 
Figure \ref{scale} shows means and standard deviations for the runtime of \texttt{assign} and \texttt{assign+} versus the topology size, both on a log scale. \texttt{assign+} again outperforms \texttt{assign}, with about 10 times shorter runtime. \texttt{assign} further fails to find a solution for 10,000-node topologies, while \texttt{assign+} finds one. 
\texttt{assign+} allocates interswitch bandwidth only in the 5,000- and 10,000-node cases, while \texttt{assign} allocates it for much smaller topologies. We believe that the mechanisms limiting \texttt{assign}'s runtime on large topologies interfere with its ability to find a good solution that minimizes the interswitch bandwidth.


\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\columnwidth]{figs/scale.pdf}
\caption{Scalability}
\label{scale}
\end{center}
\end{figure}

\subsection{Alternative Types}
\label{alttype}

We now assume that users request specific node types because they need some well-provisioned resource such as a large disk or a fast CPU. 
We explore whether we can further improve resource allocation by expanding the experimenter's node type constraint to equivalent or better hardware. We consider only disk, CPU and memory specifications, because network interface constraints are redundant in user requests -- they are
inferred from the virtual topology. We start from DeterLab's node types in January 2011, shown in Table \ref{nodetypes}, and identify for each type the \textit{alternative types} that have the
same or better features.

Figure \ref{erram} shows, as the red line, the effect of using alternative types in \texttt{assign+}. The total failure rate improves by 0.5\% over the basic \texttt{assign+}. 
Overall, \texttt{assign+} with alternative types produces 30\% fewer failed allocations than \texttt{assign}. 
We note that, should this strategy be deployed, testbeds would need to provide means for users to opt out if they cannot accept alternative node types, e.g., for repeatability reasons.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in]{figs/errorsam.pdf}
\caption{Failure rates for \texttt{assign+} when using alternative types and migration}
\label{erram}
\end{center}
\end{figure}

\subsection{Queueing}

One approach to handling the remaining allocation failures is to queue failed instances and attempt to allocate them whenever an instance releases its nodes. The technology to support this does not currently exist in Emulab testbeds, but would be simple to develop. The closest existing mechanism is ``batch'' experiments, which are handed off to the testbed to allocate and deallocate. Only 10\% of experiments in today's DeterLab are batch experiments. 
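The queueing discipline just described can be sketched as a retry pass over waiting requests, run after every release. This is our own illustration under an assumed FIFO policy; \texttt{allocate} is an assumed callback:

```python
from collections import deque

def on_release(queue, allocate):
    """Retry each queued request once after a release; requests that
    still do not fit go back to the queue to wait for the next release.
    Returns the requests admitted this pass."""
    admitted = []
    for _ in range(len(queue)):        # one retry per waiting request
        req = queue.popleft()
        if allocate(req):
            admitted.append(req)
        else:
            queue.append(req)          # still cannot fit; keep waiting
    return admitted
```

Requests that fit are admitted immediately, while oversized ones keep waiting, which is why queueing bounds failures but can delay an instance unpredictably.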

We evaluate the queueing approach on our ``2011 synthetic setup.''  Note that this approach increases the overall demand for the testbed's resources because failed instances do not disappear, as in our previous simulations, but instead obtain resources at some point and potentially block future instances. The green line in Figure \ref{errorsp} shows the failure rate of \texttt{assign+} with queueing, migration and alternative types. This rate is much higher than that 
of \texttt{assign+} with migration and alternative types because of the increased instance density. 
Half of the failed instances get allocated within one hour, 80\% within 4 hours and 98\% within a day. The worst delay is 6.7 days. Looking at relative delay, 4\% of instances are delayed by only up to 1\% of their original duration, 22\% by at most 10\%, 62\% by at most 100\%, and the worst delay extends the instance's duration 595 times! We conclude that queueing would be a good option to have, but users may still wait unpredictably long for resources.


\section{Changing The Allocation Policy}
\label{policy}

The approaches in the previous section change the resource allocation strategy but do not change the allocation policy on testbeds, which allows users to hold resources for arbitrarily long times without interruption. In this section we investigate whether changes in resource allocation policy would further improve allocation success. We examine two such changes: 
\begin{enumerate}
\item Experiment migration -- where running instances can be migrated to other resources in the testbed (that still satisfy user-specified requirements) to make space for the allocating instance.
\item Fair sharing -- where running instances can be paused if they have been idle for a long time, so their resources can be borrowed by an allocating instance. 
\end{enumerate}
\color{red}We acknowledge that either of these changes would represent a major shift in today's network testbeds' philosophy and use policy. Yet such a shift may be necessary as testbeds become more popular and the demand on their resources exceeds capacity. Our work helps evaluate the potential benefits of such policy changes.
Further, the above changes are potentially disruptive for instances that rely on the constancy of the hardware allocated to them for the duration of their lifetime. Ideally, a testbed would have data about the purpose of each instance, letting it evaluate the instance's sensitivity to hardware changes. Unfortunately, no such data is collected by current Emulab testbeds. 

We assume that users would have mechanisms to opt out of these features, i.e., they could mark their experiment as ``do not migrate'' or ``do not pause''. We further assume that, if the benefits of these policy changes prove significant, testbeds would develop mechanisms to seamlessly migrate or pause and restart instances with minimal disruption to users. Finally, we emphasize that instances could be migrated and/or paused only during idle times, when allocated machines exhibit no significant CPU, disk or network activity. 
Emulab software detects such idle machines every 10 minutes and records the event in the database, overwriting the previous status of the same physical node. We have collected these reports for one year to investigate the extent of idle time in instances. 
Figure \ref{idle} shows the distribution of total idle time in four classes of instances: those that last $<$12 hours, 12--24 hours, 1--7 days, and $>$7 days. All instances have significant idle time, and long-lived instances are often idle for more than a week! This leads us to believe that our proposed policy modifications would apply to many instances without disrupting their existing dynamics. 

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in]{figs/idle.pdf}
\caption{Idle times for instances of different duration}
\label{idle}
\end{center}
\end{figure}
\color{black}
\subsection{Migration}

We now explore whether we can further improve resource allocation through experiment migration. This involves stopping some allocated instances and moving their configuration and data to other testbed machines to ``make space'' for the allocating instance.
While techniques exist in distributed systems to migrate running processes to another physical node \cite{migds}, we propose a much lighter-weight migration that moves only instances that are idle at the time. The simplest implementation would be to image all disks of the experimental machines and load those images onto new machines. 

We test migration on our ``2011 synthetic setup''. If a regular resource allocation fails, we identify each instance that holds node types requested by the allocating instance as a \textit{migration candidate}. We then order the candidates from smallest to largest and attempt to migrate each in turn. To do so, we reclaim the resources of the migration candidate, try to allocate the allocating instance, and then try to reallocate the migration candidate. Allocation succeeds only if both of these actions succeed; otherwise we restore the old state and try the next candidate. We record a failure for the allocating instance only if all migrations fail. In a real deployment this search can easily be simulated, without disturbing any instances, until a successful combination is found. 
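The migration search can be sketched against a toy testbed model. The class, names, and the counts-per-type representation are our own simplification, not the deployed system; the point is the release/allocate/reallocate sequence with rollback on failure:

```python
class Testbed:
    """Toy model: each node type has a count of free nodes."""
    def __init__(self, free):
        self.free = dict(free)        # type -> available nodes
        self.placed = {}              # instance name -> type it occupies
    def allocate(self, name, size, types):
        for t in types:               # any acceptable type, in order
            if self.free.get(t, 0) >= size:
                self.free[t] -= size
                self.placed[name] = t
                return True
        return False
    def release(self, name, size):
        self.free[self.placed.pop(name)] += size

def allocate_with_migration(tb, name, size, types, running):
    """running: list of (name, size, acceptable types) of live instances."""
    if tb.allocate(name, size, types):
        return True
    # candidates hold a node type the failed request needs; smallest first
    candidates = [r for r in running if set(r[2]) & set(types)]
    for c_name, c_size, c_types in sorted(candidates, key=lambda r: r[1]):
        snapshot = (dict(tb.free), dict(tb.placed))
        tb.release(c_name, c_size)    # reclaim the candidate's resources
        if tb.allocate(name, size, types) and tb.allocate(c_name, c_size, c_types):
            return True               # both fit: migration made space
        tb.free, tb.placed = snapshot # undo and try the next candidate
    return False
```

For example, a request that accepts only one node type can displace a running instance onto an alternative type it also accepts, succeeding where plain allocation fails.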

Figure \ref{erram} shows the failure rate when using migration in \texttt{assign+}, as the blue line, and when combining migration and alternative types, as the purple line. Migration lowers the error rate by 0.5\% compared with alternative types only. Adding alternative types to migration has a minor effect. 
Overall, \texttt{assign+} with migration produces 40\% fewer failed allocations than the basic \texttt{assign}.

\textit{\textbf{Suggestion 5:} Testbeds need better state-saving mechanisms that go beyond manual disk imaging.}
 

\subsection{Fair-Sharing}
\label{fairsharing}

When demand for a shared resource exceeds the supply, the usual approach is to enforce fair sharing and penalize big users. 
Traditional fair sharing, where every user (or, in the testbed case, every project) receives a fair share of resources, works well when: (1)
users have roughly similar needs for the resource, or (2) the demand does not heavily depend on resource allocation success and jobs are scheduled in fixed time slots. In the first case, one can implement quotas on use, giving each user the same amount of credit and replenishing it on a periodic basis. 
In the second case, one can implement fair queueing (e.g., \cite{csfq}), allocating jobs from big users whenever there are leftover resources from the small ones. 
Unfortunately, neither of these approaches works well for testbeds. 

Many measures of testbed usage exhibit heavy-tail properties that violate the first assumption about users having similar needs. For example, the distributions of instance size and duration are heavy-tailed (Figures \ref{size} and \ref{dur}): most instances are short and small, but a few very large or very long instances dominate the distribution. If we assume that fairness should be measured at the project level, the heavy-tail property manifests again: the distribution of node-hours per project (Figure \ref{nh}), obtained by summing the products of size and duration in hours over all of a project's instances, is also heavy-tailed. The second assumption is violated because most fair-sharing algorithms are designed for fixed-size jobs, while testbed instances have a wide range of durations that are not known in advance. 

\begin{figure}[ht]
\centering
\subfigure[Instance size]{
   \includegraphics[width =\columnwidth] {figs/heavysize}
   \label{size}
 }
 \subfigure[Instance duration]{
   \includegraphics[width =\columnwidth] {figs/heavydur}
   \label{dur}
 }
 \subfigure[Node-hours/hour project lifetime]{
   \includegraphics[width =\columnwidth] {figs/heavyactive}
   \label{nh}
 }
\label{heavytail}
\caption{Heavy tails in instance size, duration and project's use of the testbed}
\end{figure}

We now quantify the extent of unfairness on the DeterLab testbed. We define a project as ``unfair'' if it uses more than its fair share of PCs in a week.  While we focus on PC use, similar definitions can be
devised for specific node types. We choose a week-long interval to 
unify the occurrence of heavy use due to any combination of large instances, long instances or many parallel instances in a project. 
A fair share of resources is defined as the total number of possible node-hours in a week, taking into account available and allocated PCs,
divided by the total number of projects active in that week. A project can be classified as unfair one week and fair another.
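The weekly fairness check reduces to a one-line computation. This sketch is our own illustration of the definition above, with assumed input shapes:

```python
HOURS_PER_WEEK = 7 * 24

def unfair_projects(usage, total_pcs):
    """usage: dict project -> node-hours used this week; total_pcs is
    the number of PCs (available plus allocated). A project is unfair
    if it exceeds the per-project share of possible node-hours."""
    fair_share = total_pcs * HOURS_PER_WEEK / len(usage)
    return {p for p, nh in usage.items() if nh > fair_share}
```

With 10 PCs and three active projects, the fair share is $10 \times 168 / 3 = 560$ node-hours, so a project that consumed 1,000 node-hours that week is classified as unfair.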

Figure \ref{fairuse} shows the percentage of total possible node-hours used by 
unfair and by fair projects in each week of 2011. There are 114 projects active during this time,
each of which has been fair at some point during the year; 27 projects have also been unfair.
There were a total of 3,126 TEMP failures in 2011, averaging
58.7 failures per project when it is unfair and 26.1 failures per project when it is fair.
While an unfair project has more than double the failures of a fair one, this is expected:
unfair projects request allocations more frequently,
and their allocations are larger and last longer.
 
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in]{figs/hogs.pdf}
\caption{Usage of fair and unfair projects in 2011.}
\label{fairuse}
\end{center}
\end{figure}


Penalizing big testbed users is difficult because heavy use in research projects seems to be correlated with publications.
We manually classify research projects on DeterLab as ``outcome projects'' if they have 
published one or more peer-reviewed publications, or MS or
PhD theses, that acknowledge use of DeterLab. We find 48 outcome projects and 104 no-outcome projects. 
We then define an instance as big if it uses 20 nodes or more, and as long if it lasts one day or longer. Only 9\% of instances are big and 5\% are long.
Outcome projects have on average 99 big and 33 long instances, while no-outcome projects have 10 big and 8 long instances. 
Thus big instances, long instances and heavy testbed use seem to be correlated with research publications and good publicity for the testbed, and are therefore very valuable to testbed owners.
It would be unwise to alienate these users or discourage heavy testbed use. Instead, we want to gently tip the scale in favor of small users when possible. 

We identify three design goals for fairness policy on testbeds: 
\begin{enumerate}
\item \textbf{Predictability.} 
Any fairness approach must allow users to accurately anticipate when their resources may be reclaimed.
\item  \textbf{User control.} 
Actions taken to penalize a user must
depend solely on that user's actions, and testbeds should offer opt-out mechanisms.
\item  \textbf{On-demand.} 
Resources should be reclaimed only when there is an instance whose allocation fails, and whose needs can be satisfied by these resources.
\end{enumerate}

One approach to tipping the scale would be to reclaim some resources from unfair projects until their use is reduced to a fair share. 
This would violate all three design goals, because unfair status changes depending on how many other projects are active, and freed
resources could sit unused on the testbed. Another approach would be to reclaim resources on demand 
from the instance that has used the most node-hours. Again, this leads to unpredictable behavior from the user's point of view, and it may
interrupt short-running but large instances that are difficult to allocate again. We opt for a strategy that reclaims resources on demand 
from the longest-running instance, as long as it has been running for more than one day. This lets users identify in advance which of their instances
may be reclaimed. We propose two possible approaches to fair allocation: Take-a-Break and Borrow-and-Return.
 


\subsection{Take-a-Break}

In the Take-a-Break approach, when a resource allocation fails, we identify each instance that holds node types requested by the allocating instance as a \textit{break candidate}. We then select the candidate that has been running the longest and, if its age exceeds one day, release its resources to the allocating instance. The break candidate is queued for allocation immediately, and an attempt is made to allocate it after any resource release. 
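The candidate selection rule can be sketched as follows; the function name and the tuple representation of running instances are hypothetical:

```python
ONE_DAY = 24 * 3600  # seconds

def pick_break_candidate(now, running, needed_types):
    """running: list of (instance, start_time, node types held).
    Returns the longest-running instance that holds a needed node type,
    but only if it has been running for more than one day."""
    candidates = [(inst, t0) for inst, t0, types in running
                  if types & needed_types]     # holds a type we need
    if not candidates:
        return None
    inst, t0 = min(candidates, key=lambda c: c[1])  # earliest start
    return inst if now - t0 > ONE_DAY else None
```

Restricting the break to instances older than one day is what makes the policy predictable: users know that only their long-running instances can be interrupted.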

Figure \ref{errorsp} shows the rate of TEMP failures for the Take-a-Break approach, with migration and alternative node types, as the blue line. The failure rate is strikingly low, reaching 1.5\% by the end of our simulation even though the density of allocated instances is increased. Overall, \texttt{assign+} with Take-a-Break produces 74\% fewer failed allocations than \texttt{assign}. This comes at a price: the duration of 177 instances is prolonged. Half of these instances experience delays of up to 1 hour, 79\% up to 4 hours, and 97\% up to 1 day. Only six instances are delayed by more than one day, the longest delay being 1.67 days. Looking at relative delay, 72\% of instances are delayed by only up to 1\% of their original duration, 94\% by at most 10\%, and the worst delay doubles the instance's duration.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in]{figs/errorsp.pdf}
\caption{Error rates for \texttt{assign+} when using Queueing, Take-a-Break and Borrow-and-Return approaches. }
\label{errorsp}
\end{center}
\end{figure}

We now verify whether we have tipped the scale in favor of fair projects.
We first apply the fairness calculation to the allocations produced by running \texttt{assign+} on our ``2011 synthetic setup'' and obtain usage patterns
similar to those seen in the real dataset,
except that failures are reduced because we removed overlapping instances during workload creation. There are 
14.5 failures per project when it is unfair, and 3.98 when it is fair. 
We then apply the same calculation to the allocations produced by running \texttt{assign+} with Take-a-Break on our ``2011 synthetic setup'',
counting both allocation failures and forcing an instance to take a break as ``failures''.
We find 18.6 failures per project when it is unfair, and 1.62 when it is fair.
Thus unfair projects are slightly penalized, while the failure rate of fair projects is more than halved.


\subsection{Borrow-and-Return}

While the Take-a-Break approach helps fair projects obtain more resources, it forces instances whose resources have been reclaimed to wait for an unpredictable duration. The Borrow-and-Return approach amends this. Its design is the same as Take-a-Break, but resources are only ``borrowed'' from long-running instances for 4 hours, after which they are returned to their original owner. Users receiving these borrowed nodes would be alerted that the nodes will be reclaimed at a certain time. Instances interrupted this way are queued and allocated as soon as possible. 

Figure \ref{errorsp} shows the rate of TEMP failures for the Borrow-and-Return approach, with migration and alternative node types, as the purple line. The failure rate is similar to that of Take-a-Break. The duration of 291 instances is prolonged: 62\% of these instances experience delays of up to 1 hour, 84\% up to 4 hours, and 99\% up to 1 day. Only three instances are delayed by more than one day, the longest delay being 1.67 days. Looking at relative delay, 22\% of instances are delayed by up to 1\% of their original duration, 77\% by at most 10\%, 98\% by at most 100\%, and the worst delay extends the instance's duration 5 times. This approach finds a good middle ground between heavily penalizing long instances, as Take-a-Break does, and doing nothing, as Queueing does. The fairness of Borrow-and-Return is slightly worse than that of Take-a-Break, with an average of 18.6 failures for unfair projects and 2.7 for fair projects.

\section{Conclusions}

Network testbeds are extensively used today, both for research and for teaching, but their resource allocation algorithms and policies have not evolved much since their creation. This paper examines the causes of resource allocation failures in Emulab testbeds and finds that up to 60\% of failures could be avoided by: (1) providing better information to users about the cost of, and the alternatives to, their topology constraints, and (2) better resource allocation strategies. The remaining 40\% of failures can be halved by applying a gentle fair-sharing strategy such as Take-a-Break or Borrow-and-Return. The main challenge in designing fair testbed allocation policies lies in achieving fairness while remaining sensitive to users' need for predictability and while nurturing the heavy users that bring the most value to the testbed. Our investigation is just the first of many needed to reconcile these conflicting but important goals. 

\section{Acknowledgments}

This material is based upon work supported by the National Science Foundation under Grant No. 1049758. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
\bibliography{references}
\end{document}
