
\section{Introduction}
\label{sec:intro}
The last decade brought a major
change in experimentation practices in several areas of computer
science, as researchers shifted from using simulation and theory to
using network testbeds. Over this time many diverse testbeds have been
built throughout the world, and the trend of building better, bigger and
more diverse testbeds continues. Two large efforts in this direction in
the past few years stand out. In 2008, NSF launched a program to build a
nationwide GENI testbed \cite{geni} --- a novel type of testbed
distributed across many organizations, achieving unparalleled scale and
heterogeneity, and offering its users control not only over end hosts
but over all network elements. The same year, DARPA launched a program
to build the National Cyber Range \cite{ncr}, a large-scale testbed with
extensive support for experiment design, control, debugging and
repeatability.

While investment in both public and private
testbeds has grown significantly over the last decade,
there have been no empirical studies of how testbeds are
used for research and education.
Testbeds offer a more realistic evaluation platform
than simulation, as they deploy real software and hardware.
Testbeds traditionally support two major types of experiments:
(a) controlled and repeatable experiments,
at multiple levels of abstraction,
which greatly improve the researcher's
understanding of complex large-scale distributed
systems and networks; and
(b) ``in the wild'' trials of experimental protocols and
services that are deployed on top of the Internet so
that they can be exposed to real-world conditions
through interactions with the network, end hosts, and humans.

Testbed-based evaluations
have a steep learning curve and pose
a number of challenges:
(a) testbed-based evaluation typically
requires building prototypes,
rather than models as in simulation;
(b) experiment misconfiguration and failure
are frequent in the
networked and distributed testbed environment,
and are very difficult to track down;
%Testbeds introduce their own artifacts, as shown in
%\cite{sonia1, sonia2}.
(c) due to
multiplexing of resources, users may not be able to acquire enough resources when they
need them;
(d) valid and systematic testbed-based evaluation
requires engaging appropriate models
for the topology and traffic aspects of the
experiment.
For example, deriving an appropriate traffic model
involves identifying the right source of data,
scaling it for the testbed,
accommodating the constraints of the heterogeneous testbed environment,
and orchestrating it all together. This large time investment is needed
before any useful work can be done.
%%% we are not compelting with these fields.
% ns-2 changed network simulation, but before that simualtion was also
% very hard to do
%
%Whereas in mathematical modeling, network data analysis, simulation,
% comparitively there is negligible overhead for "setup",
%or with simulation where event generation and traffic orchestration is
%trivial.

Networking and distributed systems researchers, testbed operators,
and public and private funders have little insight into
the growing diversity and sophistication of testbed-based evaluation,
or into what impact (if any) the challenges listed above
have on research and education over the long run.
This paper is the first attempt to compare testbed usage
characteristics across five diverse testbeds.
Our emphasis is on a systematic and rigorous empirical analysis,
complemented by a traditional user survey, to understand
how testbeds are used.
We consider
experiment patterns in terms of size and duration,
project patterns as collections of users, experiments, and publications,
and lastly testbed usage patterns in terms of
utilization and research topics.

In this paper we analyze testbed logs from several public research testbeds ---
DETER \cite{deter}, Utah Emulab \cite{emulab}, Schooner \cite{schooner},
PlanetLab \cite{planetlab} and StarBED \cite{starbed} --- and extract
information about users' and research groups' experimentation practices.
The PlanetLab data provides insight into experimentation characteristics
``in the wild'', typically deployment-type studies,
whereas the DETER, Emulab, Schooner, and StarBED data provide insight into
experimentation characteristics in controlled environments.
We supplement this analysis with a survey of users' experience with testbeds.

Our investigation reveals the following characteristics of
 testbed-based research:
\begin{itemize} \item \textbf{Finding 1:} Users and projects interact
with testbeds over long periods of time.
In controlled environments, such as DETER and Emulab,
most experiments last a few hours,
engage fewer than ten nodes, and are organized in simple topologies.
However, when we analyze the data further, we find
that many small experiments gradually evolve into longer, larger and more
complex experiments, indicating
the iterative nature of experimental research.
\item \textbf{Finding 2:} Projects that
eventually produce a measurable outcome --- a publication --- exhibit more
activity than other projects.
\item \textbf{Finding 3:} Testbed use
increases over time due to growth in membership and active projects,
and so does experiment duration, but experiment sizes decrease slightly, possibly due to contention.
\item \textbf{Finding
4:} Distributions of many features that describe testbed usage, such as
experiment duration, experiment size, and project activity, are
heavy-tailed. They span a wide range of values, with most points
clustered at small values and a few points residing in the long tail.
\item \textbf{Finding 5:} In any given quarter, around 20\% of active projects
are ``big'' users, consuming between 60\% and 80\% of
the total resources consumed in that quarter.
%In spite of an increased use of
%testbeds in classes, more than 75\% usage goes towards research
%projects, and more than half of the usage goes toward those research
%projects that eventually produce a publication.
\item \textbf{Finding
6:} Despite the tools and services offered by testbed operators,
testbeds are still hard to use.
% we are not compelting with theory and simulation!!
Nearly 80\% of our surveyed users would like improved
tools for experiment management, repeatability, and
fidelity.
\end{itemize}

We believe that our findings may help inform testbed funding,
development and management efforts.

\subsection{Related Work}

Our work is the first to investigate experimentation patterns of testbed
users. We found one publication, \cite{meshnet}, whose authors explore
testbed usage, but from the operator's standpoint. They mine usage
statistics of testbed hardware, including time spent solving hardware-
and software-related problems, MAC-layer traffic, machine boot times,
and so forth, with the goal of improving testbed operations.
We are also aware of a paper in preparation \cite{PlanetLabUsage} on
PlanetLab usage that is similar to our work in the sense that it also
investigates experimentation patterns of PlanetLab users. However,
\cite{PlanetLabUsage} focuses only on PlanetLab, often asking questions
that are PlanetLab specific, while we expand our analysis to five very
different testbeds, looking to correlate and contrast experimentation
practices.


