\section{Usage Characteristics} \label{sec:character}

In this section we investigate testbed usage characteristics within
experiments, projects and user activity. We seek to determine and quantify experimentation
patterns across these dimensions and also to investigate if they differ across research and class
and across research outcome vs no-outcome categories. Our main finding is that most
experiments in DETER and Utah Emulab are short and small, and many use simple topologies (see Section~\ref{sec:diversity} for this last claim). This trend
holds across research and class, and across research outcome and no-outcome
categories.
But these
experiments are crucial for testing new ideas, and they lead to larger, longer
and more complex experiments, which in turn lead to measurable
research outcomes. We also show that research projects with outcomes
exhibit more activity than no-outcome projects in several dimensions: they
create more experiment definitions and instances, and they have more
project members. The same trend holds for users of such projects -- they are more active
than users of no-outcome projects.

\subsection{Experiment Characteristics}
\label{sec:exp_char}

\begin{figure}
\begin{center}
	\subfigure[Experiment Size]{
	\includegraphics[width=3.3in,type=pdf,ext=.pdf,read=.pdf]
	{figs/exp.size.gnu}
	\label{fig:expsize}
	}
	\subfigure[Experiment Duration]{
	\includegraphics[width=3.3in,type=pdf,ext=.pdf,read=.pdf]
	{figs/exp.dur.gnu}
	\label{fig:expdur}
	}
	\caption{The distribution of number of nodes and duration of
an experiment on the DETER, Utah Emulab, PlanetLab and StarBED testbeds.
Experiment size distributions of the DETER and Utah Emulab testbeds are
analyzed in both the research and class categories.}
\end{center} \end{figure}


In this section we investigate characteristics of experiment
instances in terms of the number of nodes
and the duration of the experiment.
The data indicates that most interactions with the testbed
are small -- experiment instances use few nodes --
and short -- experiment instances run for a few hours.
We also analyzed this data across DETER research outcome and no-outcome
categories but found no significant difference in distributions, which
shows that outcome projects also have a significant number of small and short experiments.
In Sections~\ref{sec:proj_char} and \ref{sec:diversity}, we present
data showing that even though each
individual interaction with the testbed may be small and short,
some of these experiments produce measurable
outcomes, and many lead to
larger and longer experiments.

Figure~\ref{fig:expsize} shows the distribution of the number of nodes
used in an experiment instance. We plot results for the research and
class categories on the DETER and Utah Emulab testbeds, and for all
experiments on the PlanetLab and StarBED testbeds. While we have a count
of class slices (projects) on PlanetLab,
we have no way to identify these slices in the dataset.
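As an illustration of how such a distribution is built, the sketch below bins instance node counts into the size categories used in the figure. The bin edges follow the figure; the sample sizes are hypothetical.

```python
from collections import Counter

def size_bin(n):
    """Map an instance's node count to the size bins used in the figure."""
    for lo, hi in [(1, 1), (2, 5), (6, 10), (11, 20), (21, 50)]:
        if lo <= n <= hi:
            return str(lo) if lo == hi else f"{lo}-{hi}"
    return ">50"  # the long tail

# Hypothetical instance sizes (number of nodes).
sizes = [3, 3, 8, 1, 15, 200]
print(Counter(size_bin(s) for s in sizes))
```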

We observe that DETER's and Utah Emulab's research usage distributions are
similar: $\sim$35\% of the experiment instances have 2--5 nodes, $\sim$20\% have
6--10 nodes, and smaller percentages fall in the 1, 11--20 and 21--50 node bins. There is a long
tail in the distribution (the x-axis is not to scale), with 15\% of experiment instances
accounting for 85\% of the value range.
On
the PlanetLab testbed, on the other hand, $\sim$45\% of experiment instances
requested between 101--800 nodes, with a peak at 201--500 nodes, where 25\% of instances fall. This
distribution is not heavy-tailed. We believe there are several reasons
for this difference. First, a large group of projects on PlanetLab are
deployment studies and need at-scale experimentation in a realistic
Internet environment. On the other hand most of the DETER and Utah Emulab
experiments are used for hypothesis testing and exploratory research,
which are usually done at a much smaller scale. Second, PlanetLab is
significantly larger than DETER and Utah Emulab testbeds in terms of
available virtual nodes. While it has 552 sites, each site may host more than one
PlanetLab node, and each node may host many slivers and thus become
part of many slices.
Third, we believe contention for resources is an important
factor on smaller testbeds and discourages larger-scale
experimentation. Interestingly, StarBED's experiments are more evenly distributed
across the different experiment size bins, with $\sim$55\% of the experiments
using 1--50 nodes, and $\sim$50\% using 50--500 nodes. The
StarBED testbed is primarily used for hypothesis testing and exploratory
research, but it differs in four significant ways from DETER and
Utah Emulab: (1) it has 920 nodes, compared to 320 nodes at the DETER
testbed and 480 nodes at the Emulab testbed; (2) it has a small number of
active users -- 62 (not taking recycling of user names into account) --
compared to several hundred for Utah
Emulab and DETER; (3) it hosts several emulation tools specialized for various research domains, which eases experiment setup at larger scale; and (4) it requires batched experimentation, which
prompts users to experiment only with mature technologies and to
perform major development and idea testing elsewhere.
We believe all these factors combined allow StarBED
users to acquire resources and manage much larger experiments than what
we observe on the DETER and the Utah Emulab testbeds.

In the DETER class experiments, 90\% of the experiments use fewer than
five nodes. In the Utah Emulab class experiments, 5\% use a single node,
40\% use 2--5 nodes, 31\% use 6--10 nodes, and 17\% use 11--20
nodes. The DETER testbed has many more class users than Utah Emulab,
as seen in Table~\ref{tab:cleaning}, and class exercises usually require
each student to create their own experiment, creating contention
for resources. The DETER testbed also
limits the number of testbed nodes a class can request, to achieve fair availability for
each class project and to spare some nodes for research projects. Both factors likely
drive the size of class experiments on DETER to lower values, making them smaller
than on Utah Emulab.


Figure~\ref{fig:expdur} shows the distribution of the
 duration of each experiment instance across
 four testbeds, again analyzing the research
  and class experiments on the DETER and the Utah Emulab
   testbed separately.
There are several interesting characteristics.
First, we observe that the DETER and Utah Emulab
 experiment duration distributions
 are very similar for both research and class projects.
Most instances (56--71\%) last between 1 hour and 1 day,
 with $\sim$10\% lasting between 10--30 minutes,
 and $\sim$10\% lasting between 30 minutes and 1 hour.
 Fewer than 10\% of the experiments last
  longer than one day. This may be the outcome of
  idle-swap policies on these testbeds that force reclaiming
  of resources after 4 hours of idle time (1 hour for class experiments
  on DETER), but it may also be an indicator that
  most experimentation on these testbeds occurs interactively
  and is thus limited by a human's wake time.

Second, on the PlanetLab testbed, surprisingly, 46\% of
the experiments last less than 1 day, even though a deployment
study usually takes longer to produce meaningful results. Such short durations
may indicate many trial-and-error deployments.
The rest of the experiments are distributed over longer durations: 28\% from 1 day to 1 week,
15\% from 1 week to 1 month, and 9\% from 1 month to 3 months.
Only 2\% of the experiments last longer than 3 months.
The PlanetLab dataset records slice events only once a day,
 hence the minimum duration we can record for PlanetLab is 24 hours.

Third, on the StarBED testbed,
most experiment instances last longer than on the DETER
 and Utah Emulab testbeds, with 31\% lasting less than one
  week, and 70\% lasting less than a month.
All resources are reserved prior to experimentation on StarBED,
and experiments are batched. Thus,
 even though StarBED is used for hypothesis testing and
 exploratory research, like DETER and Utah Emulab,
 users may overestimate experiment duration or may
 anticipate several batch runs over one experiment instance.

Finally, comparing the distributions of instance duration across
 all the testbeds, we see that short instances are most frequent, but there are
 always a small number of instances that last a long time.
 All distributions are heavy-tailed, showing a 98/2 split in the case of DETER and
 Utah Emulab, an 88/12 split for PlanetLab, and a 78/22 split for StarBED.
We believe there are two reasons for short instance durations:
(i) some instances
  can produce useful results in a short time, such as compiling a new disk image or verifying that it loads;
(ii) short experiments are used to quickly test new ideas that are then further developed and investigated in longer and larger experiments. The researcher
  then iterates through many small and short experiments
   before configuring a larger experiment, which is typically
    executed once or twice for a longer duration.
We show this usage pattern in Section~\ref{sec:proj_char}.
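The heavy-tail splits quoted above can be computed from raw instance durations as the share of total duration contributed by the longest few percent of instances. A minimal sketch, using hypothetical durations:

```python
def top_share(durations, frac=0.02):
    """Fraction of total duration contributed by the longest `frac`
    of instances (e.g. frac=0.02 probes a 98/2-style split)."""
    ranked = sorted(durations, reverse=True)
    k = max(1, round(frac * len(ranked)))
    return sum(ranked[:k]) / sum(ranked)

# Hypothetical durations in hours: many short runs, two very long ones.
durations = [1] * 98 + [500, 500]
print(top_share(durations))  # top 2% of instances contribute ~91% of time
```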


\subsection{User Characteristics}
\label{sec:user_char}

In this section we investigate characteristics of active research users on
the DETER and Utah Emulab testbeds (we have no data about PlanetLab and
StarBED users), across several usage metrics:
 the number of experiment definitions per user,
  the number of experiment instances per user, and
 the number of projects in which a user is a member. We analyzed class usage as well,
 but there were no interesting trends to report.
Unfortunately, we do not have enough information
 in the remaining datasets to analyze these metrics.

We summarize all user metrics for space reasons. Looking at the number of experiment
definitions per user, the distributions look very similar for DETER and Utah Emulab research users.
The majority of users create up to 50 experiment definitions, with
20\% creating one definition and 60\% creating fewer than five. The distribution is heavy-tailed, with 6\% of users
contributing 94\% of the distribution.
Comparing the same metric across research outcome/no-outcome categories, users in
outcome categories exhibit more activity: 15\% of outcome users
create one experiment definition, compared with 32\% of no-outcome users, while
close to 40\% of outcome users create 6--20 definitions, compared to only 20\% of
no-outcome users.
These statistics indicate
 that research users in projects that eventually produce an outcome find a good fit between
 their research needs and the testbed, which keeps them more motivated
  and productive.

Distributions of the number of instances per user are also very similar for the DETER and Utah Emulab testbeds, and are also heavy-tailed, with 91/9 and 93/7 splits respectively. Around 15\% of users create only one instance, and 75\% create fewer than 50 instances. Comparing across outcome and no-outcome categories,
we again
find that research users with a measurable outcome
create more instances.
We observed that 67\% of the outcome research users instantiate more than
twenty experiment instances, compared to only 44\% of the research users
 in no-outcome projects.
We also observed the distributions of the number of projects per research
 user. Between 89\% and 91\% of the
research users on both testbeds belong to only one project,
and 7--8\% belong to two projects. This distribution does not change across outcome/no-outcome categories.


\subsection{Project Characteristics}
\label{sec:proj_char}

\begin{figure}
\begin{center}
\includegraphics[width=3.3in,type=pdf,ext=.pdf,read=.pdf]
{figs/proj.active.cmp.gnu}
\caption{The difference in the experimentation time
 spent by projects with a measurable outcome and without
  a measurable outcome is statistically significant.}
\label{fig:projactive}
\end{center}
\end{figure}

In this section we investigate
 characteristics of projects across DETER and Utah Emulab.
For space reasons we present only the most interesting data, which
shows that research outcome projects are more active than no-outcome
projects. Coupled with the fact that the distributions of experiment instance
size and duration do not differ across these categories, we conclude
that outcome projects simply interact with the testbed more often and for
longer durations to produce a measurable outcome.

%
%Research projects are typically small in terms of number of
% users.
%Both testbeds have 62-64\% project with only 2--5 members
% and 16-18\% projects wih 6--10 members.
%We observe ~44\% of all DETER classes
% have 21-50 members and 24\% have 51-100 members.
%DETER also has a few class projects with more than 200 members.
%The Utah Emulab testbed class projects
% are smaller, where ~70\% have less than 20 members.


Through the research outcome projects, we attempt to assess
 the utility of a testbed in the research-to-publication cycle.
The data from the DETER testbed
 indicates that the time elapsed between the start of the project
 and the first publication ranges
  from 3 months to 5.2 years, with a median value of 1 year.


Next we want to statistically confirm
 that projects that report a measurable outcome
  spend more time on the testbed.
We first calculate the \textit{experiment\_time} of a project,
defined as the sum of the durations of all experiment instances associated
with the project (refer to Figure~\ref{fig:termspic}).
Hence, if a project ran two
experiment instances fully in parallel on a testbed, its experiment\_time would
be twice that of a single instance. Experiment\_time does not
depend on experiment size. Mathematically, we define:

\begin{equation*}
    experiment\textrm{\_}time=\sum_{k=1}^{N_{inst}}Duration_k
\end{equation*}
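In code, experiment\_time is simply a per-project sum of instance durations. A minimal sketch, with hypothetical project names and durations:

```python
from collections import defaultdict

# Hypothetical instance records: (project, duration in hours).
instances = [
    ("proj-a", 2.0), ("proj-a", 5.0), ("proj-b", 0.5),
    ("proj-a", 2.0),  # an instance run in parallel still adds its duration
]

experiment_time = defaultdict(float)
for project, duration in instances:
    experiment_time[project] += duration  # independent of experiment size

print(dict(experiment_time))  # → {'proj-a': 9.0, 'proj-b': 0.5}
```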

In Figure~\ref{fig:projactive} we show the distribution of
experiment\_time for the DETER testbed research projects
 with and without a measurable outcome.
Not surprisingly, the projects with an outcome spend
 significantly more time on the testbed.
Only 10\% of the measurable-outcome research projects
 have an experiment\_time of less than one week,
  compared to $\sim$48\% of the projects with no outcome.
To statistically confirm that the experiment\_time
 of outcome and no-outcome projects differs,
  we perform the Kruskal-Wallis test, a rank-based one-way analysis of variance.
We consider the null hypothesis $H_0$: there is no relation
 between the experiment\_time and the project outcome.
If $H_0$ is true, then the variance of experiment\_time
 estimated across all projects pooled together should be approximately
  the same as the variance estimated within
  the two groups of outcome and no-outcome projects.
The test computes a statistic $F$ from the ratio of these variance estimates;
if $F$ is significantly greater than 1,
 the test is statistically significant,
  and we can conclude that the distributions
   for the two groups of projects
    differ from each other
     and reject the hypothesis $H_0$.
The test also produces a $p$-value, the probability
 of observing a result at least this extreme assuming $H_0$ is true.
For the data in Figure~\ref{fig:projactive},
 the $F$ ratio is 12, indicating a strong
relation between the experiment\_time and the
 project outcome, and the $p$-value is $7.4 \times 10^{-10}$,
  indicating a very low probability of $H_0$ being
   correct.
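The rank-based statistic underlying the test can be sketched as follows. This is a minimal implementation without tie correction (real data with tied durations needs the correction, e.g. via SciPy's scipy.stats.kruskal), and the sample values are hypothetical:

```python
def kruskal_h(*groups):
    """Kruskal-Wallis statistic (no tie correction): a rank-based
    one-way analysis of variance over two or more samples."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return 12.0 / (n * (n + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)

# Hypothetical experiment_time samples (hours) for the two groups.
outcome = [200, 350, 90, 500, 120]
no_outcome = [5, 12, 30, 8]
print(kruskal_h(outcome, no_outcome))  # → 6.0
```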

\begin{figure} \begin{center}
\includegraphics[width=3.3in,type=pdf,ext=.pdf,read=.pdf]
{figs/proj.agevsactive.res.gnu}
\caption{The effect of project age on experiment\_time}
\label{fig:projagevsactive} \end{center} \end{figure}


%However, we also need to verify that experiment\_time
% in the outcome projects is not biased
%  by projects that have a longer presence
%   on the testbed.
We further investigated whether the activity of the outcome projects is biased by the
length of their presence on the testbed.
Figure~\ref{fig:projagevsactive} plots experiment\_time in
 log scale on the y-axis against the age of the project (measured from the project's creation) on the DETER testbed, also in log scale, on the x-axis.
 Most outcome
projects cluster in the upper right corner of the graph, that is, they have
both been around for a long time (more than 3 years) and they have been
very active (more than 1 week of experiment\_time).
Some outcome projects
 have been around for only a year or a little less but were quite active
 (more than 1 week of experiment\_time).
We also observe one outcome project with
 only 1 hour of experiment\_time and three outcome projects
  with between 1 day and 1 week.



\subsection{Testbed Characteristics}
In this section we investigate how cumulative patterns of usage change
 over time and across the testbeds.
We primarily seek to answer two questions:
 (i) how are testbeds used in terms of physical resources, and
(ii) what type of research is done on the testbeds?
Due to space constraints, we only present the results from the DETER testbed.

In Figure~\ref{fig:usagenp}, the x-axis
 indicates the quarters DETER has been functional
  and the left-side y-axis
  indicates the maximum and average number of nodes used in the testbed at any point
  in that quarter. We count quarters from January 1.
The right-side y-axis indicates the number of
 nodes per project.
We observe that the maximum number of nodes used by
 a project on the testbed tracks the size of the
  testbed.
Hence every time there are new machines
 added to the testbed,
  the maximum utilization of the testbed increases.
Additionally, although not shown here,
 we find the total number of active projects each
  quarter is also steadily increasing.
The average utilization increases slowly over
 time, but the average number of nodes per project
  (shown on right-side y-axis) does not show the same trend,
  indicating that utilization grows primarily
   due to increase in active projects on the testbed.

We further investigate how the quarter's usage is split between projects.
We define a \textit{big project} as a project whose node-hour usage\footnote{Node-hour usage for an
entity (experiment, user, project or testbed) is calculated as number of
allocated nodes for that entity times the allocation's duration
expressed in hours.}  in a quarter is more than twice the value of the
equal share for that quarter. The equal share is calculated as total
number of node-hours used in a quarter divided by the total number of
active projects. Around 20\% of projects are big in each quarter,
but they account for 60--80\% of that quarter's usage. While there are big class projects, they are responsible for only roughly 5\% of a quarter's usage.
For space
reasons we omit Utah Emulab's and PlanetLab's graphs and discussion, but they show the
same presence of big projects.
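The node-hour bookkeeping behind the big-project definition can be sketched as follows, with hypothetical project names and allocations:

```python
from collections import defaultdict

# Hypothetical allocations in one quarter: (project, nodes, hours).
allocations = [
    ("p1", 100, 50), ("p1", 20, 10), ("p2", 10, 20), ("p3", 5, 10),
]

node_hours = defaultdict(float)
for project, nodes, hours in allocations:
    node_hours[project] += nodes * hours  # node-hour usage per project

equal_share = sum(node_hours.values()) / len(node_hours)
# A "big" project uses more than twice the equal share.
big = sorted(p for p, nh in node_hours.items() if nh > 2 * equal_share)
print(big)  # → ['p1']
```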

\begin{figure}
\begin{center}
\includegraphics[width=3.3in,type=pdf,ext=.pdf,read=.pdf]{figs/nodes.gnu}
\caption{The utilization of nodes on the
 DETER testbed during each quarter.}
\label{fig:usagenp}
\end{center}
\end{figure}


Additionally, we observe that most of the testbed resources
 are consumed by research projects that eventually produce an outcome.
Figure~\ref{fig:usagedis} shows the percentage of total usage in a quarter
that can be attributed to research projects, and to research projects with outcome.
We observe that research projects
 account for 70--100\% of usage on DETER (and similarly on Utah Emulab, not shown here), and
 most of that usage is generated by outcome research projects.
 Note that outcome consumption drops off near the end because the
outcomes of the research done in the past few years will only become
visible in the future, since it may take several years for a project
 to publish (Section~\ref{sec:proj_char}).


\begin{figure}\begin{center}
\includegraphics[width=3.3in,type=pdf,ext=.pdf,read=.pdf]
{figs/usage.outcome.gnu}
\caption{A significant fraction of the testbed resources is utilized by
 research projects that result in a measurable outcome.}
\label{fig:usagedis}
\end{center}
\end{figure}


Lastly, we investigate how experiment size and duration change over time.
Figure~\ref{fig:trendsize} shows the median, 80th-percentile and maximum
size of an experiment in a quarter, for DETER's research
projects. While the maximum size increases slightly,
the median and even the 80th percentile decrease very slowly over time.
The values for these metrics are much
lower than the maximum, which fits with our observation that
several big projects are responsible for most of the testbed's usage in a
quarter. We believe the experiment size decrease occurs due to increased contention
for resources. We notice the same trend on Utah Emulab. The PlanetLab and StarBED
data cover too short a period and fluctuate too much for us to discern a trend over time.
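The per-quarter summary statistics plotted here can be computed along these lines. The percentile estimate is deliberately crude (nearest-rank style) and the records are hypothetical:

```python
from collections import defaultdict
from statistics import median

def quarterly_stats(records):
    """Per-quarter (median, crude 80th percentile, max) of experiment size."""
    by_quarter = defaultdict(list)
    for quarter, size in records:
        by_quarter[quarter].append(size)
    stats = {}
    for quarter, sizes in by_quarter.items():
        sizes.sort()
        p80 = sizes[min(len(sizes) - 1, int(0.8 * len(sizes)))]
        stats[quarter] = (median(sizes), p80, max(sizes))
    return stats

# Hypothetical (quarter, size) records.
print(quarterly_stats([(1, 3), (1, 5), (1, 40), (2, 2), (2, 4), (2, 120)]))
```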

Figure~\ref{fig:trendsduration} shows the median, 80th-percentile
and maximum duration of an experiment in a quarter. Here we again see that the maximum
value is much higher than the median and the 80th percentile.
All measures increase slightly over time, and we notice the same trend
on Utah Emulab.
We believe this increase in interaction duration
shows that testbeds are becoming more useful to researchers, perhaps due
to the addition of new tools and resources, or perhaps because users
have become more skilled.

\begin{figure*}
\begin{center}
	\subfigure[Experiment Size]{
	\includegraphics[width=3.3in,type=pdf,ext=.pdf,read=.pdf]
	{figs/exp.size.trend.gnu}
	\label{fig:trendsize}
	}
	\subfigure[Duration]{
	\includegraphics[width=3.3in,type=pdf,ext=.pdf,read=.pdf]
	{figs/exp.dur.trend.gnu}
	\label{fig:trendsduration}
	}
	\caption{Experiment size and duration trends in research experiments
	on DETER }
\end{center}
\end{figure*}

To assess what types of research are done on testbeds,
we next look at usage per research category on DETER over time, shown in
Figure~\ref{catusage}.
Here we can clearly see how security research
trends change over time. For example, the DDoS category's usage of
DETER increases over the years, due to rich tools for such experiments on
DETER and because DDoS scenarios lend themselves to testbed
experimentation.
The network architecture activity increases drastically
 in 2007, which coincides with the rise of NSF's FIND program,
  which funds research on
future Internet design.
Activity in malware-related research has been present since 2005 but
increases in mid-2008, which coincides with the time when malware
became a hot topic for security research.
On the other hand, worm
activity lasted from 2004 to 2007 and then died out, as it did in research
venues at the same time.
Surprisingly, research on privacy topics lasted only from mid-2006 to
mid-2007.
Testbed research activity has increased since 2009, probably
 due to the start of both the NSF GENI program and the DARPA NCR program to build large-scale
testbeds.
Activity related to teaching and classes has also increased since 2009,
 primarily due to DETER's active participation in
 education-related activities. There is very
little usage in the infrastructure, botnet and evaluation areas.


\begin{figure}[htbp] \begin{center}
\includegraphics[width=4in,type=pdf,ext=.pdf,read=.pdf]
{figs/cat.usage.gnu}
\caption{Usage per project category in node-hours per quarter in DETER}
\label{catusage} \end{center} \end{figure}

%\begin{figure*}[htbp] %\begin{center}
%\includegraphics[width=6in,type=pdf,ext=.pdf,read=.pdf]{figs/cat.usage.
%emu.gnu} %\caption{Usage per project category in node-hours in Emulab}
%\label{catusage} %\end{center} %\end{figure*}

%Finally, we investigate how the experiments
% use the nodes through analyzing node idleness
%  information for all allocated nodes in the DETER dataset. We only have
%  data about node idleness from September 2010 to April 2011.
%As discussed in Section~\ref{sec:terms}, these nodes belong
% to an active experiment on the DETER testbed and each nodes
% reports activity along several dimensions.
%First, 57\% of the records contain an idle node. There are
% 30\% of active records with  some network activity,
% 10\% with some CPU activity,
% and 3\% with some terminal activity.
% These figures add up to more than 1 since a node can
%have more than one type of activity at a time.
%%Network activity dominates, which
%%is expected on a network testbed where
%%nodes should be primarily used to
%%generate or consume network traffic.

%We further investigated the causes of node idleness by calculating
% the percentage of time when all nodes were idle, for each experiment during these 6.5 months
%Around 40\% of experiments had no idle time, but
%the remaining 60\% had idleness ranging from 10--100\%
%To identify what contributed to idleness in an experiment,
% we investigated the idleness periods during an experiment instance's duration.
%There were 34\% instances
%with idle time in the middle, which means that some activity
% occurred after that idle time.
%There were 45\% instances with
% idle time at the end. The values of idle time peaked at 1 hour and 4
%hours, which is the threshold when DETER swaps out idle class and
%research experiments respectively.
%There were 10\% of experiments that
%were idle less than 10 minutes and between 10 and 30 minutes between a
%swapout.

%We also investigated whether users allocate more
% nodes than they need, that is,
% whether there are nodes in experiments that
%are never used.
%All allocated nodes were used by 99\% of the experiments.

%From this analysis we conclude that experiment idleness occurs often as
%a consequence of interactive and iterative experimentation mode, that is,
% a human manually
%drives the experiment, and then steps out for lunch, meetings,
% or an appointment.
%
% \iffalse
%to sleep overnight. {\color{red} Testbed mechanisms that would let
%researchers plan ahead and develop rich experimental scenarios would
%help reduce idleness in testbeds.}

%%%% This is summarized a text %%%%%
%\begin{table}[htdp] \caption{Node activity} \begin{center}
%\begin{tabular}{|c|c|} \hline Activity & Percentage \\ \hline Idle &
%57\% \\ Network & 30\% \\ CPU & 10\\ Interactive & 3 \\ \hline
%\end{tabular} \end{center} \label{idle} \end{table}% .
%\fi

%

{\bf Summary:} In this section we
 observed several experiment, project and user patterns.
Projects and users usually become active within a month of
their inception on the testbed, but some may take a long time (months to
years) to activate. Projects and users interact with a testbed over a
long time period (several months to several years). Interactions are
usually brief (1 hour to 1 day) and numerous (more than 20), interleaved
with long pauses.  Most experiments have a moderate size (10 nodes or
less) and simple topologies. Research projects with a
 measurable outcome have more members and generate more experiment definitions and instances. They also consume
a large portion of the testbed resources. While the use of testbeds in classes is
becoming very popular, a very large portion of testbed resources is still consumed by research users.
Testbed usage increases over time due to a rise in the number of active projects, but
experiment sizes show a slightly decreasing trend, indicating contention for resources.
Experiment durations increase slightly over time, which we attribute to an increase in users' experimentation skills and to more tools for testbed experimentation.


