\section{Data Sets} \label{sec:data}

We analyze usage datasets from five testbeds, as shown in Table~\ref{tab:testbed_data}.
The datasets vary in detail and in which statistics are missing.
Some data is absent because testbed operators could not share it due to privacy concerns, while other data is absent because
  the testbed does not collect and store statistics in the listed category.
In this section, we discuss the data offered by each
testbed, how it maps to the terminology introduced in Section~\ref{sec:terms},
 and how we cleaned and processed the data for the subsequent analysis
  and characterizations discussed in Section~\ref{sec:character}.

\begin{table*}[t]
\begin{small}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Category & Content & DETER & UEmu & Schooner & PLab & $\bigstar$Bed \\
& & (02/04-05/11) & (01/02-05/11) & (10/03-11/10) & (07/08-04/10) & (11/08-05/11)\\
\hline
Users & Identifier (UID) & x & x (anon) & x (anon) & & x (anon) \\
& Affiliation & x & x & x & & \\
& City, State, Country & x & x & x & & \\ \hline
Projects & Identifier (PID) & x & x (anon) & x (anon) & & \\
& Membership (UID to PID mapping) & x & x & x & & \\
& Role of each member & x & x & x & & \\ \hline
Exp Events & Identifier (EID) & x & x & & x & x\\
& EID to PID mapping & x & x & x & x &\\
& EID to UID mapping & x & x &  &  & \\
& Action (allocation/release) & x & x &  & x & x \\
& Number of nodes  & x & x &  & x & x\\
& List of nodes & x &  &  &  &  \\
\hline
Publications & Citation & x & x  & x &  & \\
& PID & x &  &  &  &  \\ \hline
\end{tabular}
\end{center}
\end{small}
\caption{The datasets from five testbeds and the available detail within each dataset.}
\label{tab:testbed_data}
\end{table*}

Utah Emulab \cite{emulab}, DETER \cite{deter}, and Schooner \cite{schooner}
are testbeds built on the Emulab technology. Their users gain
exclusive access to physical nodes, which they can organize into
topologies via a Web interface. Experimental traffic rarely
leaves the testbed facility and usually does not mix with
external traffic. Resources are allocated on demand, on a first-come, first-served
basis, and released explicitly by the user after experimentation.
All three testbeds enforce idle-swap policies,
reclaiming nodes from experiments whose nodes have all been idle for a specified period.

The primary Emulab installation is run by the Flux
Group, part of the School of Computing at the University of Utah,
 and is referenced in the fourth column of Table~\ref{tab:testbed_data}.
 The Utah Emulab testbed has around 600 nodes.
We have data about Utah Emulab's users, their project membership, and experimental events. User and project names and affiliations are anonymized, and we
 only have a high-level categorization of projects into internal, research, or class.

 The DETER testbed is also an Emulab-based testbed,
 designed primarily to support research and development of
 cyber-security technologies. It has around 400 nodes and is referenced in the
 third column of Table~\ref{tab:testbed_data}. Here we have complete, non-anonymized
 data about users, their project membership and activities, as well as project descriptions and
 detailed categories. Since user data is not anonymized, we can link it to outcomes and
 then link outcomes to project identifiers.

The Schooner testbed is also built on the Emulab technology,
 with a special focus on enabling empirical research on common industry platforms.
Schooner has around 100 nodes and is represented in column five of Table~\ref{tab:testbed_data}.
We have data about Schooner's users and their project membership, and only a binary indication of whether each has ever been active. We do not have data about experimental events. User names
are anonymized, but their affiliations and project names are not. Projects are categorized into
several research categories, class projects, and internal projects.

The PlanetLab testbed \cite{planetlab} is a geographically distributed platform for
deploying, evaluating, and accessing planetary-scale network
services.
PlanetLab users acquire a
slice, which is a collection of virtual machines (VMs) spread
across participating remote sites hosted by different organizations. There are
552 participating sites today.
Slices run concurrently on PlanetLab,
 acting as network-wide containers that isolate services from
each other.
An instantiation of a slice on a particular node is
called a sliver. A slice can allocate resources on a different number of nodes each time
it is instantiated. We map each unique slice both into one experiment definition and into one project;
each slice instantiation maps into an experiment instance, and the number of
slivers into the size of that instance.
The PlanetLab dataset is represented as column six in Table~\ref{tab:testbed_data}.
Events in the dataset indicate the unique slice ID, time of slice instantiation or resource release,
and number of slivers for that slice.
The dataset, however, does not include a mapping from slice identifiers
 to project descriptions, which limits the types of
  PlanetLab usage characterizations discussed in Section~\ref{sec:character}.
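This slice-to-experiment mapping can be sketched as follows. This is an illustrative reconstruction, not the actual processing code, and the field names (\texttt{slice\_id}, \texttt{n\_slivers}, and so on) are our own, not the dataset's real schema.

```python
def map_slice_records(records):
    """Sketch of our PlanetLab mapping: each unique slice becomes one
    experiment definition and one project; each instantiation becomes an
    experiment instance whose size is the slice's sliver count."""
    events = []
    for rec in records:
        events.append({
            "definition": rec["slice_id"],  # slice -> experiment definition
            "project": rec["slice_id"],     # slice -> project (1:1)
            "time": rec["time"],
            "action": rec["action"],        # instantiation or resource release
            "size": rec["n_slivers"],       # slivers -> instance size
        })
    return events
```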

The StarBED testbed \cite{starbed} is a general-purpose network testbed
 operated by the National Institute of Information and Communications Technology in Japan.
 StarBED has developed testbed control and experiment management tools,
 called SpringOS, which build the desired experiment topology and automatically
 drive the experiment following steps pre-defined by the user.
On the StarBED testbed,
 users obtain exclusive access to physical nodes, similar to Emulab,
 but allocation of resources and organization of nodes into
user-specified topologies require manual user action. Resources are reserved
by users ahead of time and held for the duration of the reservation. Experiments
are often batched and run by StarBED software, rather than by users.
The last column in Table~\ref{tab:testbed_data} describes
the dataset we received from StarBED.
It records the times of reservations, their duration, and the number of nodes, along with the
 user identifier; however, user names are anonymized, there is no
 unique project or experiment identifier, and user names are recycled each
 year. We map StarBED reservations into experiment instances, and link them to one
 default experiment definition, a default user, and a default project.
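As a minimal sketch of this default mapping (field names are hypothetical, not StarBED's actual schema):

```python
DEFAULT = "default"

def map_reservation(start, end, n_nodes):
    """Map a StarBED reservation to an experiment instance. Lacking
    project and experiment identifiers (and with user names recycled
    yearly), every instance links to a single default definition,
    user, and project."""
    return {
        "definition": DEFAULT,
        "project": DEFAULT,
        "user": DEFAULT,
        "start": start,
        "end": end,
        "duration": end - start,  # reservation length, same unit as inputs
        "size": n_nodes,          # number of reserved nodes
    }
```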

%Our goal during the study was to
% sample a diverse set of network testbeds to understand
% current trends in usage characteristics.
%As discussed in Section~\ref{sec:intro},
% insight into current network testbed usage trends will
%  provide guidance into building the next-generation of network
%  testbed and allow understanding the requirements of the research
%  community.
%As we pointed out earlier, the datasets we received from the five testbeds had
% varying level of detail since each network testbed
%  records different types of usage data.
% Additionally, due to privacy reasons
% most testbeds could not share data that would identify their users and
%projects, or these identifiers were anonymized.
%The anonymization hence did not all us to
% categorize the success or outcome of
%  such anonymized projects.

\subsection{From Data to Working Set}

For each received dataset, we extract the relationships
 discussed in Section~\ref{sec:terms}. We first
  present the unprocessed statistics for each
  testbed and then derive the relationships that can
be supported by the data.

\paragraph{DETER Dataset:}
This dataset covers the period from the start of the testbed in February
 2004 to April 2011.
 There are 235 approved
  projects and 2,347 users, as shown in
   the first column of Table~\ref{tab:cleaning}.
From this set we first identified 247 orphan users.
%Orphans can be created in multiple
%(a) it belongs to a prinicpal investigator of a project that
% does not get approved. There are \emph{X} such accounts.
%(b) students who sign up for a testbed account
% in an attempt to join an approval pending project
%  or non existent project. There are \emph{X} such orphaned
% accounts.
%(c) class-specific user accounts that are created in surplus
% at the start of a semester to accomodate the students.
% If the class enrollment is less than the number of
%  class-specific user accounts, the unused accounts
%are marked as orphaned.
%There are 56 such class specific orphaned accounts.

We then identify and filter out all internal projects and
 users, as discussed in Section~\ref{sec:terms}.
There are 12 internal projects with 73 users.
The remaining 2,027 users and 223 projects are then
 classified as active or inactive.
A project is classified as active if it has had
 at least one manipulation during its lifetime.
Our working dataset consists of the 167 active projects
 and the 1,506 active users.
 In Section~\ref{sec:character} we briefly
 discuss the characteristics of the remaining
  56 inactive projects to identify {\it early}
   and {\it stale} projects.
The working set is then classified into
research and class projects based on
 detailed project descriptions provided by project heads.
 Each of these categories is further divided into outcome and no-outcome groups.
 We further classify research projects into several
 subcategories based on the research fields they
 investigate, and we link them to measurable outcomes.
Some outcomes were reported to us by users; others we found
through Web search and vetted manually. We make no
claims that our outcome list is complete.
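The cleaning steps above (orphan removal, internal filtering, activity classification) can be sketched as a small pipeline. The data layout below is hypothetical; it only illustrates the set operations we apply, not the actual database schema.

```python
def build_working_set(all_projects, internal, user_projects, manipulated):
    """Sketch of the cleaning pipeline applied to each testbed dataset:
    drop internal projects (and users left with no external project,
    which also drops orphans), then keep projects with at least one
    manipulation (active) and the users who belong to them.
    all_projects, internal, manipulated: sets of project IDs;
    user_projects: dict mapping user ID -> set of project IDs."""
    external = all_projects - internal
    # Users keep only external memberships; orphans and internal-only
    # users end up with an empty set and are filtered out.
    members = {u: ps & external for u, ps in user_projects.items()}
    members = {u: ps for u, ps in members.items() if ps}
    # A project is active if it had at least one manipulation.
    active_projects = external & manipulated
    active_users = {u for u, ps in members.items() if ps & active_projects}
    return active_projects, active_users
```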

\paragraph{Utah Emulab Dataset:}
This dataset is collected from January 2002 to April 2011.
There are 737 approved projects and 3,587 users.
The dataset contains records of
 all unapproved projects and the corresponding
  user accounts.
Hence we removed the 351 orphan users associated with the
 unapproved projects, which left zero orphans in the dataset.

We then identify and filter out 25 internal
 projects and 194 internal users.
The remaining 712 projects and 3,062 users are then
 classified as active (509 projects and 1,772 users) and inactive (203 projects and 1,290 users).
Active users and projects form our working set and are
classified into class and research categories. Class projects are
further classified into outcome or no-outcome based
on the size of their membership. Since we have no access to
project descriptions, we cannot further categorize research
projects, nor can we link them to the numerous publications
that acknowledge use of Utah Emulab.

\paragraph{PlanetLab Dataset:}
The dataset is collected from July 2008 to
 April 2010.
There are 813 unique slice identifiers in the dataset
 with a total of 159,340 slice records.
Each slice record denotes current activity on the slice,
 including the number of nodes, which we use in our analysis.
The slice records are then converted to a working dataset
 of 79,664 experiment events.
 Although more than 300 publications
  reference PlanetLab as their
  experimentation platform, slice identifiers are anonymized
  and we have no descriptions of their purpose, so we could
  not relate slice activity to publications and
   measure outcomes. The dataset does, however, include counts of projects belonging to several
   research categories.

 \paragraph{Schooner Dataset:}
The dataset is collected from October 2003 to
 November 2010.
 We cannot tell from the data which projects are approved and which are not.
There are
 45 projects and 283 users in the dataset. We remove
 109 orphan users, 2 internal projects, and 2 internal users.
The remaining users and projects are then classified as active or inactive.
Our working set consists of 37 projects and 96 users, which we then
categorize as class or research. Since we have no access to project descriptions
or user names, we cannot identify research outcomes.
%There are 5 class projects and 32 research projects.
%There are 20 class users, 66 research users and 10 mixed users.
\paragraph{StarBED Dataset:}
The dataset is collected from November 2008 to
April 2011. StarBED data contains only records of an anonymized user name,
 the start and end of each experiment instance, and the number of nodes in the reservation.
 In addition, user names are re-anonymized each fiscal year.
We have no way to identify projects on StarBED, no way to link
experiment instances to users throughout the set, and no way to link projects to outcomes.
We also cannot tell from the data which
experiment instances were internal and which were not. Thus our
working set consists of all experiment instances in the data.


\subsection{Discussion of Data Categorization}
Table~\ref{tab:cleaning} shows the breakdown of projects and users per
category introduced in Section~\ref{sec:terms} for all five testbeds,
and Figure~\ref{fig:user_proj_cat} illustrates the relationships among
the categories. We start with
235 projects and 2,347 users for DETER and with 737 projects and 3,256
users for Utah Emulab. We note that Utah Emulab has 3.14 times more
projects but only 1.4 times more users than DETER. This is because
DETER hosts several large class projects with close to
100 students each, as we show later in this section. Schooner is much smaller
than both DETER and Utah Emulab, with 45 projects and 283 users.
PlanetLab has 813 active slices in the dataset, but we do not know how many
slices there are in total, and StarBED has no notion of a project.

We next remove internal users and projects where we have this knowledge.
DETER, Schooner, and Utah Emulab all have 4--7\% internal projects
and 4--9\% internal users. We then classify projects and users based on
activity. Both Utah Emulab and DETER have 72\% active projects, while
Schooner has 86\%. In DETER, 74\% of users are active,
compared with 57\% in Utah Emulab and 56\% in Schooner.
While we lack detailed activity data for Schooner, from the DETER and Utah Emulab data
we were able to establish the reason for this large difference in the
percentage of active users (74\% on DETER vs.\ 57\% on Utah Emulab):
there are four large projects on Utah Emulab in which a large portion of users
is inactive, and this drives down the overall active-user ratio.

We now look deeper into the reasons for inactivity. First, a testbed may not
meet a user's experimental needs: it may lack experimental
tools or hardware that the user believed would be available, it may
be hard to use, or it may simply not be the best fit for the user's
research questions. Second, a PI may open a project hoping to find a
student or funds to work on some research in the future, which leaves the
project inactive for a while. In both of these cases an inactive user
has a strong intent to perform research but currently lacks the means to do so
on a testbed. If the user is the PI of a project, this makes
the whole project inactive. The third reason leads only to user
inactivity: PIs often ask their graduate students, or students in their
classes, to join a testbed to do research or class work on
it. A student may fail to perform in a class or on a research task, and
their testbed inactivity is simply a consequence of this failure.
Considering the second reason for inactivity, we wanted to establish the
maximum warm-up time for a project or a user. Inactive projects/users
that have existed on the testbed for less than this maximum warm-up time
are considered \textit{early}, and the rest are considered
\textit{stale}. We show the cumulative distribution
function of warm-up time and inactive time of users in Figure~\ref{fig:warmup}
(projects' inactive time is similarly distributed). The warm-up data has a heavy tail with a 95/5 split. Most users and projects become
active within a week or a month of their inception (graph not shown), but a
small number of users and projects take several years to become
active, which is surprising.
Taking the maximum value of warm-up time for each testbed as the
threshold for early/stale classification, 55\% and 89\% of inactive
projects on DETER and Utah Emulab, respectively, would be considered
early, and so would around 90\% of inactive users on both testbeds. Thus,
while the inactive project/user counts are large, a significant number of these
projects and users could become active in the future. There are no early users
or projects on Schooner.
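The early/stale classification reduces to a simple threshold rule; a minimal sketch, assuming we have the warm-up times of active entities and the current ages of inactive ones in a consistent time unit:

```python
def split_early_stale(active_warmups, inactive_ages):
    """Classify inactive users (or projects) as early or stale, using
    the maximum observed warm-up time of active ones as the threshold:
    an inactive entity younger than the longest warm-up we have seen
    may still become active (early); older ones are stale."""
    threshold = max(active_warmups)
    early = [age for age in inactive_ages if age < threshold]
    stale = [age for age in inactive_ages if age >= threshold]
    return threshold, early, stale
```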

We next split the active project/user set into
research and class projects/users. A user may belong to both a research
and a class project, in which case we label them as ``mixed'' and
include them in the analysis of both sets. Class projects make up 20\% of active
projects on DETER, 8\% on Utah Emulab, 14\% on Schooner, and 4\% on PlanetLab.
The larger testbeds -- Utah Emulab and PlanetLab -- have a smaller ratio of
class projects, likely because they attract a larger research population: they are
general-purpose testbeds, while DETER and Schooner specialize in security and router
research. There are 75\% class users on DETER, 25\% on Utah Emulab, and
21\% on Schooner (we have no data on PlanetLab users).
We find this large difference in user mix striking. Between DETER and
Utah Emulab we can attribute it to
the large size of several classes that use DETER (hundreds of students),
while Utah Emulab has no classes of this size. This difference is even
more striking because DETER recycles student accounts for its class projects, so the
number of humans interacting with DETER for class purposes is larger than reported here.
Unfortunately, DETER only recently started to record recycling events, so we
have no data to correct user counts for recycling.
A small number
of users belong to both the research and class categories (1\% on DETER, 3\% on Utah Emulab, and 10\% on Schooner). These users are either PIs who use a testbed both
for their research and in classes they teach, students of
such PIs who also serve as teaching assistants, or students who first take a class with the PI and
then continue doing research with them.

For DETER we can identify those research and class projects that had
measurable outcomes because we have access to user names. On Utah Emulab
and Schooner we can only identify class projects with outcomes, because this
categorization is based on project membership and can be applied to
anonymized data. We have no user data for PlanetLab or StarBED and thus
no way to identify classes with outcomes.
Of the research projects on DETER, 34\% produce a publicly measurable outcome.
Among class projects, 76\% have outcomes on DETER, 83\% on Utah Emulab,
and 80\% on Schooner. We
attribute this much higher percentage of outcome projects in the class category than
in the research category to the fact that class outcomes are much easier to
achieve: all that is needed for a class outcome is for the instructor to
assign their students a task to be performed on the testbed, while a
research outcome by our definition requires a published paper in a peer-refereed venue.
We also note that many research no-outcome projects may have resulted in
a private outcome, such as a company whitepaper, a government report, or a technical
report, but we have no way of locating these outcomes.

%Draw analysis diagram and sets
\begin{table}[htdp]
\begin{center}
\begin{small}
\begin{tabular}{|c|c|c|c|c|c|} \hline
Projects & DETER & UEmulab & Schooner & PLab & $\bigstar$BED\\ \hline
Approved & 235 & 737 & 45 & &\\
Internal  & 12 & 25 & 2 & &  \\
Inactive & 56 & 203 & 6 & & \\
Early$*$ & 30  & 158 & 0 & & \\
Stale$*$ & 26 & 45 & 6 &  & \\
Active & 167  & 509 & 37 & 813 &  \\
Class & 34  & 42  & 5 & 31 & \\
Cl-Out & 26 & 35 & 4 & &  \\
Research & 133 &  467 & 33 & 782 & \\
Re-Out & 45  & && & \\ \hline
Users & DETER & UEmulab & Schooner & PLab & $\bigstar$BED\\ \hline
Non-orphan & 2,100 &  3,256 & 174 &  &\\
Internal &  73 & 194 & 2 & & \\
Inactive & 521 & 1,290 & 76 & &  \\
Early$*$ & 463 & 1,157 & 0 & & \\
Stale$*$ & 98 & 133 & 76 & & \\
Active & 1,506  &1,772 & 96 & &  \\
Class & 1,132 & 443 & 20 & & \\
Research & 356  & 1,267 &  66 & & \\
Mixed & 18  & 62 & 10 & & \\ \hline
Instances & DETER & UEmulab & Schooner & PLab & $\bigstar$BED\\ \hline
Working Set & 7,482 & 11,382 & & 3,678 & 446 \\ \hline

\end{tabular}
\end{small}
\end{center}

\caption{Breakdown of project and user data per category. Starred rows
are generated by taking the maximum warm-up time for the working set
of projects (or users) and using it as a threshold.}
\label{tab:cleaning}
\end{table}%

\begin{figure} [h]
\begin{center}
	\includegraphics[width=3.3in,type=pdf,ext=.pdf,read=.pdf]
	{figs/warmup.inact.user.gnu}
	\caption{Active user warm-up and inactive user age on DETER and Utah Emulab.}
	\label{fig:warmup}
\end{center} \end{figure}


\subsection{Deeper Look into DETER Research}
We now look at the representation of various research categories in DETER,
shown in Table~\ref{projrc}. Our goal here is to understand how useful
testbeds are for various research fields. We manually categorize DETER research projects
based on their descriptions, and we link them to publications if the
publication acknowledges use of the DETER testbed and if the
PI or project member names match the publication authors.
On DETER, the top category by number of
projects is class. We believe that this is due to two reasons. First,
student learning benefits greatly when principles taught in lecture are
illustrated through hands-on materials, and teachers are well aware of this fact.
Second, DETER hosts a public repository of class exercises (\url{http://education.deterlab.net})
that PIs can
reuse, making adoption easy. Next by number of projects are malware,
DDoS, and architecture. This reflects both the perception among PIs that
testbeds are best suited to research questions in the malware, DDoS, and
architecture fields, and the popularity of these fields in security
research.

We next look at usage per category and project in node-hours. Both
duration and size of experiment instances contribute to this measure.
Here, malware and DDoS lead by far. This is due both to many projects in
these categories and to large usage per project (column 4). Malware
experiments are sometimes long because they collect malware samples from
the wild, and both categories have large experiments.

We next look at outcomes and outcome projects in a category. If a
project generated three publications, we would count this as three
\textit{outcomes} (column 7) but one \textit{outcome project} (column 5). Looking at the
percentage of outcome projects among all projects in the category (column 6), class,
worms, and DDoS lead. This means that testbeds are especially useful for
researchers in these fields, who achieve a high yield from their testbed
use. Looking at the number of outcomes per project in a category (column 8), the DDoS, worm,
and infrastructure categories are on top. We attribute the presence of DDoS
near the top across many usage measures to two reasons: (1) DETER has
well-developed and easy-to-use tools for DDoS attack traffic generation
and for legitimate traffic generation at the network and application level
(random contents), which makes DDoS experimentation easier to set up than
experimentation in other categories; (2) testbed experiments may be the
best fit for DDoS scenarios. Unlike worms and botnets, meaningful DDoS
scenarios can be created with tens of nodes. Unlike privacy and
malware, DDoS experimentation may not require realistic legitimate
traffic at the content level, which is difficult to generate. The high
number of outcomes per project in these categories further indicates
that once users develop the necessary tools for experimentation in these
areas, they can reuse them to produce multiple meaningful experiments
and multiple publications.

Looking at outcomes per one million node-hours of usage (column 9), the most
fruitful category is botnets, followed after a large gap by the infrastructure,
class, and worm categories. For these categories, it appears
that even modest testbed usage can help produce many
publications.
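The derived columns of Table~\ref{projrc} are simple ratios over the raw counts; as a minimal sketch, using the DDoS row's values (18 projects, 1M node-hours, 9 outcome projects, 27 outcomes, consistent with that row's ratios) as an example:

```python
def category_metrics(projects, node_hours, outcome_projects, outcomes):
    """Compute the derived columns of the per-category table:
    node-hours per project, share of outcome projects among all
    projects, outcomes per project, and outcomes per million
    node-hours."""
    return {
        "nh_per_project": node_hours / projects,
        "op_share": outcome_projects / projects,       # column 6
        "o_per_project": outcomes / projects,          # column 8
        "o_per_mnh": outcomes / (node_hours / 1e6),    # column 9
    }

# Example: the DDoS row of the table.
ddos = category_metrics(projects=18, node_hours=1e6,
                        outcome_projects=9, outcomes=27)
```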

\begin{table*}[htdp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Category & Projects & Nh (M) & Nh / project (K) &  OP & OP
/ All & \# O & \# O / project & \# O / 1M Nh\\
\hline
Class & 34 & 0.43 & 13 & 26 & 76\% & 26 & 0.76 & 58 \\
Malware & 20 & 1 & 50 & 6 & 30\% & 13 & 0.65 & 13\\
DDoS & 18 & 1 & 56 & 9 & 50\% & 27 & 1.5 & 27\\
Architecture & 14 & 0.59 & 42 & 6 & 43\% & 9 & 0.64 & 15\\
Testbeds & 12 & 0.23 & 19 & 1 & 8\% & 1 & 0.08 & 4 \\
Infrastructure & 12 & 0.2 & 17 & 5 & 42\% & 12 & 1 & 60\\
Worms & 11 & 0.33 & 30 & 8 & 73\% & 18 & 1.63 & 55\\
Evaluation & 8 & 0.06 & 8 & 1 & 13\% & 1 & 0.13 & 17 \\
Privacy & 6 & 0.16 & 27 & 0 & 0\% & 0 & 0 & 0\\
Botnets & 4 & 0.003 & 0.75 & 1 & 25\% & 3 & 0.75 & 1000\\
Other & 28 & 0.56 & 20 & 8 & 29\% & 11 & 0.39 & 20\\\hline
\end{tabular} \end{center} \caption{Usage and outcomes per
research category in DETER. O = outcome, OP = outcome project, Nh = node-hours.}
\label{projrc} \end{table*}%

