\section{Background}
\label{sec:terms}

We define a \textit{network testbed} as a collection of computing and
network resources shared by multiple users with the goal of
supporting networking and distributed systems
 research or educational activities.
The computing resources can include both physical and virtual machines,
 while the networking resources include wired, wireless,
 and Internet-based interconnects.
% In this paper we do not discuss wireless
% testbeds, such as, the Orbit wireless
% network testbed at Rutgers~\cite{}.

An important goal for a networking and
 distributed systems research testbed is to share a set
of physical resources among multiple experiments that may run
concurrently or sequentially.
To ensure
 that experiments do not interfere with each other,
 appropriate resource allocation mechanisms must be
  implemented.

Some network testbeds offer exclusive control over the
 nodes and the underlying substrate
 through either on-demand allocation mechanisms or
   pre-defined time-sharing allocations.
For example, the Utah Emulab~\cite{emulab},
 StarBED~\cite{starbed},
DETER~\cite{deterlab} and Schooner~\cite{schooner} testbeds
 allow a user to gain exclusive control of
physical nodes and of
 the experimental network that interconnects them.
Other network testbeds, such as PlanetLab~\cite{planetlab},
 offer access to virtualized resources of remote machines, where
 a single physical node is shared by multiple users through a \textit{slice} interface.
Slicing is usually implemented by means of
\textit{virtualization}, a widely used technique in which a
software layer multiplexes lower-level resources among higher-level
software programs and systems.
The end host nodes in PlanetLab are distributed across the globe and
 connected through the Internet.
The experimenter uses overlay techniques to define
 specific topologies and link characteristics.
Our definition of a network testbed also includes private
 network testbeds within research organizations and companies
 that are shared by their employees.

This paper provides measurements and insights about
 network testbed usage characteristics.
We focus primarily on five research testbeds whose usage data we were able
to obtain: DETER~\cite{deter}, Utah Emulab~\cite{emulab},
Schooner~\cite{schooner}, PlanetLab~\cite{planetlab} and
StarBED~\cite{starbed}.
We derive and compare usage characteristics of these testbeds
 along several dimensions, including experimentation patterns,
 user and project participation patterns, and resource usage.
%However, for Emulab-based testbeds due to privacy concerns,
%We note that testbeds often lack data about how users use their resources.
%That is, they collect information about node allocations and release, and who allocated the nodes
%but often lack data about what users do with the nodes. Knowing  a consequence
%of testbeds offering direct access to bare machines, rather than providing users with an
%experimentation interface through which they could configure and control their experiments.


\subsection{Purpose}
\label{sec:testbed_uses}

Network testbeds are primarily used for:
\begin{itemize}
\item {\it Research} in networking and distributed systems for development and
 evaluation,  and
\item {\it Classes}, to teach concepts about existing and new systems
 and technologies.
\end{itemize}

Typically, in the research category, a group of users investigates a
 common research problem by accessing the testbed resources for
 hypothesis testing, deployment studies, or exploratory research.
Hypothesis testing through experiments
 is a rigorous exploration of  the parameter space
 to validate a  particular hypothesis,
 where the experimenter has a good sense of the
 input parameters and expected results.
Exploratory research involves taking a technology
 relatively unknown to the user and deploying it on the testbed for further investigation.
The experimenter has limited knowledge about the capabilities of the technology
 and hence cannot anticipate the results of the experiment.
In deployment studies, the experimenter deploys a relatively mature technology on the testbed
to expose it to realistic conditions, such as external user traffic or Internet cross traffic.
Some or all of these three modes of experimentation may be used by a single
research group in the course of its exploration of a single research topic.
Highly controlled
testbed environments, such as DETER \cite{deter}, Utah Emulab \cite{emulab},
Schooner \cite{schooner} and StarBED \cite{starbed}, are well-suited for
hypothesis testing and, due to their good isolation of experiments from the
outside world, they also facilitate exploratory research. Testbeds like
PlanetLab \cite{planetlab} that are embedded in a realistic Internet
environment are well-suited for deployment studies.

Testbed users come from academia (faculty and students),
 government and private labs (researchers and staff) and industry (company
employees).
Users experiment on testbeds with the goal of gaining the insights and data
needed to publish research results as papers or white
papers, to direct government policies, or to test a product.
In teaching, instructors access the testbeds
 either to illustrate a concept taught in
 class or to assign a practical project to students.
In more advanced courses, these projects may be
 of a research nature,
which blurs the line between class and research use.
When designing this comparative study, our goal
 was not only to understand how testbeds are used today, but
 also to evaluate how well testbeds aid users in achieving
their publication or teaching outcomes. In Section~\ref{sec:data} we identify some
publicly measurable outcomes of testbed use, and in Section~\ref{sec:character} we
evaluate how well, and which types of, testbed use lead to these outcomes.

%As discussed above, research users are motivated to use the testbed
% for a large range of activities and hence it
% is extremely tricky and labor intensive to evaluate a measurable
% outcome for such activities.
%Also, many users and projects do not publicly disseminate or publish
% their research results making it hard to categorize such efforts.


\subsection{Terminology}
\label{terminology}
In this section we introduce several terms
 that are used in the paper to discuss the usage
 characteristics of network testbeds. We also illustrate
 the terms and their
  relations
  in Figure~\ref{fig:termspic} and Figure~\ref{fig:user_proj_cat}.

An \textit{experiment definition} is the input submitted to the testbed
(one or more times) describing the physical
 resources and the dynamic setup operations
  required for the experiment.
Typically the
experiment definition is closely tied to \textit{one particular
experimentation goal}, that is,
 a research question or a class assignment.
Each experiment definition has a unique persistent identifier -- an EID.
Definitions can be modified, e.g., by changing the number of nodes or their
connectivity. In Figure \ref{fig:termspic} there are
 two experiment definitions, with EIDs A and B.

An \textit{experiment instance} is an instantiation
 of the experiment definition on the physical resources of the testbed. We say that an experiment instance has a \textit{duration} (how long resources were allocated to it), a \textit{size} (how many nodes were allocated) and a \textit{topology} (how the nodes were connected to each other).
The same experiment definition can result in multiple non-overlapping
 experiment instances, one for each node allocation. In Figure
 \ref{fig:termspic} there are five experiment instances, three linked to the experiment definition A, and two linked to the definition B.
Release of the resources back to the
 testbed denotes the end of a particular experiment instance.
If an experiment definition is modified while there is an active instance
corresponding to it, the instance may, as a consequence, change its size or topology.
In Figure \ref{fig:termspic} instance 3 changes its size and topology, which will later result in two size records for that instance in our analysis.
Experiment definitions are not stored permanently in the testbed data we analyzed: if an experiment definition is modified, it overwrites the old version under the same EID, and a user can also delete an experiment definition. Thus only the two experiment instances marked with a large ``T'' in Figure \ref{fig:termspic} would be recorded in the testbed, provided the corresponding experiment definitions are not deleted by the user.
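To make these definitions concrete, the sketch below derives instance records (size and duration) by pairing allocation and deallocation events, in the style described above. The log format, field names, and timestamps are purely hypothetical and do not reflect the actual schema of any of the testbeds we studied.

```python
from datetime import datetime

# Hypothetical allocation log: (eid, event, timestamp, node_count).
log = [
    ("A", "alloc",   datetime(2010, 1, 1, 9, 0),   4),
    ("A", "dealloc", datetime(2010, 1, 1, 17, 0),  4),
    ("B", "alloc",   datetime(2010, 1, 2, 10, 0),  8),
    ("B", "dealloc", datetime(2010, 1, 2, 12, 30), 8),
]

def instances(log):
    """Pair each alloc with its matching dealloc to form instance records."""
    open_allocs = {}
    records = []
    for eid, event, ts, nodes in log:
        if event == "alloc":
            open_allocs[eid] = (ts, nodes)
        elif event == "dealloc":
            start, nodes = open_allocs.pop(eid)
            duration_h = (ts - start).total_seconds() / 3600
            records.append({"eid": eid, "size": nodes,
                            "duration_h": duration_h})
    return records

for rec in instances(log):
    print(rec)
```

Release of resources (the deallocation event) ends the instance, matching the definition above; an instance whose size changes mid-run would simply produce multiple size records.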

An \textit{experiment manipulation} is an interaction between the user
and the testbed's control/central server.
It usually results in the allocation or
 deallocation of resources, or in an update of database entries. In Figure \ref{fig:termspic} user 1 allocates and deallocates resources for instances 1 and 2. User 2 allocates resources for instance 3, but user 3 modifies this instance to include more nodes and later deallocates its resources. User 3 also allocates and deallocates resources for instances 4 and 5.

A testbed \textit{project}
 is a collection of experiment definitions and
 authorized users,
%under a single Principal Investigator (PI)
aiming to investigate one
scientific research agenda or to participate in one specific class. Each project on the testbed is identified by a unique identifier -- a PID.
In Figure \ref{fig:termspic} there is one project with two experiment
definitions and three users. Every project has a head user, who
is responsible for the project's members and their actions, should
any problems arise. Each user is identified on the testbed by a unique username -- a UID.

We now categorize testbed projects and users. The relationships among all user and project categories are
 illustrated in Figure~\ref{fig:user_proj_cat}.

Testbed users are categorized into three separate groups:
\renewcommand{\labelenumi}{\alph{enumi}.}
\begin{enumerate}
\item \textit{Active} users, who have manipulated
at least one experiment.
\item \textit{Inactive} users, who have never directly manipulated an
  experiment, but may have accessed resources allocated by
  other users in their research group.
\item \textit{Orphaned} users, who have never been associated with a testbed project.
While it is possible for an inactive user to still do useful
work on the testbed, for example by logging into an experiment instance
allocated by another member, orphaned users cannot do any
useful work on the testbed.
\end{enumerate}

Similar to the user categorization, projects are categorized as:
\begin{enumerate}
\item \textit{Active}, if the project is associated with at least one
  experiment manipulation.
\item \textit{Inactive}, if there are no experiment manipulations associated
  with the project.
\item \textit{Unapproved}, if the project has not been approved to use the testbed.
\end{enumerate}
Some testbeds keep data about their unapproved projects, while others do not.



Our initial
 analysis of project and user data indicated that there were many inactive
 users on three of the five testbeds for which we had user data.
 To further understand this category, we defined the \textit{warm-up time} of a project as the
 time elapsed between the creation of the project on the testbed
  and its first experiment manipulation. Taking the
  maximum warm-up time of active users as a threshold,
we further categorize inactive users and projects as \textit{early} users (or projects) if their ages on the testbed fall
 below the threshold, or as \textit{stale} users (or projects)
otherwise.
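The early/stale split can be sketched as follows; the records, field names, and values below are hypothetical, and the threshold rule is the one stated above (the maximum warm-up time observed among active users).

```python
# Hypothetical user records: active users have a recorded warm-up time;
# inactive users have none, only an age on the testbed (both in days).
users = [
    {"uid": "u1", "warmup_days": 10},                     # active
    {"uid": "u2", "warmup_days": 45},                     # active
    {"uid": "u3", "warmup_days": None, "age_days": 30},   # inactive
    {"uid": "u4", "warmup_days": None, "age_days": 120},  # inactive
]

# Threshold: maximum warm-up time among active users.
threshold = max(u["warmup_days"] for u in users
                if u["warmup_days"] is not None)

def categorize(user):
    if user["warmup_days"] is not None:
        return "active"
    # Inactive users younger than the threshold may yet become active
    # ("early"); those older than it are considered "stale".
    return "early" if user["age_days"] < threshold else "stale"

print([(u["uid"], categorize(u)) for u in users])
```

The same rule applies to inactive projects, with the project creation date in place of the user registration date.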

 \begin{figure}[t]
	\begin{center}
		\includegraphics[width=3.3in]{figs/termspic.pdf}
		\caption{Illustration of users, experiment definitions, experiment instances and projects}
		\label{fig:termspic}
	\end{center}
\end{figure}

\begin{figure}[t]
	\begin{center}
		\includegraphics[width=3.3in]{figs/projcat.pdf}
		\caption{User and project categories}
		\label{fig:user_proj_cat}
	\end{center}
\end{figure}

Similarly, we categorize active users and projects as:

\begin{enumerate}

\item
\textit{Internal} projects/users, whose focus is testbed monitoring or development.
Internal users are all
members of an internal project.
While several internal users also participate in other
 research projects on a testbed,
 we have found through manual investigation
that it is not possible to identify their intent
 during an experimentation activity as research or internal testbed
 management. We found
 several instances where internal projects were used to do research on
 topics unrelated to testbed operation and management, and vice versa.
 Thus, to avoid biases in our data, and since all internal users
 have a vested interest in the testbed and are likely to be very active,
 we exclude all activity from internal projects and
 internal users in the analyses discussed in Section~\ref{sec:data},
 unless otherwise noted.

 \item
 \textit{Outcome} projects are those where it is possible to
 clearly attribute some measurable outcome of the project to its use
of the testbed. In this paper we classify
 research projects as outcome projects if they have
produced one or more peer-reviewed publications, or MS or
PhD theses, in which they acknowledge the testbed's role in facilitating the research.
While some research projects,
especially those from academia,
publish extensively, other projects associated with industry or government
labs may produce only private outcomes.
 Hence we believe this metric underestimates the number of
  projects that have received true benefits from their
   use of the testbed.
   %However, in Section~\ref{sec:survey}
  %we briefly discuss a testbed user survey,
   %and outline the challenges in collecting and process such data
 %to estimate behavioral metrics such as outcomes.
We classify class projects as outcome projects
 if they have more than three members.
This threshold excludes the instructor and up to two TAs,
 and values exceeding it indicate that students taking the class
have actually used the testbed for their course work.

\item \textit{No-outcome} projects are those that do not fall into
 the above categories. Some projects in this category
 exhibit usage patterns
suggestive of performing research or developing class materials,
 but they have not yet generated a publicly measurable outcome. In the case
of research projects there are many reasons for this. First, some
research takes years to mature to publication, and such a
 project may have just started using the testbed.
Second, the project may be an industrial or government
project and thus may not produce a peer-reviewed
publication.
Third, some research may produce negative results, which usually do not get published.
 This category also includes projects where we observe
  that users briefly interacted with the testbed and then
  became idle for a long period.
Also, in Sections \ref{sec:data} and \ref{sec:character}
 we show that projects may have a long warm-up
time before their first testbed use, which may contribute to
 their categorization.
\end{enumerate}

Finally, we define {\it node--hours} for an entity
 as the product of the number of nodes allocated to that
  entity and the allocation duration expressed in hours.
The entity may be a user, an experiment, or a project.
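As a minimal sketch with hypothetical records, one natural way to aggregate this metric is to compute the node--duration product per allocation and sum over all allocations belonging to the entity:

```python
# Hypothetical per-allocation records for two projects.
allocations = [
    {"project": "p1", "nodes": 4,  "hours": 8.0},
    {"project": "p1", "nodes": 2,  "hours": 1.5},
    {"project": "p2", "nodes": 10, "hours": 24.0},
]

def node_hours(records, project):
    """Sum nodes x hours over all allocations of the given project."""
    return sum(r["nodes"] * r["hours"]
               for r in records if r["project"] == project)

print(node_hours(allocations, "p1"))  # 4*8.0 + 2*1.5 = 35.0
```

The same aggregation applies with a user or experiment identifier in place of the project identifier.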

In our investigation of testbed usage characteristics
 we have discovered a broad
range of activity levels along every dimension.
While some testbed datasets were very detailed and rich, allowing in-depth
 analysis, a few testbed datasets were anonymized, permitting only limited
  types of analysis, as discussed further in Section~\ref{sec:data}.

%Some projects and some
%users were very active, while others generated none or a few experiment
%instances. Some experiment instances were long-lived (months) while
%others were extremely short (under 10 minutes). From pure use pattern
%data it was impossible to understand the causes of such broad range of
%uses. To disambiguate and possibly explain this data we have attempted
%to somehow quantify from project descriptions, experiment definition
%descriptions, personal conversations with the PIs and publications
%co-authored by project members how useful the testbed was for each given
%project. We could do so only for DETER data as the data we have from
%other testbeds is anonymized due to privacy concerns.

