\section{Looking Forward}
\label{sec:lfwd}

% tools, repositories
% support the complete experiment lifecycle 
% at-scale experimentation
% heterogeneous devices and diversity 




Table~\ref{tab:cleaning} shows the breakdown of projects and users per
category introduced in Section~\ref{sec:terms} for the DETER and Utah
Emulab testbeds, and Figure~\ref{projcat} illustrates the relationships
among the categories. We start with 235 projects and 2,347 users for
DETER and with 737 projects and 3,256 users for Utah Emulab. We note
that Utah Emulab has 3.14 times more projects but only 1.4 times more
users than the DETER testbed. This is due to the DETER testbed having
several large class projects with close to 100 students each, as we
show later in this section.


The non-orphan users and approved projects are then classified as
active, if they had at least one experiment manipulation, or inactive
otherwise. In both testbeds 76\% of projects are active. This similarity
is striking, given that Utah Emulab has 3.14 times more projects than
DETER. Two of DETER's active projects and 15 of Utah Emulab's active
projects have had no resource allocations. This happens when a user
creates an experiment definition in the testbed database but never
requests that resources be allocated to it. In DETER 75\% of users are
active, while in Utah Emulab 61\% are.
There are three main reasons for inactivity. First, a testbed may not
meet a user's experimental needs: it may lack some experimental tools or
hardware that the user believed would be available, it may be hard to
use, or it may simply not be the best fit for the user's research
questions. Second, a PI may open a project hoping to find a student or
funds to work on some research in the future, which leaves the project
inactive for a while. In both of these cases an inactive user has strong
intent to perform some research but currently lacks the means to do so
on a testbed. If the user is the PI of a project, this makes the whole
project inactive. The third reason leads only to user inactivity. PIs
often ask their graduate students or students in their class to join a
testbed so they can do some research or class work on it. A student may
fail to perform in a class or on a research task, and their testbed
inactivity is just a consequence of this failure.

Considering the second reason for inactivity, we wanted to establish the
maximum warm-up time for a project or a user. Inactive projects/users
that have existed on the testbed for less than this maximum warm-up time
would be considered \textit{early} and the rest would be considered
\textit{stale}. We show the histogram of warm-up times for users on
DETER and Utah Emulab in Figure~\ref{warmupus}. Warm-up times for
projects are similarly distributed. Most users and projects become
active within a week or a month of their inception, but a small number
of users and projects take several years to become active, which we did
not expect. We show the cumulative distribution function for warm-up
time and inactive time of users in Figure~\ref{warmupinus} (projects'
inactive time is similarly distributed). Taking the maximum value of
warm-up time for each testbed as the threshold for the early/stale
classification, 55\% and 78\% of inactive projects on DETER and Utah
Emulab respectively would be considered early, and so would around 90\%
of inactive users on both testbeds.
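The early/stale classification described above can be sketched in a few
lines. This is a minimal illustration, not our actual analysis code; the
record layout (creation time, time of first experiment manipulation) and
all dates below are invented for the example.

```python
from datetime import datetime

def warmup_time(created, first_activity):
    """Warm-up time: gap between creation and first experiment
    manipulation. None means the project/user never became active."""
    if first_activity is None:
        return None
    return first_activity - created

def classify_inactive(entities, now):
    """Split inactive entities into 'early' (existed for less than the
    maximum observed warm-up time) and 'stale' (existed for longer)."""
    # Threshold: maximum warm-up time among entities that did become active.
    warmups = [f - c for c, f in entities if f is not None]
    threshold = max(warmups)
    early, stale = [], []
    for created, first_activity in entities:
        if first_activity is not None:
            continue  # active entities are not classified here
        if now - created < threshold:
            early.append(created)
        else:
            stale.append(created)
    return early, stale

# Invented records: (creation time, time of first activity or None).
now = datetime(2010, 1, 1)
entities = [
    (datetime(2008, 1, 1), datetime(2008, 1, 8)),  # active, 7-day warm-up
    (datetime(2009, 12, 28), None),                # inactive, 4 days old
    (datetime(2009, 1, 1), None),                  # inactive, 1 year old
]
early, stale = classify_inactive(entities, now)
```

With these invented records the threshold is seven days, so the 4-day-old
inactive entity is labeled early and the year-old one stale.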

We next filter out internal projects and their members from our project
and user sets. There are 7\% (DETER) and 5\% (UE) of active projects
that are internal, and there are 5\% (DETER) and 10\% (UE) of active
users that belong to internal projects. This leaves us with 90-95\% of
active projects/users in our working set. We then split this set into
research and class projects/users. A user may belong to both a research
and a class project, in which case we label them as ``mixed'' and
include them in the analysis of both sets. There are 20\% (DETER) and
8\% (UE) of class projects and 75\% (DETER) and 25\% (UE) of class users
in our working sets. We find this difference in user mix striking,
especially since DETER class user accounts are recycled, as we explained
earlier. This large representation of class users in the population on
DETER is due mostly to the large size of several classes that use DETER,
while Utah Emulab has no classes of this size. There is a small number
of users that belong to both the research and class categories (1\% on
DETER and 3\% on Utah Emulab). These users are either PIs that use the
testbed both for their research and in classes they teach, or they are
students of such PIs that either TA their class or first take the class
and then continue doing research with the PI.
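The research/class/mixed labeling is a small set computation over a
user's project memberships. The sketch below assumes each project has
already been tagged as research or class; the user and project names are
invented examples, not testbed data.

```python
def label_user(projects):
    """Label a user by the kinds of projects they belong to.
    `projects` is a list of (project_name, kind) pairs, where kind is
    'research' or 'class'. Mixed users appear in both analyses."""
    kinds = {kind for _, kind in projects}
    if kinds == {"research", "class"}:
        return "mixed"
    if kinds == {"class"}:
        return "class"
    return "research"

# Hypothetical membership records.
alice = [("ddos-study", "research")]
bob = [("cs558", "class")]
carol = [("ddos-study", "research"), ("cs558", "class")]  # e.g. a TA
```

A mixed user such as `carol` would be included in both the research and
the class user sets, matching the counting rule described above.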

For DETER we can identify the research and class projects that had
measurable outcomes because we have access to user names. On Utah Emulab
we can only identify class projects with outcomes, because that
categorization is based on project membership and can be applied to
anonymized data. There are 35\% research outcome projects (DETER), and
there are 76\% (DETER) and 83\% (UE) class outcome projects. We
attribute this much higher percentage of outcome projects in the class
than in the research category to the fact that class outcomes are much
easier to achieve. All that is needed for a class outcome is for the
instructor to assign their students some task to be performed on the
testbed, while a research outcome requires a published paper in a
peer-refereed venue.
%Draw analysis diagram and sets 
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Projects & DETER & Emulab \\ \hline
Total & 235 & 737 \\
Active & 179 (76\% of T) & 534 (76\% of T) \\
Active no alloc & 2 & 15 \\
Inactive & 56 & 203 \\
Early$*$ & 30 (55\% of I) & 158 (78\% of I) \\
Stale$*$ & 26 & 45 \\
Internal and active & 12 (7\% of A) & 25 (5\% of A) \\
Working set & 167 (93\% of A) & 509 (95\% of A) \\
Class & 34 (20\% of WS) & 42 (8\% of WS) \\
Outcome (class) & 26 (76\% of C) & 35 (83\% of C) \\
Research & 133 (80\% of WS) & 467 (92\% of WS) \\
Outcome (research) & 47 (35\% of R) & \\ \hline
Users & DETER & Emulab \\ \hline
Total & 2,100 & 3,256 \\
Active & 1,579 (75\% of NO) & 1,966 (61\% of NO) \\
Inactive & 521 & 1,290 \\
Early$*$ & 463 (89\% of I) & 1,157 (90\% of I) \\
Stale$*$ & 98 & 133 \\
Internal & 73 (5\% of A) & 194 (10\% of A) \\
Working set & 1,506 (95\% of A) & 1,772 (90\% of A) \\
Class & 1,132 (75\% of WS) & 443 (25\% of WS) \\
Research & 356 (24\% of WS) & 1,267 (72\% of WS) \\
Mixed & 18 (1\% of WS) & 62 (3\% of WS) \\ \hline
\end{tabular}
\end{center}
\caption{Breakdown of project and user data per category (T = total, NO
= non-orphan, A = active, I = inactive, WS = working set, C = class, R =
research). Starred rows are generated by taking the maximum warm-up time
for the working set of projects (or users) and using it as a threshold.}
\label{tab:cleaning}
\end{table}%

\begin{figure}[htbp] \begin{center} \includegraphics[width=3in,
type=pdf,ext=.pdf,read=.pdf]{figs/warmup.user.gnu} \caption{User warmup
time in DETER and Emulab} \label{warmupus} \end{center} \end{figure}


\begin{figure}[htbp] \begin{center} \includegraphics[width=3in,
type=pdf,ext=.pdf,read=.pdf]{figs/warmup.inact.user.gnu} \caption{User
warmup and inactive time distributions} \label{warmupinus} \end{center}
\end{figure}



We now look at the representation of various research categories in
DETER, shown in Table~\ref{projrc}. Our goal here is to understand how
useful testbeds are for certain research fields. We manually categorize
projects based on their descriptions, and we link a project to a
publication if the publication acknowledges use of the DETER testbed and
if the PI or project member names match the publication authors. We did
not have access to project descriptions for Utah Emulab, so we could not
categorize their projects.
% Perhaps try to do this or show data just for proj/pub with no link
On DETER, the top category by number of projects is class. We believe
that this is due to two reasons. First, illustrating principles taught
in lecture through hands-on materials is very engaging for students, and
teachers are well aware of this fact. Second, DETER hosts a public
repository of class exercises that PIs can reuse, making adoption easy.
Next by number of projects are malware, DDoS and architecture. This
reflects both the PIs' perception that testbeds are best suited to
research questions in the malware, DDoS and architecture fields and the
popularity of these fields in security research.
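Our linking rule, under which a publication counts toward a project only
if it both acknowledges the testbed and shares an author with the
project, can be sketched as follows. The record shapes, names and the
`normalize` helper are illustrative; real name matching would also need
to handle initials, diacritics and name-order variation.

```python
def normalize(name):
    """Crude name normalization for matching (illustrative only)."""
    return name.strip().lower()

def is_outcome(publication, project_members):
    """True if the publication acknowledges the testbed and at least
    one author is a member (or the PI) of the project."""
    if not publication["acknowledges_deter"]:
        return False
    authors = {normalize(a) for a in publication["authors"]}
    members = {normalize(m) for m in project_members}
    return bool(authors & members)

# Invented example records.
pub = {"acknowledges_deter": True, "authors": ["A. Smith", "B. Jones"]}
members = ["B. Jones", "C. Wu"]  # hypothetical PI and project members
```

Because the rule requires both conditions, a paper that acknowledges the
testbed but shares no authors with the project is not counted, and vice
versa.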

We next look at usage per category and project in node-hours. Both
duration and size of experiment instances contribute to this measure.
Here, malware and DDoS lead by far. This is due both to many projects in
these categories and to large usage per project (column 4). Malware
experiments are sometimes long because they collect malware samples from
the wild, and both categories have large experiments.

We next look at outcomes and outcome projects in a category. If a
project generated three publications we would count this as three
\textit{outcomes} but one \textit{outcome project}. Looking at the
percentage of outcome projects among all projects in a category, class,
worms and DDoS lead. This means that testbeds are especially useful for
researchers in these fields, who achieve a high yield out of their
testbed use. Looking at the number of outcomes per project in a
category, worms, DDoS and infrastructure are on top. We attribute the
presence of DDoS near the top across many usage measures to two reasons:
(1) DETER has well-developed and easy-to-use tools for DDoS attack
traffic generation and for legitimate traffic generation at the network
and application level (random contents), which makes DDoS
experimentation easier to set up than experimentation in other
categories; (2) testbed experiments may be the best fit for DDoS
scenarios. Unlike worms and botnets, meaningful DDoS scenarios can be
created with tens of nodes. Unlike privacy and malware, DDoS
experimentation may not require realistic legitimate traffic at the
content level, which is difficult to generate. A high number of outcomes
per project in these categories further indicates that once a user
develops the necessary tools for experimentation in these areas, they
can reuse them to produce multiple meaningful experiments and multiple
publications.

Looking at outcomes per 1 million node-hours of usage, the most fruitful
category is botnets, followed after a large gap by infrastructure, class
and worms. For these categories, it appears that even modest testbed
usage can help in producing many publications.
% TODO: Table columns need rearranging
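The derived columns of Table~\ref{projrc} are simple ratios over four
raw counts per category. A sketch of the computation, using the worms
row of the table as a worked example (the function and key names are
ours, for illustration):

```python
def category_metrics(projects, node_hours, outcomes, outcome_projects):
    """Derived per-category measures: node-hours per project, outcome
    projects as a share of all projects, outcomes per project, and
    outcomes per million node-hours."""
    return {
        "nh_per_project": node_hours / projects,
        "op_share": outcome_projects / projects,
        "o_per_project": outcomes / projects,
        "o_per_1m_nh": outcomes / (node_hours / 1e6),
    }

# Worms row: 11 projects, 0.33M node-hours, 18 outcomes, 8 outcome projects.
worms = category_metrics(11, 0.33e6, 18, 8)
```

This reproduces the worms row: 30K node-hours per project, 73\% outcome
projects, 1.63 outcomes per project and 55 outcomes per million
node-hours.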

\begin{table*}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Category & Projects & Nh (M) & Nh / project (K) & \# O & OP & OP / All & \# O / project & \# O / 1M Nh \\ \hline
Class & 34 & 0.43 & 13 & 26 & 26 & 76\% & 0.76 & 58 \\
Malware & 20 & 1 & 50 & 13 & 6 & 30\% & 0.65 & 13 \\
DDoS & 18 & 1 & 56 & 27 & 9 & 50\% & 1.5 & 27 \\
Architecture & 14 & 0.59 & 42 & 9 & 6 & 43\% & 0.64 & 10 \\
Testbeds & 12 & 0.23 & 19 & 1 & 1 & 8\% & 0.08 & 4 \\
Infrastructure & 12 & 0.2 & 17 & 12 & 5 & 42\% & 1 & 60 \\
Worms & 11 & 0.33 & 30 & 18 & 8 & 73\% & 1.63 & 55 \\
Evaluation & 8 & 0.06 & 8 & 1 & 1 & 13\% & 0.13 & 17 \\
Privacy & 6 & 0.16 & 27 & 0 & 0 & 0 & 0 & 0 \\
Botnets & 4 & 0.003 & 0.75 & 3 & 1 & 25\% & 0.75 & 1000 \\
Other & 28 & 0.56 & 20 & 11 & 8 & 29\% & 0.39 & 20 \\ \hline
\end{tabular}
\end{center}
\caption{Usage and outcomes per research category. O = outcome, OP =
outcome project, Nh = node-hours.}
\label{projrc}
\end{table*}%


% From Experiment Characteristics

Figure~\ref{expswaps} shows the number of experiment instances per
definition in all four testbeds. For DETER and Utah Emulab most
definitions result in either 1 instance (around 42\%) or in 2-5
instances (32-36\%). Definitions that result in one instance may
correspond to situations when a user has a simple research question, or
when a user concludes that their experiment definition was wrong. For
PlanetLab 15\% of definitions result in 1 instance, 49\% result in 2-5
instances, 22\% in 6-10 instances and 10\% in 11-20 instances. This
trend towards more reuse in PlanetLab likely comes from the difficulty
of creating a new definition in PlanetLab as compared to the Emulab
testbeds. In PlanetLab the PI of the site must create slices (which we
equate to experiment definitions) for users, so a user must communicate
with the PI if they need a new slice. Since a slice simply links to an
identifier, and not to any particular topology, software or hardware
configuration, it is suitable for reuse across many different research
questions. In Emulab testbeds an experiment definition links to a
topology, node operating systems and potentially software. While all of
these can be changed under the same experiment definition's identifier,
this only makes sense if the change is not significant and if the
current settings will not be reused. Otherwise the user creates a new
definition easily, via a Web interface, within minutes. StarBED records
link resource allocations to users and have no concept of experiment
definitions, so we do not show StarBED data on this graph.

\begin{figure}[htbp]
\begin{center}
\includegraphics[width=3in,
type=pdf,ext=.pdf,read=.pdf]{figs/exp.swaps.gnu}
\caption{Experiment instances per definition}
\label{expswaps}
\end{center}
\end{figure}
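The instance-count bins used in Figure~\ref{expswaps} (1, 2-5, 6-10,
11-20, \ldots) can be reproduced with a simple bucketing step over
per-definition instance counts. The bin edges follow the figure; the
sample counts are invented for illustration.

```python
from collections import Counter

# (low, high) instance-count bins, as in the figure.
BINS = [(1, 1), (2, 5), (6, 10), (11, 20), (21, 50)]

def bin_label(count):
    """Map an instances-per-definition count to its histogram bin label."""
    for low, high in BINS:
        if low <= count <= high:
            return f"{low}" if low == high else f"{low}-{high}"
    return f">{BINS[-1][1]}"

def histogram(counts):
    """Fraction of definitions falling into each bin."""
    labels = Counter(bin_label(c) for c in counts)
    total = len(counts)
    return {label: n / total for label, n in labels.items()}

# Invented per-definition instance counts.
h = histogram([1, 1, 3, 4, 7, 12, 2, 1])
```

For the invented counts above, three of eight definitions fall into the
``1'' bin and three into ``2-5'', so each bin holds 37.5\% of the
definitions.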


% From Project Characteristics

Figure~\ref{projswaps} shows the number of experiment instances per
project in DETER and Utah Emulab (left) and the same metric in DETER
across research categories with and without outcome (right). Research
projects look similar on both testbeds, with most projects (13-16\%)
falling into the 21-50, 51-100 and 101-200 instance categories. There
are no clear peaks. There are 7-9\% of research projects with a single
instance. It is likely that for most of these projects there is a poor
fit between the testbed and their research goals. Class projects have
many more instances, because they have more members than research
projects. Their histograms resemble a normal distribution, with peaks at
101-200 instances for DETER and 201-500 instances for Utah Emulab.
Looking at experiment instances for DETER research projects with and
without an outcome, outcome projects appear much more active. Their
distribution resembles a normal distribution with peaks at 101-200
(27\%) and 201-500 (31\%) instances, while the no-outcome projects'
distribution peaks at 21-50 (16\%) and 51-100 instances (19\%) and also
has significant clusters in the 1 (13\%) and 2-5 (17\%) instance
categories. This higher activity of outcome projects indicates that the
testbed is useful to the associated researchers.

\begin{figure}[htbp] \begin{center} \subfigure[DETER vs Utah Emulab]{
\includegraphics[scale=1, width=3in,
type=pdf,ext=.pdf,read=.pdf]{figs/proj.swaps.gnu} \label{fig:swapsdvse}
}

\subfigure[DETER research with and without outcome]{
\includegraphics[scale=1, width=3in,
type=pdf,ext=.pdf,read=.pdf]{figs/proj.swaps.cmp.gnu}
\label{fig:swapsovsno} }
\caption{Experiment instances in a project.} \label{projswaps}
\end{center} \end{figure}

Looking into the number of experiment definitions per project (figure
omitted for space reasons) in DETER and Utah Emulab, the research
projects' distributions match. Between 25\% and 29\% of projects have
2-5 definitions, and between 13\% and 16\% fall into each of the 1, 6-10
and 11-20 definition categories. A small number of projects have more
than 20 definitions, in some cases going over 200 definitions. Looking
at class projects, both DETER and Utah Emulab projects peak in the 21-50
and 51-100 definition categories. DETER has more class projects in
higher-count categories (101-200 and 501-800) because it hosts larger
classes than Utah Emulab. Looking at research projects on DETER with and
without outcome, we see that outcome projects are more active, peaking
in the 21-50 definition category (31\%), with 11-18\% in each of the
2-5, 6-10 and 11-20 definition categories. No-outcome projects peak in
the 1 (21\%) and 2-5 (38\%) definition categories, with 14\% in each of
the 6-10 and 11-20 categories.


%\begin{figure*}[htbp] \begin{center} %\includegraphics[width=3in,
%type=pdf,ext=.pdf,read=.pdf]{figs/proj.size.gnu}
%\includegraphics[width=3in,
%type=pdf,ext=.pdf,read=.pdf]{figs/proj.size.cmp.gnu}
%\caption{Experiments per project. Left: DETER vs Emulab, Right: All vs
%outcome} \label{projsize} \end{center} \end{figure*}



%\begin{figure*}[htbp] \begin{center} \includegraphics[width=3in,
%type=pdf,ext=.pdf,read=.pdf]{figs/proj.user.gnu}
%\includegraphics[width=3in,
%type=pdf,ext=.pdf,read=.pdf]{figs/proj.user.cmp.gnu} \caption{Members
%per project. Left: DETER vs Utah Emulab, Right: No-outcome vs outcome}
%\label{projuser} \end{center} \end{figure*}



