
\setcounter{page}{1}
\begin{center}
    {\Large\bf Project Description -- \cah} 
\end{center}% 
% From the NSF Grants Proposal Guide: "The Project Description should
% provide a clear statement of the work to be undertaken and must include:
% objectives for the period of the proposed work and expected significance;
% relation to longer-term goals of the PI's project; and relation to the
% present state of knowledge in the field, to work in progress by the PI
% under other support and to work in progress elsewhere."


We propose to investigate, 
 build and deploy 
 a publicly accessible repository 
 of up-to-date application-level data. 
This data is sorely needed 
 in the networking and security communities.
Currently available network data with application-level information
 is often outdated and is either private
 or customized to specific, narrow research needs. 
We will address this problem 
 by  designing and deploying a publicly accessible repository of
application-level data called {\cah},  where \textit{Critter} stands for \underline{C}ontent-\underline{Ri}ch \underline{T}raffic \underline{T}rac\underline{e R}epository. We have the following design goals for \cah:
\begin{my_enumerate}
 \item Data is fresh and diverse. We will achieve this goal by creating an
       anonymous contributor network where contributors can join and leave at will,
       ensuring continuous data collection.  Our framework will be flexible
       and allow for a broad range of contribution modes, thus attracting
       diverse participants.  
\item  Data utility is maximized.  We will
       achieve this by minimally processing contributed data and allowing
       researchers to query this
	``almost raw'' data for features that interest them.
 \item Contributor privacy and identity are fully protected. 
       We will achieve this by giving contributors full control over their
       data and its usage at a fine-grained level, including mechanisms to
       withdraw data fully, store data remotely or locally, and contribute
       only what they are comfortable with.  All stored data will first be
       sanitized to alter personal and private information and then
       encrypted.
       Our secure query framework will protect privacy further by 
       allowing only
       aggregate and prevalent results to be returned to researchers, thus
       protecting data contributor privacy by using the ``hiding in a
       crowd'' approach.
\end{my_enumerate}
This research addresses the security and privacy focus
   of the Trustworthy Computing solicitation in two ways.
First,
   our work will enable 
   access to content-rich network data,
   which is
   essential to continued progress
   in networking and cybersecurity research.
Second,
   our work will explore new approaches
   to secure sharing of private data, 
   which is a recurring problem in many facets of networking and cybersecurity. 

\section{Introduction and Motivation}

Access to application- and data-layer packet information
  is vital to cybersecurity and networking research.
This content-rich data is needed for data mining and 
  for furthering efforts to reach 
  the ground truth.
Such data is
  vital for properly evaluating and tuning
  approaches in 
  many areas of cybersecurity
such as
  intrusion detection,
  steganography,
  traffic camouflaging
  and
  traffic classification.

\subsection{Challenges}
 \label{sec:challenges}
%Such data is not publicly shared because of tremendous privacy risk. e.g, may contain
%names, SSNs X
Despite the need for content-rich data,
 such data
 is not publicly available because
 of the tremendous privacy risks
 associated with its sharing.
The few content-rich datasets that are \emph{public}
 address these privacy risks by mining some features from private data and 
 synthetically generating content that resembles the original data along these features.
 Another approach to privacy risk management is to release 
only a small fraction of traffic,
 such as worm propagation traffic, that contains no obvious private data.
Both approaches result in datasets that meet only
 narrow research needs---such as
 intrusion detection evaluation
 and investigation of specific malware defenses.


Protecting content-rich data, 
 obtained from Internet users,
 is a non-trivial problem.
Packet payloads include an unbounded
 amount of private information---such as 
 addresses, personal conversations, 
 bank information, social security numbers, 
 and so forth.
To compound the problem,
 some private information
 may not appear to pose a risk---such as product and location
 preferences---but may still be used by someone acquainted with the 
 data contributor to uniquely identify him or her.

  
Standard methods for 
 protecting publicly available 
 network data through sanitization and anonymization
 are not well-suited to protect
 content-rich network data.
Currently, publicly available network data 
 from Internet users
 is \emph{sanitized}---a process
 which removes most or all of the application-level
 data and anonymizes sensitive information such as IP and MAC addresses.
Despite such mitigating measures,
 sanitized data is still highly vulnerable to both 
 active and passive
 de-anonymizing attacks~\cite{cryptopan, webtraffic, drift, advocate}.
Additionally, sanitization offers poor protection
 against future attacks.
Once 
 a user has downloaded the data,
 the provider has no control over how
 the data is used,
 and there are no mechanisms 
 for a data provider to retract published data
 once an attack has been discovered.
Due to the greater privacy risks that come with
 sharing content-rich data, 
 building upon traditional sanitization methods
 to protect privacy is not an option.
 
Fortunately, 
 in her current NSF-funded work (award \#0914780),
 PI Mirkovic
 introduces a more robust option for protecting
 publicly shared data using \emph{secure queries}, which offers both
 higher utility for researchers and better protection for data contributors.
Secure queries protect data by allowing researchers to query for 
 only aggregate features of the data,
 and preserve user privacy by 
 applying $k$-anonymity and $l$-diversity principles 
 (we discuss secure queries 
 in detail in Section~\ref{sec:secure_queries}). 
In this endeavor,
 we propose to utilize PI Mirkovic's existing work
 to help address some aspects of privacy risks associated with
 sharing content-rich data.
  
In addition to the need for content-rich data,
  there is a need for more diverse
  and more up-to-date network data, both content-rich and content-less.
The research community's current public resources
 for network data are very limited. 
Data comes from a handful of environments, 
 and is used for years or even decades after collection (e.g., Internet Traffic Archive \cite{ita}), while Internet traffic trends change rapidly and widely.
Evaluation performed
  with outdated or homogeneous sources
  can give misleadingly 
  optimistic results.
For example,
  an intrusion detection system may have a
  very low false-positive rate when tested
  with data from a university environment,
  but may generate an overwhelming number
  of false-positives in a corporate environment, 
  which renders the system useless. 
Likewise,
  a system
  may perform well on outdated traces, only to crumble when 
  deployed and subject to 
 the increased volume and diversity of present-day traffic.
 Additionally, access to fresh and diverse network data
 is needed for a deeper understanding
 of trends and emergent behavior
 such as
 the evolution of 
 botnets, 
 Internet worms,
 denial-of-service attacks,
 and prevalence of peer-to-peer traffic in networks. Such insights are needed for an 
 informed design of novel network and cybersecurity systems.

 
 \subsection{Proposed Effort}
It is unrealistic to expect that the problem of data freshness and diversity can be
addressed by approaching network service providers. Service providers
focus on routing traffic and providing good performance to their clients, and not on 
data collection and release. Due to the fear of privacy risks, user dissatisfaction and liability,
these entities are especially reluctant to publicly share their traffic data, even when it is
thoroughly sanitized and anonymized. Sharing content-rich traffic data would be 
even more unlikely.
The typical model 
 for network data collection---where
 traffic information is collected
at a router for an entire network---is clearly
 not appropriate for collecting
 application-level information, which is highly private.
 Many users would object to this blanket data collection and would leave such a network provider. 


% What is needed is a way for people to contribute their data continuously. People will only be comfortable with it
% if we allow them to control what they contribute, how is it protected and used, and if they can stop at will and pull all their data.
Instead, we plan to reach out directly
 to individual users
 willing to share their data.
 Connecting to individuals 
 directly,
 and providing mechanisms for 
 them to contribute continuously,
 ensures up-to-date 
 data from a diverse set of environments.
The only way to engage
 such individuals
 is to provide mechanisms
 for the user to contribute anonymously
 and to have
 flexible control over
 exactly what data she contributes
 and
 how her data is protected and used.
The individual must also be able
 to completely withdraw her data---both past 
 and current---at will.

   
%  just say here: this is what we propose. Highlight design goals and how we
% will meet them similar to summary part.
We propose
 to build and deploy such a framework called \cah\ that
will appeal
 both to \emph{data contributors}---individual Internet users willing to
 share their data---and to \emph{researchers}---the users of this data.
 For data contributors,
 our proposed work will
 provide a platform to 
 safely and 
 actively participate
 in research advancement.
Our goals are to fully
 protect a contributor's
 data and identity, and 
 provide a contributor with
 full, flexible control
 over how, when and why her data is used in research.
A contributor's data is protected by multiple mechanisms, as explained in 
 Section~\ref{cah}.
For researchers,
 our proposed work will provide 
 a publicly accessible repository
 of up-to-date content-rich data
 from a diverse set of environments.
Researchers will be able to query 
 the repository on demand and 
 receive aggregate information about features that interest them.
Queries will be run on ``almost raw'' data to maximize the
utility to researchers. We will minimally process the 
data to replace Private Personal Information
 (PPI) and minimize risk to
 our data contributors from third-party data stealing.

 
\section{Threat Model}

The biggest challenge we will address in our research is the privacy risk to
data contributors.  We now define our threat model.

\begin{my_enumerate}
\item{\textbf {Data Stealing.}} 
Contributor data may be stolen by a third party, not necessarily
related to \cah, e.g., via a Trojan.  A raw version of contributor data
contains much sensitive and private information that would not otherwise
exist in one place.  Data could be stolen from disk storage, 
from memory or from the network.

\item{\textbf {Data Query Correlation.}} 
 Contributor data may be queried by someone familiar with the contributor,
 or someone with auxiliary information from other sources, 
 for the purpose of linking query results 
 to a specific contributor.

\item{\textbf {Contribution Correlation.}} An observer may attempt to 
 correlate a contributor's 
 network behavior---such as IP address or a pattern of connections---to
 responses to queries
 and thereby learn private information
 about a specific contributor.

%identify contributor identity or to infer
%private information from contributor's IP address or from the pattern of her
%contributions, e.g., analyzing queries that receive a reply from this
%specific contributor. 
 
\end{my_enumerate}

In the next section we elaborate how we address these privacy risks. 
%First, may elect to host her data on her machine, thus
%never relinquishing it.
%
%
%Second, contributed data will be minimally processed 
%to modify all personal and private information (PPI), while
%preserving its statistical properties, as we discuss in Section \ref{nlp}. 
%Such processing is necessary to minimize risk to 
% our data contributors should data be stolen from their machines by some third party, 
%or should they lose their machine. 
%Third, contributed data will be encrypted to further minimize the risk of
%data stealing. 
%Four, no human apart from the contributor is ever allowed access to the raw,
%PPI-sanitized, data. Instead, researchers can query the data via our Critter-at-home
%framework, and they receive aggregate statistics (counts, distributions, etc.) of
%the traffic features they query for. 
%Five, all contact with a contributor is at 
% her discretion and
% is done through an anonymous network,
% where contributor identities are hidden both from
% researchers and the Internet at large.

\section{\cah}
\label{cah}

%Protecting data contributors'
% privacy and identities
% while simultaneously offering 
% researchers maximal utility
% is challenging.
%%We now give more details about challenges we plan to address in our research and how we will do that. Main challenges are:
%% privacy, control over data and identity, utility to researchers 
%In this section, 
% we introduce the basic proposed framework 
% for Critter-At-Home.
%In the following subsections
% we give more details about the challenges involved,
% as well as how
% we plan to address these challenges in our research.

\begin{figure}
	\begin{center}
		\includegraphics[scale=0.10]{../figures/critter-at-home.pdf}
	\end{center}
	\caption{The basic overview of \cah.}
	\label{fig:all}
\end{figure}

\cah\ is a set of modular components we call the \textit{Critter client},
 housed on a data contributor's local
 machine,
 and a centrally located
 \emph{Critter server}, whose task is to collect and disseminate researcher queries and replies,
 and to apply some privacy protections to reply data.
Figure~\ref{fig:all} shows the basic
 components for \cah.

Data contributors (shown in gray in Figure~\ref{fig:all})
 run the Critter client program on their local machines.  The Critter client
collects the contributor data via its Recorder module, according to the
Recording Policy.  The client then engages the PPI Sanitizer module
(Section~\ref{sec:ppi}) 
 to replace Private Personal Information (PPI),
 encrypts the data with the contributor-generated symmetric key and then hands 
 encrypted data over to the Archiver module. 
Both PPI-sanitization
 and encryption 
 are done to address the \textbf{data stealing} threat in our threat model.
A contributor may elect to store the data locally or 
 at the remote location managed by the Critter server. This decision is recorded in the 
 Storage Policy and honored by the Archiver.

If the data is stored locally, the Query Handler module in the
 Critter client polls the Critter server for new queries, whenever the
 contributor's machine is connected to the Critter network.
The Handler decrypts the data and processes the query if it is permitted by the
 Query Policy.
Before returning results to the Critter server, 
 the Handler encrypts results and a portion of the query policy---which must be resolved
 centrally during aggregation---using the Critter server's public key.
The process is similar for data
 stored on the Critter server,
 except the need for polling
 to obtain new queries is eliminated, and the data is 
 encrypted with the server-generated symmetric key.

To protect against \textbf{contribution correlation}, 
 contributors connect to the Critter server 
 via an anonymizing network, such as the Tor network \cite{tor, torweb}
 (see Section~\ref{sec:tor} for more details).

Researchers submit queries and obtain results via
 a community portal, which passes these queries 
 on to the Query System, located on the Critter server.
Queries submitted to the Critter server
 through the public portal
 are stored,
 and handed out to contributors
 whenever they join the Critter network. 
Researchers can query 
 on any features which interest them but 
 they only receive
 aggregate responses (counts, histograms, etc.)
 to address \textbf{data query correlation}.
 These responses are synthesized from contributors' individual responses, after applying their 
 Query Policies to ensure ``hiding in the crowd''
 (Section~\ref{sec:secure_queries}).
Aggregate responses may be finalized
 after a configurable time frame or after a predefined response count is reached;
 alternatively, a query may be open-ended, with all responses to it periodically aggregated and returned to the researcher.
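As an illustration, the ``hiding in a crowd'' aggregation can be sketched as follows: a value is released in the aggregate response only if at least $k$ distinct contributors report it. The threshold $k$ and the response format here are illustrative assumptions, not the Patrol design.

```python
from collections import Counter

def aggregate_responses(responses, k=5):
    """Build a histogram over per-contributor responses, releasing only
    values reported by at least k distinct contributors."""
    counts = Counter()
    for contributor_values in responses:
        # Count each value at most once per contributor, so a single
        # contributor cannot push a rare value over the threshold.
        for value in set(contributor_values):
            counts[value] += 1
    return {value: c for value, c in counts.items() if c >= k}

# Example: a hypothetical query for browser User-Agent families.
responses = [
    ["Firefox"], ["Firefox"], ["Firefox", "curl"],
    ["Firefox"], ["Firefox"], ["Chrome"],
]
print(aggregate_responses(responses, k=5))  # {'Firefox': 5}
```

Rare values such as curl and Chrome above are suppressed, so a researcher learns only what is prevalent across the contributor population.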

In the following subsections we provide details
 about the challenges involved in building \cah\ and 
 how we intend to address these challenges.
%We discuss first the aspects of data collection (Section~\ref{recording}),
% and how PPI-Sanitization addresses the \textbf{data stealing} threat
% (Section~\ref{sec:ppi}).
%We then 
% go into details on how researchers can query
% collected data (Section~\ref{sec:query})
% and how aggregated responses address the \textbf{data query correlation}
% threat (Section~\ref{sec:secure_queries}).
%Last we discuss how we anonymize participation,
% using a mix network
% to address the \textbf{contribution correlation} threat
% (Section~\ref{sec:tor}).

\subsection{Collecting Data}
 	\label{sec:data}

\begin{figure}
	\begin{center}
		\includegraphics[scale=0.1]{../figures/data_path.pdf}
	\end{center}
	\caption{Data collection and storage in \cah.}
	\label{fig:data}
\end{figure}

In this section 
 we discuss how we collect and store data
 from data contributors.
Figure~\ref{fig:data} depicts the basic process
 for data collection and storage.
Data is recorded, 
 any Private Personal Information (PPI) is replaced,
 and data is then stored locally or on a remote storage system
 in an encrypted format.

\subsubsection{Recording Data}
\label{recording}

% Our vision is that we will give users some program to download, relatively
% small and portable.  The program collects the data.
The data collection process 
 takes place locally on a data contributor's machine.
Our vision is to build a client program that is
 cheap to run
 and highly portable.
The program needs to be cheap to run resource-wise
 so as not to impact a contributor's
 regular computer use.
It needs to be
 portable between environments
 to encourage a diverse set of contributors 
 to participate.

Application-level data could be recorded at the network layer or by instrumenting applications.
We chose to record at the network layer for portability.
Data capture at the network layer
 is well supported by the
 vast majority of available platforms through
 a variety
 of libraries---such as libpcap~\cite{libpcap}, winpcap~\cite{winpcap}
 and WAND's libtrace~\cite{libtrace}.
Any data that is encrypted by an application
 would be recorded as encrypted traffic
 in Critter.

\textbf{REC:} The major tasks in this segment of work are the development of
the recorder's code and the design and implementation of the recording
policy language.  We desire to support a range of contributors with
different expertise and privacy concerns.  We will work to develop a
flexible and fine-grained policy framework, allowing interested contributors
to specify their recording policy at several levels.  For example, they will
be able to permit or
 prohibit recording at the application level 
 (``never record traffic from BitTorrent''),
server level (``never record traffic to/from my bank site''),
 location level (``never record when my laptop is on my home network''), or
 message level (``never record HTTP POST traffic'').  Policies should be
 easy to write by unsophisticated contributors and policies should be able
 to express both coarse-grained and fine-grained rules.  We will design a
 default, conservative recording policy for those contributors who wish to
 remain hands-off.
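As a sketch of how such deny rules might be evaluated, consider the following; the rule format and level names are hypothetical illustrations, not the policy language we will design.

```python
# A minimal sketch of recording-policy evaluation.  The rule format
# (level/value pairs) is hypothetical, for illustration only.
DEFAULT_POLICY = [
    {"level": "application", "value": "bittorrent"},   # never record BitTorrent
    {"level": "server",      "value": "mybank.com"},   # never record bank traffic
    {"level": "location",    "value": "home-network"}, # never record at home
    {"level": "message",     "value": "HTTP POST"},    # never record POSTs
]

def should_record(packet_meta, policy=DEFAULT_POLICY):
    """Return True unless a deny rule matches the packet's metadata."""
    for rule in policy:
        if packet_meta.get(rule["level"]) == rule["value"]:
            return False
    return True

pkt = {"application": "http", "server": "example.org",
       "location": "campus", "message": "HTTP GET"}
print(should_record(pkt))  # True: no deny rule matches
```

A conservative default policy would simply ship with a longer deny list; sophisticated contributors could edit or extend the rules.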
 
\subsubsection{PPI Sanitizer}
	\label{sec:ppi}


After recording, our PPI sanitizer will modify any PPI found in the recorded
data, before encrypting and storing it.

There is no generally accepted, exhaustive list of what constitutes Private
Personal Information (PPI).  Instead, PPI is any information its owner
perceives as such.  Prior work in network data sharing \cite{cryptopan, webtraffic,
drift, advocate} has shown that even data that is not PPI at first
glance can be used to uniquely identify its contributor.  Thus
identification and elimination of all PPI is extremely difficult.
This difficulty has been a major obstacle to releasing data publicly.

However, since \cah\ will not publicly
release raw or PPI-sanitized data, we do not need to solve this problem.  
Instead, we  will perform
PPI sanitization to minimize the risk to the data contributor should her
data be stolen by a third party attacker that gains unauthorized access to
her machine, e.g., via an intrusion or a Trojan.  

\textbf{PPI1:} We will first investigate which data should be considered PPI by our
sanitizer.  Based on our definition of the threat model, we believe that at
least individual names, email addresses, phone numbers, addresses,
usernames, passwords, financial information and social security numbers
should be identified as PPI, since such data is sought after by
attackers for financial gain.
We will investigate other pieces of private
and personal data, relating them to applications that generate such data. 
Critter will also allow contributors to modify the PPI list.


During sanitization, we wish to retain as much
 research utility as possible,
 so instead of removing PPI, we propose to replace it with information 
 that is of the same type and is somehow contextually similar to the original information. 
Performing such replacement has two
 major challenges.  First, some PPI has a well-defined format, such as
social security numbers, but other PPI, e.g., usernames and passwords,
does not; thus automatically identifying PPI in free text is challenging.
Second,
 it is hard to generate replacement items for identified PPI so as to
 preserve the data type and some inherent properties of this type, and thus
 preserve research utility.  For example, a researcher may want to
 investigate how often a phishing attempt is masked as an advertisement for
 Russian female services.
To facilitate this research, personal names
 should be replaced by other personal names, but we should also make an
 effort to preserve gender and perhaps name origin during replacement; thus, a
 Russian female name should be replaced by another Russian female name.


\textbf{PPI2:}  To address the challenge of identifying 
 PPI, 
 we propose to investigate
 four approaches.

First, we intend to explore
 natural language processing (NLP) tools
 which can identify named entities.
Tools such as 
 the Stanford Named Entity Recognizer
 (NER)~\cite{ner, extract} are quite capable 
 of identifying
 sequences of words
 which are personal names and names of places. We will explore other similar tools in the 
 NLP domain to see how they can apply to our problem.

Second,  
 we will explore use of context information to
signal the presence of PPI.
For example, an HTTP POST sent in response to
 a password-type form field
 would contain enough context
 to identify the PPI.  We expect to mine context from application headers,
field names in Web forms, etc.  We will also explore how to identify and use
context in free-form text such as when someone specifies their username and
password in an E-mail message. Specifically we will survey NLP approaches for
term extraction \cite{extract} that also use context information to identify relevant terms in free text.

Third, we will solicit contributor input and feedback on PPI identification. 
Contributors could be walked through
 a questionnaire which would 
 help them itemize PPI.
With the right interface, 
 contributors could also help train
 and sanity-check 
 PPI identification
 during early stages of deployment.
We will investigate what exactly could be asked of a human user, how to ask it, how much
burden it would introduce, and how willing users would be to provide such data.

Lastly,
 regular expressions may prove
 useful at identifying 
 some types of PPI, such as
 social security numbers, that have a well-defined format.
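For instance, a few well-formatted PPI types can be flagged with simple patterns; the expressions below are deliberately simplistic illustrations and would be combined with the NLP- and context-based approaches above.

```python
import re

# Illustrative patterns only; real PPI detection needs broader rules.
PPI_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def find_ppi(text):
    """Return (type, match) pairs for well-formatted PPI in free text."""
    hits = []
    for ppi_type, pattern in PPI_PATTERNS.items():
        hits.extend((ppi_type, m) for m in pattern.findall(text))
    return hits

sample = "Reach me at alice@example.com or 555-867-5309; SSN 123-45-6789."
print(find_ppi(sample))
```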

In preliminary work with a graduate student over the past semester,
 we have compiled a starting list of
 types of PPI.
Table~\ref{tab:andy} shows our 
 initial findings for common
 PPI and some candidate 
 methods for detection.


\begin{table}[h]
\begin{tabular}{p{1in} p{3in} p{2in}}

\textbf{Type of PPI} & \textbf{Features to Preserve}& \textbf{Detection} \\
\hline
Names & Preserve character distribution and gender possibly. & Named Entity Recognition \\
\hline
Addresses and Locations & Preserve character distribution as much as possible.
Retain address formatting (e.g., Ave or Ave.\ or Avenue) and level of address
specificity.  Also, if a location is expressed in GPS coordinates, replace it
with valid GPS coordinates. & Named Entity Recognition, regex, (optional)
dictionary search \\
\hline
Social Security Numbers & Retain formatting (e.g., 123-45-6789 or 123456789) &
regex, and context search\\
\hline
Credit Cards & Retain formatting (e.g., 1234 5555 6498 6789) & regex,
(optional) dictionary search \\
\hline
Usernames & Retain usage of dictionary words and distribution of
letters and numbers. & identify from context and user input\\
\hline
Passwords & Substitute dictionary words with another dictionary word of same
length (and ideally character distribution).  Then substitute the rest of
the word with a similar distribution of letters and numbers. & identify from
context \\
\hline
Dates & Retain formatting (e.g., 12/31/91 or Dec.\ 31 1991) & Named Entity
Recognition, regex\\
\hline
E-mail Addresses & Preserve character distribution as much as possible.& Named
Entity Recognition, regex \\
\hline
Account Numbers & Retain formatting as well as number of
digits / characters. & Identify from context and regex\\
\hline
Bank Names & None & Named Entity Recognition, (optional) dictionary search \\
\hline
Monetary Values & Retain formatting & regex and Named Entity Recognition \\
\hline
\end{tabular}
 \caption{Preliminary results from exploring  PPI types
 and features to preserve during sanitization.  \label{tab:andy}
}
\end{table}

\textbf{PPI3:} To address the challenge of replacement,
we will first investigate what features of each
data type may be relevant to researchers.  
We will seek to enumerate
features that researchers have used
in previously published work that were mined from
private traces of application-level data.
These will be the features we will seek to preserve 
during our sanitization. 
We plan to investigate two sanitization methods: 
 random generation of a replacement item 
 and choosing replacements from a
 large precompiled list for each data type, annotated with select features
 relevant for that data type.
The latter option requires 
 a significant amount of research
 into the tradeoff between
 the characteristics retained in the data
 and the level of information 
 leaked. 

For example, 
 if a researcher wished to determine
 how often dictionary words
 were used in passwords,
 random replacement of all 
 passwords would render the 
 data useless for such a query.
If instead, 
 all dictionary words found in passwords
 were replaced by randomly selected
 dictionary words,
 the data would retain utility
 for such a query.
However, 
 any attacker who managed to obtain the
 sanitized data, e.g., via an intrusion onto the contributor's machine,
 would then have an easier
 time guessing original passwords.

Retaining utility of data
 often requires maintaining 
 consistency across data.
%For example,
% if all instances of a contributor's password 
% were consistently
% replaced with the same string,
%the resulting data could be used 
% to investigate how often web users
% use the same password for multiple sites.
%If the replacement were consistent across
% multiple contributors,
% the data could be used to research
% how often web users
% select common passwords.
Consistent replacement is 
 desirable because the data is
 then useful for asking how
 unique or common certain occurrences 
 are across contributors.
We will investigate how to achieve consistent and meaningful replacement of
PPI items using hashing techniques.
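One way to realize consistent replacement, sketched below under assumed details (a per-contributor secret key, digit-only replacement), is to derive the replacement from a keyed hash of the original item: the same item always maps to the same pseudonym, while formatting is preserved.

```python
import hashlib
import hmac

SECRET_KEY = b"contributor-local-secret"  # hypothetical per-contributor key

def consistent_replace_digits(item, key=SECRET_KEY):
    """Replace each digit of a PPI item with a pseudorandom digit derived
    from a keyed hash of the whole item.  Formatting characters (hyphens,
    spaces) are preserved, and identical inputs yield identical outputs,
    giving consistency without storing a lookup table."""
    digest = hmac.new(key, item.encode(), hashlib.sha256).hexdigest()
    out = []
    i = 0
    for ch in item:
        if ch.isdigit():
            out.append(str(int(digest[i % len(digest)], 16) % 10))
            i += 1
        else:
            out.append(ch)  # keep hyphens, spaces, and other formatting
    return "".join(out)

replaced = consistent_replace_digits("123-45-6789")
print(replaced)  # retains the 3-2-4 hyphenated SSN format
```

Because the mapping depends on a secret key, an attacker who steals the sanitized data cannot invert it, yet repeated occurrences of the same item remain linkable for research queries.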

Table~\ref{tab:andy} lists features 
 we identified as 
 important to retain
 for each type of PPI
 we investigated
 during our preliminary work. In our proposed research we
 plan to modify and expand this table as we gain better understanding of 
 how researchers use application-level data today.

 % Even if sensitive data is encrypted users may put sensitive information
 % in non-encrypted data such as instant message, email, etc.
% Say this is one of main parts of our research
% I'd say we did preliminary research on this with a grad student. We will use NLP approach. We will identify sensitive items such as =- full list that andy identified. I would also use his notes on how to deal with each type of info on that list. 
% Say which NLP tool we will use
% Say users can examine our list of sensitive data and how we sanitize it and can change/add to this
% Say no human ever sees raw data apart from contributor.

\subsubsection{Remote Storage System}
\label{storage}
%After PPI is replaced
% by the PPI-sanitizer,
% we plan to pass data to an archiver
% which handles data buffering, encryption
% and storage.
%
%We plan to offer to contributors flexible archiving 
% support, which they will control
% through the storage policy.
%Contributors who wish to retain more control
% over their data 
% can store it locally.
\textbf{STO:} Some contributors may elect to store their data 
 on our remote storage system, e.g., to avoid filling their disk space.  
 In this task we will design and implement the Archiver module and the remote storage system. 
We will design the Archiver so that
 data is sent to the remote storage system
 when the contributor's machine is idle; thus CPU usage and data upload
 will not impact the contributor's
 quality of service.
Sent data 
 will be encrypted with the server's public key and sent over an anonymizing network.

We will further set up a single server at USC/ISI to act as the remote storage system in our \cah\ implementation. 
This server will support 12 TB of storage space, 
which may fill quickly if \cah\ membership exceeds our expectations.
Expanding this storage, possibly to cloud systems, is likely to happen if \cah\ becomes
very popular, but it is 
beyond the scope of this proposal. 


\subsection{Querying Data}
	\label{sec:query}
% Say this is another main part of our research.  We will use Tor network. 
% Users will be able to pull queries and supply results.  How queries reach
% users and how results go back.  there will be some delay.  say our program
% on users machines will by default try to pull queries every so often.
\begin{figure}
	\begin{center}
		\includegraphics[width=\textwidth]{../figures/query_path.pdf}
	\end{center}
	\caption{\small{An overview of the querying process in five steps: (1) A
	researcher submits a query via the public portal. (2) Critter
	clients connect and poll for new queries via an anonymizing network. (3) The researcher's
	stored query is sent to clients. (4) Patrol processes the query if
	the Query Policy permits, and returns encrypted results along with
	information on how a contributor wants her response aggregated. (5)
	Aggregated results are stored by the query system and can be
	retrieved by the researcher.
        Patrol components (shown in light
         blue) will come from PI Mirkovic's NSF-funded project on secure queries.}}
	\label{fig:query}
\end{figure}


Figure~\ref{fig:query} illustrates our  proposed process for 
 querying data in \cah.
%Researchers specify queries
% through a community portal
% and 
% these queries are then stored in the query system.
%Data contributors connect to the Critter server,
% poll for new queries
% and respond to these queries 
% through the query handler in
% their local \cah\ program.
%Results from queries are aggregated by the Critter server
% and returned via the community portal.

\textbf{QUE1:} In this task, we will design and implement the Critter server and the Query System. 
The Query System implements a store-and-poll mechanism
 for queries, because 
 contributors may only connect to the \cah\ network
 intermittently,
 even if they are running the data recorder on 
 a continual basis.
By storing queries and collecting responses
 over a period of time
 we can maximize the number of contributors
 responding to a query.
Researchers will be able to configure
 how long to wait or how many responses to wait for
 before returning results.
We also anticipate 
 supporting features such as 
 rolling aggregation and 
 ongoing queries which return
 periodic results.
As with 
 all other communication within the Critter network,
 query polls, queries and responses
 go through an anonymizing mix network
 to protect against contributor correlation.
Data housed on the central storage system can be queried
 more promptly, but otherwise it is treated the same as 
 locally stored data. 
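The store-and-poll design can be illustrated with the sketch below, which models the server-side Query System. All names here (e.g., `QuerySystem`, `poll`, `ready`) are our own illustrative choices, not a finalized API; encryption, the Tor transport, and result aggregation are omitted.

```python
class QuerySystem:
    """Sketch: queries wait server-side, intermittently-connected clients
    pick them up, and a query completes once it has enough responses or
    its deadline passes."""

    def __init__(self):
        self.queries = {}  # query_id -> query record

    def submit(self, query_id, script, min_responses, deadline):
        """A researcher stores a Trol query with its completion criteria."""
        self.queries[query_id] = {"script": script,
                                  "min_responses": min_responses,
                                  "deadline": deadline,
                                  "responses": []}

    def poll(self, now):
        """Called by a client over the anonymizing network: list open queries."""
        return [qid for qid, q in self.queries.items()
                if now < q["deadline"] and len(q["responses"]) < q["min_responses"]]

    def respond(self, query_id, encrypted_result):
        self.queries[query_id]["responses"].append(encrypted_result)

    def ready(self, query_id, now):
        """A query's results are released once enough contributors answered
        or the researcher-specified waiting period elapsed."""
        q = self.queries[query_id]
        return len(q["responses"]) >= q["min_responses"] or now >= q["deadline"]

qs = QuerySystem()
qs.submit("q1", "count emails on New Year's Eve", min_responses=2, deadline=100)
qs.respond("q1", b"enc-result-A")
qs.respond("q1", b"enc-result-B")
```

Note that `poll` stops offering a query once it is satisfied, which bounds how long any one query circulates in the network.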

\subsubsection{Secure Queries}
	\label{sec:secure_queries}
	
To protect against \textbf{data query correlation} we will 
draw on our NSF-funded project
 on secure queries. 
When sharing privacy-sensitive data via our secure queries, the
original data always remains under the control of its owner.  The data owner
releases information through a query language---Trol---and
an online portal---Patrol---the language interpreter.  
Queries are submitted by researchers in
the form of scripts written in the Trol query language.  They are compared to
owner-established policies to evaluate if they pose privacy risk.  This
evaluation is automatic and fine-grained, thus queries on sensitive fields
are prohibited \textit{only in the specific context that poses a privacy
risk}.  Human involvement is necessary only to establish the query policies
and our work provides sample policies that are likely to meet the needs of
many data owners.  Permitted queries are run on the original data and their
results are returned to researchers.  Results consist of aggregate features
such as counts, averages, histograms, distributions, etc.  Individual packet
information is never returned as a result.

%While permitting only aggregate results hinders exploratory research, where
%researchers seek to understand a novel phenomenon by observing individual
%packets, it is necessary to provide privacy guarantees against active and
%passive attacks on trace sanitization.  Our secure query framework aims to
%facilitate validation of mature research, with well-formed hypotheses.  It
%can also be used for extraction of select traffic features to be fed into a
%realistic traffic generator.

Trol and Patrol protect contributor privacy in several ways. First, only aggregate
results are returned to the researchers, thus many active and passive
attacks that work against sanitized network traces are ineffective in our
context.  Second, Trol restricts use of its language constructs over
sensitive fields, according to the data-contributor-specified policy.  For example,
a contributor may require that manipulation of IP address values from the trace
data can only be allowed if these are first hashed.  Third, Patrol ensures
that $k$-anonymity \cite{kdb} and $l$-diversity \cite{ldb} criteria
are met for any result returned to the researcher, thus applying a ``hiding in a
crowd'' approach for privacy protection.  For example, if a researcher asks
for a count of email messages that are sent on New Year's Eve, $k$-anonymity
ensures that the reply he will get is the set of bins associated with
\textit{ranges of counts} such that at least $k$ different contributors'
replies fall in each bin, and $l$-diversity ensures that the number of bins
is higher than $l$.  Via his Query Policy a contributor may decide to enforce
one or both of these criteria and may customize $k$ and $l$ values per data
field type.  Fourth, Patrol audits researcher queries looking for attempts
to defeat its privacy protection mechanisms by tracking and faking, as
explained in \cite{nda}.
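The ``hiding in a crowd'' binning of the New Year's Eve example can be sketched as below. This is a toy illustration assuming a simple greedy binning strategy; Patrol's actual algorithm may differ.

```python
def bin_counts(counts, k, l):
    """Group per-contributor counts into range bins so that each bin holds
    replies from at least k contributors (k-anonymity) and there are more
    than l bins (l-diversity); otherwise refuse to release a result."""
    values = sorted(counts)
    bins, current = [], []
    for v in values:
        current.append(v)
        if len(current) >= k:           # bin has enough contributors: close it
            bins.append((current[0], current[-1], len(current)))
            current = []
    if current:                          # fold an undersized remainder into the last bin
        if not bins:
            return None                  # too few contributors overall
        lo, _, n = bins.pop()
        bins.append((lo, current[-1], n + len(current)))
    if len(bins) <= l:
        return None                      # too few bins: unsafe to release
    return [{"range": (lo, hi), "contributors": n} for lo, hi, n in bins]

# Nine contributors' email counts; k=3 and l=2 yield three safe range bins.
result = bin_counts([1, 2, 3, 4, 5, 6, 7, 8, 9], k=3, l=2)
```

The same call with a stricter policy (e.g., `l=3` or `k=10`) returns `None`, modeling Patrol refusing the query.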

\textbf{QUE2:} Our Trol language is finalized and our Patrol interpreter is in the last
stages of its development.  We expect that both will be ready for use by the
time this proposal is funded.  In our proposed work we will need to
slightly modify Trol and Patrol framework to meet the unique needs of
\cah\ data processing.  First, for those contributors who elect to store
their own Critter data, we will have to bundle Patrol with our Critter
client code distribution.  This means that our Patrol code must be portable
to multiple operating systems and have no large dependencies, and that it
should be small and cheap to run on a variety of hardware types, some
presumably very old.  Second, in our original secure query framework, all
data processing and privacy protections are applied at a central location
under the data contributor's control.  In our \cah\ framework some
processing and privacy protections will be performed by the Critter client and
others by the Critter server.  Specifically, $k$-anonymity, $l$-diversity and
auditing must be done at the server side.  Further, since data being
processed at the server belongs to multiple contributors, these contributors may have
different $k$ and $l$ settings.  Finally, some Trol language constructs may
need to be modified to support manipulation of application-level data and
application-field-specific operations.  In our proposed research we will
identify and apply necessary Trol and Patrol modifications, and integrate
them with the Critter client and the Critter server code.

\textbf{QUE3:} Some contributors may desire very tight control over the use of their data, and may 
wish to exert it manually. Patrol's policy language already enables very fine-grained specification of
what queries can be run over which data fields. We will extend this query policy language with rules specifying that 
each query must be manually approved by the contributor before being run on their data. 


\subsection{Anonymizing Participation}
	\label{sec:tor}

To protect against \textbf{contribution correlation},
 we plan to use an anonymizing mix network
 for
 all communication between a Critter client
 and the Critter server or the remote storage system.

\textbf{TOR:} We propose to use the Tor network as our anonymizing mix network.
 The Tor network is publicly accessible and extensively used for
 anonymous communication worldwide.  In this segment of our proposed work we
 will develop and implement Critter client-server communication
 over the Tor network. 


Tor---which stands for The Onion Router---encrypts traffic in layers
 and sends encrypted traffic
 through an overlay network of relays
 run by volunteers~\cite{tor,torweb}.
At each hop along the path,
 the Tor relay decrypts a layer of the Tor encryption
 to determine where to send the traffic next.
The first hop in the Tor network can learn that
 a contributor is sending something,
 but not that it is Critter-related.
The last hop can learn that
 the data is Critter-related, 
 but not where that data
 originated.
In other words, the IP address of a sender
 and the address of the recipient
 are never both in clear text
 at any point along the path.
An observer
 eavesdropping at any one point
 is unable to identify both 
 end points of the connection. 
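The layered-encryption idea can be illustrated with a toy model in which ``encryption'' is simulated by nesting, so each relay learns only the next hop. This illustrates the principle only; it is not Tor's actual protocol or cryptography, and the names are ours.

```python
def wrap(message, destination, relays):
    """Build an onion: the innermost layer holds the real destination and
    message; each outer layer names only the next relay on the path."""
    onion = (destination, message)
    for relay in reversed(relays[1:]):   # wrap innermost layer first
        onion = (relay, onion)
    return relays[0], onion              # entry relay, wrapped payload

def peel(onion):
    """A relay 'decrypts' one layer: it learns only where to forward next."""
    next_hop, inner = onion
    return next_hop, inner

relays = ["relay-A", "relay-B", "relay-C"]
entry, onion = wrap(b"critter-report", "critter-server", relays)
hops = [entry]
while onion[0] != "critter-server":      # each relay forwards to the next hop
    next_hop, onion = peel(onion)
    hops.append(next_hop)
```

After the loop, only the final layer names the Critter server, so no single relay ever sees both the contributor's address and the destination.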

Tor does not protect against 
 end-to-end correlation---a form of attack
 where an observer is able to see
 traffic at both the entry point 
 and exit point of the overlay network.
While this threat
 could potentially identify an IP address
 as belonging to a Critter participant,
 no further information is exposed, i.e., there is no way to 
 establish if this is a Critter data contributor or Critter data user.
The contributor would not be linked with
 a query result because of the delay in replying to queries and the aggregation 
 of query results, i.e., the ``hiding in a crowd'' protection.
Any data the contributor sent to the remote server
 likewise could not be linked.
Tor does not provide end-to-end encryption but Critter's traffic will
be encrypted end-to-end before it is handed off to Tor. 

While an exit node will not be able to
 see Critter traffic or know where it is coming from,
 there is potential for linking a limited amount 
 of a contributor's non-Critter traffic
 with their participation in Critter~\cite{manils10}. 
 We will investigate if building Tor
 directly
 into the Critter client is necessary to avoid such linking.

\section{Interfacing with {\cah}  Contributors}
 \label{sec:interface}
 
The success of {\cah} is dependent on contributors' 
 satisfaction with the framework
 and our ability to recruit contributors.
Much of our proposed work 
 addresses the challenge of safeguarding
contributors' privacy, which has so far been a major obstacle to
 obtaining content-rich data.
Another aspect of attracting contributors is achieving
 ease of use of the Critter framework.

\subsection{Critter User Interface}
\label{ui}

In this research segment, we will investigate how to achieve
the best ease-of-use,
 quality of service (both for Critter and regarding Critter's impact on other
	applications),
 and flexibility of the {\cah} framework.
Success in this endeavor 
 will hinge on Critter's interface with data contributors.

Contributors will interact with Critter
 through the policy engine
 and, optionally, by examining and vetoing 
  research queries that can be run on their data.
We anticipate supporting
 a wide variety of contributors
 from sophisticated computer experts
 to naive home users.
We expect that some contributors
 will want to be actively involved,
 maintaining tight control over their data
 while others will wish to be hands-off. 

\textbf{UI1:} We will investigate related research on
 user interface design 
 and specifically design principles for
 usability of security mechanisms~\cite{zurko, bunnig09, lederer}.
Lederer et al. identify five pitfalls to avoid when designing 
 systems which have privacy implications: 
(1) emphasizing configuration over action, 
(2) lacking coarse-grained control,
(3) inhibiting existing practice,
(4) obscuring potential information flow, and
(5) obscuring actual information flow~\cite{lederer}.

We intend to address the first three pitfalls
 by providing a simple interface which handles
 policy configuration at a coarse-grain level.
Fine-grain control options will be offered to data contributors 
but their use will not be required.
Similarly, for all interactions with contributors we will attempt to 
minimize the amount of input we require and to offer
 additional detailed information and options that can be
 explored by more-involved contributors.

We will need further research to address pitfalls associated with obscuring information flow.
We expect these pitfalls can be addressed 
 in three ways.

\textbf{UI2:} First, 
 with the complexity of today's systems
 even sophisticated computer users
 are not necessarily fully aware
 of all the network communication done on their behalf.
We will need to investigate methods,
 such as signature-based traffic classification,
 which give feedback to a contributor
 about what types of traffic
 Critter records.
After a contributor installs a Critter client, there will be 
a trial period
 during which Critter stores only the general types 
 of traffic the recorder sees,
 and provides high-level summaries for the contributor to evaluate 
 before they commit to contributing. This should help users
 set up a recording policy they are comfortable with. 
 We further plan to develop a Compliance Checker tool that contributors can periodically run over their collected
data and which will summarize the general types of traffic that are being recorded and how they align
with the recording policy.
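A minimal sketch of the kind of type-level summary and compliance check we have in mind follows. The port-based signatures and function names are hypothetical placeholders; a real Compliance Checker would use proper signature-based traffic classification rather than ports alone.

```python
# Hypothetical signatures: destination port -> coarse traffic type.
SIGNATURES = {80: "web", 443: "web", 25: "email", 143: "email", 53: "dns"}

def summarize(flow_ports):
    """High-level summary shown to the contributor during the trial period:
    only the general type of each recorded flow, never its content."""
    summary = {}
    for port in flow_ports:
        kind = SIGNATURES.get(port, "other")
        summary[kind] = summary.get(kind, 0) + 1
    return summary

def check_compliance(summary, allowed_types):
    """Compliance Checker sketch: flag traffic types that appear in the
    recording even though the recording policy does not permit them."""
    return sorted(kind for kind in summary if kind not in allowed_types)

# Four recorded flows; the contributor's policy permits only web traffic.
recorded = summarize([80, 443, 25, 53])
violations = check_compliance(recorded, allowed_types={"web"})
```

A non-empty `violations` list would prompt the contributor to tighten the recorder's capture rules or relax the policy.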

\textbf{UI3:} Second, we will investigate how to effectively 
 communicate different policy options to 
naive users, and how to produce a summary report of
existing policies that are in effect on their Critter client.
Both of these should be done in natural language to maximize understanding
by unsophisticated users.
We expect to explore and leverage
existing research into design principles for
 usability of security mechanisms, and research in natural language generation, for these tasks.

\textbf{UI4:} Third, we may need to communicate researcher queries to those
contributors that have elected to examine them. This communication 
again must be done in natural language, generated from Trol queries.  

\subsection{Recruiting Contributors}
\label{recruiting}

\textbf{RCR:} A major effort will be invested to attract contributors
 and advocate the reasons for data contribution.
Promoting the advantages of data contribution
 will be easiest within
 the research community because
 the community has direct experience
 with what data can do for advancement.
For contributors though, \cah\ is not just about advancing research,
 but also about learning more about their systems and Internet behavior.
Contributors, if they choose,
 can find out how unique or common
 their behavior is without relinquishing their privacy,
 and track what types of traffic their computer generates---both malicious
 and non-malicious.
Wide participation in projects like
Panopticlick~\cite{browser-uniqueness}---which
 give information on how unique and trackable a user's browser is---indicates
 that there is a broad audience
 interested in learning how their Internet behavior
 compares to others.

We plan to attract contributors by advertising \cah\ at professional meetings,
on technical mailing lists, and at science and technology fairs. We will also seek 
outreach insights from similar projects that have been successful such as
 SETI@home~\cite{seti,boinc}, Panopticlick~\cite{browser-uniqueness}, etc.

%\section{Traffic Generation: TrafGen}
%\label{trafgen}
%
%Traffic generation with realistic application-level information 
% is one of 
% the many
% use cases for the
% content-rich data provided by our Critter-at-home system. 
%We discuss here this particular use case for 
% \cah because of our related 
% NSF-funded work with realistic traffic generation.
%
%The key reason
%why realistic traffic generation is hard is that ``realistic'' means
%different things to different people.  A researcher testing a defense, which
%detects largest hitters, may care only that the traffic volume per source
%resembles values seen in real networks.  Another researcher testing a DDoS
%defense requires congestion-responsive traffic generation but may not care
%about address distribution or traffic contents.  All existing traffic
%generators have a \textbf{fixed definition of realism}.  This definition --
%a set of traffic dimensions that users supposedly care about -- is
%hard-coded in the generator's code.  The generator then mines information
%about only these dimensions from traffic traces and it generates traffic
%that fits the mined values.  The generated traffic is ``realistic'' but only
%along those fixed dimensions.  A researcher caring about a different
%dimension set must change the generator's source code, which is often a huge
%effort, in order to get the desired realism.
%
%In our NSF-funded effort, we are building a traffic generator whose
%definition of realism can be fully specified by a user.  The generator
%extracts models for user-selected features out of network traffic traces and
%reproduces the traffic that fits those models, thus achieving level of
%``realism'' that a given researcher cares about.  Model extraction deploys
%our secure-query framework, i.e., Trol and Patrol, to obtain information
%about distributions of selected features' values in network traces.  We
%expect that it will be straightforward to replace our current generator
%input with Critter's replies to researchers to generate realistic traffic at
%the application layer.  The first code release of our traffic generator will
%occur before the start of the effort in this proposal.

\section{Evaluation}
	\label{sec:eval}

We will start the evaluation phase early in the project, as soon as we have prototypes of
any of the modules we have proposed. The evaluation will then continue until the end of the project,
providing feedback to our design and prototyping. We now describe how we plan to evaluate each module.

 \textbf{E-REC:} We will evaluate the Recorder module code to measure: (1) its CPU and memory cost, (2) the 
accuracy of its traffic capture at high rates, and (3) its portability. This evaluation will require access to a variety
of hardware and operating systems. We will perform it on  DeterLab~\cite{deter, deterweb} 
 and Emulab~\cite{emulab} testbeds, 
which host a variety of hardware systems and support multiple versions of popular operating systems. During
evaluation we will generate varying traffic rates, and also vary traffic capture rules, while measuring operation 
cost and accuracy of the module.
 
 \textbf{E-PPI:} We will evaluate the PPI Sanitizer in multiple environments. During 
initial development we will use publicly available sources of application-level data such as the Enron email
archives~\cite{enron1, enron2},
AOL search data~\cite{aol}, 
 and the Netflix dataset~\cite{netflix}, to evaluate the accuracy of our PPI data identification and 
the quality of our sanitization. These evaluations will be performed manually by members of our team. 

When we have 
the first prototype of PPI sanitization code we will seek approval from USC's Institutional Review Board to engage
humans in its evaluation. We will recruit paid volunteers from Mechanical Turk \cite{turk} and engage them in three tasks: (1) labeling
PPI data in original datasets, (2) labeling PPI data in PPI-sanitized datasets, (3) specifying searches about PPI data. The first and
the second tasks will help us measure the accuracy of our PPI data identification, i.e., they will provide the ground truth for our
PPI-Sanitization module. The second task will also help us measure whether our sanitization preserves the natural distribution of the 
selected PPI data type, e.g., whether Anglo-Saxon names were all mapped to other Anglo-Saxon names. The third task will help us
measure the consistency of our PPI sanitization across multiple data sets. In addition to publicly available datasets used in our 
manual evaluation, we will use other public sources of PPI data in this task, such as Web pages, online directories of people
names, addresses, emails and phone numbers, etc. This kind of evaluation may be repeated multiple times, providing feedback 
into our design and implementation,  until we reach a satisfactory identification accuracy and sanitization quality.
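For identification accuracy, we expect to compute standard precision and recall of the flagged PPI items against the Turk-labeled ground truth. The sketch below shows the metric computation only; the function name and the sample items are invented for illustration.

```python
def identification_accuracy(detected, ground_truth):
    """Compare PPI items our Sanitizer flagged against items labeled by
    human workers. Precision: how much of what we flagged is really PPI;
    recall: how much of the real PPI we caught."""
    detected, ground_truth = set(detected), set(ground_truth)
    true_pos = len(detected & ground_truth)
    precision = true_pos / len(detected) if detected else 1.0
    recall = true_pos / len(ground_truth) if ground_truth else 1.0
    return precision, recall

# Sanitizer flagged an email address and a city; labelers flagged the
# email address and a phone number the Sanitizer missed.
p, r = identification_accuracy({"alice@example.com", "Los Angeles"},
                               {"alice@example.com", "213-555-0100"})
```

Tracking both metrics across evaluation rounds lets us tell over-aggressive sanitization (low precision) apart from missed PPI (low recall).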

When we have the first fully-working prototype of the entire {\cah}, we will again seek approval from USC's Institutional Review
Board, and we will recruit a small group of beta participants. These data contributors will run Critter clients and record their own data. The Critter client will then run PPI sanitization on this data and invite user feedback on its accuracy and quality. This will result in frequent dialogues with the user, but we believe this is a necessary step in system evaluation before we release the system to a broader contributor base. We will use user feedback to further improve the system.

Finally, when we release our code to the public, interested data contributors will be able to examine their PPI-sanitized data and provide input to the sanitization process in the form of lists of additional PPI items to be replaced. While the input data will remain private, we will mine the type of the data (e.g., password, credit card number, etc.) and its format (e.g., five letters and a digit), and use these to try to improve our PPI identification accuracy.

%\textbf{E-RP:} We will evaluate if contributor-specified Recording Policy is properly honored and enforced by our Critter client. For this evaluation we will first engage paid volunteers from Mechanical Turk  \cite{turk} and ask them to run our Critter client with a predefined recording policy and then perform some online tasks based on the scenario we give them. The recording policy we will supply will ask for only of this scripted traffic to be recorded. We will then obtain traffic records and verify the compliance through both manual and automated trace analysis.
%
%In the second part of our evaluation we will ask our beta testers to periodically revise their recordings policies and then run our Compliance Checker on their PPI-sanitized data to verify that the policy has been honored. 
%

\textbf{E-COM:} To evaluate whether we can effectively communicate our policies and queries to inexperienced users, we will engage paid volunteers from Mechanical Turk \cite{turk}. We will present to them our simplified versions of 
several recording, storage and query policies. For each policy rule we show, we will ask the volunteer to answer a multiple-choice question about whether some actions are permitted or not by this rule. For example, our simplified version of a recording policy may state that ``Only Web traffic will be recorded'' and our questionnaire may ask whether ``Your online bank access will be recorded'' to test if the volunteer understood the policy. When we test the query policy, we will communicate to the volunteer several queries and ask her to flag those that are permitted by the policy. We may run this evaluation
multiple times, with different volunteer sets, and use each feedback for improving the clarity of our communication.

One of our main privacy protection mechanisms is the use of Patrol for data processing. Patrol protections will be evaluated in the NSF-funded project under which it is being developed.

\section{Related Work}
\label{related}

There are two areas of work related to \cah: 
 (1) work which addresses privacy risks inherent in data sharing 
 and 
 (2) frameworks to connect researchers to data sources.

There are a multitude of endeavors 
 to overcome the privacy risks inherent in sharing 
 Internet user data,
 motivated by 
 the great need for this data in research. Much work has been done on network trace sanitization, e.g.,~\cite{cryptopan, devil};
 however, as discussed in Section~\ref{sec:challenges},
 these methods are not well suited to protect
 content-rich data and we do not discuss them further.

Many efforts to anonymize data with unbounded content
 have resulted in unintended information leaks.
Some examples include:
 (1) the de-anonymization of 
 the Netflix movie ratings dataset
 when correlated with the Internet Movie Database (IMDb)~\cite{netflix-imdb},
 (2) privacy violations by the public release 
 of AOL search data~\cite{aol-leak},
 and 
 (3) the de-anonymization of medical records
 by correlating them with a publicly available voter
 database~\cite{sweeney}.
Such failures motivate our proposed 
 use of secure queries.

Pang and Paxson investigate the problem of parsing and sanitizing
 content-rich network data~\cite{high}.
Their sanitization tool rewrites packets
 according to human-input policy scripts
 and can replace application-level header
 and content.
While the packet transformation is flexible,
the human-input process is tedious and error-prone. Our automated PPI sanitizer
should achieve better identification and rewriting accuracy. 
Further, in the presence of auxiliary data \textit{no packet field is privacy-risk free}, thus
sanitization in itself cannot be the entire answer to privacy risk in content-rich data sharing.

Another approach to releasing content-rich data safely
 is to generate synthetic traffic based on features drawn from the real data.
The DARPA Datasets for Intrusion Detection System
 Evaluation~\cite{Lippmann00,mitdarpa} are an example
 of such generated traffic. The original traffic was collected at 
 an Air Force base,  and the features that were mined from the traffic and
 replicated in synthetic traffic, as well as the synthesis approach for other
 traffic features, were not publicly disclosed. Researchers thus cannot quantify how
 data generation artifacts 
 may influence their research outcomes.
Another problem lies in the fixed choice of relevant traffic features to be mined and replicated.
Researchers who are interested in 
 a different set of features cannot utilize these datasets.
In spite of these and other deficiencies~\cite{Mahoney03, McHugh00, Thomas08}, 
these datasets---generated in 1998, 1999 and 2000 by MIT
 Lincoln Laboratory---are still in active use today,  over a decade after their release.
This
 highlights the grave need researchers have for content-rich datasets.


 
We believe {\cah} will be useful for synthetic traffic generation
 by
 allowing researchers to query diverse and current data
\textit{along those traffic features
 pertinent to their work}.
The results from {\cah} queries 
 can then be fed into an open traffic generation 
 process,
 allowing for full evaluation 
 of any generation artifacts.

Frameworks for connecting researchers to data
 vary in complexity from 
  portals providing curated data~\cite{ita,mawi,datcat,predict,caida,crawdad}
  to systems for the policy process of sharing data~\cite{ps2},
  to distributed infrastructures for network
  monitoring~\cite{lobster,como}. Our approach differs from these 
 because our main goal is to provide a framework 
 to collect data
 directly from individuals,
 and not from  network administrators.
This is a common goal 
 of commercial 
 enterprises doing market research and quality-of-service studies.
We are aware of only one other research endeavor 
 which, like ours, reaches out
 to a broad audience and creates 
 an ongoing study. 
The BISmark project~\cite{bismark} offers users
 a free device which
 collects active measurements of link performance
 from a user's home network.
We differ from BISmark 
 in our measurement goal: we passively collect information about 
 network data, and specifically packet content.

\vspace{-0.1in}
\section{Education and Training Activities}
\label{edu}
PI Mirkovic will integrate findings from the proposed effort into her classes. She usually teaches advanced graduate and undergraduate classes in networking and security, which have a series of practical projects. The PI will engage students in these classes in exploring privacy risks and protections associated with application-level data collection and sharing. 

The project budget requests support for one graduate student for all three years. This student will participate in defining research directions, and will perform literature surveys, code development, deployment and evaluation. He/she will also be involved in paper writing, and will present this work at conferences and workshops, thus receiving valuable research training and exposure.

PI Mirkovic is committed to involving undergraduates in research. She expects to recruit promising undergraduates at USC for the proposed research through her classes and through USC's Undergraduate Research Project Program (URAP). PI Mirkovic will further work with the USC WISE program to involve female students in the proposed research.

\section{Research Plan}

Figure \ref{schedule} shows our research timeline, with columns showing proposed work in each project quarter and labels denoting tasks described in the previous sections. 

\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{TEXT/schedule.pdf}
\caption{Our research timeline. REC refers to tasks labeled as such in Section \ref{recording}. PPI1-PPI3 refer to tasks in Section \ref{sec:ppi}, STO refers to the task in Section \ref{storage}, QUE1-QUE3 refer to tasks in Section \ref{sec:query}, TOR refers to the task in Section \ref{sec:tor}, UI1-UI3 refer to tasks in Section \ref{ui}, RCR refers to the task in Section \ref{recruiting} and E-REC, E-PPI, E-COM refer to tasks in Section \ref{sec:eval}.  \label{schedule}}
\end{figure}


\section{Results From Prior NSF Support}
 
Dr. Mirkovic has numerous projects funded by NSF that are either completed
or in progress. We list some relevant outcomes here. Her project ``DefCOM -
Distributed Defense Against DDoS Attacks,'' developed a distributed
defense against DDoS attacks by providing a framework called DefCOM for
heterogeneous
defenses, spanning source, victim and core networks, to collaborate in
detection, rate limiting and traffic differentiation. Dr. Mirkovic
published a
workshop paper on DefCOM at the 2003 New Security Paradigms Workshop and
a paper at the ACSAC 2006 conference. Two more papers were published in 2007 and
2008, at the LADS'07 workshop and the ICC'08 conference. 

Her project ``CT-ISG: Collaborative research: Enabling Routers to Detect
and Filter Spoofed Traffic,'' developed functionalities in routers for
detection and filtering of spoofed packets. This project produced a journal
paper in IEEE Transactions on Dependable and Secure Computing (2009) and a
conference publication at ACSAC 2009. The project ``TC: Small: Privacy-safe
sharing of network data via secure queries (PSEQ),'' develops a novel method
for network data sharing where providers publish a query language and accept
and run queries on their data, returning results to users. This project
resulted in language definition and an early design of the prototype system
to process the queries. Her project ``Collaborative research: Hands-on
exercises on DETER testbed for security education,'' funded by NSF CCLI
program, has resulted in 12 hands-on exercises that were used in at least 20
courses over the project's two years. Dr. Mirkovic is further a co-PI on the
project ``Collaborative Research: CT-M: Beyond Testbeds-Catalyzing
Transformative Research and Education through Cybersecurity Collaboratories
(BTCT),'' which aims to develop methods for safely running risky experiments
on network testbeds, and for detecting and diagnosing problems in
distributed experiments. This research resulted in two workshop publications
so far (Sarnoff Symposium 2008 and CSET 2008),  a poster at INFOCOM 2009,
and a publication at NSDI 2011. Its findings have also been implemented on
the DeterLab testbed.

