
\setcounter{page}{1}
\section{Project Description}

% From the NSF Grants Proposal Guide: "The Project Description should
% provide a clear statement of the work to be undertaken and must include:
% objectives for the period of the proposed work and expected significance;
% relation to longer-term goals of the PI's project; and relation to the
% present state of knowledge in the field, to work in progress by the PI
% under other support and to work in progress elsewhere."


\outline{Short overview} 
We propose to design, 
 build and deploy 
 a publicly accessible repository 
 of up-to-date application-level data. 
This data is sorely needed 
 in the networking and security communities.
Currently available network data with application-level information
 is often outdated and is either private
 or customized to specific, narrow research needs. 
We will address this problem 
 by both designing and deploying the technology that meets the following goals:
\begin{enumerate}
 \item data is current and diverse, 
 \item data utility is maximized, 
 \item contributor privacy and identity are fully protected.
\end{enumerate}
These goals will be achieved by: 
\begin{itemize}
 \item creating an anonymous contributor network that contributors can join and leave at will, 
       ensuring continuous data collection that is diverse and fresh,
 \item giving contributors full control over their data and its usage at a
       fine-grained level, including mechanisms to withdraw data fully,
       store data remotely or locally,
       and contribute only what is comfortable,
 \item minimally processing data and allowing researchers to query this
	``almost raw'' data for features that interest them,
 \item allowing only aggregate results to be returned to protect
 	data contributor privacy.
\end{itemize}

This research addresses the security and privacy focus
   of the Trustworthy Computing solicitation in two ways.
First,
   our work will enable 
   access to content-rich network data,
   which is
   essential to continued progress
   in networking and cybersecurity research.
Second,
   our work will explore new approaches
   to secure sharing of private data, which is the major challenge
   for obtaining content-rich network data.

\section{Introduction and Motivation}

\outline{Application level data very important, but not available}
Access to application- and data-layer packet information
  is vital to cybersecurity and networking research.
This content-rich data establishes the necessary ground truth
  required to properly evaluate 
  approaches in 
  many areas of cybersecurity,
such as
  intrusion detection,
  steganography,
  traffic camouflaging
  and
  traffic classification.

\subsection{Challenges}
\outline{We need the data, but we can't get it}
%Such data is not publicly shared because of tremendous privacy risk. e.g, may contain
%names, SSNs X
Despite the need for content-rich data,
 such data
 is not publicly available because
 of the tremendous privacy risks
 associated with its sharing.
The very few content-rich datasets
 publicly available 
 address such privacy risks by 
 synthetically generating content
 or by only including malicious traffic.
These datasets are designed to meet only
 narrow research needs---such as
 intrusion detection evaluation
 and investigation of specific malware applications---and
 black-box synthetic generation of representative content-rich data has 
 its own flaws, as we discuss in Section~\ref{related}. 

\outline{protecting the data is non-trivial}
Protecting content-rich data, 
 obtained from Internet users,
 is a non-trivial problem.
Packet payloads include an unbounded
 amount of private information---such as 
 a person's address, personal conversations, 
 bank information, social security numbers, 
 and so forth.
 Even worse, they contain a huge amount of information that
 may not appear private at first glance---e.g., product and location
 preferences---but may still be used by someone acquainted with the 
 data contributor to uniquely identify him or her.  
Traditional methods for 
 protecting publicly available 
 network data through sanitization and anonymization
 are not well-suited to protect
 content-rich network data, and they have recently been shown
 to work poorly for content-less network data as well \cite{cryptopan, webtraffic, drift, advocate}.
 
%Current way of sharing data through anonymization removes content, but it
%still has huge privacy risk.  So this is not a way to go.  We have related
%research on how to do this via secure queries. X
\outline{current sharing: sanitization, not good for content-rich data}
Currently, 
 publicly available network data 
 from Internet users
 is \emph{sanitized}---a process
 which removes most or all of the application-level
 data and anonymizes sensitive information such as IP and MAC addresses.
Despite such mitigating measures,
 sanitization is still highly vulnerable to both 
 active and passive
 de-anonymizing attacks \cite{cryptopan, webtraffic, drift, advocate}.
Additionally, sanitization offers poor protection
 against future attacks.
Once 
 a user has downloaded the data,
 the provider has no control over how
 a dataset is used,
 and there are no mechanisms 
 for a data provider to retract published data
 once an attack has been discovered.
Because sharing content-rich data 
 brings even greater privacy risks, 
 building upon traditional sanitization methods
 to protect privacy is not an option.
 
\outline{We have related work which addresses data protection}
Fortunately, 
 in her current NSF-funded work, proposal number 0914780,
 PI Mirkovic
 introduces a more robust option for protecting
 publicly shared data using \emph{secure queries}.
Secure queries protect data by allowing researchers to query for 
 only aggregate features of the data,
 and preserve user privacy by 
 applying $k$-anonymity and $l$-diversity principles 
 (we discuss secure queries 
 in detail in Section~\ref{sec:secure_queries}). 
In this endeavor,
 we propose to utilize PI Mirkovic's existing work
 to help address some aspects of privacy risks associated with
 sharing content-rich data.
  
\outline{Current, dynamic and public traces needed}
In addition to the need for content-rich data,
  there is a need for more diverse
  and more up-to-date network data, both content-rich and content-less.
Evaluation performed
  with outdated or homogeneous sources
  can give misleadingly 
  optimistic results.
For example,
  an intrusion detection system may have a
  very low false-positive rate when tested
  with data from a university environment,
  but may generate an overwhelming number
  of false-positives in a corporate environment, 
  which renders the system useless. Alternatively, a system
  may perform well on outdated traces, only to crumble when 
  deployed and subjected to 
 the increased volume and diversity of up-to-date traffic.
\outline{currently available data is old and lacks content}
The research community's current public resources
 for network data are very limited.
% There are also a very few public repositories because people fear privacy risk X
The fear of privacy risks
 results in a limited number of public repositories and 
 a very limited number of data providers
 willing to release current data on a regular basis.
A lack of diverse and current datasets
 results in the research community
 heavily relying on a small set of
 publicly available datasets from 
 a limited number of environments.
Some publicly available 
 datasets are used for research
 up to a decade after collection.
Additionally, access to current and diverse network data
 enables a deeper understanding
 of trends and emergent behavior
 such as
 the evolution of 
 botnets, 
 Internet worms,
 denial of service attack tools
 and a myriad of other threats.
 
 \subsection{Proposed Effort}
\outline{We can't rely on network data providers}
Due to the fear of privacy risks,
 it is unrealistic to expect
 to find 
 network data providers
 willing to provide data
 from a diverse and large
 set of environments
 on a regular basis.
For collecting content-rich
 network data
 this expectation is 
 especially unrealistic.
The typical model 
 for network data collection---where
 packet-header information is collected
 for an entire network---is clearly
 not appropriate for collecting
 application-level information.
Not all users on a network would be comfortable
 with sharing their network data---especially
 if they have no control over what is collected
 and how their data is used. 

\outline{So we need to bypass data providers and reach users---and we need
to make them feel comfortable}
% What is needed is a way for people to contribute their data continuously. People will only be comfortable with it
% if we allow them to control what they contribute, how is it protected and used, and if they can stop at will and pull all their data.
Reaching out directly
 to individual users
 willing to share their data
 is a more
 realistic approach
 to obtaining 
 content-rich 
 network data.
Connecting to individuals 
 directly,
 and providing mechanisms for 
 them to contribute continuously,
 ensures up-to-date 
 data from a diverse set of environments.
The only way to engage
 such individuals
 is to provide mechanisms 
 for the user to contribute anonymously, 
 retain
 complete control over 
 exactly what data she contributes,
 and decide
 how her data is protected and used.
The individual also needs
 the freedom
 to completely withdraw her data---both past 
 and current---at will.

   
%  just say here: this is what we propose. Highlight design goals and how we
% will meet them similar to summary part.
We propose
 to build and deploy
 a framework called Critter-at-home that would appeal both to
 \emph{data contributors}---individual Internet users willing to share their
 data---and to
 researchers---users of this data. 
 
 
%Our proposed work will result in 
% a content rich traffic trace repository called \emph{Critter}.
%The data in Critter will
% be controlled by contributors from their various environments, 
% making our framework
% \emph{\cah}.
\outline{for contributors}
For data contributors,
 our proposed work will
 provide a platform to 
 safely and 
 actively participate
 in research advancement.
Our goals are to fully
 protect a contributor's
 data and identity, and 
 provide her with
 full, flexible control
 over how, when and why her data is used in research.
A contributor's data is protected by multiple mechanisms, as explained in 
Section~\ref{cah}.


\outline{for researchers}
For researchers,
 our proposed work will provide 
 a publicly accessible repository
 of up-to-date content-rich data
 from a diverse set of environments.
Researchers will be able to query 
 the repository on demand and 
 receive aggregate information about features 
 that are most relevant for their research. 
One of our design goals is to not limit
 the scope and utility
 of collected data; thus all queries will run on 
 data that has been minimally processed to remove personal and private information (PPI), 
 as we describe in Section~\ref{nap}. Such processing is necessary to minimize risk to 
 our data contributors should data be stolen from their machines by a third party.
Query results may 
 directly answer a researcher's questions
  about network behavior (e.g., what is the distribution of email inter-arrival times),
 or be used for realistic application-level
 traffic generation.
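As an illustration of the kind of aggregate such a query might compute, the sketch below (Python, purely illustrative; the actual query language is discussed in Section~\ref{sec:secure_queries}) bins email inter-arrival times into a histogram, so that only bucket counts, never individual timestamps, leave a contributor's machine:

```python
from collections import Counter

def interarrival_histogram(timestamps, bin_width=60.0):
    """Bin the gaps between consecutive email arrival times (seconds)
    into bin_width-second buckets; only the counts per bucket -- never
    the raw timestamps -- would leave a contributor's machine."""
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return Counter(int(g // bin_width) for g in gaps)

# Four emails with gaps of 30 s, 90 s and 45 s:
hist = interarrival_histogram([0, 30, 120, 165])
# Bucket 0 holds gaps in [0, 60), bucket 1 holds gaps in [60, 120).
```

A traffic generator could then sample synthetic inter-arrival times from such a histogram without ever touching the underlying data.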
 

% introduce the critter name, main parts we have,  put that picture early.
% highlight where we use existing research and what is new.  our research
% will focus on addressing how to sanitize the data minimally, how to let
% users keep control over it at all times, how to preserve user anonymity
% Gen: Other than introducing the name, 
% I did not address this comment directly yet... it seems to 
% fit in the next section...?

\outline{Need summary paragraph?}

\section{Threat Model}

The privacy risk to data contributors is the biggest challenge to be addressed by our proposed work.
We now define our threat model.

\begin{enumerate}
\item{\textbf {Data Stealing.}} First, contributor data may be stolen by a third party unrelated to Critter-at-home, e.g., via a Trojan. A raw version of this data contains much sensitive and private information that would not otherwise exist in one place. Data could be stolen from disk, from memory or from the network. 

\item{\textbf {Data Querying.}} Second, contributor data may be queried by someone familiar with the contributor, i.e. someone with an auxiliary information channel.  

\item{\textbf {Contribution Correlation.}} Third, an observer may attempt to identify a contributor or to infer private information from the contributor's IP address or from the pattern of her contributions, e.g., by analyzing queries that receive a reply from this specific contributor.
\end{enumerate}

In the next section we elaborate on how we address these privacy risks. 
%First, may elect to host her data on her machine, thus
%never relinquishing it.
%
%
%Second, contributed data will be minimally processed 
%to modify all personal and private information (PPI), while
%preserving its statistical properties, as we discuss in Section \ref{nlp}. 
%Such processing is necessary to minimize risk to 
% our data contributors should data be stolen from their machines by some third party, 
%or should they lose their machine. 
%Third, contributed data will be encrypted to further minimize the risk of
%data stealing. 
%Four, no human apart from the contributor is ever allowed access to the raw,
%PPI-sanitized, data. Instead, researchers can query the data via our Critter-at-home
%framework, and they receive aggregate statistics (counts, distributions, etc.) of
%the traffic features they query for. 
%Five, all contact with a contributor is at 
% her discretion and
% is done through an anonymous network,
% where contributor identities are hidden both from
% researchers and the Internet at large.

\section{\cah}
\label{cah}

Protecting data contributors'
 privacy and identities
 while simultaneously offering 
 researchers maximal utility
 is challenging.
%We now give more details about challenges we plan to address in our research and how we will do that. Main challenges are:
% privacy, control over data and identity, utility to researchers 
In this section, 
 we introduce the basic proposed framework 
 for Critter-At-Home.
In the following subsections
 we give more details about the challenges involved,
 as well as how
 we plan to address these challenges in our research.

\begin{figure}
	\begin{center}
		\includegraphics[scale=0.10]{../figures/critter-at-home.eps}
	\end{center}
	\caption{The basic overview of Critter-at-Home.}
	\label{fig:all}
\end{figure}

\cah consists of a set of modular components we call the \textit{Critter client},
 housed on a data contributor's local
 machine, 
 and a centrally located
 \emph{Critter server}, whose task is to collect and disseminate researcher queries and replies,
 and to apply some privacy protections to reply data.
Figure~\ref{fig:all} shows the basic
 components for \cah.
Researchers interact with \cah via
 a portal where queries are submitted
 and results returned.
Data contributors (shown in labeled boxes in Figure~\ref{fig:all})
 run the Critter client program on their local machines, similar to the 
 BOINC software run by SETI@home participants \cite{seti}.
The Critter client collects the contributor's data,
applies PPI sanitization, encrypts it and stores it.
A contributor may elect to store the data locally or 
at a remote location managed by the Critter server. If the data is stored locally, the Critter client
also decrypts it when a new query arrives, processes
the query and returns results to the Critter server. 
When running the \cah system,
 contributors connect to the Critter server 
 via an anonymizing network, such as Tor \cite{tor}. 
Researchers can query 
 this anonymous network of data contributors
 and receive aggregate statistics
 by submitting their queries to the public
 community portal, connected to the Critter server.
Critter clients at contributors' machines pull the stored researcher queries from the Critter server,
via an anonymizing network, apply them to their locally-stored data, and return replies to the server.
Researchers can pull these replies later from the Critter server.

The following paragraphs 
 summarize each part 
 of \cah.

\outline{recording}
On a data contributor's local machine
 data will be collected
 by a \emph{recording} module.
We propose to build a recorder
 which records at the 
 network layer.
All traffic encrypted by an application
 will be recorded in its encrypted form, 
 which protects sensitive information,
 such as bank transactions, that always flows over an 
 encrypted channel.
We chose to record at the network layer for portability.
An alternative would be to instrument relevant applications
to record their data. This would bypass encryption and would
more accurately identify application data, but it would require extensive
changes to the contributor's machine. The rest of our framework, however,
is independent of our recording choice, and would apply to 
any application-level data, however obtained. 

\outline{recording policy}
A \emph{recording policy},
 set by the contributor,
 will govern what data is collected.
 The policy will be able to specify the 
 applications, the application targets
 and the application message patterns that rules
 apply to, and will have both inclusion and exclusion rules. 
For example, 
 a contributor will be able to specify that  
  \cah can collect data on her 
 web browsing habits while 
 browsing YouTube,
 but not while connecting to 
 her bank's web portal.
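A recording policy of this kind could be represented as an ordered list of inclusion and exclusion rules. The following sketch (Python; the rule fields and matching semantics are illustrative, not our final policy format) applies the first rule whose application and target globs match:

```python
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass
class Rule:
    action: str       # "include" or "exclude"
    application: str  # glob over the application name
    target: str       # glob over the destination host

def should_record(rules, application, target, default=False):
    """First matching rule wins; `default` applies when nothing matches."""
    for rule in rules:
        if fnmatch(application, rule.application) and fnmatch(target, rule.target):
            return rule.action == "include"
    return default

policy = [
    Rule("exclude", "*", "*.mybank.example"),    # never record bank traffic
    Rule("include", "firefox", "*.youtube.com"), # do record YouTube browsing
]
```

Ordering exclusion rules first gives privacy-preserving behavior by default: a broad exclusion can never be overridden by a later, broader inclusion.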

\outline{archiver and remote storage}
While many contributors may choose
 to retain their data locally,
 we propose to build a 
 \emph{remote storage system}.
Contributors who have infrequent Internet access,
 or who wish to remain more hands-off,
 can store their data on the storage system,
 rather than on their local machine.
Contributors with limited storage
 could move data to 
 remote storage after the 
 data had reached a certain size.
An \emph{archiver} module
 would control where data is stored,
 following
 the \emph{storage policy} set by the contributor.
Data housed on the centrally located
 storage system could be 
 withdrawn by a contributor at any point.
\comment{Gen}{Do we have money for the hardware of a storage system??}

\outline{query handler}
If a contributor retains her data locally,
 her data can only be queried
 when she is 
 connected
 to the central query system
 via the \cah program.
The \emph{query handler} on her local machine
 will poll for new queries from the 
 query system, 
 and answer these queries
 according to the \emph{query policy}
 she sets.
If a contributor chooses to store
 her data on the remote storage system,
 her query policy will be stored with
 her data and applied to queries that come in from the researchers. 

\outline{policy engine}
At any point, 
 a contributor will be able to 
 change her recording,
 storage and query policies.
The \emph{policy engine} will
 manage these policies.
If a contributor's data is stored remotely, 
 she still will be able to 
 change the policy at any point
 via the policy engine,
 as long as she is able to 
 connect to the system.

\outline{we only return aggregates}
To preserve contributor privacy,
 only aggregate statistics 
 about contributed data
 will be available. 
Query results 
 are aggregated and evaluated
 by the query system.
Outlying statistics---statistics
 which may leak information about a particular 
 individual---are protected through
 either \emph{binning} 
 or not releasing results 
 (details are in Section~\ref{sec:secure_queries}).
A contributor can control 
 how her outlying statistics
 are handled during aggregation
 through her query policy.
Most of the machinery for forming,
 interpreting
 and aggregating queries will come from
 PI Mirkovic's NSF-funded work on secure queries.
In Figure~\ref{fig:all},
 these components are shown in blue.

\outline{PPI anonymizer and other data protection}
Although we release only aggregate  
 statistics,
 these statistics are computed from
 raw data 
 (stored either locally on a contributor's machine
 or on the storage system).
While we have control over the 
 security of the storage system,
 we have no control 
 over the security of a contributor's machine.
To protect contributor data,
 we will store data in encrypted format.

\outline{PPI anonymizer}
Encrypted data still must be decrypted when queries
are applied to it, which leaves raw data vulnerable to theft
from memory. 
To protect data from this threat
 we will anonymize any \emph{Private Personal Information (PPI)} 
 in a contributor's data
 by designing a
 \emph{PPI anonymizer}.
The PPI anonymizer
 will anonymize 
 highly personal information---such as nicknames, phone numbers, friend's
 names and so on---while maintaining as much 
 research utility in anonymized information
 as possible.
In other words,
 we will attempt to retain 
 statistical properties of PPI data
 as best as we can without 
 sabotaging contributor security.
This task is non-trivial and is a 
 significant part of the research in this project.
We discuss details of how we 
 propose to anonymize PPI data in Section~\ref{sec:ppi}.

\outline{mix network}
To protect contributor identities,
 all connections from data contributors
 to the query system 
 and remote storage system
 will be done through
 an anonymizing mix network.
Using an
 existing
 overlay network we can
 keep contributors anonymous
 in \cah
 and 
 protect contributor
 identities from 
 surveillance and from correlation attacks.
We propose to use Tor 
 for the anonymizing mix network
 due to its reasonably 
 widespread deployment. 

In the following subsections we will
 go into details
 of the challenges involved in building \cah 
 and how we intend to address these challenges.
We discuss first the aspects of data collection (Section~\ref{sec:data}),
 and then go into details on how researchers can query
 collected data (Section~\ref{sec:query}).
We then discuss
 how to interface with contributors
 and equip contributors with the knowledge needed
 to make confident decisions about their data (Section~\ref{sec:interface}). 
We conclude with a discussion
 on methods we can employ to 
 maximize the utility
 of \cah.

\subsection{Collecting Data}
 	\label{sec:data}

\begin{figure}
	\begin{center}
		\includegraphics[scale=0.1]{../figures/data_path.eps}
	\end{center}
	\caption{Data collection and storage in \cah.}
	\label{fig:data}
\end{figure}

In this section 
 we discuss how we collect and store data
 from data contributors.
Figure~\ref{fig:data} depicts the basic process
 for data collection and storage.
Data is recorded, 
 any Private Personal Information (PPI) is replaced,
 and data is then stored locally or on a remote storage system.

% Our vision is that we will give users some program to download, relatively
% small and portable.  The program collects the data.
This data collection process 
 takes place locally on a data contributor's machine.
Our vision is to build a 
 relatively small
 and highly portable client program.
We want the program small
 so as not to impact a contributor's
 regular computer use.
We need the program to be highly 
 portable between environments
 to encourage a diverse set of contributors 
 to participate.

We plan to record data at the network level
 for several reasons.
First, 
 recording at the network level 
 is the most general approach
 to data recording
 and will not
 require any \emph{a priori} knowledge
 about the applications a 
 contributor uses.
Data capture at the network level
 is quite well supported by the 
 vast majority of available platforms through
 a variety 
 of libraries---such as libpcap~\cite{}, WinPcap~\cite{} 
 and WAND's libtrace~\cite{}.
Second,
 by recording at the network-level
 we do not expose any data 
 which has been encrypted 
 by an application
 to undue privacy risks.

Although we currently plan
 to record at the network level,
 the rest of the \cah system
 could work with other data sources.
We plan to build the \cah client 
 in a modular fashion
 so that, in the future,
 researchers could write application-specific
 plug-ins which
 record data at the application level.

Data recording will not just be 
 an on/off process.
We envision a contributor
 having fine-grained control
 over exactly what data is
 recorded through specifying 
 the recording policy.
For example, this fine-grained control could apply 
 to applications (``never record traffic from BitTorrent''),
 to servers (``never record traffic to/from my bank site''),
 or to connections (``never record when my laptop is on my home network'').

After recording,
 we plan on removing any PPI found in the recorded data
 before storage. 
In the next section we discuss 
 the removal of PPI.
 
\subsubsection{PPI Anonymizer}
	\label{sec:ppi}

We plan on recording data
 at the network level,
 which means that much of the Private Personal Information (PPI)
 in a contributor's data will already be encrypted.
However,
 not all applications encrypt PPI.
Unencrypted PPI may be found 
 in instant messages,
 email, 
 and low-security web traffic.

While no human ever sees raw data
 from a contributor in \cah,
 we still wish to protect PPI 
 from malicious attacks.
We also wish to retain as much
 research utility as possible,
 so we propose not just 
 removing PPI, 
 but replacing such information
 with statistically similar data.

Performing such replacement presents two
 major challenges.
First, 
 network data has an unbounded amount of PPI,
 so recognizing PPI is not trivial.
Relying on contributors to itemize 
 their PPI in order to perform
 a ``search and replace''
 type operation would be prone to error.
Second, 
 understanding how to generate
 statistically similar data 
 to replace PPI
 while maintaining anonymity
 will require investigation.

Based on preliminary work with a graduate student,
 we plan to use natural language processing
 techniques to identify PPI
 in network data. 
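As a preliminary illustration, well-structured PPI such as phone and social security numbers can already be caught with simple patterns and replaced with format-preserving random digits; the NLP-based recognizer we propose would extend this to free-form PPI such as names. The sketch below is Python, and the pattern list is illustrative, not our final PPI inventory:

```python
import random
import re

PPI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US social security numbers
    re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),  # US phone numbers
]

def replace_structured_ppi(text, rng=None):
    """Replace each match with random digits in the same positions, so the
    format (and thus many statistical properties) is preserved while the
    original identifier is destroyed."""
    rng = rng or random.Random(0)
    def same_shape(match):
        return "".join(rng.choice("0123456789") if ch.isdigit() else ch
                       for ch in match.group())
    for pattern in PPI_PATTERNS:
        text = pattern.sub(same_shape, text)
    return text

sanitized = replace_structured_ppi("my SSN is 123-45-6789, call 555-867-5309")
```

Because the replacement preserves the token's shape, downstream queries over field formats and token lengths still return meaningful aggregates.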

 % Even if sensitive data is encrypted users may put sensitive information
 % in non-encrypted data such as instant message, email, etc.
% Say this is one of main parts of our research
% I'd say we did preliminary research on this with a grad student. We will use NLP approach. We will identify sensitive items such as =- full list that andy identified. I would also use his notes on how to deal with each type of info on that list. 
% Say which NLP tool we will use
% Say users can examine our list of sensitive data and how we sanitize it and can change/add to this
% Say no human ever sees raw data apart from contributor.

\subsubsection{Remote Storage System}

\outline{double encryption using Tor}

\subsection{Querying Data}
	\label{sec:query}
% Say this is another main part of our research.  We will use Tor network. 
% Users will be able to pull queries and supply results.  How queries reach
% users and how results go back.  there will be some delay.  say our program
% on users machines will by default try to pull queries every so often.
\begin{figure}
	\begin{center}
		\includegraphics[scale=.1]{../figures/query_path.eps}
	\end{center}
	\caption{Querying collected data. Patrol components (shown in light
         blue) will come from PI Mirkovic's NSF-funded project on secure queries. }
	\label{fig:query}
\end{figure}

In this section we discuss
 how collected data can be queried by researchers.
\outline{summary: query stored, polled for, results aggregated and returned}
Figure~\ref{fig:query} illustrates the basic proposed process for 
 querying data in \cah.
Researchers specify queries
 through a community portal
 and 
 these queries are then stored in the query system.
Data contributors connect to the query server,
 poll for new queries
 and respond to these queries 
 through the query handler in
 their local \cah program.
Results from queries are aggregated by the query system
 and returned via the community portal.

We propose a store and poll mechanism
 for queries because 
 contributors may only connect to the \cah network
 intermittently,
 even if they are running the data recorder on 
 a continual basis.
By storing queries and collecting responses
 over a period of time
 we can maximize the number of contributors
 responding to a query.
Researchers will be able to configure
 how long to wait when
 gathering responses.
We also anticipate 
 supporting features such as 
 rolling aggregation and 
 ongoing queries which return
 periodic results.
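The server-side bookkeeping for such a store-and-poll query might be sketched as follows (Python; the class and field names are illustrative, not a finished design):

```python
import statistics

class PendingQuery:
    """A stored query collecting replies until a researcher-configured
    deadline; aggregation happens only once the window closes."""
    def __init__(self, query_id, deadline, min_replies=3):
        self.query_id = query_id
        self.deadline = deadline        # end of the collection window
        self.min_replies = min_replies  # refuse tiny, privacy-risky samples
        self.replies = []

    def add_reply(self, value, now):
        # Replies arriving after the window closes are ignored.
        if now <= self.deadline:
            self.replies.append(value)

    def result(self, now):
        """None while still collecting, or if too few contributors answered."""
        if now < self.deadline or len(self.replies) < self.min_replies:
            return None
        return {"count": len(self.replies),
                "mean": statistics.mean(self.replies)}
```

A rolling-aggregation variant would simply re-open the window each period and emit one result per period; an ongoing query is then a sequence of such windows.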

As depicted in Figure~\ref{fig:query},
 all communication with
 connected contributors
 goes through an anonymizing mix network.
All communication is encrypted.
This anonymizing mix network
 prevents anyone within \cah
 and any external observers
 from connecting 
 contributor identities
 to their participation
 in \cah.
We discuss details for this
 anonymous network in 
 Section~\ref{sec:tor}.

\outline{data on the storage system}
Data housed on the storage system can be queried
 more immediately.
Other than its availability and 
 how it is reached,
 data on the storage system will not be
 treated differently from 
 data on a contributor's local machine.
Results from both sources
 will be aggregated and protected 
 with the same mechanisms.

For our query language, interpreter
 and result aggregation
 we will draw on work from 
 PI Mirkovic's previous NSF funded project
 on secure queries.
This project resulted in 
 \emph{Trol}---an expressive query language---and
 \emph{Patrol}---a corresponding interpreter.
Trol and Patrol facilitate aggregation of query responses
 and preserve user privacy by applying
 $k$-anonymity and $l$-diversity
 principles.
In Figures~\ref{fig:all} and~\ref{fig:query},
 the role of Patrol is shown in blue.
In the next section,
 we will discuss details about modifying
 and using the Trol/Patrol secure query framework.


\subsubsection{Secure Queries}
	\label{sec:secure_queries}
	
We now provide more details about our secure query framework, funded by our NSF grant. When sharing privacy-sensitive data via secure queries, the original data always remains under the control of its owner. The data owner releases a query language---Trol---and provides an online portal---Patrol---with the language interpreter. Queries are submitted by researchers in the form of scripts written in the Trol query language. They are compared to owner-established policies to evaluate whether they pose a privacy risk. This evaluation is automatic and fine-grained, thus queries on sensitive fields are prohibited \textit{only in the specific context that poses a privacy risk}. 
Human involvement is necessary only to establish the query policies and our work provides sample policies that are likely to meet the needs of many data owners. Permitted queries are run on the original data and their results are returned to researchers. Results consist of aggregate features such as counts, averages, histograms, distributions, etc. Individual packet information is never returned as a result. 

While permitting only aggregate results hinders exploratory research, where researchers seek to understand a novel phenomenon by observing individual packets, it is necessary to provide privacy guarantees against active and passive attacks on trace sanitization. 
Our secure query framework aims to facilitate validation of mature research with well-formed hypotheses. It can also be used for extraction of select traffic features to be fed into a realistic traffic generator, as we describe in Section~\ref{trafgen}.

Trol and Patrol protect user privacy in several ways. First, only aggregate results are returned to the researchers, thus many active and passive attacks that work against sanitized network traces are ineffective in our context. Second, Trol restricts use of its language constructs over sensitive fields, according to the provider-specified policy. For example, a provider may require that manipulation of IP address values from the trace data only be allowed if these are first hashed. Third, Patrol ensures that the $k$-anonymity \cite{kanon} and $l$-diversity \cite{ldiversity} criteria are met for any result returned to the user, thus applying a ``hiding in a crowd'' approach to privacy protection. For example, if a researcher asks for a count of email messages sent on New Year's Eve, $k$-anonymity ensures that the reply he gets is a set of bins associated with \textit{ranges of counts} such that at least $k$ different contributors' replies fall in each bin, and $l$-diversity ensures that the number of bins is higher than $l$. Via her security policy a contributor may decide to enforce one or both of these criteria and may customize the $k$ and $l$ values per data field type. Fourth, Patrol audits researcher queries, looking for attempts to defeat its privacy protection mechanisms by tracking and faking, as explained in \cite{nda}. 
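The New Year's Eve example can be made concrete with a small sketch of this binning step (Python; illustrative only, Patrol's actual implementation may differ):

```python
def release_binned_counts(counts, k=3, l=2, bin_width=10):
    """Bin per-contributor counts into ranges, keep only bins that at
    least k contributors fall into (k-anonymity), and release nothing
    unless more than l bins survive (l-diversity)."""
    bins = {}
    for c in counts:
        lo = (c // bin_width) * bin_width
        rng = (lo, lo + bin_width)           # e.g. counts in [10, 20)
        bins[rng] = bins.get(rng, 0) + 1
    safe = {rng: n for rng, n in bins.items() if n >= k}
    return safe if len(safe) > l else None
```

Returning the whole result as `None` when too few bins survive, rather than dropping bins one by one, prevents a researcher from inferring outliers by comparing successive partial releases.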

Our Trol language is finalized and our Patrol interpreter is in the last stages of its development. We expect that both will be ready for use by the time this proposal is funded. In our proposed work we will need to slightly modify the Trol and Patrol framework to meet the unique needs of Critter-at-home data processing. First, for those users who elect to store their own Critter data, we will have to bundle Patrol with our Critter client code distribution. This means that our Patrol code must be portable to multiple operating systems, have no large dependencies, and be small and cheap to run on a variety of hardware types, some presumably very old. Second, in our original secure query framework, all data processing and privacy protections are applied at a central location under the data provider's control. In our Critter-at-home framework, some processing and privacy protections will be performed by the Critter client and others by the Critter server. Specifically, $k$-anonymity, $l$-diversity and auditing must be done at the server side. Further, since the data being processed at the server belongs to multiple contributors, those contributors may have different $k$ and $l$ settings. Finally, some Trol language constructs may need to be modified to support manipulation of application-level data and application-field-specific operations. In our proposed research we will identify and apply the necessary Trol and Patrol modifications, and integrate them with the Critter client and the Critter server code.


\subsection{Anonymizing Participation}
	\label{sec:tor}

\outline{double encryption; note that the last hop is not encrypted}

\outline{timing attacks are a concern, but an adversary who can observe
 traffic on both sides of the network could record all of a user's
 network data anyway}


\subsection{Interfacing with Contributors}
 \label{sec:interface}
\outline{not all users understand the inner workings of a computer}
\outline{develop methods to explain to users what traffic they themselves
 are collecting}
\outline{develop methods to explain what a query is asking}

\subsection{Maximizing Utility of \cah}
	\label{sec:util}

\subsubsection{Traffic Generation: TrafGen}
\label{trafgen}

Realistic application traffic generation is one of the use cases for the
content-rich data provided by our Critter-at-home system.  The key reason
why realistic traffic generation is hard is that ``realistic'' means
different things to different people.  A researcher testing a defense that
detects the largest hitters may care only that the traffic volume per source
resembles values seen in real networks.  Another researcher testing a DDoS
defense requires congestion-responsive traffic generation but may not care
about address distribution or traffic contents.  All existing traffic
generators have a \textbf{fixed definition of realism}.  This definition --
a set of traffic dimensions that users supposedly care about -- is
hard-coded in the generator's code.  The generator mines information
about only these dimensions from traffic traces and generates traffic
that fits the mined values.  The generated traffic is ``realistic,'' but only
along those fixed dimensions.  A researcher who cares about a different
dimension set must change the generator's source code, often a huge
effort, to get the desired realism.

In our NSF-funded effort, proposal number 1127388, we are building a traffic
generator whose definition of realism can be fully specified by a user.  The
generator extracts models for user-selected features out of network traffic
traces and reproduces traffic that fits those models, thus achieving the
level of ``realism'' that a given researcher cares about.  Model extraction
deploys our secure-query framework, i.e., Trol and Patrol, to obtain
information about the distributions of selected features' values in network
traces.  We expect that it will be straightforward to replace our current
generator input with Critter's replies to researchers to generate realistic
traffic at the application layer.  The first code release of our traffic
generator will occur before the start of the effort in this proposal.
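The extract-then-reproduce loop can be sketched in a few lines. The trace representation, feature name, and helper functions below are illustrative assumptions, not the actual TrafGen interfaces; the point is only that the model is an empirical distribution of a user-selected feature:

```python
import random

def extract_model(trace, feature):
    """Build an empirical distribution (value -> probability) for one
    user-selected feature; this stands in for the Trol/Patrol query
    step that would run over contributed Critter data."""
    values = [pkt[feature] for pkt in trace]
    return {v: values.count(v) / len(values) for v in set(values)}

def generate(model, n, rng):
    """Emit n synthetic feature values drawn from the extracted model,
    so generated traffic is realistic along that one dimension."""
    vals, probs = zip(*sorted(model.items()))
    return rng.choices(vals, weights=probs, k=n)

# Toy trace: two small packets and one full-size packet.
trace = [{"msg_len": 40}, {"msg_len": 40}, {"msg_len": 1500}]
model = extract_model(trace, "msg_len")
synthetic = generate(model, 1000, random.Random(0))
# roughly two-thirds of the generated values should be 40
```

Swapping in a different `feature` changes which dimension the synthetic traffic is realistic along, which is exactly the user-specified definition of realism described above.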



\section{Evaluation}
	\label{sec:eval}

\outline{sanitization: establish ground truth with a small set of
 volunteers via a manual process; use the Enron email corpus as step
 zero, and the AOL search data}

\outline{traffic capture: verify that capture behavior is consistent
 with the user's policy; run controlled tests with volunteers under
 predefined scenarios}

\outline{Tor: investigate correlation attacks (belongs outside the
 evaluation section)}

\outline{deployment: deploy iteratively, gather feedback from users,
 study how to attract users and what types of people respond}

\outline{tell users what we see so they know what we are collecting
 (not evaluation); user interface; usability-related work to draw on}



\section{Related Work}
\label{related}

\outline{no one has been very successful at preserving privacy}
Efforts to sanitize such unbounded data
 have largely failed. 
\outline{eg. netflix, imdb correlation}
\outline{eg. AOL search data}
\outline{eg. Sweeney medical records}


\outline{DARPA IDS eval datasets highlight need for packet contents}
% Currently available data with packet content is either synthetic or fits a
%very narrow research purpose.  
For example, the DARPA Datasets for Intrusion Detection System
 Evaluation~\cite{Lipmann00}---generated in 1998, 1999 and 2000 by MIT
 Lincoln Laboratory---are some of the few publicly available datasets
 that include packet payloads.  These datasets are still used extensively
 for training and evaluation over a decade after their release.
Their continued use,
 despite criticism that they are outdated and
 homogeneous~\cite{Mahoney03,McHugh00,Thomas08}, highlights the need
 for more publicly available content-rich datasets.

 % DARPA datasets were created by collecting full packet traces from XXX,
 % extracting some selected features, and then synthetically generating data
 % that fits these features.  There are three problems with this approach. 
 % First, data is now outdated.  Second, synthetic data resembles the
 % original data only along the selected features.  Since these are not
 % publicly disclosed researchers cannot quantify how the data generation
 % artifacts may influence their research outcomes.  REsearchers that need
 % realistic traffic along other dimensions, i.e.  that need to modify the
 % selected feature set, have no way of doing so.  Third, since the data was
 % collected in one specific environment it is unclear how it generalizes to
 % other network environments.  We expect to address the first and third
 % problem by allowing continuous contribution which should result n current
 % and diverse data.
% To address the second problem, what is needed is to store data securely in
% as raw format as possible, then allow researchers to specify what features
% interest them.  Mine these and return to researchers or use for synthetic
% traffic generation.  We have a related project - TrafGen





 % Other example, CAIDA DDoS and Worm data only has malicious traffic. The major challenge for security
 % researchers is to obtain realistic legitimate traffic to inform their defense design and to be used in evaluation.


\section{Research Plan}

\required{Results From Prior NSF Support}

\cite{Thomas08}