\section{Evaluation}
\label{sec:eval}
This section investigates the following three research questions:

\begin{itemize}

\item Does \ourtool's statically optimized dynamic enforcement approach sacrifice any precision compared to pure dynamic enforcement?

\item How effective are static optimizations in reducing runtime overhead when applying \ourtool to real malware?

\item Can a finer-grained privacy policy that allows users to specify both information flow and control flow properties help identify more types of malicious behavior?
\end{itemize}

\subsection{Analysis Precision Benchmark}

We first evaluate the analysis precision (no false positives or false negatives) of \ourtool by running it against DroidBench~\cite{droidbench}. DroidBench is the only benchmark suite that examines the effectiveness and accuracy of information flow tracking tools on Android apps. It contains 64 small Android applications, divided into 9 categories. Each category represents a specific set of analysis challenges and contains several test apps that explore different scenarios. For example, the Arrays and Lists category contains 5 applications that test whether the analysis tool can precisely track sensitive information flowing through arrays, lists, and other collections. We compared \ourtool's performance with FlowDroid, a static analysis tool, and TaintDroid, a dynamic information flow tracking tool. Passing a test subject means the tool precisely identified the information leakage with no false positive or false negative, and passing a category means the tool passed all tests within that category.
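To make the pass criterion concrete, the sketch below (illustrative only, not DroidBench code; the source and sink names are hypothetical) shows the source-to-sink pattern the benchmark apps exercise: a value from a sensitive source reaches a sink only along some paths, and a precise tool must flag exactly those paths.

```python
class Tainted:
    """Wrap a value with a taint flag, mimicking per-value taint tags."""
    def __init__(self, value, tainted):
        self.value, self.tainted = value, tainted

def get_device_id():                      # hypothetical sensitive source
    return Tainted("device-id", tainted=True)

def network_sink(msg, report):            # hypothetical network sink
    if msg.tainted:
        report.append("LEAK")             # a precise tool reports this...
    # ...and stays silent for untainted values (no false positive)

report = []
network_sink(get_device_id(), report)       # true leak: must be reported
network_sink(Tainted("hello", False), report)  # benign flow: must not be
assert report == ["LEAK"]
```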

\begin{figure*}
\centering
\begin{center}
    \begin{tabular}{| l | c | c | c | }
    \hline
    Category & FlowDroid & {\bf \ourtool} & TaintDroid  \\ \hline
    Arrays and Lists & \xmark & $?$ & $?$ \\ \hline 
    Callbacks & \cmark & \cmark & \cmark \\ \hline 
    Field and Object Sensitivity & \cmark & \cmark & \cmark \\ \hline 
    Inter-App Communication & \xmark & \xmark & \xmark \\ \hline 
    Lifecycle & \cmark & \cmark & \cmark \\ \hline 
    General Java & $?$ & \cmark & \cmark \\ \hline 
    Misc. Android-Specific & $?$ & \cmark & \cmark \\ \hline 
    Implicit Flows & \xmark & \xmark & \xmark \\ \hline 
    Reflection & \xmark & \xmark & \cmark \\ \hline \hline
    Total &  3(2)  & 5  & 6(1)   \\ \hline
      \end{tabular}
\end{center}
\caption{DroidBench. $?$ indicates the tool passed some, but not all, of the tests within that category.}
\label{fig:droidbench}
\end{figure*}

Figure~\ref{fig:droidbench} shows how \ourtool performed against FlowDroid and TaintDroid. Unsurprisingly, \ourtool and TaintDroid performed better than FlowDroid, because FlowDroid has to statically reason about an app's runtime behavior and is limited by its inability to precisely determine context-dependent runtime behaviors. FlowDroid was not able to pass any of the subject apps in the Arrays and Lists category because it cannot statically track the sensitive data inserted into a collection. \ourtool and TaintDroid were not able to pass all subject apps in this category because both tools currently do not precisely track information through arrays of primitive data types.
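The primitive-array imprecision can be sketched as follows (assumed semantics for illustration, not \ourtool's implementation): a dynamic tracker that keeps one taint tag per primitive array, rather than one per element, over-taints every later read once any tainted element is stored.

```python
# One taint tag for the whole primitive array: taint is the OR of all
# stores, and every load inherits the whole-array tag.

def store(array_taint, elem_tainted):
    return array_taint or elem_tainted

def load(array_taint):
    return array_taint

tag = False
tag = store(tag, True)      # element 0: a tainted GPS byte
tag = store(tag, False)     # element 1: a benign byte
assert load(tag) is True    # even the benign element now reads as tainted
```

This loss of per-element precision is why neither \ourtool nor TaintDroid passes every test in the Arrays and Lists category.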

FlowDroid also suffered in the General Java category because exception handlers may or may not be triggered at runtime. It also failed some tests in the Misc.\ Android-Specific category. For example, in one case, it could not determine that the reported information leakage resided in a disabled activity, which would never be executed at runtime.

\ourtool and FlowDroid did not pass the tests in the Reflection category because they were not able to resolve the reflective calls statically. Because \ourtool relies on a conservative static analysis to identify relevant code regions, if the static analysis cannot resolve all method invocations, it might miss some code paths through which sensitive information can be leaked. In comparison, TaintDroid passed all tests in the Reflection category because it is a pure dynamic analysis. Lastly, none of the three tools was able to handle inter-app communication or implicit flows. In the future work section, we describe our plan to support implicit flows.
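The reflection limitation can be illustrated with a short sketch (Python's `getattr` stands in for Java reflection here; the class and method names are hypothetical): the callee name is only known at runtime, so a static call graph that cannot resolve it will not instrument the invoked method.

```python
class Api:
    def send(self, data):
        # a sink the static analysis should cover, but cannot reach
        # through the unresolved reflective call below
        return f"sent:{data}"

def invoke(obj, name, arg):
    method = getattr(obj, name)   # target resolved only at runtime
    return method(arg)

# The method name could come from a config file or the network,
# which is invisible to a static analysis.
method_name = "".join(["se", "nd"])
assert invoke(Api(), method_name, "secret") == "sent:secret"
```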

As this evaluation has shown, \ourtool's statically optimized dynamic enforcement approach does not sacrifice any precision compared to pure dynamic enforcement.

\subsection{Malware Case Study}

Next, we applied \ourtool to enforce fine-grained privacy policies on two real malware samples. We first evaluated our statically optimized dynamic enforcement technique to measure the reduction in runtime overhead. We then examined how a finer-grained privacy policy allowed \ourtool to identify more types of malicious behavior than previous techniques.

\subsubsection{Evaluation Subjects}

This section details the two malware samples we evaluated and the privacy policies \ourtool enforced. One was developed by a DoD vendor as part of the DARPA APAC project, and the other is a spyware app that was removed from the Android app store due to privacy concerns.

{\bf Kittey Kittey} is a real malware sample designed to hide malicious behavior within a benign application. It is a gallery app that allows users to browse cat pictures. It does not require any permissions aside from reading the filesystem, which is granted by default to every Android application. The malicious behavior is triggered when a user scrolls through at least two pictures by clicking the Next button twice, followed by clicking the About button. When the Next button is pressed twice, the app secretly scans through all geotagged image files on the sdcard and extracts the timestamp of when each picture was taken, along with the GPS coordinates stored in the image file. When the About button is pressed, the app opens a new Internet browser tab and secretly uploads this information as part of an HTTP POST request. The privacy policy we enforced on this app is that information from the filesystem must not be sent to the Internet.

{\bf SMS Replicator} is a real spyware app that, once installed, automatically forwards all incoming SMS messages to a predefined number. The malicious behavior is triggered whenever the device receives an incoming SMS message. The privacy policy we enforced on this app is that incoming SMS messages must not be exfiltrated via SMS before any button is pressed.
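The SMS Replicator policy combines an information-flow condition with a control-flow precondition. A minimal sketch of such a runtime monitor (a hypothetical encoding for illustration, not \ourtool's policy syntax):

```python
# Tainted SMS data may not reach the SMS sink while no button has yet
# been pressed: an information-flow rule guarded by a control-flow event.

class Monitor:
    def __init__(self):
        self.button_pressed = False
        self.violations = []

    def on_button_press(self):
        self.button_pressed = True

    def on_send_sms(self, data_tainted):
        if data_tainted and not self.button_pressed:
            self.violations.append("auto-forwarded incoming SMS")

m = Monitor()
m.on_send_sms(data_tainted=True)   # replication before any UI event
assert m.violations == ["auto-forwarded incoming SMS"]
m.on_button_press()
m.on_send_sms(data_tainted=True)   # user-initiated send: allowed
assert len(m.violations) == 1
```

A pure information-flow tracker would flag both sends identically; the control-flow precondition is what isolates the automatic replication.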

\subsubsection{Runtime Overhead Comparison}

Figure~\ref{staticoptimized} shows the number of additional enforcement instructions needed to enforce the privacy policy for each malware sample. For both samples, \ourtool requires only an additional 2.4\% of enforcement instructions when using API summaries and relevant instructions. This is mainly because the number of instructions that are part of a malicious information flow is often very small in a large app. Furthermore, static optimizations were able to further reduce the number of necessary enforcement instructions because, when malware tries to leak information, it often needs many instructions just to parse and package the sensitive information for exfiltration. All of those instructions can be optimized away because they merely pass sensitive information from one memory location to another.
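The two pruning steps above can be sketched as successive filters over an instruction stream (the categories and counts below are illustrative, not actual Dalvik opcodes or the measured numbers):

```python
# Each entry: (kind, on_source_to_sink_path, is_enforcement_point).
instructions = (
    [("ui", False, False)] * 2700 +        # bulk of the app: irrelevant
    [("move", True, False)] * 50 +         # copies tainted bytes around
    [("invoke-sink", True, True)] * 6      # actual enforcement points
)

# Half optimization: keep only instructions on a source-to-sink path.
relevant = [i for i in instructions if i[1]]

# Full optimization: also drop pure data moves, whose taint transfer
# can be summarized instead of instrumented.
enforced = [i for i in relevant if i[0] != "move"]

assert len(relevant) == 56 and len(enforced) == 6
```

As in the measured results, most of the remaining relevant instructions are data moves, so the final enforcement set shrinks by another order of magnitude.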

\begin{figure*}[th]
\centering
\begin{center}
    \begin{tabular}{| l | c | c | c | c | c |}
    \hline
    Subject Apps & No Optimization & \multicolumn{2}{|c|}{Half Optimization}  & \multicolumn{2}{|c|}{Full Optimization}  \\ \hline
    Kittey Kittey &  2757 & 75 & 2.7\% &  6 & 0.22\%  \\ \hline
    SMS Replicator & 886  & 20 &  2.2\% & 4 & 0.4\% \\ \hline \hline
    Average &  &  & 2.4\%  & & 0.3\%   \\ \hline
      \end{tabular}
\end{center}
\caption{\# of additional Dalvik instructions needed to enforce the privacy policy for each subject. The percentages show the additional instructions as a fraction of the \# of Dalvik instructions in the subject app. The No Optimization column also equals the \# of Dalvik instructions in the subject, because without any optimization every single program instruction in the app has to be instrumented; this column is a lower bound on the actual overhead because we have not counted the method bodies of framework APIs. Half Optimization refers to applying API summaries and identification of relevant instructions. Full Optimization refers to all 3 optimizations working together.}
\label{staticoptimized}
\end{figure*}


\subsubsection{Enhanced Privacy Control}

To evaluate how \ourtool is able to capture more types of malware by supporting a finer-grained privacy policy, we compared it with three existing Android malware detection tools and the Android permission system. Here is a brief description of each:

\begin{itemize}

\item The Android Permission System is the default security and privacy enforcement mechanism on all Android devices. It forces the developer to declare uses of sensitive APIs (such as reading a file from the sdcard or sending SMS messages) as permissions and shows the user a list of permissions when installing an app. Because it is very coarse-grained, only able to identify uses of sensitive APIs, it cannot track how sensitive information is actually used, thus allowing malicious behavior to hide within benign apps.

\item FlowDroid~\cite{flowdroid} is a context-, flow-, field-, object-sensitive and lifecycle-aware static taint analysis tool for Android apps. While it can statically reason about how sensitive information is used, it is not able to reason about the preconditions under which sensitive information will be leaked.

\item Aurasium~\cite{aurasium} is an inline dynamic enforcement tool that repackages unsafe apps with additional runtime checks. For example, it can check, at runtime, the specific phone number to which the app is trying to send an SMS. However, it is not able to track how sensitive information is used or determine whether it is being exfiltrated.

\item TaintDroid~\cite{taintdroid} is a runtime dynamic enforcement tool that can track sensitive information as it flows through the app. However, it does not support enforcing control flow properties, so it cannot distinguish a malicious audio recorder that secretly records in the background from a benign one.

\end{itemize}

Figure~\ref{casestudy} shows the evaluation results. Unsurprisingly, the Android permission system and Aurasium performed poorly because they do not support identifying sensitive information leakage, which is necessary to detect information-stealing malware. TaintDroid was able to detect the malicious information leakage in Kittey Kittey, but it was not able to precisely detect that incoming SMS messages were being {\em automatically} replicated. In comparison, Aurasium was only able to detect the app automatically sending SMS messages, but was not able to determine exactly what information was being leaked.

\begin{figure*}[th]
\centering
\begin{center}
    \begin{tabular}{| l | l | l | l | l |  l |}
    \hline
    App & Android Permission System & FlowDroid~\cite{flowdroid} & Aurasium~\cite{aurasium} & TaintDroid~\cite{taintdroid} &  \ourtool \\ \hline
    Kittey Kittey &  \xmark &  $?$ & \xmark & \cmark & \cmark  \\ \hline
   SMS Replicator &  \xmark & $?$ & $?$ & $?$ & \cmark  \\ \hline \hline
    Total &  0 & 0(2) & 0(1) & 1(1) & 2  \\ \hline
    
        \end{tabular}
\end{center}
\caption{This table shows how \ourtool compares to previous work in detecting the malicious behavior in the subject apps. \xmark\ means the tool was not able to identify the malicious behavior. $?$ means the tool detected a portion of the malicious behavior but was not able to precisely identify it in its entirety. \cmark\ means the tool was able to precisely identify the malicious behavior.}
\label{casestudy}
\end{figure*}



