\chapter{Evaluation}

\section{Testing the Integrity monitor functionality}


The success of this project depends on the ability to correctly detect, from the monitoring VM, a change made inside the vulnerable VM.
Obtaining the kernel object to be monitored involves a series of steps, and each of these steps can misbehave and silently map a different memory location, leading to false detections.

\subsection{Testing pre-requisites}
The implemented prototype integrity monitor assumes that the entire system call table resides in a single memory frame. Before moving on to execution testing, the first thing to verify is that this assumption holds. The virtual address of the vulnerable VM's \emph{sys\_call\_table} is 0xc02d64b8. This address lies in the linearly mapped address space, so the corresponding physical address is 0xc02d64b8 - 0xc0000000 = 0x002d64b8. The offset of this address within its 4\,KiB frame is ( 0x002d64b8 \& 0x00000fff ) = 0x000004b8, which leaves ( 0x00001000 - 0x000004b8 ) = 0x00000b48 = 2888 bytes in the frame. The system call table requires no more than 400 entries, which is about 400 * 4 bytes = 1600 bytes. Therefore the entire system call table resides in one memory frame. This is a fortunate coincidence, as considerably more work would have been required to reconstruct the system call table had it spanned two memory frames.

\subsection{Testing Integrity monitor detection capability}
The implementation is tested by checking whether a change to the system call table in the vulnerable VM is correctly detected by the detector running on the detecting VM. To test that the mapping is correct, the prototype rootkit is used to change a known location in the vulnerable VM's system call table; this location should then be identified as changed by the integrity detector. To perform this check exhaustively, the rootkit is made to periodically change and restore the system call table entries from index 0 to 299. Each periodic change should be correctly identified by the integrity detector.

\lstset{language=C}
%\lstset{backgroundcolor=listinggray}
%\lstset{backgroundcolor=\color{listinggray}}
%\lstset{linewidth=90mm}
%\lstset{frameround=tttt}
%\lstset{frameround=trbl}
%\lstset{labelstep=1}
\lstset{keywordstyle=\color{blue}\textbf}
%\lstset{moredelim=[is][\ttfamily]{|}{|}}
\lstset{basicstyle=\ttfamily \small \bfseries}
\lstset{commentstyle=\ttfamily}
\lstset{stringstyle=\bfseries}
\lstset{showstringspaces=false}
\lstset{numbers=left,numberstyle=\ttfamily \small}
\lstset{breaklines=true}
\begin{center}
    \begin{minipage}{14cm}
        %\begin{lstlisting}[frame=trBL,indent=10mm,caption=My MATLAB Code,label=lst:matlab,gobble=4]{}
        \begin{lstlisting}[frame=trbl,caption=Code segment to exhaustively change system call entries ,label=lst:syscallIterate]{}

   int counter;
   for( counter = 0 ; counter < 300 ; counter++ ){
      swapin_sys_call(counter);   /* redirect entry to the rootkit handler */
      ssleep(2);                  /* hold the change for two seconds */
      swapout_sys_call(counter);  /* restore the original entry */
   }

        \end{lstlisting}
    \end{minipage}
\end{center}

Listing \ref{lst:syscallIterate} shows the code used to change and restore each system call reference iteratively. Each change is maintained for two seconds and then the entry is restored to its original reference.

A part of the execution log of the integrity monitor is shown in listing \ref{lst:idslog}.

%\lstset{language=C}
%\lstset{backgroundcolor=listinggray}
%\lstset{backgroundcolor=\color{listinggray}}
%\lstset{linewidth=90mm}
%\lstset{frameround=tttt}
%\lstset{frameround=trbl}
%\lstset{labelstep=1}
\lstset{keywordstyle=\color{blue}\textbf}
%\lstset{moredelim=[is][\ttfamily]{|}{|}}
\lstset{basicstyle=\ttfamily \small \bfseries}
\lstset{commentstyle=\ttfamily}
\lstset{stringstyle=\bfseries}
\lstset{showstringspaces=false}
\lstset{numbers=left,numberstyle=\ttfamily \small}
\lstset{breaklines=true}
\begin{center}
    \begin{minipage}{14.7cm}
        %\begin{lstlisting}[frame=trBL,indent=10mm,caption=My MATLAB Code,label=lst:matlab,gobble=4]{}
        \begin{lstlisting}[frame=trbl,caption=Log of Integrity monitor ,label=lst:idslog]{}
Mon Jul 21 15:27:22 2008 : sys_call_table[0] has changed : original - c012acb0 , now - cd84a000 
Mon Jul 21 15:27:24 2008 : sys_call_table[1] has changed : original - c0120fe0 , now - cd84a000 
Mon Jul 21 15:27:26 2008 : sys_call_table[2] has changed : original - c0102ef0 , now - cd84a000 
Mon Jul 21 15:27:28 2008 : sys_call_table[3] has changed : original - c0163ee0 , now - cd84a000 
Mon Jul 21 15:27:30 2008 : sys_call_table[4] has changed : original - c0163f50 , now - cd84a000  
..... ..... ..... ..... ..... ..... 
..... ..... ..... ..... ..... ..... 
Mon Jul 21 15:37:16 2008 : sys_call_table[295] has changed : original - c01619b0 , now - cd84a000 
Mon Jul 21 15:37:18 2008 : sys_call_table[296] has changed : original - c0174d60 , now - cd84a000 
Mon Jul 21 15:37:20 2008 : sys_call_table[297] has changed : original - c0174e80 , now - cd84a000 
Mon Jul 21 15:37:22 2008 : sys_call_table[298] has changed : original - c0161d10 , now - cd84a000 
Mon Jul 21 15:37:24 2008 : sys_call_table[299] has changed : original - c0162190 , now - cd84a000 


        \end{lstlisting}
    \end{minipage}
\end{center}


A part of the vulnerable VM's kernel log showing the rootkit activity is shown in listing \ref{lst:kernellog}. The prototype rootkit is implemented to write its activity to the kernel log.

%\lstset{language=C}
%\lstset{backgroundcolor=listinggray}
%\lstset{backgroundcolor=\color{listinggray}}
%\lstset{linewidth=90mm}
%\lstset{frameround=tttt}
%\lstset{frameround=trbl}
%\lstset{labelstep=1}
\lstset{keywordstyle=\color{blue}\textbf}
%\lstset{moredelim=[is][\ttfamily]{|}{|}}
\lstset{basicstyle=\ttfamily \small \bfseries}
\lstset{commentstyle=\ttfamily}
\lstset{stringstyle=\bfseries}
\lstset{showstringspaces=false}
\lstset{numbers=left,numberstyle=\ttfamily \small}
\lstset{breaklines=true}
\begin{center}
    \begin{minipage}{15cm}
        %\begin{lstlisting}[frame=trBL,indent=10mm,caption=My MATLAB Code,label=lst:matlab,gobble=4]{}
        \begin{lstlisting}[frame=trbl,caption=Log of vulnerable VM's kernel ,label=lst:kernellog]{}
Jul 21 15:27:22 vmGutsy kernel: [table modifier] Redirecting sys_call_table[0] from c012acb0 to cd84a000 
Jul 21 15:27:24 vmGutsy kernel: [table modifier] Restoring sys_call_table[0]
Jul 21 15:27:24 vmGutsy kernel: [table modifier] Redirecting sys_call_table[1] from c0120fe0 to cd84a000 
Jul 21 15:27:26 vmGutsy kernel: [table modifier] Restoring sys_call_table[1]
Jul 21 15:27:26 vmGutsy kernel: [table modifier] Redirecting sys_call_table[2] from c0102ef0 to cd84a000 
Jul 21 15:27:28 vmGutsy kernel: [table modifier] Restoring sys_call_table[2]
..... ..... ..... ..... ..... ..... 
..... ..... ..... ..... ..... ..... 
Jul 21 15:37:19 vmGutsy kernel: [table modifier] Redirecting sys_call_table[297] from c0174e80 to cd84a000 
Jul 21 15:37:21 vmGutsy kernel: [table modifier] Restoring sys_call_table[297]
Jul 21 15:37:21 vmGutsy kernel: [table modifier] Redirecting sys_call_table[298] from c0161d10 to cd84a000 
Jul 21 15:37:23 vmGutsy kernel: [table modifier] Restoring sys_call_table[298]
Jul 21 15:37:23 vmGutsy kernel: [table modifier] Redirecting sys_call_table[299] from c0162190 to cd84a000 
Jul 21 15:37:25 vmGutsy kernel: [table modifier] Restoring sys_call_table[299]


        \end{lstlisting}
    \end{minipage}
\end{center}





From the above outputs we can see that the change to each system call entry is successfully detected by the integrity monitor. This verifies that the memory mapping is accurate and that the changes are being reported correctly.

Although the clocks of the two VMs are not synchronized to great accuracy, the difference between them is at most one second, and the two VMs sample independently of each other. Comparing the timestamps of the two logs shows that each detection happened within the same second as the modification. It can therefore be concluded that modifications to the vulnerable VM are detected practically immediately.






\section{Performance Evaluation}

The integrity monitor constantly generates hashes and compares them with the existing hash to check whether a change has occurred. This is done every two seconds and could cause a considerable performance degradation to the system. To measure the penalty, a classical benchmark is used: the time taken to decompress the linux-2.6.18 kernel source. The benchmark uses the average of the outputs of the Unix command \emph{time}, run 10 times over the decompression.

The test was run on a normal (non-virtualized) host, on Dom0 while the integrity detector was not executing, and on Dom0 while the integrity detector was running. Table \ref{table:time} shows the times obtained in each situation.

\begin{table}
\caption{Average time for kernel source archive decompression}

\begin{center}
    \begin{tabular}{ | l | l | l | l |}	
    \hline
    		&  		Normal		& 	\multicolumn{2} {|c|}{Xen Virtualized}	\\    \hline 
      		& 	Normal Host 	&	Dom0 with detection & Dom0 without detection \\ \hline \hline
    real	& 		24.93s		&		31.48s			&		31.48s			\\ \hline	
    user	& 		18.97s		&		18.40s			&		18.42s			\\ \hline	
    sys		& 		03.09s		&		08.83s			&		10.01s			\\ \hline	
    
    \end{tabular}
    \label{table:time}
\end{center}
\end{table}

In both virtualized measurements, whether or not the detector was running, the vulnerable VM existed alongside Dom0.

The real time taken by the virtualized system exceeds that of the non-virtualized system by about 6.5 seconds. This can be considered the degradation caused by running two operating system instances under virtualization, compared with the normal single-instance execution.
The system time also differs significantly between the virtualized and non-virtualized cases, again because two OS instances are running under virtualization.

Comparing the virtualized system with and without the detector, the table shows no significant difference in real time or user time. When the detector is running, the system time is in fact slightly lower. This might be because Dom0 runs a GUI with which the user interacts, while the GUI of the vulnerable VM receives no user interaction.

What we see here is that there is no significant performance degradation from the integrity monitor checking for hash changes every two seconds.