\chapter{Project Design}

This chapter describes the solution provided in this project, how it is designed, and why it was designed in this way. The project aims at detecting kernel integrity violations through the use of introspection, where the integrity of a kernel is checked from outside the kernel being monitored.

The main technology behind introspection is virtualization. The hypervisor acts as a trusted computing base (TCB), providing a secure foundation for the whole system. The Xen hypervisor is the software layer running directly on top of the x86 hardware. Xen acts as the resource manager of the whole system, allocating resources and providing access control to them. The main resource considered in this project is memory: memory is allocated to virtual machines, and access to it is controlled by the Xen hypervisor.

In the simplest instance of the system, the vulnerable OS to be monitored runs in a guest VM (a domU), while the integrity monitor runs in a completely different VM: the privileged host, Dom0.

Dom0, where the integrity monitor runs (the monitoring VM), is the most crucial component of the system, as a compromise at this point would make the whole system unreliable again. Therefore Dom0 has to be well protected against any unauthorized access. There are many possible ways to deny network access to Dom0; using such techniques, Dom0 can be protected thoroughly, making access to it very hard.

The OS where the services run is open for access by anyone on the network; it is therefore exposed to attack and vulnerable. This OS runs in a domU while being constantly checked for integrity violations by the monitoring VM.

The above-mentioned architecture provides some desirable features that should exist in introspection, and these can also be considered design requirements. As the chapter progresses, these design requirements are considered one by one, showing how each has been realized in the solution.
%{there exists other ways to implement introspection such as, inserting code into the hypervisor to trap signals issued by hosts. doing some modifications to the vulnerable VM to extract information from the host.}


\section{Design Requirements}

The implementation of a robust monitoring architecture that follows good security guidelines requires the monitoring architecture to adhere to a few high-level requirements for monitoring VMs \cite{bryanSecureVM}. These are as follows.

\begin{description}
\item[No superfluous modifications to the VMM] - The VMM should not be modified, as modifications can introduce vulnerabilities. If the VMM lacks the necessary primitives, only a minimal modification should be made.


\item[No modifications to the VM or the target OS] - Any OS that runs on Xen (i.e.\ with a Xen-supporting kernel) should be monitorable without further modification.

\item[Small performance impact] - The monitoring architecture should not prevent the target OS from performing its intended functions, and the monitoring software should impose only a small performance overhead.

\item[Provide semantics for the monitors] - The semantics of the raw memory should be regenerated and provided so that monitors can use the mapped kernel objects for monitoring.

\item[Ability to monitor any data on the target OS] - The full range of the target OS's memory should be visible to the monitor.

\item[Target OS cannot tamper with the monitors] - The monitor and the monitored OS reside in two different, isolated VMs. The monitored OS should not be able to access the monitoring OS.

\end{description}


The project is built on XenAccess \cite{bryanSecureVM}, which satisfies these requirements; the integrity monitor therefore inherits these properties. When designing further developments, however, care has to be taken that the above features are preserved.


\section{XenAccess Architecture}

\begin{figure}
\begin{center}
\caption{XenAccess Architecture \label{fig:XenAccessDesign}}

\ifpdf
    \includegraphics[width=5in]{images/xenAccess.png}
\else
    \includegraphics[width=5in]{images/xenAccess.png}
\fi
\end{center}
\end{figure}


XenAccess is a user-level framework that provides the functionality required to map the memory of one VM into another VM while preserving the above-mentioned requirements.


It is a user-level library that runs on a Xen-enabled kernel without requiring any modifications to that kernel. It uses the XenControl and XenStore libraries to communicate with the Xen hypervisor, and therefore maps memory between virtual machines without any modification to the hypervisor or to the kernels of the virtual machines.

This model does not install any hooks or traps for monitoring, so a large performance impact is not expected. The initial calculation of machine addresses and the mapping of memory locations are the only additional performance costs of XenAccess. Once the memory is mapped, it can be accessed like ordinary paravirtualized memory in the monitor's own address space.

The monitor and the monitored OS are on two different VMs, and Xen provides sufficient isolation between them. Even if the monitored VM is attacked, the monitoring VM is not compromised, which keeps the monitor reliable. In the design of the proposed system, none of the mapped memory locations is given execute permission, which supports the assumption that malicious code in a vulnerable VM cannot be made to execute on the detection VM. This will still have to be examined in more detail to determine whether security breaches are possible through memory mapping; even if a breach were possible, it would not be an easy task.

Any part of the memory of the monitored VM can be mapped into the monitoring VM. The difficulty lies in reproducing the exact semantics of that memory, which requires the kernel structures of the monitored VM to be known precisely. A consistent and robust semantic-regeneration technique is still not a reality and would require more extensive research. In this project, simple semantic regeneration is used to reconstruct well-known, simple kernel data structures.

In the process of mapping memory, XenAccess finds the virtual address of a kernel object, converts the virtual address to a physical address using that VM's page tables, then finds the machine address corresponding to the physical address using the hypervisor-maintained p2m table, and finally maps the memory region into the other VM's memory. XenAccess maps the whole page of memory in which the particular memory address resides.

 


\section{Mapping Memory on Xen}

Xen memory management partitions memory into three levels.


\begin{description}
\item[Machine Addresses ] - These addresses are used by the hardware and are managed by the VMM.


\item[Physical Addresses ] - The addresses that each paravirtualized OS treats as its physical memory. From the point of view of the VM, these addresses are used by the hardware. This abstraction enables non-contiguous allocation of machine memory to VMs.

\item[Virtual / logical Addresses] - Same as the logical addresses of a normal OS.

\end{description}

Both the VMM and each VM need to be provided with memory. Xen tracks the ownership and use of each page, allowing secure partitioning between domains \cite{xenInterface}. Each VM is allocated its memory by the VMM, and the addresses the VMs access are physical addresses. These are mapped to the real machine memory through the VMM, and each physical address has a corresponding machine address.

Memory management in Xen is done at page-frame level, where each physical memory frame is mapped to a machine memory frame. This mapping is stored in a globally readable `machine-to-physical' table.

Usually the memory space allocated to a VM is independent of that of every other VM. Through the mapping of memory for introspection, the memory locations of interest are made accessible to the monitoring VM as well.

Memory address translation is done in two different ways for kernel memory space and user memory space, and the mapping process uses both methods. Kernel memory areas can be translated from virtual to physical addresses by subtracting PAGE\_OFFSET from the kernel address. User-space addresses are translated using the page tables of the particular VM; in order to access page tables residing in another VM, the memory areas holding the page tables, as well as the pages they reference, have to be mapped.
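The kernel-space half of this translation can be sketched as plain arithmetic. This is an illustrative fragment, not XenAccess code; the constant is the conventional x86\_32 non-PAE value quoted later in this chapter, and the function name is invented for the example.

```c
#include <stdint.h>

/* x86_32, non-PAE: the kernel's linear mapping starts at PAGE_OFFSET,
 * so a kernel virtual address translates to a physical address by
 * simple subtraction.  (Illustrative sketch only.) */
#define LINUX_PAGE_OFFSET 0xc0000000UL

static uint32_t kvirt_to_phys(uint32_t vaddr)
{
    return vaddr - LINUX_PAGE_OFFSET;
}
```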

The source code of XenAccess was studied to learn how the mapping of memory takes place; the procedure is as follows.

\begin{figure}
\begin{center}
\caption{Physical to machine address mapping in Xen \label{fig:pageMapping}}

\includegraphics[width=6in]{images/pageMapping.png}

\end{center}
\end{figure}

\begin{description}

\item[Initializing domain information] An interface into Xen Control is opened and used to populate the domain information of the vulnerable VM, and the PAGE\_OFFSET value is initialized.

\item[Find the Page Global Directory] Read the System.map file of the kernel and find the virtual address of the \emph{swapper\_pg\_dir} symbol; the Page Global Directory starts at this symbol. Translate this virtual address to a physical address by subtracting PAGE\_OFFSET ( 0xc0000000 ).

\item[Map the physical address frame of the PGD] Partition the physical address of the PGD pointer into a physical frame number and an offset. Map the corresponding physical frame into the monitoring VM's memory space; the exact reference can then be accessed using the offset.

\item[Memory translation using page tables] Initially only the reference to the PGD table is maintained. When an address needs to be translated, the translation starts at the Page Global Directory; using the reference values obtained at each level, the PUD, PMD and PTE pages are mapped in sequence until finally the physical address is found.

\item[Physical address to machine address translation] The physical-to-machine address translation is illustrated in figure \ref{fig:pageMapping}. The obtained physical address is broken down into a physical frame number and an offset. Using the p2m ( physical-to-machine ) table in the hypervisor, the machine frame number corresponding to the physical frame number can be found.

\item[Map the memory location] Once the needed memory location has been translated into a machine or physical address, the XenControl library is used to map the page frame into the other VM's memory space. The exact memory location can then be accessed using the offset.


\end{description}
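The two translation steps above, the page-table walk indices and the physical-to-machine lookup, can be sketched as plain address arithmetic. The 10/10/12 bit split assumes x86\_32 without PAE, as used in this project (the PUD and PMD levels are folded away on this configuration); the p2m table is passed in as an ordinary array purely for illustration, whereas in the real system it is obtained from the hypervisor.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  0xfffU

/* x86_32 without PAE: a virtual address splits into a 10-bit PGD
 * index, a 10-bit PTE index and a 12-bit page offset. */
#define PGD_INDEX(v) (((v) >> 22) & 0x3ffU)
#define PTE_INDEX(v) (((v) >> 12) & 0x3ffU)
#define PAGE_OFF(v)  ((v) & PAGE_MASK)

/* Physical-to-machine translation: split the physical address into a
 * frame number and an offset, look the frame up in the p2m table and
 * recombine.  The p2m table is a plain array here for illustration. */
static uint64_t phys_to_machine(uint32_t paddr,
                                const uint32_t *p2m, size_t nframes)
{
    uint32_t pfn = paddr >> PAGE_SHIFT;   /* physical frame number */
    if (pfn >= nframes)
        return (uint64_t)-1;              /* frame outside the table */
    return ((uint64_t)p2m[pfn] << PAGE_SHIFT) | PAGE_OFF(paddr);
}
```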







\section{Integrity Monitor Design}




The integrity monitor will be implemented as shown in figure \ref{fig:MonitorDesign}. The monitored OS and the monitoring OS run in two different VMs, both running x86\_32 Ubuntu Linux with non-PAE memory addressing.

Dom0 is the privileged domain where the monitor runs. The XenControl library running on Dom0 provides low-level access to the Xen control interfaces ( the hypervisor interface ). The main functionality used from XenControl is its memory-mapping feature; other functionality of this library is also used by the upper layer, XenAccess.

The XenStore daemon provides a simple tree-like database that can be used to access information about the VMs running on the hypervisor.

When a kernel object of the vulnerable host is needed by the monitor, the initial work of mapping the memory is done by XenAccess. XenAccess provides the requested memory location of a particular VM by mapping the relevant page frame into the monitoring VM. XenAccess can find the starting virtual address of a particular kernel object through the System.map file of that kernel, and finds the corresponding physical address by traversing the page tables. After the physical address is found, the machine address is obtained, and the page containing this machine memory address is mapped into the monitoring VM. Once this page of raw memory is provided, together with an offset to the proper location, the upper layer casts the provided memory to the appropriate memory structures.

After the mapping of memory is done, the major challenge is to regenerate the semantics of the memory. In this project, simple static kernel objects that are essential to integrity checking are regenerated and checked for integrity violations. As a further effort, the task list can also be regenerated. This is much more challenging, as the task list changes dynamically and its layout varies at compile time according to the configuration used.

The same kernel memory structures can vary even through the simple change of compiling the kernel with a different configuration; therefore a proper way of obtaining the exact kernel structures is a necessity. The structure definitions of the monitored VM will be stored and accessed by the layer generating the kernel objects. Another difficulty in reconstructing the semantics is finding the end of a particular object; the ending memory location is essential when it comes to accessing kernel functions residing in memory.

Once the required kernel memory objects are reconstructed, the integrity monitor can work on the provided objects and detect exploitation of these kernel objects.

The original values of the static kernel objects will be stored, so that when an exploitation occurs these original values help identify what has been changed.

Integrity checking also requires the verification of another type of memory content: executable instruction code, such as system calls. For the checking of code segments, exact semantic regeneration is not needed; yet the ending location of the code has to be found for proper integrity checking, which is a challenging task.


\begin{figure}
\begin{center}
\caption{Integrity Monitor Model \label{fig:MonitorDesign}}

\ifpdf
    \includegraphics[width=4in]{images/projectDesign.png}
\else
    \includegraphics{images/projectDesign.png}
\fi
\end{center}
\end{figure}

\subsection{Critical Kernel Areas to monitor}

The detector's ability to detect many exploits depends on the number of kernel objects being monitored. There are two types of kernel objects: static objects and dynamic objects.

\subsubsection{Static Kernel Objects}

These are data structures maintained by the kernel which do not change during execution. They are populated in memory at boot time, and it is very rare that any of them needs to be changed; during the normal functioning of the kernel these data structures do not change. Static kernel objects can therefore be monitored by periodically taking hash values of the structures and checking that the hash value remains the same. If a change occurs to a static object, its hash value changes; the object can then be compared with an original version to find where the change occurred, and the change reported so that corrective action can be taken.
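The periodic hash check can be sketched as follows. FNV-1a is used here purely for illustration (a real monitor would prefer a cryptographic hash), and the function names are invented for the example; the region pointer stands for memory already mapped from the monitored VM.

```c
#include <stdint.h>
#include <stddef.h>

/* FNV-1a, 32-bit: a simple stand-in hash for the sketch. */
static uint32_t fnv1a(const uint8_t *data, size_t len)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 16777619u;
    }
    return h;
}

/* Periodic check: returns 1 if the mapped static kernel object still
 * matches the baseline hash taken when the system was known clean. */
static int region_unchanged(const uint8_t *region, size_t len,
                            uint32_t baseline)
{
    return fnv1a(region, len) == baseline;
}
```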

Following are some of the static kernel data structures that are most vulnerable to attack.

\begin{description}
	\item[Kernel text] can be monitored to check whether the kernel instructions change. The ranges to obtain for kernel text are [\_text , \_etext] and [\_sinittext , \_einittext].
	\item[System call table] Whenever a system call is made, the reference for the system call is looked up in sys\_call\_table. This table is one of the most attacked places in the kernel.
	\item[Other system tables] There are other static kernel objects that are often attacked, such as the interrupt descriptor table and the page-fault handler exception table. These objects, which would usually need to be monitored, are not available in a Xen-enabled host, so it is not necessary to scan those tables.
	
\end{description}

\subsubsection{Dynamic Kernel Objects}
These data structures change as the kernel executes. Most of them form linked lists, and both the list and each node can change at any time; it is therefore hard to devise a method that accurately detects violations in these lists.
One method is to detect hidden entries: the same information is obtained both from a user-level utility and from the kernel data structures, and a mismatch indicates that entries have been hidden.
Another method is to keep a white list of allowed entries for dynamic objects and detect any unlisted objects.
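The cross-view method can be sketched with the two views as plain integer lists (e.g. process IDs); in the real monitor one view would come from the mapped task list and the other from a user-level utility such as ps. The function name and data layout are illustrative only.

```c
#include <stddef.h>

/* Cross-view detection sketch: an entry present in the view
 * reconstructed from kernel data structures but absent from the view
 * reported by a user-level utility is likely being hidden. */
static int count_hidden(const int *kernel_view, size_t nk,
                        const int *user_view, size_t nu)
{
    int hidden = 0;
    for (size_t i = 0; i < nk; i++) {
        int found = 0;
        for (size_t j = 0; j < nu; j++)
            if (kernel_view[i] == user_view[j]) { found = 1; break; }
        if (!found)
            hidden++;  /* visible to the kernel, hidden from userland */
    }
    return hidden;
}
```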

Following are some of the dynamic kernel data structures that are vulnerable to attack.

\begin{description}
\item[Kernel module list] shows the modules loaded into the kernel. A rootkit can be installed as a kernel module, so checking this list can reveal malicious modules in execution.

\item[User process list] the list of processes executing on the OS. Malicious applications can be run as processes.

\item[Network sockets] rootkits can run hidden daemons to provide remote access to the outside. These unwanted network connections can be detected.

\end{description}



\subsection{Semantics Generation}

The semantics-generation step is the link between the low-level raw memory obtained through Xen and the application-level integrity monitor. The proper functioning and reliability of the integrity monitor depend most of all on semantic generation.

The kernel is built up of various data structures and executable code. Each of the lists, tables, trees and graphs maintained by the kernel has to be handled differently according to the functionality it provides. Static kernel structures are easier to handle, as they do not change over time. The exact data type of a kernel object is needed when dynamic kernel structures are to be monitored: dynamic kernel structures usually create a collection of objects by linking to each other, and creating the proper links requires the exact data structure to be known. Without it, it would be impossible to obtain the references that form the links between the structures.

With dynamic kernel structures, a further difficulty lies in keeping up with the ever-changing collection of kernel objects: nodes are constantly added, deleted and modified, and these changes should be reflected in the mapped data structures.

\subsection{Kernel access interface}
The kernel objects, with their proper semantics, should be easily accessible to the developers of the intrusion detection system. The kernel access interface provides a higher-level view for obtaining the created kernel objects.

The kernel objects to be generated are requested through the access interface. When an object is requested, the lower-level layers perform the address translation, map the required memory frames, discover the kernel object's exact data type, and cast the memory into the proper data type to provide semantics; the result is maintained as a mapped kernel object. The integrity monitors can access the obtained data structures through this interface, which hides the complexity of the underlying layers from the integrity monitor.
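One possible shape for such an interface is sketched below. The type and function names are hypothetical, not part of the XenAccess API, and the toy registry merely stands in for the address-translation, mapping and semantic-regeneration layers described above.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical kernel access interface: the monitor asks for a kernel
 * object by symbol name and gets back a handle to the mapped, typed
 * object.  Everything below is an illustrative sketch. */
struct kernel_object {
    const char *symbol;  /* e.g. "sys_call_table"                  */
    void       *data;    /* mapped memory, cast to the proper type */
    size_t      size;    /* extent of the object, if known         */
};

/* Toy registry standing in for the lower layers. */
static struct kernel_object registry[] = {
    { "sys_call_table", NULL, 0 },
    { "system_utsname", NULL, 0 },
};

/* Look up a kernel object by symbol; NULL if the symbol is unknown. */
struct kernel_object *get_kernel_object(const char *symbol)
{
    for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
        if (strcmp(registry[i].symbol, symbol) == 0)
            return &registry[i];
    return NULL;
}
```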













