\documentclass[a4paper]{article}
\usepackage[english]{babel}

\newcommand{\super}[1]{\ensuremath{^{\textrm{#1}}}}
\newcommand{\sub}[1]{\ensuremath{_{\textrm{#1}}}}

\usepackage[top=2.0cm, bottom=2.0cm, left=2.0cm, right=2.0cm]{geometry}

\begin{document}

\title{The Turtles Project: Design and Implementation of Nested Virtualization - summary}

\author{Si-Mohamed Lamraoui
\and Liviu Varga}

\maketitle

\section{Introduction}

Nested virtualization allows a hypervisor running on bare metal to run other hypervisors as guests. This is needed because operating systems nowadays are gaining virtualization capabilities. Nested virtualization can also be used on platforms running hypervisors embedded in firmware. As cloud computing becomes more and more popular, nested virtualization can offer users live migration of their hypervisors together with their guest virtual machines. It also opens new approaches to computer security, as well as to testing, demonstrating, benchmarking, and debugging hypervisors and virtualization setups.

\section{Related Work}

There has been work around the concept of nested virtualization for quite a long time. It was first mentioned in the 70's, and the first practical implementation was done by IBM in the early 90's. Later, the hardware extensions introduced by Intel and AMD made virtualization much more efficient. There is a recent effort to incorporate nested virtualization into the Xen hypervisor, following the same principles as Turtles. There is also a commercial product from ScaleMP providing ``VM on VM'', but no details about its implementation have been published.

\section{Design and Implementation}

The Turtles project is based on the KVM hypervisor. It can host multiple guest hypervisors simultaneously, each running its own multiple nested guest operating systems.

\subsection{Theory of Operation}

There are two possible models for nested virtualization: \textit{multi-level architectural support for nested virtualization}, the model implemented in the IBM System z architecture; and \textit{single-level architectural support for nested virtualization}, in which only a single hypervisor can use the processor's virtualization instructions (VMX on Intel, SVM on AMD). In the latter model, the guest hypervisors' use of these instructions has to be emulated, which can be done recursively. The discussion is limited to three levels: the host hypervisor (L\sub{0}), a guest hypervisor (L\sub{1}), and a guest OS (L\sub{2}).

The main idea for nested virtualization is to \textit{multiplex} multiple levels of virtualization on the single level of architectural support for virtualization.
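The multiplexing idea can be illustrated with a small sketch. This is our own simplification, not the actual KVM code: on hardware with single-level support, every trap caused by the nested guest exits to L\sub{0}, which then decides whether to handle the exit itself or to reflect it to the guest hypervisor L\sub{1}. The exit-reason names and the set of reasons L\sub{0} keeps for itself are illustrative assumptions.

```python
# Hypothetical sketch of how L0 multiplexes exits (names are illustrative,
# not the real KVM implementation). Every exit caused by L2 lands in L0;
# L0 either services it transparently or reflects it to L1, which believes
# it is running L2 directly on the hardware.

def handle_l2_exit(reason, handled_by_l0):
    """Return which level ultimately handles an exit caused by L2.

    reason        -- symbolic exit reason, e.g. "EPT_VIOLATION" or "CPUID"
    handled_by_l0 -- set of reasons L0 resolves itself; everything else
                     is forwarded to the guest hypervisor L1
    """
    if reason in handled_by_l0:
        return "L0"   # L0 services the exit transparently
    return "L1"       # L0 resumes L1 with a synthetic exit

l0_private = {"EXTERNAL_INTERRUPT"}   # assumption: L0 keeps interrupts
print(handle_l2_exit("EXTERNAL_INTERRUPT", l0_private))  # L0
print(handle_l2_exit("CPUID", l0_private))               # L1
```

The point of the sketch is that the hardware only ever switches between L\sub{0} and whatever runs above it; all deeper levels are an illusion maintained by L\sub{0}.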

\subsection{CPU: Nested VMX Virtualization}

Virtualization on x86 processors used to be slow because it was done entirely in software.
Thanks to Intel's and AMD's CPU extensions, we can now achieve good performance with a single level of virtualization. For nested virtualization, however, these hardware extensions are not sufficient on their own and need some adjustment. The authors of the Turtles project manage to use this set of instructions and its control structures in a nested scenario. As the guest hypervisors cannot use the VMX instructions directly, the host hypervisor traps and emulates them on their behalf.
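A toy model of this emulation, under our own simplified naming: when L\sub{1} launches L\sub{2}, the launch traps to L\sub{0}, which builds the control structure it actually loads (VMCS$_{0\rightarrow 2}$) by combining the guest-state fields L\sub{1} prepared (VMCS$_{1\rightarrow 2}$) with L\sub{0}'s own host-state fields, so that exits from L\sub{2} always land in L\sub{0}. The field names are simplified stand-ins, not the real VMCS encoding.

```python
# Illustrative model of nested VMCS handling (field names are simplified
# stand-ins, not the real Intel VMCS encoding). L0 merges L1's guest-state
# specification for L2 with its own host state: L2 runs with the state L1
# asked for, but every exit returns control to L0, never directly to L1.

def build_vmcs02(vmcs12, vmcs01):
    """Merge L1's view of L2 (guest state) with L0's own host state."""
    vmcs02 = {}
    for field, value in vmcs12.items():
        if field.startswith("guest_"):
            vmcs02[field] = value       # what L2 should see, per L1
    for field, value in vmcs01.items():
        if field.startswith("host_"):
            vmcs02[field] = value       # exits must land in L0
    return vmcs02

vmcs12 = {"guest_rip": 0x1000, "host_rip": 0xbad}   # L1 thinks exits go here
vmcs01 = {"guest_rip": 0x2000, "host_rip": 0xfff0}  # L0's real exit handler
print(build_vmcs02(vmcs12, vmcs01))
```

L\sub{1}'s host-state fields are deliberately ignored here; L\sub{0} later uses them when it decides to reflect an exit back into L\sub{1}.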

\subsection{MMU: Multi-dimensional Paging}

With nested virtualization there is a need for a third layer of address translation, compared to two in the usual case. Modern hardware architectures offer two-dimensional page tables. Before this, a technique known as \textit{shadow page tables} was used. A guest creates a guest page table, which translates guest virtual addresses to guest physical addresses. Based on this table, the hypervisor creates a new page table, the shadow page table, which translates guest virtual addresses directly to the corresponding host physical addresses.
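The essence of shadow paging is the composition of two translations into one. The following sketch models it with plain dictionaries (page numbers and names are our own illustration, not real page-table formats):

```python
# Toy model of shadow page tables for a single level of virtualization.
# The guest maintains GVA -> GPA; the hypervisor composes it with its own
# GPA -> HPA map to build the GVA -> HPA shadow table the MMU actually
# walks. Page numbers are arbitrary illustrative values.

def make_shadow_table(guest_pt, gpa_to_hpa):
    """Compose GVA->GPA with GPA->HPA into a GVA->HPA shadow table."""
    shadow = {}
    for gva, gpa in guest_pt.items():
        if gpa in gpa_to_hpa:       # unmapped GPAs would fault at fill time
            shadow[gva] = gpa_to_hpa[gpa]
    return shadow

guest_pt   = {0x4000: 0x100, 0x5000: 0x200}   # GVA -> GPA (guest's table)
gpa_to_hpa = {0x100: 0x9000, 0x200: 0xa000}   # GPA -> HPA (hypervisor's map)
print(make_shadow_table(guest_pt, gpa_to_hpa))  # {16384: 36864, 20480: 40960}
```

In reality the shadow table is filled lazily on page faults and must be kept coherent when the guest edits its own table, which is where much of the overhead comes from.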

The x86 architecture has two-dimensional page tables in the hardware MMU: EPT on Intel and NPT on AMD. Based on the processor capabilities, both hypervisors, L\sub{0} and L\sub{1}, independently determine the preferred mechanism:
\begin{itemize}
\item \textit{Shadow-on-shadow} is used when the processor does not support two-dimensional page tables; it is the least efficient method.
\item \textit{Shadow-on-EPT} is used when the processor supports EPT.
\item \textit{Multi-dimensional paging}: as with two-dimensional tables, each level creates its own separate translation table. L\sub{0} exposes EPT capabilities to L\sub{1}; it then compresses EPT$_{0\rightarrow 1}$ and EPT$_{1\rightarrow 2}$ into a single EPT$_{0\rightarrow 2}$, which translates directly from L\sub{2} guest physical addresses to L\sub{0} host physical addresses, reducing the number of page-fault exits and improving nested virtualization performance.
\end{itemize}
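The compression idea can be sketched as a fault handler that composes the two tables on demand (our own simplification; the real KVM data structures and fault paths differ):

```python
# Sketch of EPT "compression" behind multi-dimensional paging (illustrative
# model, not the KVM implementation). When an L2 access faults because
# EPT_02 has no entry, L0 walks EPT_12 (L1's table) and EPT_01 (its own)
# and installs the composed entry, so later accesses by L2 need no exit.

def ept02_fault(l2_gpa, ept12, ept01, ept02):
    """Service one EPT violation by composing the two translations."""
    if l2_gpa not in ept12:
        return "reflect to L1"      # L1 must first populate its own table
    l1_gpa = ept12[l2_gpa]
    hpa = ept01[l1_gpa]             # assumption: L0 maps all of L1's memory
    ept02[l2_gpa] = hpa             # cache the composed mapping
    return "resume L2"

ept12 = {0x100: 0x800}              # L2 GPA -> L1 GPA
ept01 = {0x800: 0x7000}             # L1 GPA -> HPA
ept02 = {}
print(ept02_fault(0x100, ept12, ept01, ept02))  # resume L2
print(ept02)                                    # {256: 28672}
```

Once EPT$_{0\rightarrow 2}$ is warm, L\sub{2} runs with hardware-walked translations and without page-fault exits, which is the source of the performance gain.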

\subsection{I/O: Multi-level Device Assignment}

The authors explore several ways to deal with I/O in a nested environment. They came up with a new technique called \textit{multi-level device assignment}. This technique allows a guest OS to communicate directly with a device using DMA (through the IOMMU). As the guest hypervisors do not have access to the physical IOMMU, the host hypervisor emulates one for them. In the end, the device's DMA is translated by the real IOMMU, so the guest OS communicates with the device directly.
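A minimal sketch of the interposition, with names of our own invention: L\sub{1} programs its emulated IOMMU with a mapping from a device DMA address to a guest-physical page; L\sub{0} intercepts the operation, translates the target through its own tables, and programs the real IOMMU, leaving the device's data path exit-free.

```python
# Toy model of multi-level device assignment (names are ours, not the
# paper's code). L1's IOMMU map operation traps to L0, which resolves the
# target page through its own EPT_01-style map and programs the single
# real hardware IOMMU. The device then DMAs straight into guest memory.

def emulated_iommu_map(dma_addr, l1_gpa, ept01, real_iommu):
    """L0's handler for L1's (trapped) IOMMU map operation."""
    hpa = ept01[l1_gpa]             # where L0 actually placed that page
    real_iommu[dma_addr] = hpa      # one hardware translation layer

real_iommu = {}
emulated_iommu_map(0xd000, 0x800, {0x800: 0x7000}, real_iommu)
print(real_iommu)   # {53248: 28672}
```

The key property is that only the (rare) map/unmap operations trap; the DMA traffic itself never involves either hypervisor.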

\subsection{Micro Optimizations}

Mainly, there are two places where a guest of a nested hypervisor is slower than the same guest running on a bare-metal hypervisor:
	\begin{itemize}
	\item Transitions between L\sub{1} and L\sub{2}. One solution is to copy data between the VMCSs only when the relevant values were modified. This has to be balanced carefully, because tracking the values also has a cost. Another is to copy multiple VMCS fields at once, but this approach is not sanctioned by the Intel specification.
	\item Exit handling in L\sub{1}. Additional exits are caused by the privileged \textit{vmread} and \textit{vmwrite} instructions. Trapping on every such instruction can be avoided by binary-translating them into non-trapping memory loads and stores. The authors tested this by modifying L\sub{1} to read and write VMCS$_{1\rightarrow 2}$ directly in memory (\textit{DRW}).
	\end{itemize}
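The first optimization, copying only modified VMCS fields, can be sketched with a dirty set (our own simplification; real VMCS fields and their tracking differ):

```python
# Illustrative sketch of dirty-tracked VMCS copying. Fields are copied
# between the two structures only when they were actually written since
# the last sync, trading a little bookkeeping per write for fewer copies
# per transition. Field names are simplified stand-ins.

class TrackedVMCS:
    def __init__(self, fields):
        self.fields = dict(fields)
        self.dirty = set()

    def write(self, field, value):
        if self.fields.get(field) != value:
            self.fields[field] = value
            self.dirty.add(field)

    def sync_to(self, other):
        """Copy only modified fields; return how many were transferred."""
        copied = len(self.dirty)
        for field in self.dirty:
            other.fields[field] = self.fields[field]
        self.dirty.clear()
        return copied

src = TrackedVMCS({"guest_rip": 0x1000, "guest_rsp": 0x8000})
dst = TrackedVMCS(src.fields)
src.write("guest_rip", 0x1004)   # only one field changed this round
print(src.sync_to(dst))          # 1
print(dst.fields["guest_rip"])   # 4100
```

As the text notes, whether this pays off depends on whether the tracking cost stays below the cost of the copies it saves.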
	
\section{Evaluation}

We will not reproduce any graphs here, but what can be concluded is that nested virtualization adds as little as 6--8\% overhead when running unmodified binary applications. The best results were obtained when not strictly following the full Intel specification for the virtualization instructions.

\section{Conclusion}

Despite the lack of architectural support for nested virtualization, this paper shows that it can be done rather efficiently. It also shows that nested virtualization can be useful in cloud computing, where migration is a point to take into account, as well as security. Developing, testing, and debugging of other hypervisors and OSs can also take advantage of nested virtualization. Moreover, a fact which we find important is that the project is already implemented in KVM and can be used by anyone interested in this area.

\end{document}