\documentclass[11pt,twocolumn]{article}

\usepackage{hyperref}
\usepackage{graphicx}
\usepackage{subfig}
\usepackage{float}
\usepackage{epsfig}
\usepackage{epstopdf}
\usepackage{listings}
\lstset{language=C}
\begin{document}

\title{\emph{maylloc}: a fast and scalable multi-threaded memory allocator}
\author{Anuj Goel \and Naresh Singh}
\date{\today}

\maketitle

\begin{abstract}
In this project, we introduce \emph{maylloc} (for ``may allocate''), a fast and scalable memory allocator for multi-threaded applications.
Our main goal is to make memory allocation fast and suitable for multi-threaded applications. We achieve this goal at the expense
of using additional memory for book-keeping. Since memory keeps growing in size while getting cheaper and faster, we believe the
extra book-keeping is a reasonable trade-off. Moreover, our book-keeping is simple (presumably requiring no locks), which allows our allocator
to work fast irrespective of the number of CPUs or the number of threads. In principle, it is similar to hoard \cite{Berger00hoard:a} and follows its core idea of keeping
the allocator as simple as possible. \emph{maylloc} also avoids \emph{False Sharing} of allocations by not sharing pages between threads.
In this report, we cover our current design and implementation, discuss the preliminary tests we performed on
our allocator, and suggest additional key design concepts which are not implemented in the current version.
\end{abstract}

\section{Motivation}
The main principle behind our design is simple book-keeping, which allows the allocator to scale without any unreasonable overhead. We achieve this simplicity by rounding off our allocations to the next power of 2. This wastes some memory; however, since we are living in the \emph{second} decade of the $21^{st}$ century, we believe this is a reasonable trade-off. With increasing memory sizes and decreasing costs, this overhead is a good price for the gain in performance. We insist on using as little locking as possible (ideally none) in our design. Over the course of this report, we will show how we can achieve an almost lock-free implementation of the allocator. Our allocator avoids both \textbf{active and passive} false sharing by maintaining exclusive page access for each thread. That said, if threads voluntarily share memory, we do nothing to prevent that.
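The rounding step above can be sketched in a few lines of C. This is an illustrative helper (the name \emph{round\_up\_pow2} and the minimum unit of 4 bytes mirror the design described in this report, but it is not our actual implementation):

\begin{lstlisting}
#include <stddef.h>

//Round a request up to the next power of 2,
//starting from the smallest unit of 4 bytes.
//(Illustrative sketch, not the real
//maylloc code.)
static size_t round_up_pow2(size_t n) {
  size_t p = 4;
  while (p < n)
    p <<= 1;
  return p;
}
\end{lstlisting}

For example, a 20-byte request is rounded up to 32 bytes.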

\section{Design Overview}
In this section, we discuss our current design and key ideas. We request memory from the underlying OS at page granularity. Essentially, there are two kinds of requests: those for at most \emph{PAGE\_SIZE/2} bytes and those for more. We call these \emph{small page requests} and \emph{large page requests}, respectively. Each page used for small allocations contains a header which records the number of allocated blocks, the allocation size, and a bitmap of allocated and free blocks in the current page. Given any allocated pointer, we can reach this header with simple pointer arithmetic, because each header sits at the beginning of its \emph{PAGE}. Each page is used to allocate objects of a single fixed size for a particular thread. For requests larger than \emph{PAGE\_SIZE/2}, we allocate directly from the system using \emph{mmap}. These large allocations carry a different header, called \emph{large\_page\_header}, which stores the number of allocated pages. To distinguish between the two kinds of allocations, we use the first bit of the small or large page header: we set it to 0 for small headers and 1 for large headers. The rest of the bits are used for other book-keeping.
\begin{figure}[H]
    \centering
    \subfloat[Small Page Allocations.]{\includegraphics[width=0.50\textwidth]{small_page_header.eps}}
    \caption{Small Page Allocations.}
\end{figure}
\begin{figure}[H]
    \ContinuedFloat
    \centering
    \subfloat[Large Page Allocations.]{\includegraphics[width=0.4\textwidth]{large_page_allocation.eps}}
    \caption{Large Page Allocations.}
\end{figure}
Since all of a thread's allocations come from pages used exclusively by that thread, we avoid \emph{allocator induced} False Sharing. The small allocations are of sizes 4, 8, 16 and so on up to 2048. We maintain a table, \emph{free\_list}, in Thread Local Storage, which contains a link to the free pages for each allocation size. This table is of size \emph{NUMBER\_OF\_ALLOCATION\_UNITS*INT\_SIZE}. The free pages of a particular allocation size are linked together through a pointer in the allocation header. Large pages are freed using the \emph{munmap} system call; we store the size in pages in the large page header and use it in the \emph{munmap} call. Next, we describe the implemented memory allocator functions in detail.
\begin{lstlisting}
//Main structures used by maylloc

//Per-size free-page list kept in TLS.
struct free_entry {
  struct page_header *free_start; //first page with free blocks
  struct page_header *free_end;   //last page in the list
};

//small page header (at the start of each small page).
struct page_header {
  short size;               //block size for this page
  short allocated_blocks;   //number of blocks in use
  void *free_ptr;           //link to the next free page
  char bitmap[BITMAP_SIZE]; //1 = allocated, 0 = free
};

//large page header.
struct large_page_header {
  short size; //allocation size in pages
};

//TLS for free_list.
static __thread struct
free_entry free_list[FREE_LIST_SIZE];
\end{lstlisting}
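The ``simple maths'' used to reach a header can be made concrete: rounding any block pointer down to its page boundary yields the header address. The following sketch assumes a 4096-byte page; the constant and function name are illustrative, not part of the implementation.

\begin{lstlisting}
#include <stdint.h>

//Recover the page header address from any
//pointer inside the page by masking off the
//low bits. (Illustrative; assumes 4096-byte
//pages.)
static void *header_of(void *ptr) {
  return (void *)((uintptr_t)ptr
                  & ~(uintptr_t)(4096 - 1));
}
\end{lstlisting}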
\subsection{Allocation}
\emph{maylloc} allocates memory in two different ways depending on the size of the allocation. If the request is less than or equal to \emph{PAGE\_SIZE/2} bytes, we follow the link in the \emph{free\_list} to a free page for that allocation size. Each page header contains a bitmap which keeps track of allocated and free blocks: a bit is set if the corresponding block is allocated and clear if it is free. We scan the bitmap to find the first free block available for allocation; we have optimised this search using the \emph{repeat-compare} instructions in extended inline assembly. The free list points to \emph{NULL} if all the blocks in the page are allocated. In this case, we allocate a new page using \emph{mmap}, initialise the header fields, update the free-page pointer in the \emph{free\_list} entry corresponding to this allocation size, and return the address of the allocated block. For example, if the allocation request is for 20 bytes, we round it up to the next power of 2, i.e.\ 32 bytes, then follow the pointer in the \emph{free\_list} to get a page with available 32-byte allocation units. We search the bitmap for the first free block and return its pointer to the caller.
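The bitmap search can be sketched portably as follows. The actual implementation uses inline assembly, so this C loop is only a functional stand-in with hypothetical names:

\begin{lstlisting}
//Find the first clear (free) bit in the
//allocation bitmap, or return -1 if the
//page is full. (Portable stand-in for the
//repeat-compare assembly version.)
static int first_free_block(
    const unsigned char *bitmap, int nbits) {
  for (int i = 0; i < nbits; i++)
    if (!(bitmap[i / 8] & (1u << (i % 8))))
      return i;
  return -1;
}
\end{lstlisting}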

\subsection{Deallocation}
Deallocation is done through the \emph{free} function, which expects a pointer to the block of memory to be freed. The trick lies in finding the block which was allocated for this pointer. The algorithm works as follows. The passed pointer is rounded down to a multiple of \emph{PAGE\_SIZE}. The allocation size is read from the page header at the beginning of the page. The block number is calculated by subtracting the page base address from the pointer and dividing the result by the block size. The bitmap entry for the block is reset and the allocation count of the page is decremented. If the page was previously full, it would not have appeared on the free list of pages; in that case, we add the page to the free list so that a new request can be serviced from it. If this was the only allocated block in the page, we reset its allocation count and keep the page reserved for future requests.
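The block-number computation described above can be sketched as follows (assuming 4096-byte pages; the names are illustrative):

\begin{lstlisting}
#include <stdint.h>

//Compute the block number of ptr inside its
//page: (ptr - page_base) / block_size.
//(Illustrative; assumes 4096-byte pages.)
static int block_index(void *ptr,
                       unsigned block_size) {
  uintptr_t base = (uintptr_t)ptr
                   & ~(uintptr_t)(4096 - 1);
  return (int)(((uintptr_t)ptr - base)
               / block_size);
}
\end{lstlisting}

For a pointer 96 bytes into a page of 32-byte blocks, this yields block number 3.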

\subsection{Realloc}
The \emph{realloc} function ``changes the size of the memory block pointed to by ptr to size bytes'' (realloc(3), Linux man page). In our implementation, we check whether the new size can be accommodated in the already allocated block; if so, we return the same address to the caller. For example, if the original malloc was for 300 bytes and realloc is called for 400 bytes, we can reuse the already allocated block of 512 bytes. If the new request exceeds the block size, we allocate a new block on a \emph{best fit} basis, copy the old data to the new block and return the address of the new block to the caller.
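The in-place reuse check amounts to comparing the new request with the power-of-2 capacity of the existing block, as in this sketch (a hypothetical helper, not the implemented code):

\begin{lstlisting}
#include <stddef.h>

//Can realloc reuse the existing block? Its
//capacity is the old request rounded up to
//a power of 2. (Illustrative sketch.)
static int fits_in_place(size_t old_request,
                         size_t new_request) {
  size_t cap = 4;
  while (cap < old_request)
    cap <<= 1;
  return new_request <= cap;
}
\end{lstlisting}

With the example above, a 300-byte block has a 512-byte capacity, so a 400-byte realloc fits in place while a 600-byte one does not.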

\subsection{Calloc}
Our implementation of calloc directly calls the \emph{malloc} function with a request for \emph{num*size} bytes.
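A sketch of this approach is shown below on top of the system \emph{malloc}. Note that the overflow check and the explicit \emph{memset} are our additions here: a freshly \emph{mmap}ed page is already zeroed, but a recycled block may hold stale data, so a complete calloc must zero it.

\begin{lstlisting}
#include <stdlib.h>
#include <string.h>

//calloc built on malloc (sketch; uses the
//system malloc here). Checks that num*size
//does not overflow, then zeroes the block.
static void *my_calloc(size_t num,
                       size_t size) {
  if (size != 0 && num > (size_t)-1 / size)
    return NULL;
  void *p = malloc(num * size);
  if (p)
    memset(p, 0, num * size);
  return p;
}
\end{lstlisting}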

\section{Analytical Results}
In this section, we discuss bounds on memory used versus memory allocated by our allocator. Since our allocator allocates memory in powers of 2, its memory utilisation is bounded below by a factor of $\frac{1}{2}$. However, we also keep some book-keeping in every page, which adds a constant of 136 bytes per page to the bound. This factor plus constant may at first seem a little high; this is because we are allocating memory one page at a time. However, if we increase the allocation size for small allocations to 16 pages, this constant would lose its significance. This idea is similar to the superblock used by \cite{Berger00hoard:a}. In the current setting, our allocator wastes the least memory when the majority of allocations are of size 4. It performs worst when allocation requests fall just above 1024 and up to 2048 bytes, because then only a single 2048-byte block can be allocated from a page, wasting up to $\frac{3}{4}$ of its memory. However, the enhancement discussed above of using 16-page allocations would render this deficiency unnoticeable.
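As a concrete instance of the worst case (assuming a 4096-byte page), a request for 1025 bytes is rounded up to a 2048-byte block; since $136 + 2 \times 2048 > 4096$, only one such block fits in a page. The useful fraction of the page is then
\[
\frac{1025}{4096} \approx \frac{1}{4},
\]
so roughly $\frac{3}{4}$ of the page is wasted.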
\section{Experiments}
We tested our allocator with Linux utilities like ls, cat and vim to make sure it works for simple use cases. We measured its performance using the well-known \emph{threadtest} \cite{hoardurl,cdftorontourl} tool written by \emph{Emery Berger}, the author of hoard. We tested performance
for 1 to 5 threads. Around 27 different object sizes were randomly selected for allocation. We present two plots here: one showing the average time taken to allocate 3000 objects against the number of threads, and another showing the time taken to allocate 3000 objects of various sizes. The \emph{Threads v/s Time} plot on a single-processor machine is not a very accurate way of measuring the performance of multi-threaded allocators: it does not correctly reflect the fact that our allocator should, in theory, perform similarly irrespective of the number of threads. A multi-core machine where threads execute simultaneously would be a more reasonable choice for this plot, but such a machine was unavailable to us.
\begin{figure}[H]
    \centering
    \subfloat[Threads v/s Average Time for 3000 allocations.]{\includegraphics[width=0.35\textwidth,angle=-90]{maylloc_thread.eps}}
    \caption{The numbers are averaged for different object sizes.}
\end{figure}
\begin{figure}[H]
    \ContinuedFloat
    \centering
    \subfloat[Size v/s Average Time for 3000 allocations.]{\includegraphics[width=0.35\textwidth,angle=-90]{maylloc_size.eps}}
    \caption{The numbers show time taken for 3000 allocations for each object size.}
\end{figure}

\section{Conclusions}
In this project, we introduced the design of \emph{maylloc}, a fast and scalable multi-threaded memory allocator. We implemented a basic version of our design and observed its capability to scale for multi-threaded applications. It achieves this by ensuring each thread allocates memory from pages exclusively designated for itself, though it does not prevent threads from passing pointers between themselves; we discuss that problem and a possible solution in the future work section. Since a thread does not allocate from any other thread's heap, this design avoids both active and passive \emph{False Sharing}. To facilitate multiple allocations from the same page, it maintains an allocation bitmap, which keeps track of free and used blocks, and an allocation count, which records the number of allocated blocks in the page. For any new allocation request, \emph{maylloc} uses a table stored in TLS which points to pages containing free blocks. Using TLS avoids any locking which would otherwise be necessary. We therefore believe our design is highly scalable and fast. In the next section, we discuss a few approaches to improve this allocator.

\section{Future Work}
The discussion above omits some points that are not part of our implementation but are nevertheless crucial to achieving the intended goals of \emph{maylloc}; we discuss them in this section. In the design outline, we said that small allocations are served from a single page whose beginning contains the header. Our current small page header is 136 bytes, which is too big, and this header is part of every page allocated for the small allocation scheme. However, if we increase the size of our allocation requests to the OS by some factor, we would achieve a two-fold benefit. First, we would make correspondingly fewer system calls: for example, if we request \emph{16*PAGE\_SIZE} bytes instead of just \emph{PAGE\_SIZE}, we would make only $\frac{1}{16}^{th}$ of the system calls. Second, the overall header overhead would shrink relative to the allocation, giving us better utilisation of memory and letting us pack block allocations more tightly.

One problem we have not discussed so far is what happens when a thread allocates a block and passes it to another thread. In such a case, the second thread can use the block as usual; the real problem arises if it tries to free the block. Since all the book-keeping is in the page and in the allocating thread's TLS, we have to come up with a way to do this efficiently. Here, we propose a scheme to handle this case. Currently, we use just one bit to track the allocated/free state of each block in a page. Instead, we could use two bits per block. To counter the extra space usage, we can restrict our allocator to a minimum allocation size of 8 instead of 4, which frees \emph{half} of the book-keeping bits. We can use these freed bits to record a \emph{free} performed by another thread. It works as follows: the second thread, instead of freeing the block, sets this second bit. This requires no locking, because the block is being used by the second thread, not the first. We assume that two threads modifying a block together would be an application bug; other allocators cannot avoid this either. So all the second thread does is set the bit. Later, when an allocation request comes to the first thread, it checks this flag to ascertain that the block is no longer in use by another thread before reusing it.
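A sketch of the proposed remote-free flag is given below. We show it with C11 atomics for safety, although, as argued above, under the stated usage assumption a plain store would also do; the names and the stand-in bitmap are illustrative, not implemented code.

\begin{lstlisting}
#include <stdatomic.h>

//Stand-in for the second per-block bit: the
//non-owning thread sets "freed elsewhere"
//instead of touching the owner's lists.
static _Atomic unsigned char remote_free[8];

static void mark_remote_free(int block) {
  atomic_fetch_or(&remote_free[block / 8],
      (unsigned char)(1u << (block % 8)));
}

//The owning thread checks this flag before
//reusing the block.
static int is_remote_freed(int block) {
  return (atomic_load(&remote_free[block / 8])
          >> (block % 8)) & 1;
}
\end{lstlisting}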

With these enhancements, we conclude that our allocator would presumably be lockless, scalable and efficient. However, we would certainly need to implement and test these changes to verify these claims. So far, we have not tested the case of two threads sharing memory, but we believe the approach discussed above can be implemented easily and would essentially work fine.
\bibliographystyle{plain}
\bibliography{bib}
\end{document}
