\documentclass[a4paper,10pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{enumerate}
\usepackage{listings}
\lstset{%
language=C,
basicstyle=\footnotesize,
stepnumber=1,
showspaces=false,
showstringspaces=false,
showtabs=false,
frame=single,
tabsize=2,
breaklines=true,
breakatwhitespace=false,
}

% Title Page
\title{Reverse Map for Optimised RAID Reconstruction}
\author{Anuj Goel}


\begin{document}
\maketitle

\begin{abstract}
In this project, we focus on understanding the Software RAID code in the Linux kernel in order to derive the set of blocks that is destroyed 
when two or more disks fail in a RAID array. Given a logical sector number in the virtual device, we identify the physical disk 
and offset it corresponds to, and vice versa. This analysis can be used to reduce the data reconstruction effort when an 
unrecoverable failure of a RAID group occurs. We have created a user-space library that supports such a mapping for RAID4/5/6/10.
\end{abstract}

\section{Introduction}
The distributed block-level storage system is built on a set of JBOD servers, each of which is a Linux machine with 30--40 disks 
organized into a few software RAID groups (e.g. 8 disks per RAID group). In addition, this storage system uses three-way replication 
to improve data availability: every logical block has three copies, each on a different JBOD server.

Whenever an unrecoverable failure of a RAID group occurs on a JBOD server, a naive approach is to reconstruct the entire RAID group. 
A more intelligent approach is to reconstruct \emph{only} those logical blocks residing on the disks that actually failed. 
To do this, one needs to first identify the logical blocks that reside on the failed disks of the RAID group, and then reconstruct 
each identified logical block from its two other copies.

\section{Multiple Device Driver}
The Software RAID code is part of the Multiple Device driver, also known as the MD driver, in the Linux kernel. This code can be located in the \emph{/drivers/md} directory. We have used Linux kernel version 3.3.2 for our analysis. Some of the features of the MD driver are:

\begin{itemize}
\item Each RAID array registers itself with MD as a personality by calling register\_md\_personality().
\item If one is using the /proc filesystem, /proc/mdstat lists all active md devices with information about them.
\item the md driver marks an array as \emph{dirty} before writing any data to it, and marks it as \emph{clean} when the array is being disabled, e.g. at shutdown.
\item Requesting a scrub will cause md to read every block on every device in the array, and check that the data is consistent.
\item mdadm is a Linux utility that can be used to create, manage, and monitor MD devices.
\end{itemize}

\section{Terms Involved}
Before going into the Linux Software RAID code, we explain some terms used frequently in various operations \cite{Remzi}.

\subsection{Striping}
\begin{table}[!htbp]
	\centering
		\begin{tabular}{|c|c|c|c|}\hline
  		Disk0  & Disk1  & Disk2 & Disk3  \\\hline
			0 & 1 & 2 & 3 \\\hline
			4 & 5 & 6 & 7 \\\hline
			8 & 9 & 10 & 11 \\\hline
			12 & 13 & 14 & 15 \\\hline
		\end{tabular}
	\caption{Simple Striping}
	\label{tab1}
\end{table}

Striping refers to spreading data across the disks in a round-robin fashion. This approach helps to achieve a high degree of parallelism when contiguous blocks are accessed (for example, reading a large file sequentially). The blocks in the same row form a stripe. In Table \ref{tab1}, blocks 0, 1, 2, and 3 are in the same stripe, and so on.

\subsection{Chunk Size}
\begin{table}[!htbp]
	\centering
		\begin{tabular}{|c|c|c|c|}\hline
  		Disk0  & Disk1  & Disk2 & Disk3  \\\hline
			0 & 2 & 4 & 6 \\\hline
			1 & 3 & 5 & 7 \\\hline
			8 & 10 & 12 & 14 \\\hline
			9 & 11 & 13 & 15 \\\hline
		\end{tabular}
	\caption{Striping with Bigger Chunk Size}
	\label{tab2}
\end{table}

Chunk size is defined as the number of blocks written to one disk before moving on to the next one.
In Table \ref{tab1}, we made the simplifying assumption that only one block (each of, say, size 4KB) is placed on each disk before moving
on to the next. However, this arrangement need not be the case. For example, we could arrange the blocks across disks as in Table \ref{tab2}.

In Table \ref{tab2}, we place two 4KB blocks on each disk before moving on to the next disk. Thus, the \emph{chunk size} of this RAID array is 8KB, and a stripe consists of 4 chunks or 32KB of data.
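The chunk arithmetic above can be sketched in C. This is an illustrative helper (stripe\_map() is our own name, not kernel code); it maps a logical block number to a disk and a block offset on that disk under plain striping, and reproduces the layout of Table \ref{tab2} when called with a chunk size of 2 blocks and 4 disks.

\begin{lstlisting}
/* Map a logical block to (disk, offset) under plain striping. */
static void stripe_map(unsigned long block, unsigned chunk_blocks,
                       unsigned ndisks,
                       unsigned *disk, unsigned long *offset)
{
  unsigned long chunk_no  = block / chunk_blocks; /* which chunk      */
  unsigned long chunk_off = block % chunk_blocks; /* offset in chunk  */
  unsigned long stripe_no = chunk_no / ndisks;    /* which stripe row */

  *disk   = chunk_no % ndisks;                    /* round-robin disk */
  *offset = stripe_no * chunk_blocks + chunk_off; /* offset on disk   */
}
\end{lstlisting}

For example, block 9 in Table \ref{tab2} lands on Disk0 at offset 3: it belongs to chunk 4, the first chunk of the second stripe, which wraps back to Disk0 after the first 2-block chunk already stored there.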

\subsection{Superblock}
Each device in an array may have some metadata stored on the device. This metadata is called a \emph{superblock}. It records information about the structure and state of the array, which allows the array to be reliably re-assembled after a shutdown.


\section{Understanding SW RAID Code}
In this section, we describe the current implementation of the Software RAID in the Linux kernel. 

\subsection{From File System to RAID Driver}
Here we describe the flow of the code when a block \emph{read} command is issued by the file system. We take the example of ext4 file system and a RAID5 array.
\begin{enumerate}
	\item The function ext4\_bread() in /fs/ext4/inode.c is responsible for issuing the low level read by calling ll\_rw\_block() with a pointer to the \emph{struct buffer\_head}.
	\item ll\_rw\_block: This function provides the low level access to the block devices. It checks if an updated copy of the block is already present in the buffer cache. If not, it issues a read request in the buffer head by calling submit\_bh().
	\item submit\_bh: This initializes a \emph{bio} structure where data can be read and calls submit\_bio(). A bio is the basic unit of operation for the block I/O layer.
	\item submit\_bio: This submits a bio to the Block Device Layer for I/O. It performs some basic accounting and calls generic\_make\_request().
	\item generic\_make\_request: This function hands a buffer to the respective device driver for I/O. It is passed a \emph{struct bio} which describes the I/O that needs to be done. This function does not return any status. The success/failure status of the request, along with notification of completion, is delivered asynchronously through the bi\_end\_io function. It drops into the make\_request\_fn() of the device registered in the bio. If the underlying block device is a Logical Volume, the request function maps to \emph{dm\_request()} in dm.c, else it calls md\_make\_request().
	\item md\_make\_request: This function is a precursor to the personality's \emph{make\_request} function. I/O requests come here first so that we can check if the device is being suspended pending a reconfiguration. Once the device is available, it calls the device-specific make\_request function. 
\end{enumerate}

The make\_request function has different implementation for each RAID type. 

\subsection{Linux Software RAID5 code}
Here we list the sequence of steps after the control reaches the RAID5 make\_request function for a read operation.
This function is called with a pointer to the MD Device struct and the bio where the data needs to be read.

\begin{enumerate}
	\item First, md\_write\_start() is called to check whether the superblock needs to be updated.
	\item If the incoming request is a \emph{read}, we check whether a reshape of the array is in progress. This can be checked in the \emph{reshape\_position} field of struct mddev; if reshape\_position is MaxSector, no reshape is happening.
	\item Next, we call the function chunk\_aligned\_read() for reads within a chunk.
	\item If the read request does not fit within a chunk, we return \emph{false} and the make\_request function splits the request into chunks.
	\item We create a copy of the bio so that the request is preserved in case of a read failure. This is done by calling bio\_clone\_mddev().
	\item Set bi\_end\_io to raid5\_align\_endio. This function is called when the read from the disk completes or an error is reported.
	\item Set bi\_private to the original bio. This will be needed during a \emph{retry} attempt in case the read fails.
	\item Map the sector number to the disk number and offset within the disk using raid5\_compute\_sector().
	\item Call rcu\_read\_lock() to acquire a read lock on the disk. Check if the disks are in sync and whether the bio request fits the queue size.
	\item Wait for the disk to enter the \emph{quiescent} state. Increment the active\_aligned\_reads counter and call generic\_make\_request() again.
	\item This call adds the bio to the head of the bio request queue. Later, when the device-specific make\_request\_fn() returns to generic\_make\_request(), it pops off this bio and schedules the read from the disk.

\end{enumerate}
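The sector mapping performed by raid5\_compute\_sector() (step 8 above) can be illustrated with a simplified user-space sketch for the default left-symmetric RAID5 layout. The function name and parameters below are our own (the kernel reads the chunk size and disk count from the array configuration); this reflects our reading of the algorithm and is not kernel code.

\begin{lstlisting}
/* Simplified sketch of raid5_compute_sector() for the default
 * left-symmetric RAID5 layout (not kernel code). */
static void raid5_map_sector(unsigned long long r_sector,
                             unsigned chunk_sectors, unsigned raid_disks,
                             unsigned *dd_idx,    /* data disk index   */
                             unsigned *pd_idx,    /* parity disk index */
                             unsigned long long *new_sector)
{
  unsigned data_disks = raid_disks - 1; /* one chunk per stripe is parity */
  unsigned long long chunk_number = r_sector / chunk_sectors;
  unsigned chunk_offset = (unsigned)(r_sector % chunk_sectors);
  unsigned long long stripe = chunk_number / data_disks;
  unsigned chunk_in_stripe = (unsigned)(chunk_number % data_disks);

  /* Left-symmetric: parity moves back one disk per stripe, and data
   * chunks start just after the parity disk, wrapping around. */
  *pd_idx = data_disks - (unsigned)(stripe % raid_disks);
  *dd_idx = (*pd_idx + 1 + chunk_in_stripe) % raid_disks;
  *new_sector = stripe * chunk_sectors + chunk_offset;
}
\end{lstlisting}

With 4 disks and a chunk of 8 sectors, logical sector 0 maps to disk 0 (parity on disk 3), while logical sector 25, in the second stripe, maps to disk 3 at sector 9 (parity on disk 2).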

\subsection{Aligned Read Status}
An aligned read request is one which can be completed within a chunk. The completion of this read request is notified to the RAID5 driver using raid5\_align\_endio() which was registered by chunk\_aligned\_read(). This function gets the status of the disk I/O as an argument. 
\begin{itemize}
	\item If the read was successful, it calls bio\_endio on the original bio. It decrements the active\_aligned\_reads counter and wakes up the threads waiting for this queue.
	\item If the read returned an error, it adds the bio to the retry\_list and calls md\_wakeup\_thread() which wakes up the \emph{raid5d} thread (explained in next section).
\end{itemize}

\subsection{RAID5 thread}
A RAID5 array is created by calling md\_ioctl() with the ADD\_NEW\_DISK command in a loop. After all the disks have been added, md\_ioctl() is called with the RUN\_ARRAY command. This calls the md\_run() function, which in turn calls the personality's run() function. The RAID5 personality's run() function sets up the configuration of the array in setup\_conf().
It registers the RAID5 kernel thread with the MD driver and starts this thread.

This thread goes to sleep when there are no pending operations on the RAID device and is woken up by the MD driver after adding a bio to the request queue. It performs the following sequence of operations.

\begin{enumerate}
	\item Pop a bio from the retry list.
	\item Get the next stripe to process and pass it to handle\_stripe().
	\item handle\_stripe() calls handle\_stripe\_fill() which reads or computes data from other disks to satisfy pending requests using fetch\_block().
	\item fetch\_block() is responsible for issuing the actual read from the disk. If the disk has failed, it recomputes the data using parity; otherwise it sets the R5\_Wantread flag.
	\item handle\_stripe\_fill() sets the STRIPE\_HANDLE flag which was previously cleared by handle\_stripe().
	\item handle\_stripe() then calls ops\_run\_io() to register the I/O completion function, raid5\_end\_read\_request(), and submit I/O requests for all the bios in the request queue of each disk. 
	\item Finally handle\_stripe() calls return\_io() and returns.
	\item raid5d then calls release\_stripe(). On completion of the I/O, the registered notification function raid5\_end\_read\_request() is called with the status of the I/O.
	\item raid5\_end\_read\_request() checks the bi\_flags field in the bio. If the BIO\_UPTODATE flag is set, the read request has finished successfully.
\end{enumerate}
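The recomputation that fetch\_block() performs for a failed disk rests on the fact that RAID5 parity is a plain XOR: a missing chunk equals the XOR of all surviving data chunks of the stripe together with the parity chunk. A minimal illustration (xor\_reconstruct() is a hypothetical helper, not a kernel function):

\begin{lstlisting}
#include <stddef.h>

/* XOR n equal-length buffers together. With RAID5, XOR-ing the
 * surviving data chunks and the parity chunk of a stripe yields
 * the chunk that was lost. */
static void xor_reconstruct(const unsigned char *bufs[], size_t n,
                            size_t len, unsigned char *out)
{
  for (size_t i = 0; i < len; i++) {
    unsigned char b = 0;
    for (size_t j = 0; j < n; j++)
      b ^= bufs[j][i];
    out[i] = b;
  }
}
\end{lstlisting}

The same routine computes the parity in the first place: parity is the XOR of all data chunks, so reconstructing a lost chunk is just a second application of it.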

	
\subsubsection{Successful Read from Disk}
\begin{enumerate}[a)]
	\item raid5\_end\_read\_request() sets the R5\_UPTODATE flag and STRIPE\_HANDLE flag, and calls release\_stripe(). 
	\item release\_stripe() wakes up the raid5d thread, which again calls handle\_stripe(). handle\_stripe() calls raid\_run\_ops(), which in turn calls ops\_run\_biofill() to copy the previously read data into the buffer associated with the user-submitted bio. 
	\item Upon completion of this async copying, ops\_complete\_biofill() is called, which sets STRIPE\_HANDLE flag again.
	\item This time handle\_stripe() does nothing but clear the STRIPE\_HANDLE flag.
	\item Control returns to make\_request, which calls bio\_endio to complete the user-submitted bio. 
	\item Finally, make\_request calls release\_stripe(), which releases the current stripe and, if necessary, wakes up raid5d to deal with other stripes.
\end{enumerate}
	
\subsubsection{Reading from a Failed Disk}
\begin{enumerate}[a)]
	\item If the disk from which the read is requested has failed or is corrupt, the \emph{degraded} field in the mddev struct is incremented.
	\item If this number becomes greater than the maximum number of disk failures tolerated by this RAID configuration (one in the case of RAID5, two for RAID6), then raid5\_end\_read\_request() sets the R5\_ReadError flag and calls md\_error().
	\item md\_error() calls the error\_handler() function registered by the personality; for RAID5, it is the function error() in raid5.c.
	\item error() calls calc\_degraded() to re-calculate the number of failed disks in this array. It sets the Blocked and Faulty flags in the superblock of the failed disks.
\end{enumerate}


\section{Our Implementation}
We provide a set of APIs to map a logical device sector number to the physical disk number and the sector offset into that disk, and vice versa.
These APIs are defined in the file MapSector.c and can be used directly by a user program by linking against the library libsectormapper.a.

\subsection{Data Structures Used}
We define a structure \emph{sector\_info} to hold the disk number and sector number within this disk corresponding to the given logical device sector number.
In RAID4/5/6, each Logical sector maps to only one Physical sector, so this structure is sufficient to generate this mapping.
However, in RAID10 there are \emph{near\_copies} $\times$ \emph{far\_copies} copies of each logical sector. Hence we need an array of struct sector\_info to hold all the mapped physical sectors. 

\begin{lstlisting}
typedef struct sector_info {
  sector_t sector;
  int disknum;
} sector_info_t;

typedef struct raid10_sector_info {
  sector_info_t devs[0];
} r10_sector_info_t;
\end{lstlisting}
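Since devs is declared with the zero-length-array idiom, the caller must size a RAID10 result for all near\_copies $\times$ far\_copies entries. A sketch of such an allocation (alloc\_r10\_info() is our own helper; the real copy counts come from the array configuration):

\begin{lstlisting}
#include <stdlib.h>

typedef unsigned long long sector_t;

typedef struct sector_info {
  sector_t sector;
  int disknum;
} sector_info_t;

typedef struct raid10_sector_info {
  sector_info_t devs[0];  /* zero-length array, sized at allocation */
} r10_sector_info_t;

/* Allocate room for every copy of one logical sector. */
static r10_sector_info_t *alloc_r10_info(int near_copies, int far_copies)
{
  int ncopies = near_copies * far_copies;
  return malloc(sizeof(r10_sector_info_t) +
                ncopies * sizeof(sector_info_t));
}
\end{lstlisting}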

\subsection{Interfaces Provided}
We provide the below interfaces to generate the sector mappings.

\begin{lstlisting}
void raid456_find_phys(raid5_conf_t *conf, sector_t r_sector, sector_info_t *s_info);
sector_t raid456_find_virt(raid5_conf_t *conf, sector_info_t *s_info);

void raid10_find_phys(raid10_conf_t *conf, sector_t r_sector, r10_sector_info_t *s_info);
sector_t raid10_find_virt(raid10_conf_t *conf, sector_info_t *s_info);
\end{lstlisting}

The *\_find\_phys() function maps the logical sector number r\_sector into a $\langle$sector number, disk number$\rangle$ tuple and stores it in the s\_info structure passed as an argument.
The *\_find\_virt() function does the opposite: given a physical disk number and a sector offset in it, it computes the logical device sector number.
The first two functions generate this mapping for RAID4/5/6;
the next two do the same for RAID10.

\subsection{Testing}
The reverse mapping function has been tested in the following way.
\begin{itemize}
	\item Call *\_find\_phys() in a loop, starting with logical sector number 0 and going up to MAX\_LOGICAL\_SECTOR. 
	\item For each $\langle$disk number, sector number$\rangle$ tuple returned, call the reverse map function (*\_find\_virt()) and compare the returned sector number with the original logical sector number.
	\item Throw an error and exit in case of a mismatch.
\end{itemize}
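The same round-trip property can be demonstrated with a self-contained toy: a plain-striping forward map standing in for *\_find\_phys() and its inverse standing in for *\_find\_virt(). The names and the CHUNK and NDISKS values below are illustrative, not the library's.

\begin{lstlisting}
#define CHUNK  8            /* sectors per chunk (illustrative)  */
#define NDISKS 4            /* disks in the array (illustrative) */
#define MAX_LOGICAL_SECTOR 4096UL

/* Forward map: logical sector -> (disk, sector on disk). */
static void toy_find_phys(unsigned long logical,
                          unsigned *disk, unsigned long *sec)
{
  unsigned long chunk = logical / CHUNK;
  *disk = chunk % NDISKS;
  *sec  = (chunk / NDISKS) * CHUNK + logical % CHUNK;
}

/* Reverse map: (disk, sector on disk) -> logical sector. */
static unsigned long toy_find_virt(unsigned disk, unsigned long sec)
{
  unsigned long chunk = (sec / CHUNK) * NDISKS + disk;
  return chunk * CHUNK + sec % CHUNK;
}

/* Round-trip check over every logical sector; 0 means all match. */
static int check_roundtrip(void)
{
  for (unsigned long l = 0; l < MAX_LOGICAL_SECTOR; l++) {
    unsigned disk; unsigned long sec;
    toy_find_phys(l, &disk, &sec);
    if (toy_find_virt(disk, sec) != l)
      return -1;  /* mismatch: the maps disagree */
  }
  return 0;
}
\end{lstlisting}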
	
This test code is a part of \emph{TestMap.c}. The various test parameters, such as MAX\_LOGICAL\_SECTOR, chunk\_size, the number of RAID disks, etc., are defined and can be modified in the file TestMap.h.

\subsection{Compiling code}
The sector mapping functions are provided in a Static Library, libsectormapper.a. This library can be generated using the provided Makefile.
For testing the mapping functions, the test code can be compiled using the \emph{\$make test} command.


\section{Conclusion and Future Work}
In this project we have devised a mapping from a logical sector number to a $\langle$physical disk number, sector offset in this disk$\rangle$ tuple, and vice versa, in a RAID array.
We do not maintain any lookup tables to generate this mapping; mapping in both directions is done using a simple formula, so we see little further scope for optimization.
The current mapping only supports RAID4/5/6/10. However, if required, it can be extended to the additional RAID configurations supported by the Linux kernel, such as RAID0 and RAID1.

\begin{thebibliography}{1}

\bibitem{Katz} D. A. Patterson, G. Gibson and R. H. Katz, {\em A case for redundant arrays of inexpensive disks (RAID)} in Proceedings of the 1988 ACM SIGMOD international conference on Management of data, pp. 109-116, Chicago, Illinois, United States, 1988.

\bibitem{Remzi} Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau, {\em Redundant Arrays of Inexpensive Disks (RAIDs)} Chapter 37 of Operating Systems: Four Easy Pieces.

\bibitem{Disk Failure} Lakshmi Narayanan Bairavasundaram, {\em Characteristics, Impact and Tolerance of Partial Disk Failures} A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Sciences) at the University of Wisconsin-Madison 2008.

\bibitem{linux journal}  Gadi Oxman, Ingo Molnar and Miguel de Icaza, {\em The Linux RAID-1, 4, 5 Code} from Linux Journal Article 2391, Dec 01, 1997.

\bibitem{S2-RAID} Jiguang Wan, Jibin Wang, Qing Yang, and Changsheng Xie, {\em S2-RAID: A New RAID Architecture for Fast Data Recovery} msst, pp.1-9, 2010 IEEE 26th Symposium on Mass Storage Systems and Technologies, 2010.

\bibitem{DDF} SNIA, {\em Common RAID Disk Data Format Specification} March 27, 2009.

\bibitem{speed} Vivek Gite, {\em HowTo: Speed Up Linux Software Raid Building And Re-syncing} June 15, 2010.

\bibitem{mdadm} Bryan Smith, {\em mdadm (Linux Software RAID) Quick Reference} June 8 2010.

\bibitem{RAID Systems} Jimmy Persson, Gustav Evertsson, {\em RAID Systems} Research Paper, Blekinge Institute of Technology, Sweden, 2002-10-12.

\bibitem{Raid6} H. Peter Anvin, {\em The mathematics of RAID-6} http://kernel.org/pub/linux/kernel/people/hpa/raid6.pdf, 2011.

\bibitem{bio} Corbet, {\em Driver porting: the BIO structure} http://lwn.net/Articles/26404/, March 26, 2003. 

\bibitem{raid10} BXTra, {\em RAID 10 is not equal to RAID1+0} March 19, 2008.

\bibitem{LVM} Markus Gattol, {\em Logical Volume Manager} http://www.markus-gattol.name/ws/lvm.html, Last Updated 2012-05-17. 

\bibitem{notes} Suli Yang, {\em linux software raid5 code reading notes} 2010-11-28 

\bibitem{Mailing List} {\em Linux Kernel Mailing List} http://marc.info/?l=linux-raid.

\bibitem{road map} Neil Brown, {\em MD/RAID road-map 2011} 16 February 2011, the official MD/RAID development blog.

\bibitem{Failure trends} Eduardo Pinheiro, Wolf-Dietrich Weber and Luiz Andr\'e Barroso, {\em Failure Trends in a Large Disk Drive Population} FAST '07: 5th USENIX Conference on File and Storage Technologies.

\end{thebibliography}


\end{document}

