\section{Background}
\label{background}
RAID (Redundant Arrays of Inexpensive Disks) systems were introduced in \cite{originalRAIDpaper}.  In RAID levels 1 through 5, data is stored on a system of disks that is tolerant to one disk failure at a time.  This fault tolerance is achieved by replication in RAID 1, and by one or more dedicated or distributed parity disks in RAID levels 2 through 5.  The parity disks in RAID 2 are calculated using a Hamming code, while the parity disks or sections in RAID levels 3 through 5 are calculated using a simple exclusive-or operation.

RAID 6 is the next level of RAID.  In RAID 6, the system of disks must be tolerant to two disk failures rather than one.  One of the parity disks can still be calculated using the simple exclusive-or parity calculation, but for the system to tolerate \textit{any} two failures, the second parity disk must be calculated with a more advanced encoding than simple exclusive-or.  There are a variety of ways to calculate the parity disks for RAID 6 systems; erasure codes are one of them.

Erasure codes are used in storage systems in which failures must be tolerated. Disk array systems, data grids, collaborative and distributed storage applications, peer-to-peer networks, and archival storage are all systems in which failures can occur and where data loss can be catastrophic \cite{planktutorial2005}.  Several different entities use erasure codes in their fault-tolerant systems, including storage companies such as Cleversafe \cite{cleversafe} and DataDomain \cite{datadomain}; academic projects such as Oceanstore \cite{oceanstore} and Pergamum \cite{pergamum}; and major technology corporations such as Hewlett-Packard \cite{hewlettpackard}, IBM \cite{ibm1, ibm2}, and Microsoft \cite{microsoft1, microsoft2}.

% See senior thesis for citations for all of the above

While erasure codes apply to RAID 6 systems, they are more general: storage systems using erasure codes do not necessarily have exactly two parity disks.  In general, an erasure code takes $k$ data disks and encodes them onto $m$ parity disks (Figure~\ref{fig:encoding}).  In Maximum Distance Separable (MDS) codes, the resulting system of $k+m$ disks is tolerant to up to $m$ failures. The contents of the failed disks are recovered from any $k$ of the surviving disks through a decoding process (Figure~\ref{fig:decoding}). Erasure codes have a parameter called the word size, denoted $w$: each disk is partitioned into $w$-bit words, and the code operates on words rather than on individual bits.  The most general erasure code is Reed-Solomon coding \cite{planktutorial}.

\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{encoding.png}
\caption{Example in which the encoding process creates two parity disks from six data disks, resulting in a total system of eight disks.}
\label{fig:encoding}
\end{figure}

\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{decoding.png}
\caption{Example in which two of the eight disks are lost. The decoding process results in the reconstruction of the missing data disk. To regenerate the missing coding disk, another encoding process would have to take place.}
\label{fig:decoding}
\end{figure}


In Reed-Solomon coding, the computation that obtains the parity disks is a matrix-vector multiplication, where the vector is the data contained in the particular storage system (the data on the $k$ disks) and the matrix is an $m \times k$ Vandermonde matrix:

\begin{equation}
\begin{bmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & 2 & 3 & \cdots & k \\
\vdots & \vdots & \vdots & & \vdots \\
1 & 2^{m-1} & 3^{m-1} & \cdots & k^{m-1} 
\end{bmatrix}
\end{equation}

So, consider a system of $k$ data words (each a binary string of length $w$), denoted below by $d_i$ for $ 1 \le i \le k$.  The $m$ parity words (denoted $c_i$ for $1 \le i \le m$) are obtained by the following calculation:

\begin{equation}
\begin{bmatrix}
1 & 1 & 1 & \cdots & 1 \\
1 & 2 & 3 & \cdots & k \\
\vdots & \vdots & \vdots & & \vdots \\
1 & 2^{m-1} & 3^{m-1} & \cdots & k^{m-1} 
\end{bmatrix}
\times \begin{bmatrix}
d_1 \\
d_2 \\
\vdots \\
d_k
\end{bmatrix}
= 
\begin{bmatrix}
c_1 \\
c_2 \\
\vdots \\
c_m 
\end{bmatrix}
\end{equation}

The issue with this calculation is that each resulting $c_i$ must also be a binary string of length $w$.  We cannot simply use ordinary modular arithmetic here, because division is undefined in some modular arithmetic systems.  To guarantee that division is defined, a \textit{field} is required.  A \textit{field}, in algebra, is a set that is closed under addition and multiplication, in which every element has an additive inverse and every element except 0 has a multiplicative inverse.  The fields used in these calculations are Galois fields with $2^w$ elements, denoted $GF(2^w)$ \cite{planktutorial}.
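As a quick illustration of why a field is needed (a hypothetical check, not drawn from the cited work), consider the integers modulo 4:

```python
# Hypothetical check: arithmetic modulo 4 is not a field, because the
# nonzero element 2 has no multiplicative inverse.
inverses = {a: [b for b in range(4) if (a * b) % 4 == 1]
            for a in range(1, 4)}
# inverses[1] == [1] and inverses[3] == [3], but inverses[2] == [] --
# "division by 2" is undefined mod 4, so a genuine field is required.
```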

In Galois fields, addition and subtraction are done using bitwise exclusive-ors.  Multiplication and division are much more complicated.  Specifically, they use a primitive polynomial of degree $w$ whose coefficients are elements of $GF(2)$, the set $ \lbrace 0, 1 \rbrace$.  The field $GF(2^w)$ is formed by first choosing a primitive polynomial of degree $w$ over $GF(2)$, called $q(x)$.  The set is then generated as follows. The base set is 0, 1, and $x$.  The remaining elements are enumerated by multiplying the last element in the set by $x$; if the resulting polynomial has degree greater than or equal to $w$, it is reduced modulo $q(x)$, the primitive polynomial.  The enumeration ends when there are $2^w$ elements.  Each of these polynomials maps uniquely to a binary number of length $w$ as follows: the $i^{th}$ bit of the $w$-bit number is the coefficient of $x^i$ in the polynomial.
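The enumeration just described can be sketched in a few lines of Python.  This is an illustrative sketch only; the choice of $w = 4$ and the primitive polynomial $q(x) = x^4 + x + 1$ are assumptions, not parameters taken from any cited system:

```python
# Generate GF(2^4) by repeated multiplication by x, as described above.
# Polynomials are stored as integers: bit i holds the coefficient of x^i.
W, Q = 4, 0b10011   # assumed primitive polynomial q(x) = x^4 + x + 1

def xtimes(a):
    """Multiply a polynomial by x, reducing modulo q(x) if degree >= w."""
    a <<= 1
    if a >> W:
        a ^= Q
    return a

elements = [0, 1]        # the base elements 0 and 1
e = 1
while len(elements) < 2 ** W:
    e = xtimes(e)        # next element: previous element times x
    if e == 1:           # cycled back early: q(x) was not primitive
        break
    elements.append(e)   # x, x^2, x^3, x^4 = x + 1, ...
```

Because $q(x)$ is primitive, successive powers of $x$ cycle through all $2^w - 1$ nonzero elements before returning to 1, so the loop yields every element of the field exactly once.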

Multiplication is done by converting the binary numbers to polynomials, multiplying the polynomials modulo $q(x)$, and converting the result back to a binary number \cite{planktutorial}.  To speed up multiplication and division, table lookups are used.  For $w \le 8$, full tables containing the product or quotient, indexed by the two operands, are used. For $9 \le w \le 16$, logarithm and inverse-logarithm tables are used.  Two elements in $GF(2^w)$ can be multiplied by taking their logs, adding them together, and then finding the inverse log.  Similarly, to divide two elements in $GF(2^w)$, their logs can be subtracted, and then the inverse log can be taken to find the result.  For $w = 32$, there is a special case in which seven multiplication tables are used to obtain the result \cite{PlankPFGA, planktutorial}.
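The log-table approach can be sketched as follows.  This is a minimal illustration, not the implementation used in any cited system; the choice of $w = 8$ and the primitive polynomial $\mathtt{0x11d}$ (i.e.\ $x^8 + x^4 + x^3 + x^2 + 1$), as well as the helper names \texttt{gf\_mult} and \texttt{gf\_div}, are assumptions:

```python
# Build log and inverse-log tables for GF(2^8) by enumerating powers of x.
W, Q = 8, 0x11d          # assumed primitive polynomial for GF(2^8)

exp_table = [0] * 255    # inverse-log table: exp_table[i] = x^i
log_table = [0] * 256    # log table: log_table[a] = i such that x^i = a
e = 1
for i in range(2 ** W - 1):
    exp_table[i] = e
    log_table[e] = i
    e <<= 1              # multiply by x ...
    if e >> W:
        e ^= Q           # ... reducing modulo q(x)

def gf_mult(a, b):
    """a * b in GF(2^8): add the logs, then take the inverse log."""
    if a == 0 or b == 0:
        return 0
    return exp_table[(log_table[a] + log_table[b]) % 255]

def gf_div(a, b):
    """a / b in GF(2^8): subtract the logs, then take the inverse log."""
    if b == 0:
        raise ZeroDivisionError("division by zero in GF(2^8)")
    if a == 0:
        return 0
    return exp_table[(log_table[a] - log_table[b]) % 255]
```

The logs are added or subtracted modulo $2^w - 1$ because the nonzero elements form a cyclic multiplicative group of that order.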

All encoding and decoding operations are matrix-vector operations, so all of them require arithmetic in $GF(2^w)$.  Encoding performance is important in the initial setup of the system. The mean time to repair (MTTR) of the system after a failure always depends on decoding performance and, if one of the lost disks is a parity disk, on encoding performance as well.  As noted in \cite{originalRAIDpaper}, MTTR is a very important metric in these systems: the higher the MTTR, the more likely it is that additional disks will fail during the repair window, resulting in data loss.  So, the less time encoding and decoding take, the less time it takes to repair a disk, and the less permanent data loss the system suffers.
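Putting the pieces of this section together, the Vandermonde encoding can be sketched end to end.  This is an illustration only, not the implementation of any cited system; the field parameters ($w = 4$, $q(x) = x^4 + x + 1$) and the helper names \texttt{gf\_mul}, \texttt{gf\_pow}, and \texttt{encode} are assumptions:

```python
# Reed-Solomon encoding as a Vandermonde matrix-vector product over
# GF(2^4), with assumed primitive polynomial q(x) = x^4 + x + 1.
W, Q = 4, 0b10011

def gf_mul(a, b):
    """Naive polynomial multiplication modulo q(x) in GF(2^w)."""
    p = 0
    while b:
        if b & 1:
            p ^= a       # add (xor) this shifted copy of a
        b >>= 1
        a <<= 1
        if a >> W:
            a ^= Q       # reduce modulo q(x)
    return p

def gf_pow(base, n):
    """base^n in GF(2^w) by repeated multiplication."""
    result = 1
    for _ in range(n):
        result = gf_mul(result, base)
    return result

def encode(data, m):
    """Parity words c_i, where row i of the matrix holds j^(i-1)."""
    k = len(data)
    parity = []
    for row in range(m):
        c = 0
        for j in range(1, k + 1):
            c ^= gf_mul(gf_pow(j, row), data[j - 1])
        parity.append(c)
    return parity
```

Note that the first row of the Vandermonde matrix is all ones, so the first parity word reduces to the plain exclusive-or of the data words, matching the simple RAID parity described at the start of this section.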
