\section{Background}
\label{sec:background}

\subsection{GMP}
\label{ssec:gmp}
Before discussing the algorithms used by GMP it is important to understand how it stores large numbers. The method is based on the idea of breaking each number into a series of limbs, where a limb is the largest piece of a number that fits into a single machine word; on most machines this is 32 or 64 bits. These limbs are strung together to form large numbers. It is also important to note that limbs are stored least significant first (little endian limb order), which is the layout the \ttt{mpn\_*()} functions expect. The next section discusses the four multiplication algorithms used by GMP, namely: basecase, Karatsuba's, Toom-3, and FFT.

\subsection{Algorithms}
\label{ssec:algorithms}

\subsubsection{Base Case}
\label{sssec:base_case}
Basecase $N\times M$ multiplication is a rectangular set of cross-products, the same as long multiplication done by hand, and for that reason it is sometimes known as the schoolbook or grammar school method. \\

When squaring, the cross-products above and below the diagonal are identical, so a square can be done in about half the time of a multiply: a triangle of products below the diagonal is formed, doubled (left shifted by one bit), and then the products on the diagonal are added. This procedure, shown below, still runs in $O(N\times M)$ time.\\

\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
 \hline
\ & \bf u0 & \bf u1 & \bf u2 & \bf u3 & \bf u4\\
\hline
\bf u0 & d & \ & \ & \  & \ \\
\hline
\bf u1 & \ & d & \ & \  & \ \\
\hline
\bf u2 & \ & \ & d & \  & \ \\
\hline
\bf u3 & \ & \ & \ & d  & \ \\
\hline
\bf u4 & \ & \ & \ & \  & d \\
\hline
\end{tabular}
\end{center}

\subsubsection{Karatsuba's}
\label{sssec:karatsubas}
The basic step of Karatsuba's algorithm is a formula that allows us to compute the product of two large numbers $x$ and $y$ using three multiplications of smaller numbers, each with about half as many digits as $x$ or $y$, plus some additions and digit shifts. The inputs $x$ and $y$ are each treated as split into two parts of equal length (or with the most significant part one limb shorter if $N$ is odd). \\

\begin{center}
\begin{tabular}{|l|r|}
\hline
\bf high & \bf low \\
\hline
$x_1$ & $x_0$\\
\hline
$y_1$ & $y_0$ \\
\hline
\end{tabular}
\end{center}

Let $b$ be the power of 2 at which the split occurs, i.e.\ if $x_0$ is $k$ limbs ($y_0$ the same) then $b=2^{(k*mp\_bits\_per\_limb)}$. With that, $x=x_1*b+x_0$ and $y=y_1*b+y_0$, and the following holds, \\

\begin{center}
  $x*y = (b^2+b)*x_1*y_1 - b*(x_1-x_0)*(y_1-y_0) + (b+1)*x_0*y_0$
\end{center}

This formula means doing only three multiplies of $(N/2)\times (N/2)$ limbs, whereas a basecase multiply of $N\times N$ limbs is equivalent to four multiplies of $(N/2)\times (N/2)$. The factors $(b^2+b)$ etc represent the positions where the three products must be added. \\
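Expanding the three products and collecting powers of $b$ confirms that the formula recovers the full product:

\[
(b^2+b)*x_1*y_1 - b*(x_1-x_0)*(y_1-y_0) + (b+1)*x_0*y_0
\]
\[
= b^2*x_1*y_1 + b*(x_1*y_0+x_0*y_1) + x_0*y_0 = (x_1*b+x_0)*(y_1*b+y_0) = x*y
\]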

\begin{center}
\includegraphics[scale=0.5]{karatsuba.png}
\end{center}

The term $(x_1-x_0)*(y_1-y_0)$ is best calculated as an absolute value, with the sign used to choose whether to add or subtract. Notice that the sum $\mathrm{high}(x_0*y_0)+\mathrm{low}(x_1*y_1)$ occurs twice, so it's possible to do only $5*k$ limb additions rather than $6*k$, but in GMP the extra function call overheads outweigh the saving. Squaring is similar to multiplying, but with $x=y$ the formula reduces to an equivalent with three squares, \\

\begin{center}
$x^2 = (b^2+b)*x_1^2 - b*(x_1-x_0)^2 + (b+1)*x_0^2$
\end{center}

The final result is accumulated from those three squares in the same way as for the three multiplies above. The middle term $(x_1-x_0)^2$ is now always positive. A similar formula for both multiplying and squaring can be constructed with a middle term $(x_1+x_0)*(y_1+y_0)$, but those sums can exceed $k$ limbs, leading to more carry handling and additions than the form above. \\

\subsubsection{Toom-3}
\label{sssec:toom3}
The Karatsuba formula is the simplest case of a general approach to splitting inputs that leads to both Toom and FFT algorithms. The 3-way form used in GMP is described here. The operands are each considered split into 3 pieces of equal length (or the most significant part 1 or 2 limbs shorter than the other two). \\
% use packages: array
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
high & & low\\ \hline
$x_{2}$ & $x_{1}$ & $x_{0}$ \\ \hline
$y_{2}$ & $y_{1}$ & $y_{0}$ \\ \hline
\end{tabular}
\end{center}

These parts are treated as the coefficients of two polynomials 
\[
 X(t)=x_2t^2 + x_1t + x_0
\]
\[
 Y(t)=y_2t^2 + y_1t + y_0
\]
Let $b$ equal the power of 2 which is the size of the $x_0$, $x_1$, $y_0$ and $y_1$ pieces, ie. if they're $k$ limbs each then $b=2^{(k*mp\_bits\_per\_limb)}$. With this $x=X(b)$ and $y=Y(b)$. 

Let the polynomial $W(t)=X(t)*Y(t)$ and suppose its coefficients are
\[
     W(t) = w_4*t^4 + w_3*t^3 + w_2*t^2 + w_1*t + w_0
\]
The $w_i$ are going to be determined, and when they are they'll give the final result using $w=W(b)$, since $x*y=X(b)*Y(b)=W(b)$. The coefficients will be roughly $b^2$ each, and the final $W(b)$ will be an addition like,

\begin{center}
\begin{tabular}{cccccc}
\multicolumn{1}{l}{\bf high} & \ & \ & \ & \ & \multicolumn{1}{r}{\bf low}\\
\cline{1-2}
\multicolumn{2}{|c|}{$w_4$} & \ & \ & \ & \ \\
\cline{1-3}
\ & \multicolumn{2}{|c|}{$w_3$} & \ & \ & \ \\
\cline{2-4}
\ & \ & \multicolumn{2}{|c|}{$w_2$} & \ & \ \\
\cline{3-5}
\ & \ & \ & \multicolumn{2}{|c|}{$w_1$} & \ \\
\cline{4-6}
\ & \ & \ & \ & \multicolumn{2}{|c|}{$w_0$} \\
\cline{5-6}
\end{tabular}
\end{center}

Each $w_i$ is roughly $2k$ limbs wide, so successive coefficients overlap by about $k$ limbs when added at their respective powers of $b$.

The $w_i$ coefficients could be formed by a simple set of cross products, like $w_4=x_2*y_2$, $w_3=x_2*y_1+x_1*y_2$, $w_2=x_2*y_0+x_1*y_1+x_0*y_2$ etc, but this would need all nine $x_i*y_j$ for $i,j=0,1,2$, and would be equivalent merely to a basecase multiply. Instead the following approach is used. $X(t)$ and $Y(t)$ are evaluated and multiplied at 5 points, giving values of $W(t)$ at those points. In GMP the following points are used, \\

\begin{center}
% use packages: array
\begin{tabular}{ll}
Point & Value \\ 
$t=0$ & $x_0y_0$, which gives $w_0$ immediately \\ 
$t=1$ & $(x_2+x_1+x_0)(y_2+y_1+y_0)$ \\ 
$t=-1$ & $(x_2-x_1+x_0)(y_2-y_1+y_0)$  \\ 
$t=2$ & $(4x_2+2x_1+x_0)(4y_2+2y_1+y_0)$ \\ 
$t=\infty$ & $x_2y_2$, which gives $w_4$ immediately\\
\end{tabular}
\end{center}

At $t=-1$ the values can be negative; that's handled using absolute values and tracking the sign separately. At $t=\infty$ the value is actually $\lim_{t \rightarrow \infty}X(t)Y(t)/t^4$, but it's much easier to think of it as simply $x_2y_2$ giving $w_4$ immediately (just as $x_0y_0$ at $t=0$ gives $w_0$ immediately). Substituting each point into $W(t)=w_4t^4+\cdots+w_0$ gives a linear combination of the $w_i$ coefficients, and the values of those combinations have just been calculated. \\

\begin{center}
% use packages: array
\begin{tabular}{l c r}
$W(0)$ & =& $w_0$\\ 
$W(1)$ & =&$w_4 + w_3 + w_2 + w_1 + w_0$ \\ 
$W(-1)$ &=&$w_4 - w_3 + w_2 - w_1 + w_0$  \\ 
$W(2)$ & =&$16w_4 + 8w_3 + 4w_2 + 2w_1 + w_0$ \\ 
$W(\infty)$ & =& $w_4$\\
\end{tabular}
\end{center}

This is a set of five equations in five unknowns, and some elementary linear algebra quickly isolates each $w_i$. This involves adding or subtracting one $W(t)$ value from another, a couple of divisions by powers of 2, and one division by 3, the latter using the special \ttt{mpn\_divexact\_by3} function.\\

The conversion of the $W(t)$ values to the coefficients is an interpolation: a polynomial of degree 4 like $W(t)$ is uniquely determined by its values at 5 distinct points. The points are arbitrary and can be chosen to make the linear equations come out with a convenient set of steps for quickly isolating the $w_i$.\\

Squaring follows the same procedure as multiplication, except there's only one $X(t)$: it's evaluated at the 5 points and those values are squared to give values of $W(t)$. The interpolation is then identical, and in fact the same \ttt{toom3\_interpolate} subroutine is used for both squaring and multiplying.\\

Toom-3 is asymptotically $O(N^{1.465})$, the exponent being $\log(5)/\log(3)$, representing 5 recursive multiplies of $1/3$ the original size each. This is an improvement over Karatsuba at $O(N^{1.585})$, though Toom-3 does more work in the evaluation and interpolation and so only realizes its advantage above a certain size.\\

Near the crossover between Toom-3 and Karatsuba there's generally a range of sizes where the difference between the two is small. \ttt{MUL\_TOOM3\_THRESHOLD} is a somewhat arbitrary point in that range, and successive runs of the tune program can give different values due to small variations in measuring. A graph of time versus size for the two shows the effect.\\

At the fairly small sizes where the Toom-3 thresholds occur, it's worth remembering that the asymptotic behaviour of Karatsuba and Toom-3 can't be expected to make accurate predictions, due to the big influence of various overheads and the fact that only a few recursions of each are performed. Even at large sizes there's a good chance machine-dependent effects like cache architecture will make actual performance deviate from what might be predicted.\\

The formula given for the Karatsuba algorithm has an equivalent for Toom-3 involving only five multiplies, but this would be complicated and unenlightening.

\subsubsection{FFT}
\label{sssec:fft}
At large to very large sizes a Fermat-style FFT multiplication is used, following Sch\"{o}nhage and Strassen. A brief description of the form used in GMP is given here. The multiplication done is $x*y \bmod 2^N+1$ for a given $N$. A full product $x*y$ is obtained by choosing $N \geq \mathrm{bits}(x)+\mathrm{bits}(y)$ and padding $x$ and $y$ with high zero limbs. The modular product is the native form for the algorithm, so padding to get a full product is unavoidable.\\

The algorithm follows a split, evaluate, pointwise multiply, interpolate and combine similar to that described above for Karatsuba and Toom-3. A $k$ parameter controls the split, with an FFT-k splitting into $2^k$ pieces of $M=N/2^k$ bits each. $N$ must be a multiple of $(2^k)*mp\_bits\_per\_limb$ so the split falls on limb boundaries, avoiding bit shifts in the split and combine stages.\\

The evaluations, pointwise multiplications, and interpolation, are all done modulo $2^{N^{\prime}}+1$ where $N^{\prime}$ is $2M+k+3$ rounded up to a multiple of $2^k$ and of $mp\_bits\_per\_limb$. The results of interpolation will be the following negacyclic convolution of the input pieces, and the choice of $N^{\prime}$ ensures these sums aren't truncated.\\
\[
 w[n]=\sum_{i+j=b*2^k+n,b=0,1 }{(-1)^b * x[i] * y[j]}
\]

The points used for the evaluation are $g^i$ for $i=0$ to $2^k-1$, where $g=2^{2N^{\prime}/2^k}$. $g$ is a $2^k$th root of unity mod $2^{N^{\prime}}+1$, which produces the necessary cancellations at the interpolation stage, and it's also a power of 2, so the fast Fourier transforms used for the evaluation and interpolation do only shifts, adds and negations.\\

The pointwise multiplications are done modulo $2^{N^{\prime}}+1$ and either recurse into a further FFT or use a plain multiplication (Toom-3, Karatsuba or basecase), whichever is optimal at the size $N^{\prime}$. The interpolation is an inverse fast Fourier transform. The resulting set of sums of $x[i]*y[j]$ is added at appropriate offsets to give the final result.\\

Squaring is the same, but $x$ is the only input, so it's one transform at the evaluate stage and the pointwise multiplies are squares. The interpolation is the same. For a mod $2^N+1$ product, an FFT-$k$ is an $O(N^{k/(k-1)})$ algorithm, the exponent representing $2^k$ recursed modular multiplies each $1/2^{k-1}$ the size of the original. Each successive $k$ is an asymptotic improvement, but overheads mean each is only faster at bigger and bigger sizes. In the code, \ttt{MUL\_FFT\_TABLE} and \ttt{SQR\_FFT\_TABLE} are the thresholds where each $k$ is used. Each new $k$ effectively swaps some multiplying for some shifts, adds and overheads.\\

A mod $2^N+1$ product can be formed with a normal $N\times N\rightarrow 2N$ bit multiply plus a subtraction, so an FFT and Toom-3 etc.\ can be compared directly. A $k=4$ FFT at $O(N^{1.333})$ can be expected to be the first faster than Toom-3 at $O(N^{1.465})$. In practice this is what's found, with \ttt{MUL\_FFT\_MODF\_THRESHOLD} and \ttt{SQR\_FFT\_MODF\_THRESHOLD} being between 300 and 1000 limbs, depending on the CPU. So far it's been found that only very large FFTs recurse into pointwise multiplies above these sizes.\\

When an FFT is to give a full product, the change of $N$ to $2N$ doesn't alter the theoretical complexity for a given $k$, but for the purposes of considering where an FFT might first be used it can be assumed the FFT is recursing into a normal multiply, on which basis it's doing $2^k$ recursed multiplies each $1/2^{k-2}$ the size of the inputs, making it $O(N^{k/(k-2)})$. This would mean $k=7$ at $O(N^{1.4})$ would be the first FFT faster than Toom-3. In practice \ttt{MUL\_FFT\_THRESHOLD} and \ttt{SQR\_FFT\_THRESHOLD} have been found to be in the $k=8$ range, somewhere between 3000 and 10000 limbs.\\

The way $N$ is split into $2^k$ pieces and then $2M+k+3$ is rounded up to a multiple of $2^k$ and of $mp\_bits\_per\_limb$ means that when $2^k \geq mp\_bits\_per\_limb$ the effective $N$ is a multiple of $2^{2k-1}$ bits. The $+k+3$ means some values of $N$ just under such a multiple will be rounded up to the next. The complexity calculations above assume a favourable size is used, meaning one which isn't padded through rounding, and it's also assumed that the extra $+k+3$ bits are negligible at typical FFT sizes.\\

The practical effect of the $2^{2k-1}$ constraint is to introduce a step effect into measured speeds. For example $k=8$ will round $N$ up to a multiple of 32768 bits, so for a 32-bit limb there'll be groups of sizes 512 limbs wide for which \ttt{mpn\_mul\_n} runs at the same speed. Similarly for $k=9$ groups of 2048 limbs, $k=10$ groups of 8192 limbs, etc. In practice each $k$ has been found to be used at quite small multiples of its size constraint, so the step effect is quite noticeable in a time versus size graph.\\

The threshold determinations currently measure at the mid-points of size steps, but this is sub-optimal, since at the start of a new step it can happen that it's better to go back to the previous $k$ for a while. Something more sophisticated for \ttt{MUL\_FFT\_TABLE} and \ttt{SQR\_FFT\_TABLE} will be needed. \\