\chapter{Nonuniform Fast Fourier Transform (NUFFT)} \label{chap:3}

Unlike classical MRI sampling methods that uniformly sample the spatial frequency domain, SS-PARSE samples $k$-space nonuniformly. In order to use FFTs in the reconstruction algorithm, a non-equally spaced frequency-domain trajectory must be rounded to an equally spaced grid, and oversampling can be used to reduce the error introduced by this rounding. Analysis of the reconstruction errors of the fast algorithm shows that trajectory gridding is their major source. A grid with higher resolution can reduce the errors, but it makes the reconstruction process much slower. An algorithm that evaluates Fourier samples both accurately and quickly is therefore desirable. In this chapter, we discuss the implementation of fast algorithms for evaluating Fourier transforms (FTs). We focus on 1-D FTs; because the Fourier kernel is separable, the 1-D algorithms extend directly to multidimensional Fourier samples.

\section{Theory of NUFFT}

\subsection{Problem Statement}
Let $\mathbf{x} = \{ x_{-N/2},\cdots,x_{N/2-1} \}$ be a finite sequence of complex numbers. The discrete Fourier transform (DFT) is defined by the formula:
\begin{align}
    X_{k} = \sum _{n=-N/2}^{N/2-1} x_{n} e^{-\imath \frac{2\pi}{N}kn} \label{eq:DFT}
\end{align}
where $N$ is a positive even integer, $k=-N/2,\cdots,N/2-1$. The frequency components $2 \pi k / N$ are equally spaced, so \eqref{eq:DFT} can be evaluated by an FFT, which requires $O(N \log N)$ operations.
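As a sanity check of this centered index convention, \eqref{eq:DFT} can be evaluated directly and compared against an FFT. The sketch below (plain NumPy; the function name is ours) uses fftshift/ifftshift to map between the ordering $n,k=-N/2,\cdots,N/2-1$ and the standard FFT ordering.

```python
import numpy as np

def centered_dft(x):
    """Direct evaluation of X_k = sum_n x_n e^{-i 2 pi k n / N}
    for n, k = -N/2, ..., N/2-1 (N even), as in the definition above."""
    N = len(x)
    n = np.arange(-N // 2, N // 2)
    k = n.reshape(-1, 1)
    return (x * np.exp(-2j * np.pi * k * n / N)).sum(axis=1)

# The centered indexing maps to a standard FFT via ifftshift/fftshift,
# giving the same result in O(N log N) operations.
rng = np.random.default_rng(0)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
X_fft = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(x)))
assert np.allclose(centered_dft(x), X_fft)
```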

Now, we extend this definition to nonuniformly spaced frequency components. $\bbomega = \{ \omega_{0},\cdots,\omega_{K-1}\}$ is a finite sequence of real numbers, with $\omega_{k} \in [-\pi , \pi]$ for $k=0,\cdots,K-1$. The Fourier transform of the finite sequence $\mathbf{x}$ at the frequencies $\bbomega$ is given by:
\begin{align}
    X_{k} = \sum _{n=-N/2}^{N/2-1} x_{n} e^{-\imath n \omega_{k}} \label{eq:NUFFT}
\end{align}

The direct evaluation of \eqref{eq:NUFFT} requires $O(NK)$ operations. Our goal is to design an algorithm whose computational complexity is similar to that of the FFT while meeting a required accuracy.

\subsection{Basic Concepts}
\cite{dutt1993fft} proved that any function of the form $e^{\imath cx}$ can be accurately represented on any finite interval on the real line using a small number of terms of the form $e^{bx^{2}} \cdot e^{\imath kx}$, and this number of terms is independent of the value of $c$.
\begin{theorem} \textup{\cite{dutt1993fft}}
 Let $b>\frac{1}{2}$, $c,d>0$ be real numbers, and let $m \geq 2$, $q \geq 4b\pi$ be integers. Then, for any $x \in [-d,d]$,
 \begin{align}
 \left| e^{\imath cx} - e^{b \left( x\pi / md \right) ^{2}} \cdot
 \sum _{k=[cmd/ \pi ] -q/2} ^{[cmd/ \pi ] + q/2} \rho _{k} e^{ \imath kx \pi /md} \right|
 < e^{-b \pi ^{2} \left( 1-1/m^{2} \right)} \cdot (4b+9) \label{eq:NUFFT-Theorem}
 \end{align}
 where
 \begin{align}
 \rho _{k} = \frac{1}{2\sqrt{b \pi}} e^{-(c-k)^{2}/4b}
 \end{align}
\end{theorem}

This theorem can be written in a general form that is consistent with \eqref{eq:NUFFT}:
\begin{align}
    \left| e^{\imath \omega _k n} - s_{n} ^{-1} \sum _{l=-\lfloor (J-1)/2 \rfloor} ^{\lceil (J-1)/2 \rceil}
    g_{l}(\omega _k) e^{ \imath 2 \pi (v_k +l) n /mN} \right|
    < \varepsilon \label{eq:Dutt-General}
\end{align}
where $s_{n}^{-1}$ is a function of $n$, $g_{l}(\omega _k)$ is a function of $\omega _k$, $v_k = [\omega _k mN / 2\pi]$ with $[\cdot]$ denoting rounding to the nearest integer, $J \ll K$ is an integer, and $\varepsilon$ is a nonnegative real number.

With this approximation, one can evaluate \eqref{eq:NUFFT} by an FFT followed by interpolation in the transform domain, in two steps:
\begin{compactenum}[1.]
	\item Compute an $mN$-point FFT of the weighted $x_n$.
    \begin{align*}
        Y_k = \sum _{n=-N/2}^{N/2-1} s_n^{-1} x_n e^{-\imath \frac{2\pi}{mN}kn}
    \end{align*}
    \item Approximate each $X_k$ by a linear combination of $J$ neighboring $Y$'s.
    \begin{align*}
        X_k \approx \sum _{l=-\lfloor (J-1)/2 \rfloor} ^{\lceil (J-1)/2 \rceil}
        g_{l}(\omega _k) Y_{v_k + l}
    \end{align*}
\end{compactenum}
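The two steps above can be sketched in code. Since the optimized $\mathbf{s}$ and $\mathbf{g}$ are only derived later in this chapter, this illustration substitutes a simple Gaussian scaling/interpolation kernel (a Greengard--Lee-style choice); the width heuristic \texttt{tau} and all names are our assumptions, not the chapter's optimized design.

```python
import numpy as np

def nufft_type2(x, omega, m=2, J=10):
    """Approximate X_k = sum_n x_n e^{-i n omega_k} by (1) an mN-point FFT
    of the scaled sequence and (2) a J-term interpolation in the transform
    domain.  Gaussian s_n and g_l are used purely for illustration."""
    N = len(x)
    mN = m * N
    n = np.arange(-N // 2, N // 2)
    tau = np.pi * (J / 2) / (N ** 2 * m * (m - 0.5))   # Gaussian width heuristic
    s = np.exp(-tau * n ** 2)                          # scaling factors s_n
    # Step 1: mN-point FFT of the weighted sequence
    xs = np.zeros(mN, dtype=complex)
    xs[n % mN] = x / s
    Y = np.fft.fft(xs)                                 # Y_k = sum_n (x_n/s_n) e^{-i 2 pi k n / mN}
    # Step 2: combine J neighbouring Y's for each output frequency
    dw = 2 * np.pi / mN                                # oversampled grid spacing
    ls = np.arange(-((J - 1) // 2), J // 2 + 1)
    X = np.empty(len(omega), dtype=complex)
    for k, w in enumerate(omega):
        v = int(np.round(w / dw))                      # nearest grid index v_k
        g = np.exp(-(w - (v + ls) * dw) ** 2 / (4 * tau)) * dw / (2 * np.sqrt(np.pi * tau))
        X[k] = np.sum(g * Y[(v + ls) % mN])
    return X
```

Compared against direct $O(NK)$ evaluation, the relative error of this stand-in design should be small (roughly $10^{-3}$ or better for these parameters); the least-squares coefficients derived below improve on it.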

The computational complexity of this algorithm is $O(mN \log N +JK)$. If we choose a constant scaling factor $\mathbf{s} = \{s_{-N/2},\cdots, s_{N/2-1} \}$ and $J=mN$, this method computes $X_k$ exactly, but there is no computational gain. The performance of this approximation is determined by $\mathbf{s}$ and the weighting coefficients $\mathbf{g}(\omega _k)$.

\cite{Fessler2003,Liu1998,Jacob2009} proposed different methods to compute $\mathbf{s}$ and $\mathbf{g}$ based on different criteria. All of the methods first optimize $\mathbf{g}$, then compute $\mathbf{s}$ using the optimized $\mathbf{g}$. We present an algorithm that simultaneously optimizes $\mathbf{s}$ and $\mathbf{g}$ by least-squares approximation.

If the Fourier transform for a given set of frequencies is evaluated only once, the NUFFT scheme is not worthwhile because of the cost of optimizing the scaling factors and weighting coefficients. In some applications, such as iterative reconstruction of MRI, the same set of frequencies is used in every iteration. The same $k$-space trajectory (set of spatial frequencies) may also be reused across different MRI experiments. In both scenarios, the additional cost of the NUFFT precomputations can be afforded. Because of the periodicity of $g_l(\omega _k)$, we only need to precompute $g_l$ for several $\omega _k$ within one period. For a new set of frequencies, the precomputed $g_l$ are interpolated to obtain the $g_l$ for the new frequencies. Linear interpolation for $K$ frequencies requires $2JK$ operations.

\section{Least-Squares Optimization}\label{sec:3B}
\eqref{eq:NUFFT-Theorem} can be formulated as a minimization problem:
\begin{align}
    \underset{\mathbf{S},\mathbf{G}} {\operatorname{argmin}}
    \left\| \mathbf{B} - \mathbf{S}^{-1} \mathbf{AG} \right\| _{2}^{2} \label{eq:Dutt-Matrix}
\end{align}
where $\mathbf{A}$ is an $N \times J$ matrix, $\mathbf{G}$ is a $J \times K$ matrix, $\mathbf{S}$ is an $N \times N$ diagonal matrix, and $\mathbf{B}$ is an $N \times K$ matrix. These are defined by:
\begin{align*}
    A_{nl} &= e^{\imath 2 \pi n l /mN} \\
    S_{nn} &= s_{n}\\
    B_{nk} &= e^{\imath \omega _k n} \cdot e^{-\imath 2 \pi \left( v_k - \lfloor (J-1)/2 \rfloor \right) n/mN}
\end{align*}
where $n = -N/2,\cdots, N/2-1$, $l= 0, \cdots, J-1$, $k = 0,\cdots,K-1$, and $K$ is the size of $\bbomega$.

\cite{Fessler2003} proved that this minimization is equivalent to minimizing the worst-case absolute error over all possible sequences $x_n$ with the same norm.

\eqref{eq:Dutt-Matrix} is a nonlinear problem. It can be approximated by a linear minimization problem:
\begin{align}
    \underset{\mathbf{S},\mathbf{G}} {\operatorname{argmin}}
    \left\| \mathbf{SB} - \mathbf{AG} \right\| _{2}^{2} \label{eq:Linear-Dutt-Matrix}
\end{align}
Obviously, $\mathbf{0}$ is a solution to this problem. We impose $s_{0} = 1$ to exclude the trivial zero solution. It can be shown that \eqref{eq:Linear-Dutt-Matrix} is then equivalent to a standard linear minimization problem.

We write $\mathbf{G}$ as a column vector $\mathbf{g}$ with $G_{m,n} = g_{(n-1)J+m}$. Let $\mathbf{b}_{i}$ denote the $i$th column of $\mathbf{B}$. $\mathbf{db}_{i}$ is a diagonal matrix with the elements of $\mathbf{b}_{i}$ on its diagonal. We define a sparse matrix as:

\begin{align}
    \mathbf{F} &=
    \begin{pmatrix}
        -\mathbf{db}_{1} & \mathbf{A} & & & \mathbf{0} \\
        -\mathbf{db}_{2} & & \mathbf{A} & & \\
        \vdots & & & \ddots & \\
        -\mathbf{db}_{K} & \mathbf{0} & & & \mathbf{A}
    \end{pmatrix} \label{eq:S-sparse}
\end{align}
We define the column vector $\mathbf{s}$ composed of the $s_{n}$. Then (the sign does not affect the minimized norm):
\begin{align}
    \mathbf{AG} - \mathbf{SB} &= \mathbf{F}
    \begin{pmatrix}
        \mathbf{s} \\ \mathbf{g}
    \end{pmatrix}
\end{align}

Let $s_{0} = 1$, and let $-\mathbf{y}$ be the $(N/2+1)$th column of $\mathbf{F}$, i.e., the column multiplying $s_{0}$. We remove this column from $\mathbf{F}$ to form a new matrix $\mathbb{A}$. The column vector $\mathbf{x}$ is the stack of $\mathbf{s}$ and $\mathbf{g}$ with $s_{0}$ removed. Now, \eqref{eq:Linear-Dutt-Matrix} is converted to a standard linear minimization problem:
\begin{align}
    \underset{\mathbf{x}} {\operatorname{argmin}}
    \left\| \mathbb{A} \mathbf{x} - \mathbf{y} \right\| ^{2}_{2} \label{eq:Std-Linear}
\end{align}
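A direct, dense sketch of this conversion: build $\mathbf{F}$ as in \eqref{eq:S-sparse}, move the $s_0$ column to the right-hand side, and solve \eqref{eq:Std-Linear} with a least-squares routine. Dense matrices and small sizes are assumed for clarity; a practical implementation would exploit the sparsity of $\mathbf{F}$.

```python
import numpy as np

def joint_ls(A, B):
    """Jointly solve for s (with s_0 = 1) and G by forming the sparse
    system F of eq. (S-sparse) and moving the s_0 column to the
    right-hand side, as in eq. (Std-Linear)."""
    N, J = A.shape
    K = B.shape[1]
    F = np.zeros((N * K, N + J * K), dtype=complex)
    for i in range(K):
        F[i * N:(i + 1) * N, :N] = -np.diag(B[:, i])         # -db_i block
        F[i * N:(i + 1) * N, N + i * J:N + (i + 1) * J] = A  # block-diagonal A
    i0 = N // 2                      # column of s_0 under n = -N/2..N/2-1
    y = -F[:, i0]                    # s_0 = 1: its column moves to the RHS
    Ared = np.delete(F, i0, axis=1)
    sol, *_ = np.linalg.lstsq(Ared, y, rcond=None)
    s = np.concatenate([sol[:i0], [1.0], sol[i0:N - 1]])
    G = sol[N - 1:].reshape(K, J).T  # unstack g column by column
    return s, G
```

By construction, the jointly optimized pair can only lower the residual $\| \mathbf{SB} - \mathbf{AG} \|$ relative to fixing $\mathbf{s} = \mathbf{1}$ and solving for $\mathbf{G}$ alone.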
\begin{figure}[h!]
    \centering
    \includegraphics[width=4.5in]{C:/Tex/Thesis/Chapter3/Figures/NRMS_Dir.eps}
    \caption{NRMSE of Direct Solving}
    \label{fig:Fig-NRMS-Dir}
\end{figure}

We can further improve the NUFFT accuracy by alternating optimization. We initialize $\mathbf{s}$ with the result of the direct solve, a Kaiser-Bessel window, or a Gaussian function, then solve for $\mathbf{G}$ by:
\begin{align}
    \mathbf{G} &= \left( \mathbf{A}^{H} \mathbf{A}
    \right) ^{-1} \mathbf{A}^{H} \mathbf{SB} \label{eq:G-Solver}
\end{align}
We use $\mathbf{G}$ to optimize $\mathbf{s}$:
\begin{align}
    \mathbf{s} &= \left( \mathbf{C}^{H} \mathbf{C}
    \right) ^{-1} \mathbf{C}^{H} \mathbf{Z} \label{eq:s-Solver}
\end{align}
where $\mathbf{Z}$ is the $NK \times 1$ vector formed by stacking the columns of $\mathbf{AG}$, and $\mathbf{C} = \left[ \mathbf{db}_{1}, \cdots, \mathbf{db}_{K} \right]^{T}$ is the $NK \times N$ matrix formed by stacking the $\mathbf{db}_{i}$.

If necessary, we can use the updated $\mathbf{s}$ to find the optimal $\mathbf{G}$.
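The alternating refinement can be sketched as follows. Each half-step is an ordinary linear least-squares solve, and the final renormalization restores $s_0 = 1$ without changing the approximation $s_n^{-1}(\mathbf{AG})_{nk}$. Sizes, initialization, and the iteration count are illustrative assumptions.

```python
import numpy as np

def alternate(A, B, s, iters=2):
    """Refine s and G by alternating the two least-squares solves
    (eq. G-Solver and eq. s-Solver).  Illustrative dense implementation."""
    N, K = B.shape
    for _ in range(iters):
        # G = (A^H A)^{-1} A^H S B, one column per frequency
        G, *_ = np.linalg.lstsq(A, np.diag(s) @ B, rcond=None)
        # s = (C^H C)^{-1} C^H Z with Z = vec(AG), C = [db_1; ...; db_K]
        Z = (A @ G).flatten(order='F')
        C = np.vstack([np.diag(B[:, i]) for i in range(K)])
        s, *_ = np.linalg.lstsq(C, Z, rcond=None)
    c = s[N // 2]        # restore s_0 = 1; scaling s and G by the same
    return s / c, G / c  # factor leaves s_n^{-1} (A G) unchanged
```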
\begin{figure}[h!]
    \centering
    \includegraphics[width=4.5in]{C:/Tex/Thesis/Chapter3/Figures/MaxErr_Comp.eps}
    \caption{Maximum Error}
    \label{fig:Fig-MaxErr-Comp}
\end{figure}
\begin{figure}[h!]
    \centering
    \includegraphics[width=4.5in]{C:/Tex/Thesis/Chapter3/Figures/NRMS_Comp.eps}
    \caption{NRMS Error}
    \label{fig:Fig-NRMS-Comp}
\end{figure}

\section{Interpolations}
In Section \ref{sec:3B}, we investigated the approximation for a specific set of frequencies. That per-set optimization gives the highest accuracy. In some applications, one can instead use precomputed $g_l(\omega _k)$ to find the $g_l$ for given frequencies by interpolation. In this section, we analyze the errors of two interpolation schemes, linear and cubic convolution.

Let $f$ be a real number with $|f| \le 1/m$, and $\omega = 2 \pi f/N$.
\begin{align}
    e^{\imath 2 \pi n f /N} &\approx
    s_{n} ^{-1} \sum _{l=-\lfloor (J-1)/2 \rfloor} ^{\lceil (J-1)/2 \rceil}
    g_{l}(f)e^{ \imath 2 \pi n l/mN} \label{eq:FT-kernel}
\end{align}

It can be shown that $g_{l} (f)$ is a periodic function with period $1/m$:
\begin{align}
    e^{\imath 2 \pi n \left( f + k/m \right) /N} &\approx
    s_{n} ^{-1} e^{ \imath 2 \pi nk /mN}
    \sum _{l=-\lfloor (J-1)/2 \rfloor} ^{\lceil (J-1)/2 \rceil}
    g_{l}(f)e^{ \imath 2 \pi n l/mN} \nonumber \\
    &= s_{n} ^{-1} \sum _{l=-\lfloor (J-1)/2 \rfloor} ^{\lceil (J-1)/2 \rceil}
    g_{l}(f)e^{ \imath 2 \pi n \left( l + k \right)/mN} \label{eq:period}
\end{align}
where $k$ is an integer. With this property, we only need to study the NUFFT within one period.

We use the precomputed $g_l(f_k)$ for a set of frequencies $f_k$ to compute $g_l(x)$ of any other frequency sets by interpolation:
\begin{align}
    g_{l}(x) = \sum _{k} g_{l} (f_k)\, u\!\left(\frac{x}{h}-k\right) \label{eq:g-interpolation}
\end{align}
where $u(x)$ is the interpolation kernel.

There are several candidates for the interpolation kernel. We use the linear spline and cubic convolution as examples. The linear kernel \eqref{eq:linear-intrp} and cubic kernel \eqref{eq:3rd-Conv-intrp} are plotted in Figures \ref{fig:fig-LinearKernel} and \ref{fig:fig-ConvKernel}, respectively.
\begin{align}
    u(x) &=
    \begin{cases}
        1-|x|, & ~~~ |x| < 1 \\
        0, & ~~~ \text{otherwise}
    \end{cases} \label{eq:linear-intrp}
\end{align}
\begin{align}
    u(x) &=
    \begin{cases}
        \frac{3}{2} |x| ^3 - \frac{5}{2} |x| ^2 + 1, & 0 \le |x| < 1 \\
        -\frac{1}{2} |x| ^3 + \frac{5}{2} |x| ^2 - 4|x| + 2, & 1 \le |x| < 2 \\
        0, & 2 \le |x|
    \end{cases} \label{eq:3rd-Conv-intrp}
\end{align}
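Both kernels are short to state in code. The cubic piece is the Keys ($a=-\tfrac12$) convolution kernel, whose outer segment has constant term $2$ so that the kernel is continuous at $|x|=1$ and $|x|=2$. The sketch below (names ours) also includes a table-interpolation helper corresponding to \eqref{eq:g-interpolation}, written for a uniform grid $f_k$ so that $u(x/h-k)=u((x-f_k)/h)$.

```python
import numpy as np

def u_linear(x):
    """Linear interpolation kernel, eq. (linear-intrp)."""
    x = np.abs(np.asarray(x, dtype=float))
    return np.where(x < 1, 1 - x, 0.0)

def u_cubic(x):
    """Cubic convolution (Keys, a = -1/2) kernel, eq. (3rd-Conv-intrp)."""
    x = np.abs(np.asarray(x, dtype=float))
    return np.where(x < 1, 1.5 * x**3 - 2.5 * x**2 + 1,
                    np.where(x < 2, -0.5 * x**3 + 2.5 * x**2 - 4 * x + 2, 0.0))

def interp_g(x, f, g, kernel):
    """Interpolate precomputed samples g at uniform nodes f (spacing h)
    to the point x, as in eq. (g-interpolation)."""
    h = f[1] - f[0]
    return np.sum(g * kernel((x - f) / h))
```

At the nodes both kernels interpolate exactly ($u(0)=1$ and $u$ vanishes at nonzero integers), and in the grid interior the cubic kernel reproduces quadratic functions, which is the source of its higher accuracy.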
\begin{figure}[h!]
    \centering
    \includegraphics[width=4.5in]{C:/Tex/Thesis/Chapter3/Figures/LinearKernel.eps}
    \caption{Linear Interpolation Kernel}
    \label{fig:fig-LinearKernel}
\end{figure}
\begin{figure}[h!]
    \centering
    \includegraphics[width=4.5in]{C:/Tex/Thesis/Chapter3/Figures/ConvKernel.eps}
    \caption{Cubic Convolution Interpolation Kernel}
    \label{fig:fig-ConvKernel}
\end{figure}

Let $K$ be a positive integer, $J$ be an odd number, and $h=1/mK$. We choose the following frequency set:
\begin{align*}
    \mathbf{f} = \left\{f_{k}= -\frac{1}{2m}+kh, ~~k = -1,0,\cdots, K+1 \right\}
\end{align*}
so there are $K+3$ elements in $\mathbf{f}$. We compute the interpolator coefficients $\mathbf{G}(\mathbf{f})$ and the corresponding cubic B-spline coefficients $c_{l}(k)$. For any frequency $x \in \left[ -1/2m,1/2m \right]$, we use cubic convolution interpolation to compute $g_{l}(x)$.

For even $J$, we choose a different $\mathbf{f}$ to make $g_l(x)$ continuous:
\begin{align*}
    \mathbf{f} = \left\{f_{k}= kh, ~~k = -1,0,\cdots, K+1 \right\}
\end{align*}
In both cases, we set $v_k=0$ for all $k$.

For linear interpolation, we choose a slightly different $\mathbf{f}$:
\begin{align*}
    \mathbf{f} = \left\{f_{k}= -\frac{1}{2m}+kh, ~~k = 0,\cdots, K \right\}
\end{align*}
for odd $J$, and
\begin{align*}
    \mathbf{f} = \left\{f_{k}= kh, ~~k = 0,\cdots, K \right\}
\end{align*}
for even $J$. All of $v_k$'s are set to $0$.
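Putting the periodicity \eqref{eq:period} and the precomputed tables together, a lookup for an arbitrary frequency reduces it to one period and interpolates the table (linear interpolation, odd-$J$ grid shown). The table layout and names below are illustrative assumptions.

```python
import numpy as np

def lookup_g(omega, N, m, f_table, g_table):
    """Reduce omega to one period of g_l (eq. period) and linearly
    interpolate the precomputed table.  g_table holds one row per l and
    one column per table node f_table[i]."""
    f = omega * N / (2 * np.pi)            # omega = 2 pi f / N
    j = int(np.round(f * m))               # nearest multiple of the period 1/m
    r = f - j / m                          # residual in [-1/(2m), 1/(2m)]
    h = f_table[1] - f_table[0]
    t = (r - f_table[0]) / h               # fractional position in the table
    i = min(max(int(np.floor(t)), 0), len(f_table) - 2)
    a = t - i
    g = (1 - a) * g_table[:, i] + a * g_table[:, i + 1]
    return j, g                            # grid offset v_k and weights g_l
```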

\section{Error Analysis of the Interpolation}
The normalized root mean square error (NRMSE) is computed by:
\begin{align}
    \varepsilon ^{2} &= \frac{m}{N}
    \sum _{n=-N/2} ^{N/2-1} \int _{\mathbf{x}}
    \left| e^{\imath 2 \pi n x/N} - s^{-1}_{n}
    \sum _{l=-\lfloor (J-1)/2 \rfloor }
    ^{ \lceil (J-1)/2 \rceil }
    g_{l} (x) e^{\imath 2 \pi n l /mN}
    \right| ^{2} dx \notag \\
    &= \frac{m}{N}
    \sum _{n=-N/2} ^{N/2-1}
    \sum _{k}
    \int _{f_k} ^{f_{k+1}}
    \left| e^{\imath 2 \pi n x/N} - s^{-1}_{n}
    \sum _{l=-\lfloor (J-1)/2 \rfloor }
    ^{ \lceil (J-1)/2 \rceil }
    g_{l} (x) e^{\imath 2 \pi n l /mN}
    \right| ^{2} dx \label{eq:Cont-NRMS}
\end{align}
For odd $J$, the integral interval $\mathbf{x}$ is $[-1/2m,1/2m]$; for even $J$, $\mathbf{x}$ is $[0,1/m]$.
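The NRMSE integral can be estimated numerically by averaging the squared kernel error over $n$ and over sampled frequencies in one period. In the self-contained sketch below, a Gaussian design (our assumption) stands in for the optimized $s_n$ and $g_l(x)$; any NUFFT design can be plugged in through `g_of`.

```python
import numpy as np

def nrmse(s, g_of, N, m, J, xs):
    """Estimate eq. (Cont-NRMS): average the squared error of
    e^{i 2 pi n x/N} ~ s_n^{-1} sum_l g_l(x) e^{i 2 pi n l/(mN)}
    over n and over the sampled frequencies xs in one period."""
    n = np.arange(-N // 2, N // 2)
    ls = np.arange(-((J - 1) // 2), J // 2 + 1)
    E = np.exp(1j * 2 * np.pi * np.outer(n, ls) / (m * N))   # N x J basis
    err2 = 0.0
    for x in xs:
        exact = np.exp(1j * 2 * np.pi * n * x / N)
        err2 += np.mean(np.abs(exact - (E @ g_of(x)) / s) ** 2)
    return np.sqrt(err2 / len(xs))

# Gaussian stand-in design (odd J, v = 0, x in [-1/(2m), 1/(2m)])
N, m, J = 32, 2, 9
tau = np.pi * (J / 2) / (N ** 2 * m * (m - 0.5))
n = np.arange(-N // 2, N // 2)
s = np.exp(-tau * n ** 2)
ls = np.arange(-((J - 1) // 2), J // 2 + 1)
g_gauss = lambda x: (np.exp(-(2 * np.pi * x / N - 2 * np.pi * ls / (m * N)) ** 2 / (4 * tau))
                     * np.sqrt(np.pi / tau) / (m * N))
xs = np.linspace(-1 / (2 * m), 1 / (2 * m), 33)
err = nrmse(s, g_gauss, N, m, J, xs)   # small for this design (well below 1e-2)
```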

The error performance of linear and cubic convolution interpolations are illustrated in Figures \ref{fig:fig-Linear-NRMS}, \ref{fig:fig-Linear-JK}, \ref{fig:fig-Conv-NRMS} and \ref{fig:fig-Conv-JK} with $m=2$. Figure \ref{fig:fig-Conv-Linear-Comp} compares the performance of linear and cubic convolution interpolations.

For $J \ge 8$, linear interpolation cannot approach the accuracy of the LS optimization because the performance is dominated by the accuracy of the linear interpolator. For cubic convolution interpolation, the error of the LS optimization plays the major role, so the performance can approach the attainable limit as more frequencies are precomputed.

\begin{figure}[h!]
    \centering
    \includegraphics[width=4.3in]{C:/Tex/Thesis/Chapter3/Figures/LinearNRMS.eps}
    \caption{NRMSE versus $J$ for different numbers of precomputed frequencies, linear interpolation. The ``Exact'' curve is the NRMSE for the precomputed frequencies themselves, i.e., the accuracy limit of the method described in this chapter.}
    \label{fig:fig-Linear-NRMS}
\end{figure}
\begin{figure}[h!]
    \centering
    \includegraphics[width=4.3in]{C:/Tex/Thesis/Chapter3/Figures/LinearJK.eps}
    \caption{NRMSE versus the number of precomputed frequencies for different $J$, linear interpolation.}
    \label{fig:fig-Linear-JK}
\end{figure}

\begin{figure}[h!]
    \centering
    \includegraphics[width=4.3in]{C:/Tex/Thesis/Chapter3/Figures/ConvNRMS.eps}
    \caption{NRMSE versus $J$ for different numbers of precomputed frequencies, cubic convolution interpolation. The ``Exact'' curve is the NRMSE for the precomputed frequencies themselves, i.e., the accuracy limit of the method described in this chapter.}
    \label{fig:fig-Conv-NRMS}
\end{figure}
\begin{figure}[h!]
    \centering
    \includegraphics[width=4.3in]{C:/Tex/Thesis/Chapter3/Figures/ConvJK.eps}
    \caption{NRMSE versus the number of precomputed frequencies for different $J$, cubic convolution interpolation.}
    \label{fig:fig-Conv-JK}
\end{figure}
\begin{figure}[h!]
    \centering
    \includegraphics[width=4.3in]{C:/Tex/Thesis/Chapter3/Figures/ConvLinearComp.eps}
    \caption{Comparison of the performance of linear and cubic convolution interpolations. The ``Exact'' curve is the accuracy limit of the method described in this chapter.}
    \label{fig:fig-Conv-Linear-Comp}
\end{figure}

\section{Inverse Fourier Transform}
We use \textit{inverse} to denote Fourier transforms that map samples at nonuniform frequencies to outputs on a uniform grid. The inverse FT is defined as:
\begin{align}
    x_n &= \sum _{k=0}^{K-1} X_k e^{\imath n \omega _k} \label{eq:inv-FT}
\end{align}

An inverse FT can be approximated by
\begin{align}
    x_n &\approx \sum _{k=0} ^{K-1} X_k s_{n} ^{-1}
    \sum _{l=-\lfloor (J-1)/2 \rfloor} ^{\lceil (J-1)/2 \rceil}
    g_{l}(\omega _k)e^{ \imath 2 \pi n (v_k + l) /mN} \notag \\
    &= s_{n} ^{-1} \sum _{j=-mN/2} ^{mN/2-1}
    \underbrace{\sum _{k,l:\, v_k + l \equiv j} g_{l}(\omega _k) X_k} _{B_j}
    \, e^{ \imath 2 \pi n j /mN}
    \label{eq:iNUFFT}
\end{align}
where the inner sum collects all pairs $(k,l)$ whose grid index $v_k + l$ equals $j$ modulo $mN$.

The procedure of the inverse NUFFT is summarized as follows:
\begin{compactenum}[1.]
	\item Spread the weighted samples onto the oversampled grid. This requires $JK$ operations.
    \begin{align}
        B_{v_k + l} \leftarrow B_{v_k + l} + g_{l}(\omega _k) X_k,
        ~~ k = 0,\cdots,K-1, ~~ l = -\lfloor (J-1)/2 \rfloor, \cdots, \lceil (J-1)/2 \rceil \notag
    \end{align}
    \item Compute an $mN$-point FFT.
    \begin{align}
        y_n &= \sum _{j=-mN/2} ^{mN/2-1} B_j e^{ \imath 2 \pi n j /mN} \notag
    \end{align}
    \item Weight $y_n$ by $s_n ^{-1}$.
    \begin{align}
        x_n &= s_n ^{-1} y_n \notag
    \end{align}
\end{compactenum}
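In code, the inverse NUFFT is a spreading (gridding) loop followed by an oversampled FFT and the scaling correction. As before, a Gaussian kernel stands in for the optimized coefficients; the width heuristic and names are our assumptions.

```python
import numpy as np

def nufft_type1(X, omega, N, m=2, J=10):
    """Approximate x_n = sum_k X_k e^{i n omega_k}: spread each weighted
    sample onto the oversampled grid, apply an mN-point FFT, then undo the
    scaling.  Gaussian s_n and g_l are stand-ins for the optimized ones."""
    mN = m * N
    n = np.arange(-N // 2, N // 2)
    tau = np.pi * (J / 2) / (N ** 2 * m * (m - 0.5))
    s = np.exp(-tau * n ** 2)
    dw = 2 * np.pi / mN
    ls = np.arange(-((J - 1) // 2), J // 2 + 1)
    B = np.zeros(mN, dtype=complex)
    for Xk, w in zip(X, omega):
        v = int(np.round(w / dw))
        g = np.exp(-(w - (v + ls) * dw) ** 2 / (4 * tau)) * dw / (2 * np.sqrt(np.pi * tau))
        B[(v + ls) % mN] += g * Xk           # spreading: B_{v_k+l} += g_l(w_k) X_k
    y = np.fft.ifft(B) * mN                  # y_n = sum_j B_j e^{+i 2 pi n j / mN}
    return y[n % mN] / s                     # keep |n| < N/2 and weight by s_n^{-1}
```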

\section{Discussion}
For a given set of $K$ frequencies, computing $g_l$ and $s_n$ for an $N$-point transform requires $O(JNK)$ operations, so this precomputation is economical only when the same set of frequencies is used repeatedly. Once $g_l$ and $s_n$ are available, evaluating the Fourier transform requires $O(mN \log N) + O(JK)$ operations.

With precomputed $g_l$ and $s_n$, one can use linear or cubic convolution interpolation to evaluate $g_l$ for the given $K$ frequencies. Linear interpolation requires $2JK$ operations to compute $g_l$ for the given frequencies, so the total computational complexity is $O(mN \log N) + O(3JK)$. This method requires fewer computations than cubic convolution interpolation. Its disadvantage is that its NRMSE plateaus at about $10^{-7}$ because of interpolation error. For hardware implementation, the storage of the precomputed data for a larger set of frequencies is also a potential problem. Linear interpolation is advantageous for $J \le 5$, where about 100 precomputed frequencies are enough to reach the best attainable performance.

The cubic convolution interpolation requires more computation: the interpolation needs $4JK$ operations, so the total is $O(mN \log N) + O(4JK)$. The cubic algorithm provides much higher accuracy than the linear algorithm. It also requires much less storage because far fewer frequencies need to be precomputed. This feature is useful for some hardware implementations. It can also benefit software implementations with ample memory, because memory access can be the bottleneck.

