\documentclass[9pt,twocolumn,twoside]{osajnl}
%% Please use 11pt if submitting to AOP
% \documentclass[11pt,twocolumn,twoside]{osajnl}

\journal{optica} % Choose journal (ao, aop, josaa, josab, ol, optica, pr)
\usepackage{subfigure}
\usepackage{graphicx}

\usepackage{amsmath}
\usepackage{cleveref}
\DeclareMathOperator{\sinc}{sinc}
\DeclareMathOperator{\somb}{somb}
% See template introduction for guidance on setting shortarticle option
\setboolean{shortarticle}{false}
% true = letter / tutorial
% false = research / review article
% (depending on journal).
\usepackage[justification=centering,
            format=plain]{caption}

\title{0.3\% Nyquist Imaging via Deep Learning and Pink Noise Patterns}

\author[2,3,*]{Haotian Song}
\author[1,2,*]{Xiaoyu Nie}
\author[1]{Xingchen Zhao}
\author[4]{Zheng Li}
\author[1,$\dagger$]{Tao Peng}
\author[1,5,6]{Marlan O. Scully}
\affil[1]{Department of Physics and Astronomy, Texas A\&M University, College Station, Texas, 77843, USA}
\affil[2]{Physics Department, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China}
\affil[3]{School of Physics \& Astronomy, University of Manchester, Manchester M13 9PL, United Kingdom}
\affil[4]{State key Laboratory for Mesoscopic Physics, School of Physics, Peking University, Beijing 100871, China}
\affil[5]{Baylor Research and Innovation Collaborative, Baylor University, Waco, Texas 76706, USA}
\affil[6]{Princeton University, Princeton, New Jersey 08544, USA}

\affil[*]{These authors contributed equally to this work.}
\affil[$\dagger$]{taopeng@tamu.edu}

%% To be edited by editor
% \dates{Compiled \today}

%\ociscodes{(140.3490) Lasers, distributed feedback; (060.2420) Fibers, polarization-maintaining;(060.3735) Fiber Bragg gratings.}

%% To be edited by editor
% \doi{\url{http://dx.doi.org/10.1364/XX.XX.XXXXXX}}

\begin{abstract}
We present a novel framework for computational ghost imaging based on deep learning (CGIDL) and pink noise patterns, which reduces the required sampling number by a factor of more than 20 compared with previous CGIDL work. The deep neural network, which learns the sensing model and improves the quality of the image reconstruction, is trained solely on simulation results, so no experiments are needed to generate the training inputs. To demonstrate the level of sub-Nyquist sampling achieved, we present detailed comparisons, at extremely low sampling rates, between conventional computational ghost imaging results and images reconstructed via deep learning with pink noise and with white noise patterns. This method has great potential in applications that require low sampling rates and fast reconstruction.
 
\end{abstract}

\setboolean{displaycopyright}{true}

\begin{document}
\maketitle

\section{Introduction}
Ghost imaging (GI) \cite{Pittman1995Optical,bennink2002two,chen2009lensless}, a novel single-pixel imaging technique, applies the second-order correlation algorithm to image reconstruction. GI possesses advanced features such as immunity to interference from media that distort the signal or submerge it in a strong noise background. To further improve and simplify the framework, computational ghost imaging (CGI) \cite{bromberg2009ghost,shapiro2008computational} was proposed, in which the reference light path used to record the speckles is replaced by directly loading pre-generated patterns onto a spatial light modulator, such as a digital micro-mirror device (DMD). These well-designed patterns correspond to a set of intensities collected by a single-element detector. Correlating the sequentially recorded intensities with their corresponding patterns then yields the image. CGI has many non-conventional applications such as wide-spectrum imaging \cite{radwell2014single,Yang2017Ghost}, depth mapping \cite{sun2016single}, and sub-Rayleigh imaging \cite{kuplickiHigh,Chen:2017aa,sprigg2016super}.
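The second-order correlation reconstruction described above can be sketched in a few lines of Python. This is a minimal toy illustration, not the experimental pipeline: the object, the pattern size, and the pattern statistics are all our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary object on a small 16 x 16 grid (illustrative only).
obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0

n_patterns = 20000                          # number of illumination patterns
patterns = rng.random((n_patterns, 16, 16))  # white-noise speckle patterns

# Bucket (single-pixel) signal: total light transmitted through the object.
bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))

# Second-order correlation: G(x, y) = <B_i P_i(x, y)> - <B><P(x, y)>
G = (np.tensordot(bucket, patterns, axes=(0, 0)) / n_patterns
     - bucket.mean() * patterns.mean(axis=0))
```

With a sampling number far above the Nyquist limit, as here, `G` closely resembles the object; the rest of this Letter concerns what happens when `n_patterns` is pushed far below the pixel count.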

However, CGI requires a large number of samplings to reconstruct a high-quality image; otherwise the signal is submerged under correlation fluctuations. To suppress the environmental noise and correlation fluctuations, the minimum number of samplings must be proportional to the total number of pixels in the patterns applied on the DMD, i.e., the total number of pixels in the reconstructed image \cite{Lyu2017Deep}. This is the Nyquist limit of sampling \cite{cook1986stochastic,tropp2009beyond}; when the sampling number falls short of this requirement, the image has a very low signal-to-noise ratio. Many schemes have been proposed to increase the speed of CGI \cite{Wang:16,Jiang:19,Xu:18} and to decrease the number of samplings (sub-Nyquist imaging). Here, we focus on sub-Nyquist ghost imaging. For example, an orthonormalization method was proposed to suppress the noise and improve the image quality under a limited sampling number \cite{Bin2018Orthonormalization}. Other work focuses on combining compressive sensing \cite{Magana2013Compressive,Katz2009Compressive,Yi2019Compressive} and deep learning (DL) \cite{Lyu2017Deep,Fei2019Learning,Wu:2020,He2018Ghost}. Specifically, GIDL has been demonstrated with a minimum sampling ratio of 5\% of the Nyquist limit, but it uses experimental images to train the DNN, which limits its applications and prevents quick reconstruction \cite{He2018Ghost}. Meanwhile, training the DNN without experimental CGI data has been proposed \cite{shimobaba2018computational}, but the minimum ratio of the Nyquist limit is still 0.12\%. In short, retrieving high-quality images at an extremely low ratio of the Nyquist limit remains a challenge for CGI systems.

Another recent work applied pink noise patterns, which have positive cross-correlations, to CGI \cite{nie2020noisefree}. By adding the cross-correlation to the auto-correlation in the CGI results, this method gives good image quality even in extremely noisy environments or under pattern distortion, where the traditional white noise or pseudo-thermal light (generated by a rotating ground glass) methods fail. This advantage provides two opportunities. First, owing to its noise-free feature, the simulation results obtained with pink noise patterns should be indistinguishable from experimental results. In contrast, simulations with conventional methods differ strongly from experimental reconstructions, which would require experimental retraining whenever the environment changes. Thus, if we apply pink noise patterns to DL, there is no need to train the DNN with a large number of experimental inputs; the training data can be obtained from simulation without worrying about environmental noise. Second, the positive cross-correlation substantially enhances the signal, which benefits the DNN during training and makes it easier for the DNN to recognize the shape of the CGI results during testing and in CGIDL.
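The exact pattern-generation recipe of Ref. \cite{nie2020noisefree} is not reproduced here, but the generic idea of a pink (1/$f$) noise pattern can be sketched by weighting a white-noise spectrum by $1/f$ before transforming back to real space. The exponent and normalization below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def pink_noise_pattern(shape, exponent=1.0):
    # Weight a white-noise spectrum by 1/f**exponent, then transform back.
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0  # leave the DC component unweighted to avoid dividing by zero
    spectrum = np.fft.fft2(rng.standard_normal(shape)) / f**exponent
    pattern = np.real(np.fft.ifft2(spectrum))
    # Normalize to [0, 1] so the pattern could be thresholded for a DMD.
    return (pattern - pattern.min()) / (pattern.max() - pattern.min())

pattern = pink_noise_pattern((58, 96))
```

The resulting pattern is dominated by low spatial frequencies, which is the origin of the positive cross-correlation between different pink noise patterns.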

In this Letter, we aim to minimize the necessary sampling number and improve the signal-to-noise ratio (SNR) by combining DL with pink noise patterns. To demonstrate the advantage of our method in a CGI system, we compare the original results, reconstructed with the commonly used white noise patterns and with pink noise patterns loaded on the DMD, to the results further enhanced by DL, at several sampling ratios. The experimental setup is shown in Fig. \ref{fig:test}. A CW laser illuminates the DMD, onto which the noise patterns are loaded. The pattern generated by the DMD is then projected onto the object. In our experiment, the size of the noise patterns is 540 by 980 pixels, where each independently addressable element consists of 10 by 10 micro-mirrors. Each pixel-mirror of the DMD measures $16\ \mu m\times 16\ \mu m$.

\section{Deep learning}
\begin{figure}[h!]
    \centering\includegraphics[width=\linewidth]{DNN}
    \caption{Schematic of DNN. Our DNN consists of four convolution layers, one image input layer, one fully connected layer (in yellow), ReLU and BNL (in red).}
    \label{fig:DNN}
\end{figure}

In the proposed scheme, as shown in Fig. \ref{fig:DNN}, reconstructing the CGIDL image is a two-step process. First, we construct the network framework to be trained later. Specifically, we use a deep neural network (DNN) model with four convolution layers, one image input layer, and one image output layer. A rectified linear unit (ReLU) layer and a batch normalization layer (BNL) are added between successive convolution layers. The BNL mitigates internal covariate shift during the training process and speeds up the training of the DNN \cite{ioffe2015batch}. The ReLU layer applies a threshold operation to each element of its input \cite{10.5555/3104322.3104425}. To match the size of the training pictures, both the input layer and the output layer are set to $58\times96$. Training employs the stochastic gradient descent with momentum optimizer (SGDMO), which reduces oscillation by using momentum. The parameter vector is updated according to Eq. \ref{eq:1}, which describes the update at each iteration.
\begin{equation}
    \label{eq:1}
    \theta_{\ell+1}=\theta_{\ell}-\alpha \nabla E\left(\theta_{\ell}\right)+\gamma\left(\theta_{\ell}-\theta_{\ell-1}\right)
\end{equation}
Here, $\ell$ is the iteration number, $\alpha>0$ is the learning rate, $\theta$ is the parameter vector, and $E(\theta)$ is the loss function. The third term of the equation is the distinctive feature of the SGDMO: it is analogous to momentum, and $\gamma$ determines the contribution of the previous gradient step to the current iteration \cite{murphy2012machine}. At the end of the DNN, a dropout layer is used to prevent overfitting \cite{JMLR:v15:srivastava14a}. After the DNN is established, a large number of training images are reconstructed by the CGI algorithm mentioned above. The reconstructed training images and the original training images are then fed to the DNN model as inputs and target outputs, respectively. Here we use a set of 10000 handwritten digits of $28\times28$ pixels from the MNIST handwritten digit database \cite{deng2012mnist} as training images. We resize the images from $28\times28$ to $58\times96$ and normalize them so that we can test smaller sampling ratios. The maximum number of epochs is set to 600, giving 46800 training iterations. The program is implemented in MATLAB R2019a Update 5 (9.6.0.1174912, 64-bit), and the DNN is implemented with the Deep Learning Toolbox. An NVIDIA GTX1050 GPU is used to accelerate the computation.
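The update rule of Eq. \ref{eq:1} can be checked on a toy problem in a few lines of Python. This is a sketch only: a quadratic loss stands in for the actual network loss, and the learning rate and momentum values are illustrative, not those used in our training.

```python
import numpy as np

def sgdm_step(theta, theta_prev, grad, alpha=0.01, gamma=0.9):
    # Eq. (1): theta_{l+1} = theta_l - alpha * grad_E(theta_l)
    #                        + gamma * (theta_l - theta_{l-1})
    return theta - alpha * grad + gamma * (theta - theta_prev)

# Toy quadratic loss E(theta) = |theta|^2 / 2, whose gradient is theta itself.
theta_prev = np.array([1.0, -2.0])
theta = theta_prev.copy()
for _ in range(300):
    theta, theta_prev = sgdm_step(theta, theta_prev, grad=theta), theta
```

On this toy loss the iterates spiral toward the minimum at the origin; the momentum term damps the oscillations that plain stochastic gradient descent would exhibit at a comparable effective step size.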

The next step, after obtaining a trained DNN, is to test the DNN in simulation and then to retrieve CGI results in the experiment. The schematic is shown in Fig. \ref{fig:test}. In the testing part, the CGI algorithm generates reconstructed testing images from testing images that are not in the group of training images. The trained DNN is fed with the reconstructed testing images and outputs the CGIDL results. By comparing the CGIDL results with the testing images, we can measure the quality of the trained DNN. If the DNN works well, it can be used for retrieving CGI images in the experiment.
\begin{figure}[h!]
    \centering\includegraphics[width=\linewidth]{test}
    \caption{The flow chart of CGIDL. The DNN model is trained in the training process (in red), and the test images are used in the testing stage (in orange). The experimental process (bottom left) provides the input of the trained DNN, and the experimental DL process is shown at the bottom right (in purple).}
    \label{fig:test}
\end{figure}

In the CGI process, the SNR of the images is proportional to the measurement ratio. Therefore, the ratio between the number of illumination patterns $N_{pattern}$ and the (average) number of speckles in each of these patterns $N_{pixel}$ \cite{PhysRevLett.105.219902,wang2015gerchberg}, namely,
\begin{equation}
    \beta=N_{pattern}/N_{pixel}
\end{equation}
is used in our work to characterize the SNR. Below we present results for several values of $\beta$, obtained with pink noise and with conventional white noise patterns.
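For the $58\times96$ pixel images used in this work, the sampling ratios considered below translate into pattern counts as follows (a simple illustration; counts are rounded to the nearest integer):

```python
# Pattern counts implied by beta = N_pattern / N_pixel for 58 x 96 images.
n_pixel = 58 * 96   # pixels per reconstructed image (5568)
for beta in (0.05, 0.01, 0.005, 0.003):
    n_pattern = round(beta * n_pixel)
    print(f"beta = {beta}: {n_pattern} patterns")
```

At $\beta=0.003$ this amounts to only about 17 illumination patterns for a 5568-pixel image, which makes concrete how far below the Nyquist limit the reconstruction operates.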

\section{Simulation}
\begin{figure}[h!]
\centering\includegraphics[width=\linewidth]{simu}
\caption{Main simulation results. For pink noise, we select $\beta=0.05, 0.01, 0.005, 0.003$. For white noise at these small values of $\beta$, the results are completely smeared under both CGI and CGIDL; we therefore present $\beta=1, 0.1, 0.05, 0.01$ for white noise for comparison.}
\label{fig:simu}
\end{figure}


To obtain the trained DNN, we carried out simulations using 10000 training images at different values of $\beta$. The patterns used in our simulations were white noise and pink noise, since white noise patterns are universally used in CGI while pink noise patterns can improve the image quality significantly. We set $\beta$ to 1, 0.1, 0.05, and 0.01 for white noise, and to 0.05, 0.01, 0.005, and 0.003 for pink noise. Several testing images (digits '1', '2', and '3'), entirely distinct from the training images, were then chosen as examples in our simulation. These images have $28\times28$ pixels and are resized to $58\times96$ by widening and amplification. As shown in Fig. \ref{fig:simu}, both the CGI and CGIDL images become clearer and brighter with increasing $\beta$. For smaller $\beta$, the CGIDL results remain recognizable while the CGI results do not, which reflects the advantage of DL. Meanwhile, the testing images using white noise patterns become unrecognizable at $\beta=0.05$; we can optimize them using DL, similar to the work in \cite{Lyu2017Deep}, but when $\beta$ reaches 0.01, both the CGI and CGIDL results are completely destroyed because of the lack of averaging during correlation. In contrast, with pink noise the $\beta$ of CGIDL can reach 0.003 while the digits remain distinguishable, which means we realize 0.3\% Nyquist reconstruction via CGIDL and pink noise patterns in simulation.


\section{Experiment}
To further demonstrate the advantage and applicability of our method, we performed experiments.

\section{Conclusion}


\section{Funding.}
Air Force Office of Scientific Research (Award No. FA9550-20-1-0366 DEF), Office of Naval Research (Award No. N00014-20-1-2184), Robert A. Welch Foundation (Grant No. A-1261), \textcolor{red}{National Science Foundation (Grant No. PHY-2013771), King Abdulaziz City for Science and Technology (KACST)}.
\section{Acknowledgement.}


%\noindent\textbf{Disclosures.} The authors declare no conflicts of interest.



\bibliography{sample}

\end{document}