\section{Introduction}\label{sec:intro}

Deep learning (DL) applications have achieved great success in many fields 
such as natural language processing (NLP), image recognition\cite{krizhevsky2012imagenet}, and recommendation systems. 
%
The basic processing of DL consists of two separate phases: 
training deep neural network (DNN) models on known input datasets,
and using the trained models to perform inference on new input data.
%
To meet the demand for high model accuracy and generality,
the sizes of both input training datasets and DNN models  
have been growing drastically in recent years\cite{sze2017efficient}. 
%
For instance,
the YouTube-8M dataset is about 1.5 terabytes (TB)\cite{youtube8m}, 
the Google Open Images dataset has a total size of roughly 18 TB\cite{openimages},
and the OpenAI GPT-3 language model 
has 175 billion parameters and over 45 TB of input training data\cite{gpt-3}.

The large sizes of datasets and models
necessitate distributed training in GPU clusters.
In each epoch, the entire dataset is divided into mini-batches,
each of which is processed in one iteration by the GPU cluster's worker nodes in parallel,
with appropriate synchronization mechanisms 
such as parameter server (PS)\cite{li2014scaling} or ring all-reduce (RAR)\cite{abadi2016tensorflow,bao2020preemptive}.
%
The mini-batches\cite{sivathanu2019astra} of the input training dataset 
are read by the worker nodes (on demand) from a shared remote data store.

Storage has become a severe bottleneck 
for large-scale deep learning training (DLT):
while the compute capacity of high-end GPUs\cite{gpu} keeps increasing,
the I/O speed of reading input data is too slow to saturate the GPUs.
For example,
an NVIDIA DGX A100 node consists of eight A100 GPUs 
(each with 80 GB of memory) 
and delivers a total half-precision (FP16) performance of 5 PFLOPS 
(peta floating-point operations per second)\cite{url_dgx}.
Simple tests (Section~\ref{sec:back}) for training ResNet50\cite{he2016deep} on the ImageNet\cite{Imagenet}
dataset show that a single DGX A100 node can process 16,000 images (roughly 200 KB each) per second\cite{performance},
i.e., about 3.2 GB/s of input data,
which imposes high I/O pressure on the remote storage 
to read the huge numbers of small image files fast enough to keep the GPUs busy.

Various solutions have been proposed 
to alleviate the I/O bottleneck problem of input data for DLT.
For example,
DLT frameworks (like PyTorch\cite{pytorch}) pipeline I/O with computation
by maintaining a queue of prefetched input data on each worker node, 
from which the computation engine consumes batches in memory.
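As a minimal sketch of this prefetching pattern (not PyTorch's actual implementation; all names here are illustrative), a background thread keeps a bounded in-memory queue filled with fetched batches while the compute loop pops from it, so I/O and computation overlap:

```python
import queue
import threading

def prefetching_loader(fetch_batch, num_batches, depth=4):
    """Overlap I/O with computation: a background thread keeps a
    bounded in-memory queue of fetched batches, and the compute
    engine pops from the queue instead of waiting on storage."""
    q = queue.Queue(maxsize=depth)  # bounded: backpressure on I/O
    SENTINEL = object()

    def producer():
        for i in range(num_batches):
            q.put(fetch_batch(i))   # blocks when the queue is full
        q.put(SENTINEL)             # signal end of epoch

    threading.Thread(target=producer, daemon=True).start()
    while True:
        batch = q.get()
        if batch is SENTINEL:
            break
        yield batch

# Usage: the compute loop consumes batches while I/O runs ahead.
batches = list(prefetching_loader(lambda i: [i] * 4, num_batches=3))
```

The bounded queue depth is the knob that trades memory for the ability to hide I/O latency; as long as fetching a batch is no slower on average than computing on one, the GPU never stalls.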
%
DeepIO \cite{zhu2018entropy}
focuses on the inefficiency of the multiple stages of TensorFlow's data I/O API
(\ttt{Source} $\rightarrow$ \ttt{Map} $\rightarrow$ \ttt{Shuffle} $\rightarrow$ \ttt{Repeat} $\rightarrow$ \ttt{Batch}),
and uses the memory of worker nodes to provide a distributed cache\cite{zaharia2012resilient,li2014tachyon} 
which realizes RDMA-based\cite{lu2017octopus} in-situ \ttt{Shuffle} (with pipelining)
and adjusts the order of consumed data items
to maximize the use of cached data\cite{gunda2010nectar}.
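The staged structure of such an input pipeline can be sketched in pure Python with chained generators (an illustrative analogue of TensorFlow's \ttt{Source} $\rightarrow$ \ttt{Map} $\rightarrow$ \ttt{Shuffle} $\rightarrow$ \ttt{Repeat} $\rightarrow$ \ttt{Batch} stages, not the actual \ttt{tf.data} API):

```python
import random
from itertools import islice

# Each stage wraps the previous one as a generator, mirroring
# Source -> Map -> Shuffle -> Repeat -> Batch.
def source(items):
    yield from items

def map_stage(it, fn):
    return (fn(x) for x in it)

def shuffle_stage(it, buffer_size, seed=0):
    # Windowed shuffle over a bounded buffer (as in tf.data's shuffle):
    # fill a buffer, then emit a random element and refill from the stream.
    rng = random.Random(seed)
    buf = list(islice(it, buffer_size))
    for x in it:
        i = rng.randrange(len(buf))
        yield buf[i]
        buf[i] = x
    rng.shuffle(buf)
    yield from buf

def repeat_stage(make_it, epochs):
    for _ in range(epochs):
        yield from make_it()

def batch_stage(it, batch_size):
    it = iter(it)
    while True:
        b = list(islice(it, batch_size))
        if not b:
            return
        yield b

pipeline = batch_stage(
    repeat_stage(
        lambda: shuffle_stage(map_stage(source(range(8)), lambda x: x * 2),
                              buffer_size=4),
        epochs=2),
    batch_size=4)
```

Because every stage is lazy, each data item flows through the whole chain one at a time; the inefficiency DeepIO targets is that, without in-situ shuffling and caching, each of these hand-offs can incur extra copies and redundant storage reads across epochs.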
%
Most recently,
Quiver \cite{kumar2020quiver}
proposes to add an additional small cache layer 
(between the remote storage and the GPU cluster) %in public cloud,
which preloads a portion of the dataset from remote storage.
Quiver prepares mini-batches opportunistically with cached data
and prioritizes cache allocation to the DLT jobs
that benefit the most from the limited cache.
% so as to accelerate training data I/O.

\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{../figs/pic1.png}
\caption{Comparison of the data loading procedures 
of PyTorch, Quiver, DeepIO, and \sys.
Each iteration of the DLT task 
reads data samples with the predefined batch size, 
pre-processes them into tensors, 
and passes them to the DLT model for computation.}
\label{fig:path}
\end{figure*}

Unfortunately,
these solutions are still far from efficient for high-performance DLT,
even when the total memory capacity is large enough to hold the entire input dataset.
This is mainly because, 
after being preloaded into the intermediate cache layer,
the input data still has to be copied multiple times 
along the I/O path before reaching GPU memory;
this copying is time-consuming 
and wastes significant cycles of the expensive GPUs.


This paper presents \sys, an efficient data loading framework 
for bridging the performance gap between computation and I/O of large DLT jobs
in Alibaba's high-performance RDMA-enabled GPU clusters.
Unlike previous work \cite{kumar2020quiver, zhu2018entropy},
which mainly targets memory-constrained scenarios
and increases the hit ratio of the intermediate cache layer
through cache-aware mini-batch construction,
\sys is designed for Alibaba's high-performance GPU clusters
where each worker node is equipped with multiple RDMA NICs and TBs of memory\cite{muthitacharoen2001low}.
Since DLT jobs are mostly processed in GPU memory,
the worker nodes can provide plenty of dedicated host memory, 
whose aggregate capacity is, in most cases, large enough 
to hold the entire input training dataset. 

To keep the expensive high-end GPUs busy,
the key idea of \sys is 
to organize the dedicated memory of all worker nodes 
into an RDMA-aware distributed shared memory pool (called \cam),
which provides DLT jobs with a highly-optimized proactive data I/O service 
that avoids any unnecessary memory copies.
\cam adopts a two-stage pipelined model 
(from remote storage to \cam, and from \cam to GPU memory)
to effectively reduce the I/O wait time of the worker nodes' GPUs.
\cam pulls raw training data from remote storage
and pushes formatted batches of input data 
directly to the workers' CPU/GPU memory (via RDMA).  
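The two-stage pipeline can be sketched with two bounded queues and two concurrent stages (a simplified illustration under our own naming, not \sys's actual implementation: \ttt{read\_remote} and \ttt{to\_device} stand in for the storage-pull and RDMA-push steps):

```python
import queue
import threading

def two_stage_pipeline(read_remote, to_device, num_batches, depth=2):
    """Illustrative two-stage pipeline: stage 1 pulls raw data from
    remote storage into the shared memory pool, stage 2 pushes
    formatted batches from the pool toward device memory.
    Both stages run concurrently, so the slower of the two
    (rather than their sum) bounds the per-batch latency."""
    DONE = object()
    pool_q = queue.Queue(maxsize=depth)    # remote storage -> pool
    device_q = queue.Queue(maxsize=depth)  # pool -> GPU memory

    def stage1():
        for i in range(num_batches):
            pool_q.put(read_remote(i))
        pool_q.put(DONE)

    def stage2():
        while True:
            raw = pool_q.get()
            if raw is DONE:
                device_q.put(DONE)
                return
            device_q.put(to_device(raw))

    for f in (stage1, stage2):
        threading.Thread(target=f, daemon=True).start()
    while True:
        batch = device_q.get()
        if batch is DONE:
            return
        yield batch

# Usage: batch i is read remotely, "formatted", then consumed in order.
out = list(two_stage_pipeline(lambda i: i, lambda raw: raw + 100, 5))
```

With both stages overlapped, a worker's GPU waits only when the steady-state throughput of either stage falls below the training consumption rate, which is the condition \cam's proactive push is designed to avoid.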



\sys has been deployed in Alibaba's production DLT cloud.
Extensive evaluation shows that
\sys delivers significantly higher I/O performance 
than state-of-the-art memory-based storage and caching solutions
for various DLT workloads.

The rest of this paper is organized as follows.
Section~\ref{sec:back} introduces the background and motivation.
Sections~\ref{sec:design} and \ref{sec:impl} present the design and implementation of \sys (with the RDMA-aware \cam memory pool), respectively.
Section~\ref{sec:eval} evaluates the performance of \sys and compares it with the state-of-the-art solutions.
Section~\ref{sec:relate} discusses related work.
Finally, Section~\ref{sec:conc} concludes the paper. 
%and discusses future work.

