\section{Background and Motivation}\label{sec:back}



Large-scale distributed DLT systems usually adopt a remote data store 
to provide scalable storage for the input training dataset.
However, 
the separation between computing and storage makes data loading
a severe performance bottleneck for training,
especially in DLT-specific clusters equipped with expensive high-performance GPUs.
%
For example,
Alibaba's GPU clouds consist of hundreds or even thousands of NVidia DGX A100 nodes.
Each DGX A100 node has eight A100 GPUs with 640 GB of GPU memory in total,
and can achieve a floating-point (FP16) performance of 5 PFLOPS (PetaFLOPS)
and a tensor-float (TF32) performance of 2.5 PFLOPS \cite{url_dgx}.
%

Such high-performance GPU clusters require extremely fast input data loading.
% to saturate the GPUs.
As the training dataset is usually stored as a huge number of small files
(whose sizes range from KBs to MBs) \cite{li2014efficient}
that incur relatively high metadata access overhead,
it is challenging for the backend storage system
to provide enough throughput and IOPS to saturate the GPU cluster,
even with fast storage devices (like NVMe SSDs) and prefetching/pipelining \cite{patterson1995informed}.
%
Moreover,
many DLT algorithms are synchronous in that
the next training iteration can start
only after the previous iteration has completed on all workers.
This requires all workers to maintain nearly the same data loading speed;
otherwise, stragglers will severely degrade the overall training performance.
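To make the straggler effect concrete, consider the following tiny illustrative calculation (the per-worker load times are hypothetical, not measurements):

```python
# Hypothetical per-iteration data loading times (ms) for eight workers;
# the last worker is a straggler (e.g., hitting a slow storage path).
load_ms = [10, 10, 10, 10, 10, 10, 10, 40]

# Synchronous training: every iteration waits for the slowest worker.
sync_iter_ms = max(load_ms)                      # 40 ms per iteration

# Perfectly balanced loading would sustain an effective iteration time of:
balanced_iter_ms = sum(load_ms) / len(load_ms)   # 13.75 ms

# A single 4x-slower worker inflates every iteration by about 2.9x.
slowdown = sync_iter_ms / balanced_iter_ms
```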
% high-performance DLT requires its data 

\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{../figs/aaa.png}
\caption{\sys architecture.}
\label{fig:arch}
\end{figure*}

Therefore,
in this paper we aim to eliminate the data I/O bottleneck of high-performance GPU clusters
and provide fast data loading services for distributed DLT.
Our design is based on the following observations.

First,
high-performance GPU clusters have increasingly large memory capacity and network bandwidth.
For instance,
a DGX A100 worker node in Alibaba's GPU clusters is equipped with 2 TB of memory.
Since DLT jobs are mostly processed in GPU memory,
the workers can dedicate plenty of host memory to data caching.
For a cluster of 512 worker nodes,
the total memory capacity can be as high as 1 PB,
most of which can be used for storing training data
and is large enough to accommodate the entire input dataset
in almost all cases.
%
Further,
a DGX A100 worker node installs nine 200 Gbps InfiniBand ConnectX-6 NICs,
of which eight serve the eight A100 GPUs
and the ninth is dedicated to backend storage.
%
Therefore,
we propose to organize the memory of the worker nodes 
into an RDMA-enabled memory pool (called \cam),
which can buffer all the input data for a DLT job during its training
so as to bridge the performance gap between storage and computation. 

Second,
DLT jobs repeatedly read the entire dataset many times (once per epoch)
with a predictable read set (mini-batch) for each iteration.
For example,
in distributed training with PyTorch,
the exact input order of the dataset is known
as soon as an epoch is initialized.
%
The predictability allows data 
not only to be preloaded from remote storage to a cache layer 
\cite{kumar2020quiver, zhu2018entropy}
of the GPU cluster with high concurrency, 
but also to be prefetched from the cache layer 
to the local memory of the target worker nodes\cite{rashmi2016ec}.
%
Therefore,
we propose a two-stage pipeline model for data loading
to minimize the waiting time of the fast GPUs:
in the first stage, \cam pulls raw training data from remote storage,
and in the second stage, \cam pushes formatted batches of data
(required by the worker nodes in each iteration)
to the local CPU/GPU memory of the workers.
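As a sketch of this predictability (illustrative, not \sys's actual code), the following mimics the seeded shuffle-and-partition idea behind PyTorch's \textsf{DistributedSampler}: every worker can compute its own (and every peer's) complete read order at the start of an epoch, which is exactly what lets a cache layer prefetch the right mini-batches ahead of time.

```python
import random

def epoch_read_order(num_samples, world_size, rank, epoch, seed=0):
    """Compute one worker's full read order for an epoch, mirroring the
    idea behind PyTorch's DistributedSampler (illustrative sketch):
    a shuffle seeded identically on all workers, then a strided split."""
    g = random.Random(seed + epoch)     # same seed on every worker
    indices = list(range(num_samples))
    g.shuffle(indices)                  # identical order on all workers
    pad = (-num_samples) % world_size   # pad so the split is even
    indices += indices[:pad]
    return indices[rank::world_size]    # this rank's slice of the epoch

# Any node (e.g., the cache layer) can derive every worker's read set
# for the whole epoch before training starts, enabling prefetching.
orders = [epoch_read_order(10, world_size=2, rank=r, epoch=3) for r in (0, 1)]
```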




Third,
the input data loading path (from remote storage to GPU memory)
of traditional distributed training systems
involves multiple memory copies between kernel space and user space,
as shown in Fig.~\ref{fig:path}.
The frequent context switches drastically increase scheduling overhead
and resource contention,
which further lowers data loading performance and thus causes GPU starvation.
%
Therefore,
we propose to leverage RDMA to perform data loading entirely in user space,
which eliminates the influence of the OS kernel
and maximizes the loading performance of small training data items.

Fourth,
in many DLT scenarios,
data preprocessing can be offloaded to GPUs for acceleration.
For example,
the NVidia DALI acceleration library \cite{url_dali}
allows tensor computation and image preprocessing to be offloaded to GPUs.
%
Therefore,
we propose a highly-optimized GPU-direct data loading path
that loads data directly into GPU memory for such scenarios,
allowing users to define specific preprocessing operations
via standard interfaces such as \textsf{cuda\_array\_interface},
so as to minimize the length of the data loading path.
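To illustrate the interface in question (a sketch, not \sys's implementation): any object exposing the standard \textsf{\_\_cuda\_array\_interface\_\_} attribute (version 3) can be consumed zero-copy by GPU libraries such as Numba or CuPy, so a GPU-direct loader can wrap each device-resident batch this way. The device pointer below is a placeholder, not real GPU memory.

```python
class DeviceBatch:
    """A batch already resident in GPU memory, exposed through the
    standard __cuda_array_interface__ protocol (version 3) so that GPU
    libraries (Numba, CuPy, DALI) can consume it without extra copies.
    Illustrative sketch: the device pointer here is a dummy value."""

    def __init__(self, dev_ptr, shape, typestr="<f4"):
        self._dev_ptr = dev_ptr   # device pointer of the batch buffer
        self._shape = shape       # e.g., (batch, height, width, channels)
        self._typestr = typestr   # "<f4" = little-endian float32

    @property
    def __cuda_array_interface__(self):
        return {
            "shape": self._shape,
            "typestr": self._typestr,
            "data": (self._dev_ptr, False),  # (pointer, read_only flag)
            "strides": None,                 # None means C-contiguous
            "version": 3,
        }

batch = DeviceBatch(dev_ptr=0x7F00_0000, shape=(256, 224, 224, 3))
iface = batch.__cuda_array_interface__
```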



