\section{Design}\label{sec:design}
% \sys is a high-performance data service framework designed for distributed deep learning scenarios in high-performance dedicated computing clusters.  \sys
% , as a data loading acceleration service, can make full use of the high-performance hardware capability at the bottom of the computing cluster on the one hand and closely cooperates with the upper deep learning application on the other hand.




\sys is an extremely fast, all-user-mode data loading framework. Data loading is implemented as a two-stage pipeline,
where the input training data is first loaded into a memory pool of the GPU cluster and then into the CPU/GPU memory of the worker nodes,
so as to
reduce the runtime data loading time
and enable DLT jobs with high concurrency.

% automatically pre-loading the dataset from the remote storage in parallel to the distributed computing nodes to which the task is scheduled or the nearest one when the task is started. At runtime, the preloaded data is actively loaded into the CPU memory or GPU video memory nearest to the task calculation through the data loading path of full user mode. This process can significantly reduce the data loading delay at task runtime and enable distributed deep learning tasks with high concurrency and low delay to be executed efficiently.


\begin{figure*}[ht]
\centering
\includegraphics[width=0.75\linewidth]{../figs/pic3.png}
\caption{Preloading data to DataServers' \cam.}
\label{fig:preload}
\end{figure*}

\subsection{Architecture}

The architecture of \sys is shown in Fig.~\ref{fig:arch}, which is mainly composed of DataServer, DataClient, and the DataService API exposed to the DLT jobs.

\begin{itemize}
\item 
DataServers provide intermediate in-memory data storage functions,
including memory resource registration and in-memory data storage.
DataServers divide memory into fixed-size (\eg, 512MB) chunks, and each chunk of memory is called a memory region (\textsf{mr}).  
DataServers are independent of each other
and jointly form a data service cluster
which can be accessed by DataClients.
The DataServers' available memory forms a distributed memory pool enabled by RDMA (\cam),
where the input training data is distributed 
through a universal mapping strategy.

\item 
DataClients provide DLT jobs with the core functions of the highly-optimized data I/O service,
including proactive preload, OS-bypass data prefetch, and I/O scheduling. 
DataClients and DataServers communicate 
through a multi-rail RDMA (Infiniband or RoCE) network.
When preloading the training data,
DataClients first request sufficient memory space
from the distributed resource manager,
which allocates the memory (with a universal address space)
from the DataServer memory pool (\cam).
The entire training dataset is then pulled into the universal space
through RDMA Write.
During training,
DataClients realize all-user-mode data loading
(through RDMA Read or Send-Write)
according to the training order.


\item 
The DataService API is called by DLT programs to achieve high-performance training data loading. DLT programs can also customize their data loading process on top of the DataService API.
\end{itemize}


\subsection{Data Model}\label{subsec:model}



The dataset preloaded by DataServer is called \emph{working set}. 
It is used by DLT jobs at runtime 
and consists of a series of logically related data samples. 
\sys mainly supports two types of working set
whose loading process can be accelerated.

\begin{itemize}
\item {\verb|Array-like working set|}: The array-like working set includes datasets in formats such as \ttt{npy} and \ttt{hdf5}. An array-like dataset supports multidimensional data access,
and can be addressed along the highest dimension.

\item {\verb|Fileset working set|}:
The fileset working set includes file-based datasets such as \ttt{imagenet}.
Specifically, an image dataset can be loaded either as the original binary files or as the decoded pixel values.
\end{itemize}

The working set provides the following core API operations:

\begin{itemize}
\item {\verb|Preload|}: This API loads working sets from backend storage into the DataServers' distributed memory pool.
Preload can be explicitly executed by users before running DLT programs, or implicitly performed by the DataServers' data I/O service at runtime.
\item {\verb|Append|}: This API adds data samples to an existing working set in the memory pool, which is mainly for incremental update of working set.
\item {\verb|Load|}: This API loads a specified working set to the CPU/GPU memory of the worker.
%  and can specify whether to load to CPU or GPU.
\item {\verb|Read|}: This API returns a data sample 
specified by its unique identifier from the working set 
that has already been loaded to the worker's memory. 
For an array-like dataset,
the sample's identifier is simply its serial number ID. For a fileset,
the sample's identifier is a string
such as the pathname of the picture.
\item {\verb|Readbatch|}: This API returns a batch of data samples from the loaded working set.
\end{itemize}
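The semantics of these operations can be sketched with a minimal in-memory mock. The class below is illustrative only (the names mirror the API operations above but are not \sys's actual interface); the dictionaries stand in for the distributed memory pool and the worker's local memory.

```python
# Minimal in-memory sketch of the working-set API semantics.
# The class and its storage layout are illustrative, not \sys's real interface.
class WorkingSet:
    def __init__(self):
        self._pool = {}      # stands in for the DataServers' distributed memory pool
        self._loaded = {}    # stands in for the worker's CPU/GPU memory

    def preload(self, samples):
        # Load samples from backend storage into the (simulated) memory pool.
        self._pool.update(samples)

    def append(self, samples):
        # Incrementally add samples to an existing working set.
        self._pool.update(samples)

    def load(self):
        # Pull the working set from the pool into the worker's memory.
        self._loaded = dict(self._pool)

    def read(self, key):
        # key is a serial number (array-like set) or a pathname (fileset).
        return self._loaded[key]

    def readbatch(self, keys):
        # Return a batch of samples from the loaded working set.
        return [self._loaded[k] for k in keys]
```

For an array-like working set the keys would be integer sample IDs; for a fileset, pathnames.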

\subsection{Data Preloading}

DataServers can proactively preload data for DLT jobs.
During the initialization of a DLT job,
\sys checks whether the working set already exists in the DataServers' memory. 
If the working set does not exist, 
then it starts the preloading process.
%
As shown in Fig.~\ref{fig:preload},
each DLT process is responsible for preloading its training data
to the DataServers' memory,
according to its rank number and the size of the input dataset.

The preloading procedure is as follows.

First,
the first DLT process (Rank0)
checks whether the DataServers' memory (\textsf{mr})
can accommodate the entire dataset.
In most cases the distributed memory is large enough,
owing to the high memory capacity of the worker nodes,
and \sys allocates DataServers' memory
according to the size of the dataset.
Otherwise, if the distributed memory cannot hold the entire dataset,
\sys allocates memory for a relatively small subset (working set) of the dataset.
Each memory region (\textsf{mr}) allocated to the working set
is assigned an \textsf{mr\_gid},
numbered incrementally from 0.
All the memory regions form a logically continuous space
with a distributed unified address space.
%
A DataClient sends a memory request with \textsf{mr\_gid} to the corresponding DataServer, 
which will allocate and register the memory region
either as an RDMA transfer (remote) \textsf{mr} 
or as an inter-process communication (local) \textsf{mr}. 

Second,
the DataServer creates a working-set-independent table
for the memory region metadata of each registered \textsf{mr}.
The memory region metadata contains information such as \textsf{mr\_gid}, virtual address, and keys for local or remote RDMA access,
and is synchronized to all DLT processes.
Other processes wait for Rank0 to complete the preparation
and then synchronize the memory region metadata.

Third,
each DLT process reads its corresponding data samples from the backend storage, 
and calculates the storage position in the distributed memory region 
according to the order of the data samples.
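The placement computation in this step can be sketched as follows, assuming fixed-size samples laid out contiguously in the unified address space; the chunk size matches the 512MB memory regions from the design, and the round-robin rank partition is an illustrative assumption.

```python
# Sketch of step three: each DLT process takes the samples for its rank and
# maps each sample's position in the sample order to an (mr_gid, offset)
# pair in the distributed unified address space.
CHUNK_SIZE = 512 * 1024 * 1024  # 512MB memory regions (mr), as in the design

def placement(sample_idx, sample_size):
    """Map a sample's global index to its memory region and offset."""
    addr = sample_idx * sample_size               # position in the unified space
    return addr // CHUNK_SIZE, addr % CHUNK_SIZE  # (mr_gid, offset within mr)

def my_samples(rank, world_size, num_samples):
    """Samples a given rank is responsible for preloading (round-robin)."""
    return range(rank, num_samples, world_size)
```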

Fourth, 
based on the metadata of the memory region, 
the data samples are transferred to the DataServers' proper memory region (through RDMA Write).

For the rare scenarios
in which the DataServers' memory cannot hold the entire dataset,
\sys has to preload only a subset (working set)
rather than the entire dataset.
However,
\sys knows exactly the sample sequence (of each epoch)
trained by each worker process,
and thus each process follows the processing order
to continuously preload the soon-to-be-used data samples,
similar to the pipelined training of PyTorch and Quiver.

If a working set is proactively (and implicitly) preloaded,
then it will be automatically unloaded from the DataServers after the DLT jobs are done,
so as to release the distributed memory for new DLT jobs.
Otherwise, if a working set is preloaded via the Preload API
from a DataClient or command-line tools,
then it will not be automatically deleted,
so that it can be reused in scenarios where multiple tasks repeatedly use the same dataset \cite{kumar2020quiver}.


\subsection{All-User-Mode Data Loading}
In this subsection we first introduce the basic data loading design
of \sys of which the entire procedure is performed in user-mode.
We then introduce the designs 
for batch data loading and GPU-direct data loading.

\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{../figs/pic4.png}
\caption{Load data to worker CPU memory with \cam,
initializing receive buffer according to the working set.}
\label{fig:loadcpu}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{../figs/pic5.png}
\caption{Load data samples directly from \cam to receive buffer through \ttt{readbatch}.}
\label{fig:loadbatch}
\end{figure}

\subsubsection{User-mode data loading}
\sys organizes the data format in the memory pool (\cam)
to leverage RDMA's kernel-bypass support
for providing all-user-mode data loading,
which directly loads input samples into the CPU or GPU memory of the worker nodes,
enabling DLT jobs to access remote data with zero copy.

Fig.~\ref{fig:loadcpu} illustrates the procedure 
of loading input data directly to the worker's CPU memory.
%
The DataClient synchronizes the metadata information 
with the DataServer.
%
The DLT process sets the batch size of the loaded data as well as the device memory (CPU) in which to place the data.
The DataClient pre-allocates the receive buffer (batch\_rx\_buffer) according to the metadata information,
the sizes of the data samples, and the loading parameters;
registers it as an RDMA-capable buffer;
and exposes the buffer as an \emph{array memory view} to the DLT process.
%
Then, the DataClient locates the DataServer where the data samples are stored and the memory address information 
(according to the  metadata). 
%
If the sample is stored on the same physical machine as the DataClient,
then the data is copied from the source memory address to the pre-allocated batch\_rx\_buffer directly via an IPC memory copy.
%
Otherwise, the source address, the destination address of the batch\_rx\_buffer, and their RDMA-related keys are encapsulated into a request sent to the target DataServer through RDMA\_SEND. When the DataServer receives the request, it writes the data back to the batch\_rx\_buffer directly through an RDMA Write.
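The per-sample routing decision above can be sketched as follows; the field names of the sample descriptor and the request are illustrative, not \sys's actual wire format.

```python
# Sketch of the per-sample dispatch decision: a local sample is copied via an
# IPC-style memory copy, a remote one becomes an RDMA request for the
# DataServer, which answers with an RDMA Write into the receive buffer.
def dispatch(sample, local_host, rx_buffer, rx_offset, rkey):
    if sample["host"] == local_host:
        # Same physical machine: plain memory copy into batch_rx_buffer.
        rx_buffer[rx_offset:rx_offset + len(sample["data"])] = sample["data"]
        return None
    # Remote: encapsulate source/destination addresses and the RDMA key
    # into a request to be sent to the target DataServer via RDMA_SEND.
    return {
        "src_addr": sample["addr"],
        "dst_addr": rx_offset,
        "rkey": rkey,
        "len": len(sample["data"]),
    }
```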

Because registering an RDMA-enabled buffer is time-consuming,
we reuse the fixed \emph{array memory view} as the receive buffer throughout the entire data loading procedure.
This avoids the overhead caused by repeatedly registering new receive buffers for RDMA.
Further, with the fixed array memory view, DLT processes can directly use the array or convert it into a tensor for preprocessing without introducing another memory copy.
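The reuse pattern can be sketched as follows; \ttt{register\_rdma\_buffer} is a stand-in for the real (expensive) RDMA memory registration, and the batch/sample sizes are illustrative.

```python
# Sketch of the fixed receive-buffer reuse: the buffer is allocated and
# "registered" once, and every batch lands in the same pre-registered memory,
# read through a single memoryview with zero copy.
registrations = []

def register_rdma_buffer(buf):          # stand-in for the costly registration
    registrations.append(id(buf))
    return memoryview(buf)

batch_size, sample_size = 4, 8
rx_buffer = bytearray(batch_size * sample_size)
view = register_rdma_buffer(rx_buffer)  # registered exactly once

def receive_batch(samples):
    # Each batch overwrites the same buffer; the DLT process reads the
    # samples through `view` without another copy.
    for i, s in enumerate(samples):
        view[i * sample_size:(i + 1) * sample_size] = s

for _ in range(3):                      # three batches, one registration
    receive_batch([bytes([b] * sample_size) for b in (1, 2, 3, 4)])
```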


\subsubsection{Batch data loading}  
The input samples read by many DLT tasks are relatively small, ranging from several KB to a few MB across different text and image datasets.
%
Batch data loading merges data loading requests of DLT tasks of the same mini-batch. 
In batch data loading, multiple DataServers return data to DataClient simultaneously, 
which leads to \textsf{incast} and potentially drastically reduces data loading throughput. 

To address the \textsf{incast} problem,
we design an efficient batch loading mechanism.
As shown in Fig.~\ref{fig:loadbatch}, 
when DataClient loads data from multiple DataServers in a batch, 
it pipelines the requests of batch data through I/O scheduling
(listed in Algorithm~\ref{algorithm1}), 
and limits the inflight data requests
so that they do not exceed the DataClient's processing capacity,
thus effectively mitigating the \textsf{incast} problem and achieving efficient batch data loading.

\begin{algorithm}[t]
\caption{I/O schedule algorithm for batch data loading}
\label{algorithm1}
\KwIn{Available network bandwidth per GPU $bw\_avail$; number of processes $nprocs$ used for data loading; transport latency $latency$ of a data sample of size $sample\_size$; number of pipeline stages $stages$ used to pipeline the batch data loading requests; sample indices $batch\_idxs$ of size $batch\_size$ to read}
$bw\_per\_proc = bw\_avail / nprocs$\;
$nreq\_inflight\_limit = bw\_per\_proc / (sample\_size / latency)$\;
$nreq\_per\_stage = nreq\_inflight\_limit / stages$\;
$global \; req\_out = 0$\;
$global \; req\_finished = 0$ (updated in another thread upon each data completion)\;
$i = 0$\;
\While{$i < batch\_size$}{$start\_idx = i$\;
$end\_idx = i + nreq\_per\_stage$\;
\If{$end\_idx > batch\_size$}{$end\_idx = batch\_size$\;}
$dispatch\_reqs(batch\_idxs[start\_idx, end\_idx])$\;
$req\_out = req\_out + (end\_idx - start\_idx)$\;
$i = end\_idx$\;
\While{$req\_out - req\_finished \geq nreq\_inflight\_limit$}{$wait \; for \; req\_finished \; update \; notification$}
}
\end{algorithm}
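The schedule of Algorithm~\ref{algorithm1} can be expressed as a runnable sketch, assuming a hypothetical completion thread that raises the finished counter; in \sys, dispatching a stage would issue the actual RDMA reads and completions would arrive from the NIC.

```python
import threading

# Runnable sketch of the I/O schedule: requests are dispatched stage by
# stage, and a completion thread advances req_finished, which bounds the
# number of inflight requests below nreq_inflight_limit.
def batch_load(batch_idxs, nreq_inflight_limit, stages):
    nreq_per_stage = max(1, nreq_inflight_limit // stages)
    batch_size = len(batch_idxs)
    state = {"out": 0, "finished": 0}
    cond = threading.Condition()
    done = []

    def complete(idxs):                      # stand-in for RDMA completions
        for idx in idxs:
            with cond:
                done.append(idx)
                state["finished"] += 1
                cond.notify()

    i = 0
    while i < batch_size:
        start, end = i, min(i + nreq_per_stage, batch_size)
        state["out"] += end - start
        # "Dispatch" the stage's requests; completions arrive asynchronously.
        threading.Thread(target=complete, args=(batch_idxs[start:end],)).start()
        i = end
        with cond:                           # block while too many inflight
            while state["out"] - state["finished"] >= nreq_inflight_limit:
                cond.wait()
    with cond:                               # drain remaining completions
        while state["finished"] < batch_size:
            cond.wait()
    return done
```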


\subsubsection{GPU-Direct data loading}
In the traditional DLT architecture, the PCIe bus is the I/O bottleneck of the whole system. 
In many specific DLT scenarios, 
data preprocessing can be moved to GPUs to accelerate the procedure
(\eg, as supported by the NVIDIA DALI acceleration library \cite{url_dali}).


As shown in Fig.~\ref{fig:loadcpu}, when the DLT program is initialized, 
the memory of the target device for loading data can be set as CPU or GPU. 
If set to GPU,
DataClient will pre-allocate GPU memory as the receive batch\_rx\_buffer through CUDA, 
register the GPU batch\_rx\_buffer for RDMA, 
and provide it to the upper-layer application as the array memory view of \textsf{cuda\_array\_interface}. 
Then, through GPU-Direct RDMA, 
DataServer can directly push the data into GPU memory.
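Exposing the GPU receive buffer to the upper layer through the \textsf{cuda\_array\_interface} protocol can be sketched as below. The class name is illustrative, and the device pointer here is a placeholder; in \sys it would come from the CUDA allocation of batch\_rx\_buffer.

```python
# Sketch of exposing a pre-allocated GPU receive buffer via the
# __cuda_array_interface__ protocol, so consumers such as CuPy or PyTorch
# can wrap it zero-copy. The pointer below is a dummy, not real GPU memory.
class GpuBatchRxBuffer:
    def __init__(self, dev_ptr, batch_size, sample_size):
        self._ptr = dev_ptr
        self._shape = (batch_size, sample_size)

    @property
    def __cuda_array_interface__(self):
        # Standard protocol keys: shape, typestr, data, version.
        return {
            "shape": self._shape,
            "typestr": "|u1",            # raw bytes
            "data": (self._ptr, False),  # (device pointer, read-only flag)
            "version": 3,
        }
```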

\subsection{Multi-Rail Network Optimization}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{../figs/pic7.png}
\caption{Accelerating input data I/O through multi-rail network.}
\label{fig:multirail}
\end{figure}

High-performance GPU clusters for DLT are generally configured with multiple NICs. As shown in Fig.~\ref{fig:multirail}, one or two GPUs are configured with an independent high-bandwidth NIC. 
%
In our DLT tasks, each GPU runs a training process, which is configured with 4 $\sim$ 10 data loading processes. 
In order to further accelerate training, 
we make full use of the multi-rail network for data loading, 
by increasing the (aggregate) available bandwidth. 

As shown in Fig.~\ref{fig:multirail}, 
to realize multi-rail RDMA transfers,
the memory regions managed by each DataServer are registered to each NIC during RDMA registration, 
and the RDMA key-related information of each NIC is recorded in the \textsf{mr} metadata information. 
When initializing data loading,
the DataClient detects the GPU used by the training process to which it belongs, selects the NIC nearest to that GPU,
sets NIC affinity,
and then uses that NIC for data loading.
% As shown in Fig.~\ref{fig:multirail}, 
For instance,
if training proc0 and training proc1 both correspond to GPU0,
then the DataClients invoked in proc0 and proc1 will use NIC0 for data loading.
To avoid introducing remote NUMA access across CPUs, 
DataClient will select the CPU core that is closest to the GPU 
to set CPU affinity of each training process.
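The affinity selection can be sketched as follows, assuming the topology above (one NIC per two GPUs); the per-NUMA constants are illustrative and would be probed from the actual machine topology.

```python
# Sketch of the NIC/CPU affinity selection: with one NIC per two GPUs, the
# DataClient picks the NIC nearest to its process's GPU, and CPU cores on
# the same NUMA node to avoid remote NUMA access across CPUs.
GPUS_PER_NIC = 2     # illustrative: one high-bandwidth NIC per two GPUs
CORES_PER_NUMA = 16  # illustrative topology constant

def nearest_nic(gpu_id):
    """NIC serving this GPU under the one-NIC-per-two-GPUs layout."""
    return gpu_id // GPUS_PER_NIC

def nearest_cores(gpu_id, gpus_per_numa=4):
    """CPU cores on the NUMA node hosting this GPU."""
    numa = gpu_id // gpus_per_numa
    base = numa * CORES_PER_NUMA
    return list(range(base, base + CORES_PER_NUMA))
```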


% performs data preloading. 
% When the DataLoader is initialized, 
% the distributed training process has completed the distributed initialization. 
% Each training process is responsible for preloading part of the data into the DataServer according to the world\_size of the distributed process, 
% the size of the dataset and its Rank number. 
