\begin{abstract}

The ever-increasing compute capacity of high-end GPUs
has made storage a severe bottleneck
for large-scale deep learning training (DLT),
as
the I/O speed of reading input training data
is too slow to keep the GPUs saturated.
%
Although various cache-based solutions like Quiver
have been proposed to add a cache layer (with pipelining)
to alleviate the input data I/O bottleneck,
they remain far from efficient for high-performance DLT,
even when the total cache capacity is large enough to hold the entire input dataset.
This is mainly because they must perform multiple memory copies
along the loading path of input data
(from the cache layer to GPU memory),
which is time-consuming
and severely wastes expensive GPU resources.

This paper presents \xld, an extremely-fast data loading framework 
for bridging the performance gap between computation and I/O of large DLT jobs
in Alibaba's high-performance RDMA-enabled GPU clusters.
Unlike previous work,
which mainly targets memory-constrained scenarios
and improves the cache layer's hit ratio
by opportunistically composing mini-batches from cached data,
\xld is designed for Alibaba's high-performance GPU clusters
where each worker node is equipped with multiple RDMA NICs and TBs of memory.
Since DLT jobs are mostly processed in GPU memory,
the worker nodes can provide plenty of dedicated host memory %(for storing input data)
whose total capacity is large enough to hold the entire input dataset in almost all cases.
%
To keep the expensive GPUs busy,
\xld organizes the dedicated memory of all worker nodes 
into an RDMA-aware distributed shared memory pool, %(\cam),
which provides DLT jobs with a highly-optimized proactive data I/O service 
that avoids any unnecessary memory copies
by pulling raw input data from remote storage
and pushing formatted batches of data
directly into the workers' CPU/GPU memory (via RDMA).
%
We have integrated \xld into PyTorch
to build a high-performance DLT system (called \sys)
for Alibaba's high-end GPU clusters.
Evaluation shows that
\sys delivers significantly higher I/O performance 
than state-of-the-art memory-based storage/caching solutions
for various DLT workloads.

\end{abstract}