\section{Conclusion}\label{sec:conc}

This paper presents \sys, an extremely fast data loading framework 
that bridges the performance gap between computation and I/O for large DLT jobs
in Alibaba's high-performance RDMA-enabled GPU clusters.
Unlike prior work,
which mainly targets memory-constrained scenarios
and improves the cache layer's hit ratio
by opportunistically preparing mini-batches from cached data,
\sys is designed for Alibaba's high-performance GPU clusters
where each worker node is equipped with multiple RDMA NICs and TBs of memory.
Since DLT jobs are mostly processed in GPU memory,
the workers can provide plenty of dedicated memory
whose total capacity is large enough to accommodate the entire input dataset in almost all cases.
%
To keep the expensive high-end GPUs busy,
\sys organizes the dedicated memory of all worker nodes 
into an RDMA-aware distributed shared memory pool (called \cam),
which provides DLT jobs with a highly-optimized proactive data I/O service 
that avoids any unnecessary memory copies.
\cam pulls raw input data from remote storage
and pushes formatted batches of data 
directly to the workers' CPU/GPU memory (via RDMA).  
Evaluation shows that
\sys delivers significantly higher I/O performance 
than state-of-the-art memory-based storage/caching solutions
for various DLT workloads.
In the future, we will continue to incorporate the local SSDs of the compute servers into \sys's memory pool to support tasks involving super-large AI models.




