\section{Implementation}\label{sec:impl}

The \sys framework is implemented in three main parts: the DataServer, the DataClient, and a Python package that adapts \sys to specific deep learning frameworks.
The underlying communication layer is built on ACCL \cite{dong2021accl}, a high-performance communication library that provides an RDMA-based remote procedure call (RPC) facility optimized for deep learning, together with a lightweight user-mode thread and task-processing framework that supports large-scale node concurrency in deep learning environments. Its native API exposes a callback-oriented interface for managing asynchronous operations. On top of ACCL, \sys encapsulates an RPC server and client layer that supports both asynchronous and synchronous operation and serves as the low-level communication service between DataServer and DataClient.
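The sync/async layer over a callback-oriented API can be illustrated with a short Python sketch. This is not ACCL's actual interface (the real layer is C++, and all names here are assumptions); it only shows the pattern of turning a completion callback into both a future-based asynchronous call and a blocking synchronous call.

```python
import threading
from concurrent.futures import Future

class CallbackRpcClient:
    """Stand-in for a callback-oriented RPC API in the style of ACCL's
    native interface (hypothetical; the real implementation is C++)."""

    def call(self, method, payload, on_done):
        # Simulate asynchronous completion on another thread, then
        # invoke the caller-supplied completion callback.
        def worker():
            on_done({"method": method, "echo": payload})
        threading.Thread(target=worker).start()

class RpcClient:
    """Wraps the callback interface to offer both async and sync calls,
    mirroring the dual-mode RPC layer built on top of ACCL."""

    def __init__(self, raw):
        self._raw = raw

    def call_async(self, method, payload):
        fut = Future()
        # The completion callback simply resolves the future.
        self._raw.call(method, payload, fut.set_result)
        return fut

    def call_sync(self, method, payload, timeout=5.0):
        # A synchronous call is an async call plus a blocking wait.
        return self.call_async(method, payload).result(timeout=timeout)

client = RpcClient(CallbackRpcClient())
reply = client.call_sync("read_batch", [0, 1, 2])
```

The same wrapped client serves both DLT-side blocking reads and pipelined asynchronous prefetch requests without duplicating transport code.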

DataServer is a network server written in C++. It implements a memory management interface for node memory allocation and release as well as RDMA registration, and it manages user information together with part of the metadata of the cached dataset. On top of this information, it implements the batch read request interface and the corresponding client data read request interface. In addition, DataServer exposes a monitoring interface that reports the metadata, memory utilization, RDMA connection status, and other runtime information of the current dataset, allowing operators to inspect the system and troubleshoot errors when they occur.
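The memory management and monitoring interfaces can be sketched as follows. This is an illustrative Python model, not DataServer's actual C++ code; the class and method names are assumptions, and RDMA registration is simulated by a pluggable callback.

```python
class MemoryManager:
    """Toy model of a node memory manager: allocates and releases cache
    regions and registers each new region for RDMA via a hook
    (hypothetical sketch, not the real DataServer implementation)."""

    def __init__(self, capacity, register_fn=None):
        self.capacity = capacity
        self.used = 0
        self.regions = {}      # region id -> size in bytes
        self.next_id = 0
        # Real code would call the RDMA verbs API here; we simulate it.
        self.register_fn = register_fn or (lambda rid, size: None)

    def alloc(self, size):
        if self.used + size > self.capacity:
            raise MemoryError("out of cache memory")
        rid = self.next_id
        self.next_id += 1
        self.regions[rid] = size
        self.used += size
        self.register_fn(rid, size)   # register the region for RDMA access
        return rid

    def free(self, rid):
        # Release a region and return its bytes to the pool.
        self.used -= self.regions.pop(rid)

    def utilization(self):
        # The kind of figure the monitoring interface would report.
        return self.used / self.capacity

mm = MemoryManager(capacity=100)
rid = mm.alloc(40)
```

In the real server, the registration hook would pin the region and obtain RDMA keys, and the utilization figure would be served through the monitoring interface.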

DataClient implements the preloading and loading functions described in Section~3.2 for the different data types of the working set. Its main responsibilities are distributed resource management, which aggregates the available memory of the DataServer cluster into a distributed memory pool (\cam) and allocates cache resources for the working set; proactive preloading of the working set; and direct data loading to CPU or GPU.
DataClient provides DLT programs with the API 
described in Section~\ref{subsec:model}.
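One simple way DataClient's resource management could carve a working set's cache out of the distributed memory pool is to let each DataServer contribute capacity in proportion to its free memory. The sketch below is an assumption for illustration only (the function name and policy are not from \sys); it shows the shape of such an allocation plan.

```python
def plan_cache_allocation(working_set_bytes, server_free_bytes):
    """Split a working set's cache demand across DataServer nodes in
    proportion to each node's free memory (hypothetical policy sketch,
    not the actual DataClient algorithm)."""
    total_free = sum(server_free_bytes.values())
    if working_set_bytes > total_free:
        raise MemoryError("working set exceeds the distributed memory pool")
    plan, assigned = {}, 0
    servers = sorted(server_free_bytes)   # deterministic iteration order
    for i, srv in enumerate(servers):
        if i == len(servers) - 1:
            # Give the remainder to the last node so shares sum exactly.
            share = working_set_bytes - assigned
        else:
            share = working_set_bytes * server_free_bytes[srv] // total_free
        plan[srv] = share
        assigned += share
    return plan

# A 100-byte working set over two servers with 100 and 300 free bytes.
plan = plan_cache_allocation(100, {"s1": 100, "s2": 300})
```

A real allocator would also round shares to the cache's region granularity and rebalance when nodes join or leave the pool.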

The Python package is a data-loading \textsf{whl} package built on DataClient for framework adaptation.
Currently, the \textsf{whl} package provides data-loading modules only for the PyTorch framework,
including \textsf{dataserver.utils.Dataset}, \textsf{dataserver.utils.ImageFolder}, \textsf{dataserver.utils.Dataloader}, \textsf{dataserver.utils.DistributedSampler}, \textsf{dscmd}, a benchmark, etc.

PyTorch's data loading model consists of three abstractions: Dataset, Sampler, and DataLoader. The Dataset returns the data sample corresponding to a given index. The Sampler creates a random permutation of indices over the length of the dataset for each training epoch. The DataLoader obtains mini-batch indices from the Sampler, and its worker threads use these indices to fetch data samples from the Dataset, assemble them into a mini-batch, and send it to the GPU for model computation.

To use the data loading service provided by DataServer, users only need to define their datasets with \textsf{dataserver.utils.Dataset} instead of \textsf{torch.utils.Dataset}, or \textsf{dataserver.utils.ImageFolder} instead of \textsf{torchvision.ImageFolder}. To use the Direct-GPU data loading provided by DataServer, and in scenarios where the dataset cannot be fully cached, the Dataset, DataLoader, and Sampler provided by DataServer must be used together to build the data loading pipeline. When the dataset cannot be fully cached, the DataLoader starts a separate preload process and, at the beginning of each epoch, passes the full sample index sequence created by the Sampler to that process for proactive data preloading. Finally, \textsf{dscmd} is a command-line tool that lets users preload datasets offline, view cached dataset information, and perform other management operations.
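The interplay of the three abstractions can be illustrated with a stripped-down, pure-Python model. This is not the actual \textsf{dataserver.utils} or \textsf{torch.utils.data} implementation, only a minimal sketch of the same contract: the Dataset maps an index to a sample, the Sampler yields a fresh index permutation per epoch, and the DataLoader groups indices into mini-batches and fetches the samples.

```python
import random

class Dataset:
    """Maps an index to one sample (here, a toy (index, value) pair)."""
    def __init__(self, values):
        self.values = values
    def __len__(self):
        return len(self.values)
    def __getitem__(self, idx):
        return (idx, self.values[idx])

class Sampler:
    """Yields a new random permutation of indices for each epoch."""
    def __init__(self, length, seed=0):
        self.length = length
        self.rng = random.Random(seed)
    def __iter__(self):
        order = list(range(self.length))
        self.rng.shuffle(order)
        return iter(order)

class DataLoader:
    """Draws mini-batch indices from the Sampler and fetches the
    corresponding samples from the Dataset."""
    def __init__(self, dataset, sampler, batch_size):
        self.dataset = dataset
        self.sampler = sampler
        self.batch_size = batch_size
    def __iter__(self):
        batch = []
        for idx in self.sampler:
            batch.append(self.dataset[idx])
            if len(batch) == self.batch_size:
                yield batch
                batch = []
        if batch:                      # flush a final partial batch
            yield batch

ds = Dataset([10, 11, 12, 13])
loader = DataLoader(ds, Sampler(len(ds)), batch_size=2)
batches = list(loader)                 # one epoch of mini-batches
```

In the \sys version of this contract, the Dataset fetches samples from the distributed cache over RDMA instead of from local files, and the DataLoader additionally hands the epoch's full index sequence to the preload process; the user-facing interface stays the same.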




