\section{Evaluation}\label{sec:eval}
In this section, we evaluate the performance of the \sys data I/O framework. We test the data loading performance of \sys's various modes,
including the simple, batch, multi-rail network, direct-to-GPU, and pipelining modes.
We also measure the speedups of \sys for different DLT jobs.
We compare \sys against Alluxio \cite{Li:EECS-2018-29}, the state-of-the-art open-source data orchestration framework for DLT.

\subsection{Experimental setup}
For our evaluation, we use a cluster of 64 GPUs across 12 compute nodes. Each node is equipped with 8 NVIDIA A100 GPUs, 104 CPU cores (dual-socket Intel(R) Xeon(R) Platinum 8269CY @ 2.50GHz), and 754GB of memory. All nodes are connected through an HDR InfiniBand interconnect with four ConnectX-5 NICs, for 400Gbps of total network bandwidth per node. We mainly evaluate two deep-learning workloads: ResNet50 on the ImageNet dataset and Inception v3 on the OpenImages dataset.

\subsection{Read performance}
We first show the read throughput achieved by \sys with different data sample sizes and numbers of clients. Since deep-learning jobs typically configure four or more data loading processes for each training process on a GPU, we mainly measure the data read performance of an 8-GPU node with the number of concurrent clients ranging from 32 to 64.

\subsubsection{Read throughput with different sample sizes}

\begin{figure}[t]
  \centering
  \includegraphics[width=\linewidth]{../figs/4.2.1.png}
  \caption{Read throughput of one 100Gbps NIC with different data sample sizes}
\end{figure}

Since \sys is a userspace data loading system while Alluxio is a kernel-space memory file system, we also compare read performance against Redis, a distributed in-memory cache. In our experiment, Alluxio and Redis both run over 100Gbps TCP, while \sys runs over 100Gbps RDMA. Figure 7 shows the concurrent random read throughput for data samples ranging from 4KB to 32MB. As can be seen, in the typical dataset sample size range (32KB to 256KB), \sys has a 1.5-4x performance improvement over Redis, and more than 100x over Alluxio, whose average read throughput at 4KB and 32KB sizes is even below 1MB/s. \sys performs better for small messages because its data transfer is based on OS-bypass RDMA. Furthermore, \sys pulls the data placement metadata to the local node at the start of data transfer, then uses hardware-assisted RDMA READ to fetch the data. The read throughput of \sys with 64 clients is better than that with 32 clients, while Redis and Alluxio show the opposite trend, indicating that \sys has low resource consumption and contention, and better scalability. Additionally, when servers and clients are collocated on the same node, \sys transfers data via local memory copy, so its read throughput reaches up to 120Gbps when using one 100Gbps NIC.
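The concurrent random-read measurement above can be sketched as a small harness. This is only a structural sketch: KSpeed's real client API is not shown in this paper, so the stub client below reads from an in-memory dictionary, and only the measurement pattern (many clients issuing random reads, aggregate bytes over wall-clock time) is faithful.

```python
import os
import random
import threading
import time

# Stub standing in for a KSpeed/Redis client; a real benchmark would
# issue remote reads over the network instead of dictionary lookups.
class StubClient:
    def __init__(self, store):
        self.store = store

    def get(self, key):
        return self.store[key]

def run_clients(client, keys, num_clients, reads_per_client):
    """Measure aggregate random-read throughput with concurrent clients."""
    totals = [0] * num_clients

    def worker(i):
        rng = random.Random(i)  # per-client RNG for random sample choice
        for _ in range(reads_per_client):
            totals[i] += len(client.get(rng.choice(keys)))

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_clients)]
    t0 = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - t0
    bytes_read = sum(totals)
    return bytes_read, bytes_read / elapsed  # total bytes, bytes/sec

# 32 concurrent clients randomly reading 64KB samples, mirroring the
# lower end of the paper's client counts and typical sample sizes.
store = {f"sample-{i}": os.urandom(64 * 1024) for i in range(256)}
nbytes, throughput = run_clients(StubClient(store), list(store), 32, 100)
```

In the real setup the per-read latency is dominated by the transport (TCP syscalls versus OS-bypass RDMA READ), which is exactly what this harness would expose if the stub were replaced by a networked client.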


\subsubsection{Performance with different NIC configurations}
\begin{figure}[t]
  \centering
  \includegraphics[width=\linewidth]{../figs/4.2.2.png}
  \caption{Read throughput of different number of 100Gbps NICs with 32 clients}
\end{figure}
In this experiment, we evaluate the effect of the number of NICs on \sys's data read performance. Figure 8 shows read throughput with 2 or 4 NICs, normalized to the 1-NIC case. As can be seen, 2 NICs achieve 1.25x~1.9x the throughput of 1 NIC. Due to the bandwidth limitation of the system I/O bus (PCIe Gen3), 4 NICs improve throughput by only 2.2x. However, when data is transferred directly to the GPU by GDR, there is theoretically no such bandwidth bottleneck on our evaluation platform, so 4-NIC direct-to-GPU transfers achieve nearly 4x when the message size is larger than 1MB.

\subsubsection{Performance of batch mode}
\begin{figure}[t]
  \centering
  \includegraphics[width=\linewidth]{../figs/4.2.3.png}
  \caption{Read throughput of KSpeed batch mode with 32 clients and 4 NICs}
\end{figure}
In some text-based DL scenarios, such as quantitative finance, training datasets can reach tens of terabytes while each sample is only a few KB to tens of KB. Under such small, high-frequency data loading, data I/O becomes a major performance bottleneck; \sys's batch mode is designed for exactly this case. Figure 9 compares \sys batch data loading with batch\_size=200 against single-sample reads. As can be seen from the figure, batch-mode loading improves data loading performance by 2.5x. In other words, the same number of mini-batches is processed much faster with \sys, resulting in better efficiency.
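The benefit of batch mode comes from amortizing per-request overhead: one batched round trip returns many samples. A minimal sketch of this idea follows; the store and its `get`/`batch_get` methods are hypothetical stand-ins, since the paper names the mode and batch size but not the client API.

```python
# Hypothetical store counting round trips, to show how batching with
# batch_size=200 collapses 1000 requests into 5.
class StubStore:
    def __init__(self, data):
        self.data = data
        self.requests = 0              # round trips issued

    def get(self, key):                # one round trip per sample
        self.requests += 1
        return self.data[key]

    def batch_get(self, keys):         # one round trip, many samples
        self.requests += 1
        return [self.data[k] for k in keys]

def load_epoch(store, keys, batch_size=None):
    """Load every sample once, singly or in batches."""
    if batch_size is None:
        return [store.get(k) for k in keys]
    out = []
    for i in range(0, len(keys), batch_size):
        out.extend(store.batch_get(keys[i:i + batch_size]))
    return out

data = {str(i): bytes(16) for i in range(1000)}
single = StubStore(data)
load_epoch(single, list(data))                   # 1000 round trips
batched = StubStore(data)
load_epoch(batched, list(data), batch_size=200)  # 5 round trips
```

With a few-KB sample, each round trip is latency-bound rather than bandwidth-bound, so cutting the request count directly cuts epoch load time.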

\subsubsection{Performance of GPU mode}

\begin{figure}[t]
  \centering
  \includegraphics[width=\linewidth]{../figs/4.2.4.png}
  \caption{Read throughput of KSpeed GPU-direct mode with 32 clients and 4 NICs}
\end{figure}

In this experiment, we demonstrate the efficacy of loading data directly to the GPU in \sys, compared with loading to the CPU. The results are shown in Figure 10. When the data sample size is 1MB or larger, direct-to-GPU loading yields a 1.5x~1.7x performance improvement, because the transfer to the GPU is completed by GDR (GPUDirect RDMA) through the PCIe peer-to-peer channel, avoiding the PCIe bus I/O bottleneck. Surprisingly, when the data sample is small (4KB to 128KB), the performance of direct-to-GPU loading is even lower than that of CPU mode.

\subsection{Improvement in AI job}
We now evaluate the performance gains from \sys on two AI workloads: ResNet50 and Inception v3. For each workload, we run multi-GPU jobs with 32 to 64 GPUs, and compare the epoch time of jobs whose datasets are built with the KSpeed API against those using the Alluxio POSIX-compliant API.

\subsubsection{Improvement of Data Loading}

\begin{figure}[t]
  \centering
  \includegraphics[width=\linewidth] {../figs/4.3.1.1.png}
  \caption{Image loading performance of KSpeed userspace APIs and Alluxio kernel-space APIs}
\end{figure}

In this experiment, we run 8-GPU and 16-GPU ResNet50 jobs in the PyTorch framework that access the 154GB ImageNet dataset, while bypassing the data-preparation and computing stages. Each GPU training process is configured with 10 dataloader workers. Figure 11 shows the average epoch time for loading the ImageNet dataset (154GB, with millions of image samples) three times without real training. As can be seen, \sys outperforms Alluxio by up to 4x in both cases. We also observed that at the beginning of each epoch, Alluxio spends a long time traversing data samples to build dataset objects, which further lengthens its actual running time.
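The loading-only measurement above fetches and discards data so that epoch time reflects I/O alone. The sketch below reproduces only that measurement structure: a thread pool stands in for PyTorch's 10 dataloader worker processes, and `load_sample` is a stand-in for reading one image.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_sample(idx):
    # Stand-in for fetching one image sample; a real run would read
    # from the data service (KSpeed or Alluxio) here.
    return bytes(1024)

def epoch_load_time(num_samples, num_workers=10, batch_size=32):
    """Iterate one epoch with a pool of loader workers, discarding
    the data: no preprocessing, no GPU compute, I/O time only."""
    t0 = time.perf_counter()
    loaded = 0
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        for start in range(0, num_samples, batch_size):
            idxs = range(start, min(start + batch_size, num_samples))
            batch = list(pool.map(load_sample, idxs))
            loaded += len(batch)   # batch is discarded, not trained on
    return loaded, time.perf_counter() - t0

count, seconds = epoch_load_time(1000)
```

Repeating this loop three times and averaging gives the epoch-time numbers plotted in Figure 11.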

\subsubsection{Improvement of job efficiency}
\begin{figure}[t]
  \centering
  \includegraphics[width=\linewidth]{../figs/4.3.2.1.png}
  \caption{Speedup of AI job epoch time with \sys across two workloads}
\end{figure}
Furthermore, we compare the average epoch execution time of \sys and Alluxio in real ResNet50 and Inception v3 training on 64 GPUs. The end-to-end speedup achieved by \sys for the two workloads is shown in Figure 12: KSpeed reaches up to 4.5x for ResNet50 and 5.3x for Inception v3, higher than the pure data-loading speedup. In a real training job, however, the data loader maintains several child processes for data prefetching and a shared queue of fetched mini-batch inputs; the computation engine pulls data from this in-memory queue, so data-loading I/O can be overlapped with computation. Both Alluxio and \sys benefit from this pipelining pattern. We therefore analyze the time breakdown of pipeline stages within each iteration, to understand exactly how data I/O performance impacts training efficiency.
\begin{figure}[t]
  \centering
  \includegraphics[width=\linewidth]{../figs/4.3.2.2.png}
  \caption{Data time for each iteration of the 64-GPU ResNet50 job based on Alluxio}
\end{figure}

\begin{figure}[t]
  \centering
  \includegraphics[width=\linewidth]{../figs/4.3.2.3.png}
  \caption{Data time for each iteration of the 64-GPU ResNet50 job based on \sys}
\end{figure}
Each iteration consists of data time and compute time: data time is the latency spent waiting for the next batch of data, and compute time is the time the GPU spends computing. Figure 13 shows the data time of each iteration of each GPU process in the 64-GPU ResNet50 job based on Alluxio. The X-axis plots the training iteration, while the Y-axis plots the data time spent waiting for the next batch. Ideally, data time is close to 0 and the GPU stays busy with computation. However, as Figure 13 shows, with Alluxio data I/O the GPU periodically incurs a data time of more than ten seconds. Figure 14 shows the data time based on \sys: it mostly stays close to zero, and the maximum does not exceed 0.5 seconds except for the first iteration. This is why \sys finishes the ResNet50 job more efficiently than Alluxio.
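The per-iteration breakdown can be collected by timing the two halves of each training step, as in this sketch; the batch source and the compute callback are generic stand-ins for the framework's dataloader iterator and the forward/backward pass.

```python
import time

def instrumented_epoch(batches, compute):
    """Record, for every iteration, how long the loop blocked waiting
    for the next batch (data time) versus how long it computed."""
    data_times, compute_times = [], []
    it = iter(batches)
    while True:
        t0 = time.perf_counter()
        try:
            batch = next(it)   # blocks if the prefetch queue is empty
        except StopIteration:
            break
        t1 = time.perf_counter()
        compute(batch)          # stand-in for the training step
        t2 = time.perf_counter()
        data_times.append(t1 - t0)
        compute_times.append(t2 - t1)
    return data_times, compute_times

# 100 iterations with a trivial compute step.
data_times, compute_times = instrumented_epoch(
    range(100), lambda b: sum(range(1000)))
```

Plotting `data_times` per iteration for every GPU process yields figures in the style of Figures 13 and 14: spikes mark iterations where the prefetch queue ran dry and the GPU stalled on I/O.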

\subsection{Performance of proactive preloading under cache constraints}
\subsubsection{Improvement of job efficiency}
\begin{figure}[t]
  \centering
  \includegraphics[width=\linewidth]{../figs/4.4.1.png}
  \caption{Epoch execution time under different cache rates for the dataset}
\end{figure}
In this experiment, we demonstrate the efficacy of proactive preloading in \sys under a cache-constrained scenario. The dataset resides in an SSD-based GlusterFS. We run 32-GPU and 64-GPU ResNet50 jobs under different cache rates ${r}$: 0, 0.2, 0.5, and 1. Figure 15 shows the normalized average training epoch time. Even when the cache size is far smaller than the combined size of the dataset, \sys still improves training efficiency by 1.8x~2.3x in the 32-GPU case and 1.4x~1.7x in the 64-GPU case, compared with 2.4x and 1.85x respectively when the dataset is fully cached. In the partially cached configurations (r=0.2 and r=0.5), \sys proactively preloads data based on the sequence of indexes used in each training epoch, so the preloading latency from the slow back-end storage is overlapped with the computing stage. Meanwhile, precise preloading with multiple threads into a ring of fixed memory regions registered in advance further improves read performance. Overall, even with a tiny cache rate, \sys achieves high data delivery efficiency through its proactive and efficient preloading techniques.
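The idea behind proactive preloading can be sketched as follows, assuming (as is common in DLT frameworks) that the per-epoch shuffle order is derived from a known seed and is therefore available before the epoch starts. All names here are hypothetical; KSpeed's internal preloading machinery is not shown in this paper, and a bounded queue stands in for the ring of pre-registered memory regions.

```python
import queue
import random
import threading

def epoch_indices(num_samples, epoch, seed=0):
    # The shuffled access order is deterministic given (seed, epoch),
    # so it can be computed ahead of time.
    order = list(range(num_samples))
    random.Random(seed + epoch).shuffle(order)
    return order

def train_epoch(order, fetch_slow, cache_slots=8):
    """Consume samples in `order` while a preloader thread fetches
    them from slow storage into a small bounded cache ahead of use."""
    cache = queue.Queue(maxsize=cache_slots)  # small cache: r << 1

    def preloader():
        for idx in order:                     # known future accesses
            cache.put((idx, fetch_slow(idx)))  # overlaps with compute

    threading.Thread(target=preloader, daemon=True).start()
    consumed = []
    for _ in order:
        idx, sample = cache.get()  # usually ready: no I/O stall here
        consumed.append(idx)       # ... training step on sample ...
    return consumed

order = epoch_indices(64, epoch=0)
consumed = train_epoch(order, fetch_slow=lambda i: bytes(8))
```

Because the preloader always knows the next indices the trainer will request, a cache of only a few slots suffices to hide back-end latency, which matches the observation that even r=0.2 recovers most of the fully-cached speedup.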

