\begin{cabstract}

随着全球大数据产业的迅速发展，
以深度学习为代表的智能计算已经成为一种新兴的大数据分析处理方式。
近年来，数据存储和智能计算越来越依赖于云计算技术提供基础支撑。
% 面向云环境的高效能（高性能、低成本）需求，
针对公用云环境中的可信性挑战与动态性挑战，
论文深入研究了高效可信、弹性伸缩的数据存储与智能计算技术，
主要包括：基于自适应元信息比特树的高可用协同存储机制，基于可验证索引哈希树的高安全日志结构存储技术，GPU-CPU协同的深度学习计算弹性调度框架，以及面向多核CPU的深度学习计算访存加速方法。

针对云环境中数据存储的可靠性问题，论文提出一种DM-cache的扩展机制MapperX。
目前，基于Linux内核DM-cache的SSD-HDD混合架构广泛应用于云环境中的数据存储，HDD作为主要存储设备持久化所有数据，SSD作为HDD的缓存来提高整体的数据I/O性能。
DM-cache的异步元数据维护机制使得SSD缓存块的脏位（dirty-bit）信息不能及时更新，导致故障恢复时间过长，从而降低了DM-cache系统的可用性。
为了解决该问题，MapperX设计了自适应元信息比特树（ABT，Adaptive meta-data Bit Tree），以分层树形结构的方式同步维护脏位的元数据。
MapperX通过在ABT的不同层级中自适应地添加或删除叶子来描述脏位的分布情况。
%
论文基于持久化延迟的服务水平协议（SLA，Service-Level Agreement）来控制ABT叶子的添加和删除，实现了自适应的元数据更新粒度调整。
实验结果表明，基于MapperX的协同存储机制有效降低了SSD-HDD混合存储的故障恢复时间，且仅引入可忽略的元数据持久化开销。


针对云环境中数据存储的安全性问题，
% 加密安全存储I/O效率低下的问题，
论文提出一种高效的数据加密存储方案SwornDisk。
SwornDisk基于LSM树（log-structured merge tree）和MHT（Merkle hash tree）结构实现了数据I/O的机密性、完整性、新鲜性和匿名性保护。
% 采用异地更新加密机制，SwornDisk避免了每次验证任何一个数据单元都需要对整个磁盘进行验证的困难，从而可以在不影响I/O性能的前提下有效保证数据的安全性。
%
对于写操作，SwornDisk以追加写日志（log）的方式把数据持久化到物理磁盘上，
相同逻辑地址数据的不同历史版本被记录在不同的物理位置上（即异地更新），因此攻击者无法通过将某一物理位置的数据回滚到某个历史版本来进行攻击。
SwornDisk把逻辑地址（LBA）到物理地址（PBA）的映射以及数据的key与MAC（message authentication code）都保存在LSM树的内存结构中，
% 得益于LSM树的特性，其写开销为O(1)。对于读操作，每次要读某一个逻辑地址的数据，需要先在LSM树中根据逻辑地址查找其物理地址的索引，并且该数据的key与MAC也记录在这条索引上，用来解密和对数据进行验证。在LSM架构中查找物理位置的过程可以看做在磁盘上的SSTable中进行二分查找。
% 数据的key与MAC也记录在LSM树中，
而LSM树的持久化存储结构（SSTable）则使用MHT进行加密来保证其安全性。
% 如果每次在SSTable中进行二分查找时都对“中位点”进行一次额外O(logN)复杂度的解密和验证，那么其复杂度将会是O(logN*logN)的。
% 针对该问题，论文对MHT进行优化，在传统的MHT结构中增加了索引标识，使其不仅具有MHT的加密功能，还同时还具有B树快速查找的功能，可以在O(logN)的复杂度内同时完成验证和二分查找。
% 并且，SwornDisk使用三级缓存策略来降低读开销，其缓存分别包括加密数据Cache、Memtable Cache和SSTable Cache。
实验结果表明，SwornDisk可以有效提高加密存储方法的I/O性能。

针对GPU/CPU协同计算的动态弹性调度问题，论文提出一种高效的GPU-CPU协同的深度学习计算弹性调度框架Elastic Scheduler（ES）。ES提出的本地梯度积累算法，有效解决了CPU/GPU计算速度不匹配问题和动态计算过程中的长时间动量补偿问题。ES支持协同计算（可使用不同类型的GPU和CPU设备进行深度学习计算）以及动态计算（计算过程中GPU和CPU数量可随时间动态变化）。为了解决GPU和CPU之间的速度不匹配问题，ES使用本地梯度积累算法在GPU上累积本地梯度以模拟出多个虚拟GPU：虚拟GPU的吞吐量之和等同于物理GPU的吞吐量，而每个虚拟GPU的速度降为物理GPU的$1/n$（$n$为虚拟GPU个数），使虚拟GPU速度与CPU速度相匹配，再将虚拟GPU和物理CPU进行同步以实现并行计算，从而解决了协同计算问题。在动态计算场景中，大幅调整设备数量会引起长时间的动量补偿过程并降低模型精度；论文使用本地梯度积累算法，在设备数量大幅增加时仍然可以保持整体批次的稳定性，
% 并缩短了动量补偿的时间，
从而保护了模型的收敛精度。
实验结果表明，
ES既有效提高了云环境中GPU-CPU协同深度学习计算任务的效率，又能保证
弹性调度中深度学习训练模型的精度。

针对多核CPU深度学习计算的访存竞争问题，
% 本文首次通过大量的实验和分析证实了CPU在深度学习应用中效率低下的原因主要来源于多核并行时的访存带宽竞争问题，并基于该分析
论文提出一种面向多核CPU深度学习的访存加速方法ParaX。
ParaX通过“单核心单实例”（One-Instance-per-Core）方法将深度学习实例分配到每个CPU核心上执行数据并行，使每个核心可以单独处理其数据批次，从而避免了DNN模型逐层执行时的核心同步屏障。论文将DNN中的网络层分为两类：执行复杂算术运算的计算密集型层（如卷积和矩阵乘法），以及访存密集型层（如BN层和激活层）。“单核心单实例”方法实现了访存密集型与计算密集型网络层的混合执行和不同层之间的带宽共享，大幅提高了CPU的内存带宽利用率。ParaX采用同步SGD策略，在模型训练过程中每一轮迭代的最后同步更新模型参数。针对ParaX特有的CPU多核心参数同步通信需求，论文设计了支持NUMA（non-uniform memory access）架构的梯度服务器通信机制，利用共享内存有效减少了CPU的参数同步通信开销。
实验结果表明，ParaX可以有效提高多核CPU深度学习模型训练和推理计算的性能。


\end{cabstract}
\ckeywords{云计算，数据存储，智能计算，DM-cache，高可用异构协同存储，高安全日志结构存储，GPU-CPU协同弹性调度，多核CPU访存带宽瓶颈}

\begin{eabstract}
With the rapid development of the global big data industry, intelligent computing, represented by deep learning, has become an emerging approach to big data analysis and processing. In recent years, data storage and intelligent computing have increasingly relied on cloud computing for basic support. To address the trustworthiness and dynamicity challenges of public cloud environments, this thesis studies efficient, trustworthy, and elastically scalable data storage and intelligent computing technologies, including: a high-availability collaborative storage mechanism based on an adaptive meta-data bit tree, a high-security log-structured storage technique based on a verifiable indexed hash tree, an elastic scheduling framework for GPU-CPU collaborative deep learning, and a memory-access acceleration method for deep learning on multi-core CPUs.

To address the reliability problem of data storage in the cloud, we propose MapperX, an extension of DM-cache. The SSD-HDD hybrid architecture based on the Linux kernel's DM-cache is widely used for cloud data storage: the HDD serves as the main device that persists all data, while the SSD acts as the HDD's cache to improve overall I/O performance. DM-cache's asynchronous metadata maintenance prevents the dirty-bit information of SSD cache blocks from being updated in a timely manner, so recovery after a failure takes too long, resulting in low availability of the DM-cache system. To solve this problem, MapperX introduces an adaptive meta-data bit tree (ABT) that synchronously maintains dirty-bit metadata in a hierarchical tree structure. MapperX describes the distribution of dirty bits by adaptively adding or removing leaves at different levels of the ABT, and controls leaf addition and removal according to a service-level agreement (SLA) on persistence latency, thereby adaptively adjusting the metadata update granularity. Experimental results show that the MapperX-based collaborative storage mechanism far outperforms the existing DM-cache mechanism in failure recovery time while introducing only negligible metadata persistence overhead.
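The granularity trade-off that the ABT navigates can be sketched with a simplified model (our own notation, assuming write locality within a leaf's span; this is an illustration, not the thesis's formulation). Let $s$ be the number of cache blocks covered by one ABT leaf and $W$ the write rate. A coarser leaf absorbs subsequent writes within its span once marked dirty, so the synchronous metadata persistence cost shrinks roughly as
\[
C_{\mathrm{persist}} \propto \frac{W}{s},
\]
while recovery must conservatively treat every block under a dirty leaf as dirty, so with $d$ dirty leaves the recovery work grows as
\[
T_{\mathrm{recover}} \propto s \cdot d .
\]
Adaptively splitting leaves in write-heavy regions and merging them elsewhere lets MapperX keep persistence latency within the SLA while minimizing recovery work.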

To address the security problem of data storage in the cloud, we propose SwornDisk, an efficient encrypted data storage scheme. Based on the LSM tree (log-structured merge tree) and the MHT (Merkle hash tree), SwornDisk provides confidentiality, integrity, freshness, and anonymity protection for data I/O. For write operations, SwornDisk persists data to the physical disk in an append-only log; different historical versions of data at the same logical address are recorded at different physical locations (i.e., out-of-place updates), so an attacker cannot mount an attack by rolling the data at some physical location back to a historical version. SwornDisk keeps the mapping from logical block address (LBA) to physical block address (PBA), together with each data unit's key and MAC (message authentication code), in the in-memory structure of the LSM tree, while the LSM tree's persistent storage structure (the SSTables) is encrypted with an MHT to guarantee its security. Experimental results show that SwornDisk significantly improves data security with almost no impact on I/O performance.

To address the dynamic elastic scheduling problem of GPU/CPU collaborative computing, we propose Elastic Scheduler (ES),
an efficient elastic scheduling framework for GPU-CPU collaborative deep learning. ES proposes a new local gradient accumulation algorithm that effectively solves the CPU/GPU speed mismatch problem and the long momentum-compensation process that arises in dynamic computation. ES supports collaborative computing (different types of GPU and CPU devices can be used for deep learning) and dynamic computing (the numbers of GPUs and CPUs may change over time during training). To resolve the speed mismatch between GPUs and CPUs, ES uses the local gradient accumulation algorithm to accumulate local gradients on each GPU and thereby simulate multiple virtual GPUs: the aggregate throughput of the virtual GPUs equals that of the physical GPU, while each virtual GPU runs at $1/n$ of the physical GPU's speed ($n$ is the number of virtual GPUs), matching the CPU's speed. The virtual GPUs are then synchronized with the physical CPUs for parallel computing, which solves the collaborative computing problem. In dynamic computing scenarios, drastically adjusting the number of devices triggers a long momentum-compensation process and reduces model accuracy; the local gradient accumulation algorithm keeps the overall batch stable even when the number of devices increases sharply, thereby preserving the convergence accuracy of the model. Experimental results show that ES effectively supports elastic scheduling of deep learning tasks across GPUs and CPUs in the cloud.
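The speed-matching idea can be stated in one line (a simplified formulation in our own notation, not necessarily the thesis's exact algorithm): each physical GPU runs $n$ accumulation micro-steps over micro-batches $b_1, \dots, b_n$ before synchronizing, applying
\[
g = \frac{1}{n} \sum_{i=1}^{n} \nabla \ell(\theta;\, b_i),
\]
so each of the $n$ virtual GPUs synchronizes at $1/n$ of the physical GPU's rate, matching the slower CPUs, while the effective per-synchronization batch $\sum_{i} |b_i|$ is unchanged, which is why the convergence accuracy can be preserved.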
 

To address the memory-access contention problem of deep learning on multi-core CPUs, we propose ParaX, a memory-access acceleration method for multi-core CPU deep learning. Instead of the traditional one-instance-per-CPU approach, ParaX adopts "One-Instance-per-Core": it assigns a deep learning instance to each CPU core for data parallelism, allowing each core to process its own data batch independently and thus avoiding the core synchronization barrier at each layer of DNN execution. We divide DNN layers into two categories: compute-intensive layers that perform complex arithmetic operations (such as convolution and matrix multiplication), and memory-intensive layers (such as batch normalization and activation layers). The One-Instance-per-Core method interleaves the execution of memory-intensive and compute-intensive layers and shares bandwidth across different layers, greatly improving the CPU's memory bandwidth utilization. ParaX adopts a synchronous SGD strategy, updating the model parameters at the end of each iteration of training. For ParaX's particular multi-core parameter synchronization requirement, we design a gradient-server communication mechanism that supports the NUMA (non-uniform memory access) architecture and uses shared memory to effectively reduce the CPU's parameter synchronization overhead. Experimental results show that ParaX significantly improves the performance of deep learning training and inference on multi-core CPUs.
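A back-of-the-envelope bandwidth model (our own notation, assuming each instance alternates between just two phases) illustrates why interleaving helps: if a memory-intensive phase of duration $t_m$ demands bandwidth $\beta_m$ per core and a compute-intensive phase of duration $t_c$ demands $\beta_c \ll \beta_m$, then $p$ cores running in lockstep momentarily demand $p\,\beta_m$ and saturate the memory bus, whereas $p$ desynchronized per-core instances demand on average
\[
p \cdot \frac{t_m \beta_m + t_c \beta_c}{t_m + t_c} \;<\; p\,\beta_m,
\]
smoothing the aggregate demand below the lockstep peak and raising sustained bandwidth utilization.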



\end{eabstract}
\ekeywords{
Cloud computing; data storage; intelligent computing; DM-cache; 
highly-available heterogeneous collaborative storage; highly-secure log-structured storage; GPU-CPU collaborative elastic scheduling; multi-core CPU memory access bandwidth bottleneck
}

