%---------------------------------------------------------------------------%
%->> Frontmatter
%---------------------------------------------------------------------------%
%-
%-> 生成封面
%-

\maketitle% 生成中文封面
\MAKETITLE% 生成英文封面
%-
%-> 作者声明
%-
\makedeclaration% 生成声明页
%-
%-> 中文摘要
%-
\intobmk\chapter*{摘\quad 要}% 显示在书签但不显示在目录
\setcounter{page}{1}% 开始页码
\pagenumbering{Roman}% 页码符号


随着云计算、大数据等前沿技术的迅速演进，现代微处理器正面临日益增长的数据处理压力。高速CPU与慢速内存之间的速度差异，即“内存墙”问题，成为阻碍处理器在高内存负载下取得更高性能的主要挑战。高性能的末级缓存，以及新兴的混合内存，是缓解“内存墙”问题的重要手段。

然而，高性能末级缓存的内部设计过于复杂，导致可用的寄存器传输级(Register-Transfer Level, RTL)末级缓存实验平台数量较少，且目前开源的末级缓存存在诸多不足。同时，新型混合内存面临着巨大的元数据访问开销问题，阻碍了其性能的发挥。

为了解决上述两点问题，本文围绕微处理器的片上末级缓存开展研究工作：一方面设计高性能的末级缓存，另一方面复用末级缓存空间以加速混合内存的访问。本文的主要贡献及创新点包括：

第一，设计并实现了一个高性能的末级缓存“320 LLC”。为了优化处理器在高内存负载下的性能，其采用了“320”总线以提升总线带宽利用率，同时完全消除了对额外写缓冲区的需求，提升了片上资源的利用率。本文为“320 LLC”设计了高效的请求缓冲-调度队列，有效地改进了缓存在缺失状态处理寄存器(Miss-Status Handling Registers, MSHR)冲突时对请求的暂存能力，在libquantum测试片段中取得了81\%的性能提升。本文为“320 LLC”设计了同组(Set)请求并行处理机制，在请求发生剧烈组冲突的测试片段中，将请求命中处理效率提升了高达10倍。本文还为“320 LLC”实现了基于ExTag的非包含式缓存设计，提升了缓存的有效使用空间。全系统测试结果表明，搭载“320 LLC”的“320 CPU”表现出了超过商用处理器Intel Core i3-10100的性能水平。“320 LLC”采用Spinal HDL敏捷开发语言编写，为体系结构领域的研究者提供了一个高性能的末级缓存实验平台。

第二，设计了一个新型的混合末级缓存FuseLLC，以进一步提升处理器的访存效率。本文复用片上末级缓存的空间作为动态随机存取存储器(Dynamic Random Access Memory, DRAM)缓存的元数据缓存，从而提升DRAM缓存的访问效率。本文设计了一种高效的异构数据管理结构MPtrArray，使FuseLLC的设计与末级缓存和DRAM缓存的物理参数解耦。本文还设计了一种基于多点采样的全局动态缓存划分方式，使全局能够保持高性能的划分比例。针对局部冲突剧烈的情况，本文设计了局部争抢避让机制与对应的替换算法，使DRAM缓存元数据能够避让末级缓存的热点区域。测试结果表明，FuseLLC仅以额外125KB的存储开销，达到了额外引入3MB SRAM的基线模型的性能水平，并在大部分测试子项中获得了90\%以上的DRAM缓存元数据命中率。

\keywords{末级缓存，混合缓存，缓存划分，高性能}% 中文关键词
%-
%-> 英文摘要
%-
\intobmk\chapter*{Abstract}% 显示在书签但不显示在目录

With the rapid evolution of cutting-edge technologies such as cloud computing and big data, modern microprocessors are increasingly challenged by the escalating pressure of data processing. The speed disparity between high-speed CPUs and slower memory, known as the ``memory wall'' problem, is a significant barrier to achieving higher performance under memory-intensive workloads. High-performance last-level caches and emerging hybrid memory technologies are crucial means of mitigating the memory wall problem.

However, due to the complexity of high-performance last-level cache designs, few register-transfer level (RTL) last-level cache experimental platforms are available, and existing open-source last-level caches have several deficiencies. Additionally, emerging hybrid memory technologies face significant metadata access overheads that hinder their performance.

This paper focuses on the on-chip last-level cache of microprocessors, designing a high-performance last-level cache and repurposing last-level cache space to accelerate access to hybrid memory. The primary contributions and innovations of this study include:

First, this paper designs and implements a high-performance last-level cache, ``320 LLC''. To optimize processor performance under high memory loads, it employs the ``320'' bus to improve bus bandwidth utilization while entirely eliminating the need for an additional write buffer, thereby improving on-chip resource efficiency. This paper develops an efficient request buffering and scheduling queue for ``320 LLC'', effectively improving the cache's ability to hold requests during miss-status handling register (MSHR) conflicts and achieving an 81\% performance improvement on libquantum test segments. It also introduces a same-set parallel request processing mechanism, achieving up to a tenfold increase in request hit processing efficiency on test segments with severe set conflicts. Moreover, ``320 LLC'' implements an ExTag-based non-inclusive cache design, increasing the effectively usable cache space. Full-system testing shows that a ``320 CPU'' equipped with ``320 LLC'' surpasses the performance of the commercial Intel Core i3-10100 processor. ``320 LLC'' is written in the Spinal HDL agile development language, providing researchers in computer architecture with a high-performance last-level cache experimental platform.

Second, this paper designs a novel hybrid last-level cache, FuseLLC, to further enhance processor memory access efficiency. It repurposes on-chip last-level cache space as a metadata cache for the Dynamic Random Access Memory (DRAM) cache, thereby improving the efficiency of DRAM cache accesses. It introduces an efficient heterogeneous data management structure, MPtrArray, which decouples the design of FuseLLC from the physical parameters of the last-level cache and the DRAM cache. The paper also designs a global dynamic cache partitioning method based on multi-point sampling, maintaining a high-performance partition ratio globally. For regions with severe local conflicts, it proposes a local contention avoidance mechanism and a corresponding replacement algorithm, allowing DRAM cache metadata to steer clear of hotspots in the last-level cache. In tests, FuseLLC, with an additional overhead of only 125KB, matches the performance of a baseline model augmented with 3MB of extra SRAM, achieving over 90\% DRAM cache metadata hit rates in most sub-tests.





\KEYWORDS{Last-Level Cache, Hybrid Cache, Cache Partitioning, High Performance}% 英文关键词

\pagestyle{enfrontmatterstyle}%
\cleardoublepage\pagestyle{frontmatterstyle}%

%---------------------------------------------------------------------------%
