<!doctype html>
<html lang="en">
<head>
	<meta charset="UTF-8">
	<title></title>
	<link rel="stylesheet" type="text/css" href="http://paranoid.net.cn/semantic.css" >
</head>
<body>
<section-title-en>2.11 Cache Memories</section-title-en>
<section-title-ch>2.11 缓存存储器</section-title-ch>
<p-en>
	At the time of this writing, CPU cores can process data ≈ 200× faster than DRAM can supply it. This gap is bridged by a hierarchy of cache memories, which are orders of magnitude smaller and an order of magnitude faster than DRAM. While caching is transparent to application software, the system software is responsible for managing and coordinating the caches that store address translation (§2.5) results.
</p-en>
<p-ch>
	在撰写本文时，CPU核心处理数据的速度比DRAM提供数据的速度快约200倍。这种差距是通过缓存存储器的层次结构来弥补的，缓存存储器比DRAM小几个数量级，但快一个数量级。虽然缓存对应用软件是透明的，但系统软件负责管理和协调 存储地址转换（§2.5）结果 的缓存。
</p-ch>
<p-en>
	Caches impact the security of a software system in two ways. First, the Intel architecture relies on system software to manage address translation caches, which becomes an issue in a threat model where the system software is untrusted. Second, caches in the Intel architecture are shared by all the software running on the computer. This opens up the way for cache timing attacks, an entire class of software attacks that rely on observing the time differences between accessing a cached memory location and an uncached memory location.
</p-en>
<p-ch>
	缓存对软件系统的安全性有两个方面的影响。首先，英特尔架构依靠系统软件来管理地址转换缓存，这在系统软件不受信任的威胁模型中成为一个问题。其次，英特尔架构中的缓存是由计算机上运行的所有软件共享的。这就为缓存定时攻击开辟了道路，这是一整类软件攻击，依靠观察访问 已缓存内存位置 和 未缓存内存位置 之间的时间差。
</p-ch>
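<p-en>
	To make the timing channel concrete, the following sketch (not from the original text) times the same load twice: once while the line is cached, and once after evicting it with CLFLUSH. It assumes an x86-64 compiler providing the `__rdtscp` and `_mm_clflush` intrinsics; the absolute cycle counts are machine-dependent.
</p-en>
<p-ch>
	为了使该定时信道更加具体，下面的示例（非原文内容）对同一次内存读取进行两次计时：一次是在缓存行已被缓存时，另一次是在用CLFLUSH将其驱逐之后。它假设使用提供__rdtscp和_mm_clflush内建函数的x86-64编译器；具体的周期数取决于机器。
</p-ch>

```c
#include <stdint.h>
#include <x86intrin.h>

/* Sketch of the core measurement in a cache timing attack: compare the
 * latency of a load that hits the cache against one that misses because
 * the line was just flushed. Cycle counts vary by machine. */
static volatile int probe;

static uint64_t time_load(volatile int *p) {
    unsigned aux;
    _mm_mfence();                      /* keep earlier work out of the window */
    uint64_t start = __rdtscp(&aux);   /* read the timestamp counter */
    (void)*p;                          /* the memory access being timed */
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

uint64_t cached_cycles(void) {
    (void)probe;                       /* warm the line into the cache */
    return time_load(&probe);
}

uint64_t uncached_cycles(void) {
    _mm_clflush((const void *)&probe); /* evict from the whole hierarchy */
    return time_load(&probe);
}
```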
<p-en>
	This section summarizes the caching concepts and implementation details needed to reason about both classes of security problems mentioned above. [170], [150] and [76] provide a good background on low-level cache implementation concepts. §3.8 describes cache timing attacks.
</p-en>
<p-ch>
	本节总结了推理上述两类安全问题所需的缓存概念和实现细节。[170]、[150]和[76]提供了一个很好的低级缓存实现概念的背景。 §3.8描述了缓存定时攻击。
</p-ch>
<section-title-en>2.11.1 Caching Principles</section-title-en>
<section-title-ch>2.11.1 缓存原则</section-title-ch>
<p-en>
	At a high level, caches exploit the high locality in the memory access patterns of most applications to hide the main memory's (relatively) high latency. By caching (storing a copy of) the most recently accessed code and data, these relatively small memories can be used to satisfy 90%-99% of an application's memory accesses.
</p-en>
<p-ch>
	在一个较高的层次上，缓存利用大多数应用程序的内存访问模式的高局部性来隐藏主内存的（相对）高延迟。通过缓存（存储最近访问的代码和数据的副本），这些相对较小的内存可以用来满足应用程序90%-99%的内存访问。
</p-ch>
<p-en>
	In an Intel processor, the first-level (L1) cache consists of a separate data cache (D-cache) and an instruction cache (I-cache). The instruction fetch and decode stage is directly connected to the L1 I-cache, and uses it to read the streams of instructions for the core's logical processors. Micro-ops that read from or write to memory are executed by the memory unit (MEM in Figure 23), which is connected to the L1 D-cache and forwards memory accesses to it.
</p-en>
<p-ch>
	在英特尔处理器中，一级（L1）缓存由独立的数据缓存（D缓存）和指令缓存（I缓存）组成。指令获取和解码阶段直接连接到L1 I-cache，并利用它来读取核心逻辑处理器的指令流。从内存中读取或写入内存的微操作由内存单元（图23中的MEM）执行，它与L1 D-cache相连，并将内存访问转发给它。
</p-ch>
<img src="fig.25.jpg" width="" height="" border="0" alt="">
<p-en>
	Figure 25: The steps taken by a cache memory to resolve an access to a memory address A. A normal memory access (to cacheable DRAM) always triggers a cache lookup. If the access misses the cache, a fill is required, and a write-back might be required.
</p-en>
<p-ch>
	图25：缓存存储器为解析对内存地址A的访问而采取的步骤。正常的内存访问（对可缓存DRAM）总是会触发缓存查找。如果访问未命中缓存，则需要进行填充，并且可能需要回写。
</p-ch>
<p-en>
	Figure 25 illustrates the steps taken by a cache when it receives a memory access. First, a cache lookup uses the memory address to determine if the corresponding data exists in the cache. A cache hit occurs when the address is found, and the cache can resolve the memory access quickly. Conversely, if the address is not found, a cache miss occurs, and a cache fill is required to resolve the memory access. When doing a fill, the cache forwards the memory access to the next level of the memory hierarchy and caches the response. Under most circumstances, a cache fill also triggers a cache eviction, in which some data is removed from the cache to make room for the data coming from the fill. If the data that is evicted has been modified since it was loaded in the cache, it must be written back to the next level of the memory hierarchy.
</p-en>
<p-ch>
	图25说明了缓存接收到内存访问时采取的步骤。首先，缓存查找使用内存地址来确定缓存中是否存在相应的数据。当地址被找到时，就发生缓存命中，缓存可以快速解析该内存访问。反之，如果没有找到地址，则发生缓存未命中，需要进行缓存填充来解决该内存访问。在进行填充时，缓存会将内存访问转发到内存层次结构的下一级，并缓存响应。在大多数情况下，缓存填充也会触发缓存驱逐，即从缓存中删除一些数据，为填充带来的数据腾出空间。如果被驱逐的数据在加载进缓存后被修改过，则必须将其写回内存层次结构的下一级。
</p-ch>
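<p-en>
	The lookup / fill / eviction / write-back flow of Figure 25 can be sketched as a toy direct-mapped (W = 1) cache. This is an illustration only; the sizes are made up and do not correspond to any Intel cache.
</p-en>
<p-ch>
	图25的查找/填充/驱逐/回写流程可以用一个简化的直接映射（W = 1）缓存来示意。这只是一个示例；其中的大小是假设的，并不对应任何英特尔缓存。
</p-ch>

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Toy direct-mapped (W = 1) cache modeling the lookup / fill / eviction /
 * write-back flow of Figure 25. Sizes are illustrative, not Intel's. */
#define LINE_BYTES 64
#define NUM_SETS   8

typedef struct {
    bool     valid, dirty;
    uint64_t tag;
    uint8_t  data[LINE_BYTES];
} cache_line_t;

static cache_line_t cache[NUM_SETS];
static uint8_t dram[NUM_SETS * LINE_BYTES * 4];  /* backing "DRAM" */
static int fills, writebacks;

/* Resolve a one-byte read at address addr, as in Figure 25. */
uint8_t cache_read(uint64_t addr) {
    uint64_t offset = addr % LINE_BYTES;
    uint64_t set    = (addr / LINE_BYTES) % NUM_SETS;
    uint64_t tag    = addr / (LINE_BYTES * NUM_SETS);
    cache_line_t *line = &cache[set];

    if (!line->valid || line->tag != tag) {      /* cache miss */
        if (line->valid && line->dirty) {        /* eviction: write back */
            uint64_t victim = (line->tag * NUM_SETS + set) * LINE_BYTES;
            memcpy(&dram[victim], line->data, LINE_BYTES);
            writebacks++;
        }
        memcpy(line->data, &dram[addr - offset], LINE_BYTES);  /* fill */
        line->valid = true;
        line->dirty = false;
        line->tag   = tag;
        fills++;
    }
    return line->data[offset];                   /* hit path */
}

/* A write dirties the line, so a later eviction must write it back. */
void cache_write(uint64_t addr, uint8_t value) {
    (void)cache_read(addr);                      /* fill the line if needed */
    cache_line_t *line = &cache[(addr / LINE_BYTES) % NUM_SETS];
    line->data[addr % LINE_BYTES] = value;
    line->dirty = true;
}
```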
<p-en>
	Table 8 shows the key characteristics of the memory hierarchy implemented by modern Intel CPUs. Each core has its own L1 and L2 cache (see Figure 23), while the L3 cache is in the CPU's uncore (see Figure 22), and is shared by all the cores in the package.
</p-en>
<p-ch>
	表8显示了现代英特尔CPU所实现的内存层次结构的主要特征。每个核心都有自己的L1和L2缓存(见图23)，而L3缓存在CPU的非核心中(见图22)，由封装中的所有核心共享。
</p-ch>
<img src="table.8.jpg" width="" height="" border="0" alt="">
<p-en>
	Table 8: Approximate sizes and access times for each level in the memory hierarchy of an Intel processor, from [127]. Memory sizes and access times differ by orders of magnitude across the different levels of the hierarchy. This table does not cover multi-processor systems.
</p-en>
<p-ch>
	表8：英特尔处理器内存层次结构中每一级的大概大小和访问时间，来自[127]。不同层次的内存大小和访问时间存在数量级的差异。此表不包括多处理器系统。
</p-ch>
<p-en>
	The numbers in Table 8 suggest that cache placement can have a large impact on an application's execution time. Because of this, the Intel architecture includes an assortment of instructions that give performance sensitive applications some control over the caching of their working sets. PREFETCH instructs the CPU's prefetcher to cache a specific memory address, in preparation for a future memory access. The memory writes performed by the MOVNT instruction family bypass the cache if a fill would be required. CLFLUSH evicts any cache lines storing a specific address from the entire cache hierarchy.
</p-en>
<p-ch>
	表8中的数字表明，缓存的位置对应用程序的执行时间有很大影响。正因为如此，英特尔架构中包含了各种各样的指令，让对性能敏感的应用程序对其工作集的缓存有一定的控制。PREFETCH指示CPU的预取器缓存一个特定的内存地址，为将来的内存访问做准备。如果需要填充的话，由MOVNT指令系列执行的内存写入会绕过缓存。CLFLUSH从整个缓存层次结构中驱逐任何存储特定地址的缓存行。
</p-ch>
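<p-en>
	The three instructions above are exposed to C code through compiler intrinsics. The sketch below shows one use of each, assuming an x86-64 compiler; these are performance hints, so the program's observable result is the same with or without them.
</p-en>
<p-ch>
	上述三条指令通过编译器内建函数暴露给C代码。下面的示例展示了每条指令的一种用法，假设使用x86-64编译器；这些都是性能提示，因此无论是否使用它们，程序的可观察结果都相同。
</p-ch>

```c
#include <immintrin.h>

/* The three user-level cache-control mechanisms named above, via their
 * C intrinsics. All of them are available at ring 3. */
static int buffer[16];

void demo_cache_hints(void) {
    /* PREFETCHT0: ask for the line to be pulled into all cache levels. */
    _mm_prefetch((const char *)&buffer[0], _MM_HINT_T0);

    /* MOVNTI: a non-temporal store that bypasses the cache on a
     * would-be fill. */
    _mm_stream_si32(&buffer[0], 42);
    _mm_sfence();                      /* order the streaming store */

    /* CLFLUSH: evict the line from the entire cache hierarchy. */
    _mm_clflush(&buffer[0]);
}
```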
<p-en>
	The methods mentioned above are available to software running at all privilege levels, because they were designed for high-performance workloads with large working sets, which are usually executed at ring 3 (§2.3). For comparison, the instructions used by system software to manage the address translation caches, described in §2.11.5 below, can only be executed at ring 0.
</p-en>
<p-ch>
	上述方法可供所有权限级别的软件运行，因为它们是为具有大型工作集的高性能工作负载而设计的，这些工作负载通常在环3（§2.3）处执行。作为比较，下面§2.11.5中描述的系统软件用来管理地址转换缓存的指令，只能在0环执行。
</p-ch>
<section-title-en>2.11.2 Cache Organization</section-title-en>
<section-title-ch>2.11.2缓存组织</section-title-ch>
<p-en>
	In the Intel architecture, caches are completely implemented in hardware, meaning that the software stack has no direct control over the eviction process. However, software can gain some control over which data gets evicted by understanding how the caches are organized, and by cleverly placing its data in memory.
</p-en>
<p-ch>
	在英特尔架构中，缓存完全在硬件中实现，这意味着软件栈无法直接控制驱逐过程。然而，软件可以通过了解缓存的组织方式，并巧妙地将其数据放置在内存中，从而获得对哪些数据被驱逐的一些控制。
</p-ch>
<p-en>
	The cache line is the atomic unit of cache organization. A cache line consists of data, a copy of a contiguous range of DRAM, and a tag, identifying the memory address that the data comes from. Fills and evictions operate on entire lines.
</p-en>
<p-ch>
	缓存行是缓存组织的原子单位。一条缓存行有数据，一个连续的DRAM范围的副本，还有一个标签，标识数据来自的内存地址。填充和驱逐是在整行上操作的。
</p-ch>
<p-en>
	The cache line size is the size of the data, and is always a power of two. Assuming n-bit memory addresses and a cache line size of 2^l bytes, the lowest l bits of a memory address are an offset into a cache line, and the highest n − l bits determine the cache line that is used to store the data at the memory location. All recent processors have 64-byte cache lines.
</p-en>
<p-ch>
	缓存行的大小就是数据的大小，而且总是2的幂。假设内存地址为n位，缓存行大小为2^l字节，则内存地址的最低l位是缓存行内的偏移量，最高的n - l位决定了用来存储在内存位置处数据的缓存行。最近的处理器都有64字节的缓存行。<remark-ch>[物理内存大，cache小，物理内存跟cache之间的对应，类似较大的逻辑地址空间到较小的物理地址空间的映射。高n-l位相当于页号，低l位相当于页内偏移]</remark-ch>
</p-ch>
<p-en>
	The L1 and L2 caches in recent processors are multi-way set-associative with direct set indexing, as shown in Figure 26. A W-way set-associative cache has its memory divided into sets, where each set has W lines. A memory location can be cached in any of the W lines in a specific set that is determined by the highest n − l bits of the location's memory address. Direct set indexing means that the S sets in a cache are numbered from 0 to S − 1, and the memory location at address A is cached in the set numbered A_{n−1…l} mod S.
</p-en>
<p-ch>
	近代处理器中的L1和L2高速缓存是多路集相联式的，采用直接集索引，如图26所示。W路集相联缓存的内存被分成若干个集，其中每个集有W行。一个内存位置可以缓存在特定集的W行中的任何一行，该集由该位置内存地址的最高n − l位决定。直接集索引是指缓存中的S个集编号从0到S − 1，地址A处的内存位置被缓存在编号为A_{n−1…l} mod S的集中。
</p-ch>
<img src="fig.26.jpg" width="" height="" border="0" alt="">
<p-en>
	Figure 26: Cache organization and lookup, for a W-way set-associative cache with 2^l-byte lines and S = 2^s sets. The cache works with n-bit memory addresses. The lowest l address bits point to a specific byte in a cache line, the next s bits index the set, and the highest n − s − l bits are used to decide if the desired address is in one of the W lines in the indexed set.
</p-en>
<p-ch>
	图26：缓存的组织和查找，针对一个具有2^l字节行和S=2^s个集的W路集相联缓存。缓存使用n位内存地址。最低的l位地址位指向缓存行中的特定字节，接下来的s位对集进行索引，最高的n − s − l位用来决定所需地址是否在被索引集的W行中的某一行。
</p-ch>
<p-en>
	In the common case where the number of sets in a cache is a power of two, so S = 2^s, the lowest l bits in an address make up the cache line offset, and the next s bits are the set index. The highest n − s − l bits in an address are not used when selecting where a memory location will be cached. Figure 26 shows the cache structure and lookup process.
</p-en>
<p-ch>
	在常见的情况下，缓存中的集数是2的幂，所以S=2^s，地址中最低的l位组成缓存行偏移，接下来的s位是集索引。在选择内存位置的缓存位置时，不使用地址中最高的n-s-l位。图26是缓存结构和查找过程。
</p-ch>
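<p-en>
	The address decomposition above can be written out directly. The sketch below uses l = 6 (64-byte lines) and s = 6 (64 sets), which match the L1 caches of recent Intel processors per the text; the function names are ours, for illustration.
</p-en>
<p-ch>
	上述地址分解可以直接写成代码。下面的示例使用l = 6（64字节行）和s = 6（64个集），这与正文中最近英特尔处理器的L1缓存一致；函数名是我们为了说明而取的。
</p-ch>

```c
#include <stdint.h>

/* Splitting an address into tag / set index / line offset, as in
 * Figure 26. l = 6 and s = 6 match recent Intel L1 caches. */
enum { L_BITS = 6, S_BITS = 6 };

/* Lowest l bits: byte offset within the cache line. */
uint64_t line_offset(uint64_t addr) { return addr & ((1ull << L_BITS) - 1); }

/* Next s bits: which set the address maps to. */
uint64_t set_index(uint64_t addr)   { return (addr >> L_BITS) & ((1ull << S_BITS) - 1); }

/* Remaining high bits: the tag stored alongside the line's data. */
uint64_t tag_of(uint64_t addr)      { return addr >> (L_BITS + S_BITS); }
```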
<section-title-en>2.11.3 Cache Coherence</section-title-en>
<section-title-ch>2.11.3 Cache 一致性</section-title-ch>
<p-en>
	The Intel architecture was designed to support application software that was not written with caches in mind. One aspect of this support is the Total Store Order (TSO) [147] memory model, which promises that all the logical processors in a computer see the same order of DRAM writes.
</p-en>
<p-ch>
	英特尔架构的设计是为了支持那些没有考虑到缓存的应用软件。这种支持的一个方面是总存储顺序(TSO)[147]内存模型，它承诺计算机中所有的逻辑处理器都能看到相同的DRAM写入顺序。
</p-ch>
<p-en>
	The same memory location might be simultaneously cached by different cores' caches, or even by caches on separate chips, so providing the TSO guarantees requires a cache coherence protocol that synchronizes all the cache lines in a computer that reference the same memory address.
</p-en>
<p-ch>
	同一内存位置可能同时被不同核心的缓存，甚至被不同芯片上的缓存所缓存，因此提供TSO保证需要一个缓存一致性协议，使计算机中所有引用同一内存地址的缓存行同步。
</p-ch>
<p-en>
	The cache coherence mechanism is not visible to software, so it is only briefly mentioned in the SDM. Fortunately, Intel's optimization reference [96] and the datasheets referenced in §2.9.3 provide more information. Intel processors use variations of the MESIF [66] protocol, which is implemented in the CPU and in the protocol layer of the QPI bus.
</p-en>
<p-ch>
	缓存一致性机制对软件来说是不可见的，所以在SDM中只是简单地提到了它。幸运的是，Intel的优化参考[96]和§2.9.3中引用的数据表提供了更多的信息。英特尔处理器使用MESIF[66]协议的变体，它在CPU和QPI总线的协议层中实现。
</p-ch>
<p-en>
	The SDM and the CPUID instruction output indicate that the L3 cache, also known as the last-level cache (LLC), is inclusive, meaning that any location cached by an L1 or L2 cache must also be cached in the LLC. This design decision reduces complexity in many implementation aspects. We estimate that the bulk of the cache coherence implementation is in the CPU's uncore, thanks to the fact that cache synchronization can be achieved without having to communicate with the lower cache levels that are inside execution cores.
</p-en>
<p-ch>
	SDM和CPUID指令输出表明，L3缓存，也就是最后一级缓存(LLC)是包容的，这意味着L1或L2缓存所缓存的任何位置也必须在LLC中缓存。这一设计决定降低了许多实现方面的复杂性。我们估计缓存一致性的大部分实现是在CPU的非核心中，这要归功于缓存同步可以不需要与执行核心内部的低级缓存进行通信。
</p-ch>
<p-en>
	The QPI protocol defines cache agents, which are connected to the last-level cache in a processor, and home agents, which are connected to memory controllers. Cache agents make requests to home agents for cache line data on cache misses, while home agents keep track of cache line ownership, and obtain the cache line data from other cache agents, or from the memory controller. The QPI routing layer supports multiple agents per socket, and each processor has its own caching agents, and at least one home agent.
</p-en>
<p-ch>
	QPI协议定义了缓存代理和主代理，前者连接到处理器中的最后一级缓存，后者连接到内存控制器。缓存代理在缓存缺失时向主代理提出缓存行数据的请求，而主代理则跟踪缓存行的所有权，并从其他缓存行代理，或者从内存控制器获得缓存行数据。QPI路由层支持每个socket的多个代理，每个处理器都有自己的缓存代理，以及至少一个主代理。
</p-ch>
<p-en>
	Figure 27 shows that the CPU uncore has a bidirectional ring interconnect, which is used for communication between execution cores and the other uncore components. The execution cores are connected to the ring by CBoxes, which route their LLC accesses. The routing is static, as the LLC is divided into same-size slices (common slice sizes are 1.5 MB and 2.5 MB), and an undocumented hashing scheme maps each possible physical address to exactly one LLC slice.
</p-en>
<p-ch>
	图27显示，CPU非核心有一个双向环形互连，用于执行核心和其他非核心组件之间的通信。执行核通过CBoxes连接到环上，CBoxes对它们的LLC访问进行路由。路由是静态的，因为LLC被分成大小相同的片（常见的片大小是1.5 MB和2.5 MB），一个未记录的哈希方案将每个可能的物理地址准确地映射到一个LLC片。
</p-ch>
<img src="fig.27.jpg" alt="">
<p-en>
	Figure 27: The stops on the ring interconnect used for inter-core and core-uncore communication.
</p-en>
<p-ch>
	图27：用于核心间以及核心与非核心间通信的环形互连上的各个站点。
</p-ch>
<p-en>
	Intel's documentation states that the hashing scheme mapping physical addresses to LLC slices was designed to avoid having a slice become a hotspot, but stops short of providing any technical details. Fortunately, independent researchers have reverse-engineered the hash functions for recent processors [85, 135, 197].
</p-en>
<p-ch>
	英特尔的文档指出，将物理地址映射到LLC片的散列方案是为了避免某个片成为热点，但没有提供任何技术细节。幸运的是，独立的研究人员已经对最近处理器的哈希函数进行了逆向工程[85，135，197]。
</p-ch>
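<p-en>
	The reverse-engineered hashes have a simple shape: each bit of the slice index is the parity (XOR) of a subset of physical address bits. The sketch below shows that shape for a hypothetical 4-slice package; the two bit masks are placeholders we invented, not the masks of any real processor.
</p-en>
<p-ch>
	被逆向工程出来的哈希函数形式很简单：片索引的每一位都是物理地址某个位子集的奇偶校验（XOR）。下面的示例针对一个假想的4片封装展示了这种形式；其中两个位掩码是我们虚构的占位值，并非任何真实处理器的掩码。
</p-ch>

```c
#include <stdint.h>

/* Sketch of the XOR-based LLC slice-selection hashes reported by
 * researchers: each slice-index bit is the parity of a subset of
 * physical address bits. The masks below are HYPOTHETICAL placeholders,
 * not the documented masks of any real processor. */
static int parity64(uint64_t x) {
    x ^= x >> 32; x ^= x >> 16; x ^= x >> 8;
    x ^= x >> 4;  x ^= x >> 2;  x ^= x >> 1;
    return (int)(x & 1);
}

/* Map a physical address to one of four slices (a 4-core package). */
int llc_slice(uint64_t phys) {
    const uint64_t mask0 = 0x1b5f575440ull;   /* hypothetical bit subset */
    const uint64_t mask1 = 0x2eb5faa880ull;   /* hypothetical bit subset */
    return (parity64(phys & mask1) << 1) | parity64(phys & mask0);
}
```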
<p-en>
	The hashing scheme described above is the reason why the L3 cache is documented as having a “complex” indexing scheme, as opposed to the direct indexing used in the L1 and L2 caches.
</p-en>
<p-ch>
	上述散列方案就是L3缓存被记载为采用“复杂”索引方案（而不是L1和L2缓存所使用的直接索引）的原因。
</p-ch>
<p-en>
	The number of LLC slices matches the number of cores in the CPU, and each LLC slice shares a CBox with a core. The CBoxes implement the cache coherence engine, so each CBox acts as the QPI cache agent for its LLC slice. CBoxes use a Source Address Decoder (SAD) to route DRAM requests to the appropriate home agents. Conceptually, the SAD takes in a memory address and access type, and outputs a transaction type (coherent, non-coherent, IO) and a node ID. Each CBox contains a SAD replica, and the configurations of all SADs in a package are identical.
</p-en>
<p-ch>
	LLC分片的数量与CPU中的核心数量相匹配，每个LLC分片与一个核心共享一个CBox。CBox实现了缓存一致性引擎，因此每个CBox作为其LLC片的QPI缓存代理。CBox使用源地址解码器(SAD)将DRAM请求路由到相应的主代理。从概念上讲，SAD接收一个内存地址和访问类型，并输出一个事务类型（一致、不一致、IO）和一个节点ID。每个CBox都包含一个SAD副本，一个包中所有SAD的配置是相同的。
</p-ch>
<p-en>
	The SAD configurations are kept in sync by the UBox, which is the uncore configuration controller, and connects the System agent to the ring. The UBox is responsible for reading and writing physically distributed registers across the uncore. The UBox also receives interrupts from system components and dispatches them to the appropriate core.
</p-en>
<p-ch>
	SAD配置由UBox保持同步，UBox是非核心配置控制器，将系统代理连接到环上。UBox负责在非核心上读写物理分布的寄存器。UBox还接收来自系统的中断，并将其调度到相应的核心。
</p-ch>
<p-en>
	On recent Intel processors, the uncore also contains at least one memory controller. Each integrated memory controller (iMC or MBox in Intel's documentation) is connected to the ring by a home agent (HA or BBox in Intel's datasheets). Each home agent contains a Target Address Decoder (TAD), which maps each DRAM address to an address suitable for use by the DRAM chips, namely a DRAM channel, bank, rank, and a DIMM address. The mapping in the TAD is not documented by Intel, but it has been reverse-engineered [151].
</p-en>
<p-ch>
	在最近的英特尔处理器上，非核心还包含至少一个内存控制器。每个集成内存控制器(Intel文档中的iMC或MBox)通过一个主代理(Intel数据表中的HA或BBox)连接到环上。每个主代理包含一个目标地址解码器(TAD)，它将每个DRAM地址映射到适合DRAM芯片使用的地址，即DRAM通道、bank、rank和DIMM地址。TAD中的映射没有被Intel记录下来，但它已经被逆向工程化了[151]。
</p-ch>
<p-en>
	The integration of the memory controller on the CPU brings the ability to filter DMA transfers. Accesses from a peripheral connected to the PCIe bus are handled by the integrated I/O controller (IIO), placed on the ring interconnect via the UBox, and then reach the iMC. Therefore, on modern systems, DMA transfers go through both the SAD and TAD, which can be configured to abort DMA transfers targeting protected DRAM ranges.
</p-en>
<p-ch>
	集成在CPU上的内存控制器带来了过滤DMA传输的能力。来自连接到PCIe总线的外设的访问由集成的I/O控制器（IIO）处理，通过UBox放置在环形互连上，然后到达iMC。因此，在现代系统中，DMA传输会同时经过SAD和TAD，而TAD可以被配置为中止针对受保护DRAM范围的DMA传输。
</p-ch>
<section-title-en>2.11.4 Caching and Memory-Mapped Devices</section-title-en>
<section-title-ch>2.11.4 缓存和内存映射设备</section-title-ch>
<p-en>
	Caches rely on the assumption that the underlying memory implements the memory abstraction in §2.2. However, the physical addresses that map to memory-mapped I/O devices usually deviate from the memory abstraction. For example, some devices expose command registers that trigger certain operations when written, and always return a zero value. Caching addresses that map to such memory-mapped I/O devices will lead to incorrect behavior.
</p-en>
<p-ch>
	缓存依赖于底层内存实现§2.2中内存抽象的假设。然而，映射到内存映射的I/O设备的物理地址通常偏离了内存抽象。例如，有些设备暴露了命令寄存器，在写入时触发某些操作，并且总是返回一个零值。缓存映射到这种内存映射的I/O设备的地址将导致不正确的行为。
</p-ch>
<p-en>
	Furthermore, even when the memory-mapped devices follow the memory abstraction, caching their memory is sometimes undesirable. For example, caching a graphic unit's framebuffer could lead to visual artifacts on the user's display, because of the delay between the time when a write is issued and the time when the corresponding cache lines are evicted and written back to memory.
</p-en>
<p-ch>
	此外，即使内存映射的设备遵循内存抽象，缓存其内存有时也是不可取的。例如，缓存一个图形单元的framebuffer可能会导致用户显示器上的视觉伪影，因为从发出写到相应的缓存行被驱逐并写回内存的时间之间有延迟。
</p-ch>
<p-en>
	In order to work around these problems, the Intel architecture implements a few caching behaviors, described below, and provides a method for partitioning the memory address space (§2.4) into regions, and for assigning a desired caching behavior to each region.
</p-en>
<p-ch>
	为了解决这些问题，英特尔架构实现了一些缓存行为，如下所述，并提供了一种方法，用于将内存地址空间(§2.4)划分为区域，并为每个区域分配所需的缓存行为。
</p-ch>
<p-en>
	Uncacheable (UC) memory has the same semantics as the I/O address space (§2.4). UC memory is useful when a device's behavior is dependent on the order of memory reads and writes, such as in the case of memory mapped command and data registers for a PCIe NIC (§2.9.1). The out-of-order execution engine (§2.10) does not reorder UC memory accesses, and does not issue speculative reads to UC memory.
</p-en>
<p-ch>
	不可缓存(UC)内存的语义与I/O地址空间(§2.4)相同。当设备的行为依赖于内存读和写的顺序时，UC内存是有用的，例如在PCIe NIC的内存映射命令和数据寄存器的情况下（§2.9.1）。乱序执行引擎(§2.10)不会对UC内存访问进行重新排序，也不会向UC内存发出推测性读取。
</p-ch>
<p-en>
	Write Combining (WC) memory addresses the specific needs of framebuffers. WC memory is similar to UC memory, but the out-of-order engine may reorder memory accesses, and may perform speculative reads. The processor stores writes to WC memory in a write combining buffer, and attempts to group multiple writes into a (more efficient) line write bus transaction.
</p-en>
<p-ch>
	写组合（WC）内存解决了帧缓冲器的特殊需求。WC内存类似于UC内存，但乱序引擎可以重新安排内存访问顺序，并可以执行推测性读取。处理器将写入WC内存的数据存储在写组合缓冲区中，并试图将多个写组合成一个（更高效的）行写总线事务。
</p-ch>
<p-en>
	Write Through (WT) memory is cached, but write misses do not cause cache fills. This is useful for preventing large memory-mapped device memories that are rarely read, such as framebuffers, from taking up cache memory. WT memory is covered by the cache coherence engine, may receive speculative reads, and is subject to operation reordering.
</p-en>
<p-ch>
	Write Through (WT)内存会被缓存，但写未命中不会导致缓存填充。这对于防止很少被读取的大型内存映射设备内存（如帧缓冲器）占用缓存非常有用。WT内存被缓存一致性引擎覆盖，可能接受推测性读取，其操作也可能被重新排序。
</p-ch>
<p-en>
	DRAM is represented as Write Back (WB) memory, which is optimized under the assumption that all the devices that need to observe the memory operations implement the cache coherence protocol. WB memory is cached as described in §2.11, receives speculative reads, and operations targeting it are subject to reordering.
</p-en>
<p-ch>
	DRAM表示为回写(WB)内存，它是在假设所有需要观察内存操作的设备都实现了缓存一致性协议的情况下进行优化的。WB内存按照§2.11中描述的方式进行缓存，接收推测性读取，针对它的操作会被重新排序。
</p-ch>
<p-en>
	Write Protected (WP) memory is similar to WB memory, with the exception that every write is propagated to the system bus. It is intended for memory-mapped buffers, where the order of operations does not matter, but the devices that need to observe the writes do not implement the cache coherence protocol, in order to reduce hardware costs.
</p-en>
<p-ch>
	写保护(WP)内存与WB内存类似，但例外的是每次写都会传播到系统总线上。它适用于内存映射的缓冲区，在这种情况下，操作的顺序并不重要，但需要观察写入的设备不执行缓存一致性协议，以降低硬件成本。
</p-ch>
<p-en>
	On recent Intel processors, the cache's behavior is mainly configured by the Memory Type Range Registers (MTRRs) and by Page Attribute Table (PAT) indices in the page tables (§2.5). The behavior is also impacted by the Cache Disable (CD) and Not-Write through (NW) bits in Control Register 0 (CR0, §2.4), as well as by equivalent bits in page table entries, namely Page-level Cache Disable (PCD) and Page-level Write-Through (PWT).
</p-en>
<p-ch>
	在最新的英特尔处理器上，缓存的行为主要由内存类型范围寄存器（MTRRs）和页表（§2.5）中的页属性表（PAT）索引配置。这种行为还受到控制寄存器0（CR0，§2.4）中的缓存禁用（CD）和不写通过（NW）位，以及页表项中的等效位，即页级缓存禁用（PCD）和页级写通过（PWT）的影响。
</p-ch>
<p-en>
	The MTRRs were intended to be configured by the computer's firmware during the boot sequence. Fixed MTRRs cover pre-determined ranges of memory, such as the memory areas that had special semantics in the computers using 16-bit Intel processors. The ranges covered by variable MTRRs can be configured by system software. The representation used to specify the ranges is described below, as it has some interesting properties that have proven useful in other systems.
</p-en>
<p-ch>
	MTRRs旨在由计算机的固件在启动序列中进行配置。固定的MTRRs覆盖了预先确定的内存范围，例如在使用16位英特尔处理器的计算机中具有特殊语义的内存区域。可变MTRRs覆盖的范围可以由系统软件配置。下面介绍用于指定范围的表示方法，因为它具有一些有趣的特性，在其他系统中被证明是有用的。
</p-ch>
<p-en>
	Each variable memory type range is specified using a range base and a range mask. A memory address belongs to the range if computing a bitwise AND between the address and the range mask results in the range base. This verification has a low-cost hardware implementation, shown in Figure 28.
</p-en>
<p-ch>
	每一个可变的内存类型范围都是用一个范围基址和一个范围掩码来指定的。如果地址和范围掩码的按位与（AND）结果等于范围基址，那么该内存地址就属于此范围。这种验证有一个低成本的硬件实现，如图28所示。
</p-ch>
<img src="fig.28.jpg" alt="">
<p-en>
	Figure 28: The circuit for computing whether a physical address matches a memory type range. Assuming a CPU with 48-bit physical addresses, the circuit uses 36 AND gates and a binary tree of 35 XNOR (equality test) gates. The circuit outputs 1 if the address belongs to the range. The bottom 12 address bits are ignored, because memory type ranges must be aligned to 4 KB page boundaries.
</p-en>
<p-ch>
	图28：计算物理地址是否与某个内存类型范围匹配的电路。假设CPU的物理地址为48位，该电路使用36个AND门和一棵由35个XNOR（相等测试）门组成的二叉树。如果地址属于该范围，电路输出1。最低12位地址位被忽略，因为内存类型范围必须对齐到4 KB页边界。
</p-ch>
<p-en>
	Each variable memory type range must have a size that is an integral power of two, and a starting address that is a multiple of its size, so it can be described using the base / mask representation described above. A range's starting address is its base, and the range's size is one plus its mask.
</p-en>
<p-ch>
	每个可变内存类型范围的大小必须是2的整数次幂，起始地址必须是其大小的倍数，所以可以用上面描述的基址/掩码表示法来描述。一个范围的起始地址就是它的基址，而范围的大小是掩码加一。
</p-ch>
<p-en>
	Another advantage of this range representation is that the base and the mask can be easily validated, as shown in Listing 1. The range is aligned with respect to its size if and only if the bitwise AND between the base and the mask is zero. The range's size is a power of two if and only if the bitwise AND between the mask and one plus the mask is zero. According to the SDM, the MTRRs are not validated, but setting them to invalid values results in undefined behavior.
</p-en>
<p-ch>
	这种范围表示法的另一个优点是，基址和掩码可以很容易地被验证，如清单1所示。当且仅当基址和掩码的按位与为零时，该范围相对于其大小是对齐的。当且仅当掩码和掩码加一的按位与为零时，范围的大小是2的幂。根据SDM，MTRRs不会被验证，但将它们设置为无效值会导致未定义的行为。
</p-ch>
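<p-en>
	Since Listing 1 appears only as an image, here is one possible C rendering of the checks described above, plus the 4 KB page constraint discussed below. It follows the text's representation, where a range's size is its mask plus one; the function names are ours.
</p-en>
<p-ch>
	由于清单1只以图片形式出现，下面给出正文所述检查的一种可能的C实现，并附带下文讨论的4 KB页约束。它采用正文中的表示法，即范围的大小是掩码加一；函数名是我们自己取的。
</p-ch>

```c
#include <stdbool.h>
#include <stdint.h>

/* The validation checks described in the text (cf. Listing 1), using
 * the representation where a range's size is its mask plus one. */
bool is_valid_range(uint64_t base, uint64_t mask) {
    bool aligned      = (base & mask) == 0;        /* base aligned to size */
    bool power_of_two = (mask & (mask + 1)) == 0;  /* size is a power of 2 */
    return aligned && power_of_two;
}

/* No range may partially cover a 4 KB page: the bottom 12 mask bits
 * must all be set. */
bool covers_whole_pages(uint64_t mask) {
    return (mask & 0xFFF) == 0xFFF;
}
```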
<img src="listing.1.jpg" width="" height="" alt="" />
<p-en>
	Listing 1: The checks that validate the base and mask of a memory type range can be implemented very easily.
</p-en>
<p-ch>
	清单1：验证内存类型范围的基址和掩码的检查可以非常容易地实现。
</p-ch>
<p-en>
	No memory type range can partially cover a 4 KB page, which implies that the range base must be a multiple of 4 KB, and the bottom 12 bits of the range mask must be set. This simplifies the interactions between memory type ranges and address translation, described in §2.11.5.
</p-en>
<p-ch>
	任何内存类型范围都不能部分覆盖一个4 KB页面，这意味着范围基址必须是4 KB的倍数，且范围掩码的低12位必须被置位。这简化了内存类型范围和地址转换之间的相互作用，详见§2.11.5。
</p-ch>
<p-en>
	The PAT is intended to allow the operating system or hypervisor to tweak the caching behaviors specified in the MTRRs by the computer's firmware. The PAT has 8 entries that specify caching behaviors, and is stored in its entirety in an MSR. Each page table entry contains a 3-bit index that points to a PAT entry, so the system software that controls the page tables can specify caching behavior at a very fine granularity.
</p-en>
<p-ch>
	PAT旨在允许操作系统或管理程序调整计算机固件在MTRRs中指定的缓存行为。PAT有8个指定缓存行为的条目，并完整地存储在MSR中。每个页表条目都包含一个指向PAT条目的3位索引，因此控制页表的系统软件可以以非常细的粒度指定缓存行为。
</p-ch>
<section-title-en>2.11.5 Caches and Address Translation</section-title-en>
<section-title-ch>2.11.5 缓存和地址转换</section-title-ch>
<p-en>
	Modern system software relies on address translation (§2.5). This means that all the memory accesses issued by a CPU core use virtual addresses, which must undergo translation. Caches must know the physical address for a memory access, to handle aliasing (multiple virtual addresses pointing to the same physical address) correctly. However, address translation requires up to 20 memory accesses (see Figure 15), so it is impractical to perform a full address translation for every cache access. Instead, address translation results are cached in the translation look-aside buffer (TLB).
</p-en>
<p-ch>
	现代系统软件依赖于地址转换（§2.5）。这意味着CPU核发出的所有内存访问都使用虚拟地址，而虚拟地址必须经过转换。缓存必须知道内存访问的物理地址，才能正确处理别名（多个虚拟地址指向同一个物理地址）。然而，一次地址转换最多需要20次内存访问（见图15），因此对每次缓存访问都执行完整的地址转换是不切实际的。取而代之的是，地址转换的结果被缓存在转换后备缓冲区（TLB）中。
</p-ch>
<img src="table.9.jpg" width="" height="" alt="" />
<p-en>
	Table 9: Approximate sizes and access times for each level in the TLB hierarchy, from [4].
</p-en>
<p-ch>
	表9：TLB层次结构中每一级的大概大小和访问时间，来自[4]。
</p-ch>
<p-en>
	In the Intel architecture, the PMH is implemented in hardware, so the TLB is never directly exposed to software and its implementation details are not documented. The SDM does state that each TLB entry contains the physical address associated with a virtual address, and the metadata needed to resolve a memory access. For example, the processor needs to check the writable (W) flag on every write, and issue a General Protection fault (#GP) if the write targets a read-only page. Therefore, the TLB entry for each virtual address caches the logical AND of all the relevant W flags in the page table structures leading up to the page.
</p-en>
<p-ch>
	在英特尔架构中，PMH是在硬件中实现的，所以TLB从来没有直接暴露给软件，其实现细节也没有被记录下来。SDM确实指出，每个TLB条目都包含与虚拟地址相关联的物理地址，以及解析内存访问所需的元数据。例如，处理器需要在每次写入时检查可写(W)标志，如果写入的目标是只读页，则发出一般保护故障(#GP)。因此，每个虚拟地址的TLB条目会缓存通往该页的各级页表结构中所有相关W标志的逻辑与。
</p-ch>
<p-en>
	The TLB is transparent to application software. However, kernels and hypervisors must make sure that the TLBs do not get out of sync with the page tables and EPTs. When changing a page table or EPT, the system software must use the INVLPG instruction to invalidate any TLB entries for the virtual address whose translation changed. Some instructions flush the TLBs, meaning that they invalidate all the TLB entries, as a side-effect.
</p-en>
<p-ch>
	TLB对应用软件是透明的。然而，内核和管理程序必须确保TLB不会与页表和EPT不同步。当更改页表或EPT时，系统软件必须使用INVLPG指令使翻译发生变化的虚拟地址的任何TLB条目无效。有些指令会刷新TLB，也就是说会使所有的TLB条目无效，这是一个副作用。
</p-ch>
<p-en>
	TLB entries also cache the desired caching behavior (§2.11.4) for their pages. This requires system software to flush the corresponding TLB entries when changing MTRRs or page table entries. In return, the processor only needs to compute the desired caching behavior during a TLB miss, as opposed to computing the caching behavior on every memory access.
</p-en>
<p-ch>
	TLB条目还可以为其页面缓存所需的缓存行为（§2.11.4）。这就要求系统软件在改变MTRRs或页表项时刷新相应的TLB项。作为回报，处理器只需要在TLB缺失期间计算所需的缓存行为，而不是在每次内存访问时计算缓存行为。
</p-ch>
<p-en>
	The TLB is not covered by the cache coherence mechanism described in §2.11.3. Therefore, when modifying a page table or EPT on a multi-core / multi-processor system, the system software is responsible for performing a TLB shootdown, which consists of stopping all the logical processors that use the page table / EPT about to be changed, performing the changes, executing TLB invalidating instructions on the stopped logical processors, and then resuming execution on the stopped logical processors.
</p-en>
<p-ch>
	TLB 不在 §2.11.3 中描述的缓存一致性机制的覆盖范围内。因此，当在多核/多处理器系统上修改页表或EPT时，系统软件负责执行TLB shootdown，包括停止所有使用即将修改的页表/EPT的逻辑处理器，执行修改，在停止的逻辑处理器上执行TLB无效指令，然后在停止的逻辑处理器上恢复执行。
</p-ch>
<p-en>
	Address translation constrains the L1 cache design. On Intel processors, the set index in an L1 cache only uses the address bits that are not impacted by address translation, so that the L1 set lookup can be done in parallel with the TLB lookup. This is critical for achieving a low latency when both the L1 TLB and the L1 cache are hit.
</p-en>
<p-ch>
	地址转换制约了L1缓存的设计。在英特尔处理器上，L1缓存中的集索引只使用不受地址转换影响的地址位，这样L1的集查找就可以与TLB查找并行完成。这对于在L1 TLB和L1缓存都命中时实现低延迟至关重要。
</p-ch>
<p-en>
	Given a page size P = 2^p bytes, the requirement above translates to l + s ≤ p. In the Intel architecture, p = 12, and all recent processors have 64-byte cache lines (l = 6) and 64 sets (s = 6) in the L1 caches, as shown in Figure 29. The L2 and L3 caches are only accessed if the L1 misses, so the physical address for the memory access is known at that time, and can be used for indexing.
</p-en>
<p-ch>
	给定页面大小P=2^p字节，上述要求转化为l+s≤p。在英特尔架构中，p=12，所有最新的处理器在L1缓存中都有64字节的缓存行（l=6）和64个集（s=6），如图29所示。L2和L3缓存只有在L1未命中时才会被访问，所以那时内存访问的物理地址已经知道，可以用来做索引。
</p-ch>
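<p-en>
	The arithmetic behind the l + s ≤ p constraint can be checked directly. The sketch below plugs in the values from the text (p = 12, l = 6, s = 6) plus an 8-way associativity, which is typical for recent L1 D-caches; the familiar 32 KB L1 size falls out of W · 2^s · 2^l.
</p-en>
<p-ch>
	l + s ≤ p这一约束背后的算术可以直接验证。下面的示例代入正文中的数值（p = 12，l = 6，s = 6），并假设8路相联（这是最近L1数据缓存的典型值）；常见的32 KB L1容量正是W · 2^s · 2^l的结果。
</p-ch>

```c
#include <stdint.h>

/* The virtually-indexed L1 constraint: line-offset bits (l) plus
 * set-index bits (s) must fit inside the page offset (p), so the set
 * can be selected in parallel with the TLB lookup. W = 8 ways is an
 * assumption typical of recent L1 D-caches, not stated in the text. */
enum { P_BITS = 12, L1_L = 6, L1_S = 6, L1_WAYS = 8 };

/* Nonzero when the set index only uses untranslated address bits. */
int l1_indexed_before_translation(void) { return L1_L + L1_S <= P_BITS; }

/* Resulting L1 size: W ways * 2^s sets * 2^l-byte lines. */
uint64_t l1_size_bytes(void) { return (uint64_t)L1_WAYS << (L1_S + L1_L); }
```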
<img src="fig.29.jpg" alt="">
<p-en>
	Figure 29: Virtual addresses from the perspective of cache lookup and address translation. The bits used for the L1 set index and line offset are not changed by address translation, so the page tables do not impact L1 cache placement. The page tables do impact L2 and L3 cache placement. Using large pages (2 MB or 1 GB) is not sufficient to make L3 cache placement independent of the page tables, because of the LLC slice hashing function (§2.11.3).
</p-en>
<p-ch>
	图29：从缓存查找和地址转换的角度看虚拟地址。L1集索引和行偏移所用的位不会因地址转换而改变，所以页表不会影响L1缓存的放置。页表确实会影响L2和L3缓存的放置。由于LLC分片哈希函数（§2.11.3），使用大页（2 MB或1 GB）不足以使L3缓存的放置独立于页表。
</p-ch>

</body>
</html>	