<!doctype html>
<html lang="en">
<head>
	<meta charset="UTF-8">
	<title>2.10 Out-of-Order and Speculative Execution</title>
	<link rel="stylesheet" type="text/css" href="http://paranoid.net.cn/semantic.css" >
</head>
<body>
<section-title-en>2.10 Out-of-Order and Speculative Execution</section-title-en>
<section-title-ch>2.10 乱序和投机性执行</section-title-ch>
<p-en>
	CPU cores can execute instructions orders of magnitude faster than DRAM can read data. Computer architects attempt to bridge this gap by using hyper-threading (§2.9.3), out-of-order and speculative execution, and caching, which is described in §2.11. In CPUs that use out-of-order execution, the order in which the CPU carries out a program's instructions (execution order) is not necessarily the same as the order in which the instructions would be executed by a sequential evaluation system (program order).
</p-en>
<p-ch>
	CPU核执行指令的速度比DRAM读取数据的速度快好几个数量级。计算机架构师试图通过超线程(§2.9.3)、乱序执行和投机执行以及缓存(§2.11中介绍)来弥补这一差距。在使用乱序执行的CPU中，CPU执行程序指令的顺序(执行顺序)不一定与顺序求值系统执行指令的顺序(程序顺序)相同。
</p-ch>
<p-en>
	An analysis of a system's information leakage must take out-of-order execution into consideration. Any CPU actions observed by an attacker match the execution order, so the attacker may learn some information by comparing the observed execution order with a known program order. At the same time, attacks that try to infer a victim's program order based on actions taken by the CPU must account for out-of-order execution as a source of noise.
</p-en>
<p-ch>
	对系统信息泄露的分析必须考虑到乱序执行。攻击者观察到的任何CPU动作都与执行顺序相匹配，因此攻击者可以通过将观察到的执行顺序与已知的程序顺序进行比较来了解一些信息。同时，试图根据CPU所做的动作推断受害者程序顺序的攻击，必须把乱序执行当作一种噪声源来考虑。
</p-ch>
<p-en>
	This section summarizes the out-of-order and speculative execution concepts used when reasoning about a system's security properties. [150] and [76] cover the concepts in great depth, while Intel's optimization manual [96] provides details specific to Intel CPUs.
</p-en>
<p-ch>
	本节总结了推理系统安全属性时使用的乱序执行和投机执行概念。[150]和[76]深入地介绍了这些概念，而英特尔的优化手册[96]则提供了针对英特尔CPU的细节。
</p-ch>
<p-en>
	Figure 24 provides a more detailed view of the CPU core components involved in out-of-order execution, and omits some less relevant details from Figure 23.
</p-en>
<p-ch>
	图24提供了参与乱序执行的CPU核心部件的更详细视图，并省略了图23中一些不太相关的细节。
</p-ch>
<img src="fig.24.jpg" />
<p-en>
	Figure 24: The structures in a CPU core that are relevant to out-of-order and speculative execution. Instructions are decoded into micro-ops, which are scheduled on one of the execution unit's ports. The branch predictor enables speculative execution when a branch is encountered.
</p-en>
<p-ch>
	图24：CPU核中与乱序执行和投机执行相关的结构。指令被解码成微操作，并被安排在执行单元的一个端口上。当遇到分支时，分支预测器可以实现投机执行。
</p-ch>
<p-en>
	The Intel architecture defines a complex instruction set (CISC). However, virtually all modern CPUs are architected following reduced instruction set (RISC) principles. This is accomplished by having the instruction decode stages break down each instruction into micro-ops, which resemble RISC instructions. The other stages of the execution pipeline work exclusively with micro-ops.
</p-en>
<p-ch>
	英特尔架构定义了一个复杂指令集（CISC）。然而，几乎所有的现代CPU都是按照精简指令集(RISC)的原则进行架构的。这是通过让指令解码阶段将每条指令分解成类似于RISC指令的微操作来实现的。执行流水线的其他阶段完全是用微操作工作的。
</p-ch>
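<p-en>
	The decode step described above can be illustrated with a minimal sketch. The instruction syntax and micro-op format below are hypothetical, not Intel's actual encoding: a memory-destination ADD is broken into load, ALU, and store micro-ops, while a register-to-register ADD maps to a single micro-op.
</p-en>
<p-ch>
	上述解码步骤可以用一个极简的示意来说明。下面的指令语法和微操作格式是假设性的，并非英特尔的实际编码：目的操作数为内存的ADD被拆分为加载、ALU和存储三个微操作，而寄存器到寄存器的ADD则对应单个微操作。
</p-ch>
```python
# Hypothetical sketch of CISC-to-micro-op decoding; the mnemonics and the
# "tmp" scratch register are illustrative, not Intel's actual encoding.

def decode(instruction):
    """Break one x86-style instruction into RISC-like micro-ops."""
    op, *operands = instruction.split()
    if op == "ADD" and operands[0].startswith("["):
        # A memory-destination add needs a load, an ALU op, and a store.
        addr = operands[0].strip("[],")
        src = operands[1]
        return [("LOAD", "tmp", addr),
                ("ADD", "tmp", "tmp", src),
                ("STORE", addr, "tmp")]
    # Register-to-register instructions map to a single micro-op.
    return [(op, *[o.strip(",") for o in operands])]

print(decode("ADD [RDI], RAX"))  # three micro-ops
print(decode("ADD RSI, RAX"))    # one micro-op
```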

<section-title-en>2.10.1 Out-of-Order Execution</section-title-en>
<section-title-ch>2.10.1 乱序执行</section-title-ch>
<p-en>
	Different types of instructions require different logic circuits, called functional units. For example, the arithmetic logic unit (ALU), which performs arithmetic operations, is completely different from the load and store unit, which performs memory operations. Different circuits can be used at the same time, so each CPU core can execute multiple micro-ops in parallel.
</p-en>
<p-ch>
	不同类型的指令需要不同的逻辑电路，称为功能单元。例如，执行算术运算的算术逻辑单元（ALU）与执行存储器操作的加载和存储单元完全不同。不同的电路可以同时使用，因此每个CPU核可以并行执行多个微操作。
</p-ch>
<p-en>
	The core's out-of-order engine receives decoded micro-ops, identifies the micro-ops that can execute in parallel, assigns them to functional units, and combines the outputs of the units so that the results are equivalent to having the micro-ops executed sequentially in the order in which they come from the decode stages.
</p-en>
<p-ch>
	核心的乱序引擎接收解码后的微操作，识别出可以并行执行的微操作，将它们分配给功能单元，并将单元的输出结合起来，这样的结果就相当于让微操作按照来自解码阶段的顺序依次执行。
</p-ch>
<p-en>
	For example, consider the sequence of pseudo micro-ops in Table 5 below. The OR uses the result of the LOAD, but the ADD does not. Therefore, a good scheduler can have the load store unit execute the LOAD and the ALU execute the ADD, all in the same clock cycle.
</p-en>
<p-ch>
	例如，考虑下面表5中的伪微操作序列。OR使用LOAD的结果，但ADD不使用。因此，一个好的调度器可以让加载存储单元执行LOAD，ALU执行ADD，都在同一个时钟周期内。
</p-ch>
<img src="table.5.jpg" />
<p-en>
	Table 5: Pseudo micro-ops for the out-of-order execution example.
</p-en>
<p-ch>
	表5：乱序执行示例的伪微操作。
</p-ch>
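<p-en>
	The scheduling decision in this example can be sketched as follows. The register names are hypothetical and functional-unit contention is ignored: a micro-op issues once every earlier micro-op whose result it reads has already issued, so the independent LOAD and ADD issue in the same cycle while the OR waits.
</p-en>
<p-ch>
	本例中的调度决策可以如下示意。寄存器名称是假设性的，且忽略了功能单元的争用：一个微操作在其所读取结果的所有更早微操作都已发射后才会发射，因此相互独立的LOAD和ADD在同一周期发射，而OR则需要等待。
</p-ch>
```python
# Hypothetical micro-op sequence mirroring the Table 5 example: the OR reads
# the LOAD's result, the ADD does not. Register names are illustrative.
# Each micro-op: (name, destination, sources).
micro_ops = [
    ("LOAD", "RBX", ["RAX"]),         # load-store unit
    ("ADD",  "RSI", ["RSI", "RCX"]),  # ALU, independent of the LOAD
    ("OR",   "RDX", ["RDX", "RBX"]),  # needs the LOAD's result in RBX
]

def schedule(ops):
    """Greedily group micro-ops into cycles; an op issues once every earlier
    op whose destination it reads has issued in a previous cycle."""
    issued, cycles = set(), []
    remaining = list(range(len(ops)))
    while remaining:
        this_cycle = [i for i in remaining
                      if all(j in issued for j in range(i)
                             if ops[j][1] in ops[i][2])]
        issued.update(this_cycle)
        remaining = [i for i in remaining if i not in this_cycle]
        cycles.append([ops[i][0] for i in this_cycle])
    return cycles

print(schedule(micro_ops))  # [['LOAD', 'ADD'], ['OR']]
```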
<p-en>
	The out-of-order engine in recent Intel CPUs works roughly as follows. Micro-ops received from the decode queue are written into a reorder buffer (ROB) while they are in-flight in the execution unit. The register allocation table (RAT) matches each register with the last reorder buffer entry that updates it. The renamer uses the RAT to rewrite the source and destination fields of micro-ops when they are written in the ROB, as illustrated in Tables 6 and 7. Note that the ROB representation makes it easy to determine the dependencies between micro-ops.
</p-en>
<p-ch>
	近期英特尔CPU中的乱序引擎工作原理大致如下。从解码队列中接收到的微操作在执行单元中执行期间(in-flight)被写入重排序缓冲区(ROB)。寄存器分配表(RAT)将每个寄存器与最后一个更新它的重排序缓冲区条目进行匹配。重命名器在把微操作写入ROB时，使用RAT重写其源字段和目的字段，如表6和表7所示。请注意，ROB的表示方式可以很容易地确定微操作之间的依赖关系。
</p-ch>
<img src="table.6.jpg" />
<p-en>
	Table 6: Data written by the renamer into the reorder buffer (ROB), for the micro-ops in Table 5.
</p-en>
<p-ch>
	表6：被重命名器写入重排序缓冲区的数据。对应表5中的微操作。
</p-ch>
<img src="table.7.jpg" />
<p-en>
	Table 7: Relevant entries of the register allocation table after the micro-ops in Table 5 are inserted into the ROB.
</p-en>
<p-ch>
	表7：表5中的微操作插入ROB后，寄存器分配表的相关条目。
</p-ch>
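<p-en>
	The renaming mechanism can be sketched in a few lines. The micro-ops and register names are hypothetical: each micro-op's sources are rewritten to point at the ROB entry that produces them (if any), and the RAT records which ROB entry last writes each register.
</p-en>
<p-ch>
	重命名机制可以用几行代码示意。其中的微操作和寄存器名称是假设性的：每个微操作的源字段被重写为产生它们的ROB条目（如果有的话），而RAT记录了最后写入每个寄存器的ROB条目。
</p-ch>
```python
# Hypothetical sketch of register renaming: sources are rewritten to ROB
# indices via the RAT, so dependencies between micro-ops become explicit.
rat = {}   # register name -> ROB index of its most recent producer
rob = []   # renamed micro-ops: (op, ROB entry, renamed sources)

def rename(op, dest, sources):
    entry = len(rob)  # this micro-op's ROB slot
    # Each source becomes either a ROB index (in-flight producer) or stays
    # a register name (value already committed to the register file).
    renamed_sources = [rat.get(s, s) for s in sources]
    rob.append((op, entry, renamed_sources))
    rat[dest] = entry  # later readers of `dest` now depend on this entry
    return entry

rename("LOAD", "RBX", ["RAX"])
rename("ADD",  "RSI", ["RSI", "RCX"])
rename("OR",   "RDX", ["RDX", "RBX"])

print(rob)  # the OR's second source is ROB entry 0 (the LOAD)
print(rat)  # {'RBX': 0, 'RSI': 1, 'RDX': 2}
```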
<p-en>
	The scheduler decides which micro-ops in the ROB get executed, and places them in the reservation station. The reservation station has one port for each functional unit that can execute micro-ops independently. Each reservation station port holds one micro-op from the ROB. The reservation station port waits until the micro-op's dependencies are satisfied and forwards the micro-op to the functional unit. When the functional unit completes executing the micro-op, its result is written back to the ROB, and forwarded to any other reservation station port that depends on it.
</p-en>
<p-ch>
	调度器决定ROB中哪些微操作被执行，并将其放入预留站。预留站为每个能够独立执行微操作的功能单元设有一个端口。每个预留站端口容纳一个来自ROB的微操作。预留站端口等到微操作的依赖关系被满足后，将微操作转发给功能单元。当功能单元执行完该微操作后，其结果会被写回ROB，并转发给任何依赖于它的其他预留站端口。
</p-ch>
<p-en>
	The ROB stores the results of completed micro-ops until they are retired, meaning that the results are committed to the register file and the micro-ops are removed from the ROB. Although micro-ops can be executed out-of-order, they must be retired in program order, in order to handle exceptions correctly. When a micro-op causes a hardware exception (§2.8.2), all the following micro-ops in the ROB are squashed, and their results are discarded.
</p-en>
<p-ch>
	ROB存储已完成的微操作的结果，直到它们退役(retire)，即结果被提交到寄存器文件、微操作从ROB中移除。虽然微操作可以乱序执行，但为了正确处理异常，它们必须按程序顺序退役。当一个微操作引发硬件异常(§2.8.2)时，ROB中所有后续的微操作都会被作废(squash)，其结果被丢弃。
</p-ch>
<p-en>
	In the example above, the ADD can complete before the LOAD, because it does not require a memory access. However, the ADD's result cannot be committed before LOAD completes. Otherwise, if the ADD is committed and the LOAD causes a page fault, software will observe an incorrect value for the RSI register.
</p-en>
<p-ch>
	在上面的例子中，ADD可以在LOAD之前完成，因为它不需要访问内存。但是，ADD的结果不能在LOAD完成之前提交。否则，如果ADD已被提交而LOAD引发缺页异常(page fault)，软件将观察到RSI寄存器中不正确的值。
</p-ch>
<p-en>
	The ROB is tailored for discovering register dependencies between micro-ops. However, micro-ops that execute out-of-order can also have memory dependencies. For this reason, out-of-order engines have a load buffer and a store buffer that keep track of in-flight memory operations and are used to resolve memory dependencies.
</p-en>
<p-ch>
	ROB是为发现微操作之间的寄存器依赖关系而定制的。然而，乱序执行的微操作之间也可能存在内存依赖关系。为此，乱序引擎设有加载缓冲区和存储缓冲区，用来跟踪在途(in-flight)的内存操作，并用来解决内存依赖。
</p-ch>
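<p-en>
	One way a store buffer resolves a memory dependence is store-to-load forwarding, sketched below. The addresses and values are hypothetical: a load to an address with an in-flight store must receive the store's value rather than the stale contents of memory.
</p-en>
<p-ch>
	存储缓冲区解决内存依赖的一种方式是存储到加载转发(store-to-load forwarding)，示意如下。其中的地址和数值是假设性的：对存在在途存储的地址进行加载时，必须取得该存储的值，而不是内存中的陈旧内容。
</p-ch>
```python
# Hypothetical sketch of store-to-load forwarding through a store buffer:
# a later load to the same address sees the in-flight store's value.
memory = {0x1000: 1}
store_buffer = []  # in-flight stores: (address, value), oldest first

def store(address, value):
    store_buffer.append((address, value))  # not yet visible in memory

def load(address):
    # Search the store buffer newest-first for a matching in-flight store.
    for addr, value in reversed(store_buffer):
        if addr == address:
            return value  # forwarded from the store buffer
    return memory[address]  # no dependence: read memory (or cache)

store(0x1000, 7)
print(load(0x1000))  # 7, forwarded -- not the stale 1 still in memory
```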
<section-title-en>2.10.2 Speculative Execution</section-title-en>
<section-title-ch>2.10.2 投机性执行</section-title-ch>
<p-en>
	Branch instructions, also called branches, change the instruction pointer (RIP, §2.6), if a condition is met (the branch is taken). They implement conditional statements (if) and looping statements, such as while and for. The most well-known branching instructions in the Intel architecture are in the jcc family, such as je (jump if equal).
</p-en>
<p-ch>
	分支指令也叫分支，如果满足条件（采取分支），则改变指令指针（RIP，§2.6）。它们实现了条件语句（if）和循环语句，如while和for。英特尔架构中最著名的分支指令属于jcc系列，如je(jump if equal)。
</p-ch>
<p-en>
	Branches pose a challenge to the decode stage, because the instruction that should be fetched after a branch is not known until the branching condition is evaluated. In order to avoid stalling the decode stage, modern CPU designs include branch predictors that use historical information to guess whether a branch will be taken or not.
</p-en>
<p-ch>
	分支对解码阶段构成了挑战，因为在评估分支条件之前，并不知道在分支之后应该取用的指令。为了避免解码阶段的停滞，现代CPU设计中包含了分支预测器，利用历史信息来猜测是否会采取分支。
</p-ch>
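<p-en>
	As a sketch of history-based prediction, the toy predictor below keeps a two-bit saturating counter per branch address, a classic textbook scheme rather than a description of Intel's actual predictor. After one wrong guess, the counter saturates toward "taken" and correctly predicts a loop branch that is almost always taken.
</p-en>
<p-ch>
	作为基于历史信息进行预测的示意，下面的玩具预测器为每个分支地址保存一个两位饱和计数器。这是教科书中的经典方案，并非对英特尔实际预测器的描述。在一次猜错之后，计数器向"采取"方向饱和，从而正确预测一个几乎总是被采取的循环分支。
</p-ch>
```python
# Hypothetical sketch of a per-branch two-bit saturating counter predictor
# (a classic scheme, not Intel's actual design). Counter values:
# 0-1 predict not-taken, 2-3 predict taken.
counters = {}  # branch address -> counter in 0..3

def predict(branch):
    return counters.get(branch, 1) >= 2  # True = predict taken

def update(branch, taken):
    c = counters.get(branch, 1)
    counters[branch] = min(c + 1, 3) if taken else max(c - 1, 0)

# A loop branch: taken on every iteration except the final exit.
outcomes = [True] * 5 + [False] + [True] * 3
correct = 0
for taken in outcomes:
    correct += predict(0x400) == taken
    update(0x400, taken)
print(correct, "of", len(outcomes), "predicted correctly")  # 7 of 9
```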
<p-en>
	When the decode stage encounters a branch instruction, it asks the branch predictor for a guess as to whether the branch will be taken or not. The decode stage bundles the branch condition and the predictor's guess into a branch check micro-op, and then continues decoding on the path indicated by the predictor. The micro-ops following the branch check are marked as speculative.
</p-en>
<p-ch>
	当解码阶段遇到分支指令时，会请求分支预测器猜测该分支是否会被采取。解码阶段将分支条件和预测器的猜测捆绑成一个分支检查(branch check)微操作，然后沿着预测器指示的路径继续解码。分支检查之后的微操作被标记为投机性的。
</p-ch>
<p-en>
	When the branch check micro-op is executed, the branch unit checks whether the branch predictor's guess was correct. If that is the case, the branch check is retired successfully. The scheduler handles mispredictions by squashing all the micro-ops following the branch check, and by signaling the instruction decoder to flush the micro-op decode queue and start fetching the instructions that follow the correct branch.
</p-en>
<p-ch>
	当执行分支检查微操作时，分支单元检查分支预测器的猜测是否正确。如果正确，则该分支检查成功退役。调度器处理预测错误的方法是作废分支检查之后的所有微操作，并向指令解码器发出信号，清空微操作解码队列，开始获取正确分支之后的指令。
</p-ch>
<p-en>
	Modern CPUs also attempt to predict memory read patterns, so they can prefetch the memory locations that are about to be read into the cache. Prefetching minimizes the latency of successfully predicted read operations, as their data will already be cached. This is accomplished by exposing circuits called prefetchers to memory accesses and cache misses. Each prefetcher can recognize a particular access pattern, such as sequentially reading an array's elements. When memory accesses match the pattern that a prefetcher was built to recognize, the prefetcher loads the cache line corresponding to the next memory access in its pattern.
</p-en>
<p-ch>
	现代CPU还试图预测内存读取模式，以便将即将被读取的内存位置预取到缓存中。预取可以最大限度地降低被成功预测的读操作的延迟，因为其数据已经在缓存中。这是通过让称为预取器的电路观察内存访问和缓存未命中来实现的。每个预取器可以识别一种特定的访问模式，例如按顺序读取数组元素。当内存访问与某个预取器所要识别的模式相匹配时，预取器就会加载该模式中下一次内存访问对应的缓存行。
</p-ch>
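<p-en>
	The sequential-array pattern can be sketched with a toy next-line prefetcher. The line size and addresses are hypothetical: once two consecutive cache lines are accessed in order, the line after each access is fetched early, so later accesses in the sweep hit in the cache.
</p-en>
<p-ch>
	顺序读取数组的模式可以用一个玩具式的下一行预取器来示意。其中的缓存行大小和地址是假设性的：一旦按顺序访问了两个相邻的缓存行，每次访问之后的下一行就会被提前取入，因此后续的顺序访问都会在缓存中命中。
</p-ch>
```python
# Hypothetical sketch of a next-line prefetcher: when successive accesses
# touch consecutive cache lines, the line after the latest access is
# fetched early. Line size and addresses are illustrative.
LINE = 64  # cache line size in bytes
cached, last_line = set(), None

def access(address):
    global last_line
    line = address // LINE
    hit = line in cached
    cached.add(line)
    if last_line is not None and line == last_line + 1:
        cached.add(line + 1)  # sequential pattern detected: prefetch ahead
    last_line = line
    return hit

# Sequentially reading an array: after the pattern is detected, every
# later access hits because its line was prefetched.
hits = [access(a) for a in range(0, 64 * 6, 64)]
print(hits)  # [False, False, True, True, True, True]
```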

</body>
</html>	