<!doctype html>
<html lang="en">
<head>
	<meta charset="UTF-8">
	<title>3.8 Cache Timing Attacks</title>
	<link rel="stylesheet" type="text/css" href="http://paranoid.net.cn/semantic.css" >
</head>
<body>
<section-title-en>3.8 Cache Timing Attacks</section-title-en>
<section-title-ch>3.8 缓存定时攻击</section-title-ch>
<p-en>
	Cache timing attacks [19] are a powerful class of software attacks that can be mounted entirely by application code running at ring 3 (§2.3). Cache timing attacks do not learn information by reading the victim's memory, so they bypass the address translation-based isolation measures (§2.5) implemented in today's kernels and hypervisors.
</p-en>
<p-ch>
	缓存定时攻击[19]是一类强大的软件攻击，完全可以由运行在环3（§2.3）的应用程序代码发起。缓存定时攻击不通过读取受害者的内存来获取信息，因此可以绕过当今内核和虚拟机监控器中实施的基于地址转换的隔离措施（§2.5）。
</p-ch>
<subsection-title-en>3.8.1 Theory</subsection-title-en>
<subsection-title-ch>3.8.1 理论</subsection-title-ch>

<p-en>
	Cache timing attacks exploit the unfortunate dependency between the location of a memory access and the time the access takes. A cache miss requires at least one memory access to the next-level cache, and might require a second memory access if a write-back occurs. On the Intel architecture, the latency difference between a cache hit and a miss can be easily measured using the RDTSC and RDTSCP instructions (§2.4), which read a high-resolution time-stamp counter. These instructions were designed for benchmarking and optimizing software, so they are available to ring 3 software.
</p-en>
<p-ch>
	缓存定时攻击利用了内存访问位置与完成该访问所需时间之间的不幸依赖关系。一次缓存未命中至少需要对下一级缓存进行一次内存访问，如果发生回写，还可能需要第二次内存访问。在英特尔架构上，缓存命中与未命中之间的延迟差异可以通过RDTSC和RDTSCP指令（§2.4）轻松测量，这两条指令读取一个高分辨率的时间戳计数器。这些指令是为基准测试和软件优化而设计的，因此环3软件也可以使用。
</p-ch>
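<p-en>
	As a concrete illustration, the latency of a single memory access can be timed from ring 3 with the time-stamp counter. The sketch below assumes an x86-64 GCC/Clang toolchain and its __rdtsc/__rdtscp intrinsics; the serializing fences a careful measurement would add are omitted for brevity.
</p-en>
<p-ch>
	作为一个具体示例，可以在环3利用时间戳计数器来测量单次内存访问的延迟。以下草图假设使用x86-64的GCC/Clang工具链及其__rdtsc/__rdtscp内部函数；为简洁起见，省略了严谨测量所需的序列化屏障指令。
</p-ch>

```c
#include <stdint.h>
#include <x86intrin.h>  /* __rdtsc, __rdtscp (GCC/Clang, x86-64) */

/* Time one load, in cycles: a short latency suggests a cache hit,
   a long one a miss. RDTSCP waits for prior loads to complete, so
   the measured interval covers the load being timed. */
static uint64_t time_access(volatile uint8_t *p) {
    unsigned aux;
    uint64_t start = __rdtsc();
    (void)*p;                      /* the access being measured */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}
```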
<p-en>
	The fundamental tool of a cache timing attack is an attacker process that measures the latency of accesses to carefully designated memory locations in its own address space. The memory locations are chosen so that they map to the same cache lines as those of some interesting memory locations in a victim process, in a cache that is shared between the attacker and the victim. This requires in-depth knowledge of the shared cache's organization (§2.11.2).
</p-en>
<p-ch>
	缓存定时攻击的基本工具是一个攻击者进程，它测量对自己地址空间中精心选定的内存位置的访问延迟。选择这些内存位置时，要使它们在攻击者与受害者共享的缓存中，映射到与受害者进程中某些感兴趣的内存位置相同的缓存行。这需要深入了解共享缓存的组织结构（§2.11.2）。
</p-ch>
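<p-en>
	In the textbook model of a set-associative cache (the sliced, hashed L3 in recent Intel CPUs is more complicated), the set an address falls into is determined by simple arithmetic, which is what lets the attacker pick conflicting addresses. The parameters below are illustrative.
</p-en>
<p-ch>
	在组相联缓存的教科书模型中（近期英特尔CPU的分片、哈希化L3缓存要复杂得多），一个地址落入哪个缓存组由简单的算术决定，攻击者正是据此挑选冲突地址的。以下参数仅为示例。
</p-ch>

```c
#include <stdint.h>

/* Textbook set-associative mapping: an address's set index is its
   line number modulo the number of sets. Two addresses contend for
   the same set exactly when these indices match. Typical example
   parameters: 64-byte lines, 8192 sets. */
static unsigned cache_set_index(uintptr_t addr,
                                unsigned line_size,
                                unsigned num_sets) {
    return (unsigned)((addr / line_size) % num_sets);
}
```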
<p-en>
	Armed with the knowledge of the cache's organization, the attacker process sets up the attack by accessing its own memory in such a way that it fills up all the cache sets that would hold the victim's interesting memory locations. After the targeted cache sets are full, the attacker allows the victim process to execute. When the victim process accesses an interesting memory location in its own address space, the shared cache must evict one of the cache lines holding the attacker's memory locations.
</p-en>
<p-ch>
	在掌握了缓存组织结构的知识后，攻击者进程通过访问自己的内存来布置攻击，使其填满所有可能存放受害者感兴趣内存位置的缓存组。在目标缓存组被填满后，攻击者让受害者进程执行。当受害者进程访问自己地址空间中的一个感兴趣的内存位置时，共享缓存必须驱逐一条存放着攻击者内存位置的缓存行。
</p-ch>
<p-en>
	As the victim executes, the attacker process repeatedly times accesses to its own memory locations. When the access times indicate that a location was evicted from the cache, the attacker can conclude that the victim accessed an interesting memory location in its own address space. Over time, the attacker collects the results of many measurements and learns a subset of the victim's memory access pattern. If the victim processes sensitive information using data-dependent memory fetches, the attacker may be able to deduce the sensitive information from the learned memory access pattern.
</p-en>
<p-ch>
	在受害者执行的过程中，攻击者进程反复测量对自己内存位置的访问时间。当访问时间表明某个位置已被驱逐出缓存时，攻击者就可以得出结论：受害者访问了自己地址空间中的某个感兴趣的内存位置。随着时间的推移，攻击者收集大量测量结果，从而获知受害者内存访问模式的一个子集。如果受害者使用依赖于数据的内存读取来处理敏感信息，攻击者就可能从获知的内存访问模式中推断出这些敏感信息。
</p-ch>
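<p-en>
	The prime and probe phases described above can be sketched as follows. This is a simplified outline under stated assumptions, not a working attack: building the eviction set (attacker-owned addresses that map to the victim's cache set), calibrating the hit/miss threshold, and synchronizing with the victim are all substantial engineering problems in practice.
</p-en>
<p-ch>
	上文描述的填充（prime）和探测（probe）阶段可以大致勾勒如下。这只是一个基于假设参数的简化示意，而非可用的攻击：在实践中，构造驱逐集（映射到受害者缓存组的攻击者自有地址）、校准命中/未命中阈值、以及与受害者同步，都是相当大的工程问题。
</p-ch>

```c
#include <stdint.h>
#include <x86intrin.h>  /* __rdtsc, __rdtscp (GCC/Clang, x86-64) */

#define WAYS      8   /* associativity of the targeted cache (assumed) */
#define THRESHOLD 80  /* hit/miss cutoff in cycles (needs calibration) */

/* PRIME: fill the targeted cache set with the attacker's own lines. */
static void prime(volatile uint8_t **eviction_set) {
    for (int i = 0; i < WAYS; i++)
        (void)*eviction_set[i];
}

/* PROBE: re-time each line after the victim has run; a slow access
   means the victim evicted that line, i.e. it touched the set. */
static int probe(volatile uint8_t **eviction_set) {
    int evictions = 0;
    for (int i = 0; i < WAYS; i++) {
        unsigned aux;
        uint64_t start = __rdtsc();
        (void)*eviction_set[i];
        uint64_t end = __rdtscp(&aux);
        if (end - start > THRESHOLD)
            evictions++;
    }
    return evictions;
}
```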
<subsection-title-en>3.8.2 Practical Considerations</subsection-title-en>
<subsection-title-ch>3.8.2 实际考虑</subsection-title-ch>

<p-en>
	Cache timing attacks require control over a software process that shares a cache memory with the victim process. Therefore, a cache timing attack that targets the L2 cache would have to rely on the system software to schedule a software thread on a logical processor in the same core as the target software, whereas an attack on the L3 cache can be performed using any logical processor on the same CPU. The latter attack relies on the fact that the L3 cache is inclusive, which greatly simplifies the processor's cache coherence implementation (§2.11.3).
</p-en>
<p-ch>
	缓存定时攻击需要控制一个与受害者进程共享缓存的软件进程。因此，针对L2缓存的缓存定时攻击必须依靠系统软件把一个软件线程调度到与目标软件同一核心的逻辑处理器上，而针对L3缓存的攻击则可以使用同一CPU上的任何逻辑处理器来进行。后一种攻击依赖于L3缓存的包含性（inclusive），这一性质大大简化了处理器缓存一致性的实现（§2.11.3）。
</p-ch>
<p-en>
	The cache sharing requirement implies that L3 cache attacks are feasible in an IaaS environment, whereas L2 cache attacks become a significant concern when running sensitive software on a user's desktop.
</p-en>
<p-ch>
	缓存共享的要求意味着L3缓存攻击在IaaS环境中是可行的，而当用户桌面上运行敏感软件时，L2缓存攻击就成为一个重要的问题。
</p-ch>
<p-en>
	Out-of-order execution (§2.10) can introduce noise in cache timing attacks. First, memory accesses may not be performed in program order, which can impact the lines selected by the cache eviction algorithms. Second, out-of-order execution may result in cache fills that do not correspond to executed instructions. For example, a load that follows a faulting instruction may be scheduled and executed before the fault is detected.
</p-en>
<p-ch>
	乱序执行（§2.10）会给缓存定时攻击引入噪声。首先，内存访问可能不按程序顺序执行，这会影响缓存驱逐算法所选择的缓存行。其次，乱序执行可能导致与已执行指令不对应的缓存填充。例如，紧跟在一条出错指令之后的加载，可能在错误被检测到之前就已被调度并执行。
</p-ch>
<p-en>
	Cache timing attacks must account for speculative execution, as mispredicted memory accesses can still cause cache fills. Therefore, the attacker may observe cache fills that don't correspond to instructions that were actually executed by the victim software. Memory prefetching adds further noise to cache timing attacks, as the attacker may observe cache fills that don't correspond to instructions in the victim code, even when accounting for speculative execution.
</p-en>
<p-ch>
	缓存定时攻击必须考虑推测执行，因为被错误预测的内存访问仍然会导致缓存填充。因此，攻击者可能观察到与受害者软件实际执行的指令不对应的缓存填充。内存预取给缓存定时攻击增加了更多噪声，因为即使把推测执行考虑在内，攻击者仍可能观察到与受害者代码中的指令不对应的缓存填充。
</p-ch>
<p-en>
	Despite these difficulties, cache timing attacks are known to retrieve cryptographic keys used by AES [25, 146], RSA [28], Diffie-Hellman [123], and elliptic-curve cryptography [27].
</p-en>
<p-ch>
	尽管存在这些困难，但已知缓存定时攻击可以检索AES[25，146]、RSA[28]、Diffie-Hellman[123]和椭圆曲线密码学[27]使用的加密密钥。
</p-ch>
<p-en>
	Early attacks required access to the victim's CPU core, but more sophisticated recent attacks [131, 196] are able to use the L3 cache, which is shared by all the cores on a CPU die. L3-based attacks can be particularly devastating in cloud computing scenarios, where running software on the same computer as a victim application only requires modest statistical analysis skills and a small amount of money [157]. Furthermore, cache timing attacks were recently demonstrated using JavaScript code in a page visited by a Web browser [145].
</p-en>
<p-ch>
	早期的攻击需要访问受害者所在的CPU核心，但最近更复杂的攻击[131, 196]已能利用由CPU裸片上所有核心共享的L3缓存。基于L3的攻击在云计算场景中尤其具有破坏性：在这种场景下，把软件运行在与受害应用程序相同的计算机上，只需要适度的统计分析技能和少量资金[157]。此外，最近有人利用Web浏览器所访问页面中的JavaScript代码演示了缓存定时攻击[145]。
</p-ch>
<p-en>
	Given this pattern of vulnerabilities, ignoring cache timing attacks is dangerously similar to ignoring the string of demonstrated attacks which led to the deprecation of SHA-1 [3, 6, 9].
</p-en>
<p-ch>
	鉴于这种漏洞模式，忽视缓存定时攻击，与当年忽视那一连串最终导致SHA-1被弃用的已被证实的攻击[3, 6, 9]危险地相似。
</p-ch>
<subsection-title-en>3.8.3 Defending against Cache Timing Attacks</subsection-title-en>
<subsection-title-ch>3.8.3 防范缓存定时攻击</subsection-title-ch>

<p-en>
	Fortunately, invalidating any of the preconditions for cache timing attacks is sufficient for defending against them. The easiest precondition to focus on is that the attacker must have access to memory locations that map to the same sets in a cache as the victim's memory. This assumption can be invalidated by the judicious use of a cache partitioning scheme.
</p-en>
<p-ch>
	幸运的是，只要使缓存定时攻击的任何一个先决条件失效，就足以对其进行防御。最容易着手的先决条件是：攻击者必须能够访问在某个缓存中与受害者内存映射到相同缓存组的内存位置。通过明智地使用缓存分区方案，可以使这一假设不再成立。
</p-ch>
<p-en>
	Performance concerns aside, the main difficulty associated with cache partitioning schemes is that they must be implemented by a trusted party. When the system software is trusted, it can (for example) use the principles behind page coloring [117, 177] to partition the caches [129] between mutually distrusting parties. This comes down to setting up the page tables in such a way that no two mutually distrusting software modules are stored in physical pages that map to the same sets in any cache memory. However, if the system software is not trusted, the cache partitioning scheme must be implemented in hardware.
</p-en>
<p-ch>
	抛开性能问题不谈，与缓存分区方案相关的主要困难是，它们必须由受信任的一方来实现。当系统软件是可信的，它可以（例如）利用页面着色[117，177]背后的原理，在相互不信任的各方之间划分缓存[129]。这归根结底是以这样的方式设置页表，即在任何缓存存储器中，没有两个相互不信任的软件模块存储在映射到相同集的物理页中。但是，如果系统软件不被信任，则必须在硬件中实现缓存分区方案。
</p-ch>
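<p-en>
	The page-coloring idea can be made concrete with a little arithmetic. A page's color is formed by the cache-set index bits that lie above the page offset; two physical pages of different colors can never contend for a cache set, so giving mutually distrusting modules disjoint colors partitions the cache. The parameters below (4 KB pages, 64-byte lines, 8192 sets) are illustrative assumptions.
</p-en>
<p-ch>
	页面着色的思想可以用一点算术来具体化。一个页面的颜色由高于页内偏移的那些缓存组索引位构成；颜色不同的两个物理页永远不会争用同一缓存组，因此给相互不信任的模块分配互不相交的颜色即可划分缓存。以下参数（4 KB页、64字节缓存行、8192个缓存组）仅为假设示例。
</p-ch>

```c
#include <stdint.h>

#define PAGE_SHIFT 12  /* 4 KB pages (illustrative) */
#define LINE_SHIFT 6   /* 64-byte cache lines */
#define SET_BITS   13  /* 8192 sets */

/* A physical page's color: the set-index bits above the page offset.
   With these parameters there are 2^(13 - (12 - 6)) = 128 colors. */
static unsigned page_color(uint64_t phys_addr) {
    unsigned set = (unsigned)((phys_addr >> LINE_SHIFT)
                              & ((1u << SET_BITS) - 1));
    return set >> (PAGE_SHIFT - LINE_SHIFT);
}
```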
<p-en>
	The other interesting precondition is that the victim must access its memory in a data-dependent fashion that allows the attacker to infer private information from the observed memory access pattern. It becomes tempting to think that cache timing attacks can be prevented by eliminating data-dependent memory accesses from all the code handling sensitive data.
</p-en>
<p-ch>
	另一个有趣的前提条件是，受害者必须以数据依赖的方式访问其内存，使攻击者能够从观察到的内存访问模式中推断出私人信息。这就很容易让人想到，通过消除所有处理敏感数据的代码中与数据相关的内存访问，可以防止缓存定时攻击。
</p-ch>
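<p-en>
	To illustrate what eliminating data-dependent memory accesses means, the hypothetical helper below replaces a table lookup indexed by a secret with a scan that touches every entry and selects the wanted one arithmetically, so the cache footprint is independent of the secret. (Whether a given compiler preserves this property is itself a subtle question.)
</p-en>
<p-ch>
	为了说明消除数据依赖性内存访问的含义，下面这个假想的辅助函数用"扫描所有表项并以算术方式选出所需项"来代替以秘密值为下标的查表，使缓存足迹与秘密值无关。（某个具体编译器是否会保持这一性质，本身也是一个微妙的问题。）
</p-ch>

```c
#include <stdint.h>
#include <stddef.h>

/* Data-oblivious lookup: read every table entry and keep the one at
   secret_index using a mask instead of a secret-dependent address.
   Contrast with table[secret_index], whose cache footprint leaks
   the index to a cache timing attacker. */
static uint8_t ct_lookup(const uint8_t *table, size_t len,
                         size_t secret_index) {
    uint8_t result = 0;
    for (size_t i = 0; i < len; i++) {
        /* mask is 0xFF when i == secret_index, 0x00 otherwise */
        uint8_t mask = (uint8_t)(0 - (uint64_t)(i == secret_index));
        result |= table[i] & mask;
    }
    return result;
}
```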
<p-en>
	However, removing data-dependent memory accesses is difficult to accomplish in practice because instruction fetches must also be taken into consideration. [115] gives an idea of the level of effort required to remove data-dependent accesses from AES, which is a relatively simple data processing algorithm. At the time of this writing, we are not aware of any approach that scales to large pieces of software.
</p-en>
<p-ch>
	然而，去除数据依赖性的内存访问在实践中很难完成，因为还必须考虑指令获取。[115]给出了从AES中移除数据依赖性访问所需的努力程度，AES是一种相对简单的数据处理算法。在写这篇文章的时候，我们还不知道有什么方法可以扩展到大型软件中。
</p-ch>
<p-en>
	While the focus of this section is cache timing attacks, we would like to point out that any shared resource can lead to information leakage. A worrying example is hyper-threading (§2.9.4), where each CPU core is represented as two logical processors, and the threads executing on these two processors share execution units. An attacker who can run a process on a logical processor sharing a core with a victim process can use RDTSCP [152] to learn which execution units are in use, and infer what instructions are executed by the victim process.
</p-en>
<p-ch>
	虽然本节的重点是缓存时序攻击，但我们想指出，任何共享资源都可能导致信息泄露。一个令人担忧的例子是超线程(§2.9.4)，其中每个CPU核被表示为两个逻辑处理器，在这两个处理器上执行的线程共享执行单元。攻击者如果能在与受害者进程共享一个核心的逻辑处理器上运行一个进程，就可以使用RDTSCP[152]来了解哪些执行单元正在使用，并推断出受害者进程执行了哪些指令。
</p-ch>

</body>
</html>