<!DOCTYPE HTML>
<html lang="en" class="sidebar-visible no-js light">
    <head>
        <!-- Book generated using mdBook -->
        <meta charset="UTF-8">
        <title>learning-gem5</title>
        <meta name="robots" content="noindex" />
        <!-- Custom HTML head -->
        <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
        <meta name="description" content="">
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <meta name="theme-color" content="#ffffff" />

        <link rel="icon" href="favicon.svg">
        <link rel="shortcut icon" href="favicon.png">
        <link rel="stylesheet" href="css/variables.css">
        <link rel="stylesheet" href="css/general.css">
        <link rel="stylesheet" href="css/chrome.css">
        <link rel="stylesheet" href="css/print.css" media="print">
        <!-- Fonts -->
        <link rel="stylesheet" href="FontAwesome/css/font-awesome.css">
        <link rel="stylesheet" href="fonts/fonts.css">
        <!-- Highlight.js Stylesheets -->
        <link rel="stylesheet" href="highlight.css">
        <link rel="stylesheet" href="tomorrow-night.css">
        <link rel="stylesheet" href="ayu-highlight.css">

        <!-- Custom theme stylesheets -->
    </head>
    <body>
        <!-- Provide site root to javascript -->
        <script type="text/javascript">
            var path_to_root = "";
            var default_theme = window.matchMedia("(prefers-color-scheme: dark)").matches ? "navy" : "light";
        </script>

        <!-- Work around some values being stored in localStorage wrapped in quotes -->
        <script type="text/javascript">
            try {
                var theme = localStorage.getItem('mdbook-theme');
                var sidebar = localStorage.getItem('mdbook-sidebar');

                if (theme.startsWith('"') && theme.endsWith('"')) {
                    localStorage.setItem('mdbook-theme', theme.slice(1, theme.length - 1));
                }

                if (sidebar.startsWith('"') && sidebar.endsWith('"')) {
                    localStorage.setItem('mdbook-sidebar', sidebar.slice(1, sidebar.length - 1));
                }
            } catch (e) { }
        </script>

        <!-- Set the theme before any content is loaded, prevents flash -->
        <script type="text/javascript">
            var theme;
            try { theme = localStorage.getItem('mdbook-theme'); } catch(e) { }
            if (theme === null || theme === undefined) { theme = default_theme; }
            var html = document.querySelector('html');
            html.classList.remove('no-js')
            html.classList.remove('light')
            html.classList.add(theme);
            html.classList.add('js');
        </script>

        <!-- Hide / unhide sidebar before it is displayed -->
        <script type="text/javascript">
            var html = document.querySelector('html');
            var sidebar = 'hidden';
            if (document.body.clientWidth >= 1080) {
                try { sidebar = localStorage.getItem('mdbook-sidebar'); } catch(e) { }
                sidebar = sidebar || 'visible';
            }
            html.classList.remove('sidebar-visible');
            html.classList.add("sidebar-" + sidebar);
        </script>

        <nav id="sidebar" class="sidebar" aria-label="Table of contents">
            <div class="sidebar-scrollbox">
                <ol class="chapter"><li class="chapter-item expanded affix "><a href="part0_introduction.html">Learning gem-5</a></li><li class="chapter-item expanded "><a href="part0_introduction.html"><strong aria-hidden="true">1.</strong> part0_introduction</a></li><li class="chapter-item expanded "><a href="part1/part1_1_building.html"><strong aria-hidden="true">2.</strong> part1</a></li><li><ol class="section"><li class="chapter-item expanded "><a href="part1/part1_1_building.html"><strong aria-hidden="true">2.1.</strong> part1_1_building</a></li><li class="chapter-item expanded "><a href="part1/part1_2_simple_config.html"><strong aria-hidden="true">2.2.</strong> part1_2_simple_config</a></li><li class="chapter-item expanded "><a href="part1/part1_3_cache_config.html"><strong aria-hidden="true">2.3.</strong> part1_3_cache_config</a></li><li class="chapter-item expanded "><a href="part1/part1_4_gem5_stats.html"><strong aria-hidden="true">2.4.</strong> part1_4_gem5_stats</a></li><li class="chapter-item expanded "><a href="part1/part1_5_gem5_example_configs.html"><strong aria-hidden="true">2.5.</strong> part1_5_gem5_example_configs</a></li><li class="chapter-item expanded "><a href="part1/part1_6_extending_configs.html"><strong aria-hidden="true">2.6.</strong> part1_6_extending_configs</a></li></ol></li><li class="chapter-item expanded "><a href="part2/part2_0_environment.html"><strong aria-hidden="true">3.</strong> part2</a></li><li><ol class="section"><li class="chapter-item expanded "><a href="part2/part2_0_environment.html"><strong aria-hidden="true">3.1.</strong> part2_0_environment</a></li><li class="chapter-item expanded "><a href="part2/part2_1_helloobject.html"><strong aria-hidden="true">3.2.</strong> part2_1_helloobject</a></li><li class="chapter-item expanded "><a href="part2/part2_2_debugging.html"><strong aria-hidden="true">3.3.</strong> part2_2_debugging</a></li><li class="chapter-item expanded "><a href="part2/part2_3_events.html"><strong 
aria-hidden="true">3.4.</strong> part2_3_events</a></li><li class="chapter-item expanded "><a href="part2/part2_4_parameters.html"><strong aria-hidden="true">3.5.</strong> part2_4_parameters</a></li><li class="chapter-item expanded "><a href="part2/part2_5_memoryobject.html"><strong aria-hidden="true">3.6.</strong> part2_5_memoryobject</a></li><li class="chapter-item expanded "><a href="part2/part2_6_simplecache.html"><strong aria-hidden="true">3.7.</strong> part2_6_simplecache</a></li><li class="chapter-item expanded "><a href="part2/part2_7_arm_power_modelling.html"><strong aria-hidden="true">3.8.</strong> part2_7_arm_power_modelling</a></li><li class="chapter-item expanded "><a href="part2/part2_8_arm_dvfs_support.html"><strong aria-hidden="true">3.9.</strong> part2_8_arm_dvfs_support</a></li></ol></li><li class="chapter-item expanded "><a href="part3/part3_00_MSIntro.html"><strong aria-hidden="true">4.</strong> part3</a></li><li><ol class="section"><li class="chapter-item expanded "><a href="part3/part3_00_MSIntro.html"><strong aria-hidden="true">4.1.</strong> part3_00_MSIntro</a></li><li class="chapter-item expanded "><a href="part3/part3_01_cache-intro.html"><strong aria-hidden="true">4.2.</strong> part3_01_cache-intro</a></li><li class="chapter-item expanded "><a href="part3/part3_02_cache-declarations.html"><strong aria-hidden="true">4.3.</strong> part3_02_cache-declarations</a></li><li class="chapter-item expanded "><a href="part3/part3_03_cache-in-ports.html"><strong aria-hidden="true">4.4.</strong> part3_03_cache-in-ports</a></li><li class="chapter-item expanded "><a href="part3/part3_04_cache_actions.html"><strong aria-hidden="true">4.5.</strong> part3_04_cache_actions</a></li><li class="chapter-item expanded "><a href="part3/part3_05_cache_transitions.html"><strong aria-hidden="true">4.6.</strong> part3_05_cache_transitions</a></li><li class="chapter-item expanded "><a href="part3/part3_06_directory.html"><strong aria-hidden="true">4.7.</strong> 
part3_06_directory</a></li><li class="chapter-item expanded "><a href="part3/part3_07_MSIbuilding.html"><strong aria-hidden="true">4.8.</strong> part3_07_MSIbuilding</a></li><li class="chapter-item expanded "><a href="part3/part3_08_configuration.html"><strong aria-hidden="true">4.9.</strong> part3_08_configuration</a></li><li class="chapter-item expanded "><a href="part3/part3_09_running.html"><strong aria-hidden="true">4.10.</strong> part3_09_running</a></li><li class="chapter-item expanded "><a href="part3/part3_10_MSIdebugging.html"><strong aria-hidden="true">4.11.</strong> part3_10_MSIdebugging</a></li><li class="chapter-item expanded "><a href="part3/part3_11_simple-MI_example.html"><strong aria-hidden="true">4.12.</strong> part3_11_simple-MI_example</a></li></ol></li><li class="chapter-item expanded "><a href="part4_gem5_101.html"><strong aria-hidden="true">5.</strong> part4_gem5_101</a></li><li class="chapter-item expanded "><a href="http://doxygen.gem5.org/develop/index.html"><strong aria-hidden="true">6.</strong> part4_gem5_102</a></li></ol>
            </div>
            <div id="sidebar-resize-handle" class="sidebar-resize-handle"></div>
        </nav>

        <div id="page-wrapper" class="page-wrapper">

            <div class="page">
                <div id="menu-bar-hover-placeholder"></div>
                <div id="menu-bar" class="menu-bar sticky bordered">
                    <div class="left-buttons">
                        <button id="sidebar-toggle" class="icon-button" type="button" title="Toggle Table of Contents" aria-label="Toggle Table of Contents" aria-controls="sidebar">
                            <i class="fa fa-bars"></i>
                        </button>
                        <button id="theme-toggle" class="icon-button" type="button" title="Change theme" aria-label="Change theme" aria-haspopup="true" aria-expanded="false" aria-controls="theme-list">
                            <i class="fa fa-paint-brush"></i>
                        </button>
                        <ul id="theme-list" class="theme-popup" aria-label="Themes" role="menu">
                            <li role="none"><button role="menuitem" class="theme" id="light">Light (default)</button></li>
                            <li role="none"><button role="menuitem" class="theme" id="rust">Rust</button></li>
                            <li role="none"><button role="menuitem" class="theme" id="coal">Coal</button></li>
                            <li role="none"><button role="menuitem" class="theme" id="navy">Navy</button></li>
                            <li role="none"><button role="menuitem" class="theme" id="ayu">Ayu</button></li>
                        </ul>
                        <button id="search-toggle" class="icon-button" type="button" title="Search. (Shortkey: s)" aria-label="Toggle Searchbar" aria-expanded="false" aria-keyshortcuts="S" aria-controls="searchbar">
                            <i class="fa fa-search"></i>
                        </button>
                    </div>

                    <h1 class="menu-title">learning-gem5</h1>

                    <div class="right-buttons">
                        <a href="print.html" title="Print this book" aria-label="Print this book">
                            <i id="print-button" class="fa fa-print"></i>
                        </a>
                    </div>
                </div>

                <div id="search-wrapper" class="hidden">
                    <form id="searchbar-outer" class="searchbar-outer">
                        <input type="search" id="searchbar" name="searchbar" placeholder="Search this book ..." aria-controls="searchresults-outer" aria-describedby="searchresults-header">
                    </form>
                    <div id="searchresults-outer" class="searchresults-outer hidden">
                        <div id="searchresults-header" class="searchresults-header"></div>
                        <ul id="searchresults">
                        </ul>
                    </div>
                </div>
                <!-- Apply ARIA attributes after the sidebar and the sidebar toggle button are added to the DOM -->
                <script type="text/javascript">
                    document.getElementById('sidebar-toggle').setAttribute('aria-expanded', sidebar === 'visible');
                    document.getElementById('sidebar').setAttribute('aria-hidden', sidebar !== 'visible');
                    Array.from(document.querySelectorAll('#sidebar a')).forEach(function(link) {
                        link.setAttribute('tabIndex', sidebar === 'visible' ? 0 : -1);
                    });
                </script>

                <div id="content" class="content">
                    <main>
<hr />
<p><em>Learning gem5 · Author: Jason Lowe-Power</em></p>
<h1 id="介绍"><a class="header" href="#介绍">Introduction</a></h1>
<p>The goal of this document is to give you, the reader, a thorough introduction to how to use gem5 and the gem5 codebase. The purpose of this document is not to provide a detailed description of every feature in gem5. After reading this document, you should feel comfortable using gem5 in the classroom and for computer architecture research. Additionally, you should be able to modify and extend gem5 and then contribute your improvements back to the main gem5 repository.</p>
<p>This document is colored by my personal experience using gem5 over the past six years as a graduate student at the University of Wisconsin-Madison. The examples provided here are just one way of doing things. Unlike Python, whose mantra is "There should be one--and preferably only one--obvious way to do it" (from the Zen of Python; see <a href="https://www.python.org/dev/peps/pep-0020/#the-zen-of-python">The Zen of Python</a>), in gem5 there are many different ways to accomplish the same thing. Therefore, many of the examples presented in this book are my opinion of the best way to do things.</p>
<p>One important lesson I have learned (the hard way) is that when using a complex tool like gem5, it is important to actually understand how it works before using it.</p>
<p>You can find the source for this book at <a href="https://gem5.googlesource.com/public/gem5-website/+/refs/heads/stable/_pages/documentation/learning_gem5/">https://gem5.googlesource.com/public/gem5-website/+/refs/heads/stable/_pages/documentation/learning_gem5/</a>.</p>
<h2 id="什么是gem5"><a class="header" href="#什么是gem5">What is gem5?</a></h2>
<p>gem5 is a modular, discrete-event-driven computer system simulator platform. That means that:</p>
<ol>
<li>gem5's components can be rearranged, parameterized, extended, or replaced easily to suit your needs.</li>
<li>It simulates the passing of time as a series of discrete events.</li>
<li>Its intended use is to simulate one or more computer systems in various ways.</li>
<li>It's more than just a simulator; it's a simulator platform that lets you use as many of its premade components as you want to build your own simulated system.</li>
</ol>
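<p>Point 2 above, discrete-event simulation, can be illustrated with a short sketch: pending events sit in a priority queue ordered by timestamp, and simulated time jumps directly from one event to the next instead of advancing in fixed steps. This is a toy illustration of the idea only, not gem5's actual event-queue API (the names <code>EventQueue</code>, <code>schedule</code>, and <code>run</code> here are my own):</p>
<pre><code class="language-python">import heapq

class EventQueue:
    """Toy discrete-event queue: time jumps between scheduled events."""
    def __init__(self):
        self.now = 0           # current simulated time (ticks)
        self._events = []      # heap of (tick, seq, callback)
        self._seq = 0          # tie-breaker for events at the same tick

    def schedule(self, delay, callback):
        heapq.heappush(self._events, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self):
        while self._events:
            tick, _, callback = heapq.heappop(self._events)
            self.now = tick    # leap straight to the next event
            callback()

eq = EventQueue()
log = []
eq.schedule(100, lambda: log.append(("first", eq.now)))
eq.schedule(250, lambda: log.append(("second", eq.now)))
eq.run()
print(log)   # [('first', 100), ('second', 250)]
</code></pre>
<p>gem5's real event queue is built on the same principle, with time measured in ticks.</p>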
<p>gem5 is written primarily in C++ and Python, and most components are provided under a BSD-style license. It can simulate a complete system with devices and an operating system in full system mode (FS mode), or user-space-only programs, where system services are provided directly by the simulator, in syscall emulation mode (SE mode). There are varying levels of support for executing Alpha, ARM, MIPS, Power, SPARC, RISC-V, and 64-bit x86 binaries on CPU models that include two simple single-CPI models, an out-of-order model, and an in-order pipelined model. A memory system can be flexibly built out of caches and crossbars, or out of the Ruby simulator, which provides a flexible infrastructure for modeling more elaborate memory systems.</p>
<p>There are many components and features not mentioned here, but from just this partial list it is clear that gem5 is a sophisticated and capable simulation platform. Even with everything gem5 can do today, active development continues with the support of individuals and some companies, and new features are added and existing ones improved on a regular basis.</p>
<h2 id="开箱即用的功能"><a class="header" href="#开箱即用的功能">Out-of-the-box functionality</a></h2>
<p>gem5 is designed for use in computer architecture research, but if you are trying to research something new, it probably cannot evaluate your idea right out of the box. If it could, that would likely mean someone had already evaluated a similar idea and published it.</p>
<p>To get the most out of gem5, you will most likely need to add features specific to your project's goals. gem5's modular design should help you make modifications without needing to understand every part of the simulator.</p>
<p>As you add the features you need, please consider contributing your changes back to gem5. That way, others can benefit from your hard work, and gem5 becomes a better simulator.</p>
<h2 id="寻求帮助"><a class="header" href="#寻求帮助">Asking for help</a></h2>
<p>gem5 has two main mailing lists where you can ask for help or advice. gem5-dev is intended for developers working on the main version of gem5. That is the version distributed from the website and the most likely basis for your own work. gem5-users is a larger mailing list, intended for people working on their own projects which, at least initially, will not be distributed as part of the official version of gem5.</p>
<p>Most of the time, gem5-users is the right mailing list. Most of the people on gem5-dev are also on gem5-users, including all of the main developers, and in addition many other members of the gem5 community will see your post. That helps you, since they may be able to answer your question, and it helps them, since they will see the answers people send you. To find more information about the mailing lists, to sign up, or to browse the archived posts, visit <a href="https://www.gem5.org/mailing_lists">mailing lists</a>.</p>
<p>Before reporting a problem on the mailing lists, please read <a href="https://www.gem5.org/documentation/reporting_problems">Reporting Problems</a>.</p>
<div style="break-before: page; page-break-before: always;"></div><hr />
<p><em>Building gem5 · Author: Jason Lowe-Power</em></p>
<h1 id="构建-gem5"><a class="header" href="#构建-gem5">Building gem5</a></h1>
<p>This chapter covers the details of how to set up a gem5 development environment and build gem5.</p>
<h2 id="gem5的要求"><a class="header" href="#gem5的要求">Requirements for gem5</a></h2>
<p>See <a href="http://www.gem5.org/documentation/general_docs/building#dependencies">gem5 requirements</a> for more details.</p>
<p>On Ubuntu, you can install all of the required dependencies with the following command. The requirements are detailed below:</p>
<pre><code class="language-bash">sudo apt install build-essential git m4 scons zlib1g zlib1g-dev libprotobuf-dev protobuf-compiler libprotoc-dev libgoogle-perftools-dev python3-dev python3
</code></pre>
<ol>
<li>
<ul>
<li>
<p>git (<a href="https://git-scm.com/">git</a>):</p>
<p>The gem5 project uses <a href="https://git-scm.com/">Git</a> for version control. Git is a distributed version control system; more information about it can be found by following the link. Git should be installed by default on most platforms. To install Git on Ubuntu yourself, use <code>sudo apt install git</code>.</p>
</li>
</ul>
</li>
<li>
<ul>
<li>
<p>gcc 7+</p>
<p>You may need to use environment variables to point to a non-default version of gcc. On Ubuntu, you can install a development environment with <code>sudo apt install build-essential</code>.</p>
</li>
</ul>
<p><strong>We support GCC versions &gt;=7, up to GCC 10</strong></p>
</li>
<li>
<ul>
<li>
<p><a href="http://www.scons.org/">SCons 3.0+</a></p>
<p>gem5 uses SCons as its build environment. SCons is like make on steroids and uses Python scripts for all aspects of the build process. This allows for a very flexible (if slow) build system. To get SCons on Ubuntu, use <code>sudo apt install scons</code>.</p>
</li>
</ul>
</li>
<li>
<ul>
<li>
<p>Python 3.6+</p>
<p>gem5 relies on the Python development libraries. To install these on Ubuntu, use <code>sudo apt install python3-dev</code>.</p>
</li>
</ul>
</li>
<li>
<ul>
<li>
<p><a href="https://developers.google.com/protocol-buffers/">protobuf</a> 2.1+ (<strong>optional</strong>)</p>
<p>"Protocol buffers are a language-neutral, platform-neutral, extensible mechanism for serializing structured data." In gem5, the <a href="https://developers.google.com/protocol-buffers/">protobuf</a> library is used for trace generation and playback. <a href="https://developers.google.com/protocol-buffers/">protobuf</a> is not a required package unless you plan on using it for trace generation and playback: <code>sudo apt install libprotobuf-dev protobuf-compiler libgoogle-perftools-dev</code>.</p>
</li>
</ul>
</li>
<li>
<ul>
<li>
<p><a href="https://www.boost.org/">boost</a> (<strong>optional</strong>)</p>
<p>The Boost library is a set of general-purpose C++ libraries. It is a necessary dependency if you wish to use the SystemC implementation: <code>sudo apt install libboost-all-dev</code>.</p>
</li>
</ul>
</li>
</ol>
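<p>Before starting a build, it can save time to confirm that the tools above are actually visible to your shell and that your Python is new enough. The helper below is a hypothetical convenience script, not part of gem5 (the function name and tool list are my own):</p>
<pre><code class="language-python">import shutil
import sys

def check_build_deps(min_python=(3, 6), tools=("git", "scons", "g++", "m4")):
    """Return a list of human-readable problems; an empty list means all good."""
    problems = []
    if sys.version_info &lt; min_python:
        problems.append("Python %d.%d+ required, found %s"
                        % (min_python[0], min_python[1], sys.version.split()[0]))
    for tool in tools:
        if shutil.which(tool) is None:   # not found on PATH
            problems.append("missing tool: %s" % tool)
    return problems

for problem in check_build_deps():
    print("WARNING:", problem)
</code></pre>
<p>Running it before a long SCons build surfaces missing packages up front instead of minutes into the configure step.</p>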
<h2 id="获取代码"><a class="header" href="#获取代码">Getting the code</a></h2>
<p>Change directories to where you want to download the gem5 source. Then, to clone the repository, use the <code>git clone</code> command.</p>
<pre><code class="language-bash">git clone https://gitee.com/mirrors/gem5
</code></pre>
<p>You can now change directories into <code>gem5</code>, which contains all of the gem5 code.</p>
<h2 id="构建你的第一个-gem5"><a class="header" href="#构建你的第一个-gem5">Your first gem5 build</a></h2>
<p>Let's start by building a basic x86 system. Currently, you must compile gem5 separately for every ISA that you want to simulate. Additionally, if you use Ruby (see the Ruby introduction chapter), you must have a separate compilation for every cache coherence protocol.</p>
<p>To build gem5, we will use SCons. SCons uses the SConstruct file (<code>gem5/SConstruct</code>) to set up a number of variables, and then uses the SConscript file in every subdirectory to find and compile all of the gem5 source code.</p>
<p>SCons automatically creates a <code>gem5/build</code> directory the first time it is run. In this directory you'll find the files generated by SCons, the compiler, etc. There is a separate directory for each set of options (ISA and cache coherence protocol) that gem5 is compiled with.</p>
<p>There are a number of default compilation options in the <code>build_opts</code> directory. These files specify the parameters passed to SCons when gem5 is first built. We'll use the X86 defaults and specify that we want to compile all of the CPU models. You can look at the file <code>build_opts/X86</code> to see the default values of the SCons options. You can also specify these options on the command line to override any default.</p>
<pre><code class="language-bash">python3 `which scons` build/X86/gem5.opt -j9
</code></pre>
<blockquote>
<p><strong>gem5 binary types</strong></p>
<p>The SCons scripts in gem5 currently have 5 different binaries you can build for gem5: debug, opt, fast, prof, and perf. These names are mostly self-explanatory, but are detailed below.</p>
<ul>
<li>
<p>debug</p>
<p>Built with no optimizations and with debug symbols. This binary is useful when debugging with a debugger, if the variables you need to inspect are optimized out in the opt version of gem5. Running with debug is slow compared to the other binaries.</p>
</li>
<li>
<p>opt</p>
<p>This binary is built with most optimizations on (e.g., -O3), but with debug symbols included. This binary is much faster than debug, but still contains enough debug information to be able to debug most problems.</p>
</li>
<li>
<p>fast</p>
<p>Built with all optimizations on (including link-time optimization on supported platforms) and with no debug symbols. Additionally, any asserts are removed, but panics and fatals are still included. fast is the highest-performing binary, and is much smaller than opt. However, it is only appropriate to use fast when you believe it is unlikely that your code has major bugs.</p>
</li>
<li>
<p>prof and perf</p>
<p>These two binaries are built for profiling gem5. prof includes profiling information for the GNU profiler (gprof), and perf includes profiling information for the Google performance tools (gperftools).</p>
</li>
</ul>
<p>The main argument passed to SCons is the target you want to build, <code>build/X86/gem5.opt</code>. In this case, we are building gem5.opt (an optimized binary with debug symbols). We want to build gem5 in the build/X86 directory. Since this directory doesn't exist yet, SCons will look in <code>build_opts</code> for the default parameters for X86. (Note: here I'm using -j9 to parallelize the build across 9 threads on my machine's 8 cores. You should choose an appropriate number for your machine, usually the number of cores + 1.)</p>
</blockquote>
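<p>The "number of cores + 1" rule of thumb above can be computed rather than hard-coded. A minimal sketch (my own helper, not part of gem5's build system) that prints a suitable scons invocation for the current machine:</p>
<pre><code class="language-python">import os

def scons_jobs(cores=None):
    """Follow the common 'cores + 1' rule of thumb for a -j value."""
    if cores is None:
        cores = os.cpu_count() or 1   # os.cpu_count() can return None
    return cores + 1

print("python3 `which scons` build/X86/gem5.opt -j%d" % scons_jobs())
</code></pre>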
<p>The output should look something like this:</p>
<pre><code class="language-bash">Checking for C header file Python.h... yes
Checking for C library pthread... yes
Checking for C library dl... yes
Checking for C library util... yes
Checking for C library m... yes
Checking for C library python2.7... yes
Checking for accept(0,0,0) in C++ library None... yes
Checking for zlibVersion() in C++ library z... yes
Checking for GOOGLE_PROTOBUF_VERIFY_VERSION in C++ library protobuf... yes
Checking for clock_nanosleep(0,0,NULL,NULL) in C library None... yes
Checking for timer_create(CLOCK_MONOTONIC, NULL, NULL) in C library None... no
Checking for timer_create(CLOCK_MONOTONIC, NULL, NULL) in C library rt... yes
Checking for C library tcmalloc... yes
Checking for backtrace_symbols_fd((void*)0, 0, 0) in C library None... yes
Checking for C header file fenv.h... yes
Checking for C header file linux/kvm.h... yes
Checking size of struct kvm_xsave ... yes
Checking for member exclude_host in struct perf_event_attr...yes
Building in /local.chinook/gem5/gem5-tutorial/gem5/build/X86
Variables file /local.chinook/gem5/gem5-tutorial/gem5/build/variables/X86 not found,
  using defaults in /local.chinook/gem5/gem5-tutorial/gem5/build_opts/X86
scons: done reading SConscript files.
scons: Building targets ...
 [ISA DESC] X86/arch/x86/isa/main.isa -&gt; generated/inc.d
 [NEW DEPS] X86/arch/x86/generated/inc.d -&gt; x86-deps
 [ENVIRONS] x86-deps -&gt; x86-environs
 [     CXX] X86/sim/main.cc -&gt; .o
 ....
 .... &lt;lots of output&gt;
 ....
 [   SHCXX] nomali/lib/mali_midgard.cc -&gt; .os
 [   SHCXX] nomali/lib/mali_t6xx.cc -&gt; .os
 [   SHCXX] nomali/lib/mali_t7xx.cc -&gt; .os
 [      AR]  -&gt; drampower/libdrampower.a
 [   SHCXX] nomali/lib/addrspace.cc -&gt; .os
 [   SHCXX] nomali/lib/mmu.cc -&gt; .os
 [  RANLIB]  -&gt; drampower/libdrampower.a
 [   SHCXX] nomali/lib/nomali_api.cc -&gt; .os
 [      AR]  -&gt; nomali/libnomali.a
 [  RANLIB]  -&gt; nomali/libnomali.a
 [     CXX] X86/base/date.cc -&gt; .o
 [    LINK]  -&gt; X86/gem5.opt
scons: done building targets.
</code></pre>
<p>When compilation is finished, you should have a working gem5 executable at <code>build/X86/gem5.opt</code>. The compilation can take a very long time, often 15 minutes or more, especially if you are compiling on a remote file system like AFS or NFS.</p>
<h2 id="常见错误"><a class="header" href="#常见错误">Common errors</a></h2>
<h3 id="错误的-gcc-版本"><a class="header" href="#错误的-gcc-版本">Wrong gcc version</a></h3>
<pre><code class="language-bash">Error: gcc version 5 or newer required.
       Installed version: 4.4.7
</code></pre>
<p>Update your environment variables to point to the right gcc version, or install a more recent version of gcc. See the requirements section above.</p>
<h3 id="python-位于非默认位置"><a class="header" href="#python-位于非默认位置">Python in a non-default location</a></h3>
<p>If you use a non-default version of Python (e.g., version 3.6 when 2.5 is your default), there may be problems when building gem5 with SCons. RHEL6 versions of SCons use a hardcoded location for Python, which causes problems. In this case, gem5 often builds successfully but may not be able to run. Below is one error you may see when running gem5.</p>
<pre><code class="language-bash">Traceback (most recent call last):
  File &quot;........../gem5-stable/src/python/importer.py&quot;, line 93, in &lt;module&gt;
    sys.meta_path.append(importer)
TypeError: 'dict' object is not callable
</code></pre>
<p>To fix this, you can run <code>python3 `which scons` build/X86/gem5.opt</code> instead of <code>scons build/X86/gem5.opt</code>.</p>
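<p>When debugging this class of problem, it helps to confirm exactly which interpreter is actually being run. A quick generic check (plain Python, not a gem5 tool) is:</p>
<pre><code class="language-python">import sys

# Show which Python binary is running and its version.
print("executable:", sys.executable)
print("version:   ", "%d.%d.%d" % sys.version_info[:3])

# gem5 requires Python 3.6 or newer; fail fast if this interpreter is older.
assert sys.version_info &gt;= (3, 6), "this Python is too old for gem5"
</code></pre>
<p>Running this with the same <code>python3</code> you pass to SCons confirms that the build and run environments agree.</p>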
<h3 id="未安装-m4-宏处理器"><a class="header" href="#未安装-m4-宏处理器">M4 macro processor not installed</a></h3>
<p>If the M4 macro processor is not installed, you'll see an error similar to this:</p>
<pre><code class="language-bash">...
Checking for member exclude_host in struct perf_event_attr...yes
Error: Can't find version of M4 macro processor.  Please install M4 and try again.
</code></pre>
<p>Simply installing the M4 macro package may not solve the issue. You may also need to install all of the <code>autoconf</code> tools. On Ubuntu, you can use the following command.</p>
<pre><code class="language-bash">sudo apt-get install automake
</code></pre>
<h3 id="protobuf-3123-问题"><a class="header" href="#protobuf-3123-问题">Protobuf 3.12.3 problems</a></h3>
<p>Compiling gem5 with protobuf may result in the following error:</p>
<pre><code class="language-bash">In file included from build/X86/cpu/trace/trace_cpu.hh:53,
                 from build/X86/cpu/trace/trace_cpu.cc:38:
build/X86/proto/inst_dep_record.pb.h:49:51: error: 'AuxiliaryParseTableField' in namespace 'google::protobuf::internal' does not name a type; did you mean 'AuxillaryParseTableField'?
   49 |   static const ::PROTOBUF_NAMESPACE_ID::internal::AuxiliaryParseTableField aux[]
</code></pre>
<p>The root cause of the problem is discussed here: <a href="https://gem5.atlassian.net/browse/GEM5-1032">https://gem5.atlassian.net/browse/GEM5-1032</a>.</p>
<p>To fix this, you may need to update your version of Protocol Buffers:</p>
<pre><code class="language-bash">sudo apt update
sudo apt install libprotobuf-dev protobuf-compiler libgoogle-perftools-dev
</code></pre>
<p>After that, you may need to clean the gem5 build folder <strong>before</strong> re-compiling gem5:</p>
<pre><code class="language-bash">python3 `which scons` --clean --no-cache        # cleaning the build folder
python3 `which scons` build/X86/gem5.opt -j 9   # re-compiling gem5
</code></pre>
<p>If the issue persists, you may need to remove the gem5 build folder entirely <strong>before</strong> compiling gem5 again:</p>
<pre><code class="language-bash">rm -rf build/                                   # completely removing the gem5 build folder
python3 `which scons` build/X86/gem5.opt -j 9   # re-compiling gem5
</code></pre>
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Building gem5
doc: Learning gem5
parent: part1
permalink: /documentation/learning_gem5/part1/building/
author: Jason Lowe-Power</h2>
<h1 id="构建-gem5-1"><a class="header" href="#构建-gem5-1">构建 gem5</a></h1>
<p>本章详细介绍了如何搭建 gem5 开发环境和构建 gem5。</p>
<h2 id="gem5的要求-1"><a class="header" href="#gem5的要求-1">gem5的要求</a></h2>
<p>有关更多详细信息，请参阅<a href="http://www.gem5.org/documentation/general_docs/building#dependencies">gem5 要求</a>。</p>
<p>在 Ubuntu 上，您可以使用以下命令安装所有必需的依赖项。要求详述如下：</p>
<pre><code class="language-bash">sudo apt install build-essential git m4 scons zlib1g zlib1g-dev libprotobuf-dev protobuf-compiler libprotoc-dev libgoogle-perftools-dev python3-dev python3
</code></pre>
<ol>
<li>
<ul>
<li>
<p>git（<a href="https://git-scm.com/">git</a>）：</p>
<p>gem5 项目使用<a href="https://git-scm.com/">Git</a>进行版本控制。<a href="https://git-scm.com/">Git</a>是一个分布式版本控制系统。可以通过以下链接找到有关<a href="https://git-scm.com/">Git 的</a>更多信息 。Git 应该默认安装在大多数平台上。要自行在 Ubuntu 中安装 Git，请使用<code>sudo apt install git </code></p>
</li>
</ul>
</li>
<li>
<ul>
<li>
<p>gcc 7+</p>
<p>您可能需要使用环境变量来指向 gcc 的非默认版本。在 Ubuntu 上，您可以使用以下命令安装开发环境<code>sudo apt install build-essential </code></p>
</li>
</ul>
<p><strong>我们支持 GCC 版本 &gt;=7，最高 GCC 10</strong></p>
</li>
<li>
<ul>
<li>
<p><a href="http://www.scons.org/">SCons 3.0+</a></p>
<p>gem5 使用 SCons 作为其构建环境。SCons 就像类固醇一样，在构建过程的各个方面都使用 Python 脚本。这允许一个非常灵活（如果慢）的构建系统。要在 Ubuntu 上使用 SCons<code>sudo apt install scons </code></p>
</li>
</ul>
</li>
<li>
<ul>
<li>
<p>Python 3.6+</p>
<p>gem5 依赖于 Python 开发库。要在 Ubuntu 上安装这些，请使用<code>sudo apt install python3-dev </code></p>
</li>
</ul>
</li>
<li>
<ul>
<li>
<p><a href="https://developers.google.com/protocol-buffers/">protobuf</a> 2.1+（<strong>可选</strong>）</p>
<p>“protobuf是一种语言独立、平台独立的可扩展机制，用于序列化结构化数据。” 在 gem5 中，<a href="https://developers.google.com/protocol-buffers/">protobuf</a> 库用于跟踪生成和回放。 <a href="https://developers.google.com/protocol-buffers/">protobuf</a>不是必需的包，除非您计划将其用于跟踪生成和回放。<code>sudo apt install libprotobuf-dev protobuf-compiler libgoogle-perftools-dev </code></p>
</li>
</ul>
</li>
<li>
<ul>
<li>
<p><a href="https://www.boost.org/">boost</a>（<strong>可选</strong>）</p>
<p>boost 库是一组通用的 C++ 库。如果您希望使用 SystemC 实现，它是一个必要的依赖项。<code>sudo apt install libboost-all-dev </code></p>
</li>
</ul>
</li>
</ol>
<h2 id="获取代码-1"><a class="header" href="#获取代码-1">获取代码</a></h2>
<p>将目录更改为要下载 gem5 源代码的位置。然后，要克隆存储库，请使用该<code>git clone</code>命令。</p>
<pre><code class="language-bash">git clone https://gitee.com/mirrors/gem5
</code></pre>
<p>您现在可以进入<code>gem5</code>，里面包含所有 gem5 代码。</p>
<h2 id="构建你的第一个-gem5-1"><a class="header" href="#构建你的第一个-gem5-1">构建你的第一个 gem5</a></h2>
<p>让我们从构建一个基本的 x86 系统开始。目前，您必须为要模拟的每个 ISA 分别编译 gem5。此外，如果使用 ruby-intro-chapter，您必须对每个缓存一致性协议进行单独的编译。</p>
<p>为了构建 gem5，我们将使用 SCons。SCons 使用 SConstruct 文件 ( <code>gem5/SConstruct</code>) 设置多个变量，然后使用每个子目录中的 SConscript 文件查找和编译所有 gem5 源代码。</p>
<p>SCons在第一次执行时会自动创建一个<code>gem5/build</code>目录。在此目录中，您将找到由 SCons、编译器等生成的文件。用于编译 gem5 的每组选项（ISA 和缓存一致性协议）都有一个单独的目录。</p>
<p>目录中有许多默认编译选项<code>build_opts</code> 。这些文件指定最初构建 gem5 时传递给 SCons 的参数。我们将使用 X86 默认值并指定我们要编译所有 CPU 模型。您可以查看文件 <code>build_opts/X86</code>以查看 SCons 选项的默认值。您还可以在命令行上指定这些选项以覆盖任何默认值。</p>
<pre><code class="language-bash">python3 `which scons` build/X86/gem5.opt -j9
</code></pre>
<blockquote>
<p><strong>gem5 二进制类型</strong></p>
<p>gem5 中的 SCons 脚本目前有 5 种不同的二进制文件，您可以为 gem5 构建：debug、opt、fast、prof 和 perf。这些名称大多是不言自明的，但在下面进行了详细说明。</p>
<ul>
<li>
<p>debug</p>
<p>没有优化和调试符号构建。如果您需要查看的变量在 gem5 的 opt 版本中进行了优化，则此二进制文件在使用调试器进行调试时非常有用。与其他二进制文件相比，使用 debug 运行速度较慢。</p>
</li>
<li>
<p>opt</p>
<p>这个二进制文件是使用大多数优化（例如，-O3）构建的，但包含调试符号。这个二进制文件比调试快得多，但仍然包含足够的调试信息来调试大多数问题。</p>
</li>
<li>
<p>fast</p>
<p>构建了所有优化（包括支持平台上的链接时优化）并且没有调试符号。此外，任何断言都被删除，但仍包括恐慌和致命。fast 是性能最高的二进制文件，比 opt 小得多。但是，仅当您认为您的代码不太可能存在重大错误时，才适合使用 fast。</p>
</li>
<li>
<p>prof and perf</p>
<p>这两个二进制文件是为分析 gem5 而构建的。prof 包括 GNU 分析器 (gprof) 的分析信息，perf 包括 Google 性能工具 (gperftools) 的分析信息。</p>
</li>
</ul>
<p>传递给 SCons 的主要参数是您想要构建的内容， <code>build/X86/gem5.opt</code>. 在这种情况下，我们正在构建 gem5.opt（带有调试符号的优化二进制文件）。我们想在 build/X86 目录下构建 gem5。由于此目录当前不存在，SCons 将查找<code>build_opts</code>X86 的默认参数。（注意：我在这里使用 -j9 在我机器上的 8 个内核中的 9 个上执行构建。您应该为您的机器选择一个合适的数量，通常是内核数+1。）</p>
</blockquote>
<p>输出应如下所示：</p>
<pre><code class="language-bash">Checking for C header file Python.h... yes
Checking for C library pthread... yes
Checking for C library dl... yes
Checking for C library util... yes
Checking for C library m... yes
Checking for C library python2.7... yes
Checking for accept(0,0,0) in C++ library None... yes
Checking for zlibVersion() in C++ library z... yes
Checking for GOOGLE_PROTOBUF_VERIFY_VERSION in C++ library protobuf... yes
Checking for clock_nanosleep(0,0,NULL,NULL) in C library None... yes
Checking for timer_create(CLOCK_MONOTONIC, NULL, NULL) in C library None... no
Checking for timer_create(CLOCK_MONOTONIC, NULL, NULL) in C library rt... yes
Checking for C library tcmalloc... yes
Checking for backtrace_symbols_fd((void*)0, 0, 0) in C library None... yes
Checking for C header file fenv.h... yes
Checking for C header file linux/kvm.h... yes
Checking size of struct kvm_xsave ... yes
Checking for member exclude_host in struct perf_event_attr...yes
Building in /local.chinook/gem5/gem5-tutorial/gem5/build/X86
Variables file /local.chinook/gem5/gem5-tutorial/gem5/build/variables/X86 not found,
  using defaults in /local.chinook/gem5/gem5-tutorial/gem5/build_opts/X86
scons: done reading SConscript files.
scons: Building targets ...
 [ISA DESC] X86/arch/x86/isa/main.isa -&gt; generated/inc.d
 [NEW DEPS] X86/arch/x86/generated/inc.d -&gt; x86-deps
 [ENVIRONS] x86-deps -&gt; x86-environs
 [     CXX] X86/sim/main.cc -&gt; .o
 ....
 .... &lt;lots of output&gt;
 ....
 [   SHCXX] nomali/lib/mali_midgard.cc -&gt; .os
 [   SHCXX] nomali/lib/mali_t6xx.cc -&gt; .os
 [   SHCXX] nomali/lib/mali_t7xx.cc -&gt; .os
 [      AR]  -&gt; drampower/libdrampower.a
 [   SHCXX] nomali/lib/addrspace.cc -&gt; .os
 [   SHCXX] nomali/lib/mmu.cc -&gt; .os
 [  RANLIB]  -&gt; drampower/libdrampower.a
 [   SHCXX] nomali/lib/nomali_api.cc -&gt; .os
 [      AR]  -&gt; nomali/libnomali.a
 [  RANLIB]  -&gt; nomali/libnomali.a
 [     CXX] X86/base/date.cc -&gt; .o
 [    LINK]  -&gt; X86/gem5.opt
scons: done building targets.
</code></pre>
<p>When compilation is finished, you should have a working gem5 executable at <code>build/X86/gem5.opt</code>. The compilation can take a very long time, often 15 minutes or more, especially when compiling on a remote file system such as AFS or NFS.</p>
<h2 id="常见错误-1"><a class="header" href="#常见错误-1">常见错误</a></h2>
<h3 id="错误的-gcc-版本-1"><a class="header" href="#错误的-gcc-版本-1">错误的 gcc 版本</a></h3>
<pre><code class="language-bash">Error: gcc version 5 or newer required.
       Installed version: 4.4.7
</code></pre>
<p>Update your environment variables to point to the correct gcc version, or install a newer version of gcc. See the section on build requirements.</p>
<h3 id="python-位于非默认位置-1"><a class="header" href="#python-位于非默认位置-1">Python 位于非默认位置</a></h3>
<p>If you use a non-default version of Python (e.g., version 3.6 when 2.5 is your default), there may be problems when building gem5 with SCons. RHEL6 versions of SCons use a hardcoded location for Python, which causes the problem. In this case, gem5 usually builds successfully but may fail to run. Below is one error you may see when running gem5.</p>
<pre><code class="language-bash">Traceback (most recent call last):
  File &quot;........../gem5-stable/src/python/importer.py&quot;, line 93, in &lt;module&gt;
    sys.meta_path.append(importer)
TypeError: 'dict' object is not callable
</code></pre>
<p>To fix this, you can run <code>python3 `which scons` build/X86/gem5.opt</code> instead of <code>scons build/X86/gem5.opt</code>.</p>
<h3 id="未安装-m4-宏处理器-1"><a class="header" href="#未安装-m4-宏处理器-1">未安装 M4 宏处理器</a></h3>
<p>If the M4 macro processor is not installed, you will see an error similar to this:</p>
<pre><code class="language-bash">...
Checking for member exclude_host in struct perf_event_attr...yes
Error: Can't find version of M4 macro processor.  Please install M4 and try again.
</code></pre>
<p>Installing only the M4 macro package may not solve this issue. You may need to install all of the <code>autoconf</code> tools. On Ubuntu, you can use the following command.</p>
<pre><code class="language-bash">sudo apt-get install automake
</code></pre>
<h3 id="protobuf-3123-问题-1"><a class="header" href="#protobuf-3123-问题-1">Protobuf 3.12.3 问题</a></h3>
<p>Compiling gem5 with protobuf may lead to the following error,</p>
<pre><code class="language-bash">In file included from build/X86/cpu/trace/trace_cpu.hh:53,
                 from build/X86/cpu/trace/trace_cpu.cc:38:
build/X86/proto/inst_dep_record.pb.h:49:51: error: 'AuxiliaryParseTableField' in namespace 'google::protobuf::internal' does not name a type; did you mean 'AuxillaryParseTableField'?
   49 |   static const ::PROTOBUF_NAMESPACE_ID::internal::AuxiliaryParseTableField aux[]
</code></pre>
<p>The root cause of the problem is discussed at <a href="https://gem5.atlassian.net/browse/GEM5-1032">https://gem5.atlassian.net/browse/GEM5-1032</a>.</p>
<p>To fix the problem, you may need to update your version of Protocol Buffers,</p>
<pre><code class="language-bash">sudo apt update
sudo apt install libprotobuf-dev protobuf-compiler libgoogle-perftools-dev
</code></pre>
<p>After that, you may need to clean the gem5 build folder <strong>before</strong> re-compiling gem5,</p>
<pre><code class="language-bash">python3 `which scons` --clean --no-cache        # cleaning the build folder
python3 `which scons` build/X86/gem5.opt -j 9   # re-compiling gem5
</code></pre>
<p>If the problem persists, you may need to completely remove the gem5 build folder <strong>before</strong> compiling gem5 again,</p>
<pre><code class="language-bash">rm -rf build/                                   # completely removing the gem5 build folder
python3 `which scons` build/X86/gem5.opt -j 9   # re-compiling gem5
</code></pre>
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Creating a simple configuration script
doc: Learning gem5
parent: part1
permalink: /documentation/learning_gem5/part1/simple_config/
author: Jason Lowe-Power</h2>
<h1 id="创建一个简单的配置脚本"><a class="header" href="#创建一个简单的配置脚本">创建一个简单的配置脚本</a></h1>
<p>This part of the tutorial will walk you through setting up a simple simulation script for gem5 and running gem5 for the first time. It is assumed that you have completed the first chapter of the tutorial and have successfully built gem5 with the executable <code>build/X86/gem5.opt</code>.</p>
<p>Our configuration script is going to model a very simple system. We will have just one simple CPU core. This CPU core will be connected to a system-wide memory bus. And we will have a single DDR3 memory channel, also connected to the memory bus.</p>
<h2 id="gem5-配置脚本"><a class="header" href="#gem5-配置脚本">gem5 配置脚本</a></h2>
<p>The gem5 binary takes, as a parameter, a Python script which sets up and executes the simulation. In this script, you will create a system to simulate, create all of the components of the system, and specify all of the parameters for the system components. Then, from the script, you can begin the simulation.</p>
<p>This script is completely user-defined. You can choose to use any valid Python code in the configuration scripts. This book provides an example of a style that relies heavily on classes and inheritance in Python. As a gem5 user, it is up to you how simple or complicated to make your configuration scripts.</p>
<p>gem5 ships with many example configuration scripts in <code>./configs/examples</code>. Most of these scripts are all-encompassing and allow users to specify almost all options on the command line. In this book, instead of starting with these complex scripts, we will begin with the simplest script that can run gem5 and build from there. Hopefully, by the end of this section, you will have a good idea of how simulation scripts work.</p>
<hr />
<blockquote>
<p><strong>An aside on SimObjects</strong></p>
<p>gem5's modular design is built around the <strong>SimObject</strong> type. Most of the components in the simulated system are SimObjects: CPUs, caches, memory controllers, buses, etc. gem5 exports all of these objects from their <code>C++</code> implementation to Python. Thus, from the Python configuration script, you can create any SimObject, set its parameters, and specify the interactions between SimObjects.</p>
<p>See the <a href="http://doxygen.gem5.org/release/current/classSimObject.html#details">SimObject details</a> for more information.</p>
</blockquote>
<hr />
<h2 id="创建配置文件"><a class="header" href="#创建配置文件">创建配置文件</a></h2>
<p>Let's start by creating a new config file and opening it:</p>
<pre><code>mkdir configs/tutorial
touch configs/tutorial/simple.py
</code></pre>
<p>This is just a normal Python file that will be executed by the embedded Python in the gem5 executable. Therefore, you can use any features and libraries available in Python.</p>
<p>The first thing we will do in this file is import the m5 library and all of the SimObjects that we have compiled.</p>
<pre><code>import m5
from m5.objects import *
</code></pre>
<p>Next, we will create the first SimObject: the system that we are going to simulate. The <code>System</code> object will be the parent of all the other objects in our simulated system. The <code>System</code> object contains a lot of functional (not timing-level) information, like the physical memory ranges, the root clock domain, the root voltage domain, the kernel (in full-system simulation), etc. To create the system SimObject, we simply instantiate it like a normal Python class:</p>
<pre><code>system = System()
</code></pre>
<p>Now that we have a reference to the system we are going to simulate, let's set the clock on the system. We first have to create a clock domain. Then we can set the clock frequency on that domain. Setting parameters on a SimObject is exactly the same as setting members of an object in Python, so we can simply set the clock to 1 GHz, for instance. Finally, we have to specify a voltage domain for this clock domain. Since we do not care about system power right now, we will just use the defaults for the voltage domain.</p>
<pre><code>system.clk_domain = SrcClockDomain()
system.clk_domain.clock = '1GHz'
system.clk_domain.voltage_domain = VoltageDomain()
</code></pre>
<p>Once we have a system, let's set up how the memory will be simulated. We are going to use <em>timing</em> mode for the memory simulation. You will almost always use timing mode for the memory simulation, except in special cases like fast-forwarding and restoring from a checkpoint. We will also set up a single memory range of size 512 MB, a very small system. Note that in the Python configuration scripts, whenever a size is required you can specify that size in common vernacular and units like <code>'512MB'</code>. Similarly, with time you can use time units (e.g., <code>'5ns'</code>). These will automatically be converted to a common representation, respectively.</p>
<pre><code>system.mem_mode = 'timing'
system.mem_ranges = [AddrRange('512MB')]
</code></pre>
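<p>As an illustration of what this conversion does, here is a minimal, hypothetical sketch of such a size-string parser in plain Python. This is <em>not</em> gem5's actual conversion code; it only shows the idea of mapping a string like <code>'512MB'</code> to a byte count.</p>
<pre><code class="language-python"># Hypothetical sketch: convert a size string like '512MB' to a byte count.
# gem5 performs this conversion internally; this is only an illustration.
SIZE_FACTORS = {'kB': 2**10, 'MB': 2**20, 'GB': 2**30}

def to_bytes(size_str):
    for suffix, factor in SIZE_FACTORS.items():
        if size_str.endswith(suffix):
            return int(size_str[:-len(suffix)]) * factor
    return int(size_str)  # assume a plain byte count

print(to_bytes('512MB'))  # 536870912
</code></pre>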
<p>Now, we can create a CPU. We will start with the simplest timing-based CPU in gem5, the <em>TimingSimpleCPU</em>. This CPU model executes each instruction in a single clock cycle, except for memory requests, which flow through the memory system. To create the CPU you can simply instantiate the object:</p>
<pre><code>system.cpu = TimingSimpleCPU()
</code></pre>
<p>Next, we are going to create the system-wide memory bus:</p>
<pre><code>system.membus = SystemXBar()
</code></pre>
<p>Now that we have a memory bus, let's connect the cache ports on the CPU to it. Since the system we want to simulate does not have any caches, we will connect the I-cache and D-cache ports directly to the membus.</p>
<pre><code>system.cpu.icache_port = system.membus.cpu_side_ports
system.cpu.dcache_port = system.membus.cpu_side_ports
</code></pre>
<hr />
<blockquote>
<p><strong>An aside on gem5 ports</strong></p>
<p>To connect memory system components together, gem5 uses a port abstraction. Each memory object can have two kinds of ports, <em>request ports</em> and <em>response ports</em>. Requests are sent from a request port to a response port, and responses are sent from a response port back to a request port. When connecting ports, you must connect a request port to a response port.</p>
<p>Connecting ports together is easy from the Python configuration files. You can simply set the request port <code>=</code> to the response port and they will be connected. For example:</p>
<pre><code>system.cpu.icache_port = system.l1_cache.cpu_side
</code></pre>
<p>In this example, the cpu's <code>icache_port</code> is a request port, and the cache's <code>cpu_side</code> is a response port. The request port and the response port can be on either side of the <code>=</code> and the same connection will be made. After the connection is made, the requestor can send requests to the responder. There is a lot of magic going on behind the scenes to set up the connection, the details of which are unimportant for most users.</p>
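<p>As a flavor of that behind-the-scenes magic, the toy sketch below shows how assigning one port to another could trigger connection logic instead of a plain attribute overwrite. This is a hypothetical illustration in plain Python, <em>not</em> gem5's actual mechanism.</p>
<pre><code class="language-python"># Toy sketch: an '=' between two port-like objects makes a connection
# rather than overwriting the attribute. NOT gem5's real implementation.
class Port:
    def __init__(self, name):
        self.name = name
        self.peer = None

class SimObjectSketch:
    def __setattr__(self, attr, value):
        current = self.__dict__.get(attr)
        if isinstance(current, Port) and isinstance(value, Port):
            current.peer = value   # connect the two ports
            value.peer = current
        else:
            object.__setattr__(self, attr, value)

cpu = SimObjectSketch()
cpu.icache_port = Port('icache_port')
cache = SimObjectSketch()
cache.cpu_side = Port('cpu_side')
cpu.icache_port = cache.cpu_side   # connects, does not overwrite
print(cpu.icache_port.peer.name)   # cpu_side
</code></pre>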
<p>Another notable bit of magic of the <code>=</code> for two ports in gem5 Python configurations is that it is allowed to have one port on one side and an array of ports on the other. For example:</p>
<pre><code>system.cpu.icache_port = system.membus.cpu_side_ports
</code></pre>
<p>In this example, the cpu's <code>icache_port</code> is a request port, and the membus's <code>cpu_side_ports</code> is an array of response ports. In this case, a new response port is spawned on <code>cpu_side_ports</code>, and this newly created port will be connected to the request port.</p>
<p>We will discuss ports and MemObjects in more detail in the <a href="http://www.gem5.org/documentation/learning_gem5/part2/memoryobject/">MemObject chapter</a>.</p>
</blockquote>
<hr />
<p>Next, we need to connect up a few other ports to make sure that our system will function correctly. We need to create an I/O controller on the CPU and connect it to the memory bus. Also, we need to connect a special port in the system up to the membus. This port is a functional-only port which allows the system to read and write memory.</p>
<p>Connecting the PIO and interrupt ports to the memory bus is an x86-specific requirement. Other ISAs (e.g., ARM) do not require these three extra lines.</p>
<pre><code>system.cpu.createInterruptController()
system.cpu.interrupts[0].pio = system.membus.mem_side_ports
system.cpu.interrupts[0].int_requestor = system.membus.cpu_side_ports
system.cpu.interrupts[0].int_responder = system.membus.mem_side_ports

system.system_port = system.membus.cpu_side_ports
</code></pre>
<p>Next, we need to create a memory controller and connect it to the membus. For this system, we will use a simple DDR3 controller, and it will be responsible for the entire memory range of our system.</p>
<pre><code>system.mem_ctrl = MemCtrl()
system.mem_ctrl.dram = DDR3_1600_8x8()
system.mem_ctrl.dram.range = system.mem_ranges[0]
system.mem_ctrl.port = system.membus.mem_side_ports
</code></pre>
<p>After these final connections, we have finished instantiating our simulated system! Our system should look like the figure below.</p>
<p><img src="part1/part1_2_simple_config.assets/simple_config.png" alt="A simple system configuration without caches." /></p>
<p>Next, we need to set up the process we want the CPU to execute. Since we are executing in syscall emulation mode (SE mode), we will just point the CPU at the compiled executable. We will execute a simple &quot;Hello world&quot; program. There is already one compiled that ships with gem5, so we will use that. You can specify any application built for x86 that has been statically compiled.</p>
<blockquote>
<p><strong>Full system vs. syscall emulation</strong></p>
<p>gem5 can run in two different modes called &quot;syscall emulation&quot; and &quot;full system&quot;, or SE and FS modes. In full system mode (covered later in the full system part), gem5 emulates the entire hardware system and runs an unmodified kernel. Full system mode is similar to running a virtual machine.</p>
<p>Syscall emulation mode, on the other hand, does not emulate all of the devices in a system, but instead focuses on simulating the CPU and memory system. Syscall emulation is much easier to configure, since you are not required to instantiate all of the hardware devices required in a real system. However, syscall emulation only emulates Linux system calls, and thus only models user-mode code.</p>
<p>If you do not need to model the operating system for your research questions, and you want extra performance, you should use SE mode. However, if you need high-fidelity modeling of the system, or if OS interactions like page table walks are important, then you should use FS mode.</p>
</blockquote>
<p>First, we have to create the process (another SimObject). Then we set the process's command to the command we want to run. This is a list similar to argv, with the executable in the first position and the arguments to the executable in the rest of the list. Then we set the CPU to use the process as its workload, and finally create the functional execution contexts in the CPU.</p>
<pre><code>binary = 'tests/test-progs/hello/bin/x86/linux/hello'

# for gem5 V21 and beyond, uncomment the following line
# system.workload = SEWorkload.init_compatible(binary)

process = Process()
process.cmd = [binary]
system.cpu.workload = process
system.cpu.createThreads()
</code></pre>
<p>The final thing we need to do is instantiate the system and begin execution. First, we create the <code>Root</code> object. Then we instantiate the simulation. The instantiation process goes through all of the SimObjects we have created in Python and creates the <code>C++</code> equivalents.</p>
<p>Note that you do not have to instantiate the Python class and then specify the parameters explicitly as member variables. You can also pass the parameters as named arguments, as with the <code>Root</code> object below.</p>
<pre><code>root = Root(full_system = False, system = system)
m5.instantiate()
</code></pre>
<p>Finally, we can kick off the actual simulation! As a side note, gem5 now uses the Python 3-style <code>print</code> function, so <code>print</code> is no longer a statement and must be invoked as a function call.</p>
<pre><code>print(&quot;Beginning simulation!&quot;)
exit_event = m5.simulate()
</code></pre>
<p>And once simulation finishes, we can inspect the state of the system.</p>
<pre><code>print('Exiting @ tick {} because {}'
      .format(m5.curTick(), exit_event.getCause()))
</code></pre>
<h2 id="运行-gem5"><a class="header" href="#运行-gem5">运行 gem5</a></h2>
<p>Now that we have created a simple simulation script (the full version can be found at <a href="https://gem5.googlesource.com/public/gem5/+/refs/heads/stable/configs/learning_gem5/part1/simple.py">configs/learning_gem5/part1/simple.py</a> in the gem5 codebase), we are ready to run gem5. gem5 can take many parameters, but requires just one positional argument, the simulation script. So, we can simply run gem5 from the gem5 root directory as follows:</p>
<pre><code>build/X86/gem5.opt configs/tutorial/simple.py
</code></pre>
<p>The output should be:</p>
<pre><code>gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 version 21.0.0.0
gem5 compiled May 17 2021 18:05:59
gem5 started May 17 2021 22:05:20
gem5 executing on amarillo, pid 75197
command line: build/X86/gem5.opt configs/tutorial/simple.py

Global frequency set at 1000000000000 ticks per second
warn: No dot file generated. Please install pydot to generate the dot file and pdf.
warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)
0: system.remote_gdb: listening for remote gdb on port 7005
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
Hello world!
Exiting @ tick 490394000 because exiting with last active thread context
</code></pre>
<p>You can change the parameters in the config file and the results should differ. For instance, if you double the system clock, the simulation should finish faster. Or, if you change the DDR controller to DDR4, the performance should be better.</p>
<p>Additionally, you can change the CPU model to <code>MinorCPU</code> to model an in-order CPU, or to <code>DerivO3CPU</code> to model an out-of-order CPU. However, note that <code>DerivO3CPU</code> currently does not work with simple.py, because <code>DerivO3CPU</code> requires a system with separate instruction and data caches (<code>DerivO3CPU</code> does work with the configuration in the next section).</p>
<p>Next, we will add caches to our config file to model a more complex system.</p>
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Adding cache to configuration script
doc: Learning gem5
parent: part1
permalink: /documentation/learning_gem5/part1/cache_config/
author: Jason Lowe-Power</h2>
<h1 id="将缓存添加到配置脚本"><a class="header" href="#将缓存添加到配置脚本">将缓存添加到配置脚本</a></h1>
<p>Using the <a href="http://www.gem5.org/documentation/learning_gem5/part1/simple_config/">previous configuration script as a starting point</a>, this chapter will walk through a more complex configuration. We will add a cache hierarchy to the system, as shown in the figure below. Additionally, this chapter will cover understanding the gem5 statistics output and adding command-line parameters to your scripts.</p>
<p><img src="part1/part1_3_cache_config.assets/advanced_config.png" alt="A system configuration with a two-level cache hierarchy." /></p>
<h2 id="创建缓存对象"><a class="header" href="#创建缓存对象">创建缓存对象</a></h2>
<p>We are going to use the classic caches instead of Ruby, since we are modeling a single-CPU system and we do not care about modeling cache coherence. We will extend the Cache SimObject and configure it for our system. First, we must understand the parameters that are used to configure Cache objects.</p>
<blockquote>
<p><strong>Classic caches and Ruby</strong></p>
<p>gem5 currently has two completely distinct subsystems for modeling the on-chip caches in a system, the &quot;classic caches&quot; and &quot;Ruby&quot;. The historical reason for this is that gem5 is a combination of m5 from Michigan and GEMS from Wisconsin. GEMS used Ruby as its cache model, whereas the classic caches came from the m5 codebase (hence &quot;classic&quot;). The difference between these two models is that Ruby is designed to model cache coherence in detail. Part of Ruby is SLICC, a language for defining cache coherence protocols. The classic caches, on the other hand, implement a simplified and inflexible MOESI coherence protocol.</p>
<p>To choose which model to use, you should ask yourself what you are trying to model. If you are modeling changes to the cache coherence protocol, or if the coherence protocol could have a first-order impact on your results, use Ruby. Otherwise, if the coherence protocol is not important to you, use the classic caches.</p>
<p>A long-term goal of gem5 is to unify these two cache models into a single holistic model.</p>
</blockquote>
<h3 id="缓存"><a class="header" href="#缓存">缓存</a></h3>
<p>The Cache SimObject declaration can be found in src/mem/cache/Cache.py. This Python file defines the parameters of the SimObject which you can set. Under the hood, when the SimObject is instantiated, these parameters are passed to the C++ implementation of the object. The <code>Cache</code> SimObject inherits from the <code>BaseCache</code> object shown below.</p>
<p>Within the <code>BaseCache</code> class, there are a number of <em>parameters</em>. For example, <code>assoc</code> is an integer parameter. Some parameters, like <code>write_buffers</code>, have a default value, 8 in this case. The default parameter is the first argument to <code>Param.*</code>, unless the first argument is a string. The string argument of each parameter is a description of what the parameter is (e.g., <code>tag_latency = Param.Cycles(&quot;Tag lookup latency&quot;)</code> means that <code>tag_latency</code> controls the tag lookup latency for this cache).</p>
<p>Many of these parameters do not have defaults, so we are required to set them before calling <code>m5.instantiate()</code>.</p>
<hr />
<p>Now, to create a cache with specific parameters, we are first going to create a new file, <code>caches.py</code>, in the same directory as simple.py, <code>configs/tutorial</code>. The first step is to import the SimObject(s) we are going to extend in this file.</p>
<pre><code>from m5.objects import Cache
</code></pre>
<p>Next, we can treat the BaseCache object just like any other Python class and extend it. We can name the new cache anything we want. Let's start by making an L1 cache.</p>
<pre><code>class L1Cache(Cache):
    assoc = 2
    tag_latency = 2
    data_latency = 2
    response_latency = 2
    mshrs = 4
    tgts_per_mshr = 20
</code></pre>
<p>Here, we are setting some of the parameters of the BaseCache that do not have default values. To see all of the possible configuration options, and to find which are required and which are optional, you have to look at the source code of the SimObject. In this case, we are using BaseCache.</p>
<p>We have extended <code>BaseCache</code> and set most of the parameters that do not have default values in the <code>BaseCache</code> SimObject. Next, let's create two more sub-classes of L1Cache, an L1DCache and an L1ICache.</p>
<pre><code>class L1ICache(L1Cache):
    size = '16kB'

class L1DCache(L1Cache):
    size = '64kB'
</code></pre>
<p>Let's also create an L2 cache with some reasonable parameters.</p>
<pre><code>class L2Cache(Cache):
    size = '256kB'
    assoc = 8
    tag_latency = 20
    data_latency = 20
    response_latency = 20
    mshrs = 20
    tgts_per_mshr = 12
</code></pre>
<p>Now that we have specified all of the necessary parameters required for <code>BaseCache</code>, all we have to do is instantiate our sub-classes and connect the caches to the interconnect. However, connecting lots of objects up to complicated interconnects can make configuration files quickly grow and become unreadable. Therefore, let's first add some helper functions to our sub-classes of <code>Cache</code>. Remember, these are just Python classes, so we can do anything with them that you can do with a Python class.</p>
<p>Let's add two functions to the L1 cache, <code>connectCPU</code> to connect a CPU to the cache, and <code>connectBus</code> to connect the cache to a bus. We need to add the following code to the <code>L1Cache</code> class.</p>
<pre><code>def connectCPU(self, cpu):
    # need to define this in a base class!
    raise NotImplementedError

def connectBus(self, bus):
    self.mem_side = bus.cpu_side_ports
</code></pre>
<p>Next, we have to define a separate <code>connectCPU</code> function for the instruction and data caches, since the I-cache and D-cache ports have different names. Our <code>L1ICache</code> and <code>L1DCache</code> classes now become:</p>
<pre><code>class L1ICache(L1Cache):
    size = '16kB'

    def connectCPU(self, cpu):
        self.cpu_side = cpu.icache_port

class L1DCache(L1Cache):
    size = '64kB'

    def connectCPU(self, cpu):
        self.cpu_side = cpu.dcache_port
</code></pre>
<p>Finally, let's add functions to the <code>L2Cache</code> to connect to the memory-side and CPU-side buses, respectively.</p>
<pre><code>def connectCPUSideBus(self, bus):
    self.cpu_side = bus.mem_side_ports

def connectMemSideBus(self, bus):
    self.mem_side = bus.cpu_side_ports
</code></pre>
<p>The complete file can be found in the gem5 source at <a href="https://gem5.googlesource.com/public/gem5/+/refs/heads/stable/configs/learning_gem5/part1/caches.py"><code>configs/learning_gem5/part1/caches.py</code></a>.</p>
<h2 id="将缓存添加到简单的配置文件"><a class="header" href="#将缓存添加到简单的配置文件">将缓存添加到简单的配置文件</a></h2>
<p>Now, let's add the caches we just created to the configuration script we created in the <a href="http://www.gem5.org/documentation/learning_gem5/part1/simple_config/">last chapter</a>.</p>
<p>First, let's copy the script to a new name.</p>
<pre><code>cp ./configs/tutorial/simple.py ./configs/tutorial/two_level.py
</code></pre>
<p>Next, we need to import the names from the <code>caches.py</code> file into the namespace. We can add the following to the top of the file (after the m5.objects import), as you would with any Python source.</p>
<pre><code>from caches import *
</code></pre>
<p>Now, after creating the CPU, let's create the L1 caches:</p>
<pre><code>system.cpu.icache = L1ICache()
system.cpu.dcache = L1DCache()
</code></pre>
<p>and connect the caches to the CPU ports with the helper functions we created.</p>
<pre><code>system.cpu.icache.connectCPU(system.cpu)
system.cpu.dcache.connectCPU(system.cpu)
</code></pre>
<p>You need to <em>remove</em> the following two lines which connected the cache ports directly to the memory bus.</p>
<pre><code>system.cpu.icache_port = system.membus.cpu_side_ports
system.cpu.dcache_port = system.membus.cpu_side_ports
</code></pre>
<p>We cannot directly connect the L1 caches to the L2 cache, since the L2 cache only expects a single port to connect to it. Therefore, we need to create an L2 bus to connect our L1 caches to the L2 cache. Then, we can use our helper function to connect the L1 caches to the L2 bus.</p>
<pre><code>system.l2bus = L2XBar()

system.cpu.icache.connectBus(system.l2bus)
system.cpu.dcache.connectBus(system.l2bus)
</code></pre>
<p>Next, we can create our L2 cache and connect it to the L2 bus and the memory bus.</p>
<pre><code>system.l2cache = L2Cache()
system.l2cache.connectCPUSideBus(system.l2bus)
system.l2cache.connectMemSideBus(system.membus)
</code></pre>
<p>Everything else in the file stays the same! Now we have a complete configuration with a two-level cache hierarchy. If you run the current file, <code>hello</code> should now finish in 57467000 ticks. The full script can be found in the gem5 source at <a href="https://gem5.googlesource.com/public/gem5/+/refs/heads/stable/configs/learning_gem5/part1/two_level.py"><code>configs/learning_gem5/part1/two_level.py</code></a>.</p>
<h2 id="向脚本添加参数"><a class="header" href="#向脚本添加参数">向脚本添加参数</a></h2>
<p>When performing experiments with gem5, you do not want to edit your configuration script every time you want to test the system with different parameters. To get around this, you can add command-line parameters to your gem5 configuration script. Again, because the configuration script is just Python, you can use the Python libraries that support argument parsing. Although optparse is officially deprecated, many of the configuration scripts that ship with gem5 use it instead of argparse, since gem5's minimum Python version used to be 2.5. The minimum Python version is now 3.6, so Python's argparse is a better option when writing new scripts that do not need to interact with the current gem5 scripts. To get started using optparse, you can consult the online Python documentation.</p>
<p>To add options to our two-level cache configuration, after importing our caches, let's add some options.</p>
<pre><code>import argparse

parser = argparse.ArgumentParser(description='A simple system with 2-level cache.')
parser.add_argument(&quot;binary&quot;, default=&quot;&quot;, nargs=&quot;?&quot;, type=str,
                    help=&quot;Path to the binary to execute.&quot;)
parser.add_argument(&quot;--l1i_size&quot;,
                    help=&quot;L1 instruction cache size. Default: 16kB.&quot;)
parser.add_argument(&quot;--l1d_size&quot;,
                    help=&quot;L1 data cache size. Default: 64kB.&quot;)
parser.add_argument(&quot;--l2_size&quot;,
                    help=&quot;L2 cache size. Default: 256kB.&quot;)

options = parser.parse_args()
</code></pre>
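<p>Because this is ordinary Python argparse, you can exercise the parser outside gem5. The sketch below rebuilds the same parser and parses an example argument list (instead of the real command line) to show what the resulting options object looks like.</p>
<pre><code class="language-python"># Stand-alone demonstration of the argument parser defined above.
import argparse

parser = argparse.ArgumentParser(description='A simple system with 2-level cache.')
parser.add_argument('binary', default='', nargs='?', type=str,
                    help='Path to the binary to execute.')
parser.add_argument('--l1i_size', help='L1 instruction cache size. Default: 16kB.')
parser.add_argument('--l1d_size', help='L1 data cache size. Default: 64kB.')
parser.add_argument('--l2_size', help='L2 cache size. Default: 256kB.')

# Parse an example argument list rather than sys.argv.
options = parser.parse_args(['--l2_size=1MB'])
print(options.l2_size)   # 1MB
print(options.l1i_size)  # None
</code></pre>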
<p>Now, you can run <code>build/X86/gem5.opt configs/tutorial/two_level.py --help</code>, which will display the options you just added.</p>
<p>Next, we need to pass these options onto the caches that we create in the configuration script. To do this, we will simply change two_level.py to pass the options into the caches as a parameter to their constructors, and add an appropriate constructor.</p>
<pre><code>system.cpu.icache = L1ICache(options)
system.cpu.dcache = L1DCache(options)
...
system.l2cache = L2Cache(options)
</code></pre>
<p>In caches.py, we need to add constructors (<code>__init__</code> functions in Python) to each of our classes. Starting with our base L1 cache, we will just add an empty constructor, since we do not have any parameters which apply to the base L1 cache. However, we cannot forget to call the superclass's constructor in this case. If the call to the superclass constructor is skipped, gem5's SimObject attribute finding function will fail, and the result will be &quot;<code>RuntimeError: maximum recursion depth exceeded</code>&quot; when you try to instantiate the cache object. So, in <code>L1Cache</code> we need to add the following after the static class members.</p>
<pre><code>def __init__(self, options=None):
    super(L1Cache, self).__init__()
    pass
</code></pre>
<p>Next, in the <code>L1ICache</code>, we need to use the option that we created (<code>l1i_size</code>) to set the size. In the following code, there are guards for the case where <code>options</code> is not passed to the <code>L1ICache</code> constructor and for the case where no option was specified on the command line. In these cases, we will just use the default we have already specified for the size.</p>
<pre><code>def __init__(self, options=None):
    super(L1ICache, self).__init__(options)
    if not options or not options.l1i_size:
        return
    self.size = options.l1i_size
</code></pre>
<p>We can use the same code for the <code>L1DCache</code>:</p>
<pre><code>def __init__(self, options=None):
    super(L1DCache, self).__init__(options)
    if not options or not options.l1d_size:
        return
    self.size = options.l1d_size
</code></pre>
<p>And the unified <code>L2Cache</code>:</p>
<pre><code>def __init__(self, options=None):
    super(L2Cache, self).__init__()
    if not options or not options.l2_size:
        return
    self.size = options.l2_size
</code></pre>
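<p>The same guard pattern can be exercised outside gem5 with plain Python classes. The sketch below uses a hypothetical <code>FakeOptions</code> stand-in (not a gem5 class) to show how the class-level default survives unless an option is supplied.</p>
<pre><code class="language-python"># Plain-Python sketch of the options-guard pattern used above.
class FakeOptions:
    def __init__(self, l2_size=None):
        self.l2_size = l2_size

class L2Sketch:
    size = '256kB'  # class-level default, like the SimObject parameter
    def __init__(self, options=None):
        if not options or not options.l2_size:
            return              # keep the default size
        self.size = options.l2_size

print(L2Sketch().size)                    # 256kB
print(L2Sketch(FakeOptions('1MB')).size)  # 1MB
</code></pre>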
<p>With these changes, you can now pass the cache sizes into your script from the command line like below.</p>
<pre><code>build/X86/gem5.opt configs/tutorial/two_level.py --l2_size='1MB' --l1d_size='128kB'
gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 version 21.0.0.0
gem5 compiled May 17 2021 18:05:59
gem5 started May 18 2021 00:00:33
gem5 executing on amarillo, pid 83118
command line: build/X86/gem5.opt configs/tutorial/two_level.py --l2_size=1MB --l1d_size=128kB

Global frequency set at 1000000000000 ticks per second
warn: No dot file generated. Please install pydot to generate the dot file and pdf.
warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)
0: system.remote_gdb: listening for remote gdb on port 7005
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
Hello world!
Exiting @ tick 57467000 because exiting with last active thread context
</code></pre>
<p>The full scripts can be found in the gem5 source at <a href="https://gem5.googlesource.com/public/gem5/+/refs/heads/stable/configs/learning_gem5/part1/caches.py"><code>configs/learning_gem5/part1/caches.py</code></a> and <a href="https://gem5.googlesource.com/public/gem5/+/refs/heads/stable/configs/learning_gem5/part1/two_level.py"><code>configs/learning_gem5/part1/two_level.py</code></a>.</p>
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Understanding gem5 statistics and output
doc: Learning gem5
parent: part1
permalink: /documentation/learning_gem5/part1/gem5_stats/
author: Jason Lowe-Power</h2>
<h1 id="了解-gem5-统计数据和输出"><a class="header" href="#了解-gem5-统计数据和输出">了解 gem5 统计数据和输出</a></h1>
<p>In addition to any information which your simulation script prints out, after running gem5 there are three files generated in a directory called <code>m5out</code>:</p>
<ul>
<li>
<p><strong>config.ini</strong></p>
<p>Contains a list of every SimObject created for the simulation and the values for its parameters.</p>
</li>
<li>
<p><strong>config.json</strong></p>
<p>The same as config.ini, but in json format.</p>
</li>
<li>
<p><strong>stats.txt</strong></p>
<p>A text representation of all of the gem5 statistics registered for the simulation.</p>
</li>
</ul>
<h2 id="配置文件"><a class="header" href="#配置文件">配置文件</a></h2>
<p>This file is the definitive version of what was simulated. All of the parameters for each SimObject that is simulated, whether they were set in the configuration scripts or the defaults were used, are shown in this file.</p>
<p>Below is pulled from the config.ini generated when the <code>simple.py</code> configuration file from the <a href="http://www.gem5.org/documentation/learning_gem5/part1/simple_config/">simple-config chapter</a> is run.</p>
<pre><code>[root]
type=Root
children=system
eventq_index=0
full_system=false
sim_quantum=0
time_sync_enable=false
time_sync_period=100000000000
time_sync_spin_threshold=100000000

[system]
type=System
children=clk_domain cpu dvfs_handler mem_ctrl membus
boot_osflags=a
cache_line_size=64
clk_domain=system.clk_domain
default_p_state=UNDEFINED
eventq_index=0
exit_on_work_items=false
init_param=0
kernel=
kernel_addr_check=true
kernel_extras=
kvm_vm=Null
load_addr_mask=18446744073709551615
load_offset=0
mem_mode=timing

...

[system.membus]
type=CoherentXBar
children=snoop_filter
clk_domain=system.clk_domain
default_p_state=UNDEFINED
eventq_index=0
forward_latency=4
frontend_latency=3
p_state_clk_gate_bins=20
p_state_clk_gate_max=1000000000000
p_state_clk_gate_min=1000
point_of_coherency=true
point_of_unification=true
power_model=
response_latency=2
snoop_filter=system.membus.snoop_filter
snoop_response_latency=4
system=system
use_default_range=false
width=16
master=system.cpu.interrupts.pio system.cpu.interrupts.int_slave system.mem_ctrl.port
slave=system.cpu.icache_port system.cpu.dcache_port system.cpu.interrupts.int_master system.system_port

[system.membus.snoop_filter]
type=SnoopFilter
eventq_index=0
lookup_latency=1
max_capacity=8388608
system=system
</code></pre>
<p>Here we see that at the beginning of the description of each SimObject is first its name as created in the configuration file, surrounded by square brackets (e.g., <code>[system.membus]</code>).</p>
<p>Next, every parameter of the SimObject is shown with its value, including parameters not explicitly set in the configuration file. For instance, the configuration file sets the clock domain to have a frequency of 1 GHz (1000 ticks in this case). However, it did not set the cache line size (which is 64 in the <code>system</code> object).</p>
<p>The <code>config.ini</code> file is a valuable tool for ensuring that you are simulating what you think you are simulating. There are many possible ways to set default values and to override default values in gem5. It is a &quot;best practice&quot; to always check the <code>config.ini</code> as a sanity check that values set in the configuration file are propagated to the actual SimObject instantiation.</p>
<h2 id="统计信息txt"><a class="header" href="#统计信息txt">统计信息.txt</a></h2>
<p>gem5 has a flexible statistics-generating system. gem5 statistics are covered in some detail on the <a href="http://www.gem5.org/Statistics">gem5 wiki site</a>. Each instantiation of a SimObject has its own statistics. At the end of simulation, or when special statistic-dumping commands are issued, the current state of the statistics for all SimObjects is dumped to a file.</p>
<p>First, the statistics file contains general statistics about the execution:</p>
<pre><code>---------- Begin Simulation Statistics ----------
simSeconds                                   0.000057                       # Number of seconds simulated (Second)
simTicks                                     57467000                       # Number of ticks simulated (Tick)
finalTick                                    57467000                       # Number of ticks from beginning of simulation (restored from checkpoints and never reset) (Tick)
simFreq                                  1000000000000                       # The number of ticks per simulated second ((Tick/Second))
hostSeconds                                      0.03                       # Real time elapsed on the host (Second)
hostTickRate                               2295882330                       # The number of ticks simulated per host second (ticks/s) ((Tick/Second))
hostMemory                                     665792                       # Number of bytes of host memory used (Byte)
simInsts                                         6225                       # Number of instructions simulated (Count)
simOps                                          11204                       # Number of ops (including micro ops) simulated (Count)
hostInstRate                                   247382                       # Simulator instruction rate (inst/s) ((Count/Second))
hostOpRate                                     445086                       # Simulator op (including micro ops) rate (op/s) ((Count/Second))

---------- Begin Simulation Statistics ----------
simSeconds                                   0.000490                       # Number of seconds simulated (Second)
simTicks                                    490394000                       # Number of ticks simulated (Tick)
finalTick                                   490394000                       # Number of ticks from beginning of simulation (restored from checkpoints and never reset) (Tick)
simFreq                                  1000000000000                       # The number of ticks per simulated second ((Tick/Second))
hostSeconds                                      0.03                       # Real time elapsed on the host (Second)
hostTickRate                              15979964060                       # The number of ticks simulated per host second (ticks/s) ((Tick/Second))
hostMemory                                     657488                       # Number of bytes of host memory used (Byte)
simInsts                                         6225                       # Number of instructions simulated (Count)
simOps                                          11204                       # Number of ops (including micro ops) simulated (Count)
hostInstRate                                   202054                       # Simulator instruction rate (inst/s) ((Count/Second))
hostOpRate                                     363571                       # Simulator op (including micro ops) rate (op/s) ((Count/Second))
</code></pre>
<p>The statistics dump begins with <code>---------- Begin Simulation Statistics ----------</code>. There may be multiple of these in a single file if there are multiple statistic dumps during the gem5 execution. This is common for long-running applications, or when restoring from checkpoints.</p>
<p>Each statistic has a name (first column), a value (second column), and a description (last column, preceded by #), followed by the unit of the statistic.</p>
<p>Most of the statistics are self-explanatory from their descriptions. A couple of important statistics are <code>simSeconds</code>, the total simulated time of the simulation, <code>simInsts</code>, the number of instructions committed by the CPU, and <code>hostInstRate</code>, which tells you the performance of gem5.</p>
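<p>Because each line follows this fixed name/value/description layout, individual statistics are easy to pull out of <code>stats.txt</code> with a small script. The helper below is a hypothetical sketch, not part of gem5; it assumes the two-column-plus-comment format shown above.</p>
<pre><code class="language-python"># Hypothetical helper: parse 'name  value  # description' lines from a
# gem5 stats.txt dump into a dict of stat name to float value.
def parse_stats(lines):
    stats = {}
    for line in lines:
        data = line.split('#')[0].strip()  # drop the description comment
        parts = data.split()
        if len(parts) == 2:                # keep only 'name value' pairs
            name, value = parts
            try:
                stats[name] = float(value)
            except ValueError:
                pass                       # skip non-numeric values
    return stats

sample = [
    'simSeconds 0.000490 # Number of seconds simulated (Second)',
    'simInsts 6225 # Number of instructions simulated (Count)',
]
print(parse_stats(sample)['simInsts'])  # 6225.0
</code></pre>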
<p>Next, the statistics for the SimObjects are printed. For instance, the CPU statistics, which contain information on the number of syscalls, statistics for the cache system and translation buffers, etc.</p>
<pre><code>system.clk_domain.clock                          1000                       # Clock period in ticks (Tick)
system.clk_domain.voltage_domain.voltage            1                       # Voltage in Volts (Volt)
system.cpu.numCycles                            57467                       # Number of cpu cycles simulated (Cycle)
system.cpu.numWorkItemsStarted                      0                       # Number of work items this cpu started (Count)
system.cpu.numWorkItemsCompleted                    0                       # Number of work items this cpu completed (Count)
system.cpu.dcache.demandHits::cpu.data           1941                       # number of demand (read+write) hits (Count)
system.cpu.dcache.demandHits::total              1941                       # number of demand (read+write) hits (Count)
system.cpu.dcache.overallHits::cpu.data          1941                       # number of overall hits (Count)
system.cpu.dcache.overallHits::total             1941                       # number of overall hits (Count)
system.cpu.dcache.demandMisses::cpu.data          133                       # number of demand (read+write) misses (Count)
system.cpu.dcache.demandMisses::total             133                       # number of demand (read+write) misses (Count)
system.cpu.dcache.overallMisses::cpu.data          133                       # number of overall misses (Count)
system.cpu.dcache.overallMisses::total            133                       # number of overall misses (Count)
system.cpu.dcache.demandMissLatency::cpu.data     14301000                       # number of demand (read+write) miss ticks (Tick)
system.cpu.dcache.demandMissLatency::total     14301000                       # number of demand (read+write) miss ticks (Tick)
system.cpu.dcache.overallMissLatency::cpu.data     14301000                       # number of overall miss ticks (Tick)
system.cpu.dcache.overallMissLatency::total     14301000                       # number of overall miss ticks (Tick)
system.cpu.dcache.demandAccesses::cpu.data         2074                       # number of demand (read+write) accesses (Count)
system.cpu.dcache.demandAccesses::total          2074                       # number of demand (read+write) accesses (Count)
system.cpu.dcache.overallAccesses::cpu.data         2074                       # number of overall (read+write) accesses (Count)
system.cpu.dcache.overallAccesses::total         2074                       # number of overall (read+write) accesses (Count)
system.cpu.dcache.demandMissRate::cpu.data     0.064127                       # miss rate for demand accesses (Ratio)
system.cpu.dcache.demandMissRate::total      0.064127                       # miss rate for demand accesses (Ratio)
system.cpu.dcache.overallMissRate::cpu.data     0.064127                       # miss rate for overall accesses (Ratio)
system.cpu.dcache.overallMissRate::total     0.064127                       # miss rate for overall accesses (Ratio)
system.cpu.dcache.demandAvgMissLatency::cpu.data 107526.315789                       # average overall miss latency ((Cycle/Count))
system.cpu.dcache.demandAvgMissLatency::total 107526.315789                       # average overall miss latency ((Cycle/Count))
system.cpu.dcache.overallAvgMissLatency::cpu.data 107526.315789                       # average overall miss latency ((Cycle/Count))
system.cpu.dcache.overallAvgMissLatency::total 107526.315789                       # average overall miss latency ((Cycle/Count))
...
system.cpu.mmu.dtb.rdAccesses                    1123                       # TLB accesses on read requests (Count)
system.cpu.mmu.dtb.wrAccesses                     953                       # TLB accesses on write requests (Count)
system.cpu.mmu.dtb.rdMisses                        11                       # TLB misses on read requests (Count)
system.cpu.mmu.dtb.wrMisses                         9                       # TLB misses on write requests (Count)
system.cpu.mmu.dtb.walker.power_state.pwrStateResidencyTicks::UNDEFINED     57467000                       # Cumulative time (in ticks) in various power states (Tick)
system.cpu.mmu.itb.rdAccesses                       0                       # TLB accesses on read requests (Count)
system.cpu.mmu.itb.wrAccesses                    7940                       # TLB accesses on write requests (Count)
system.cpu.mmu.itb.rdMisses                         0                       # TLB misses on read requests (Count)
system.cpu.mmu.itb.wrMisses                        37                       # TLB misses on write requests (Count)
system.cpu.mmu.itb.walker.power_state.pwrStateResidencyTicks::UNDEFINED     57467000                       # Cumulative time (in ticks) in various power states (Tick)
system.cpu.power_state.pwrStateResidencyTicks::ON     57467000                       # Cumulative time (in ticks) in various power states (Tick)
system.cpu.thread_0.numInsts                        0                       # Number of Instructions committed (Count)
system.cpu.thread_0.numOps                          0                       # Number of Ops committed (Count)
system.cpu.thread_0.numMemRefs                      0                       # Number of Memory References (Count)
system.cpu.workload.numSyscalls                    11                       # Number of system calls (Count)
</code></pre>
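<p>The statistics above follow a plain <code>name value # description</code> text format, which makes them easy to post-process. Below is a minimal Python parsing sketch; the stat names and values are copied from the output above, and the parser itself is illustrative rather than a gem5 API:</p>
<pre><code class="language-python">def parse_stats(text):
    # Parse gem5 stats.txt-style lines ('name  value  # description')
    # into a dict mapping each stat name to a float value.
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) > 1:
            try:
                stats[parts[0]] = float(parts[1])
            except ValueError:
                pass  # skip lines whose value is not numeric
    return stats

sample = '''
system.cpu.dcache.demandAccesses::total    2074   # number of demand (read+write) accesses (Count)
system.cpu.dcache.demandMissRate::total    0.064127   # miss rate for demand accesses (Ratio)
'''
stats = parse_stats(sample)
misses = (stats['system.cpu.dcache.demandAccesses::total']
          * stats['system.cpu.dcache.demandMissRate::total'])
print(round(misses))  # 133 demand misses implied by the rate
</code></pre>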
<p>Later in the file are the memory controller statistics. These include information such as the number of bytes read by each component and the average bandwidth used by those components.</p>
<pre><code>system.mem_ctrl.bytesReadWrQ                        0                       # Total number of bytes read from write queue (Byte)
system.mem_ctrl.bytesReadSys                    23168                       # Total read bytes from the system interface side (Byte)
system.mem_ctrl.bytesWrittenSys                     0                       # Total written bytes from the system interface side (Byte)
system.mem_ctrl.avgRdBWSys               403153113.96105593                       # Average system read bandwidth in Byte/s ((Byte/Second))
system.mem_ctrl.avgWrBWSys                 0.00000000                       # Average system write bandwidth in Byte/s ((Byte/Second))
system.mem_ctrl.totGap                       57336000                       # Total gap between requests (Tick)
system.mem_ctrl.avgGap                      158386.74                       # Average gap between requests ((Tick/Count))
system.mem_ctrl.requestorReadBytes::cpu.inst        14656                       # Per-requestor bytes read from memory (Byte)
system.mem_ctrl.requestorReadBytes::cpu.data         8512                       # Per-requestor bytes read from memory (Byte)
system.mem_ctrl.requestorReadRate::cpu.inst 255033323.472601681948                       # Per-requestor bytes read from memory rate ((Byte/Second))
system.mem_ctrl.requestorReadRate::cpu.data 148119790.488454252481                       # Per-requestor bytes read from memory rate ((Byte/Second))
system.mem_ctrl.requestorReadAccesses::cpu.inst          229                       # Per-requestor read serviced memory accesses (Count)
system.mem_ctrl.requestorReadAccesses::cpu.data          133                       # Per-requestor read serviced memory accesses (Count)
system.mem_ctrl.requestorReadTotalLat::cpu.inst      6234000                       # Per-requestor read total memory access latency (Tick)
system.mem_ctrl.requestorReadTotalLat::cpu.data      4141000                       # Per-requestor read total memory access latency (Tick)
system.mem_ctrl.requestorReadAvgLat::cpu.inst     27222.71                       # Per-requestor read average memory access latency ((Tick/Count))
system.mem_ctrl.requestorReadAvgLat::cpu.data     31135.34                       # Per-requestor read average memory access latency ((Tick/Count))
system.mem_ctrl.dram.bytesRead::cpu.inst        14656                       # Number of bytes read from this memory (Byte)
system.mem_ctrl.dram.bytesRead::cpu.data         8512                       # Number of bytes read from this memory (Byte)
system.mem_ctrl.dram.bytesRead::total           23168                       # Number of bytes read from this memory (Byte)
system.mem_ctrl.dram.bytesInstRead::cpu.inst        14656                       # Number of instructions bytes read from this memory (Byte)
system.mem_ctrl.dram.bytesInstRead::total        14656                       # Number of instructions bytes read from this memory (Byte)
system.mem_ctrl.dram.numReads::cpu.inst           229                       # Number of read requests responded to by this memory (Count)
system.mem_ctrl.dram.numReads::cpu.data           133                       # Number of read requests responded to by this memory (Count)
system.mem_ctrl.dram.numReads::total              362                       # Number of read requests responded to by this memory (Count)
system.mem_ctrl.dram.bwRead::cpu.inst       255033323                       # Total read bandwidth from this memory ((Byte/Second))
system.mem_ctrl.dram.bwRead::cpu.data       148119790                       # Total read bandwidth from this memory ((Byte/Second))
system.mem_ctrl.dram.bwRead::total          403153114                       # Total read bandwidth from this memory ((Byte/Second))
system.mem_ctrl.dram.bwInstRead::cpu.inst    255033323                       # Instruction read bandwidth from this memory ((Byte/Second))
system.mem_ctrl.dram.bwInstRead::total      255033323                       # Instruction read bandwidth from this memory ((Byte/Second))
system.mem_ctrl.dram.bwTotal::cpu.inst      255033323                       # Total bandwidth to/from this memory ((Byte/Second))
system.mem_ctrl.dram.bwTotal::cpu.data      148119790                       # Total bandwidth to/from this memory ((Byte/Second))
system.mem_ctrl.dram.bwTotal::total         403153114                       # Total bandwidth to/from this memory ((Byte/Second))
system.mem_ctrl.dram.readBursts                   362                       # Number of DRAM read bursts (Count)
system.mem_ctrl.dram.writeBursts                    0                       # Number of DRAM write bursts (Count)
</code></pre>
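<p>Many of the derived statistics can be cross-checked against the raw counters. For example, <code>avgRdBWSys</code> is simply <code>bytesReadSys</code> divided by the simulated time. A quick check, assuming the default global frequency of 10^12 ticks per second and the final tick of 57467000 seen in the power-state residency stats earlier:</p>
<pre><code class="language-python"># Values copied from the stats output above
TICKS_PER_SECOND = 10**12      # gem5's default global frequency
sim_ticks = 57467000           # final simulated tick
bytes_read_sys = 23168         # system.mem_ctrl.bytesReadSys

sim_seconds = sim_ticks / TICKS_PER_SECOND
avg_rd_bw = bytes_read_sys / sim_seconds
print(round(avg_rd_bw, 2))     # 403153113.96, matching avgRdBWSys
</code></pre>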
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Using the default configuration scripts
doc: Learning gem5
parent: part1
permalink: /documentation/learning_gem5/part1/example_configs/
author: Jason Lowe-Power</h2>
<h1 id="使用默认配置脚本"><a class="header" href="#使用默认配置脚本">Using the default configuration scripts</a></h1>
<p>In this chapter, we'll explore using the default configuration scripts that come with gem5. gem5 ships with many configuration scripts that allow you to use it very quickly. However, a common pitfall is to use these scripts without fully understanding what is being simulated. When doing computer architecture research with gem5, it is important to fully understand the system you are simulating. This chapter will walk you through some of the important options and parts of the default configuration scripts.</p>
<p>In the last few chapters, you created your own configuration scripts from scratch. This is quite powerful, as it allows you to specify every system parameter. However, some systems are very complex to set up (e.g., a full-system ARM or x86 machine). Fortunately, the gem5 developers provide many scripts to bootstrap the process of building a system.</p>
<h2 id="目录结构导览"><a class="header" href="#目录结构导览">A tour of the directory structure</a></h2>
<p>All of gem5's configuration files can be found in <code>configs/</code>. The directory structure is shown below:</p>
<pre><code>configs/boot:
bbench-gb.rcS  bbench-ics.rcS  hack_back_ckpt.rcS  halt.sh

configs/common:
Benchmarks.py   Caches.py  cpu2000.py    FileSystemConfig.py  GPUTLBConfig.py   HMC.py       MemConfig.py   Options.py     Simulation.py
CacheConfig.py  cores      CpuConfig.py  FSConfig.py          GPUTLBOptions.py  __init__.py  ObjectList.py  SimpleOpts.py  SysPaths.py

configs/dist:
sw.py

configs/dram:
lat_mem_rd.py  low_power_sweep.py  sweep.py

configs/example:
apu_se.py  etrace_replay.py  garnet_synth_traffic.py  hmctest.py    hsaTopology.py  memtest.py  read_config.py  ruby_direct_test.py      ruby_mem_test.py     sc_main.py
arm        fs.py             hmc_hello.py             hmc_tgen.cfg  memcheck.py     noc_config  riscv           ruby_gpu_random_test.py  ruby_random_test.py  se.py

configs/learning_gem5:
part1  part2  part3  README

configs/network:
__init__.py  Network.py

configs/nvm:
sweep_hybrid.py  sweep.py

configs/ruby:
AMD_Base_Constructor.py  CHI.py        Garnet_standalone.py  __init__.py              MESI_Three_Level.py  MI_example.py      MOESI_CMP_directory.py  MOESI_hammer.py
CHI_config.py            CntrlBase.py  GPU_VIPER.py          MESI_Three_Level_HTM.py  MESI_Two_Level.py    MOESI_AMD_Base.py  MOESI_CMP_token.py      Ruby.py

configs/splash2:
cluster.py  run.py

configs/topologies:
BaseTopology.py  Cluster.py  CrossbarGarnet.py  Crossbar.py  CustomMesh.py  __init__.py  MeshDirCorners_XY.py  Mesh_westfirst.py  Mesh_XY.py  Pt2Pt.py
</code></pre>
<p>A brief description of each directory follows:</p>
<ul>
<li>
<p><strong>boot/</strong></p>
<p>These are rcS files used in full-system mode. The simulator loads these files after Linux boots, and they are executed by the shell. Most of them are used to control benchmarks when running in full-system mode. Some are utility functions, such as <code>hack_back_ckpt.rcS</code>. These files are covered in more depth in the chapter on full-system simulation.</p>
</li>
<li>
<p><strong>common/</strong></p>
<p>This directory contains a number of helper scripts and functions to create simulated systems. For example, <code>Caches.py</code> is similar to the <code>caches.py</code> and <code>caches_opts.py</code> files created in previous chapters. <code>Options.py</code> contains a variety of options that can be set on the command line, like the number of CPUs, the system clock, and so on. This is a good place to check whether the option you want to change already has a command-line parameter. <code>CacheConfig.py</code> contains the options and functions for setting cache parameters for the classic memory system. <code>MemConfig.py</code> provides some helper functions for setting up the memory system. <code>FSConfig.py</code> contains the functions needed to set up full-system simulation for many different kinds of systems. Full-system simulation is discussed further in its own chapter. <code>Simulation.py</code> contains many helper functions to set up and run gem5. A lot of the code in this file manages saving and restoring checkpoints. The example configuration files in <code>examples/</code> use the functions in this file to execute the gem5 simulation. This file is quite complicated, but it also allows a lot of flexibility in how the simulation is run.</p>
</li>
<li>
<p><strong>dram/</strong></p>
<p>Contains scripts to test DRAM.</p>
</li>
<li>
<p><strong>example/</strong></p>
<p>This directory contains some example gem5 configuration scripts that can be used out of the box to run gem5. Specifically, <code>se.py</code> and <code>fs.py</code> are quite useful. More on these files can be found in the next section. There are also some other utility configuration scripts in this directory.</p>
</li>
<li>
<p><strong>learning_gem5/</strong></p>
<p>This directory contains all of the gem5 configuration scripts found in the learning_gem5 book.</p>
</li>
<li>
<p><strong>network/</strong></p>
<p>This directory contains the configuration scripts for the HeteroGarnet network.</p>
</li>
<li>
<p><strong>nvm/</strong></p>
<p>This directory contains example scripts using the NVM interface.</p>
</li>
<li>
<p><strong>ruby/</strong></p>
<p>This directory contains the configuration scripts for Ruby and its included cache coherence protocols. More details can be found in the chapter on Ruby.</p>
</li>
<li>
<p><strong>splash2/</strong></p>
<p>This directory contains scripts to run the splash2 benchmark suite, with a few options to configure the simulated system.</p>
</li>
<li>
<p><strong>topologies/</strong></p>
<p>This directory contains implementations of the topologies that can be used when creating the Ruby cache hierarchy. More details can be found in the chapter on Ruby.</p>
</li>
</ul>
<h2 id="使用sepy和fspy"><a class="header" href="#使用sepy和fspy">Using <code>se.py</code> and <code>fs.py</code></a></h2>
<p>In this section, I'll discuss some of the common options that can be passed on the command line to <code>se.py</code> and <code>fs.py</code>. More details on how to run full-system simulation can be found in the full-system simulation chapter. Here I'll discuss the options that are common to the two files.</p>
<p>Most of the options discussed in this section can be found in <code>Options.py</code> and are registered in the function <code>addCommonOptions</code>. This section does not detail all of the options. To see them all, run the configuration script with <code>--help</code>, or read the script's source code.</p>
<p>First, let's simply run the hello world program without any parameters:</p>
<pre><code>build/X86/gem5.opt configs/example/se.py --cmd=tests/test-progs/hello/bin/x86/linux/hello
</code></pre>
<p>We get the following output:</p>
<pre><code>gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 version 21.0.0.0
gem5 compiled May 17 2021 18:05:59
gem5 started May 18 2021 00:33:42
gem5 executing on amarillo, pid 85168
command line: build/X86/gem5.opt configs/example/se.py --cmd=tests/test-progs/hello/bin/x86/linux/hello

Global frequency set at 1000000000000 ticks per second
warn: No dot file generated. Please install pydot to generate the dot file and pdf.
warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)
0: system.remote_gdb: listening for remote gdb on port 7005
**** REAL SIMULATION ****
info: Entering event queue @ 0.  Starting simulation...
Hello world!
Exiting @ tick 5943000 because exiting with last active thread context
</code></pre>
<p>However, this isn't a very interesting simulation at all! By default, gem5 uses the atomic CPU and atomic memory accesses, so no real timing data is reported! To confirm this, you can look at m5out/config.ini. The CPU is shown starting on line 51:</p>
<pre><code>[system.cpu]
type=AtomicSimpleCPU
children=interrupts isa mmu power_state tracer workload
branchPred=Null
checker=Null
clk_domain=system.cpu_clk_domain
cpu_id=0
do_checkpoint_insts=true
do_statistics_insts=true
</code></pre>
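<p>Since <code>config.ini</code> is INI-formatted, checks like this can be scripted instead of done by eye. A minimal sketch using Python's <code>configparser</code> on an inline fragment of the file above (a real script would call <code>read('m5out/config.ini')</code> instead of <code>read_string</code>):</p>
<pre><code class="language-python">import configparser

# A fragment of m5out/config.ini, as shown above
ini_text = '''
[system.cpu]
type=AtomicSimpleCPU
cpu_id=0
'''

cfg = configparser.ConfigParser()
cfg.read_string(ini_text)
cpu_type = cfg['system.cpu']['type']
print(cpu_type)  # AtomicSimpleCPU: no real timing data will be reported
</code></pre>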
<p>To actually run gem5 in timing mode, let's specify a CPU type. While we're at it, we can also specify sizes for the L1 caches.</p>
<pre><code>build/X86/gem5.opt configs/example/se.py --cmd=tests/test-progs/hello/bin/x86/linux/hello --cpu-type=TimingSimpleCPU --l1d_size=64kB --l1i_size=16kB
gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 version 21.0.0.0
gem5 compiled May 17 2021 18:05:59
gem5 started May 18 2021 00:36:10
gem5 executing on amarillo, pid 85269
command line: build/X86/gem5.opt configs/example/se.py --cmd=tests/test-progs/hello/bin/x86/linux/hello --cpu-type=TimingSimpleCPU --l1d_size=64kB --l1i_size=16kB

Global frequency set at 1000000000000 ticks per second
warn: No dot file generated. Please install pydot to generate the dot file and pdf.
warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)
0: system.remote_gdb: listening for remote gdb on port 7005
**** REAL SIMULATION ****
info: Entering event queue @ 0.  Starting simulation...
Hello world!
Exiting @ tick 454646000 because exiting with last active thread context
</code></pre>
<p>Now, let's check the config.ini file and make sure these options propagated correctly to the final system. If you search <code>m5out/config.ini</code> for "cache", you'll find that no caches were created! Even though we specified the sizes of the caches, we didn't specify that the system should use caches, so they weren't created. The correct command line is:</p>
<pre><code>build/X86/gem5.opt configs/example/se.py --cmd=tests/test-progs/hello/bin/x86/linux/hello --cpu-type=TimingSimpleCPU --l1d_size=64kB --l1i_size=16kB --caches
gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 version 21.0.0.0
gem5 compiled May 17 2021 18:05:59
gem5 started May 18 2021 00:37:03
gem5 executing on amarillo, pid 85560
command line: build/X86/gem5.opt configs/example/se.py --cmd=tests/test-progs/hello/bin/x86/linux/hello --cpu-type=TimingSimpleCPU --l1d_size=64kB --l1i_size=16kB --caches

Global frequency set at 1000000000000 ticks per second
warn: No dot file generated. Please install pydot to generate the dot file and pdf.
warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)
0: system.remote_gdb: listening for remote gdb on port 7005
**** REAL SIMULATION ****
info: Entering event queue @ 0.  Starting simulation...
Hello world!
Exiting @ tick 31680000 because exiting with last active thread context
</code></pre>
<p>On the last line, we see that the total time went from 454646000 ticks to 31680000 ticks, much faster! It looks like the caches are probably enabled now. But it's always a good idea to double-check the <code>config.ini</code> file.</p>
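<p>The improvement is easy to quantify from the two exit ticks reported above:</p>
<pre><code class="language-python"># Exit ticks copied from the two TimingSimpleCPU runs above
ticks_without_caches = 454646000   # no --caches flag
ticks_with_caches = 31680000       # with --caches
speedup = ticks_without_caches / ticks_with_caches
print(f'{speedup:.1f}x')           # 14.4x fewer simulated ticks
</code></pre>
<p>The <code>config.ini</code> excerpt below confirms that the data cache was in fact created.</p>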
<pre><code>[system.cpu.dcache]
type=Cache
children=power_state replacement_policy tags
addr_ranges=0:18446744073709551615
assoc=2
clk_domain=system.cpu_clk_domain
clusivity=mostly_incl
compressor=Null
data_latency=2
demand_mshr_reserve=1
eventq_index=0
is_read_only=false
max_miss_count=0
move_contractions=true
mshrs=4
power_model=
power_state=system.cpu.dcache.power_state
prefetch_on_access=false
prefetcher=Null
replace_expansions=true
replacement_policy=system.cpu.dcache.replacement_policy
response_latency=2
sequential_access=false
size=65536
system=system
tag_latency=2
tags=system.cpu.dcache.tags
tgts_per_mshr=20
warmup_percentage=0
write_allocator=Null
write_buffers=8
writeback_clean=false
cpu_side=system.cpu.dcache_port
mem_side=system.membus.cpu_side_ports[2]
</code></pre>
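<p>From the parameters above you can work out the cache geometry. A quick sketch; note that the 64-byte block size is an assumption (it is gem5's default cache line size and does not appear in this excerpt):</p>
<pre><code class="language-python"># dcache geometry implied by the config.ini values above
size_bytes = 65536   # size=65536, the 64kB we requested
assoc = 2            # assoc=2
block_size = 64      # assumed: gem5's default cache line size

num_blocks = size_bytes // block_size
num_sets = num_blocks // assoc
print(num_blocks, num_sets)  # 1024 blocks arranged in 512 2-way sets
</code></pre>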
<h2 id="一些常见的选项sepy和fspy"><a class="header" href="#一些常见的选项sepy和fspy">Some common options for <code>se.py</code> and <code>fs.py</code></a></h2>
<p>All of the possible options are printed when you run:</p>
<pre><code>build/X86/gem5.opt configs/example/se.py --help
</code></pre>
<p>Some important options from that list are below:</p>
<ul>
<li><code>--cpu-type=CPU_TYPE</code>
<ul>
<li>The type of CPU to run. This is an important parameter to always set. The default is atomic, which does not perform a timing simulation.</li>
</ul>
</li>
<li><code>--sys-clock=SYS_CLOCK</code>
<ul>
<li>The top-level clock for blocks running at system speed.</li>
</ul>
</li>
<li><code>--cpu-clock=CPU_CLOCK</code>
<ul>
<li>The clock for blocks running at CPU speed. This is separate from the system clock above.</li>
</ul>
</li>
<li><code>--mem-type=MEM_TYPE</code>
<ul>
<li>The type of memory to use. Options include different DDR memories and the ruby memory controller.</li>
</ul>
</li>
<li><code>--caches</code>
<ul>
<li>Perform the simulation with classic caches.</li>
</ul>
</li>
<li><code>--l2cache</code>
<ul>
<li>Perform the simulation with an L2 cache, if classic caches are being used.</li>
</ul>
</li>
<li><code>--ruby</code>
<ul>
<li>Use Ruby instead of the classic caches as the cache system simulation.</li>
</ul>
</li>
<li><code>-m TICKS, --abs-max-tick=TICKS</code>
<ul>
<li>Run to the specified absolute simulation tick, including ticks from a restored checkpoint. This is useful if you only want to simulate a certain amount of simulated time.</li>
</ul>
</li>
<li><code>-I MAXINSTS, --maxinsts=MAXINSTS</code>
<ul>
<li>Total number of instructions to simulate (default: run forever). This is useful if you want to stop simulation after a certain number of instructions have been executed.</li>
</ul>
</li>
<li><code>-c CMD, --cmd=CMD</code>
<ul>
<li>The binary to run in syscall-emulation mode.</li>
</ul>
</li>
<li><code>-o OPTIONS, --options=OPTIONS</code>
<ul>
<li>The options to pass to the binary, using "" around the entire string. This is useful when you want to run a command that takes options. You can pass both arguments and options (e.g., <code>--whatever</code>) through this variable.</li>
</ul>
</li>
<li><code>--output=OUTPUT</code>
<ul>
<li>Redirect stdout to a file. This is useful if you want to redirect the output of the simulated application to a file instead of printing it to the screen. Note: to redirect gem5's own output, you have to pass a parameter before the configuration script.</li>
</ul>
</li>
<li><code>--errout=ERROUT</code>
<ul>
<li>Redirect stderr to a file. Similar to the above.</li>
</ul>
</li>
</ul>
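<p>When sweeping these options from a wrapper script, it helps to assemble the command line programmatically. A hypothetical sketch; the paths and option values are placeholders taken from the earlier examples, not output of any gem5 API:</p>
<pre><code class="language-python">import shlex

# Option values to sweep or fix (placeholders from the examples above)
opts = {
    '--cmd': 'tests/test-progs/hello/bin/x86/linux/hello',
    '--cpu-type': 'TimingSimpleCPU',
    '--l1d_size': '64kB',
    '--l1i_size': '16kB',
}
flags = ['--caches', '--l2cache']

cmd = ['build/X86/gem5.opt', 'configs/example/se.py']
cmd += [f'{key}={value}' for key, value in opts.items()]
cmd += flags
print(shlex.join(cmd))  # a list like cmd can be passed to subprocess.run
</code></pre>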
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Extending gem5 for ARM
doc: Learning gem5
parent: part1
permalink: /documentation/learning_gem5/part1/extending_configs
author: Julian T. Angeles, Thomas E. Hansen</h2>
<h1 id="为-arm-扩展-gem5"><a class="header" href="#为-arm-扩展-gem5">Extending gem5 for ARM</a></h1>
<p>This chapter assumes you've already built a basic x86 system with gem5 and created a simple configuration script.</p>
<h2 id="下载-arm-二进制文件"><a class="header" href="#下载-arm-二进制文件">Downloading the ARM binaries</a></h2>
<p>Let's start by downloading some ARM benchmark binaries. From the root of the gem5 folder:</p>
<pre><code>mkdir -p cpu_tests/benchmarks/bin/arm
cd cpu_tests/benchmarks/bin/arm
wget dist.gem5.org/dist/current/gem5/cpu_tests/benchmarks/bin/arm/Bubblesort
wget dist.gem5.org/dist/current/gem5/cpu_tests/benchmarks/bin/arm/FloatMM
</code></pre>
<p>We'll use these to further test our ARM system.</p>
<h2 id="构建-gem5-来运行-arm-二进制文件"><a class="header" href="#构建-gem5-来运行-arm-二进制文件">Building gem5 to run ARM binaries</a></h2>
<p>Just as we did when we first built our basic x86 system, we run the same command, but this time we want it to compile with the default ARM configuration. To do this, we simply replace X86 with ARM:</p>
<pre><code>scons build/ARM/gem5.opt -j20
</code></pre>
<p>Once compilation finishes, you should have a working gem5 executable at <code>build/ARM/gem5.opt</code>.</p>
<h2 id="修改-simplepy-以运行-arm-二进制文件"><a class="header" href="#修改-simplepy-以运行-arm-二进制文件">Modifying simple.py to run ARM binaries</a></h2>
<p>Before we can run any ARM binaries with our new system, we have to make a couple of tweaks to our simple.py.</p>
<p>If you recall from when we created our simple configuration script, for any ISA other than x86 we don't have to connect the PIO and interrupt ports to the memory bus. So let's remove these 3 lines:</p>
<pre><code>system.cpu.createInterruptController()
#system.cpu.interrupts[0].pio = system.membus.master
#system.cpu.interrupts[0].int_master = system.membus.slave
#system.cpu.interrupts[0].int_slave = system.membus.master

system.system_port = system.membus.slave
</code></pre>
<p>You can delete them or comment them out as above. Next, let's set the process command to one of our ARM benchmark binaries:</p>
<pre><code>process.cmd = ['cpu_tests/benchmarks/bin/arm/Bubblesort']
</code></pre>
<p>If you'd like to test a simple hello program as before, just replace x86 with arm:</p>
<pre><code>process.cmd = ['tests/test-progs/hello/bin/arm/linux/hello']
</code></pre>
<h2 id="运行-gem5-1"><a class="header" href="#运行-gem5-1">Running gem5</a></h2>
<p>Simply run it as before, except replace X86 with ARM:</p>
<pre><code>build/ARM/gem5.opt configs/tutorial/simple.py
</code></pre>
<p>If you set the process to the Bubblesort benchmark, your output should look like this:</p>
<pre><code>gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 compiled Oct  3 2019 16:02:35
gem5 started Oct  6 2019 13:22:25
gem5 executing on amarillo, pid 77129
command line: build/ARM/gem5.opt configs/tutorial/simple.py

Global frequency set at 1000000000000 ticks per second
warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)
0: system.remote_gdb: listening for remote gdb on port 7002
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
info: Increasing stack size by one page.
warn: readlink() called on '/proc/self/exe' may yield unexpected results in various settings.
      Returning '/home/jtoya/gem5/cpu_tests/benchmarks/bin/arm/Bubblesort'
-50000
Exiting @ tick 258647411000 because exiting with last active thread context
</code></pre>
<h2 id="arm-全系统仿真"><a class="header" href="#arm-全系统仿真">ARM full-system simulation</a></h2>
<p>To run an ARM FS simulation, some changes to the setup are required.</p>
<p>If you haven't already, from the root of the gem5 repository, enter the <code>util/term/</code> directory by running <code>cd</code>:</p>
<pre><code>$ cd util/term/
</code></pre>
<p>Then compile the <code>m5term</code> binary by running</p>
<pre><code>$ make
</code></pre>
<p>The gem5 repository comes with example system setups and configurations. These can be found in the <code>configs/example/arm/</code> directory.</p>
<p>A collection of full-system Linux image files is available <a href="https://www.gem5.org/documentation/general_docs/fullsystem/guest_binaries">here</a>. Save them in a directory and remember its path. For example, you could store them in</p>
<pre><code>/path/to/user/gem5/fs_images/
</code></pre>
<p>The rest of this example will assume that the <code>fs_images</code> directory contains the extracted FS images.</p>
<p>Once the images have been downloaded, execute the following command in a terminal:</p>
<pre><code>$ export IMG_ROOT=/absolute/path/to/fs_images/&lt;image-directory-name&gt;
</code></pre>
<p>Replace "&lt;image-directory-name&gt;" with the name of the directory extracted from the downloaded image file, without the angle brackets.</p>
<p>We are now ready to run an FS ARM simulation. From the root of the gem5 repository, run:</p>
<pre><code class="language-bash">$ ./build/ARM/gem5.opt configs/example/arm/fs_bigLITTLE.py \
    --caches \
    --bootloader=&quot;$IMG_ROOT/binaries/&lt;bootloader-name&gt;&quot; \
    --kernel=&quot;$IMG_ROOT/binaries/&lt;kernel-name&gt;&quot; \
    --disk=&quot;$IMG_ROOT/disks/&lt;disk-image-name&gt;&quot; \
    --bootscript=path/to/bootscript.rcS
</code></pre>
<p>Replace anything in angle brackets with the name of the corresponding directory or file, without the angle brackets.</p>
<p>You can then attach to the simulation by running the following in a different terminal window:</p>
<pre><code class="language-bash">$ ./util/term/m5term 3456
</code></pre>
<p>The full details of what the <code>fs_bigLITTLE.py</code> script supports can be obtained by running:</p>
<pre><code class="language-bash">$ ./build/ARM/gem5.opt configs/example/arm/fs_bigLITTLE.py --help
</code></pre>
<blockquote>
<p><strong>An aside on FS simulations:</strong></p>
<p>Note that FS simulations take a long time; as in "1 hour to boot the kernel" long! There are ways to "fast-forward" the simulation and then resume detailed simulation at an interesting point, but these are beyond the scope of this chapter.</p>
</blockquote>
<div style="break-before: page; page-break-before: always;"></div><h1 id="设置开发环境"><a class="header" href="#设置开发环境">Setting up your development environment</a></h1>
<p>This covers getting started with developing gem5.</p>
<h2 id="gem5-风格的指南"><a class="header" href="#gem5-风格的指南">gem5 style guide</a></h2>
<p>When modifying any open-source project, it is important to follow the project's style guidelines. Details on gem5 style can be found on the gem5 <a href="http://www.gem5.org/documentation/general_docs/development/coding_style/">coding style page</a>.</p>
<p>To help you conform to the style guide, gem5 includes a script that runs whenever you commit a changeset in git. This script should be automatically added to your .git/config file by SCons the first time you build gem5. Please do not ignore these warnings/errors. However, in the rare case where you are trying to commit a file that doesn't conform to the gem5 style guidelines (e.g., a file from outside the gem5 source tree), you can use the git option <code>--no-verify</code> to skip running the style checker.</p>
<p>The key takeaways from the style guide are:</p>
<ul>
<li>Use 4 spaces, not tabs</li>
<li>Sort the includes</li>
<li>Use CamelCase for class names, camelCase for member variables and functions, and snake_case for local variables</li>
<li>Document your code</li>
</ul>
<h2 id="git-分支"><a class="header" href="#git-分支">git branches</a></h2>
<p>Most people developing with gem5 use git's branch feature to track their changes. This makes it quite simple to submit your changes back to gem5. Additionally, using branches can make it easier to update gem5 with new changes that other people make while keeping your own changes separate. The <a href="https://git-scm.com/book/en/v2">Git book</a> has a great <a href="https://git-scm.com/book/en/v2/Git-Branching-Branches-in-a-Nutshell">chapter</a> describing the details of how to use branches.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="创建一个非常简单的-simobject"><a class="header" href="#创建一个非常简单的-simobject">Creating a <em>very</em> simple SimObject</a></h1>
<p><strong>Note</strong>: gem5 already has a SimObject subclass named <code>SimpleObject</code>. Implementing another <code>SimpleObject</code> SimObject will result in ambiguity problems.</p>
<p>Almost all objects in gem5 inherit from the base SimObject type. SimObjects export the main interfaces to all objects in gem5. SimObjects are wrapped <code>C++</code> objects that are accessible from the <code>Python</code> configuration scripts.</p>
<p>SimObjects can have many parameters, which are set via the <code>Python</code> configuration files. In addition to simple parameters like integers and floating-point numbers, they can also have other SimObjects as parameters. This allows you to create complex system hierarchies, just like real machines.</p>
<p>In this chapter, we will walk through creating a simple "HelloWorld" SimObject. The goal is to introduce you to how SimObjects are created and the boilerplate code required for all SimObjects. We will also create a simple <code>Python</code> configuration script that instantiates our SimObject.</p>
<p>In the next few chapters, we will take this simple SimObject and expand on it to include <a href="https://www.gem5.org/documentation/learning_gem5/part2/debugging">debugging support</a>, <a href="https://www.gem5.org/documentation/learning_gem5/part2/events">dynamic events</a>, and <a href="https://www.gem5.org/documentation/learning_gem5/part2/parameters">parameters</a>.</p>
<p><strong>Using git branches</strong></p>
<p>It is common to use a new git branch for each new feature you add to gem5.</p>
<p>The first step when adding a new feature or modifying something in gem5 is to create a new branch to store your changes. Details on git branches can be found in the Git book.</p>
<pre><code class="language-bash">git checkout -b hello-simobject
</code></pre>
<h2 id="第-1-步为您的新-simobject-创建一个-python-类"><a class="header" href="#第-1-步为您的新-simobject-创建一个-python-类">Step 1: Create a Python class for your new SimObject</a></h2>
<p>Each SimObject has a Python class associated with it. This Python class describes the parameters of your SimObject that can be controlled from the Python configuration files. We'll start by configuring our simple SimObject with no parameters. Thus, we simply need to declare a new class for our SimObject and set its name and the C++ header file that will define the C++ class for the SimObject.</p>
<p>We can create <code>HelloObject.py</code> in <code>src/learning_gem5/part2</code>. If you have cloned the gem5 repository, you will find the completed versions of the files mentioned in this tutorial under <code>src/learning_gem5/part2</code> and <code>configs/learning_gem5/part2</code>. You can delete them or move them elsewhere in order to follow along with this tutorial.</p>
<pre><code class="language-python">from m5.params import *
from m5.SimObject import SimObject

class HelloObject(SimObject):
    type = 'HelloObject'
    cxx_header = &quot;learning_gem5/part2/hello_object.hh&quot;
</code></pre>
<p>It is not required that the <code>type</code> be the same as the name of the class, but it is the convention. The <code>type</code> is the C++ class that you are wrapping with this Python SimObject. Only in special circumstances should the <code>type</code> and the class name differ.</p>
<p>The <code>cxx_header</code> is the file that contains the declaration of the class used as the <code>type</code> parameter. Again, the convention is to use the name of the SimObject in all lowercase with underscores, but this is only convention. You can specify any header file here.</p>
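<p>The lowercase-with-underscores convention is mechanical, so it can be written down as a small helper. This is illustrative only: gem5 does not derive <code>cxx_header</code> from <code>type</code>; you always set it explicitly.</p>
<pre><code class="language-python">def conventional_header_name(simobject_type):
    # e.g. 'HelloObject' -> 'hello_object.hh' (convention only;
    # gem5 does not generate this name for you)
    out = []
    for i, ch in enumerate(simobject_type):
        if i > 0 and ch.isupper():
            out.append('_')
        out.append(ch.lower())
    return ''.join(out) + '.hh'

print(conventional_header_name('HelloObject'))  # hello_object.hh
</code></pre>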
<h2 id="第-2-步在-c-中实现您的-simobject"><a class="header" href="#第-2-步在-c-中实现您的-simobject">Step 2: Implement your SimObject in C++</a></h2>
<p>Next, we need to create <code>hello_object.hh</code> and <code>hello_object.cc</code> in <code>src/learning_gem5/part2/</code>, which will implement the <code>HelloObject</code>.</p>
<p>We'll start with the header file for our <code>C++</code> object. By convention, gem5 wraps all header file contents between <code>#ifndef/#endif</code> macro guards named after the file and the directory it is in, to prevent circular includes.</p>
<p>The only thing we need to do in the file is declare our class. Since <code>HelloObject</code> is a SimObject, it must inherit from the C++ SimObject class. Most of the time, your SimObject's parent will be a subclass of SimObject, not SimObject itself.</p>
<p>The SimObject class specifies many virtual functions. However, none of these functions are pure virtual, so in the simplest case there is no need to implement any function except for the constructor.</p>
<p>The constructor for all SimObjects assumes it will take a parameter object. This parameter object is automatically created by the build system and is based on the <code>Python</code> class for the SimObject, like the one we created above. The name for this parameter type is generated <strong>automatically from the name of your object</strong>. For our "HelloObject", the parameter type's name is "HelloObjectParams".</p>
<p>The code required for our simple header file is listed below.</p>
<pre><code class="language-cpp">#ifndef __LEARNING_GEM5_HELLO_OBJECT_HH__
#define __LEARNING_GEM5_HELLO_OBJECT_HH__

#include &quot;params/HelloObject.hh&quot;
#include &quot;sim/sim_object.hh&quot;

class HelloObject : public SimObject
{
  public:
    HelloObject(const HelloObjectParams &amp;p);
};

#endif // __LEARNING_GEM5_HELLO_OBJECT_HH__
</code></pre>
<p>Next, we need to implement the SimObject in the <code>.cc</code> file. The first function is the constructor of <code>HelloObject</code>. Here we simply pass the parameter object to the SimObject parent and print "Hello world!"</p>
<p>Normally, you would <strong>never</strong> use <code>std::cout</code> in gem5. Instead, you should use debug flags. In the <a href="https://www.gem5.org/documentation/learning_gem5/part2/debugging">next chapter</a>, we will modify this to use debug flags instead. For now, we'll simply use <code>std::cout</code> because it is simple.</p>
<pre><code class="language-cpp">#include &quot;learning_gem5/part2/hello_object.hh&quot;

#include &lt;iostream&gt;

HelloObject::HelloObject(const HelloObjectParams &amp;params) :
    SimObject(params)
{
    std::cout &lt;&lt; &quot;Hello World! From a SimObject!&quot; &lt;&lt; std::endl;
}
</code></pre>
<p><strong>Note</strong>: If your SimObject's constructor follows the signature</p>
<pre><code class="language-cpp">Foo(const FooParams &amp;)
</code></pre>
<p>then <code>FooParams::create()</code> will be automatically defined. The <code>create()</code> method is used to call the SimObject constructor and return an instance of the SimObject. Most SimObjects will follow this pattern; however, if your <a href="http://doxygen.gem5.org/release/current/classSimObject.html#details">SimObject</a> does not, the <a href="http://doxygen.gem5.org/release/current/classSimObject.html#details">gem5 SimObject documentation</a> provides more information about manually implementing the <code>create()</code> method.</p>
<h2 id="第-3-步注册-simobject-和-c-文件"><a class="header" href="#第-3-步注册-simobject-和-c-文件">Step 3: Register the SimObject and C++ file</a></h2>
<p>For the <code>C++</code> code to compile and the <code>Python</code> file to be parsed correctly, we need to tell the build system about these files. gem5 uses SCons as its build system, so you simply create a SConscript file in the directory containing the SimObject's code. If that directory already has a SConscript file, simply add the following declarations to it.</p>
<p>This file is simply a normal <code>Python</code> file, so you can write any <code>Python</code> code you want in it. Some of the scripting can become quite complicated. gem5 leverages this to automatically create code for SimObjects and to compile domain-specific languages like SLICC and the ISA language.</p>
<p>In the SConscript file, there are a number of functions automatically defined after you import it. See the section on that...</p>
<p>To get your new SimObject to compile, you simply need to create a new file called "SConscript" in the <code>src/learning_gem5/part2</code> directory. In this file, you must declare the SimObject and the <code>.cc</code> file. Below is the required code.</p>
<pre><code class="language-python">Import('*')

SimObject('HelloObject.py')
Source('hello_object.cc')
</code></pre>
<h2 id="第-4-步重新构建-gem5"><a class="header" href="#第-4-步重新构建-gem5">Step 4: (Re)-build gem5</a></h2>
<p>To compile and link your new files, you simply need to recompile gem5. The example below assumes you are using the x86 ISA, but nothing in our object requires an ISA, so this will work with any of gem5's ISAs.</p>
<pre><code class="language-bash">scons build/X86/gem5.opt
</code></pre>
<h2 id="第-5-步创建配置脚本以使用您的新-simobject"><a class="header" href="#第-5-步创建配置脚本以使用您的新-simobject">Step 5: Create the config script to use your new SimObject</a></h2>
<p>Now that you have implemented a SimObject and it has been compiled into gem5, you need to create or modify a <code>Python</code> config file, <code>run_hello.py</code> in <code>configs/learning_gem5/part2</code>, to instantiate your object. Since your object is very simple, a system object is not required! CPUs are not needed, nor caches, nor anything except a <code>Root</code> object. All gem5 instances require a <code>Root</code> object.</p>
<p>To create a <em>very</em> simple configuration script, first, import m5 and all of the objects you have compiled.</p>
<pre><code class="language-python">import m5
from m5.objects import *
</code></pre>
<p>Next, you have to instantiate the <code>Root</code> object, as required by all gem5 instances.</p>
<pre><code class="language-python">root = Root(full_system = False)
</code></pre>
<p>Now, you can instantiate your <code>HelloObject</code>. All you need to do is call the <code>Python</code> "constructor". Later, we will look at how you can specify parameters via the <code>Python</code> constructor. In addition to creating an instance of your object, you need to make sure that it is a member of the <code>Root</code> object. Only SimObjects that are members of <code>Root</code> are instantiated in <code>C++</code>.</p>
<pre><code class="language-python">root.hello = HelloObject()
</code></pre>
<p>Finally, you need to call <code>instantiate</code> on the <code>m5</code> module and run the simulation!</p>
<pre><code class="language-python">m5.instantiate()

print(&quot;Beginning simulation!&quot;)
exit_event = m5.simulate()
print('Exiting @ tick {} because {}'
      .format(m5.curTick(), exit_event.getCause()))
</code></pre>
<p>Remember to rebuild gem5 after modifying files in the src/ directory. The command line to run the config file is in the output after "command line:". The output should look something like this:</p>
<p>Note: If the code for the upcoming "Adding parameters to SimObjects and more events" chapter (goodbye_object) is in your <code>src/learning_gem5/part2</code> directory, run_hello.py will fail with an error. If you delete those files or move them outside of the gem5 directory, <code>run_hello.py</code> should produce the output below.</p>
<pre><code class="language-bash">    gem5 Simulator System.  http://gem5.org
    gem5 is copyrighted software; use the --copyright option for details.

    gem5 compiled May  4 2016 11:37:41
    gem5 started May  4 2016 11:44:28
    gem5 executing on mustardseed.cs.wisc.edu, pid 22480
    command line: build/X86/gem5.opt configs/learning_gem5/part2/run_hello.py

    Global frequency set at 1000000000000 ticks per second
    Hello World! From a SimObject!
    Beginning simulation!
    info: Entering event queue @ 0.  Starting simulation...
    Exiting @ tick 18446744073709551615 because simulate() limit reached
</code></pre>
<p>Congrats! You have written your first SimObject. In the next chapters, we will extend this SimObject and explore what you can do with SimObjects.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="调试gem5"><a class="header" href="#调试gem5">Debugging gem5</a></h1>
<p>In the <a href="https://www.gem5.org/documentation/learning_gem5/part2/helloobject">previous chapters</a>, we covered how to create a very simple SimObject. In this chapter, we will replace the simple print to <code>stdout</code> with gem5's debugging support.</p>
<p>gem5 provides support for <code>printf</code>-style tracing/debugging of your code via <em>debug flags</em>. These flags allow every component to have many debug-print statements without all of them being enabled at the same time. When running gem5, you can specify from the command line which debug flags to enable.</p>
<h2 id="使用调试标志"><a class="header" href="#使用调试标志">Using debug flags</a></h2>
<p>For instance, when running the first simple.py script from the simple-config chapter, if you enable the <code>DRAM</code> debug flag, you get the following output. Note that this generates <em>a lot</em> of output to the console (about 7 MB).</p>
<pre><code class="language-bash">    build/X86/gem5.opt --debug-flags=DRAM configs/learning_gem5/part1/simple.py | head -n 50
</code></pre>
<pre><code class="language-bash">gem5 Simulator System.  http://gem5.org
DRAM device capacity (gem5 is copyrighted software; use the --copyright option for details.

gem5 compiled Jan  3 2017 16:03:38
gem5 started Jan  3 2017 16:09:53
gem5 executing on chinook, pid 19223
command line: build/X86/gem5.opt --debug-flags=DRAM configs/learning_gem5/part1/simple.py

Global frequency set at 1000000000000 ticks per second
      0: system.mem_ctrl: Memory capacity 536870912 (536870912) bytes
      0: system.mem_ctrl: Row buffer size 8192 bytes with 128 columns per row buffer
      0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
      0: system.mem_ctrl: recvTimingReq: request ReadReq addr 400 size 8
      0: system.mem_ctrl: Read queue limit 32, current size 0, entries needed 1
      0: system.mem_ctrl: Address: 400 Rank 0 Bank 0 Row 0
      0: system.mem_ctrl: Read queue limit 32, current size 0, entries needed 1
      0: system.mem_ctrl: Adding to read queue
      0: system.mem_ctrl: Request scheduled immediately
      0: system.mem_ctrl: Single request, going to a free rank
      0: system.mem_ctrl: Timing access to addr 400, rank/bank/row 0 0 0
      0: system.mem_ctrl: Activate at tick 0
      0: system.mem_ctrl: Activate bank 0, rank 0 at tick 0, now got 1 active
      0: system.mem_ctrl: Access to 400, ready at 46250 bus busy until 46250.
  46250: system.mem_ctrl: processRespondEvent(): Some req has reached its readyTime
  46250: system.mem_ctrl: number of read entries for rank 0 is 0
  46250: system.mem_ctrl: Responding to Address 400..   46250: system.mem_ctrl: Done
  77000: system.mem_ctrl: recvTimingReq: request ReadReq addr 400 size 8
  77000: system.mem_ctrl: Read queue limit 32, current size 0, entries needed 1
  77000: system.mem_ctrl: Address: 400 Rank 0 Bank 0 Row 0
  77000: system.mem_ctrl: Read queue limit 32, current size 0, entries needed 1
  77000: system.mem_ctrl: Adding to read queue
  77000: system.mem_ctrl: Request scheduled immediately
  77000: system.mem_ctrl: Single request, going to a free rank
  77000: system.mem_ctrl: Timing access to addr 400, rank/bank/row 0 0 0
  77000: system.mem_ctrl: Access to 400, ready at 101750 bus busy until 101750.
 101750: system.mem_ctrl: processRespondEvent(): Some req has reached its readyTime
 101750: system.mem_ctrl: number of read entries for rank 0 is 0
 101750: system.mem_ctrl: Responding to Address 400..  101750: system.mem_ctrl: Done
 132000: system.mem_ctrl: recvTimingReq: request ReadReq addr 400 size 8
 132000: system.mem_ctrl: Read queue limit 32, current size 0, entries needed 1
 132000: system.mem_ctrl: Address: 400 Rank 0 Bank 0 Row 0
 132000: system.mem_ctrl: Read queue limit 32, current size 0, entries needed 1
 132000: system.mem_ctrl: Adding to read queue
 132000: system.mem_ctrl: Request scheduled immediately
 132000: system.mem_ctrl: Single request, going to a free rank
 132000: system.mem_ctrl: Timing access to addr 400, rank/bank/row 0 0 0
 132000: system.mem_ctrl: Access to 400, ready at 156750 bus busy until 156750.
 156750: system.mem_ctrl: processRespondEvent(): Some req has reached its readyTime
 156750: system.mem_ctrl: number of read entries for rank 0 is 0
</code></pre>
<p>Alternatively, you may want to debug based on the exact instructions the CPU is executing. For this, the <code>Exec</code> debug flag can be useful. This debug flag shows details of how each instruction is executed by the simulated CPU.</p>
<pre><code class="language-bash">    build/X86/gem5.opt --debug-flags=Exec configs/learning_gem5/part1/simple.py | head -n 50
</code></pre>
<pre><code class="language-bash">gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 compiled Jan  3 2017 16:03:38
gem5 started Jan  3 2017 16:11:47
gem5 executing on chinook, pid 19234
command line: build/X86/gem5.opt --debug-flags=Exec configs/learning_gem5/part1/simple.py

Global frequency set at 1000000000000 ticks per second
      0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000
warn: ClockedObject: More than one power state change request encountered within the same simulation tick
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
  77000: system.cpu T0 : @_start    : xor   rbp, rbp
  77000: system.cpu T0 : @_start.0  :   XOR_R_R : xor   rbp, rbp, rbp : IntAlu :  D=0x0000000000000000
 132000: system.cpu T0 : @_start+3    : mov r9, rdx
 132000: system.cpu T0 : @_start+3.0  :   MOV_R_R : mov   r9, r9, rdx : IntAlu :  D=0x0000000000000000
 187000: system.cpu T0 : @_start+6    : pop rsi
 187000: system.cpu T0 : @_start+6.0  :   POP_R : ld   t1, SS:[rsp] : MemRead :  D=0x0000000000000001 A=0x7fffffffee30
 250000: system.cpu T0 : @_start+6.1  :   POP_R : addi   rsp, rsp, 0x8 : IntAlu :  D=0x00007fffffffee38
 250000: system.cpu T0 : @_start+6.2  :   POP_R : mov   rsi, rsi, t1 : IntAlu :  D=0x0000000000000001
 360000: system.cpu T0 : @_start+7    : mov rdx, rsp
 360000: system.cpu T0 : @_start+7.0  :   MOV_R_R : mov   rdx, rdx, rsp : IntAlu :  D=0x00007fffffffee38
 415000: system.cpu T0 : @_start+10    : and    rax, 0xfffffffffffffff0
 415000: system.cpu T0 : @_start+10.0  :   AND_R_I : limm   t1, 0xfffffffffffffff0 : IntAlu :  D=0xfffffffffffffff0
 415000: system.cpu T0 : @_start+10.1  :   AND_R_I : and   rsp, rsp, t1 : IntAlu :  D=0x0000000000000000
 470000: system.cpu T0 : @_start+14    : push   rax
 470000: system.cpu T0 : @_start+14.0  :   PUSH_R : st   rax, SS:[rsp + 0xfffffffffffffff8] : MemWrite :  D=0x0000000000000000 A=0x7fffffffee28
 491000: system.cpu T0 : @_start+14.1  :   PUSH_R : subi   rsp, rsp, 0x8 : IntAlu :  D=0x00007fffffffee28
 546000: system.cpu T0 : @_start+15    : push   rsp
 546000: system.cpu T0 : @_start+15.0  :   PUSH_R : st   rsp, SS:[rsp + 0xfffffffffffffff8] : MemWrite :  D=0x00007fffffffee28 A=0x7fffffffee20
 567000: system.cpu T0 : @_start+15.1  :   PUSH_R : subi   rsp, rsp, 0x8 : IntAlu :  D=0x00007fffffffee20
 622000: system.cpu T0 : @_start+16    : mov    r15, 0x40a060
 622000: system.cpu T0 : @_start+16.0  :   MOV_R_I : limm   r8, 0x40a060 : IntAlu :  D=0x000000000040a060
 732000: system.cpu T0 : @_start+23    : mov    rdi, 0x409ff0
 732000: system.cpu T0 : @_start+23.0  :   MOV_R_I : limm   rcx, 0x409ff0 : IntAlu :  D=0x0000000000409ff0
 842000: system.cpu T0 : @_start+30    : mov    rdi, 0x400274
 842000: system.cpu T0 : @_start+30.0  :   MOV_R_I : limm   rdi, 0x400274 : IntAlu :  D=0x0000000000400274
 952000: system.cpu T0 : @_start+37    : call   0x9846
 952000: system.cpu T0 : @_start+37.0  :   CALL_NEAR_I : limm   t1, 0x9846 : IntAlu :  D=0x0000000000009846
 952000: system.cpu T0 : @_start+37.1  :   CALL_NEAR_I : rdip   t7, %ctrl153,  : IntAlu :  D=0x00000000004001ba
 952000: system.cpu T0 : @_start+37.2  :   CALL_NEAR_I : st   t7, SS:[rsp + 0xfffffffffffffff8] : MemWrite :  D=0x00000000004001ba A=0x7fffffffee18
 973000: system.cpu T0 : @_start+37.3  :   CALL_NEAR_I : subi   rsp, rsp, 0x8 : IntAlu :  D=0x00007fffffffee18
 973000: system.cpu T0 : @_start+37.4  :   CALL_NEAR_I : wrip   , t7, t1 : IntAlu :
1042000: system.cpu T0 : @__libc_start_main    : push   r15
1042000: system.cpu T0 : @__libc_start_main.0  :   PUSH_R : st   r15, SS:[rsp + 0xfffffffffffffff8] : MemWrite :  D=0x0000000000000000 A=0x7fffffffee10
1063000: system.cpu T0 : @__libc_start_main.1  :   PUSH_R : subi   rsp, rsp, 0x8 : IntAlu :  D=0x00007fffffffee10
1118000: system.cpu T0 : @__libc_start_main+2    : movsxd   rax, rsi
1118000: system.cpu T0 : @__libc_start_main+2.0  :   MOVSXD_R_R : sexti   rax, rsi, 0x1f : IntAlu :  D=0x0000000000000001
1173000: system.cpu T0 : @__libc_start_main+5    : mov  r15, r9
1173000: system.cpu T0 : @__libc_start_main+5.0  :   MOV_R_R : mov   r15, r15, r9 : IntAlu :  D=0x0000000000000000
1228000: system.cpu T0 : @__libc_start_main+8    : push r14
</code></pre>
<p>In fact, the <code>Exec</code> flag is actually an agglomeration of multiple debug flags. You can see this, and all of the available debug flags, by running gem5 with the <code>--debug-help</code> parameter.</p>
<pre><code class="language-bash">    build/X86/gem5.opt --debug-help
</code></pre>
<pre><code class="language-bash">Base Flags:
    Activity: None
    AddrRanges: None
    Annotate: State machine annotation debugging
    AnnotateQ: State machine annotation queue debugging
    AnnotateVerbose: Dump all state machine annotation details
    BaseXBar: None
    Branch: None
    Bridge: None
    CCRegs: None
    CMOS: Accesses to CMOS devices
    Cache: None
    CacheComp: None
    CachePort: None
    CacheRepl: None
    CacheTags: None
    CacheVerbose: None
    Checker: None
    Checkpoint: None
    ClockDomain: None
...
Compound Flags:
    All: Controls all debug flags. It should not be used within C++ code.
        All Base Flags
    AnnotateAll: All Annotation flags
        Annotate, AnnotateQ, AnnotateVerbose
    CacheAll: None
        Cache, CacheComp, CachePort, CacheRepl, CacheVerbose, HWPrefetch
    DiskImageAll: None
        DiskImageRead, DiskImageWrite
...
XBar: None
    BaseXBar, CoherentXBar, NoncoherentXBar, SnoopFilter
</code></pre>
<h2 id="添加新的调试标志"><a class="header" href="#添加新的调试标志">Adding a new debug flag</a></h2>
<p>In the <a href="https://www.gem5.org/documentation/learning_gem5/part2/helloobject">previous chapters</a>, we used a simple <code>std::cout</code> to print from our SimObject. While it is possible to use normal C/C++ I/O in gem5, it is highly discouraged. So, we are now going to replace this with gem5's debugging facilities.</p>
<p>When creating a new debug flag, we first have to declare it in a SConscript file. Add the following to the SConscript file in the directory with your hello object code (src/learning_gem5/).</p>
<pre><code class="language-python">DebugFlag('HelloExample')
</code></pre>
<p>这声明了“HelloExample”的调试标志。现在，我们可以在 SimObject 的调试语句中使用它。</p>
<p>在SConscript 文件中声明标志后，会自动生成一个调试头，允许我们使用调试标志。头文件位于<code>debug</code>目录中，与我们在 SConscript 文件中声明的名称（包括大小写）相同。因此，我们需要在使用该调试标志的c++文件中<code>include</code>自动生成的头文件。</p>
<p>在<code>hello_object.cc</code>文件中，我们需要包含头文件。</p>
<pre><code class="language-cpp">#include &quot;debug/HelloExample.hh&quot;
</code></pre>
<p>现在我们已经包含了必要的头文件，让我们用这样的调试语句替换<code>std::cout</code>调用。</p>
<pre><code class="language-cpp">DPRINTF(HelloExample, &quot;Created the hello object\n&quot;);
</code></pre>
<p><code>DPRINTF</code> is a C++ macro. The first parameter is a <em>debug flag</em> that has been declared in a SConscript file. We can use the <code>Hello</code> flag, since we declared it in the <code>src/learning_gem5/SConscript</code> file. The rest of the arguments are variable and can be anything you would pass to a <code>printf</code> statement.</p>
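<p>To picture how the flag gating works, here is a tiny Python stand-in (illustration only; gem5's real <code>DPRINTF</code> is a C++ macro that checks a compiled-in flag object, and the names below are invented for this sketch):</p>
<pre><code class="language-python">enabled_flags = set()  # would be filled from --debug-flags on the command line

def dprintf(flag, fmt, *args):
    '''Print (and return) the formatted line only when the flag is enabled.'''
    if flag not in enabled_flags:
        return None
    line = fmt % args
    print(line, end='')
    return line

dprintf('Hello', 'Created the hello object\n')  # flag disabled: no output
enabled_flags.add('Hello')
dprintf('Hello', 'Hello world! %d left\n', 9)   # flag enabled: line is printed
</code></pre>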
<p>Now, if you recompile gem5 and run it with the "Hello" debug flag, you get the following result.</p>
<pre><code class="language-bash">    build/X86/gem5.opt --debug-flags=Hello configs/learning_gem5/part2/run_hello.py
</code></pre>
<pre><code class="language-bash">gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 compiled Jan  4 2017 09:40:10
gem5 started Jan  4 2017 09:41:01
gem5 executing on chinook, pid 29078
command line: build/X86/gem5.opt --debug-flags=Hello configs/learning_gem5/part2/run_hello.py

Global frequency set at 1000000000000 ticks per second
      0: hello: Created the hello object
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
Exiting @ tick 18446744073709551615 because simulate() limit reached
</code></pre>
<p>You can find the updated SConscript file <a href="https://gem5.googlesource.com/public/gem5/+/refs/heads/stable/src/learning_gem5/part2/SConscript">here</a> and the updated hello object code <a href="https://gem5.googlesource.com/public/gem5/+/refs/heads/stable/src/learning_gem5/part2/hello_object.cc">here</a>.</p>
<h2 id="调试输出"><a class="header" href="#调试输出">Debug output</a></h2>
<p>For each dynamic <code>DPRINTF</code> execution, three things are printed to <code>stdout</code>: first, the current tick at which the <code>DPRINTF</code> executes; second, the <em>name of the SimObject</em> that called <code>DPRINTF</code> (this name is the return value of the SimObject's <code>name()</code> function, and is usually the Python variable name from the Python config file); and finally, the format string you passed to the <code>DPRINTF</code> function.</p>
<p>You can control where the debug output goes with the <code>--debug-file</code> parameter. By default, all of the debug output is printed to <code>stdout</code>. However, you can redirect the output to any file. The file is stored relative to the main gem5 output directory (m5out), not the current working directory.</p>
<h2 id="使用-dprintf-以外的功能"><a class="header" href="#使用-dprintf-以外的功能">Functions other than DPRINTF</a></h2>
<p><code>DPRINTF</code> is the most commonly used debugging function in gem5. However, gem5 provides a number of other functions that are useful in specific circumstances.</p>
<p>These functions are like the functions <code>DDUMP</code>, <code>DPRINTF</code>, and <code>DPRINTFR</code>, except that they do not take a flag as a parameter. Therefore, these statements will <em>always</em> print whenever debugging is enabled.</p>
<p>All of these functions are only enabled when gem5 is compiled in "opt" or "debug" mode. All other modes use empty placeholder macros for the functions above. Therefore, if you want to use debug flags, you must use either "gem5.opt" or "gem5.debug".</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="事件驱动编程"><a class="header" href="#事件驱动编程">Event-driven programming</a></h1>
<p>gem5 is an event-driven simulator. In this chapter, we will explore how to create and schedule events. We will build on the simple <code>HelloObject</code> from <a href="https://www.gem5.org/documentation/learning_gem5/part2/helloobject">hello-simobject-chapter</a>.</p>
<h2 id="创建一个简单的事件回调"><a class="header" href="#创建一个简单的事件回调">Creating a simple event callback</a></h2>
<p>In gem5's event-driven model, each event has a callback function in which the event is <em>processed</em>. Generally, this is a class that inherits from <code>Event</code>. However, gem5 provides a wrapper function for creating simple events.</p>
<p>In the header file for <code>HelloObject</code>, we simply need to declare a new function (<code>processEvent()</code>) to execute every time the event fires. This function must take no parameters and return nothing.</p>
<p>Next, we add an <code>Event</code> instance. In this case, we will use an <code>EventFunctionWrapper</code>, which allows us to execute any function.</p>
<p>We also add a <code>startup()</code> function, as shown below.</p>
<pre><code class="language-cpp">class HelloObject : public SimObject
{
  private:
    void processEvent();

    EventFunctionWrapper event;

  public:
    HelloObject(HelloObjectParams *p);

    void startup();
};
</code></pre>
<p>Next, we must construct this <code>event</code> in the constructor of <code>HelloObject</code>. The <code>EventFunctionWrapper</code> takes two parameters: a function to execute and a name. The name is usually the name of the SimObject that owns the event; ".wrapped_function_event" is automatically appended to the name when it is printed.</p>
<p>The first parameter is simply a function that takes no parameters and has no return value (<code>std::function&lt;void(void)&gt;</code>). Usually, this is a simple lambda function that calls a member function. However, it can be any function you want. Below, we capture <code>this</code> in the lambda (<code>[this]</code>) so we can call member functions of this instance of the class.</p>
<pre><code class="language-cpp">HelloObject::HelloObject(HelloObjectParams *params) :
    SimObject(params), event([this]{processEvent();}, name())
{
    DPRINTF(Hello, &quot;Created the hello object\n&quot;);
}
</code></pre>
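<p>The C++ lambda above simply packages a member-function call into a callable that takes no arguments. The same idea can be sketched in plain Python (a gem5-independent illustration; the toy class below only mimics the wrapper's behavior and is not gem5's API):</p>
<pre><code class="language-python">class ToyEventWrapper:
    '''Toy stand-in for EventFunctionWrapper: holds a zero-argument
    callback plus a name used when the event is printed.'''
    def __init__(self, callback, name):
        self.callback = callback
        self.name = name + '.wrapped_function_event'

    def process(self):
        self.callback()

class Hello:
    def __init__(self):
        self.fired = 0
        # Like `[this]{ processEvent(); }`: the closure captures `self`
        # so the wrapper can call back into this instance.
        self.event = ToyEventWrapper(lambda: self.process_event(), 'hello')

    def process_event(self):
        self.fired += 1

hello = Hello()
hello.event.process()
print(hello.event.name, hello.fired)  # hello.wrapped_function_event 1
</code></pre>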
<p>We also must define the implementation of the process function. In this case, we will simply print something if we are debugging.</p>
<pre><code class="language-cpp">void
HelloObject::processEvent()
{
    DPRINTF(Hello, &quot;Hello world! Processing the event!\n&quot;);
}
</code></pre>
<h2 id="安排事件"><a class="header" href="#安排事件">Scheduling events</a></h2>
<p>Finally, for the event to be processed, we first have to <em>schedule</em> it. For this we use the <code>schedule</code> function, which schedules some instance of an <code>Event</code> to fire at some point in the future (event-driven simulation does not allow events to execute in the past).</p>
<p>We will initially schedule the <code>event</code> we added to the <code>HelloObject</code> class in the <code>startup()</code> function. The <code>startup()</code> function is where SimObjects are allowed to schedule internal events. It does not get executed until the simulation begins for the first time (i.e., the <code>simulate()</code> function is called from a Python config file).</p>
<pre><code class="language-cpp">void
HelloObject::startup()
{
    schedule(event, 100);
}
</code></pre>
<p>Here we simply schedule the event to execute at tick 100. Normally, you would use some offset from <code>curTick()</code>, but since we know the startup() function is called when the time is currently 0, we can use an explicit tick value.</p>
<p>The output when you run gem5 with the "Hello" debug flag is now</p>
<pre><code class="language-bash">gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 compiled Jan  4 2017 11:01:46
gem5 started Jan  4 2017 13:41:38
gem5 executing on chinook, pid 1834
command line: build/X86/gem5.opt --debug-flags=Hello configs/learning_gem5/part2/run_hello.py

Global frequency set at 1000000000000 ticks per second
      0: hello: Created the hello object
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
    100: hello: Hello world! Processing the event!
Exiting @ tick 18446744073709551615 because simulate() limit reached
</code></pre>
<h2 id="更多事件安排"><a class="header" href="#更多事件安排">More event scheduling</a></h2>
<p>We can also schedule new events within an event-processing action. For instance, we will add a latency parameter to the <code>HelloObject</code>, along with a parameter for how many times to fire the event. In the <a href="https://www.gem5.org/documentation/learning_gem5/part2/events/parameters-chapter">next chapter</a> we will make these parameters accessible from the Python config files.</p>
<p>To the HelloObject class declaration, add member variables for the latency and the number of times to fire.</p>
<pre><code class="language-cpp">class HelloObject : public SimObject
{
  private:
    void processEvent();

    EventFunctionWrapper event;

    const Tick latency;

    int timesLeft;

  public:
    HelloObject(HelloObjectParams *p);

    void startup();
};
</code></pre>
<p>Then, in the constructor, add default values for <code>latency</code> and <code>timesLeft</code>.</p>
<pre><code class="language-cpp">HelloObject::HelloObject(HelloObjectParams *params) :
    SimObject(params), event([this]{processEvent();}, name()),
    latency(100), timesLeft(10)
{
    DPRINTF(Hello, &quot;Created the hello object\n&quot;);
}
</code></pre>
<p>Finally, update <code>startup()</code> and <code>processEvent()</code>.</p>
<pre><code class="language-cpp">void
HelloObject::startup()
{
    schedule(event, latency);
}

void
HelloObject::processEvent()
{
    timesLeft--;
    DPRINTF(Hello, &quot;Hello world! Processing the event! %d left\n&quot;, timesLeft);

    if (timesLeft &lt;= 0) {
        DPRINTF(Hello, &quot;Done firing!\n&quot;);
    } else {
        schedule(event, curTick() + latency);
    }
}
</code></pre>
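<p>The schedule-then-reschedule pattern above can be mimicked with a tiny event queue in ordinary Python. This is a sketch for intuition only; the <code>EventQueue</code> class below is invented for the example and is not gem5's API:</p>
<pre><code class="language-python">import heapq

class EventQueue:
    '''Minimal event-driven simulator: callbacks fire in tick order.'''
    def __init__(self):
        self.cur_tick = 0
        self.events = []  # min-heap of (tick, seq, callback)
        self.seq = 0

    def schedule(self, callback, tick):
        # Like gem5, refuse to schedule events in the past.
        assert tick &gt;= self.cur_tick
        heapq.heappush(self.events, (tick, self.seq, callback))
        self.seq += 1

    def simulate(self):
        while self.events:
            self.cur_tick, _, callback = heapq.heappop(self.events)
            callback()

queue = EventQueue()
latency, times_left = 100, 10
fired = []

def process_event():
    global times_left
    times_left -= 1
    fired.append(queue.cur_tick)
    if times_left &gt; 0:
        # Reschedule relative to the current tick, as processEvent() does.
        queue.schedule(process_event, queue.cur_tick + latency)

queue.schedule(process_event, 100)  # as in startup()
queue.simulate()
print(fired)  # fires at ticks 100, 200, ..., 1000
</code></pre>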
<p>Now, when we run gem5, the event should fire 10 times, and the simulation will end after 1000 ticks. The output should now look like the following.</p>
<pre><code class="language-bash">gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 compiled Jan  4 2017 13:53:35
gem5 started Jan  4 2017 13:54:11
gem5 executing on chinook, pid 2326
command line: build/X86/gem5.opt --debug-flags=Hello configs/learning_gem5/part2/run_hello.py

Global frequency set at 1000000000000 ticks per second
      0: hello: Created the hello object
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
    100: hello: Hello world! Processing the event! 9 left
    200: hello: Hello world! Processing the event! 8 left
    300: hello: Hello world! Processing the event! 7 left
    400: hello: Hello world! Processing the event! 6 left
    500: hello: Hello world! Processing the event! 5 left
    600: hello: Hello world! Processing the event! 4 left
    700: hello: Hello world! Processing the event! 3 left
    800: hello: Hello world! Processing the event! 2 left
    900: hello: Hello world! Processing the event! 1 left
   1000: hello: Hello world! Processing the event! 0 left
   1000: hello: Done firing!
Exiting @ tick 18446744073709551615 because simulate() limit reached
</code></pre>
<p>You can find the updated <a href="https://www.gem5.org/_pages/static/scripts/part2/events/hello_object.hh">header file</a> and <a href="https://www.gem5.org/_pages/static/scripts/part2/events/hello_object.cc">implementation file</a>.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="向-simobjects-和更多事件添加参数"><a class="header" href="#向-simobjects-和更多事件添加参数">Adding parameters to SimObjects and more events</a></h1>
<p>One of the most powerful parts of gem5's Python interface is the ability to pass parameters from Python to the C++ objects in gem5. In this chapter, we will explore some of the kinds of SimObject parameters and how to use them, continuing to build on the <code>HelloObject</code> from the <a href="http://www.gem5.org/documentation/learning_gem5/part2/helloobject/">previous chapters</a>.</p>
<h2 id="简单参数"><a class="header" href="#简单参数">Simple parameters</a></h2>
<p>First, we will add parameters to the <code>HelloObject</code> for the latency and the number of times to fire the event. To add a parameter, modify the <code>HelloObject</code> class in the SimObject Python file (<code>src/learning_gem5/part2/HelloObject.py</code>). Parameters are set by adding new statements to the Python class that include a <code>Param</code> type.</p>
<p>For instance, the following code has a parameter <code>time_to_wait</code>, which is a "Latency" parameter, and <code>number_of_fires</code>, which is an integer parameter.</p>
<pre><code class="language-python">class HelloObject(SimObject):
    type = 'HelloObject'
    cxx_header = &quot;learning_gem5/part2/hello_object.hh&quot;

    time_to_wait = Param.Latency(&quot;Time before firing the event&quot;)
    number_of_fires = Param.Int(1, &quot;Number of times to fire the event before &quot;
                                   &quot;goodbye&quot;)
</code></pre>
<p><code>Param.&lt;TypeName&gt;</code> declares a parameter of type <code>TypeName</code>. Common types are <code>Int</code> for integers, <code>Float</code> for floats, etc. These types act like regular Python classes.</p>
<p>Each parameter declaration takes one or two parameters. When given two parameters (like <code>number_of_fires</code>), the first parameter is the <em>default value</em> for the parameter. In this case, if you instantiate a <code>HelloObject</code> in your Python config file without specifying any value for <code>number_of_fires</code>, it will take the default value of 1.</p>
<p>The second parameter to the parameter declaration is a <em>short description</em> of the parameter. This must be a Python string. If you specify only a single parameter, it is the description (as for <code>time_to_wait</code>).</p>
<p>gem5 also supports many complex parameter types that are not just built-in types. For instance, <code>time_to_wait</code> is a <code>Latency</code>. <code>Latency</code> takes a value as a time string and converts it into simulator <strong>ticks</strong>. For instance, with the default tick rate of one tick per picosecond (10<sup>12</sup> ticks per second, or 1 THz), <code>&quot;1ns&quot;</code> is automatically converted to 1000 ticks. There are other convenience parameters as well, such as <code>Percent</code>, <code>Cycles</code>, <code>MemorySize</code>, and so on.</p>
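<p>The conversion is easy to check by hand. The sketch below is a simplified stand-in for what a <code>Latency</code> parameter does at the default 1 THz tick rate (it is not gem5's actual parser, and it only handles a few units):</p>
<pre><code class="language-python">TICKS_PER_SECOND = 10**12  # default rate: one tick per picosecond

UNITS = {'s': 1.0, 'ms': 1e-3, 'us': 1e-6, 'ns': 1e-9, 'ps': 1e-12}

def latency_to_ticks(value):
    '''Convert a time string such as '1ns' into simulator ticks.'''
    # Try the longest suffixes first so 'ns' is not mistaken for 's'.
    for suffix in sorted(UNITS, key=len, reverse=True):
        if value.endswith(suffix):
            seconds = float(value[:-len(suffix)]) * UNITS[suffix]
            return int(round(seconds * TICKS_PER_SECOND))
    raise ValueError('unknown time unit: ' + value)

print(latency_to_ticks('1ns'))  # 1000
print(latency_to_ticks('2us'))  # 2000000
</code></pre>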
<p>Once you have declared these parameters in the SimObject file, you need to copy their values to your C++ class in its constructor. The following code shows the changes to the <code>HelloObject</code> constructor.</p>
<pre><code class="language-cpp">HelloObject::HelloObject(HelloObjectParams *params) :
    SimObject(params),
    event([this]{ processEvent(); }, name()),
    myName(params-&gt;name),
    latency(params-&gt;time_to_wait),
    timesLeft(params-&gt;number_of_fires)
{
    DPRINTF(Hello, &quot;Created the hello object with the name %s\n&quot;, myName);
}
</code></pre>
<p>Here, we use the parameter values as the defaults for the latency and the number of times to fire. Additionally, we store the <code>name</code> from the parameter object to use later in the member variable <code>myName</code>. Each <code>params</code> instantiation has a name, which comes from the Python variable name in the Python config file when it is instantiated.</p>
<p>However, assigning the name here is just an example of using the params object. For all SimObjects, there is a <code>name()</code> function that always returns the name, so there is never any need to store the name like this.</p>
<p>To the HelloObject class declaration, add a member variable for the name.</p>
<pre><code class="language-cpp">class HelloObject : public SimObject
{
  private:
    void processEvent();

    EventFunctionWrapper event;

    const std::string myName; // member variable added for the name

    const Tick latency;

    int timesLeft;

  public:
    HelloObject(HelloObjectParams *p);

    void startup();
};
</code></pre>
<p>When we run gem5 with the code above, we get the following error:</p>
<pre><code class="language-bash">gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 compiled Jan  4 2017 14:46:36
gem5 started Jan  4 2017 14:46:52
gem5 executing on chinook, pid 3422
command line: build/X86/gem5.opt --debug-flags=Hello configs/learning_gem5/part2/run_hello.py

Global frequency set at 1000000000000 ticks per second
fatal: hello.time_to_wait without default or user set value
</code></pre>
<p>This is because the <code>time_to_wait</code> parameter does not have a default value. Therefore, we need to update the Python config file (<code>run_hello.py</code>) to specify this value.</p>
<pre><code class="language-python">root.hello = HelloObject(time_to_wait = '2us')
</code></pre>
<p>Or, we can specify <code>time_to_wait</code> as a member variable. The two options are equivalent, because the C++ objects are not created until <code>m5.instantiate()</code> is called.</p>
<pre><code class="language-python">root.hello = HelloObject()
root.hello.time_to_wait = '2us'
</code></pre>
<p>The output of this simple script when run with the <code>Hello</code> debug flag is as follows.</p>
<pre><code class="language-bash">gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 compiled Jan  4 2017 14:46:36
gem5 started Jan  4 2017 14:50:08
gem5 executing on chinook, pid 3455
command line: build/X86/gem5.opt --debug-flags=Hello configs/learning_gem5/part2/run_hello.py

Global frequency set at 1000000000000 ticks per second
      0: hello: Created the hello object with the name hello
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
2000000: hello: Hello world! Processing the event! 0 left
2000000: hello: Done firing!
Exiting @ tick 18446744073709551615 because simulate() limit reached
</code></pre>
<p>You can also modify the config script to fire the event multiple times.</p>
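<p>For instance, keeping the rest of <code>run_hello.py</code> unchanged, one sketch of such a change (the values here are arbitrary) is:</p>
<pre><code class="language-python"># Fire the event every 2us of simulated time, ten times in total.
root.hello = HelloObject(time_to_wait = '2us', number_of_fires = 10)
</code></pre>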
<h2 id="其他-simobjects-作为参数"><a class="header" href="#其他-simobjects-作为参数">Other SimObjects as parameters</a></h2>
<p>You can also specify other SimObjects as parameters. To demonstrate this, we are going to create a new SimObject, <code>GoodbyeObject</code>. This object will have a simple function that says "Goodbye" to another SimObject. To make it a little more interesting, the <code>GoodbyeObject</code> will have a buffer to write the message in, and a limited bandwidth with which to write the message.</p>
<p>First, declare the SimObject in the SConscript file:</p>
<pre><code class="language-python">Import('*')

SimObject('HelloObject.py')
Source('hello_object.cc')
Source('goodbye_object.cc')

DebugFlag('Hello')
</code></pre>
<p>The new SConscript file can be downloaded <a href="https://www.gem5.org/_pages/static/scripts/part2/parameters/SConscript">here</a>.</p>
<p>Next, you need to declare the new SimObject in a SimObject Python file. Since the <code>GoodbyeObject</code> is highly related to the <code>HelloObject</code>, we will use the same file. You can add the following code to <code>HelloObject.py</code>.</p>
<pre><code class="language-python">class GoodbyeObject(SimObject):
    type = 'GoodbyeObject'
    cxx_header = &quot;learning_gem5/part2/goodbye_object.hh&quot;

    buffer_size = Param.MemorySize('1kB',
                                   &quot;Size of buffer to fill with goodbye&quot;)
    write_bandwidth = Param.MemoryBandwidth('100MB/s', &quot;Bandwidth to fill &quot;
                                            &quot;the buffer&quot;)
</code></pre>
<p>This object has two parameters, both with default values. The first parameter is the size of the buffer, a <code>MemorySize</code> parameter. The second, <code>write_bandwidth</code>, specifies the speed at which to fill the buffer. Once the buffer is full, the simulation will exit.</p>
<p>The updated <code>HelloObject.py</code> file can be downloaded <a href="https://www.gem5.org/_pages/static/scripts/part2/parameters/HelloObject.py">here</a>.</p>
<p>Now, we need to implement the <code>GoodbyeObject</code>.</p>
<pre><code class="language-cpp">#ifndef __LEARNING_GEM5_GOODBYE_OBJECT_HH__
#define __LEARNING_GEM5_GOODBYE_OBJECT_HH__

#include &lt;string&gt;

#include &quot;params/GoodbyeObject.hh&quot;
#include &quot;sim/sim_object.hh&quot;

class GoodbyeObject : public SimObject
{
  private:
    void processEvent();

    /**
     * Fills the buffer for one iteration. If the buffer isn't full, this
     * function will enqueue another event to continue filling.
     */
    void fillBuffer();

    EventWrapper&lt;GoodbyeObject, &amp;GoodbyeObject::processEvent&gt; event;

    /// The bytes processed per tick
    float bandwidth;

    /// The size of the buffer we are going to fill
    int bufferSize;

    /// The buffer we are putting our message in
    char *buffer;

    /// The message to put into the buffer.
    std::string message;

    /// The amount of the buffer we've used so far.
    int bufferUsed;

  public:
    GoodbyeObject(GoodbyeObjectParams *p);
    ~GoodbyeObject();

    /**
     * Called by an outside object. Starts off the events to fill the buffer
     * with a goodbye message.
     *
     * @param name the name of the object we are saying goodbye to.
     */
    void sayGoodbye(std::string name);
};

#endif // __LEARNING_GEM5_GOODBYE_OBJECT_HH__
</code></pre>
<pre><code class="language-cpp">#include &quot;learning_gem5/part2/goodbye_object.hh&quot;

#include &quot;debug/Hello.hh&quot;
#include &quot;sim/sim_exit.hh&quot;

GoodbyeObject::GoodbyeObject(GoodbyeObjectParams *params) :
    SimObject(params), event(*this), bandwidth(params-&gt;write_bandwidth),
    bufferSize(params-&gt;buffer_size), buffer(nullptr), bufferUsed(0)
{
    buffer = new char[bufferSize];
    DPRINTF(Hello, &quot;Created the goodbye object\n&quot;);
}

GoodbyeObject::~GoodbyeObject()
{
    delete[] buffer;
}

void
GoodbyeObject::processEvent()
{
    DPRINTF(Hello, &quot;Processing the event!\n&quot;);
    fillBuffer();
}

void
GoodbyeObject::sayGoodbye(std::string other_name)
{
    DPRINTF(Hello, &quot;Saying goodbye to %s\n&quot;, other_name);

    message = &quot;Goodbye &quot; + other_name + &quot;!! &quot;;

    fillBuffer();
}

void
GoodbyeObject::fillBuffer()
{
    // There better be a message
    assert(message.length() &gt; 0);

    // Copy from the message to the buffer per byte.
    int bytes_copied = 0;
    for (auto it = message.begin();
         it &lt; message.end() &amp;&amp; bufferUsed &lt; bufferSize - 1;
         it++, bufferUsed++, bytes_copied++) {
        // Copy the character into the buffer
        buffer[bufferUsed] = *it;
    }

    if (bufferUsed &lt; bufferSize - 1) {
        // Wait for the next copy for as long as it would have taken
        DPRINTF(Hello, &quot;Scheduling another fillBuffer in %d ticks\n&quot;,
                bandwidth * bytes_copied);
        schedule(event, curTick() + bandwidth * bytes_copied);
    } else {
        DPRINTF(Hello, &quot;Goodbye done copying!\n&quot;);
        // Be sure to take into account the time for the last bytes
        exitSimLoop(buffer, 0, curTick() + bandwidth * bytes_copied);
    }
}

GoodbyeObject*
GoodbyeObjectParams::create()
{
    return new GoodbyeObject(this);
}
</code></pre>
<p>The header file can be downloaded <a href="https://www.gem5.org/_pages/static/scripts/part2/parameters/goodbye_object.hh">here</a>, and the implementation can be downloaded <a href="https://www.gem5.org/_pages/static/scripts/part2/parameters/goodbye_object.cc">here</a>.</p>
<p>The interface to the <code>GoodbyeObject</code> is a single simple function, <code>sayGoodbye</code>, which takes a string as a parameter. When this function is called, the simulator builds the message and saves it in a member variable. Then, we begin filling the buffer.</p>
<p>To model the limited bandwidth, each time we write the message to the buffer, we pause for the latency it takes to write the message. We use a simple event to model this pause.</p>
<p>Since we used a <code>MemoryBandwidth</code> parameter in the SimObject declaration, the <code>bandwidth</code> variable is automatically converted into ticks per byte written, so calculating the latency is simply the bandwidth times the number of bytes we want to write to the buffer.</p>
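<p>This arithmetic can be sanity-checked outside gem5. Assuming the default 1 THz tick rate, that the MB in <code>'100MB/s'</code> is interpreted as 2<sup>20</sup> bytes, and that the ticks-per-byte value is rounded to a whole tick (all assumptions of this sketch), one 16-byte copy of "Goodbye hello!! " costs about 152592 ticks, which matches the gaps in the example output at the end of this chapter:</p>
<pre><code class="language-python">TICKS_PER_SECOND = 10**12       # one tick per picosecond
bytes_per_second = 100 * 2**20  # assumed meaning of '100MB/s'

# gem5 stores MemoryBandwidth as ticks per byte written (assumed rounding).
ticks_per_byte = round(TICKS_PER_SECOND / bytes_per_second)
message = 'Goodbye hello!! '    # 16 bytes copied per fillBuffer() pass

delay = ticks_per_byte * len(message)
print(ticks_per_byte, delay)  # 9537 152592
</code></pre>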
<p>Finally, when the buffer is full, we call the function <code>exitSimLoop</code>, which will exit the simulation. This function takes three parameters: the first is the message to return to the Python config script (<code>exit_event.getCause()</code>), the second is the exit code, and the third is when to exit.</p>
<h3 id="将-goodbyeobject-作为参数添加到-helloobject"><a class="header" href="#将-goodbyeobject-作为参数添加到-helloobject">Adding the GoodbyeObject as a parameter to the HelloObject</a></h3>
<p>First, we will add the <code>GoodbyeObject</code> as a parameter to the <code>HelloObject</code>. To do this, you simply specify the SimObject class name as the <code>TypeName</code> of the <code>Param</code>. You can have a default value, or not, just like a normal parameter.</p>
<pre><code class="language-python">class HelloObject(SimObject):
    type = 'HelloObject'
    cxx_header = &quot;learning_gem5/part2/hello_object.hh&quot;

    time_to_wait = Param.Latency(&quot;Time before firing the event&quot;)
    number_of_fires = Param.Int(1, &quot;Number of times to fire the event before &quot;
                                   &quot;goodbye&quot;)

    goodbye_object = Param.GoodbyeObject(&quot;A goodbye object&quot;) # the new parameter
</code></pre>
<p>The updated <code>HelloObject.py</code> file can be downloaded <a href="https://www.gem5.org/_pages/static/scripts/part2/parameters/HelloObject.py">here</a>.</p>
<p>Second, we will add a reference to a <code>GoodbyeObject</code> to the <code>HelloObject</code> class. Don't forget to include <code>goodbye_object.hh</code> at the top of the <code>hello_object.hh</code> file!</p>
<pre><code class="language-cpp">#include &lt;string&gt;

#include &quot;learning_gem5/part2/goodbye_object.hh&quot;
#include &quot;params/HelloObject.hh&quot;
#include &quot;sim/sim_object.hh&quot;

class HelloObject : public SimObject
{
  private:
    void processEvent();

    EventFunctionWrapper event;

    /// Pointer to the corresponding GoodbyeObject. Set via Python
    GoodbyeObject* goodbye;

    /// The name of this object in the Python config file
    const std::string myName;

    /// Latency between calling the event (in ticks)
    const Tick latency;

    /// Number of times left to fire the event before goodbye
    int timesLeft;

  public:
    HelloObject(HelloObjectParams *p);

    void startup();
};
</code></pre>
<p>Then, we need to update the constructor and the event-processing function of the <code>HelloObject</code>. We also add a check in the constructor to make sure the <code>goodbye</code> pointer is valid: we should <em>panic</em> when the <code>goodbye</code> pointer is null, since that is not a case this object has been coded to accept.</p>
<pre><code class="language-cpp">#include &quot;learning_gem5/part2/hello_object.hh&quot;

#include &quot;base/misc.hh&quot;
#include &quot;debug/Hello.hh&quot;

HelloObject::HelloObject(HelloObjectParams *params) :
    SimObject(params),
    event([this]{ processEvent(); }, name()),
    goodbye(params-&gt;goodbye_object),
    myName(params-&gt;name),
    latency(params-&gt;time_to_wait),
    timesLeft(params-&gt;number_of_fires)
{
    DPRINTF(Hello, &quot;Created the hello object with the name %s\n&quot;, myName);
    panic_if(!goodbye, &quot;HelloObject must have a non-null GoodbyeObject&quot;);
}
</code></pre>
<p>Once we have processed the event the number of times specified by the parameter, we should call the <code>sayGoodbye</code> function of the <code>GoodbyeObject</code>.</p>
<pre><code class="language-cpp">void
HelloObject::processEvent()
{
    timesLeft--;
    DPRINTF(Hello, &quot;Hello world! Processing the event! %d left\n&quot;, timesLeft);

    if (timesLeft &lt;= 0) {
        DPRINTF(Hello, &quot;Done firing!\n&quot;);
        goodbye-&gt;sayGoodbye(myName);
    } else {
        schedule(event, curTick() + latency);
    }
}
</code></pre>
<p>You can find the updated header file <a href="https://www.gem5.org/_pages/static/scripts/part2/parameters/hello_object.hh">here</a> and the implementation file <a href="https://www.gem5.org/_pages/static/scripts/part2/parameters/hello_object.cc">here</a>.</p>
<h3 id="更新配置脚本"><a class="header" href="#更新配置脚本">Updating the config script</a></h3>
<p>Finally, we need to add the <code>GoodbyeObject</code> to the config script. Create a new config script, <code>hello_goodbye.py</code>, and instantiate both the hello and the goodbye objects. For instance, one possible script is as follows:</p>
<pre><code class="language-python">import m5
from m5.objects import *

root = Root(full_system = False)

root.hello = HelloObject(time_to_wait = '2us', number_of_fires = 5)
root.hello.goodbye_object = GoodbyeObject(buffer_size='100B')

m5.instantiate()

print(&quot;Beginning simulation!&quot;)
exit_event = m5.simulate()
print('Exiting @ tick %i because %s' % (m5.curTick(), exit_event.getCause()))
</code></pre>
<p>You can download this script <a href="https://www.gem5.org/_pages/static/scripts/part2/parameters/hello_goodbye.py">here</a>.</p>
<p>Running this script generates the following output.</p>
<pre><code class="language-bash">gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 compiled Jan  4 2017 15:17:14
gem5 started Jan  4 2017 15:18:41
gem5 executing on chinook, pid 3838
command line: build/X86/gem5.opt --debug-flags=Hello configs/learning_gem5/part2/hello_goodbye.py

Global frequency set at 1000000000000 ticks per second
      0: hello.goodbye_object: Created the goodbye object
      0: hello: Created the hello object
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
2000000: hello: Hello world! Processing the event! 4 left
4000000: hello: Hello world! Processing the event! 3 left
6000000: hello: Hello world! Processing the event! 2 left
8000000: hello: Hello world! Processing the event! 1 left
10000000: hello: Hello world! Processing the event! 0 left
10000000: hello: Done firing!
10000000: hello.goodbye_object: Saying goodbye to hello
10000000: hello.goodbye_object: Scheduling another fillBuffer in 152592 ticks
10152592: hello.goodbye_object: Processing the event!
10152592: hello.goodbye_object: Scheduling another fillBuffer in 152592 ticks
10305184: hello.goodbye_object: Processing the event!
10305184: hello.goodbye_object: Scheduling another fillBuffer in 152592 ticks
10457776: hello.goodbye_object: Processing the event!
10457776: hello.goodbye_object: Scheduling another fillBuffer in 152592 ticks
10610368: hello.goodbye_object: Processing the event!
10610368: hello.goodbye_object: Scheduling another fillBuffer in 152592 ticks
10762960: hello.goodbye_object: Processing the event!
10762960: hello.goodbye_object: Scheduling another fillBuffer in 152592 ticks
10915552: hello.goodbye_object: Processing the event!
10915552: hello.goodbye_object: Goodbye done copying!
Exiting @ tick 10944163 because Goodbye hello!! Goodbye hello!! Goodbye hello!! Goodbye hello!! Goodbye hello!! Goodbye hello!! Goo
</code></pre>
<p>You can change the parameters of these two <strong>SimObjects</strong> and see how the total execution time (Exiting <strong>@tick 10944163</strong>) changes. To run these tests, you may want to remove the debug flags so there is less output to the terminal.</p>
<p>In the next chapters, we will create a more complex and more useful SimObject, culminating in a simple blocking uniprocessor cache implementation.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="在内存系统中创建-simobjects"><a class="header" href="#在内存系统中创建-simobjects">Creating SimObjects in the memory system</a></h1>
<p>In this chapter, we will create a simple memory object that sits between the CPU and the memory bus. In the <a href="https://www.gem5.org/documentation/learning_gem5/part2/simplecache">next chapter</a>, we will take this simple memory object and add some logic to it to make it a very simple blocking uniprocessor cache.</p>
<h2 id="gem5-主从端口"><a class="header" href="#gem5-主从端口">gem5 master and slave ports</a></h2>
<p>Before we dive into the implementation of a memory object, we should first understand gem5's master and slave port interface. As previously discussed in the <a href="https://www.gem5.org/documentation/learning_gem5/part1/simple_config">simple-config-chapter</a>, all memory objects are connected together via ports. These ports provide a rigid interface between memory objects.</p>
<p>These ports implement three different memory system <em>modes</em>: timing, atomic, and functional. The most important mode is <em>timing mode</em>. Timing mode is the only mode that produces correct simulation results. The other modes are only used in special circumstances.</p>
<p><em>Atomic mode</em> can be used to warm up the simulator and fast-forward the simulation to a region of interest. This mode assumes that no events will be generated in the memory system. Instead, every memory request is executed through a single long call chain. You generally do not need to implement atomic accesses for a memory object unless it will be used during fast-forwarding or simulator warmup.</p>
<p><em>Functional mode</em> is better described as <em>debug mode</em>. Functional mode is used to read data from the host into the simulator's memory. It is used heavily in syscall emulation mode. For example, functional mode is used to load the binary specified by <code>process.cmd</code> from the host into the simulated system's memory so the simulated system can access it. Functional accesses should return the most up-to-date data on a read, no matter where the data is, and should update all possible valid copies of the data on a write (e.g., in a system with caches, there may be multiple valid cache blocks with the same address).</p>
<h3 id="数据包packets"><a class="header" href="#数据包packets">Packets</a></h3>
<p>In gem5, <code>Packets</code> are sent across ports. A <code>Packet</code> is made up of a memory request object, the <code>MemReq</code>. The <code>MemReq</code> holds information about the original request that initiated the packet, such as the requestor, the address, and the type of request (read, write, etc.).</p>
<p>Packets also have a <code>MemCmd</code>, which is the <em>current</em> command of the packet. This command can change throughout the life of the packet (e.g., a request turns into a response once the memory command is satisfied). The most common <code>MemCmd</code>s are <code>ReadReq</code> (read request), <code>ReadResp</code> (read response), <code>WriteReq</code> (write request), and <code>WriteResp</code> (write response). There are also writeback requests for caches (<code>WritebackDirty</code>, <code>WritebackClean</code>) and many other command types.</p>
<p>Packets also either hold the data for the request, or a pointer to the data. There are options for whether the data is dynamic (explicitly allocated and deallocated) or static (allocated and deallocated by the packet object) when the packet is created.</p>
<p>Finally, packets are used as the unit for tracking coherence in the classic caches. Therefore, much of the packet code is specific to the classic cache coherence protocol. However, packets are used for all communication between memory objects in gem5, even if they are not directly involved in coherence (e.g., DRAM controllers and CPU models).</p>
<p>All of the port interface functions accept a <code>Packet</code> pointer as a parameter. Since this pointer is so common, gem5 includes a typedef for it: <code>PacketPtr</code>.</p>
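<p>As an illustration of this request-to-response transformation (a toy sketch in plain Python, not gem5 code; the class and names here are stand-ins), the command mapping can be modeled as follows.</p>
<pre><code class="language-python"># Toy sketch (not gem5) of a packet's command changing over its life:
# each request command maps to a matching response command once the
# request is satisfied, mirroring what Packet::makeResponse() does.
REQ_TO_RESP = {
    'ReadReq': 'ReadResp',
    'WriteReq': 'WriteResp',
}

class ToyPacket:
    def __init__(self, cmd, addr):
        self.cmd = cmd    # the *current* command; mutates over time
        self.addr = addr

    def make_response(self):
        # Turn the request into a response in place, as the memory
        # side does before sending the packet back to the requestor.
        self.cmd = REQ_TO_RESP[self.cmd]

pkt = ToyPacket('ReadReq', 0x190)
pkt.make_response()
print(pkt.cmd)  # ReadResp
</code></pre>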
<h3 id="端口接口"><a class="header" href="#端口接口">The port interface</a></h3>
<p>There are two types of ports in gem5: master ports and slave ports. Whenever you implement a memory object, you will implement at least one of these types of ports. To do this, you create a new class that inherits from either <code>MasterPort</code> or <code>SlavePort</code>, respectively. Master ports send requests (and receive responses), and slave ports receive requests (and send responses).</p>
<p>The figure below gives an overview of the simplest interaction between a master and a slave port. The figure shows the interaction in timing mode. The other modes are much simpler and use a simple call chain between the master and the slave.</p>
<p><img src="part2/part2_5_memoryobject.assets/master_slave_1.png" alt="Simple master-slave interaction when both can accept the request and the response." /></p>
<p>As described above, all of the port interfaces require a <code>PacketPtr</code> as a parameter. Each of these functions (<code>sendTimingReq</code>, <code>recvTimingReq</code>, etc.) accepts a single <code>PacketPtr</code>. This packet is the request or response being sent or received.</p>
<p>To send a request packet, the master calls <code>sendTimingReq</code>. This in turn causes the slave's <code>recvTimingReq</code> to be called (in the same call chain), with the same <code>PacketPtr</code> the master passed in.</p>
<p><code>recvTimingReq</code> has a return type of <code>bool</code>. This boolean return value is returned directly to the caller. A return value of <code>true</code> means the packet was accepted by the slave. A return value of <code>false</code> means the slave could not accept it and the request must be retried sometime in the future.</p>
<p>In the figure above, the master first sends a timing request by calling <code>sendTimingReq</code>, which in turn calls <code>recvTimingReq</code> on the slave. The slave returns <code>true</code> from <code>recvTimingReq</code>, which is returned as the value of the master's <code>sendTimingReq</code> call. The master continues executing, and the slave completes the request asynchronously (e.g., if it is a cache, it looks up its tags to see if there is a match for the address in the request).</p>
<p>Once the slave has completed the request, it can send a response back to the master. The slave calls <code>sendTimingResp</code> with the response packet (the <code>PacketPtr</code> is the same as for the request, but it is now a response packet), which causes the master's <code>recvTimingResp</code> to be called. The master's <code>recvTimingResp</code> function returns <code>true</code>, which becomes the return value of the slave's <code>sendTimingResp</code>. With that, the interaction for this request is complete.</p>
<p>Later, in the master-slave example section, we will show example code for these functions.</p>
<p>It is possible that the master or the slave is busy when it receives a request or response. The figure below shows the case where the slave is busy when the original request is sent.</p>
<p><img src="part2/part2_5_memoryobject.assets/master_slave_2.png" alt="Simple master-slave interaction when the slave is busy." /></p>
<p>In this case, the slave returns <code>false</code> from the <code>recvTimingReq</code> function. When the master receives false after calling <code>sendTimingReq</code>, it must wait until its <code>recvReqRetry</code> function is called before retrying the <code>sendTimingReq</code>. The figure above shows the timing request failing once, but it can fail any number of times. Note: it is the responsibility of the master, not the slave, to keep track of the packet that failed. The slave does <em>not</em> keep a pointer to the failed packet.</p>
<p>Similarly, the figure below shows the case where the master is busy when the slave tries to send a response. In this case, the slave cannot call <code>sendTimingResp</code> again until its <code>recvRespRetry</code> is called.</p>
<p><img src="part2/part2_5_memoryobject.assets/master_slave_3.png" alt="Simple master-slave interaction when the master is busy." /></p>
<p>Importantly, in both of these cases, the retry code path can be executed in a single call stack. For instance, when the slave calls <code>sendRetryReq</code>, <code>recvTimingReq</code> can be invoked in that same call stack. This makes it easy to accidentally create infinite recursion or other bugs. It is therefore important to make sure that, before a memory object sends a retry, it is ready <em>at that very instant</em> to accept another packet.</p>
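<p>The whole timing handshake, including the retry protocol, can be sketched with a toy model (plain Python, not the gem5 API; all class and method names here are illustrative stand-ins for the port interface described above):</p>
<pre><code class="language-python"># Toy model of the timing request/retry handshake: the slave rejects a
# request while busy, the master holds on to the failed packet, and the
# slave sends a retry only once it can accept another packet.
class ToySlave:
    def __init__(self):
        self.busy = False
        self.peer = None  # the master, set when the ports are 'connected'

    def recv_timing_req(self, pkt):
        if self.busy:
            return False       # reject; the master must wait for a retry
        self.busy = True       # one outstanding request at a time
        return True

    def finish_request(self):
        self.busy = False
        # Only send the retry when we are ready to accept another packet.
        self.peer.recv_req_retry()

class ToyMaster:
    def __init__(self, slave):
        self.slave = slave
        slave.peer = self
        self.blocked_pkt = None  # the master tracks the failed packet

    def send_timing_req(self, pkt):
        if not self.slave.recv_timing_req(pkt):
            self.blocked_pkt = pkt   # remember it; the slave does not

    def recv_req_retry(self):
        pkt, self.blocked_pkt = self.blocked_pkt, None
        self.send_timing_req(pkt)    # may run in the same call stack

slave = ToySlave()
master = ToyMaster(slave)
master.send_timing_req('pkt A')       # accepted; the slave is now busy
master.send_timing_req('pkt B')       # rejected; the master stores it
assert master.blocked_pkt == 'pkt B'
slave.finish_request()                # triggers the retry; B is accepted
assert master.blocked_pkt is None
</code></pre>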
<h2 id="简单的内存对象示例"><a class="header" href="#简单的内存对象示例">A simple memory object example</a></h2>
<p>In this section, we will build a simple memory object. Initially, it will simply pass requests through from the CPU side (a simple CPU) to the memory side (a simple memory bus). See the figure below. It will have a single master port, to send requests to the memory bus, and two CPU-side ports for the CPU's instruction and data cache ports. In the next chapter, <a href="https://www.gem5.org/documentation/learning_gem5/part2/simplecache">simplecache-chapter</a>, we will add the logic to make this object a cache.</p>
<p><img src="part2/part2_5_memoryobject.assets/simple_memobj.png" alt="A system with a simple memory object sitting between the CPU and the memory bus." /></p>
<h3 id="声明-simobject"><a class="header" href="#声明-simobject">Declaring the SimObject</a></h3>
<p>Just like when we created the simple SimObject in the <a href="https://www.gem5.org/documentation/learning_gem5/part2/helloobject">hello-simobject-chapter</a>, the first step is to create a SimObject Python file. We will call this simple memory object <code>SimpleMemobj</code> and create the SimObject file in <code>src/learning_gem5/simple_memobj</code>.</p>
<pre><code class="language-python">from m5.params import *
from m5.proxy import *
from m5.SimObject import SimObject

class SimpleMemobj(SimObject):
    type = 'SimpleMemobj'
    cxx_header = &quot;learning_gem5/part2/simple_memobj.hh&quot;

    inst_port = SlavePort(&quot;CPU side port, receives requests&quot;)
    data_port = SlavePort(&quot;CPU side port, receives requests&quot;)
    mem_side = MasterPort(&quot;Memory side port, sends requests&quot;)
</code></pre>
<p>This object inherits from <code>SimObject</code>. The <code>SimObject</code> class has a pure virtual function, <code>getPort</code>, that we will need to implement in the C++ code.</p>
<p>This object's parameters are three ports. Two of them are for connecting the CPU's instruction and data ports, and the third connects to the memory bus. These ports do not have a default value, and they have a simple description.</p>
<p>It is important to remember the names of these ports. We will use them explicitly when implementing <code>SimpleMemobj</code> and defining its <code>getPort</code> function.</p>
<p>You can download the SimObject file <a href="https://www.gem5.org/_pages/static/scripts/part2/memoryobject/SimpleMemobj.py">here</a>.</p>
<p>Of course, you also need to create a SConscript file in the new directory that declares the SimObject Python file. You can download the SConscript file <a href="https://www.gem5.org/_pages/static/scripts/part2/memoryobject/SConscript">here</a>.</p>
<h3 id="定义-simplememobj-类"><a class="header" href="#定义-simplememobj-类">Defining the SimpleMemobj class</a></h3>
<p>Now, we create a header file for the <code>SimpleMemobj</code>.</p>
<pre><code class="language-cpp">#include &quot;mem/port.hh&quot;
#include &quot;params/SimpleMemobj.hh&quot;
#include &quot;sim/sim_object.hh&quot;

class SimpleMemobj : public SimObject
{
  private:

  public:

    /** constructor
     */
    SimpleMemobj(SimpleMemobjParams *params);
};
</code></pre>
<h3 id="定义从端口类型"><a class="header" href="#定义从端口类型">Defining the slave port type</a></h3>
<p>Now, we need to define classes for our two kinds of ports: the CPU-side and the memory-side ports. For this, we will declare these classes inside the <code>SimpleMemobj</code> class, since no other object will ever use them.</p>
<p>Let's start with the slave port, the CPU-side port. We will inherit from the <code>SlavePort</code> class. The following is the code required to override all of the pure virtual functions in the <code>SlavePort</code> class.</p>
<pre><code class="language-cpp">class CPUSidePort : public SlavePort
{
  private:
    SimpleMemobj *owner;

  public:
    CPUSidePort(const std::string&amp; name, SimpleMemobj *owner) :
        SlavePort(name, owner), owner(owner)
    { }

    AddrRangeList getAddrRanges() const override;

  protected:
    Tick recvAtomic(PacketPtr pkt) override { panic(&quot;recvAtomic unimpl.&quot;); }
    void recvFunctional(PacketPtr pkt) override;
    bool recvTimingReq(PacketPtr pkt) override;
    void recvRespRetry() override;
};
</code></pre>
<p>There are five functions this object needs to define.</p>
<p>The object also has a member variable for its owner so it can call functions on that object.</p>
<h3 id="定义主端口类型"><a class="header" href="#定义主端口类型">Defining the master port type</a></h3>
<p>Next, we need to define the master port type. This will be the memory-side port, which forwards requests from the CPU side to the rest of the memory system.</p>
<pre><code class="language-cpp">class MemSidePort : public MasterPort
{
  private:
    SimpleMemobj *owner;

  public:
    MemSidePort(const std::string&amp; name, SimpleMemobj *owner) :
        MasterPort(name, owner), owner(owner)
    { }

  protected:
    bool recvTimingResp(PacketPtr pkt) override;
    void recvReqRetry() override;
    void recvRangeChange() override;
};
</code></pre>
<p>This class has only three pure virtual functions that we must override.</p>
<h3 id="定义-simobject-接口"><a class="header" href="#定义-simobject-接口">Defining the SimObject interface</a></h3>
<p>Now that we have defined the <code>CPUSidePort</code> class and the <code>MemSidePort</code> class, we can declare our three ports as member variables of the <code>SimpleMemobj</code>. We also need to declare the pure virtual function from the <code>SimObject</code> class, <code>getPort</code>. gem5 uses this function during the initialization phase to connect memory objects together via ports.</p>
<pre><code class="language-cpp">class SimpleMemobj : public SimObject
{
  private:

    &lt;CPUSidePort declaration&gt;
    &lt;MemSidePort declaration&gt;

    CPUSidePort instPort;
    CPUSidePort dataPort;

    MemSidePort memPort;

  public:
    SimpleMemobj(SimpleMemobjParams *params);

    Port &amp;getPort(const std::string &amp;if_name,
                  PortID idx=InvalidPortID) override;
};
</code></pre>
<p>You can download the header file for the <code>SimpleMemobj</code> <a href="https://www.gem5.org/_pages/static/scripts/part2/memoryobject/simple_memobj.hh">here</a>.</p>
<h3 id="实现基本的-simobject-函数"><a class="header" href="#实现基本的-simobject-函数">Implementing basic SimObject functions</a></h3>
<p>In the constructor of the <code>SimpleMemobj</code>, we simply call the <code>SimObject</code> constructor. We also need to initialize all of the ports. Each port's constructor takes two parameters: the name and a pointer to its owner, as we defined in the header file. The name can be any string, but by convention it is the same name as in the Python SimObject file. We also initialize <code>blocked</code> to false.</p>
<pre><code class="language-cpp">#include &quot;learning_gem5/part2/simple_memobj.hh&quot;
#include &quot;debug/SimpleMemobj.hh&quot;

SimpleMemobj::SimpleMemobj(SimpleMemobjParams *params) :
    SimObject(params),
    instPort(params-&gt;name + &quot;.inst_port&quot;, this),
    dataPort(params-&gt;name + &quot;.data_port&quot;, this),
    memPort(params-&gt;name + &quot;.mem_side&quot;, this), blocked(false)
{
}
</code></pre>
<p>Next, we need to implement the interface to get the ports. This interface is the <code>getPort</code> function, which takes two parameters. The <code>if_name</code> (interface name) is the Python variable name of the interface for <em>this</em> object.</p>
<p>To implement <code>getPort</code>, we compare the <code>if_name</code> to see if it is <code>&quot;mem_side&quot;</code>, as we specified in the Python SimObject file. If it is, then we return the <code>memPort</code> object. If the name is <code>&quot;inst_port&quot;</code>, we return the instPort, and if the name is <code>&quot;data_port&quot;</code>, we return the dataPort. If it is none of these, we pass the name along to our parent class.</p>
<pre><code class="language-cpp">Port &amp;
SimpleMemobj::getPort(const std::string &amp;if_name, PortID idx)
{
    panic_if(idx != InvalidPortID, &quot;This object doesn't support vector ports&quot;);

    // This is the name from the Python SimObject declaration (SimpleMemobj.py)
    if (if_name == &quot;mem_side&quot;) {
        return memPort;
    } else if (if_name == &quot;inst_port&quot;) {
        return instPort;
    } else if (if_name == &quot;data_port&quot;) {
        return dataPort;
    } else {
        // pass it along to our super class
        return SimObject::getPort(if_name, idx);
    }
}
</code></pre>
<h3 id="实现从端口和主端口功能"><a class="header" href="#实现从端口和主端口功能">Implementing slave and master port functions</a></h3>
<p>The implementations of both the slave and the master port are relatively simple. For the most part, each port function just forwards the information to the main memory object (<code>SimpleMemobj</code>).</p>
<p>Starting with two simple functions, <code>getAddrRanges</code> and <code>recvFunctional</code> simply call the corresponding functions on the owner (the <code>SimpleMemobj</code>).</p>
<pre><code class="language-cpp">AddrRangeList
SimpleMemobj::CPUSidePort::getAddrRanges() const
{
    return owner-&gt;getAddrRanges();
}

void
SimpleMemobj::CPUSidePort::recvFunctional(PacketPtr pkt)
{
    return owner-&gt;handleFunctional(pkt);
}
</code></pre>
<p>The implementations of these functions in the <code>SimpleMemobj</code> are equally simple. They just pass the request through to the memory side. We can also use <code>DPRINTF</code> calls here to track what is happening, for debugging purposes.</p>
<pre><code class="language-cpp">void
SimpleMemobj::handleFunctional(PacketPtr pkt)
{
    memPort.sendFunctional(pkt);
}

AddrRangeList
SimpleMemobj::getAddrRanges() const
{
    DPRINTF(SimpleMemobj, &quot;Sending new ranges\n&quot;);
    return memPort.getAddrRanges();
}
</code></pre>
<p>Similarly, for the <code>MemSidePort</code>, we need to implement <code>recvRangeChange</code> and forward the request through the <code>SimpleMemobj</code> to the slave ports.</p>
<pre><code class="language-cpp">void
SimpleMemobj::MemSidePort::recvRangeChange()
{
    owner-&gt;sendRangeChange();
}
void
SimpleMemobj::sendRangeChange()
{
    instPort.sendRangeChange();
    dataPort.sendRangeChange();
}
</code></pre>
<h3 id="实现接收请求"><a class="header" href="#实现接收请求">Implementing receiving requests</a></h3>
<p>The implementation of <code>recvTimingReq</code> is slightly more complicated. We need to check whether the <code>SimpleMemobj</code> can accept the request. The <code>SimpleMemobj</code> is a very simple blocking structure; we only allow a single request outstanding at a time. Therefore, if we get a request while another request is outstanding, the <code>SimpleMemobj</code> will block the second request.</p>
<p>To simplify the implementation, the <code>CPUSidePort</code> stores all of the flow-control information for the port interface. Thus, we need to add an extra member variable, a bool <code>needRetry</code>, to the <code>CPUSidePort</code> that stores whether we need to send a retry once the <code>SimpleMemobj</code> becomes free. Then, whenever a request is blocked by the <code>SimpleMemobj</code>, we record that we need to send a retry sometime in the future.</p>
<pre><code class="language-cpp">bool
SimpleMemobj::CPUSidePort::recvTimingReq(PacketPtr pkt)
{
    if (!owner-&gt;handleRequest(pkt)) {
        needRetry = true;
        return false;
    } else {
        return true;
    }
}
</code></pre>
<p>To handle the request for the <code>SimpleMemobj</code>, we first check whether the <code>SimpleMemobj</code> is already blocked, waiting for a response to another request. If it is blocked, then we return <code>false</code> to notify the calling master port that we cannot accept the request right now. Otherwise, we mark the object as blocked and send the packet out of the memory-side port. For this, we can define a helper function in the <code>MemSidePort</code> object to hide the flow control from the <code>SimpleMemobj</code> implementation. We will assume that <code>memPort</code> handles all of the flow control and always return <code>true</code> from <code>handleRequest</code> once we have successfully consumed the request.</p>
<pre><code class="language-cpp">bool
SimpleMemobj::handleRequest(PacketPtr pkt)
{
    if (blocked) {
        return false;
    }
    DPRINTF(SimpleMemobj, &quot;Got request for addr %#x\n&quot;, pkt-&gt;getAddr());
    blocked = true;
    memPort.sendPacket(pkt);
    return true;
}
</code></pre>
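<p>The blocking behavior of <code>handleRequest</code> can be sketched with a minimal stand-in (plain Python, not gem5 code; <code>sent</code> stands in for <code>memPort.sendPacket</code>):</p>
<pre><code class="language-python"># Toy sketch of the blocking structure above: the object accepts at
# most one outstanding request, rejecting new ones until the response
# for the current request arrives.
class ToyMemobj:
    def __init__(self):
        self.blocked = False
        self.sent = []          # stands in for memPort.sendPacket

    def handle_request(self, pkt):
        if self.blocked:
            return False        # the caller must retry later
        self.blocked = True
        self.sent.append(pkt)
        return True

    def handle_response(self, pkt):
        assert self.blocked
        self.blocked = False    # unblock *before* forwarding the response
        return True

obj = ToyMemobj()
assert obj.handle_request('A') is True
assert obj.handle_request('B') is False   # blocked: one outstanding only
obj.handle_response('A')
assert obj.handle_request('B') is True    # accepted after unblocking
</code></pre>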
<p>Next, we need to implement the <code>sendPacket</code> function in the <code>MemSidePort</code>. This function handles the flow control in case its peer slave port cannot accept the request. For this, we need to add a member to the <code>MemSidePort</code> that stores the packet in case it is blocked. It is the responsibility of the sender to store the packet if the receiver cannot receive the request (or response).</p>
<p>This function simply sends the packet by calling the <code>sendTimingReq</code> function. If the send fails, this object stores the packet in the <code>blockedPacket</code> member variable so it can send the packet later (when it receives a <code>recvReqRetry</code>). This function also contains some defensive code to make sure there are no bugs and that we never try to erroneously overwrite the <code>blockedPacket</code> variable.</p>
<pre><code class="language-cpp">void
SimpleMemobj::MemSidePort::sendPacket(PacketPtr pkt)
{
    panic_if(blockedPacket != nullptr, &quot;Should never try to send if blocked!&quot;);
    if (!sendTimingReq(pkt)) {
        blockedPacket = pkt;
    }
}
</code></pre>
<p>Next, we need to implement the code to resend the packet. In this function, we simply try to resend the packet by calling the <code>sendPacket</code> function we wrote above.</p>
<pre><code class="language-cpp">void
SimpleMemobj::MemSidePort::recvReqRetry()
{
    assert(blockedPacket != nullptr);

    PacketPtr pkt = blockedPacket;
    blockedPacket = nullptr;

    sendPacket(pkt);
}
</code></pre>
<h3 id="实现接收响应"><a class="header" href="#实现接收响应">Implementing receiving responses</a></h3>
<p>The response code path is similar to the receive code path. When the <code>MemSidePort</code> gets a response, we forward it through the <code>SimpleMemobj</code> to the appropriate <code>CPUSidePort</code>.</p>
<pre><code class="language-cpp">bool
SimpleMemobj::MemSidePort::recvTimingResp(PacketPtr pkt)
{
    return owner-&gt;handleResponse(pkt);
}
</code></pre>
<p>In the <code>SimpleMemobj</code>, when we receive a response it should always be blocked, since the object is blocking. Before sending the packet back to the CPU side, we need to mark the object as no longer blocked. This must be done <em>before calling <code>sendTimingResp</code></em>. Otherwise, it is possible to get stuck in an infinite loop, since the master port may have only a single call chain between receiving a response and sending another request.</p>
<p>After unblocking the <code>SimpleMemobj</code>, we check whether the packet is an instruction or a data access and send it back across the appropriate port. Finally, since the object is now unblocked, we may need to notify the CPU-side ports to retry their failed requests.</p>
<pre><code class="language-cpp">bool
SimpleMemobj::handleResponse(PacketPtr pkt)
{
    assert(blocked);
    DPRINTF(SimpleMemobj, &quot;Got response for addr %#x\n&quot;, pkt-&gt;getAddr());

    blocked = false;

    // Simply forward to the memory port
    if (pkt-&gt;req-&gt;isInstFetch()) {
        instPort.sendPacket(pkt);
    } else {
        dataPort.sendPacket(pkt);
    }

    instPort.trySendRetry();
    dataPort.trySendRetry();

    return true;
}
</code></pre>
<p>Like the convenience function we implemented in the <code>MemSidePort</code> for sending packets, we can implement a <code>sendPacket</code> function in the <code>CPUSidePort</code> to send responses to the CPU side. This function calls <code>sendTimingResp</code>, which in turn calls <code>recvTimingResp</code> on the peer master port. If this call fails because the peer port is currently blocked, we store the packet to send it later.</p>
<pre><code class="language-cpp">void
SimpleMemobj::CPUSidePort::sendPacket(PacketPtr pkt)
{
    panic_if(blockedPacket != nullptr, &quot;Should never try to send if blocked!&quot;);

    if (!sendTimingResp(pkt)) {
        blockedPacket = pkt;
    }
}
</code></pre>
<p>We will send that blocked packet once we receive a <code>recvRespRetry</code>. This function is exactly the same as the one above and simply tries to resend the packet, which may be blocked again.</p>
<pre><code class="language-cpp">void
SimpleMemobj::CPUSidePort::recvRespRetry()
{
    assert(blockedPacket != nullptr);

    PacketPtr pkt = blockedPacket;
    blockedPacket = nullptr;

    sendPacket(pkt);
}
</code></pre>
<p>Finally, we need to implement <code>trySendRetry</code> for the <code>CPUSidePort</code>. This function is called by the <code>SimpleMemobj</code> whenever it may have become unblocked. <code>trySendRetry</code> checks whether a retry is needed, and if so, it calls <code>sendRetryReq</code>, which in turn calls <code>recvReqRetry</code> on the peer master port (the CPU in this case).</p>
<pre><code class="language-cpp">void
SimpleMemobj::CPUSidePort::trySendRetry()
{
    if (needRetry &amp;&amp; blockedPacket == nullptr) {
        needRetry = false;
        DPRINTF(SimpleMemobj, &quot;Sending retry req for %d\n&quot;, id);
        sendRetryReq();
    }
}
</code></pre>
<p>In addition to this function, to complete the file, add the create function for the SimpleMemobj.</p>
<pre><code class="language-cpp">SimpleMemobj*
SimpleMemobjParams::create()
{
    return new SimpleMemobj(this);
}
</code></pre>
<p>You can download the implementation of the <code>SimpleMemobj</code> <a href="https://www.gem5.org/_pages/static/scripts/part2/memoryobject/simple_memobj.cc">here</a>.</p>
<p>The figure below shows the relationships between the <code>CPUSidePort</code>, the <code>MemSidePort</code>, and the <code>SimpleMemobj</code>. It shows how the peer ports interact with the implementation of the <code>SimpleMemobj</code>. Each bold function is one we had to implement, and the non-bold functions are the port interfaces to the peer ports. The colors highlight one API path through the object (e.g., receiving a request or updating the memory ranges).</p>
<p><img src="part2/part2_5_memoryobject.assets/memobj_api.png" alt="Interactions between the SimpleMemobj and its ports." /></p>
<p>For this simple memory object, packets are just forwarded from the CPU side to the memory side. However, by modifying <code>handleRequest</code> and <code>handleResponse</code>, we can create rich, functional objects, like the cache in the <a href="https://www.gem5.org/documentation/learning_gem5/part2/simplecache">next chapter</a>.</p>
<h3 id="创建配置文件-1"><a class="header" href="#创建配置文件-1">Creating a config file</a></h3>
<p>This is all of the code needed to implement a simple memory object! In the <a href="https://www.gem5.org/documentation/learning_gem5/part2/simplecache">next chapter</a>, we will take this framework and add some cache logic to make this memory object into a simple cache. But first, let's look at the config file to add the SimpleMemobj to your system.</p>
<p>This config file builds on the simple config file from the <a href="https://www.gem5.org/documentation/learning_gem5/part1/simple_config">simple-config-chapter</a>. However, instead of connecting the CPU directly to the memory bus, we will instantiate a <code>SimpleMemobj</code> and place it between the CPU and the memory bus.</p>
<pre><code class="language-python">import m5
from m5.objects import *

system = System()
system.clk_domain = SrcClockDomain()
system.clk_domain.clock = '1GHz'
system.clk_domain.voltage_domain = VoltageDomain()
system.mem_mode = 'timing'
system.mem_ranges = [AddrRange('512MB')]

system.cpu = TimingSimpleCPU()

system.memobj = SimpleMemobj()

system.cpu.icache_port = system.memobj.inst_port
system.cpu.dcache_port = system.memobj.data_port

system.membus = SystemXBar()

system.memobj.mem_side = system.membus.slave

system.cpu.createInterruptController()
system.cpu.interrupts[0].pio = system.membus.master
system.cpu.interrupts[0].int_master = system.membus.slave
system.cpu.interrupts[0].int_slave = system.membus.master

system.mem_ctrl = DDR3_1600_8x8()
system.mem_ctrl.range = system.mem_ranges[0]
system.mem_ctrl.port = system.membus.master

system.system_port = system.membus.slave

process = Process()
process.cmd = ['tests/test-progs/hello/bin/x86/linux/hello']
system.cpu.workload = process
system.cpu.createThreads()

root = Root(full_system = False, system = system)
m5.instantiate()

print (&quot;Beginning simulation!&quot;)
exit_event = m5.simulate()
print('Exiting @ tick %i because %s' % (m5.curTick(), exit_event.getCause()))
</code></pre>
<p>You can download this config script <a href="https://www.gem5.org/_pages/static/scripts/part2/memoryobject/simple_memobj.py">here</a>.</p>
<p>Now, when you run this config file, you get the following output.</p>
<pre><code class="language-bash">gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 compiled Jan  5 2017 13:40:18
gem5 started Jan  9 2017 10:17:17
gem5 executing on chinook, pid 5138
command line: build/X86/gem5.opt configs/learning_gem5/part2/simple_memobj.py

Global frequency set at 1000000000000 ticks per second
warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)
0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000
warn: CoherentXBar system.membus has no snooping ports attached!
warn: ClockedObject: More than one power state change request encountered within the same simulation tick
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
Hello world!
Exiting @ tick 507841000 because target called exit()
</code></pre>
<p>If you run with the <code>SimpleMemobj</code> debug flag, you can see all of the memory requests and responses from and to the CPU.</p>
<pre><code class="language-bash">gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 compiled Jan  5 2017 13:40:18
gem5 started Jan  9 2017 10:18:51
gem5 executing on chinook, pid 5157
command line: build/X86/gem5.opt --debug-flags=SimpleMemobj configs/learning_gem5/part2/simple_memobj.py

Global frequency set at 1000000000000 ticks per second
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
      0: system.memobj: Got request for addr 0x190
  77000: system.memobj: Got response for addr 0x190
  77000: system.memobj: Got request for addr 0x190
 132000: system.memobj: Got response for addr 0x190
 132000: system.memobj: Got request for addr 0x190
 187000: system.memobj: Got response for addr 0x190
 187000: system.memobj: Got request for addr 0x94e30
 250000: system.memobj: Got response for addr 0x94e30
 250000: system.memobj: Got request for addr 0x190
 ...
</code></pre>
<p>You may also want to change the CPU model to the out-of-order model (<code>DerivO3CPU</code>). When using the out-of-order CPU, you will likely see a different address stream, since it allows multiple memory requests outstanding at once. With the out-of-order CPU, there will now be many stalls because the <code>SimpleMemobj</code> is blocking.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="创建一个简单的缓存对象"><a class="header" href="#创建一个简单的缓存对象">Creating a simple cache object</a></h1>
<p>In this chapter, we will take the framework for a memory object we created in the <a href="https://www.gem5.org/documentation/learning_gem5/part2/memoryobject">previous chapter</a> and add caching logic to it.</p>
<h2 id="简单缓存模拟对象"><a class="header" href="#简单缓存模拟对象">The SimpleCache SimObject</a></h2>
<p>After creating the SConscript file (which you can download <a href="https://www.gem5.org/_pages/static/scripts/part2/simplecache/SConscript">here</a>), we can create the SimObject Python file. We will call this simple memory object <code>SimpleCache</code> and create the SimObject file in <code>src/learning_gem5/simple_cache</code>.</p>
<pre><code class="language-python">from m5.params import *
from m5.proxy import *
from MemObject import MemObject

class SimpleCache(MemObject):
    type = 'SimpleCache'
    cxx_header = &quot;learning_gem5/simple_cache/simple_cache.hh&quot;

    cpu_side = VectorSlavePort(&quot;CPU side port, receives requests&quot;)
    mem_side = MasterPort(&quot;Memory side port, sends requests&quot;)

    latency = Param.Cycles(1, &quot;Cycles taken on a hit or to resolve a miss&quot;)

    size = Param.MemorySize('16kB', &quot;The size of the cache&quot;)

    system = Param.System(Parent.any, &quot;The system this cache is part of&quot;)
</code></pre>
<p>There are a couple of differences between this file and the one from the <a href="https://www.gem5.org/documentation/learning_gem5/part2/memoryobject">previous chapter</a>. First, we have a couple of extra parameters: the latency of a cache access and the size of the cache. The parameters-chapter covers these kinds of SimObject parameters in more detail.</p>
<p>Next, we include a <code>System</code> parameter, which is a pointer to the main system this cache is connected to. This is needed so we can get the cache block size from the system object when we are initializing the cache. To reference the system object this cache is connected to, we use a special <em>proxy parameter</em>. In this case, we use <code>Parent.any</code>.</p>
<p>In the Python config file, when the <code>SimpleCache</code> is instantiated, this proxy parameter searches through all of the parents of the <code>SimpleCache</code> instance to find a SimObject that matches the <code>System</code> type. Since we often use a <code>System</code> as the root SimObject, you will often see this proxy parameter resolve to the <code>system</code>.</p>
<p>The third difference between the <code>SimpleCache</code> and the <code>SimpleMemobj</code> is that instead of two named CPU ports (<code>inst_port</code> and <code>data_port</code>), the <code>SimpleCache</code> uses another special parameter: the <code>VectorPort</code>. <code>VectorPorts</code> behave like regular ports (e.g., they are resolved via <code>getMasterPort</code> and <code>getSlavePort</code>), but they allow this object to connect to multiple peers. Then, in the resolution functions, the parameter we ignored before (<code>PortID idx</code>) is used to differentiate between the different ports. By using a vector port, this cache can be connected into the system more flexibly than the <code>SimpleMemobj</code>.</p>
<h2 id="实现-simplecache"><a class="header" href="#实现-simplecache">Implementing the SimpleCache</a></h2>
<p>Most of the code for the <code>SimpleCache</code> is the same as for the <code>SimpleMemobj</code>. There are a couple of changes in the constructor and the key memory-object functions.</p>
<p>First, we need to create the CPU-side ports dynamically in the constructor and initialize the extra member variables based on the SimObject parameters.</p>
<pre><code class="language-cpp">SimpleCache::SimpleCache(SimpleCacheParams *params) :
    MemObject(params),
    latency(params-&gt;latency),
    blockSize(params-&gt;system-&gt;cacheLineSize()),
    capacity(params-&gt;size / blockSize),
    memPort(params-&gt;name + &quot;.mem_side&quot;, this),
    blocked(false), outstandingPacket(nullptr), waitingPortId(-1)
{
    for (int i = 0; i &lt; params-&gt;port_cpu_side_connection_count; ++i) {
        cpuPorts.emplace_back(name() + csprintf(&quot;.cpu_side[%d]&quot;, i), i, this);
    }
}
</code></pre>
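<p>As a quick sanity check of the constructor arithmetic, assuming the default <code>16kB</code> size and a 64-byte cache line (an assumption here; the real value comes from the system's <code>cacheLineSize()</code>):</p>
<pre><code class="language-python"># The cache capacity is expressed in blocks, derived from the byte
# size parameter and the system's cache line size.
size_bytes = 16 * 1024      # the '16kB' default from the SimObject file
block_size = 64             # assumed cacheLineSize()
capacity_blocks = size_bytes // block_size
print(capacity_blocks)      # 256
</code></pre>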
<p>In this function, we use the <code>cacheLineSize</code> from the system parameter to set the <code>blockSize</code> of the cache. We also initialize the capacity based on the block size and the size parameter, and initialize the other member variables we will need below. Finally, we must create a number of <code>CPUSidePorts</code> based on the number of connections to this object. Since the <code>cpu_side</code> port was declared as a <code>VectorSlavePort</code> in the SimObject Python file, the parameters automatically have a variable <code>port_cpu_side_connection_count</code>. This is based on the Python name of the parameter. For each of these connections, we add a new <code>CPUSidePort</code> to the <code>cpuPorts</code> vector declared in the <code>SimpleCache</code> class.</p>
<p>We also add an extra member variable to the <code>CPUSidePort</code> to save its id, and add it as a parameter to its constructor.</p>
<p>Next, we need to implement <code>getMasterPort</code> and <code>getSlavePort</code>. <code>getMasterPort</code> is exactly the same as in the <code>SimpleMemobj</code>. For <code>getSlavePort</code>, we now need to return the port based on the id requested.</p>
<pre><code class="language-cpp">BaseSlavePort&amp;
SimpleCache::getSlavePort(const std::string&amp; if_name, PortID idx)
{
    if (if_name == &quot;cpu_side&quot; &amp;&amp; idx &lt; cpuPorts.size()) {
        return cpuPorts[idx];
    } else {
        return MemObject::getSlavePort(if_name, idx);
    }
}
</code></pre>
<p>The implementations of the <code>CPUSidePort</code> and the <code>MemSidePort</code> are almost the same as in the <code>SimpleMemobj</code>. The only difference is that we need to add an extra parameter to <code>handleRequest</code>: the id of the port the request originated from. Without this id, we would not be able to forward the response to the correct port. The <code>SimpleMemobj</code> knew which port to send replies through based on whether the original request was an instruction or a data access. However, this information is not useful to the <code>SimpleCache</code>, since it uses a vector of ports rather than named ports.</p>
<p>The new <code>handleRequest</code> function differs from the <code>handleRequest</code> in the <code>SimpleMemobj</code> in two ways. First, it stores the port id of the request as described above. Since the <code>SimpleCache</code> is blocking and only allows a single outstanding request at a time, we only need to save a single port id.</p>
<p>Second, it takes time to access a cache. Therefore, we need to account for the latency of accessing the cache tags and data for the request. This is why we added an extra latency parameter to the cache object, and in <code>handleRequest</code> we use an event to stall the request for the needed amount of time. We schedule a new event <code>latency</code> cycles in the future. The <code>clockEdge</code> function returns the tick on which the <em>nth</em> cycle in the future occurs.</p>
<pre><code class="language-cpp">bool
SimpleCache::handleRequest(PacketPtr pkt, int port_id)
{
    if (blocked) {
        return false;
    }
    DPRINTF(SimpleCache, &quot;Got request for addr %#x\n&quot;, pkt-&gt;getAddr());

    blocked = true;
    waitingPortId = port_id;

    schedule(new AccessEvent(this, pkt), clockEdge(latency));

    return true;
}
</code></pre>
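<p>A simplified stand-in for the <code>clockEdge</code> arithmetic (plain Python, not the gem5 implementation): round the current tick up to a clock edge, then advance <em>n</em> clock periods.</p>
<pre><code class="language-python">def clock_edge(cur_tick, period, n):
    # Round cur_tick up to the next clock edge, then advance n cycles.
    next_edge = ((cur_tick + period - 1) // period) * period
    return next_edge + n * period

# With a 1 GHz clock (a period of 1000 ticks at gem5's usual
# 1 THz tick resolution), scheduling 1 cycle ahead:
print(clock_edge(0, 1000, 1))     # 1000
print(clock_edge(1500, 1000, 1))  # 3000
</code></pre>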
<p>This <code>AccessEvent</code> is a little more complicated than the <code>EventWrapper</code> we used in the event-chapter. Instead of an <code>EventWrapper</code>, in the <code>SimpleCache</code> we will use a new class, since we need to pass the packet (<code>pkt</code>) from <code>handleRequest</code> to the event-handler function. The code below is the <code>AccessEvent</code> class. We only need to implement the <code>process</code> function, which calls the function we want to use as our event handler, in this case <code>accessTiming</code>. We also pass the flag <code>AutoDelete</code> to the event constructor so we do not need to worry about freeing the memory for the dynamically created object. The event code automatically deletes the object after the <code>process</code> function has executed.</p>
<pre><code class="language-cpp">class AccessEvent : public Event
{
  private:
    SimpleCache *cache;
    PacketPtr pkt;
  public:
    AccessEvent(SimpleCache *cache, PacketPtr pkt) :
        Event(Default_Pri, AutoDelete), cache(cache), pkt(pkt)
    { }
    void process() override {
        cache-&gt;accessTiming(pkt);
    }
};
</code></pre>
<p>Now, we need to implement the event handler, <code>accessTiming</code>.</p>
<pre><code class="language-cpp">void
SimpleCache::accessTiming(PacketPtr pkt)
{
    bool hit = accessFunctional(pkt);
    if (hit) {
        pkt-&gt;makeResponse();
        sendResponse(pkt);
    } else {
        &lt;miss handling&gt;
    }
}
</code></pre>
<p>This function first <em>functionally</em> accesses the cache. The <code>accessFunctional</code> function (described below) performs the functional access of the cache and either reads or writes the cache on a hit or returns that the access was a miss.</p>
<p>If the access is a hit, we simply need to respond to the packet. To respond, you first must call the <code>makeResponse</code> function on the packet. This converts the packet from a request packet into a response packet. For instance, if the memory command in the packet was a <code>ReadReq</code>, it gets converted into a <code>ReadResp</code>. Writes behave similarly. Then, we can send the response back to the CPU.</p>
<p>The <code>sendResponse</code> function does the same things as the <code>handleResponse</code> function in the <code>SimpleMemobj</code>, except that it uses the <code>waitingPortId</code> to send the packet to the right port. In this function, we need to mark the <code>SimpleCache</code> as unblocked before calling <code>sendPacket</code>, in case the peer on the CPU side immediately calls <code>sendTimingReq</code>. Then, we try to send retries to the CPU-side ports if the <code>SimpleCache</code> can now receive requests and the ports need to be sent a retry.</p>
<pre><code class="language-cpp">void SimpleCache::sendResponse(PacketPtr pkt)
{
    int port = waitingPortId;

    blocked = false;
    waitingPortId = -1;

    cpuPorts[port].sendPacket(pkt);
    for (auto&amp; port : cpuPorts) {
        port.trySendRetry();
    }
}
</code></pre>
<hr />
<p>Returning to the <code>accessTiming</code> function, we now need to handle the cache miss case. On a miss, we first have to check whether the missing packet targets an entire cache block. If the packet is aligned and the size of the request is the size of a cache block, then we can simply forward the request to memory, just like in the <code>SimpleMemobj</code>.</p>
<p>However, if the packet is smaller than a cache block, then we need to create a new packet to read the entire cache block from memory. Here, whether the packet is a read or a write request, we send a read request to memory to load the data for the cache block into the cache. In the case of a write, it will occur in the cache after we have loaded the data from memory.</p>
<p>Then, we create a new packet that is <code>blockSize</code> in size, and we call the <code>allocate</code> function on the <code>Packet</code> object to allocate memory for the data we will read from memory. Note: this memory is freed when we free the packet. We use the original request object in the packet so the memory-side objects know the original requestor and the type of the request for statistics.</p>
<p>Finally, we save the original packet pointer (<code>pkt</code>) in a member variable, <code>outstandingPacket</code>, so we can recover it when the <code>SimpleCache</code> receives a response. Then, we send the new packet across the memory-side port.</p>
<pre><code class="language-cpp">void
SimpleCache::accessTiming(PacketPtr pkt)
{
    bool hit = accessFunctional(pkt);
    if (hit) {
        pkt-&gt;makeResponse();
        sendResponse(pkt);
    } else {
        Addr addr = pkt-&gt;getAddr();
        Addr block_addr = pkt-&gt;getBlockAddr(blockSize);
        unsigned size = pkt-&gt;getSize();
        if (addr == block_addr &amp;&amp; size == blockSize) {
            DPRINTF(SimpleCache, &quot;forwarding packet\n&quot;);
            memPort.sendPacket(pkt);
        } else {
            DPRINTF(SimpleCache, &quot;Upgrading packet to block size\n&quot;);
            panic_if(addr - block_addr + size &gt; blockSize,
                     &quot;Cannot handle accesses that span multiple cache lines&quot;);

            assert(pkt-&gt;needsResponse());
            MemCmd cmd;
            if (pkt-&gt;isWrite() || pkt-&gt;isRead()) {
                cmd = MemCmd::ReadReq;
            } else {
                panic(&quot;Unknown packet type in upgrade size&quot;);
            }

            PacketPtr new_pkt = new Packet(pkt-&gt;req, cmd, blockSize);
            new_pkt-&gt;allocate();

            outstandingPacket = pkt;

            memPort.sendPacket(new_pkt);
        }
    }
}
</code></pre>
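<p>The block-alignment check in the miss path above is plain modular arithmetic. A minimal Python sketch, assuming a power-of-two block size; the function names are hypothetical stand-ins for the <code>Packet</code> methods:</p>
<pre><code class="language-python">def get_block_addr(addr, block_size):
    # Drop the offset within the block; mirrors Packet::getBlockAddr()
    # for a power-of-two block size.
    return addr - (addr % block_size)

def is_whole_block(addr, size, block_size):
    # A packet can be forwarded to memory unchanged only if it is
    # block-aligned and covers exactly one full block.
    return addr == get_block_addr(addr, block_size) and size == block_size

block_size = 64
print(is_whole_block(0x1000, 64, block_size))  # aligned, full-block access
print(is_whole_block(0x1008, 8, block_size))   # partial access, needs upgrading
</code></pre>
<p>Any access that fails this check, but still fits within a single block, is upgraded to a block-sized read as shown in the C++ code above.</p>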
<p>On the response from memory, we know that it was caused by a cache miss. The first step is to insert the responding packet into the cache.</p>
<p>Then, either there is an <code>outstandingPacket</code>, in which case we need to forward that packet to the original requestor, or there is no <code>outstandingPacket</code>, which means we should forward the <code>pkt</code> in the response to the original requestor.</p>
<p>If the packet we received as a response was an upgrade packet (because the original request was smaller than a cache line), then we need to copy the new data into the <code>outstandingPacket</code> or write it into the cache. Then, we need to delete the new packet that we created in the miss-handling logic.</p>
<pre><code class="language-cpp">bool
SimpleCache::handleResponse(PacketPtr pkt)
{
    assert(blocked);
    DPRINTF(SimpleCache, &quot;Got response for addr %#x\n&quot;, pkt-&gt;getAddr());
    insert(pkt);

    if (outstandingPacket != nullptr) {
        accessFunctional(outstandingPacket);
        outstandingPacket-&gt;makeResponse();
        delete pkt;
        pkt = outstandingPacket;
        outstandingPacket = nullptr;
    } // else, pkt contains the data it needs

    sendResponse(pkt);

    return true;
}
</code></pre>
<h3 id="功能缓存逻辑"><a class="header" href="#功能缓存逻辑">Functional cache logic</a></h3>
<p>Now, we need to implement two more functions: <code>accessFunctional</code> and <code>insert</code>. These two functions make up the key components of the cache logic.</p>
<p>First, to functionally update the cache, we need storage for the cache contents. The simplest cache storage is a map (hash table) from addresses to data. Thus, we add the following member to the <code>SimpleCache</code>.</p>
<pre><code class="language-cpp">std::unordered_map&lt;Addr, uint8_t*&gt; cacheStore;
</code></pre>
<p>To access the cache, we first check whether there is an entry in the map matching the address in the packet. We use the <code>getBlockAddr</code> function of the <code>Packet</code> class to get the block-aligned address. Then, we simply search for that address in the map. If we do not find the address, this function returns <code>false</code>, the data is not in the cache, and it is a miss.</p>
<p>Otherwise, if the packet is a write request, we need to update the data in the cache. To do this, we write the data from the packet into the cache. We use the <code>writeDataToBlock</code> function, which writes the data in the packet into a potentially larger block of data. This function takes the block size as a parameter, computes the offset of the access within the cache block, and writes at the correct offset into the pointer passed as the first parameter.</p>
<p>If the packet is a read request, we need to update the packet's data with the data from the cache. The <code>setDataFromBlock</code> function performs the same offset calculation as the <code>writeDataToBlock</code> function, but writes the packet with the data from the pointer passed as the first parameter.</p>
<pre><code class="language-cpp">bool
SimpleCache::accessFunctional(PacketPtr pkt)
{
    Addr block_addr = pkt-&gt;getBlockAddr(blockSize);
    auto it = cacheStore.find(block_addr);
    if (it != cacheStore.end()) {
        if (pkt-&gt;isWrite()) {
            pkt-&gt;writeDataToBlock(it-&gt;second, blockSize);
        } else if (pkt-&gt;isRead()) {
            pkt-&gt;setDataFromBlock(it-&gt;second, blockSize);
        } else {
            panic(&quot;Unknown packet type!&quot;);
        }
        return true;
    }
    return false;
}
</code></pre>
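<p>The offset arithmetic hidden inside <code>writeDataToBlock</code> and <code>setDataFromBlock</code> can be sketched in Python (hypothetical helper names, assuming one <code>bytearray</code> per cached block):</p>
<pre><code class="language-python">def write_data_to_block(block, block_size, addr, data):
    # Offset of the access within the (larger) block, then copy the
    # packet's bytes into it, as Packet::writeDataToBlock does.
    offset = addr % block_size
    block[offset:offset + len(data)] = data

def set_data_from_block(block, block_size, addr, size):
    # Same offset computation, opposite copy direction,
    # as Packet::setDataFromBlock does.
    offset = addr % block_size
    return bytes(block[offset:offset + size])

block = bytearray(64)  # one zero-filled 64-byte cache block
write_data_to_block(block, 64, 0x1008, b'\xab\xcd')
print(set_data_from_block(block, 64, 0x1008, 2))
</code></pre>
<p>Both directions share the same offset computation; only the copy direction differs.</p>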
<p>Finally, we also need to implement the <code>insert</code> function. This function is called every time the memory-side port responds to a request.</p>
<p>The first step is to check whether the cache is currently full. If the cache has more entries (blocks) than the capacity of the cache as set by the SimObject parameter, then we need to evict something. The following code evicts a random entry by leveraging the hash-table implementation of the C++ <code>unordered_map</code>.</p>
<p>On an eviction, we need to write the data back to the backing memory in case it has been updated. For this, we create a new <code>Request</code>-<code>Packet</code> pair. The packet uses a new memory command: <code>MemCmd::WritebackDirty</code>. Then, we send the packet across the memory-side port (<code>memPort</code>) and erase the entry in the cache storage map.</p>
<p>Then, after a block has potentially been evicted, we add the new address to the cache. For this, we simply allocate space for the block and add an entry to the map. Finally, we write the data from the response packet into the newly allocated block. This packet is guaranteed to be the size of a cache block, because we created a new block-sized packet in the cache-miss logic whenever the original packet was smaller than a cache block.</p>
<pre><code class="language-cpp">void
SimpleCache::insert(PacketPtr pkt)
{
    if (cacheStore.size() &gt;= capacity) {
        // Select random thing to evict. This is a little convoluted since we
        // are using a std::unordered_map. See http://bit.ly/2hrnLP2
        int bucket, bucket_size;
        do {
            bucket = random_mt.random(0, (int)cacheStore.bucket_count() - 1);
        } while ( (bucket_size = cacheStore.bucket_size(bucket)) == 0 );
        auto block = std::next(cacheStore.begin(bucket),
                               random_mt.random(0, bucket_size - 1));

        RequestPtr req = new Request(block-&gt;first, blockSize, 0, 0);
        PacketPtr new_pkt = new Packet(req, MemCmd::WritebackDirty, blockSize);
        new_pkt-&gt;dataDynamic(block-&gt;second); // This will be deleted later

        DPRINTF(SimpleCache, &quot;Writing packet back %s\n&quot;, pkt-&gt;print());
        memPort.sendTimingReq(new_pkt);

        cacheStore.erase(block-&gt;first);
    }
    uint8_t *data = new uint8_t[blockSize];
    cacheStore[pkt-&gt;getAddr()] = data;

    pkt-&gt;writeDataToBlock(data, blockSize);
}
</code></pre>
<h2 id="为缓存创建配置文件"><a class="header" href="#为缓存创建配置文件">Creating a config file for the cache</a></h2>
<p>The last step in our implementation is to create a new Python config script that uses our cache. We can use the outline from the <a href="https://www.gem5.org/documentation/learning_gem5/part2/memoryobject">previous chapter</a> as a starting point. The only differences are that we may want to set the parameters of this cache (for example, set the size of the cache to <code>1kB</code>) and that, instead of using the named ports (<code>data_port</code> and <code>inst_port</code>), we just use the <code>cpu_side</code> port twice. Since <code>cpu_side</code> is a <code>VectorPort</code>, it will automatically create multiple port connections.</p>
<pre><code class="language-python">import m5
from m5.objects import *

...

system.cache = SimpleCache(size='1kB')

system.cpu.icache_port = system.cache.cpu_side
system.cpu.dcache_port = system.cache.cpu_side

system.membus = SystemXBar()

system.cache.mem_side = system.membus.slave

...
</code></pre>
<p>The Python config file can be downloaded <a href="https://www.gem5.org/_pages/static/scripts/part2/simplecache/simple_cache.py">here</a>.</p>
<p>Running this script should produce the expected output from the hello binary.</p>
<pre><code class="language-bash">gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 compiled Jan 10 2017 17:38:15
gem5 started Jan 10 2017 17:40:03
gem5 executing on chinook, pid 29031
command line: build/X86/gem5.opt configs/learning_gem5/part2/simple_cache.py

Global frequency set at 1000000000000 ticks per second
warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)
0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000
warn: CoherentXBar system.membus has no snooping ports attached!
warn: ClockedObject: More than one power state change request encountered within the same simulation tick
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
Hello world!
Exiting @ tick 56082000 because target called exit()
</code></pre>
<p>Changing the size of the cache, for instance to 128 KB, should improve the performance of the system.</p>
<pre><code class="language-bash">gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 compiled Jan 10 2017 17:38:15
gem5 started Jan 10 2017 17:41:10
gem5 executing on chinook, pid 29037
command line: build/X86/gem5.opt configs/learning_gem5/part2/simple_cache.py

Global frequency set at 1000000000000 ticks per second
warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)
0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000
warn: CoherentXBar system.membus has no snooping ports attached!
warn: ClockedObject: More than one power state change request encountered within the same simulation tick
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
Hello world!
Exiting @ tick 32685000 because target called exit()
</code></pre>
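<p>Comparing the two exit ticks reported above confirms the improvement; a quick check:</p>
<pre><code class="language-python"># Exit ticks reported by the two runs above.
ticks_1kb = 56082000    # 1 kB cache
ticks_128kb = 32685000  # 128 kB cache

speedup = ticks_1kb / ticks_128kb
print(f'speedup: {speedup:.2f}x')  # about 1.72x
</code></pre>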
<h2 id="向缓存添加统计信息"><a class="header" href="#向缓存添加统计信息">Adding statistics to the cache</a></h2>
<p>Knowing the overall execution time of the system is one important metric. However, you may also want to include other statistics, such as the hit and miss rates of the cache. To do this, we need to add some statistics to the <code>SimpleCache</code> object.</p>
<p>First, we have to declare the statistics in the <code>SimpleCache</code> object. They are part of the <code>Stats</code> namespace. In this case, we will make four statistics. The number of <code>hits</code> and the number of <code>misses</code> are simple <code>Scalar</code> counts. We will also add a <code>missLatency</code>, which is a histogram of how long it takes to satisfy a miss. Finally, we will add a special statistic called a <code>Formula</code> for the <code>hitRatio</code>, which is a combination of other statistics (the number of hits and misses).</p>
<pre><code class="language-cpp">class SimpleCache : public MemObject
{
  private:
    ...

    Tick missTime; // To track the miss latency

    Stats::Scalar hits;
    Stats::Scalar misses;
    Stats::Histogram missLatency;
    Stats::Formula hitRatio;

  public:
    ...

    void regStats() override;
};
</code></pre>
<p>Next, we have to override the <code>regStats</code> function so the statistics are registered with gem5's statistics infrastructure. Here, we give each statistic a name based on the name of the &quot;parent&quot; SimObject and a description. For the histogram statistic, we also initialize it with the number of buckets we want. Finally, for the formula, we simply write it down in code.</p>
<pre><code class="language-cpp">void
SimpleCache::regStats()
{
    // If you don't do this you get errors about uninitialized stats.
    MemObject::regStats();

    hits.name(name() + &quot;.hits&quot;)
        .desc(&quot;Number of hits&quot;)
        ;

    misses.name(name() + &quot;.misses&quot;)
        .desc(&quot;Number of misses&quot;)
        ;

    missLatency.name(name() + &quot;.missLatency&quot;)
        .desc(&quot;Ticks for misses to the cache&quot;)
        .init(16) // number of buckets
        ;

    hitRatio.name(name() + &quot;.hitRatio&quot;)
        .desc(&quot;The ratio of hits to the total accesses to the cache&quot;)
        ;

    hitRatio = hits / (hits + misses);

}
</code></pre>
<p>Finally, we need to update the statistics in our code. In the <code>accessTiming</code> function, we can increment <code>hits</code> and <code>misses</code> on a hit and a miss, respectively. Additionally, on a miss, we save the current time so we can measure the latency.</p>
<pre><code class="language-cpp">void
SimpleCache::accessTiming(PacketPtr pkt)
{
    bool hit = accessFunctional(pkt);
    if (hit) {
        hits++; // update stats
        pkt-&gt;makeResponse();
        sendResponse(pkt);
    } else {
        misses++; // update stats
        missTime = curTick();
        ...
</code></pre>
<p>Then, when we get a response, we need to add the measured latency to our histogram. For this, we use the <code>sample</code> function, which adds a single point to the histogram. The histogram automatically resizes its buckets to fit the data it receives.</p>
<pre><code class="language-cpp">bool
SimpleCache::handleResponse(PacketPtr pkt)
{
    insert(pkt);

    missLatency.sample(curTick() - missTime);
    ...
</code></pre>
<p>The complete code for the <code>SimpleCache</code> header file can be downloaded <a href="https://www.gem5.org/_pages/static/scripts/part2/simplecache/simple_cache.hh">here</a>, and the complete code for the <code>SimpleCache</code> implementation can be downloaded <a href="https://www.gem5.org/_pages/static/scripts/part2/simplecache/simple_cache.cc">here</a>.</p>
<p>Now, if we run the config file above, we can check the statistics in the <code>stats.txt</code> file. For the 1 kB case, we get the following statistics: 91% of the accesses hit in the cache, and the average miss latency is 53334 ticks (about 53 ns).</p>
<pre><code class="language-bash">system.cache.hits                                8431                       # Number of hits
system.cache.misses                               877                       # Number of misses
system.cache.missLatency::samples                 877                       # Ticks for misses to the cache
system.cache.missLatency::mean           53334.093501                       # Ticks for misses to the cache
system.cache.missLatency::gmean          44506.409356                       # Ticks for misses to the cache
system.cache.missLatency::stdev          36749.446469                       # Ticks for misses to the cache
system.cache.missLatency::0-32767                 305     34.78%     34.78% # Ticks for misses to the cache
system.cache.missLatency::32768-65535             365     41.62%     76.40% # Ticks for misses to the cache
system.cache.missLatency::65536-98303             164     18.70%     95.10% # Ticks for misses to the cache
system.cache.missLatency::98304-131071             12      1.37%     96.47% # Ticks for misses to the cache
system.cache.missLatency::131072-163839            17      1.94%     98.40% # Ticks for misses to the cache
system.cache.missLatency::163840-196607             7      0.80%     99.20% # Ticks for misses to the cache
system.cache.missLatency::196608-229375             0      0.00%     99.20% # Ticks for misses to the cache
system.cache.missLatency::229376-262143             0      0.00%     99.20% # Ticks for misses to the cache
system.cache.missLatency::262144-294911             2      0.23%     99.43% # Ticks for misses to the cache
system.cache.missLatency::294912-327679             4      0.46%     99.89% # Ticks for misses to the cache
system.cache.missLatency::327680-360447             1      0.11%    100.00% # Ticks for misses to the cache
system.cache.missLatency::360448-393215             0      0.00%    100.00% # Ticks for misses to the cache
system.cache.missLatency::393216-425983             0      0.00%    100.00% # Ticks for misses to the cache
system.cache.missLatency::425984-458751             0      0.00%    100.00% # Ticks for misses to the cache
system.cache.missLatency::458752-491519             0      0.00%    100.00% # Ticks for misses to the cache
system.cache.missLatency::491520-524287             0      0.00%    100.00% # Ticks for misses to the cache
system.cache.missLatency::total                   877                       # Ticks for misses to the cache
system.cache.hitRatio                        0.905780                       # The ratio of hits to the total access
</code></pre>
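<p>We can sanity-check the <code>hitRatio</code> formula against the <code>hits</code> and <code>misses</code> counters above:</p>
<pre><code class="language-python"># Counters taken from the 1 kB stats dump above.
hits, misses = 8431, 877

hit_ratio = hits / (hits + misses)
print(round(hit_ratio, 6))  # matches system.cache.hitRatio (0.905780)
</code></pre>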
<p>When using a 128 KB cache, we get a slightly higher hit ratio. It seems like our cache is working as expected!</p>
<pre><code class="language-bash">system.cache.hits                                8944                       # Number of hits
system.cache.misses                               364                       # Number of misses
system.cache.missLatency::samples                 364                       # Ticks for misses to the cache
system.cache.missLatency::mean           64222.527473                       # Ticks for misses to the cache
system.cache.missLatency::gmean          61837.584812                       # Ticks for misses to the cache
system.cache.missLatency::stdev          27232.443748                       # Ticks for misses to the cache
system.cache.missLatency::0-32767                   0      0.00%      0.00% # Ticks for misses to the cache
system.cache.missLatency::32768-65535             254     69.78%     69.78% # Ticks for misses to the cache
system.cache.missLatency::65536-98303             106     29.12%     98.90% # Ticks for misses to the cache
system.cache.missLatency::98304-131071              0      0.00%     98.90% # Ticks for misses to the cache
system.cache.missLatency::131072-163839             0      0.00%     98.90% # Ticks for misses to the cache
system.cache.missLatency::163840-196607             0      0.00%     98.90% # Ticks for misses to the cache
system.cache.missLatency::196608-229375             0      0.00%     98.90% # Ticks for misses to the cache
system.cache.missLatency::229376-262143             0      0.00%     98.90% # Ticks for misses to the cache
system.cache.missLatency::262144-294911             2      0.55%     99.45% # Ticks for misses to the cache
system.cache.missLatency::294912-327679             1      0.27%     99.73% # Ticks for misses to the cache
system.cache.missLatency::327680-360447             1      0.27%    100.00% # Ticks for misses to the cache
system.cache.missLatency::360448-393215             0      0.00%    100.00% # Ticks for misses to the cache
system.cache.missLatency::393216-425983             0      0.00%    100.00% # Ticks for misses to the cache
system.cache.missLatency::425984-458751             0      0.00%    100.00% # Ticks for misses to the cache
system.cache.missLatency::458752-491519             0      0.00%    100.00% # Ticks for misses to the cache
system.cache.missLatency::491520-524287             0      0.00%    100.00% # Ticks for misses to the cache
system.cache.missLatency::total                   364                       # Ticks for misses to the cache
system.cache.hitRatio                        0.960894                       # The ratio of hits to the total access
</code></pre>
<div style="break-before: page; page-break-before: always;"></div><h1 id="arm-电源建模"><a class="header" href="#arm-电源建模">ARM Power Modelling</a></h1>
<p>It is possible to model and monitor the energy and power usage of gem5 simulations. This is done using the various statistics that gem5 already records, through <code>MathExprPowerModel</code>: a way of modelling power usage with mathematical equations. This chapter of the tutorial details the components required for power modelling and explains how to add them to an existing ARM simulation.</p>
<p>This chapter draws on the <code>fs_power.py</code> configuration script provided in the <code>configs/example/arm</code> directory, and also gives instructions on how this or other scripts can be extended.</p>
<p>Please note that power models can only be applied when using the more detailed &quot;timing&quot; CPUs.</p>
<p>An overview of how power modelling is built into gem5, and which other parts of the simulator it interacts with, can be found in <a href="https://youtu.be/3gWyUWHxVj4">Sascha Bischoff's presentation</a> at the 2017 ARM Research Summit.</p>
<h2 id="动态电源状态"><a class="header" href="#动态电源状态">Dynamic Power States</a></h2>
<p>A power model consists of two functions describing how to compute the power consumed in different power states. The power states are (from <code>src/sim/PowerState.py</code>):</p>
<ul>
<li><code>UNDEFINED</code>: Invalid state, no power-state-derived information is available. This is the default state.</li>
<li><code>ON</code>: The logic block is actively running and consumes dynamic and leakage energy depending on the amount of processing required.</li>
<li><code>CLK_GATED</code>: The clock circuitry within the block is gated to save dynamic energy; the power supply to the block is still on, and the block consumes leakage energy.</li>
<li><code>SRAM_RETENTION</code>: The SRAMs within the logic block are pulled into a retention state to further reduce leakage energy.</li>
<li><code>OFF</code>: The logic block is power gated and does not consume any energy.</li>
</ul>
<p>Each state other than <code>UNDEFINED</code> is assigned a power model using the <code>pm</code> field of the <code>PowerModel</code> class. It is a list of four power models, one per state, in the following order:</p>
<ol>
<li><code>ON</code></li>
<li><code>CLK_GATED</code></li>
<li><code>SRAM_RETENTION</code></li>
<li><code>OFF</code></li>
</ol>
<p>Note that although there are four different entries, they do not necessarily have to be distinct power models. The provided <code>fs_power.py</code> file uses one power model for the <code>ON</code> state and then the same power model for all of the remaining states.</p>
<h2 id="电源使用类型"><a class="header" href="#电源使用类型">Power Usage Types</a></h2>
<p>The gem5 simulator models two types of power usage:</p>
<ul>
<li><strong>static</strong>: power used by the simulated system regardless of activity.</li>
<li><strong>dynamic</strong>: power used by the system due to various kinds of activity.</li>
</ul>
<p>A power model must contain equations for modelling both of these, although an equation can be as simple as <code>st = &quot;0&quot;</code>, for example when static power is not needed or not relevant in that power model.</p>
<h2 id="mathexprpowermodels"><a class="header" href="#mathexprpowermodels">MathExprPowerModels</a></h2>
<p>The power models provided in <code>fs_power.py</code> extend the <code>MathExprPowerModel</code> class. <code>MathExprPowerModels</code> are specified as strings containing mathematical expressions for calculating the power used by the system. They typically contain a mix of statistics and automatic variables, such as the temperature:</p>
<pre><code class="language-python">class CpuPowerOn(MathExprPowerModel):
    def __init__(self, cpu_path, **kwargs):
        super(CpuPowerOn, self).__init__(**kwargs)
        # 2A per IPC, 3pA per cache miss
        # and then convert to Watt
        self.dyn = &quot;voltage * (2 * {}.ipc + 3 * 0.000000001 * &quot; \
                   &quot;{}.dcache.overall_misses / sim_seconds)&quot;.format(cpu_path,
                                                                    cpu_path)
        self.st = &quot;4 * temp&quot;
</code></pre>
<p>(The power model above is taken from the provided <code>fs_power.py</code> file.)</p>
<p>We can see that the automatic variables (<code>voltage</code> and <code>temp</code>) do not require a path, while component-specific statistics (the instructions per cycle, <code>ipc</code>, of the CPU) do. Further down the file, in the <code>main</code> function, we can see that the CPU objects have a <code>path()</code> function, which returns the component's &quot;path&quot; in the system, e.g. <code>system.bigCluster.cpus0</code>. The <code>path</code> function is provided by <code>SimObject</code>, so it can be used by any object in the system that extends it; for example, the L2 cache object uses it a few lines below the CPU objects.</p>
<p>(Note: <code>dcache.overall_misses</code> is divided by <code>sim_seconds</code> to convert it to Watts. This is a <em>power</em> model, i.e. energy over time, not an energy model. It is best to be careful with these terms: they are often used interchangeably, yet they refer to different, specific things in power and energy simulation/modelling.)</p>
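<p>The note above distinguishes energy from power. A small numeric sketch of the division by <code>sim_seconds</code>; all values here are made up for illustration:</p>
<pre><code class="language-python"># All numbers below are hypothetical, purely to show the unit conversion.
energy_per_miss_j = 3e-9    # assume each dcache miss costs 3 nJ of energy
overall_misses = 2_000_000  # total misses accumulated by the stat
sim_seconds = 0.5           # simulated time elapsed

energy_j = overall_misses * energy_per_miss_j  # Joules: an *energy*
power_w = energy_j / sim_seconds               # Watts: a *power*
print(power_w)
</code></pre>
<p>The same division appears in the <code>dyn</code> expression above: a per-event energy cost accumulated over the run only becomes a power once it is divided by the simulated time.</p>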
<h2 id="扩展现有的模拟"><a class="header" href="#扩展现有的模拟">Extending an Existing Simulation</a></h2>
<p><code>fs_power.py</code> extends the existing <code>fs_bigLITTLE.py</code> script by importing it and modifying values. As part of this, the script uses several loops that walk over the descendants of SimObjects to apply power models. To extend an existing simulation with power-model support, it is therefore helpful to define a helper function that does this:</p>
<pre><code class="language-python">def _apply_pm(simobj, power_model, so_class=None):
    for desc in simobj.descendants():
        if so_class is not None and not isinstance(desc, so_class):
            continue

        desc.power_state.default_state = &quot;ON&quot;
        desc.power_model = power_model(desc.path())
</code></pre>
<p>The function above takes a SimObject, a power model, and an optional class; descendants of the SimObject must be instances of <code>so_class</code> for the PM to be applied to them. If no class is specified, the PM is applied to all descendants.</p>
<p>Whether or not you decide to use the helper function, you now need to define some power models. This can be done by following the pattern in <code>fs_power.py</code>:</p>
<ol>
<li>Define a class for each power state you are interested in. These classes should extend <code>MathExprPowerModel</code> and contain <code>dyn</code> and <code>st</code> fields. Each of these fields should contain a string describing how to calculate the respective power type in that state. Their constructors should take the path (<code>cpu_path</code>) used via <code>format()</code> in the strings describing the power equations, as well as a number of kwargs to pass to the superclass constructor.</li>
<li>Define a class to hold all of the power models defined in the previous step (<code>CpuPowerModel</code>). This class should extend <code>PowerModel</code> and contain a field <code>pm</code>, a list of four elements: <code>pm[0]</code> should be an instance of the power model for the <code>ON</code> power state, <code>pm[1]</code> for the <code>CLK_GATED</code> power state, and so on. The constructor of this class should take the path to pass on to the individual power models, as well as a number of kwargs to pass to the superclass constructor.</li>
<li>With the helper function and the classes above defined, you can extend the <code>build</code> function to take them into account, optionally adding a command-line flag in the <code>addOptions</code> function if you want to be able to toggle the use of the models.</li>
</ol>
<blockquote>
<p><strong>Example implementation:</strong></p>
<pre><code class="language-python">class CpuPowerOn(MathExprPowerModel):
    def __init__(self, cpu_path, **kwargs):
        super(CpuPowerOn, self).__init__(**kwargs)
        self.dyn = &quot;voltage * 2 * {}.ipc&quot;.format(cpu_path)
        self.st = &quot;4 * temp&quot;


class CpuPowerClkGated(MathExprPowerModel):
    def __init__(self, cpu_path, **kwargs):
        super(CpuPowerClkGated, self).__init__(**kwargs)
        self.dyn = &quot;voltage / sim_seconds&quot;
        self.st = &quot;4 * temp&quot;


class CpuPowerOff(MathExprPowerModel):
    dyn = &quot;0&quot;
    st = &quot;0&quot;


class CpuPowerModel(PowerModel):
    def __init__(self, cpu_path, **kwargs):
        super(CpuPowerModel, self).__init__(**kwargs)
        self.pm = [
            CpuPowerOn(cpu_path),       # ON
            CpuPowerClkGated(cpu_path), # CLK_GATED
            CpuPowerOff(),              # SRAM_RETENTION
            CpuPowerOff(),              # OFF
        ]

[...]

def addOptions(parser):
    [...]
    parser.add_argument(&quot;--power-models&quot;, action=&quot;store_true&quot;,
                        help=&quot;Add power models to the simulated system. &quot;
                             &quot;Requires using the 'timing' CPU.&quot;)
    return parser


def build(options):
    root = Root(full_system=True)
    [...]
    if options.power_models:
        if options.cpu_type != &quot;timing&quot;:
            m5.fatal(&quot;The power models require the 'timing' CPUs.&quot;)

        _apply_pm(root.system.bigCluster.cpus, CpuPowerModel,
                  so_class=m5.objects.BaseCpu)
        _apply_pm(root.system.littleCluster.cpus, CpuPowerModel)

    return root

[...]
</code></pre>
</blockquote>
<h2 id="统计名称"><a class="header" href="#统计名称">Stat Names</a></h2>
<p>Stat names are generally the same as those found in the <code>stats.txt</code> file generated in the <code>m5out</code> directory after a simulation. There are, however, some exceptions:</p>
<ul>
<li>The CPU clock is called <code>clk_domain.clock</code> in <code>stats.txt</code>, but is accessed as <code>clock_period</code> (not <code>clock</code>) in power models.</li>
</ul>
<h2 id="统计转储频率"><a class="header" href="#统计转储频率">Stat Dump Frequency</a></h2>
<p>By default, gem5 dumps the simulation statistics to the <code>stats.txt</code> file every simulated second. This can be controlled through the <code>m5.stats.periodicStatDump</code> function, which takes the frequency at which stats should be dumped, expressed in simulated ticks rather than seconds. Fortunately, <code>m5.ticks</code> provides a <code>fromSeconds</code> function for convenient conversion.</p>
<p>Below is an example of how the stat dump frequency affects the resolution of results, taken from slide 16 of <a href="https://youtu.be/3gWyUWHxVj4">Sascha Bischoff's presentation</a>:</p>
<p><img src="part2/part2_7_arm_power_modelling.assets/empowering_the_masses_slide16.png" alt="A picture comparing a less detailed power graph with a more detailed one; a 1-second sampling interval versus a 1-millisecond sampling interval." /></p>
<p>The frequency at which statistics are dumped directly affects the resolution of the graphs that can be produced from the <code>stats.txt</code> file. However, it also affects the size of the output file. Dumping stats every simulated millisecond instead of every simulated second increases the file size by a factor of several hundred, so it makes sense to want to control the stat dump frequency.</p>
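<p>With gem5's default tick rate of 10^12 ticks per second (as seen in the &quot;Global frequency&quot; line of the simulator output earlier), the conversion that <code>m5.ticks.fromSeconds</code> performs amounts to the following sketch; the helper here is an illustrative stand-in, not gem5's implementation:</p>
<pre><code class="language-python">TICKS_PER_SECOND = 10**12  # gem5's default global frequency

def from_seconds(seconds):
    # Rough Python equivalent of m5.ticks.fromSeconds at the default rate.
    return round(seconds * TICKS_PER_SECOND)

print(from_seconds(1.0))     # the default: dump once per simulated second
print(from_seconds(1.0e-3))  # dump once per simulated millisecond
</code></pre>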
<p>Using the provided <code>fs_power.py</code> script, frequency control can be implemented as follows:</p>
<pre><code class="language-python">[...]

def addOptions(parser):
    [...]
    parser.add_argument(&quot;--stat-freq&quot;, type=float, default=1.0,
                        help=&quot;Frequency (in seconds) to dump stats to the &quot;
                             &quot;'stats.txt' file. Supports scientific notation, &quot;
                             &quot;e.g. '1.0E-3' for milliseconds.&quot;)
    return parser

[...]

def main():
    [...]
    m5.stats.periodicStatDump(m5.ticks.fromSeconds(options.stat_freq))
    bL.run()

[...]
</code></pre>
<p>The stat dump frequency can then be specified when invoking the simulation via</p>
<pre><code class="language-bash">--stat-freq &lt;val&gt;
</code></pre>
<h2 id="常见问题"><a class="header" href="#常见问题">Common Problems</a></h2>
<ul>
<li>gem5 crashes when using the provided <code>fs_power.py</code>, with the message <code>fatal: statistic '' (160) was not properly initialized by a regStats() function</code></li>
<li>gem5 crashes when using the provided <code>fs_power.py</code>, with the message <code>fatal: Failed to evaluate power expressions: [...]</code></li>
</ul>
<p>This is due to a recent refactoring of gem5's statistics framework. Getting the latest version of the gem5 source and rebuilding should resolve the issue. If that is not feasible, the following two sets of patches are required:</p>
<ol>
<li>https://gem5-review.googlesource.com/c/public/gem5/+/26643</li>
<li>https://gem5-review.googlesource.com/c/public/gem5/+/26785</li>
</ol>
<p>They can be checked out and applied by following the download instructions at their respective links.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="arm-dvfs-建模"><a class="header" href="#arm-dvfs-建模">ARM DVFS Modelling</a></h1>
<p>Like most modern CPUs, ARM CPUs support DVFS (dynamic voltage and frequency scaling). This can be modelled in gem5, for example to monitor the resulting power usage. DVFS modelling is done using two components of clocked objects: voltage domains and clock domains. This chapter details the different components and shows different ways of adding them to an existing simulation.</p>
<h2 id="电压域vd"><a class="header" href="#电压域vd">Voltage Domains (VD)</a></h2>
<p>Voltage domains determine the voltage values available to the CPU. If no VD is specified when running a full-system simulation in gem5, a default value of 1.0 Volt is used. This avoids forcing users to think about voltage when they are not interested in simulating it.</p>
<p>Voltage domains can be constructed from a single value or from a list of values, passed to the <code>VoltageDomain</code> constructor using the <code>voltage</code> kwarg. If a single value is specified together with multiple frequencies, that voltage is used for all frequencies in the clock domain. If a list of voltage values is specified, the number of entries must match the number of entries in the corresponding clock domain, and the entries must be sorted in <em>descending</em> order. As with real hardware, voltage domains apply to entire processor sockets. This means that if you want different VDs for different processors (for example, for a big.LITTLE setup), you need to make sure the big and LITTLE clusters are on different sockets (check the <code>socket_id</code> values associated with the clusters).</p>
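<p>These pairing rules can be expressed as a small validation helper; a sketch with a hypothetical function name, not part of gem5:</p>
<pre><code class="language-python">def check_dvfs_lists(voltages, frequencies):
    # A single voltage may serve every frequency; otherwise the lists
    # must pair up one-to-one and both must be strictly descending.
    if len(voltages) == 1:
        return True
    if len(voltages) != len(frequencies):
        return False
    def descending(xs):
        return xs == sorted(xs, reverse=True) and len(set(xs)) == len(xs)
    return descending(voltages) and descending(frequencies)

print(check_dvfs_lists([1.0], [1.8e9, 1.0e9, 0.6e9]))       # one voltage for all
print(check_dvfs_lists([1.0, 0.75, 0.51], [1.8e9, 1.0e9]))  # length mismatch
</code></pre>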
<p>There are two ways of adding VDs to existing CPUs/simulations, one more flexible and one more direct. The first adds command-line flags to the provided <code>configs/example/arm/fs_bigLITTLE.py</code> file, while the second adds custom classes.</p>
<ol>
<li>
<p>The most flexible way of adding voltage domains to a simulation is through command-line flags. To add a command-line flag, find the <code>addOptions</code> function in the file and add the flag there, optionally with some help text.</p>
<p>An example that supports both single and multiple voltages:</p>
<pre><code class="language-python">def addOptions(parser):
    [...]
    parser.add_argument(&quot;--big-cpu-voltage&quot;, nargs=&quot;+&quot;, default=&quot;1.0V&quot;,
                        help=&quot;Big CPU voltage(s).&quot;)
    return parser
</code></pre>
<p><code>nargs=&quot;+&quot;</code> ensures that at least one argument is required.</p>
<p>Voltage domain values can then be specified as</p>
<pre><code class="language-bash">--big-cpu-voltage &lt;val1&gt;V [&lt;val2&gt;V [&lt;val3&gt;V [...]]]
</code></pre>
<p>These values can be accessed in the <code>build</code> function through <code>options.big_cpu_voltage</code>. Example usage in <code>build</code>:</p>
<pre><code class="language-python">def build(options):
    [...]
    # big cluster
    if options.big_cpus &gt; 0:
        system.bigCluster = big_model(system, options.big_cpus,
                                      options.big_cpu_clock,
                                      options.big_cpu_voltage)
    [...]
</code></pre>
<p>A similar flag, and similar additions to the <code>build</code> function, can be added to support specifying voltage values for the LITTLE CPUs. This approach makes it very easy to specify and modify voltages. Its only downside is that multiple command-line arguments, some of them lists, can clutter the command used to invoke the simulator.</p>
</li>
<li>
<p>The less flexible way of specifying voltage domains is to create subclasses of <code>CpuCluster</code>. Like the existing <code>BigCluster</code> and <code>LittleCluster</code> subclasses, these extend the <code>CpuCluster</code> class. In the subclass constructor, in addition to specifying the CPU type, we define a list of values for the voltage domain and pass it to the call to the <code>super</code> constructor using the <code>cpu_voltage</code> kwarg. Here is an example that adds voltages to the <code>BigCluster</code>:</p>
<pre><code class="language-python">class VDBigCluster(devices.CpuCluster):
    def __init__(self, system, num_cpus, cpu_clock=None, cpu_voltage=None):
        # use the same CPU as the stock BigCluster
        abstract_cpu = ObjectList.cpu_list.get(&quot;O3_ARM_v7a_3&quot;)
        # voltage value(s)
        my_voltages = [ '1.0V', '0.75V', '0.51V']

        super(VDBigCluster, self).__init__(
            cpu_voltage=my_voltages,
            system=system,
            num_cpus=num_cpus,
            cpu_type=abstract_cpu,
            l1i_type=devices.L1I,
            l1d_type=devices.L1D,
            wcache_type=devices.WalkCache,
            l2_type=devices.L2
        )
</code></pre>
<p>Analogously, voltage parameters can be added to the <code>LittleCluster</code> by defining a <code>VDLittleCluster</code> class.
With the subclasses defined, we also need to add an entry to the <code>cpu_types</code> dictionary, with a string name as the key and a pair of classes as the value, e.g.:</p>
<pre><code class="language-python">cpu_types = {
    [...]
    &quot;vd-timing&quot; : (VDBigCluster, VDLittleCluster)
}
</code></pre>
<p>CPUs with VDs can then be used by passing</p>
<pre><code class="language-bash">--cpu-type vd-timing
</code></pre>
<p>to the command invoking the simulation.
Since any modification of the voltage values has to be done by finding the right subclass and modifying its code, or by adding more subclasses and <code>cpu_types</code> entries, this approach is much less flexible than the flag-based one.</p>
</li>
</ol>
<h2 id="时钟域cd"><a class="header" href="#时钟域cd">Clock Domains (CD)</a></h2>
<p>Voltage domains are used in combination with clock domains. As mentioned before, all values in a clock domain use the default voltage of 1.0 V if no custom voltage values are specified.</p>
<p>In contrast to voltage domains, there are three types of clock domains (from <code>src/sim/clock_domain.hh</code>):</p>
<ul>
<li><code>ClockDomain</code> – provides the clock to a group of clocked objects bundled under the same clock domain. CDs are in turn grouped into voltage domains. CDs provide support for a hierarchical structure with source (Src) and derived (Derived) clock domains.</li>
<li><code>SrcClockDomain</code> – describes a CD connected to a tunable clock source. It maintains the clock period and provides methods for setting/getting the clock, as well as the configuration parameters for the CDs that the handler is going to manage. This includes frequency values at various performance levels, a domain ID, and the current performance level. Note that a performance level as requested by the software corresponds to one of the frequencies the CD can run at.</li>
<li><code>DerivedClockDomain</code> – describes a CD connected to a parent CD, which can be either a <code>SrcClockDomain</code> or a <code>DerivedClockDomain</code>. It maintains the clock divider and provides methods for getting the clock.</li>
</ul>
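<p>The relationship between source and derived domains can be sketched with plain Python classes; the names and fields below are illustrative, not gem5's actual API:</p>
<pre><code class="language-python">class SrcClock:
    # Keeps a list of selectable periods (fastest first), similar to
    # SrcClockDomain's performance levels.
    def __init__(self, periods_ps):
        self.periods_ps = periods_ps
        self.perf_level = 0  # index into periods_ps

    def period(self):
        return self.periods_ps[self.perf_level]


class DerivedClock:
    # Follows a parent domain through a fixed divider, similar to
    # DerivedClockDomain.
    def __init__(self, parent, divider):
        self.parent, self.divider = parent, divider

    def period(self):
        return self.parent.period() * self.divider


src = SrcClock([500, 1000, 1500])  # 2 GHz, 1 GHz, 667 MHz in picoseconds
half = DerivedClock(src, 2)        # runs at half the parent frequency
src.perf_level = 1                 # software requests a lower performance level
print(src.period(), half.period())
</code></pre>
<p>Changing the source domain's performance level automatically changes the period of every derived domain hanging off it, which is the behaviour DVFS relies on.</p>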
<h2 id="向现有仿真添加时钟域"><a class="header" href="#向现有仿真添加时钟域">Adding Clock Domains to an Existing Simulation</a></h2>
<p>This example uses the same files as the VD examples, i.e. <code>configs/example/arm/fs_bigLITTLE.py</code> and <code>configs/example/arm/devices.py</code>.</p>
<p>Like VDs, CDs can be a single value or a list of values. If a list of clock speeds is given, the same rules apply as for the list of voltages supplied to the VD: the number of values in the CD has to match the number of values in the VD, and the clock speeds have to be given in <em>descending</em> order. The provided files support specifying the clock as a single value (through the <code>--{big,little}-cpu-clock</code> flags), but not as a list of values. Extending/modifying the behaviour of the provided flags is the easiest and most flexible way of adding multi-value CD support, but it can also be done by adding subclasses.</p>
<ol>
<li>
<p>To add multi-value support to the existing <code>--{big,little}-cpu-clock</code> flags, find the <code>addOptions()</code> function in <code>configs/example/arm/fs_bigLITTLE.py</code>. Among the numerous <code>parser.add_argument</code> calls, find the ones that add the CPU clock flags and replace <code>type=str</code> with <code>nargs=&quot;+&quot;</code>:</p>
<pre><code class="language-python">def addOptions(parser):
    [...]
    parser.add_argument(&quot;--big-cpu-clock&quot;, nargs=&quot;+&quot;, default=&quot;2GHz&quot;,
                        help=&quot;Big CPU clock frequency.&quot;)
    parser.add_argument(&quot;--little-cpu-clock&quot;, nargs=&quot;+&quot;, default=&quot;1GHz&quot;,
                        help=&quot;Little CPU clock frequency.&quot;)
    [...]
</code></pre>
<p>With this, multiple frequencies can be specified similarly to the flags used for VDs:</p>
<pre><code class="language-bash">--{big,little}-cpu-clock &lt;val1&gt;GHz [&lt;val2&gt;MHz [&lt;val3&gt;MHz [...]]]
</code></pre>
<p>Since this modifies existing flags, their values are already connected to the relevant constructors and kwargs in the <code>build</code> function, so nothing else needs to be modified.</p>
</li>
<li>
<p>The procedure for adding CDs through subclasses is very similar to the one for adding VDs as subclasses. The difference is that we specify clock frequencies and pass them via the <code>cpu_clock</code> kwarg in the call to the parent constructor.</p>
<pre><code class="language-python">class CDBigCluster(devices.CpuCluster):
    def __init__(self, system, num_cpus, cpu_clock=None, cpu_voltage=None):
        # use the same CPU as the stock BigCluster
        abstract_cpu = ObjectList.cpu_list.get(&quot;O3_ARM_v7a_3&quot;)
        # clock value(s)
        my_freqs = [ '1510MHz', '1000MHz', '667MHz']

        super(CDBigCluster, self).__init__(
            cpu_clock=my_freqs,
            system=system,
            num_cpus=num_cpus,
            cpu_type=abstract_cpu,
            l1i_type=devices.L1I,
            l1d_type=devices.L1D,
            wcache_type=devices.WalkCache,
            l2_type=devices.L2
        )
</code></pre>
<p>This can be combined with the VD example to specify both a VD and a CD for a cluster.
As with adding VDs through this approach, you need to define one class per CPU type you want to use and register their name-to-class-pair entries in the <code>cpu_types</code> dictionary. This approach has the same limitations and is much less flexible than the flag-based one.</p>
</li>
</ol>
<h2 id="确保-cd-具有有效的-domainid"><a class="header" href="#确保-cd-具有有效的-domainid">Ensuring CDs Have a Valid DomainID</a></h2>
<p>Regardless of which of the previous approaches is used, some additional modifications are needed. These concern the provided <code>configs/example/arm/devices.py</code> file.</p>
<p>In the file, find the <code>CpuCluster</code> class and locate where <code>self.clk_domain</code> is initialized as a <code>SrcClockDomain</code>. As mentioned in the description of <code>SrcClockDomain</code> above, these have domain IDs. If the ID is not set, the default ID <code>-1</code> is used. The following code ensures that the CD has its domain ID set:</p>
<pre><code class="language-python">[...]
self.clk_domain = SrcClockDomain(clock=cpu_clock,
                                 voltage_domain=self.voltage_domain,
                                 domain_id=system.numCpuClusters())
[...]
</code></pre>
<p>Since CDs apply to entire clusters, <code>system.numCpuClusters()</code> is used here: 0 stands for the first cluster, 1 for the second, and so on.</p>
<p>If the domain ID is not set, the following error will occur when trying to run a simulation with DVFS capabilities, because internal checks catch the default domain ID:</p>
<pre><code class="language-bash">fatal: fatal condition domain_id == SrcClockDomain::emptyDomainID occurred:
DVFS: Controlled domain system.bigCluster.clk_domain needs to have a properly
assigned ID.
</code></pre>
<h2 id="dvfs-处理程序"><a class="header" href="#dvfs-处理程序">DVFS Handler</a></h2>
<p>If you specify VDs and CDs and then try to run your simulation, it will most likely run, but you may notice the following warning in the output:</p>
<pre><code class="language-bash">warn: Existing EnergyCtrl, but no enabled DVFSHandler found.
</code></pre>
<p>The VDs and CDs have been added, but the system cannot interact with them to adjust values because no <code>DVFSHandler</code> has been specified. The easiest way to fix this is to add another command-line flag in the <code>configs/example/arm/fs_bigLITTLE.py</code> file.</p>
<p>As in the VD and CD examples, find the <code>addOptions</code> function and add the following code:</p>
<pre><code class="language-python">def addOptions(parser):
    [...]
    parser.add_argument(&quot;--dvfs&quot;, action=&quot;store_true&quot;,
                        help=&quot;Enable the DVFS Handler.&quot;)
    return parser
</code></pre>
<p>Then, find the <code>build</code> function and add this code:</p>
<pre><code class="language-python">def build(options):
    [...]
    if options.dvfs:
        system.dvfs_handler.domains = [system.bigCluster.clk_domain,
                                       system.littleCluster.clk_domain]
        system.dvfs_handler.enable = options.dvfs

    return root
</code></pre>
<p>You should now be able to run a DVFS-enabled simulation by passing the <code>--dvfs</code> flag when invoking the simulation, and you can specify voltage and frequency operating points for the big and little clusters as needed.</p>
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Introduction to Ruby
doc: Learning gem5
parent: part3
permalink: /documentation/learning_gem5/part3/MSIintro/
author: Jason Lowe-Power</h2>
<h1 id="introduction-to-ruby"><a class="header" href="#introduction-to-ruby">Introduction to Ruby</a></h1>
<p>Ruby comes from the <a href="http://research.cs.wisc.edu/gems/">multifacet GEMS
project</a>. Ruby provides detailed
cache memory and cache coherence models as well as a detailed network
model (Garnet).</p>
<p>Ruby is flexible. It can model many different kinds of coherence
implementations, including broadcast, directory, token, and region-based
coherence, and it is simple to extend to new coherence models.</p>
<p>Ruby is a mostly drop-in replacement for the classic memory system.
There are interfaces between the classic gem5 MemObjects and Ruby, but
for the most part, the classic caches and Ruby are not compatible.</p>
<p>In this part of the book, we will first go through creating an example
protocol from the protocol description to debugging and running the
protocol.</p>
<p>Before diving into a protocol, we will first talk about some of the
architecture of Ruby. The most important structure in Ruby is the
controller, or state machine. Controllers are implemented by writing a
SLICC state machine file.</p>
<p>SLICC is a domain-specific language (Specification Language including
Cache Coherence) for specifying coherence protocols. SLICC files end in
&quot;.sm&quot; because they are <em>state machine</em> files. Each file describes
states, transitions from a begin to an end state on some event, and
actions to take during the transition.</p>
<p>Each coherence protocol is made up of multiple SLICC state machine
files. These files are compiled with the SLICC compiler which is written
in Python and part of the gem5 source. The SLICC compiler takes the
state machine files and outputs a set of C++ files that are compiled with
all of gem5's other files. These files include the SimObject declaration
file as well as implementation files for SimObjects and other C++
objects.</p>
<p>Currently, gem5 supports compiling only a single coherence protocol at a
time. For instance, you can compile MI_example into gem5 (the default,
poorly performing protocol), or you can use MESI_Two_Level. But to
use MESI_Two_Level, you have to recompile gem5 so the SLICC compiler
can generate the correct files for the protocol. We discuss this further
in the compilation section.</p>
<p>Now, let's dive into implementing our first coherence protocol!</p>
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: MSI example cache protocol
doc: Learning gem5
parent: part3
permalink: /documentation/learning_gem5/part3/cache-intro/
author: Jason Lowe-Power</h2>
<h1 id="msi-example-cache-protocol"><a class="header" href="#msi-example-cache-protocol">MSI example cache protocol</a></h1>
<p>Before we implement a cache coherence protocol, it is important to have
a solid understanding of cache coherence. This section leans heavily on
the great book <em>A Primer on Memory Consistency and Cache Coherence</em> by
Daniel J. Sorin, Mark D. Hill, and David A. Wood which was published as
part of the Synthesis Lectures on Computer Architecture in 2011
(<a href="https://doi.org/10.2200/S00346ED1V01Y201104CAC016">DOI:10.2200/S00346ED1V01Y201104CAC016</a>).
If you are unfamiliar with cache coherence, I strongly advise reading that book before continuing.</p>
<p>In this chapter, we will be implementing an MSI protocol.
(An MSI protocol has three stable states, modified with read-write permission, shared with read-only permission, and invalid with no permissions.)
We will implement this as a three-hop directory protocol (i.e., caches can send data directly to other caches without going through the directory).
Details for the protocol can be found in Section 8.2 of <em>A Primer on Memory Consistency and Cache Coherence</em> (pages 141-149).
It will be helpful to print out Section 8.2 to reference as you are implementing the protocol.</p>
<p>You can download an excerpt of Sorin et al. that contains Section 8.2 <a href="part3//_pages/static/external/Sorin_et-al_Excerpt_8.2.pdf">here</a>.</p>
<h2 id="first-steps-to-writing-a-protocol"><a class="header" href="#first-steps-to-writing-a-protocol">First steps to writing a protocol</a></h2>
<p>Let's start by creating a new directory for our protocol at src/learning_gem5/MSI_protocol.
In this directory, like in all gem5 source directories, we need to create a file for SCons to know what to compile.
However, this time, instead of creating a <code>SConscript</code> file, we are
going to create a <code>SConsopts</code> file. (The <code>SConsopts</code> files are processed
before the <code>SConscript</code> files and we need to run the SLICC compiler
before SCons executes.)</p>
<p>We need to create a <code>SConsopts</code> file with the following:</p>
<pre><code class="language-python">Import('*')

all_protocols.extend([
'MSI',
])

protocol_dirs.append(str(Dir('.').abspath))
</code></pre>
<p>We do two things in this file. First, we register the name of our
protocol (<code>'MSI'</code>). Since we have named our protocol MSI, SCons will
assume that there is a file named <code>MSI.slicc</code> which specifies all of the
state machine files and auxiliary files. We will create that file after
writing all of our state machine files. Second, the <code>SConsopts</code> file
tells SCons to look in the current directory for files to pass to
the SLICC compiler.</p>
<p>You can download the <code>SConsopts</code> file
<a href="/_pages/static/scripts/part3/MSI_protocol/SConsopts">here</a>.</p>
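<p>For reference, the <code>MSI.slicc</code> file we create later is just a list of the protocol's state machine and auxiliary files. Assuming the file names used in this chapter, it will look roughly like the following sketch:</p>
<pre><code class="language-cpp">protocol &quot;MSI&quot;;
include &quot;RubySlicc_interface.slicc&quot;;
include &quot;MSI-msg.sm&quot;;
include &quot;MSI-cache.sm&quot;;
include &quot;MSI-dir.sm&quot;;
</code></pre>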
<h2 id="writing-a-state-machine-file"><a class="header" href="#writing-a-state-machine-file">Writing a state machine file</a></h2>
<p>The next step, and most of the effort in writing a protocol, is to
create the state machine files. State machine files generally follow the
outline:</p>
<p>Parameters
:   These are the parameters for the SimObject that will be generated
from the SLICC code.</p>
<p>Declaring required structures and functions
:   This section declares the states, events, and many other required
structures for the state machine.</p>
<p>In port code blocks
:   Contain code that looks at incoming messages from the message
buffers (<code>in_port</code>s) and determines what events to trigger.</p>
<p>Actions
:   These are simple one-effect code blocks (e.g., send a message) that
are executed when going through a transition.</p>
<p>Transitions
:   Specify actions to execute given a starting state and an event and
the final state. This is the meat of the state machine definition.</p>
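<p>Putting these five parts together, a state machine file has roughly the following shape (a schematic sketch, not compilable SLICC):</p>
<pre><code class="language-cpp">machine(MachineType:Example, &quot;Example machine&quot;)
    : // 1. Parameters and message buffer declarations
{
    // 2. Required structures: states, events, cache entries, TBEs, ...

    // 3. in_port blocks: inspect incoming messages and trigger events

    // 4. Actions: small single-effect code blocks

    // 5. Transitions: (start state, event) to actions and a final state
}
</code></pre>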
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Declaring a state machine
doc: Learning gem5
parent: part3
permalink: /documentation/learning_gem5/part3/cache-declarations/
author: Jason Lowe-Power</h2>
<h1 id="declaring-a-state-machine"><a class="header" href="#declaring-a-state-machine">Declaring a state machine</a></h1>
<p>Let's start on our first state machine file! First, we will create the
L1 cache controller for our MSI protocol.</p>
<p>Create a file called <code>MSI-cache.sm</code> and the following code declares the
state machine.</p>
<pre><code class="language-cpp">machine(MachineType:L1Cache, &quot;MSI cache&quot;)
    : &lt;parameters&gt;
{
    &lt;All state machine code&gt;
}
</code></pre>
<p>The first thing you'll notice about the state machine code is that it
looks very C++-like. The state machine file is like creating a C++
object in a header file, if you included all of the code there as well.
When in doubt, C++ syntax will <em>probably</em> work in SLICC. However, there
are many cases where C++ syntax is incorrect syntax for SLICC, as well as
cases where SLICC extends the syntax.</p>
<p>With <code>MachineType:L1Cache</code>, we are naming this state machine <code>L1Cache</code>.
SLICC will generate many different objects for us from the state machine
using that name. For instance, once this file is compiled, there will be
a new SimObject: <code>L1Cache_Controller</code> that is the cache controller. Also
included in this declaration is a description of this state machine:
&quot;MSI cache&quot;.</p>
<p>There are many cases in SLICC where you must include a description to go
along with the variable. The reason for this is that SLICC was
originally designed to just describe, not implement, coherence
protocols. Today, these extra descriptions serve two purposes. First,
they act as comments on what the author intended each variable, or
state, or event, to be used for. Second, many of them are still exported
into HTML when building the HTML tables for the SLICC protocol. Thus,
while browsing the HTML table, you can see the more detailed comments
from the author of the protocol. It is important to be clear with these
descriptions since coherence protocols can get quite complicated.</p>
<h2 id="state-machine-parameters"><a class="header" href="#state-machine-parameters">State machine parameters</a></h2>
<p>Following the <code>machine()</code> declaration is a colon, after which all of
the parameters to the state machine are declared. These parameters are
directly exported to the SimObject that is generated by the state
machine.</p>
<p>For our MSI L1 cache, we have the following parameters:</p>
<pre><code class="language-cpp">machine(MachineType:L1Cache, &quot;MSI cache&quot;)
: Sequencer *sequencer;
  CacheMemory *cacheMemory;
  bool send_evictions;

  &lt;Message buffer declarations&gt;

  {

  }
</code></pre>
<p>First, we have a <code>Sequencer</code>. This is a special class that is
implemented in Ruby to interface with the rest of gem5. The Sequencer is
a gem5 <code>MemObject</code> with a slave port so it can accept memory requests
from other objects. The sequencer accepts requests from a CPU (or other
master port) and converts the gem5 packet into a <code>RubyRequest</code>.
Finally, the <code>RubyRequest</code> is pushed onto the <code>mandatoryQueue</code> of the
state machine. We will revisit the <code>mandatoryQueue</code> in
the <a href="part3/../cache-in-ports">in-port section</a>.</p>
<p>Next, there is a <code>CacheMemory</code> object. This is what holds the cache data
(i.e., cache entries). The exact implementation, size, etc. is
configurable at runtime.</p>
<p>Finally, we can specify any other parameters we would like, similar to a
general <code>SimObject</code>. In this case, we have a boolean variable
<code>send_evictions</code>. This is used for out-of-order core models to notify
the load-store queue if an address is evicted after a load to squash a
load if it is speculative.</p>
<p>Next, also in the parameter block (i.e., before the first open bracket),
we need to declare all of the message buffers that this state machine
will use. Message buffers are the interface between the state machine
and the Ruby network. Messages are sent and received via the message
buffers. Thus, for each virtual channel in our protocol we need a
separate message buffer.</p>
<p>The MSI protocol needs three different virtual networks. Virtual
networks are needed to prevent deadlock (e.g., it is bad if a response
gets stuck behind a stalled request). In this protocol, the highest
priority is responses (virtual network 2), followed by forwarded
requests (virtual network 1), then requests have the lowest priority
(virtual network 0). See Sorin et al. for details on why these three
virtual networks are needed.</p>
<p>The following code declares all of the needed message buffers.</p>
<pre><code class="language-cpp">machine(MachineType:L1Cache, &quot;MSI cache&quot;)
: Sequencer *sequencer;
  CacheMemory *cacheMemory;
  bool send_evictions;

  MessageBuffer * requestToDir, network=&quot;To&quot;, virtual_network=&quot;0&quot;, vnet_type=&quot;request&quot;;
  MessageBuffer * responseToDirOrSibling, network=&quot;To&quot;, virtual_network=&quot;2&quot;, vnet_type=&quot;response&quot;;

  MessageBuffer * forwardFromDir, network=&quot;From&quot;, virtual_network=&quot;1&quot;, vnet_type=&quot;forward&quot;;
  MessageBuffer * responseFromDirOrSibling, network=&quot;From&quot;, virtual_network=&quot;2&quot;, vnet_type=&quot;response&quot;;

  MessageBuffer * mandatoryQueue;

{

}
</code></pre>
<p>We have five different message buffers: two &quot;To&quot;, two &quot;From&quot;, and one
special message buffer. The &quot;To&quot; message buffers are similar to slave
ports in gem5. These are the message buffers that this controller uses
to send messages to other controllers in the system. The &quot;From&quot; message
buffers are like slave ports. This controller receives messages on
&quot;From&quot; buffers from other controllers in the system.</p>
<p>We have two different &quot;To&quot; buffers, one for low priority requests, and
one for high priority responses. The priority of the networks is not
inherent. The priority is based on the order that other controllers look
at the message buffers. It is a good idea to number the virtual networks
so that higher numbers mean higher priority, but the virtual network
number is ignored by Ruby except that messages on network 2 can only go
to other message buffers on network 2 (i.e., messages can't jump from
one network to another).</p>
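<p>Conceptually, a controller that services its buffers from the highest-numbered virtual network down behaves like this small Python sketch (an illustration of the ordering convention, not Ruby's implementation):</p>
<pre><code class="language-python"># Toy model: service virtual networks from highest number (priority) down.
def next_message(buffers):
    # buffers maps a virtual-network number to a list of pending messages
    for vnet in sorted(buffers, reverse=True):
        if buffers[vnet]:
            return buffers[vnet].pop(0)
    return None
</code></pre>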
<p>Similarly, there are two different ways this cache can receive messages,
either as a forwarded request from the directory (e.g., another cache
requests a writable block and we have a readable copy) or as a response
to a request this controller made. The response is higher priority than
the forwarded requests.</p>
<p>Finally, there is a special message buffer, the <code>mandatoryQueue</code>. This
message buffer is used by the <code>Sequencer</code> to convert gem5 packets into
Ruby requests. Unlike the other message buffers, <code>mandatoryQueue</code> does
not connect to the Ruby network. Note: the name of this message buffer
is hard-coded and must be exactly &quot;mandatoryQueue&quot;.</p>
<p>As previously mentioned, this parameter block is converted into the
SimObject description file. Any parameters you put in this block will be
SimObject parameters that are accessible from the Python configuration
files. If you look at the generated file L1Cache_Controller.py, it will
look very familiar. Note: This is a generated file and you should never
modify generated files directly!</p>
<pre><code class="language-python">from m5.params import *
from m5.SimObject import SimObject
from Controller import RubyController

class L1Cache_Controller(RubyController):
    type = 'L1Cache_Controller'
    cxx_header = 'mem/protocol/L1Cache_Controller.hh'
    sequencer = Param.RubySequencer(&quot;&quot;)
    cacheMemory = Param.RubyCache(&quot;&quot;)
    send_evictions = Param.Bool(&quot;&quot;)
    requestToDir = Param.MessageBuffer(&quot;&quot;)
    responseToDirOrSibling = Param.MessageBuffer(&quot;&quot;)
    forwardFromDir = Param.MessageBuffer(&quot;&quot;)
    responseFromDirOrSibling = Param.MessageBuffer(&quot;&quot;)
    mandatoryQueue = Param.MessageBuffer(&quot;&quot;)
</code></pre>
<h2 id="state-declarations"><a class="header" href="#state-declarations">State declarations</a></h2>
<p>The next part of the state machine is the state declaration. Here, we
are going to declare all of the stable and transient states for the
state machine. We will follow the naming convention in Sorin et al. For
instance, the transient state &quot;IM_AD&quot; corresponds to moving from
Invalid to Modified waiting on acks and data. These states come directly
from the left column of Table 8.3 in Sorin et al.</p>
<pre><code class="language-cpp">state_declaration(State, desc=&quot;Cache states&quot;) {
    I,      AccessPermission:Invalid,
                desc=&quot;Not present/Invalid&quot;;

    // States moving out of I
    IS_D,   AccessPermission:Invalid,
                desc=&quot;Invalid, moving to S, waiting for data&quot;;
    IM_AD,  AccessPermission:Invalid,
                desc=&quot;Invalid, moving to M, waiting for acks and data&quot;;
    IM_A,   AccessPermission:Busy,
                desc=&quot;Invalid, moving to M, waiting for acks&quot;;

    S,      AccessPermission:Read_Only,
                desc=&quot;Shared. Read-only, other caches may have the block&quot;;

    // States moving out of S
    SM_AD,  AccessPermission:Read_Only,
                desc=&quot;Shared, moving to M, waiting for acks and 'data'&quot;;
    SM_A,   AccessPermission:Read_Only,
                desc=&quot;Shared, moving to M, waiting for acks&quot;;

    M,      AccessPermission:Read_Write,
                desc=&quot;Modified. Read &amp; write permissions. Owner of block&quot;;

    // States moving to Invalid
    MI_A,   AccessPermission:Busy,
                desc=&quot;Was modified, moving to I, waiting for put ack&quot;;
    SI_A,   AccessPermission:Busy,
                desc=&quot;Was shared, moving to I, waiting for put ack&quot;;
    II_A,   AccessPermission:Invalid,
                desc=&quot;Sent valid data before receiving put ack. Waiting for put ack.&quot;;
}
</code></pre>
<p>Each state has an associated access permission: &quot;Invalid&quot;, &quot;NotPresent&quot;,
&quot;Busy&quot;, &quot;Read_Only&quot;, or &quot;Read_Write&quot;. The access permission is used
for <em>functional</em> accesses to the cache. Functional accesses are
debug-like accesses when the simulator wants to read or update the data
immediately. One example of this is reading in files in SE mode which
are directly loaded into memory.</p>
<p>For functional accesses all caches are checked to see if they have a
corresponding block with matching address. For functional reads, <em>all</em>
of the blocks with a matching address that have read-only or read-write
permission are accessed (they should all have the same data). For
functional writes, all blocks are updated with new data if they have
busy, read-only, or read-write permission.</p>
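<p>These rules can be summarized in a short stand-alone Python model (an illustration of the permission checks, not Ruby's actual code):</p>
<pre><code class="language-python"># Toy model of functional-access permission checks (not gem5 code).
READABLE = {'Read_Only', 'Read_Write'}
WRITABLE = {'Busy', 'Read_Only', 'Read_Write'}

def functional_read(blocks, addr):
    # All readable copies should hold the same data; return any one.
    for b in blocks:
        if b['addr'] == addr and b['perm'] in READABLE:
            return b['data']
    return None

def functional_write(blocks, addr, value):
    # Update every copy that may hold the block; count the updates.
    updated = 0
    for b in blocks:
        if b['addr'] == addr and b['perm'] in WRITABLE:
            b['data'] = value
            updated += 1
    return updated
</code></pre>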
<h2 id="event-declarations"><a class="header" href="#event-declarations">Event declarations</a></h2>
<p>Next, we need to declare all of the events that are triggered by
incoming messages for this cache controller. These events come directly
from the first row in Table 8.3 in Sorin et al.</p>
<pre><code class="language-cpp">enumeration(Event, desc=&quot;Cache events&quot;) {
    // From the processor/sequencer/mandatory queue
    Load,           desc=&quot;Load from processor&quot;;
    Store,          desc=&quot;Store from processor&quot;;

    // Internal event (only triggered from processor requests)
    Replacement,    desc=&quot;Triggered when block is chosen as victim&quot;;

    // Forwarded request from other cache via dir on the forward network
    FwdGetS,        desc=&quot;Directory sent us a request to satisfy GetS. We must have the block in M to respond to this.&quot;;
    FwdGetM,        desc=&quot;Directory sent us a request to satisfy GetM. We must have the block in M to respond to this.&quot;;
    Inv,            desc=&quot;Invalidate from the directory.&quot;;
    PutAck,         desc=&quot;Response from directory after we issue a put. This must be on the fwd network to avoid deadlock.&quot;;

    // Responses from directory
    DataDirNoAcks,  desc=&quot;Data from directory (acks = 0)&quot;;
    DataDirAcks,    desc=&quot;Data from directory (acks &gt; 0)&quot;;

    // Responses from other caches
    DataOwner,      desc=&quot;Data from owner&quot;;
    InvAck,         desc=&quot;Invalidation ack from other cache after Inv&quot;;

    // Special event to simplify implementation
    LastInvAck,     desc=&quot;Triggered after the last ack is received&quot;;
}
</code></pre>
<h2 id="user-defined-structures"><a class="header" href="#user-defined-structures">User-defined structures</a></h2>
<p>Next, we need to define some structures that we will use in other places
in this controller. The first one we will define is <code>Entry</code>. This is the
structure that is stored in the <code>CacheMemory</code>. It only needs to contain
data and a state, but it may contain any other data you want. Note: The
state that this structure is storing is the <code>State</code> type that was
defined above, not a hardcoded state type.</p>
<p>You can find the abstract version of this class (<code>AbstractCacheEntry</code>)
in <code>src/mem/ruby/slicc_interface/AbstractCacheEntry.hh</code>. If you want to
use any of the member functions of <code>AbstractCacheEntry</code>, you need to
declare them here (this isn't used in this protocol).</p>
<pre><code class="language-cpp">structure(Entry, desc=&quot;Cache entry&quot;, interface=&quot;AbstractCacheEntry&quot;) {
    State CacheState,        desc=&quot;cache state&quot;;
    DataBlock DataBlk,       desc=&quot;Data in the block&quot;;
}
</code></pre>
<p>Another structure we will need is a TBE. TBE is the &quot;transaction buffer
entry&quot;. This stores information needed during transient states. This is
<em>like</em> an MSHR. It functions as an MSHR in this protocol, but the entry
is also allocated for other uses. In this protocol, it will store the
state (usually needed), data (also usually needed), and the number of
acks that this block is currently waiting for. The <code>AcksOutstanding</code> is
used for the transitions where other controllers send acks instead of
the data.</p>
<pre><code class="language-cpp">structure(TBE, desc=&quot;Entry for transient requests&quot;) {
    State TBEState,         desc=&quot;State of block&quot;;
    DataBlock DataBlk,      desc=&quot;Data for the block. Needed for MI_A&quot;;
    int AcksOutstanding, default=0, desc=&quot;Number of acks left to receive.&quot;;
}
</code></pre>
<p>Next, we need a place to store all of the TBEs. This is an externally
defined class; it is defined in C++ outside of SLICC. Therefore, we need
to declare that we are going to use it, and also declare any of the
functions that we will call on it. You can find the code for the
<code>TBETable</code> in src/mem/ruby/structures/TBETable.hh. It is templatized on
the TBE structure defined above, which gets a little confusing, as we
will see.</p>
<pre><code class="language-cpp">structure(TBETable, external=&quot;yes&quot;) {
  TBE lookup(Addr);
  void allocate(Addr);
  void deallocate(Addr);
  bool isPresent(Addr);
}
</code></pre>
<p>The <code>external=&quot;yes&quot;</code> tells SLICC to not look for the definition of this
structure. This is similar to declaring a variable <code>extern</code> in C/C++.</p>
<h2 id="other-declarations-and-definitions-required"><a class="header" href="#other-declarations-and-definitions-required">Other declarations and definitions required</a></h2>
<p>Finally, we are going to go through some boilerplate of declaring
variables, declaring functions in <code>AbstractController</code> that we will use
in this controller, and defining abstract functions in
<code>AbstractController</code>.</p>
<p>First, we need to have a variable that stores a TBE table. We have to do
this in SLICC because it is not until this time that we know the true
type of the TBE table since the TBE type was defined above. This is some
particularly tricky (or nasty) code to get SLICC to generate the right
C++ code. The difficulty is that we want to templatize <code>TBETable</code> based on
the <code>TBE</code> type above. The key is that SLICC mangles the names of all
types declared in the machine with the machine's name. For instance,
<code>TBE</code> is actually <code>L1Cache_TBE</code> in C++.</p>
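<p>The mangling scheme itself is simple: SLICC prefixes each machine-local type with the machine's name. As a tiny Python illustration of the naming convention (not SLICC's implementation):</p>
<pre><code class="language-python">def mangle(machine_name, type_name):
    # SLICC prefixes machine-local type names with the machine name.
    return machine_name + '_' + type_name
</code></pre>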
<p>We also want to pass a parameter to the constructor of the <code>TBETable</code>.
This is a parameter that is actually part of the <code>AbstractController</code>,
thus we need to use the C++ name for the variable since it doesn't have
a SLICC name.</p>
<pre><code class="language-cpp">TBETable TBEs, template=&quot;&lt;L1Cache_TBE&gt;&quot;, constructor=&quot;m_number_of_TBEs&quot;;
</code></pre>
<p>If you can understand the above code, then you are an official SLICC
ninja!</p>
<p>Next, any functions that are part of AbstractController need to be
declared, if we are going to use them in the rest of the file. In this
case, we are only going to use <code>clockEdge()</code>:</p>
<pre><code class="language-cpp">Tick clockEdge();
</code></pre>
<p>There are a few other functions we're going to use in actions. These
functions are used in actions to set and unset implicit variables
available in action code-blocks. Action code blocks will be explained in
detail in the action section. These may be
needed when a transition has many actions.</p>
<pre><code class="language-cpp">void set_cache_entry(AbstractCacheEntry a);
void unset_cache_entry();
void set_tbe(TBE b);
void unset_tbe();
</code></pre>
<p>Another useful function is <code>mapAddressToMachine</code>. This allows us to
change the address mappings for banked directories or caches at runtime
so we don't have to hardcode them in the SLICC file.</p>
<pre><code class="language-cpp">MachineID mapAddressToMachine(Addr addr, MachineType mtype);
</code></pre>
<p>Finally, you can also add any functions you may want to use in the file
and implement them here. For instance, it is convenient to access cache
blocks by address with a single function. Again, in this function there
is some SLICC trickery. We need to access &quot;by pointer&quot; since the cache
block is something that we need to be mutable later (&quot;by reference&quot;
would have been a better name). The cast is also necessary since we
defined a specific <code>Entry</code> type in the file, but the <code>CacheMemory</code> holds
the abstract type.</p>
<pre><code class="language-cpp">// Convenience function to look up the cache entry.
// Needs a pointer so it will be a reference and can be updated in actions
Entry getCacheEntry(Addr address), return_by_pointer=&quot;yes&quot; {
    return static_cast(Entry, &quot;pointer&quot;, cacheMemory.lookup(address));
}
</code></pre>
<p>The next set of boilerplate code rarely changes between different
protocols. There's a set of functions that are pure-virtual in
<code>AbstractController</code> that we must implement.</p>
<p><code>getState</code>
:   Given a TBE, cache entry, and address, return the state of the block.
This is called on the block to decide which transition to execute
when an event is triggered. Usually, you return the state in the TBE
or cache entry, whichever is valid.</p>
<p><code>setState</code>
:   Given a TBE, cache entry, and address, make sure the state is set
correctly on the block. This is called at the end of the transition
to set the final state on the block.</p>
<p><code>getAccessPermission</code>
:   Get the access permission of a block. This is used during functional
access to decide whether or not to functionally access the block. It
is similar to <code>getState</code>: get the information from the TBE if valid,
otherwise from the cache entry if valid; if neither is valid, the block
is not present.</p>
<p><code>setAccessPermission</code>
:   Like <code>getAccessPermission</code>, but sets the permission.</p>
<p><code>functionalRead</code>
:   Functionally read the data. It is possible the TBE has more
up-to-date information, so check that first. Note: testAndRead/testAndWrite
are defined in src/mem/ruby/slicc_interface/Util.hh.</p>
<p><code>functionalWrite</code>
:   Functionally write the data. Similarly, you may need to update the
data in both the TBE and the cache entry.</p>
<pre><code class="language-cpp">State getState(TBE tbe, Entry cache_entry, Addr addr) {
    // The TBE state will override the state in cache memory, if valid
    if (is_valid(tbe)) { return tbe.TBEState; }
    // Next, if the cache entry is valid, it holds the state
    else if (is_valid(cache_entry)) { return cache_entry.CacheState; }
    // If the block isn't present, then its state must be I.
    else { return State:I; }
}

void setState(TBE tbe, Entry cache_entry, Addr addr, State state) {
  if (is_valid(tbe)) { tbe.TBEState := state; }
  if (is_valid(cache_entry)) { cache_entry.CacheState := state; }
}

AccessPermission getAccessPermission(Addr addr) {
    TBE tbe := TBEs[addr];
    if(is_valid(tbe)) {
        return L1Cache_State_to_permission(tbe.TBEState);
    }

    Entry cache_entry := getCacheEntry(addr);
    if(is_valid(cache_entry)) {
        return L1Cache_State_to_permission(cache_entry.CacheState);
    }

    return AccessPermission:NotPresent;
}

void setAccessPermission(Entry cache_entry, Addr addr, State state) {
    if (is_valid(cache_entry)) {
        cache_entry.changePermission(L1Cache_State_to_permission(state));
    }
}

void functionalRead(Addr addr, Packet *pkt) {
    TBE tbe := TBEs[addr];
    if(is_valid(tbe)) {
        testAndRead(addr, tbe.DataBlk, pkt);
    } else {
        testAndRead(addr, getCacheEntry(addr).DataBlk, pkt);
    }
}

int functionalWrite(Addr addr, Packet *pkt) {
    int num_functional_writes := 0;

    TBE tbe := TBEs[addr];
    if(is_valid(tbe)) {
        num_functional_writes := num_functional_writes +
            testAndWrite(addr, tbe.DataBlk, pkt);
        return num_functional_writes;
    }

    num_functional_writes := num_functional_writes +
            testAndWrite(addr, getCacheEntry(addr).DataBlk, pkt);
    return num_functional_writes;
}
</code></pre>
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: In port code blocks
doc: Learning gem5
parent: part3
permalink: documentation/learning_gem5/part3/cache-in-ports/
author: Jason Lowe-Power</h2>
<h1 id="in-port-code-blocks"><a class="header" href="#in-port-code-blocks">In port code blocks</a></h1>
<p>After declaring all of the structures we need in the state machine file,
the first &quot;functional&quot; part of the file is the &quot;in ports&quot;. This section
specifies which <em>events</em> to <em>trigger</em> on different incoming messages.</p>
<p>However, before we get to the in ports, we must declare our out ports.</p>
<pre><code class="language-cpp">out_port(request_out, RequestMsg, requestToDir);
out_port(response_out, ResponseMsg, responseToDirOrSibling);
</code></pre>
<p>This code essentially just renames <code>requestToDir</code> and
<code>responseToDirOrSibling</code> to <code>request_out</code> and <code>response_out</code>. Later in
the file, when we want to <em>enqueue</em> messages to these message buffers we
will use the new names <code>request_out</code> and <code>response_out</code>. This also
specifies the exact implementation of the messages that we will send
across these ports. We will look at the exact definition of these types
below in the file <code>MSI-msg.sm</code>.</p>
<p>Next, we create an <em>in port code block</em>. In SLICC, there are many cases
where there are code blocks that look similar to <code>if</code> blocks, but they
encode specific information. For instance, the code inside an
<code>in_port()</code> block is put in a special generated file:
<code>L1Cache_Wakeup.cc</code>.</p>
<p>All of the <code>in_port</code> code blocks are executed in order (or based on
their priority, if one is specified). On each active cycle for the controller,
the first <code>in_port</code> code block is executed. If it triggers an event, it
is re-executed to see if there are other messages that can be consumed on
the port. If there are no messages or no events are triggered, then the
next <code>in_port</code> code block is executed.</p>
<p>There are three different kinds of <em>stalls</em> that can be generated when
executing <code>in_port</code> code blocks. First, there is a parameterized limit
on the number of transitions per cycle at each controller. If this
limit is reached (i.e., there are more messages in the message buffers
than the transitions-per-cycle limit), then all <code>in_port</code> blocks stop
processing until the next cycle. Second, there
could be a <em>resource stall</em>. This happens if some needed resource is
unavailable. For instance, if using the <code>BankedArray</code> bandwidth model,
the needed bank of the cache may currently be occupied. Third, there
could be a <em>protocol stall</em>. This is a special kind of action that
causes the state machine to stall until the next cycle.</p>
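<p>For reference, a protocol stall is generated with SLICC's <code>z_stall</code>
keyword inside an action. As a preview of the action blocks discussed later, a
minimal stall action might look like the following sketch:</p>
<pre><code class="language-cpp">action(stall, &quot;z&quot;, desc=&quot;Stall the incoming request&quot;) {
    // z_stall leaves the message at the head of its buffer and stops
    // processing all in_ports until the next cycle.
    z_stall;
}
</code></pre>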
<p>It is important to note that protocol stalls and resource stalls prevent
<strong>all</strong> <code>in_port</code> blocks from executing. For instance, if the first
<code>in_port</code> block generates a protocol stall, none of the other ports will
be executed, blocking all messages. This is why it is important to use
the correct number and ordering of virtual networks.</p>
<p>Below is the full code for the <code>in_port</code> block for the
highest-priority messages to our L1 cache controller: the responses from the
directory or other caches. Afterwards, we will break the code block down to
explain each section.</p>
<pre><code class="language-cpp">in_port(response_in, ResponseMsg, responseFromDirOrSibling) {
    if (response_in.isReady(clockEdge())) {
        peek(response_in, ResponseMsg) {
            Entry cache_entry := getCacheEntry(in_msg.addr);
            TBE tbe := TBEs[in_msg.addr];
            assert(is_valid(tbe));

            if (machineIDToMachineType(in_msg.Sender) ==
                        MachineType:Directory) {
                if (in_msg.Type != CoherenceResponseType:Data) {
                    error(&quot;Directory should only reply with data&quot;);
                }
                assert(in_msg.Acks + tbe.AcksOutstanding &gt;= 0);
                if (in_msg.Acks + tbe.AcksOutstanding == 0) {
                    trigger(Event:DataDirNoAcks, in_msg.addr, cache_entry,
                            tbe);
                } else {
                    trigger(Event:DataDirAcks, in_msg.addr, cache_entry,
                            tbe);
                }
            } else {
                if (in_msg.Type == CoherenceResponseType:Data) {
                    trigger(Event:DataOwner, in_msg.addr, cache_entry,
                            tbe);
                } else if (in_msg.Type == CoherenceResponseType:InvAck) {
                    DPRINTF(RubySlicc, &quot;Got inv ack. %d left\n&quot;,
                            tbe.AcksOutstanding);
                    if (tbe.AcksOutstanding == 1) {
                        trigger(Event:LastInvAck, in_msg.addr, cache_entry,
                                tbe);
                    } else {
                        trigger(Event:InvAck, in_msg.addr, cache_entry,
                                tbe);
                    }
                } else {
                    error(&quot;Unexpected response from other cache&quot;);
                }
            }
        }
    }
}
</code></pre>
<p>First, like the <code>out_port</code> declarations above,
<code>response_in</code> is the name we'll use later when we refer to this port,
and <code>ResponseMsg</code> is the type of message we expect on this port (since
this port processes responses to our requests). The first step in all
<code>in_port</code> code blocks is to check the message buffer to see if there
are any messages to be processed. If not, then this <code>in_port</code> code
block is skipped and the next one is executed.</p>
<pre><code class="language-cpp">in_port(response_in, ResponseMsg, responseFromDirOrSibling) {
    if (response_in.isReady(clockEdge())) {
        . . .
    }
}
</code></pre>
<p>Assuming there is a valid message in the message buffer, we next grab
that message by using the special <code>peek</code> code block. Any code inside a
<code>peek</code> block has a special variable declared and populated:
<code>in_msg</code>. This contains the message at the head of the port (of type
<code>ResponseMsg</code> in this case, as specified by the second parameter of the
<code>peek</code> call). Here, <code>response_in</code> is the port we want to peek
into.</p>
<p>Then, we need to grab the cache entry and the TBE for the incoming
address. (We will look at the other fields in the response message
below.) Above, we implemented <code>getCacheEntry</code>. It returns either the
valid matching entry for the address, or an invalid entry if there is no
matching cache block.</p>
<p>For the TBE, since this is a response to a request this cache controller
initiated, there <em>must</em> be a valid TBE in the TBE table. Hence, we see
our first debug statement, an <em>assert</em>. This is one of the ways to ease
debugging of cache coherence protocols. It is encouraged to use asserts
liberally to make debugging easier.</p>
<pre><code class="language-cpp">peek(response_in, ResponseMsg) {
    Entry cache_entry := getCacheEntry(in_msg.addr);
    TBE tbe := TBEs[in_msg.addr];
    assert(is_valid(tbe));

    . . .
}
</code></pre>
<p>Next, we need to decide which event to trigger based on the message. For
this, we first need to discuss what data the response messages carry.</p>
<p>To declare a new message type, first create a new file for all of the
message types: <code>MSI-msg.sm</code>. In this file, you can declare any
structures that will be <em>globally</em> used across all of the SLICC files
for your protocol. We will include this file in all of the state machine
definitions via the <code>MSI.slicc</code> file later. This is similar to including
global definitions in header files in C/C++.</p>
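<p>For reference, the <code>MSI.slicc</code> file is simply a list of the files to
compile together (a sketch; <code>MSI-dir.sm</code> is the directory controller we
will write later):</p>
<pre><code class="language-cpp">protocol &quot;MSI&quot;;
include &quot;RubySlicc_interfaces.slicc&quot;;
include &quot;MSI-msg.sm&quot;;
include &quot;MSI-cache.sm&quot;;
include &quot;MSI-dir.sm&quot;;
</code></pre>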
<p>In the <code>MSI-msg.sm</code> file, add the following code block:</p>
<pre><code class="language-cpp">structure(ResponseMsg, desc=&quot;Used for Dir-&gt;Cache and Fwd message responses&quot;,
          interface=&quot;Message&quot;) {
    Addr addr,                   desc=&quot;Physical address for this response&quot;;
    CoherenceResponseType Type,  desc=&quot;Type of response&quot;;
    MachineID Sender,            desc=&quot;Node who is responding to the request&quot;;
    NetDest Destination,         desc=&quot;Multicast destination mask&quot;;
    DataBlock DataBlk,           desc=&quot;data for the cache line&quot;;
    MessageSizeType MessageSize, desc=&quot;size category of the message&quot;;
    int Acks,                    desc=&quot;Number of acks required from others&quot;;

    // This must be overridden here to support functional accesses
    bool functionalRead(Packet *pkt) {
        if (Type == CoherenceResponseType:Data) {
            return testAndRead(addr, DataBlk, pkt);
        }
        return false;
    }

    bool functionalWrite(Packet *pkt) {
        // No check on message type required since the protocol should read
        // data block from only those messages that contain valid data
        return testAndWrite(addr, DataBlk, pkt);
    }
}
</code></pre>
<p>The message is just another SLICC structure similar to the structures
we've defined before. However, this time, we have a specific interface
that it is implementing: <code>Message</code>. Within this message, we can add any
members that we need for our protocol. In this case, we first have the
address. Note, a common &quot;gotcha&quot; is that you <em>cannot</em> use &quot;Addr&quot; with a
capital &quot;A&quot; for the name of the member since it is the same name as the
type!</p>
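<p>In other words, the following member declaration would collide with the
type name, which is why <code>ResponseMsg</code> spells the member with a
lowercase <code>addr</code>:</p>
<pre><code class="language-cpp">// Broken: the member name shadows the type name &quot;Addr&quot;.
// Addr Addr, desc=&quot;Physical address for this response&quot;;

// OK: use a different (lowercase) member name.
Addr addr,   desc=&quot;Physical address for this response&quot;;
</code></pre>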
<p>Next, we have the type of response. In our case, there are two types of
responses: data, and invalidation acks from other caches after they have
invalidated their copies. Thus, we need to define an <em>enumeration</em>, the
<code>CoherenceResponseType</code>, to use in this message. Add the following
code <em>before</em> the <code>ResponseMsg</code> declaration in the same file.</p>
<pre><code class="language-cpp">enumeration(CoherenceResponseType, desc=&quot;Types of response messages&quot;) {
    Data,       desc=&quot;Contains the most up-to-date data&quot;;
    InvAck,     desc=&quot;Message from another cache that they have inv. the blk&quot;;
}
</code></pre>
<p>Next, in the response message type, we have the <code>MachineID</code> of the
<em>specific machine</em> that sent the response. For instance, it might be
directory 0 or cache 12. The <code>MachineID</code> contains both the
<code>MachineType</code> (e.g., the <code>L1Cache</code> we declared in the first
<code>machine()</code> block) and the specific <em>version</em> of that machine
type. We will come back to machine version numbers when configuring the
system.</p>
<p>Next, all messages need a <em>destination</em> and a <em>size</em>. The
destination is specified as a <code>NetDest</code>, which is a bitmap of all the
<code>MachineID</code>s in the system. This allows messages to be broadcast to a
flexible set of receivers. The message also has a size. You can find the
possible message sizes in <code>src/mem/protocol/RubySlicc_Exports.sm</code>.</p>
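<p>Because <code>Destination</code> is a <code>NetDest</code>, a message can target
one machine or many. A sketch of the common operations (these are
<code>NetDest</code> methods provided by Ruby; which ones a protocol needs
depends on the protocol):</p>
<pre><code class="language-cpp">// Send to a single machine, e.g., the directory responsible for this address.
out_msg.Destination.add(mapAddressToMachine(address, MachineType:Directory));

// Or broadcast to every L1 cache, then remove ourselves from the set.
out_msg.Destination.broadcast(MachineType:L1Cache);
out_msg.Destination.remove(machineID);
</code></pre>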
<p>This message may also contain a data block and the number of acks that are
expected. Thus, we include these in the message definition as well.</p>
<p>Finally, we also have to define functional read and write functions.
These are used by Ruby to inspect in-flight messages on functional reads
and writes. Note: this functionality is currently very brittle; if there
are in-flight messages for an address that is functionally read or
written, the functional access may fail.</p>
<p>You can download the complete <code>MSI-msg.sm</code> file 
<a href="part3//_pages/static/scripts/part3/MSI_protocol/MSI-msg.sm">here</a>.</p>
<p>Now that we have defined the data in the response message, we can look
at how we choose which event to trigger in the <code>in_port</code> for
responses to the cache.</p>
<pre><code class="language-cpp">// If it's from the directory...
if (machineIDToMachineType(in_msg.Sender) ==
            MachineType:Directory) {
    if (in_msg.Type != CoherenceResponseType:Data) {
        error(&quot;Directory should only reply with data&quot;);
    }
    assert(in_msg.Acks + tbe.AcksOutstanding &gt;= 0);
    if (in_msg.Acks + tbe.AcksOutstanding == 0) {
        trigger(Event:DataDirNoAcks, in_msg.addr, cache_entry,
                tbe);
    } else {
        trigger(Event:DataDirAcks, in_msg.addr, cache_entry,
                tbe);
    }
} else {
    // This is from another cache.
    if (in_msg.Type == CoherenceResponseType:Data) {
        trigger(Event:DataOwner, in_msg.addr, cache_entry,
                tbe);
    } else if (in_msg.Type == CoherenceResponseType:InvAck) {
        DPRINTF(RubySlicc, &quot;Got inv ack. %d left\n&quot;,
                tbe.AcksOutstanding);
        if (tbe.AcksOutstanding == 1) {
            // If there is exactly one ack remaining then we
            // know it is the last ack.
            trigger(Event:LastInvAck, in_msg.addr, cache_entry,
                    tbe);
        } else {
            trigger(Event:InvAck, in_msg.addr, cache_entry,
                    tbe);
        }
    } else {
        error(&quot;Unexpected response from other cache&quot;);
    }
}
</code></pre>
<p>First, we check to see if the message comes from the directory or
another cache. If it comes from the directory, we know that it <em>must</em> be
a data response (the directory will never respond with an ack).</p>
<p>Here, we meet our second way to add debug information to protocols: the
<code>error</code> function. This function stops the simulation and prints the
string parameter, similar to <code>panic</code>.</p>
<p>Next, when we receive data from the directory, we expect that the number
of acks we are still waiting for is never less than 0. This count is the sum
of <code>in_msg.Acks</code> (the number of acks the directory has told us to
expect) and <code>tbe.AcksOutstanding</code> (which is decremented each time an
ack arrives, and so may be negative). We need to check it this way because it
is possible that we receive acks from other caches before we get the message
from the directory telling us how many acks to wait for. For example, if two
acks arrive early, <code>tbe.AcksOutstanding</code> is -2; when the directory's
data arrives with <code>Acks == 2</code>, the sum is 0 and no further acks are
needed.</p>
<p>There are two possibilities: either we have already received all of the
acks and are now getting the data (&quot;data from dir, acks==0&quot; in Table 8.3),
or we need to wait for more acks. Thus, we check this condition and trigger
one of two different events, one for each possibility.</p>
<p>When triggering transitions, you need to pass four parameters. The first
parameter is the event to trigger. These events were specified earlier
in the <code>Event</code> declaration. The next parameter is the (physical memory)
address of the cache block to operate on. Usually this is the same as
the address of the <code>in_msg</code>, but it may be different, for instance, on a
replacement the address is for the block being replaced. Next is the
cache entry and the TBE for the block. These may be invalid if there are
no valid entries for the address in the cache or there is not a valid
TBE in the TBE table.</p>
<p>When we implement actions below, we will see how these last three
parameters are used. They are passed into the actions as implicit
variables: <code>address</code>, <code>cache_entry</code>, and <code>tbe</code>.</p>
<p>If the <code>trigger</code> function is executed, then after the transition
completes the <code>in_port</code> logic is executed again, assuming there have
been fewer transitions than the maximum number of transitions per cycle. If
there are other messages in the message buffer, more transitions can be
triggered.</p>
<p>If the response is from another cache instead of the directory, then
other events are triggered, as shown in the code above. These events
come directly from Table 8.3 in Sorin et al.</p>
<p>Importantly, you should use the <code>in_port</code> logic to check all conditions.
After an event is triggered, it should only have a <em>single code path</em>.
I.e., there should be no <code>if</code> statements in any action blocks. If you
want to conditionally execute actions, you should use different states
or different events in the <code>in_port</code> logic.</p>
<p>The reason for this constraint is the way Ruby checks resources before
executing a transition. In the code generated from the <code>in_port</code>
blocks, all of the required resources are checked before the transition is
actually executed. In other words, transitions are atomic: they either
execute all of their actions or none of them. Conditional statements inside
actions prevent the SLICC compiler from correctly tracking resource usage
and can lead to strange performance artifacts, deadlocks, and other bugs.</p>
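<p>Concretely, instead of writing one action with a conditional, encode the
condition in the <code>in_port</code> and trigger a separate event for each
path, as the response <code>in_port</code> above does (a sketch; the action
name here is hypothetical):</p>
<pre><code class="language-cpp">// Bad: SLICC cannot check resources for both paths of this action.
action(sendSomething, &quot;ss&quot;, desc=&quot;Do not do this&quot;) {
    if (tbe.AcksOutstanding == 0) { /* one code path */ }
    else                          { /* another code path */ }
}

// Good: decide in the in_port and give each path its own event.
if (in_msg.Acks + tbe.AcksOutstanding == 0) {
    trigger(Event:DataDirNoAcks, in_msg.addr, cache_entry, tbe);
} else {
    trigger(Event:DataDirAcks, in_msg.addr, cache_entry, tbe);
}
</code></pre>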
<p>After specifying the <code>in_port</code> logic for the highest priority network,
the response network, we need to add the <code>in_port</code> logic for the forward
request network. However, before specifying this logic, we need to
define the <code>RequestMsg</code> type and the <code>CoherenceRequestType</code> which
contains the types of requests. These two definitions go in the
<code>MSI-msg.sm</code> file <em>not in MSI-cache.sm</em> since they are global
definitions.</p>
<p>It is possible to implement this as two different messages and request
type enumerations, one for forward and one for normal requests, but it
simplifies the code to use a single message and type.</p>
<pre><code class="language-cpp">enumeration(CoherenceRequestType, desc=&quot;Types of request messages&quot;) {
    GetS,       desc=&quot;Request from cache for a block with read permission&quot;;
    GetM,       desc=&quot;Request from cache for a block with write permission&quot;;
    PutS,       desc=&quot;Sent to directory when evicting a block in S (clean WB)&quot;;
    PutM,       desc=&quot;Sent to directory when evicting a block in M&quot;;

    // &quot;Requests&quot; from the directory to the caches on the fwd network
    Inv,        desc=&quot;Probe the cache and invalidate any matching blocks&quot;;
    PutAck,     desc=&quot;The put request has been processed.&quot;;
}
</code></pre>
<pre><code class="language-cpp">structure(RequestMsg, desc=&quot;Used for Cache-&gt;Dir and Fwd messages&quot;,  interface=&quot;Message&quot;) {
    Addr addr,                   desc=&quot;Physical address for this request&quot;;
    CoherenceRequestType Type,   desc=&quot;Type of request&quot;;
    MachineID Requestor,         desc=&quot;Node who initiated the request&quot;;
    NetDest Destination,         desc=&quot;Multicast destination mask&quot;;
    DataBlock DataBlk,           desc=&quot;data for the cache line&quot;;
    MessageSizeType MessageSize, desc=&quot;size category of the message&quot;;

    bool functionalRead(Packet *pkt) {
        // Requests should never have the only copy of the most up-to-date data
        return false;
    }

    bool functionalWrite(Packet *pkt) {
        // No check on message type required since the protocol should read
        // data block from only those messages that contain valid data
        return testAndWrite(addr, DataBlk, pkt);
    }
}
</code></pre>
<p>Now, we can specify the logic for the forward network <code>in_port</code>. This
logic is straightforward and triggers a different event for each request
type.</p>
<pre><code class="language-cpp">in_port(forward_in, RequestMsg, forwardFromDir) {
    if (forward_in.isReady(clockEdge())) {
        peek(forward_in, RequestMsg) {
            // Grab the entry and tbe if they exist.
            Entry cache_entry := getCacheEntry(in_msg.addr);
            TBE tbe := TBEs[in_msg.addr];

            if (in_msg.Type == CoherenceRequestType:GetS) {
                trigger(Event:FwdGetS, in_msg.addr, cache_entry, tbe);
            } else if (in_msg.Type == CoherenceRequestType:GetM) {
                trigger(Event:FwdGetM, in_msg.addr, cache_entry, tbe);
            } else if (in_msg.Type == CoherenceRequestType:Inv) {
                trigger(Event:Inv, in_msg.addr, cache_entry, tbe);
            } else if (in_msg.Type == CoherenceRequestType:PutAck) {
                trigger(Event:PutAck, in_msg.addr, cache_entry, tbe);
            } else {
                error(&quot;Unexpected forward message!&quot;);
            }
        }
    }
}
</code></pre>
<p>The final <code>in_port</code> is for the mandatory queue. This is the lowest
priority queue, so it must be lowest in the state machine file. The
mandatory queue has a special message type: <code>RubyRequest</code>. This type is
specified in <code>src/mem/protocol/RubySlicc_Types.sm</code>. It contains two
different addresses: the <code>LineAddress</code>, which is cache-block aligned,
and the <code>PhysicalAddress</code>, which holds the original request's address
and may not be cache-block aligned. It also has other members that may be
useful in some protocols. However, for this simple protocol we only need the
<code>LineAddress</code>.</p>
<pre><code class="language-cpp">in_port(mandatory_in, RubyRequest, mandatoryQueue) {
    if (mandatory_in.isReady(clockEdge())) {
        peek(mandatory_in, RubyRequest, block_on=&quot;LineAddress&quot;) {
            Entry cache_entry := getCacheEntry(in_msg.LineAddress);
            TBE tbe := TBEs[in_msg.LineAddress];

            if (is_invalid(cache_entry) &amp;&amp;
                    cacheMemory.cacheAvail(in_msg.LineAddress) == false ) {
                Addr addr := cacheMemory.cacheProbe(in_msg.LineAddress);
                Entry victim_entry := getCacheEntry(addr);
                TBE victim_tbe := TBEs[addr];
                trigger(Event:Replacement, addr, victim_entry, victim_tbe);
            } else {
                if (in_msg.Type == RubyRequestType:LD ||
                        in_msg.Type == RubyRequestType:IFETCH) {
                    trigger(Event:Load, in_msg.LineAddress, cache_entry,
                            tbe);
                } else if (in_msg.Type == RubyRequestType:ST) {
                    trigger(Event:Store, in_msg.LineAddress, cache_entry,
                            tbe);
                } else {
                    error(&quot;Unexpected type from processor&quot;);
                }
            }
        }
    }
}
</code></pre>
<p>There are a couple of new concepts shown in this code block. First, we
use <code>block_on=&quot;LineAddress&quot;</code> in the <code>peek</code> function. This
ensures that any other requests to the same cache line are blocked until the
current request completes.</p>
<p>Next, we check if the cache entry for this line is valid. If not, and
there are no more entries available in the set, then we need to evict
another entry. To get the victim address, we can use the <code>cacheProbe</code>
function on the <code>CacheMemory</code> object. This function uses the
parameterized replacement policy and returns the physical (line) address
of the victim.</p>
<p>Importantly, when we trigger the <code>Replacement</code> event <em>we use the
address of the victim block</em>, along with the victim's cache entry and TBE.
Thus, when we take actions in the replacement transitions, we will be acting
on the victim block, not the requesting block. Additionally, we must remember
<em>not</em> to remove (pop) the requesting message from the mandatory queue
until it has been satisfied; it stays at the head of the queue and is
processed again after the replacement completes.</p>
<p>If the cache entry is valid, or there is space available to allocate a new
entry, then we simply trigger the <code>Load</code> or <code>Store</code> event.</p>
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Action code blocks
doc: Learning gem5
parent: part3
permalink: /documentation/learning_gem5/part3/cache-actions/
author: Jason Lowe-Power</h2>
<h1 id="action-code-blocks"><a class="header" href="#action-code-blocks">Action code blocks</a></h1>
<p>The next section of the state machine file is the action blocks. The
action blocks are executed during a transition from one state to
another, and are called by the transition code blocks (which we will
discuss in the next section). Some examples are &quot;send a message to the
directory&quot; and &quot;pop the head of the buffer&quot;. Each action block should be
small and perform only a single action.</p>
<p>The first action we will implement is an action to send a GetS request
to the directory. We need to send a GetS request to the directory
whenever we want to read some data that is not in the Modified or Shared
states in our cache. As previously mentioned, there are three variables
that are automatically populated inside the action block (like the
<code>in_msg</code> in <code>peek</code> blocks). <code>address</code> is the address that was passed
into the <code>trigger</code> function, <code>cache_entry</code> is the cache entry passed
into the <code>trigger</code> function, and <code>tbe</code> is the TBE passed into the
<code>trigger</code> function.</p>
<pre><code class="language-cpp">action(sendGetS, 'gS', desc=&quot;Send GetS to the directory&quot;) {
    enqueue(request_out, RequestMsg, 1) {
        out_msg.addr := address;
        out_msg.Type := CoherenceRequestType:GetS;
        out_msg.Destination.add(mapAddressToMachine(address,
                                MachineType:Directory));
        // See mem/protocol/RubySlicc_Exports.sm for possible sizes.
        out_msg.MessageSize := MessageSizeType:Control;
        // Set that the requestor is this machine so we get the response.
        out_msg.Requestor := machineID;
    }
}
</code></pre>
<p>When specifying the action block, there are two parameters: a
description and a &quot;shorthand&quot;. These two parameters are used in the HTML
table generation. The shorthand shows up in the transition cell, so it
should be as short as possible. SLICC provides a special syntax to allow
for bold (''), superscript ('^'), and spaces ('_') in the shorthand to
help keep them short. Second, the description also shows up in the HTML
table when you click on a particular action. The description can be
longer and help explain what the action does.</p>
<p>Next, in this action we are going to send a message to the directory on
the <code>request_out</code> port as declared above the <code>in_port</code> blocks. The
<code>enqueue</code> function is similar to the <code>peek</code> function since it requires a
code block. <code>enqueue</code>, however, has the special variable <code>out_msg</code>. In
the <code>enqueue</code> block, you can modify the <code>out_msg</code> with the current data.</p>
<p>The <code>enqueue</code> block takes three parameters: the message buffer to
send the message on, the type of the message, and a latency. This latency (1
cycle in the example above and throughout this cache controller) is the
<em>cache latency</em>. This is where you specify the latency of accessing the
cache, in this case for a miss. Below we will see that specifying the
latency for a hit is similar.</p>
<p>Inside the <code>enqueue</code> block is where the message data is populated.
For the address of the request, we can use the automatically populated
<code>address</code> variable. We are sending a GetS message, so we use that
message type. Next, we need to specify the destination of the message.
For this, we use the <code>mapAddressToMachine</code> function, which takes the
address and the machine type we are sending to, and looks up the correct
<code>MachineID</code> based on the address. We call <code>Destination.add</code>
because <code>Destination</code> is a <code>NetDest</code> object, i.e., a bitmap of
all <code>MachineID</code>s.</p>
<p>Finally, we need to specify the message size (from
<code>mem/protocol/RubySlicc_Exports.sm</code>) and set ourselves as the
requestor. Setting this <code>machineID</code> as the requestor allows the
directory to respond to this cache, or to forward the request to another
cache that will respond on our behalf.</p>
<p>Similarly, we can create actions for sending the other get and put
requests. Note that get requests are requests for data, and put requests are
requests where we are downgrading or evicting our copy of the data.</p>
<pre><code class="language-cpp">action(sendGetM, &quot;gM&quot;, desc=&quot;Send GetM to the directory&quot;) {
    enqueue(request_out, RequestMsg, 1) {
        out_msg.addr := address;
        out_msg.Type := CoherenceRequestType:GetM;
        out_msg.Destination.add(mapAddressToMachine(address,
                                MachineType:Directory));
        out_msg.MessageSize := MessageSizeType:Control;
        out_msg.Requestor := machineID;
    }
}

action(sendPutS, &quot;pS&quot;, desc=&quot;Send PutS to the directory&quot;) {
    enqueue(request_out, RequestMsg, 1) {
        out_msg.addr := address;
        out_msg.Type := CoherenceRequestType:PutS;
        out_msg.Destination.add(mapAddressToMachine(address,
                                MachineType:Directory));
        out_msg.MessageSize := MessageSizeType:Control;
        out_msg.Requestor := machineID;
    }
}

action(sendPutM, &quot;pM&quot;, desc=&quot;Send putM+data to the directory&quot;) {
    enqueue(request_out, RequestMsg, 1) {
        out_msg.addr := address;
        out_msg.Type := CoherenceRequestType:PutM;
        out_msg.Destination.add(mapAddressToMachine(address,
                                MachineType:Directory));
        out_msg.DataBlk := cache_entry.DataBlk;
        out_msg.MessageSize := MessageSizeType:Data;
        out_msg.Requestor := machineID;
    }
}
</code></pre>
<p>Next, we need to specify an action to send data to another cache in the
case that we get a forwarded request from the directory for another
cache. In this case, we have to peek into the request queue to get other
data from the requesting message. This peek code block is exactly the
same as the ones in the <code>in_port</code>. When you nest an <code>enqueue</code> block in a
<code>peek</code> block both <code>in_msg</code> and <code>out_msg</code> variables are available. This
is needed so we know which other cache to send the data to.
Additionally, in this action we use the <code>cache_entry</code> variable to get
the data to send to the other cache.</p>
<pre><code class="language-cpp">action(sendCacheDataToReq, &quot;cdR&quot;, desc=&quot;Send cache data to requestor&quot;) {
    assert(is_valid(cache_entry));
    peek(forward_in, RequestMsg) {
        enqueue(response_out, ResponseMsg, 1) {
            out_msg.addr := address;
            out_msg.Type := CoherenceResponseType:Data;
            out_msg.Destination.add(in_msg.Requestor);
            out_msg.DataBlk := cache_entry.DataBlk;
            out_msg.MessageSize := MessageSizeType:Data;
            out_msg.Sender := machineID;
        }
    }
}
</code></pre>
<p>Next, we specify actions for sending data to the directory and sending
an invalidation ack to the original requestor on a forward request when
this cache does not have the data.</p>
<pre><code class="language-cpp">action(sendCacheDataToDir, &quot;cdD&quot;, desc=&quot;Send the cache data to the dir&quot;) {
    enqueue(response_out, ResponseMsg, 1) {
        out_msg.addr := address;
        out_msg.Type := CoherenceResponseType:Data;
        out_msg.Destination.add(mapAddressToMachine(address,
                                MachineType:Directory));
        out_msg.DataBlk := cache_entry.DataBlk;
        out_msg.MessageSize := MessageSizeType:Data;
        out_msg.Sender := machineID;
    }
}

action(sendInvAcktoReq, &quot;iaR&quot;, desc=&quot;Send inv-ack to requestor&quot;) {
    peek(forward_in, RequestMsg) {
        enqueue(response_out, ResponseMsg, 1) {
            out_msg.addr := address;
            out_msg.Type := CoherenceResponseType:InvAck;
            out_msg.Destination.add(in_msg.Requestor);
            out_msg.DataBlk := cache_entry.DataBlk;
            out_msg.MessageSize := MessageSizeType:Control;
            out_msg.Sender := machineID;
        }
    }
}
</code></pre>
<p>Another required action is to decrement the number of acks we are
waiting for. This is used to track the total number of acks when we get
an invalidation ack from another cache. For this action, we assume that
there is a valid TBE and modify the implicit <code>tbe</code> variable in the
action block.</p>
<p>Additionally, we have another example of making debugging easier in
protocols: <code>APPEND_TRANSITION_COMMENT</code>. This function takes a string, or
something that can easily be converted to a string (e.g., an <code>int</code>), as a
parameter. It modifies the <em>protocol trace</em> output, which we will
discuss in the <a href="part3/../MSIdebugging">debugging section</a>. Each
protocol trace line that executes this action will also print the total
number of acks this cache is still waiting on. This is useful since the
number of remaining acks is part of the cache block state.</p>
<pre><code class="language-cpp">action(decrAcks, &quot;da&quot;, desc=&quot;Decrement the number of acks&quot;) {
    assert(is_valid(tbe));
    tbe.AcksOutstanding := tbe.AcksOutstanding - 1;
    APPEND_TRANSITION_COMMENT(&quot;Acks: &quot;);
    APPEND_TRANSITION_COMMENT(tbe.AcksOutstanding);
}
</code></pre>
<p>We also need an action to store the acks when we receive a message from
the directory with an ack count. For this action, we peek into the
directory's response message to get the number of acks and store them in
the (required to be valid) TBE.</p>
<pre><code class="language-cpp">action(storeAcks, &quot;sa&quot;, desc=&quot;Store the needed acks to the TBE&quot;) {
    assert(is_valid(tbe));
    peek(response_in, ResponseMsg) {
        tbe.AcksOutstanding := in_msg.Acks + tbe.AcksOutstanding;
    }
    assert(tbe.AcksOutstanding &gt; 0);
}
</code></pre>
<p>The next set of actions are to respond to CPU requests on hits and
misses. For these actions, we need to notify the sequencer (the
interface between Ruby and the rest of gem5) of the new data. In the
case of a store, we give the sequencer a pointer to the data block and
the sequencer updates the data in-place.</p>
<pre><code class="language-cpp">action(loadHit, &quot;Lh&quot;, desc=&quot;Load hit&quot;) {
    assert(is_valid(cache_entry));
    cacheMemory.setMRU(cache_entry);
    sequencer.readCallback(address, cache_entry.DataBlk, false);
}

action(externalLoadHit, &quot;xLh&quot;, desc=&quot;External load hit (was a miss)&quot;) {
    assert(is_valid(cache_entry));
    peek(response_in, ResponseMsg) {
        cacheMemory.setMRU(cache_entry);
        // Forward the type of machine that responded to this request
        // E.g., another cache or the directory. This is used for tracking
        // statistics.
        sequencer.readCallback(address, cache_entry.DataBlk, true,
                               machineIDToMachineType(in_msg.Sender));
    }
}

action(storeHit, &quot;Sh&quot;, desc=&quot;Store hit&quot;) {
    assert(is_valid(cache_entry));
    cacheMemory.setMRU(cache_entry);
    // The same as the read callback above.
    sequencer.writeCallback(address, cache_entry.DataBlk, false);
}

action(externalStoreHit, &quot;xSh&quot;, desc=&quot;External store hit (was a miss)&quot;) {
    assert(is_valid(cache_entry));
    peek(response_in, ResponseMsg) {
        cacheMemory.setMRU(cache_entry);
        sequencer.writeCallback(address, cache_entry.DataBlk, true,
                               // Note: this could be the last ack.
                               machineIDToMachineType(in_msg.Sender));
    }
}

action(forwardEviction, &quot;e&quot;, desc=&quot;sends eviction notification to CPU&quot;) {
    if (send_evictions) {
        sequencer.evictionCallback(address);
    }
}
</code></pre>
<p>In each of these actions, it is vital that we call <code>setMRU</code> on the cache
entry. The <code>setMRU</code> function is what allows the replacement policy to
know which blocks are most recently accessed. If you leave out the
<code>setMRU</code> call, the replacement policy will not operate correctly!</p>
<p>On loads and stores, we call the <code>read/writeCallback</code> function on the
<code>sequencer</code>. This notifies the sequencer of the new data or allows it to
write the data into the data block. These functions take four parameters:
address, data block, a boolean indicating whether
the original request was a miss, and finally, an optional <code>MachineType</code>.
The final optional parameter is used for tracking statistics on where
the data for the request was found. It allows you to track whether the
data comes from cache-to-cache transfers or from memory.</p>
<p>Finally, we also have an action to forward evictions to the CPU. This is
required for gem5's out-of-order models to squash speculative loads if
the cache block is evicted before the load is committed. We use the
<code>send_evictions</code> parameter specified at the top of the state machine
file to check whether this is needed.</p>
<p>Next, we have a set of cache management actions that allocate and free
cache entries and TBEs. To create a new cache entry, we must have space
in the <code>CacheMemory</code> object. Then, we can call the <code>allocate</code> function.
This <code>allocate</code> function doesn't actually allocate the host memory for
the cache entry, because this controller specializes the <code>Entry</code> type;
this is why we need to pass a <code>new Entry</code> to the <code>allocate</code> function.</p>
<p>Additionally, in these actions we call <code>set_cache_entry</code>,
<code>unset_cache_entry</code>, and similar functions for the TBE. These set and
unset the implicit variables that were passed in via the <code>trigger</code>
function. For instance, when allocating a new cache block, we call
<code>set_cache_entry</code>, and in all actions executed after <code>allocateCacheBlock</code>
the <code>cache_entry</code> variable will be valid.</p>
<p>There is also an action that copies the data from the cache data block
to the TBE. This allows us to keep the data around even after removing
the cache block, until we are sure that this cache is no longer
responsible for the data.</p>
<pre><code class="language-cpp">action(allocateCacheBlock, &quot;a&quot;, desc=&quot;Allocate a cache block&quot;) {
    assert(is_invalid(cache_entry));
    assert(cacheMemory.cacheAvail(address));
    set_cache_entry(cacheMemory.allocate(address, new Entry));
}

action(deallocateCacheBlock, &quot;d&quot;, desc=&quot;Deallocate a cache block&quot;) {
    assert(is_valid(cache_entry));
    cacheMemory.deallocate(address);
    // clear the cache_entry variable (now it's invalid)
    unset_cache_entry();
}

action(writeDataToCache, &quot;wd&quot;, desc=&quot;Write data to the cache&quot;) {
    peek(response_in, ResponseMsg) {
        assert(is_valid(cache_entry));
        cache_entry.DataBlk := in_msg.DataBlk;
    }
}

action(allocateTBE, &quot;aT&quot;, desc=&quot;Allocate TBE&quot;) {
    assert(is_invalid(tbe));
    TBEs.allocate(address);
    // this updates the tbe variable for other actions
    set_tbe(TBEs[address]);
}

action(deallocateTBE, &quot;dT&quot;, desc=&quot;Deallocate TBE&quot;) {
    assert(is_valid(tbe));
    TBEs.deallocate(address);
    // this makes the tbe variable invalid
    unset_tbe();
}

action(copyDataFromCacheToTBE, &quot;Dct&quot;, desc=&quot;Copy data from cache to TBE&quot;) {
    assert(is_valid(cache_entry));
    assert(is_valid(tbe));
    tbe.DataBlk := cache_entry.DataBlk;
}
</code></pre>
<p>The next set of actions are for managing the message buffers. We need to
add actions to pop the head message off of the buffers after the message
has been satisfied. The <code>dequeue</code> function takes a single parameter, a
time for the dequeue to take place. Delaying the dequeue for a cycle
prevents the <code>in_port</code> logic from consuming another message from the
same message buffer in a single cycle.</p>
<pre><code class="language-cpp">action(popMandatoryQueue, &quot;pQ&quot;, desc=&quot;Pop the mandatory queue&quot;) {
    mandatory_in.dequeue(clockEdge());
}

action(popResponseQueue, &quot;pR&quot;, desc=&quot;Pop the response queue&quot;) {
    response_in.dequeue(clockEdge());
}

action(popForwardQueue, &quot;pF&quot;, desc=&quot;Pop the forward queue&quot;) {
    forward_in.dequeue(clockEdge());
}
</code></pre>
<p>Finally, the last action is a stall. Below, we are using a &quot;z_stall&quot;,
which is the simplest kind of stall in SLICC. By leaving the action
blank, it generates a &quot;protocol stall&quot; in the <code>in_port</code> logic, which
stalls all messages from being processed in the current message buffer
and all lower-priority message buffers. Protocols using &quot;z_stall&quot; are
usually simpler, but lower performing, since a stall on a high-priority
buffer can stall many requests that may not need to be stalled.</p>
<pre><code class="language-cpp">action(stall, &quot;z&quot;, desc=&quot;Stall the incoming request&quot;) {
    // z_stall
}
</code></pre>
<p>There are two other ways to deal with messages that cannot currently be
processed, both of which can improve the performance of protocols. (Note:
We will not be using these more complicated techniques in this simple
example protocol.) The first is <code>recycle</code>. The message buffers have a
<code>recycle</code> function that moves the request at the head of the queue to
the tail. This allows other requests in the buffer or requests in other
buffers to be processed immediately. <code>recycle</code> actions often improve the
performance of protocols significantly.</p>
<p>However, <code>recycle</code> is not very realistic when compared to real
implementations of cache coherence. For a more realistic
high-performance solution to stalling messages, Ruby provides the
<code>stall_and_wait</code> function on message buffers. This function takes the
head request and moves it into a separate structure tagged by an
address. The address is user-specified, but is usually the request's
address. Later, when the blocked request can be handled, the function
<code>wakeUpBuffers(address)</code> will wake up all requests stalled on
<code>address</code>, and <code>wakeUpAllBuffers()</code> wakes up all of the stalled
requests. When a request is &quot;woken up&quot; it is placed back into the
message buffer to be subsequently processed.</p>
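<p>As an illustrative sketch only (these actions are not part of this
chapter's protocol, the action names and the <code>recycle_latency</code> parameter
are made up here, and the exact <code>recycle</code> signature may differ between
gem5 versions), the two alternatives might look like this:</p>
<pre><code class="language-cpp">// Hypothetical sketch: alternatives to the blank &quot;z_stall&quot; action.
action(recycleMandatoryQueue, &quot;zR&quot;, desc=&quot;Move head request to the tail&quot;) {
    // Allows younger requests (or other buffers) to be processed.
    mandatory_in.recycle(clockEdge(), cyclesToTicks(recycle_latency));
}

action(stallAndWaitRequest, &quot;zW&quot;, desc=&quot;Park the request, tagged by address&quot;) {
    // The request leaves the buffer until it is explicitly woken up.
    stall_and_wait(mandatory_in, address);
}

action(wakeUpDependents, &quot;wk&quot;, desc=&quot;Wake requests stalled on this address&quot;) {
    wakeUpBuffers(address);
}
</code></pre>
<p>A transition that completes a pending miss would then call an action
like <code>wakeUpDependents</code> so that parked requests re-enter their message
buffers and are processed again.</p>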
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Transition code blocks
doc: Learning gem5
parent: part3
permalink: /documentation/learning_gem5/part3/cache-transitions/
author: Jason Lowe-Power</h2>
<h1 id="transition-code-blocks"><a class="header" href="#transition-code-blocks">Transition code blocks</a></h1>
<p>Finally, we've reached the final section of the state machine file! This
section contains the details for all of the transitions between states
and what actions to execute during the transition.</p>
<p>So far in this chapter we have written the state machine top to bottom
one section at a time. However, in most cache coherence implementations
you will find that you need to move around between sections. For
instance, when writing the transitions you will realize you forgot to
add an action, or you notice that you actually need another transient
state to implement the protocol. This is the normal way to write
protocols, but for simplicity this chapter goes through the file top to
bottom.</p>
<p>Transition blocks consist of two parts. First, the opening line of a
transition block specifies the begin state, the event to transition on, and the
end state (the end state may not be required, as we will discuss below).
Second, the transition block contains all of the actions to execute on
this transition. For instance, a simple transition in the MSI protocol
is transitioning out of Invalid on a Load.</p>
<pre><code class="language-cpp">transition(I, Load, IS_D) {
    allocateCacheBlock;
    allocateTBE;
    sendGetS;
    popMandatoryQueue;
}
</code></pre>
<p>First, you specify the transition as the &quot;parameters&quot; to the
<code>transition</code> statement. In this case, if the initial state is <code>I</code> and
the event is <code>Load</code> then transition to <code>IS_D</code> (was invalid, going to
shared, waiting for data). This transition is straight out of Table 8.3
in Sorin et al.</p>
<p>Then, inside the <code>transition</code> code block, all of the actions that will
execute are listed in order. For this transition, we first allocate the
cache block. Remember that in the <code>allocateCacheBlock</code> action the newly
allocated entry is set to the entry that will be used in the rest of the
actions. After allocating the cache block, we also allocate a TBE, which
could be used if we need to wait for acks from other caches. Next, we
send a GetS request to the directory, and finally we pop the head entry
off of the mandatory queue since we have fully handled it.</p>
<pre><code class="language-cpp">transition(IS_D, {Load, Store, Replacement, Inv}) {
    stall;
}
</code></pre>
<p>In this transition, we use slightly different syntax. According to Table
8.3 from Sorin et al., we should stall if the cache is in IS_D on
loads, stores, replacements, and invalidates. We can specify a single
transition statement for this by including multiple events in curly
brackets as above. Additionally, the final state isn't required. If the
final state isn't specified, then the transition is executed and the
state is not updated (i.e., the block stays in its beginning state). You
can read the above transition as &quot;If the cache block is in state IS_D
and there is a load, store, replacement, or invalidate, stall the
protocol and do not transition out of the state.&quot; You can also use curly
brackets for beginning states, as shown in some of the transitions
below.</p>
<p>Below is the rest of the transitions needed to implement the L1 cache
from the MSI protocol.</p>
<pre><code class="language-cpp">transition(IS_D, {DataDirNoAcks, DataOwner}, S) {
    writeDataToCache;
    deallocateTBE;
    externalLoadHit;
    popResponseQueue;
}

transition({IM_AD, IM_A}, {Load, Store, Replacement, FwdGetS, FwdGetM}) {
    stall;
}

transition({IM_AD, SM_AD}, {DataDirNoAcks, DataOwner}, M) {
    writeDataToCache;
    deallocateTBE;
    externalStoreHit;
    popResponseQueue;
}

transition(IM_AD, DataDirAcks, IM_A) {
    writeDataToCache;
    storeAcks;
    popResponseQueue;
}

transition({IM_AD, IM_A, SM_AD, SM_A}, InvAck) {
    decrAcks;
    popResponseQueue;
}

transition({IM_A, SM_A}, LastInvAck, M) {
    deallocateTBE;
    externalStoreHit;
    popResponseQueue;
}

transition({S, SM_AD, SM_A, M}, Load) {
    loadHit;
    popMandatoryQueue;
}

transition(S, Store, SM_AD) {
    allocateTBE;
    sendGetM;
    popMandatoryQueue;
}

transition(S, Replacement, SI_A) {
    sendPutS;
    forwardEviction;
}

transition(S, Inv, I) {
    sendInvAcktoReq;
    deallocateCacheBlock;
    forwardEviction;
    popForwardQueue;
}

transition({SM_AD, SM_A}, {Store, Replacement, FwdGetS, FwdGetM}) {
    stall;
}

transition(SM_AD, Inv, IM_AD) {
    sendInvAcktoReq;
    forwardEviction;
    popForwardQueue;
}

transition(SM_AD, DataDirAcks, SM_A) {
    writeDataToCache;
    storeAcks;
    popResponseQueue;
}

transition(M, Store) {
    storeHit;
    popMandatoryQueue;
}

transition(M, Replacement, MI_A) {
    sendPutM;
    forwardEviction;
}

transition(M, FwdGetS, S) {
    sendCacheDataToReq;
    sendCacheDataToDir;
    popForwardQueue;
}

transition(M, FwdGetM, I) {
    sendCacheDataToReq;
    deallocateCacheBlock;
    popForwardQueue;
}

transition({MI_A, SI_A, II_A}, {Load, Store, Replacement}) {
    stall;
}

transition(MI_A, FwdGetS, SI_A) {
    sendCacheDataToReq;
    sendCacheDataToDir;
    popForwardQueue;
}

transition(MI_A, FwdGetM, II_A) {
    sendCacheDataToReq;
    popForwardQueue;
}

transition({MI_A, SI_A, II_A}, PutAck, I) {
    deallocateCacheBlock;
    popForwardQueue;
}

transition(SI_A, Inv, II_A) {
    sendInvAcktoReq;
    popForwardQueue;
}
</code></pre>
<p>You can download the complete <code>MSI-cache.sm</code> file
<a href="part3//_pages/static/scripts/part3/MSI_protocol/MSI-cache.sm">here</a>.</p>
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: MSI Directory implementation
doc: Learning gem5
parent: part3
permalink: /documentation/learning_gem5/part3/directory/
author: Jason Lowe-Power</h2>
<h1 id="msi-directory-implementation"><a class="header" href="#msi-directory-implementation">MSI Directory implementation</a></h1>
<p>Implementing a directory controller is very similar to the L1 cache
controller, except using a different state machine table. The state
machine for the directory can be found in Table 8.2 in Sorin et al.
Since things are mostly similar to the L1 cache, this section mostly
just discusses a few more SLICC details and a few differences between
directory controllers and cache controllers. Let's dive straight in and
start modifying a new file <code>MSI-dir.sm</code>.</p>
<pre><code class="language-cpp">machine(MachineType:Directory, &quot;Directory protocol&quot;)
:
  DirectoryMemory * directory;
  Cycles toMemLatency := 1;

MessageBuffer *forwardToCache, network=&quot;To&quot;, virtual_network=&quot;1&quot;,
      vnet_type=&quot;forward&quot;;
MessageBuffer *responseToCache, network=&quot;To&quot;, virtual_network=&quot;2&quot;,
      vnet_type=&quot;response&quot;;

MessageBuffer *requestFromCache, network=&quot;From&quot;, virtual_network=&quot;0&quot;,
      vnet_type=&quot;request&quot;;

MessageBuffer *responseFromCache, network=&quot;From&quot;, virtual_network=&quot;2&quot;,
      vnet_type=&quot;response&quot;;

MessageBuffer *responseFromMemory;

{
. . .
}
</code></pre>
<p>First, there are two parameters to this directory controller,
<code>DirectoryMemory</code> and <code>toMemLatency</code>. The <code>DirectoryMemory</code> is a
little weird. It is allocated at initialization time such that it can
cover <em>all</em> of physical memory, like a complete directory, <em>not a
directory cache</em>. I.e., there are pointers in the <code>DirectoryMemory</code>
object for every 64-byte block in physical memory. However, the actual
entries (as defined below) are lazily created via <code>getDirEntry()</code>. We'll
see more details about <code>DirectoryMemory</code> below.</p>
<p>Next is the <code>toMemLatency</code> parameter. This will be used in the
<code>enqueue</code> function when enqueuing requests, to model the directory
latency. We didn't use a parameter for this in the L1 cache, but it is
simple to make the controller latency parameterized. This parameter
defaults to 1 cycle. It is not required to set a default here. The
default is propagated to the generated SimObject description file as the
default for the SimObject parameter.</p>
<p>Next, we have the message buffers for the directory. Importantly, <em>these
need to have the same virtual network numbers</em> as the message buffers in
the L1 cache. These virtual network numbers are how the Ruby network
directs messages between controllers.</p>
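<p>For reference, a sketch of the matching declarations on the L1 cache
side (treat the exact buffer names as an assumption; they follow the
L1 cache file developed earlier in this chapter):</p>
<pre><code class="language-cpp">// L1 cache side (sketch). The virtual_network numbers line up with the
// directory's buffers above: 0 = request, 1 = forward, 2 = response.
MessageBuffer * requestToDir, network=&quot;To&quot;, virtual_network=&quot;0&quot;,
      vnet_type=&quot;request&quot;;
MessageBuffer * responseToDirOrSibling, network=&quot;To&quot;, virtual_network=&quot;2&quot;,
      vnet_type=&quot;response&quot;;
MessageBuffer * forwardFromDir, network=&quot;From&quot;, virtual_network=&quot;1&quot;,
      vnet_type=&quot;forward&quot;;
MessageBuffer * responseFromDirOrSibling, network=&quot;From&quot;, virtual_network=&quot;2&quot;,
      vnet_type=&quot;response&quot;;
</code></pre>
<p>Each &quot;To&quot; buffer on a given virtual network pairs with the other
controller's &quot;From&quot; buffer on the same network number, and vice versa.</p>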
<p>There is also one more special message buffer: <code>responseFromMemory</code>.
This is similar to the <code>mandatoryQueue</code>, except instead of being like a
slave port for CPUs it is like a master port. The <code>responseFromMemory</code>
buffer will deliver responses sent across the memory port, as we will
see below in the action section.</p>
<p>After the parameters and message buffers, we need to declare all of the
states, events, and other local structures.</p>
<pre><code class="language-cpp">state_declaration(State, desc=&quot;Directory states&quot;,
                  default=&quot;Directory_State_I&quot;) {
    // Stable states.
    // NOTE: These are &quot;cache-centric&quot; states like in Sorin et al.
    // However, The access permissions are memory-centric.
    I, AccessPermission:Read_Write,  desc=&quot;Invalid in the caches.&quot;;
    S, AccessPermission:Read_Only,   desc=&quot;At least one cache has the blk&quot;;
    M, AccessPermission:Invalid,     desc=&quot;A cache has the block in M&quot;;

    // Transient states
    S_D, AccessPermission:Busy,      desc=&quot;Moving to S, but need data&quot;;

    // Waiting for data from memory
    S_m, AccessPermission:Read_Write, desc=&quot;In S waiting for mem&quot;;
    M_m, AccessPermission:Read_Write, desc=&quot;Moving to M waiting for mem&quot;;

    // Waiting for write-ack from memory
    MI_m, AccessPermission:Busy,       desc=&quot;Moving to I waiting for ack&quot;;
    SS_m, AccessPermission:Busy,       desc=&quot;Moving to I waiting for ack&quot;;
}

enumeration(Event, desc=&quot;Directory events&quot;) {
    // Data requests from the cache
    GetS,         desc=&quot;Request for read-only data from cache&quot;;
    GetM,         desc=&quot;Request for read-write data from cache&quot;;

    // Writeback requests from the cache
    PutSNotLast,  desc=&quot;PutS and the block has other sharers&quot;;
    PutSLast,     desc=&quot;PutS and the block has no other sharers&quot;;
    PutMOwner,    desc=&quot;Dirty data writeback from the owner&quot;;
    PutMNonOwner, desc=&quot;Dirty data writeback from non-owner&quot;;

    // Cache responses
    Data,         desc=&quot;Response to fwd request with data&quot;;

    // From Memory
    MemData,      desc=&quot;Data from memory&quot;;
    MemAck,       desc=&quot;Ack from memory that write is complete&quot;;
}

structure(Entry, desc=&quot;...&quot;, interface=&quot;AbstractEntry&quot;) {
    State DirState,         desc=&quot;Directory state&quot;;
    NetDest Sharers,        desc=&quot;Sharers for this block&quot;;
    NetDest Owner,          desc=&quot;Owner of this block&quot;;
}
</code></pre>
<p>In the <code>state_declaration</code> we define a default. For many things in SLICC
you can specify a default. However, this default must use the C++ name
(the mangled SLICC name), which concatenates the machine name, the
structure name, and the state name. In this case, since the name of the
machine is &quot;Directory&quot; and the structure is named &quot;State&quot;, the mangled
name for &quot;I&quot; is &quot;Directory_State_I&quot;.</p>
<p>Note that the permissions in the directory are &quot;memory-centric&quot;,
whereas all of the states are cache-centric, as in Sorin et al.</p>
<p>In the <code>Entry</code> definition for the directory, we use a NetDest for both
the sharers and the owner. This makes sense for the sharers, since we
want a full bitvector for all L1 caches that may be sharing the block.
The reason we also use a <code>NetDest</code> for the owner is to simply copy the
structure into the message we send as a response as shown below.</p>
<p>In this implementation, we use a few more transient states than in Table
8.2 in Sorin et al. to deal with the fact that the memory latency is
unknown. In Sorin et al., the authors assume that the directory state
and memory data are stored together in main memory to simplify the
protocol. Similarly, we also include new events for the responses from
memory.</p>
<p>Next, we have the functions that need to be overridden and declared. The
function <code>getDirectoryEntry</code> either returns the valid directory entry,
or, if it hasn't been allocated yet, allocates the entry.
Implementing it this way may save some host memory since this is lazily
populated.</p>
<pre><code class="language-cpp">Tick clockEdge();

Entry getDirectoryEntry(Addr addr), return_by_pointer = &quot;yes&quot; {
    Entry dir_entry := static_cast(Entry, &quot;pointer&quot;, directory[addr]);
    if (is_invalid(dir_entry)) {
        // This first time we see this address allocate an entry for it.
        dir_entry := static_cast(Entry, &quot;pointer&quot;,
                                 directory.allocate(addr, new Entry));
    }
    return dir_entry;
}

State getState(Addr addr) {
    if (directory.isPresent(addr)) {
        return getDirectoryEntry(addr).DirState;
    } else {
        return State:I;
    }
}

void setState(Addr addr, State state) {
    if (directory.isPresent(addr)) {
        if (state == State:M) {
            DPRINTF(RubySlicc, &quot;Owner %s\n&quot;, getDirectoryEntry(addr).Owner);
            assert(getDirectoryEntry(addr).Owner.count() == 1);
            assert(getDirectoryEntry(addr).Sharers.count() == 0);
        }
        getDirectoryEntry(addr).DirState := state;
        if (state == State:I)  {
            assert(getDirectoryEntry(addr).Owner.count() == 0);
            assert(getDirectoryEntry(addr).Sharers.count() == 0);
        }
    }
}

AccessPermission getAccessPermission(Addr addr) {
    if (directory.isPresent(addr)) {
        Entry e := getDirectoryEntry(addr);
        return Directory_State_to_permission(e.DirState);
    } else  {
        return AccessPermission:NotPresent;
    }
}
void setAccessPermission(Addr addr, State state) {
    if (directory.isPresent(addr)) {
        Entry e := getDirectoryEntry(addr);
        e.changePermission(Directory_State_to_permission(state));
    }
}

void functionalRead(Addr addr, Packet *pkt) {
    functionalMemoryRead(pkt);
}

int functionalWrite(Addr addr, Packet *pkt) {
    if (functionalMemoryWrite(pkt)) {
        return 1;
    } else {
        return 0;
    }
}
</code></pre>
<p>Next, we need to implement the ports for the directory. First, we specify
the <code>out_port</code> and then the <code>in_port</code> code blocks. The only difference
between the <code>in_port</code> in the directory and in the L1 cache is that the
directory does not have a TBE or cache entry. Thus, we do not pass
either into the <code>trigger</code> function.</p>
<pre><code class="language-cpp">out_port(forward_out, RequestMsg, forwardToCache);
out_port(response_out, ResponseMsg, responseToCache);

in_port(memQueue_in, MemoryMsg, responseFromMemory) {
    if (memQueue_in.isReady(clockEdge())) {
        peek(memQueue_in, MemoryMsg) {
            if (in_msg.Type == MemoryRequestType:MEMORY_READ) {
                trigger(Event:MemData, in_msg.addr);
            } else if (in_msg.Type == MemoryRequestType:MEMORY_WB) {
                trigger(Event:MemAck, in_msg.addr);
            } else {
                error(&quot;Invalid message&quot;);
            }
        }
    }
}

in_port(response_in, ResponseMsg, responseFromCache) {
    if (response_in.isReady(clockEdge())) {
        peek(response_in, ResponseMsg) {
            if (in_msg.Type == CoherenceResponseType:Data) {
                trigger(Event:Data, in_msg.addr);
            } else {
                error(&quot;Unexpected message type.&quot;);
            }
        }
    }
}

in_port(request_in, RequestMsg, requestFromCache) {
    if (request_in.isReady(clockEdge())) {
        peek(request_in, RequestMsg) {
            Entry e := getDirectoryEntry(in_msg.addr);
            if (in_msg.Type == CoherenceRequestType:GetS) {

                trigger(Event:GetS, in_msg.addr);
            } else if (in_msg.Type == CoherenceRequestType:GetM) {
                trigger(Event:GetM, in_msg.addr);
            } else if (in_msg.Type == CoherenceRequestType:PutS) {
                assert(is_valid(e));
                // If there is only a single sharer (i.e., the requestor)
                if (e.Sharers.count() == 1) {
                    assert(e.Sharers.isElement(in_msg.Requestor));
                    trigger(Event:PutSLast, in_msg.addr);
                } else {
                    trigger(Event:PutSNotLast, in_msg.addr);
                }
            } else if (in_msg.Type == CoherenceRequestType:PutM) {
                assert(is_valid(e));
                if (e.Owner.isElement(in_msg.Requestor)) {
                    trigger(Event:PutMOwner, in_msg.addr);
                } else {
                    trigger(Event:PutMNonOwner, in_msg.addr);
                }
            } else {
                error(&quot;Unexpected message type.&quot;);
            }
        }
    }
}
</code></pre>
<p>The next part of the state machine file is the actions. First, we define
actions for queuing memory reads and writes. For this, we will use a
special function defined in the <code>AbstractController</code>: <code>queueMemoryRead</code>.
This function takes an address, converts it to a gem5 request and
packet, and sends it across the port that is connected to this
controller. We will see how to connect this port in the
configuration section. Note that we need two
different actions to send data to memory for both requests and responses,
since there are two different message buffers (virtual networks) that
data might arrive on.</p>
<pre><code class="language-cpp">action(sendMemRead, &quot;r&quot;, desc=&quot;Send a memory read request&quot;) {
    peek(request_in, RequestMsg) {
        queueMemoryRead(in_msg.Requestor, address, toMemLatency);
    }
}

action(sendDataToMem, &quot;w&quot;, desc=&quot;Write data to memory&quot;) {
    peek(request_in, RequestMsg) {
        DPRINTF(RubySlicc, &quot;Writing memory for %#x\n&quot;, address);
        DPRINTF(RubySlicc, &quot;Writing %s\n&quot;, in_msg.DataBlk);
        queueMemoryWrite(in_msg.Requestor, address, toMemLatency,
                         in_msg.DataBlk);
    }
}

action(sendRespDataToMem, &quot;rw&quot;, desc=&quot;Write data to memory from resp&quot;) {
    peek(response_in, ResponseMsg) {
        DPRINTF(RubySlicc, &quot;Writing memory for %#x\n&quot;, address);
        DPRINTF(RubySlicc, &quot;Writing %s\n&quot;, in_msg.DataBlk);
        queueMemoryWrite(in_msg.Sender, address, toMemLatency,
                         in_msg.DataBlk);
    }
}
</code></pre>
<p>In this code, we also see the last way to add debug information to SLICC
protocols: <code>DPRINTF</code>. This is exactly the same as a <code>DPRINTF</code> in gem5,
except in SLICC only the <code>RubySlicc</code> debug flag is available.</p>
<p>Next, we specify actions to update the sharers and owner of a particular
block.</p>
<pre><code class="language-cpp">action(addReqToSharers, &quot;aS&quot;, desc=&quot;Add requestor to sharer list&quot;) {
    peek(request_in, RequestMsg) {
        getDirectoryEntry(address).Sharers.add(in_msg.Requestor);
    }
}

action(setOwner, &quot;sO&quot;, desc=&quot;Set the owner&quot;) {
    peek(request_in, RequestMsg) {
        getDirectoryEntry(address).Owner.add(in_msg.Requestor);
    }
}

action(addOwnerToSharers, &quot;oS&quot;, desc=&quot;Add the owner to sharers&quot;) {
    Entry e := getDirectoryEntry(address);
    assert(e.Owner.count() == 1);
    e.Sharers.addNetDest(e.Owner);
}

action(removeReqFromSharers, &quot;rS&quot;, desc=&quot;Remove requestor from sharers&quot;) {
    peek(request_in, RequestMsg) {
        getDirectoryEntry(address).Sharers.remove(in_msg.Requestor);
    }
}

action(clearSharers, &quot;cS&quot;, desc=&quot;Clear the sharer list&quot;) {
    getDirectoryEntry(address).Sharers.clear();
}

action(clearOwner, &quot;cO&quot;, desc=&quot;Clear the owner&quot;) {
    getDirectoryEntry(address).Owner.clear();
}
</code></pre>
<p>The next set of actions send invalidates and forwards to other caches,
for requests that the directory cannot satisfy alone.</p>
<pre><code class="language-cpp">action(sendInvToSharers, &quot;i&quot;, desc=&quot;Send invalidate to all sharers&quot;) {
    peek(request_in, RequestMsg) {
        enqueue(forward_out, RequestMsg, 1) {
            out_msg.addr := address;
            out_msg.Type := CoherenceRequestType:Inv;
            out_msg.Requestor := in_msg.Requestor;
            out_msg.Destination := getDirectoryEntry(address).Sharers;
            out_msg.MessageSize := MessageSizeType:Control;
        }
    }
}

action(sendFwdGetS, &quot;fS&quot;, desc=&quot;Send forward getS to owner&quot;) {
    assert(getDirectoryEntry(address).Owner.count() == 1);
    peek(request_in, RequestMsg) {
        enqueue(forward_out, RequestMsg, 1) {
            out_msg.addr := address;
            out_msg.Type := CoherenceRequestType:GetS;
            out_msg.Requestor := in_msg.Requestor;
            out_msg.Destination := getDirectoryEntry(address).Owner;
            out_msg.MessageSize := MessageSizeType:Control;
        }
    }
}

action(sendFwdGetM, &quot;fM&quot;, desc=&quot;Send forward getM to owner&quot;) {
    assert(getDirectoryEntry(address).Owner.count() == 1);
    peek(request_in, RequestMsg) {
        enqueue(forward_out, RequestMsg, 1) {
            out_msg.addr := address;
            out_msg.Type := CoherenceRequestType:GetM;
            out_msg.Requestor := in_msg.Requestor;
            out_msg.Destination := getDirectoryEntry(address).Owner;
            out_msg.MessageSize := MessageSizeType:Control;
        }
    }
}
</code></pre>
<p>Next, we have the directory's response actions. Here, we peek into the
special buffer <code>responseFromMemory</code>. You can find the definition of
<code>MemoryMsg</code> in <code>src/mem/protocol/RubySlicc_MemControl.sm</code>.</p>
<pre><code class="language-cpp">action(sendDataToReq, &quot;d&quot;, desc=&quot;Send data from memory to requestor. May need to send sharer number, too&quot;) {
    peek(memQueue_in, MemoryMsg) {
        enqueue(response_out, ResponseMsg, 1) {
            out_msg.addr := address;
            out_msg.Type := CoherenceResponseType:Data;
            out_msg.Sender := machineID;
            out_msg.Destination.add(in_msg.OriginalRequestorMachId);
            out_msg.DataBlk := in_msg.DataBlk;
            out_msg.MessageSize := MessageSizeType:Data;
            Entry e := getDirectoryEntry(address);
            // Only need to include acks if we are the owner.
            if (e.Owner.isElement(in_msg.OriginalRequestorMachId)) {
                out_msg.Acks := e.Sharers.count();
            } else {
                out_msg.Acks := 0;
            }
            assert(out_msg.Acks &gt;= 0);
        }
    }
}

action(sendPutAck, &quot;a&quot;, desc=&quot;Send the put ack&quot;) {
    peek(request_in, RequestMsg) {
        enqueue(forward_out, RequestMsg, 1) {
            out_msg.addr := address;
            out_msg.Type := CoherenceRequestType:PutAck;
            out_msg.Requestor := machineID;
            out_msg.Destination.add(in_msg.Requestor);
            out_msg.MessageSize := MessageSizeType:Control;
        }
    }
}
</code></pre>
<p>Then, we have the queue management and stall actions.</p>
<pre><code class="language-cpp">action(popResponseQueue, &quot;pR&quot;, desc=&quot;Pop the response queue&quot;) {
    response_in.dequeue(clockEdge());
}

action(popRequestQueue, &quot;pQ&quot;, desc=&quot;Pop the request queue&quot;) {
    request_in.dequeue(clockEdge());
}

action(popMemQueue, &quot;pM&quot;, desc=&quot;Pop the memory queue&quot;) {
    memQueue_in.dequeue(clockEdge());
}

action(stall, &quot;z&quot;, desc=&quot;Stall the incoming request&quot;) {
    // Do nothing.
}
</code></pre>
<p>Finally, we have the transition section of the state machine file. These
mostly come from Table 8.2 in Sorin et al., but there are some extra
transitions to deal with the unknown memory latency.</p>
<pre><code class="language-cpp">transition({I, S}, GetS, S_m) {
    sendMemRead;
    addReqToSharers;
    popRequestQueue;
}

transition(I, {PutSNotLast, PutSLast, PutMNonOwner}) {
    sendPutAck;
    popRequestQueue;
}

transition(S_m, MemData, S) {
    sendDataToReq;
    popMemQueue;
}

transition(I, GetM, M_m) {
    sendMemRead;
    setOwner;
    popRequestQueue;
}

transition(M_m, MemData, M) {
    sendDataToReq;
    clearSharers; // NOTE: This isn't *required* in some cases.
    popMemQueue;
}

transition(S, GetM, M_m) {
    sendMemRead;
    removeReqFromSharers;
    sendInvToSharers;
    setOwner;
    popRequestQueue;
}

transition({S, S_D, SS_m, S_m}, {PutSNotLast, PutMNonOwner}) {
    removeReqFromSharers;
    sendPutAck;
    popRequestQueue;
}

transition(S, PutSLast, I) {
    removeReqFromSharers;
    sendPutAck;
    popRequestQueue;
}

transition(M, GetS, S_D) {
    sendFwdGetS;
    addReqToSharers;
    addOwnerToSharers;
    clearOwner;
    popRequestQueue;
}

transition(M, GetM) {
    sendFwdGetM;
    clearOwner;
    setOwner;
    popRequestQueue;
}

transition({M, M_m, MI_m}, {PutSNotLast, PutSLast, PutMNonOwner}) {
    sendPutAck;
    popRequestQueue;
}

transition(M, PutMOwner, MI_m) {
    sendDataToMem;
    clearOwner;
    sendPutAck;
    popRequestQueue;
}

transition(MI_m, MemAck, I) {
    popMemQueue;
}

transition(S_D, {GetS, GetM}) {
    stall;
}

transition(S_D, PutSLast) {
    removeReqFromSharers;
    sendPutAck;
    popRequestQueue;
}

transition(S_D, Data, SS_m) {
    sendRespDataToMem;
    popResponseQueue;
}

transition(SS_m, MemAck, S) {
    popMemQueue;
}

// If we get another request for a block that's waiting on memory,
// stall that request.
transition({MI_m, SS_m, S_m, M_m}, {GetS, GetM}) {
    stall;
}
</code></pre>
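<p>To make the dispatch structure concrete, here is a toy, Python-only model
of a few of the directory transitions above. This is not SLICC and is not part
of gem5; the state, event, and action names are copied from the transitions,
and each (state, event) pair simply maps to a next state plus an ordered list
of actions.</p>
<pre><code class="language-python"># Toy model of a few directory transitions (illustration only, not gem5).
TRANSITIONS = {
    # (current state, event): (next state, ordered actions)
    ('I', 'GetS'): ('S_m', ['sendMemRead', 'addReqToSharers', 'popRequestQueue']),
    ('S', 'GetS'): ('S_m', ['sendMemRead', 'addReqToSharers', 'popRequestQueue']),
    ('S_m', 'MemData'): ('S', ['sendDataToReq', 'popMemQueue']),
    ('I', 'GetM'): ('M_m', ['sendMemRead', 'setOwner', 'popRequestQueue']),
    ('M_m', 'MemData'): ('M', ['sendDataToReq', 'clearSharers', 'popMemQueue']),
}

def step(state, event):
    '''Look up the next state and actions; a missing pair is a protocol error.'''
    key = (state, event)
    if key not in TRANSITIONS:
        raise KeyError('No transition for ' + str(key))
    return TRANSITIONS[key]

# A GetS to an Invalid block enters the transient state S_m; the memory
# response then moves the block to S.
state, actions = step('I', 'GetS')
state, actions = step(state, 'MemData')
</code></pre>
<p>In the real protocol, of course, each action has side effects (enqueuing
messages, updating sharers and the owner), and an unexpected (state, event)
pair produces a runtime protocol error rather than a Python exception.</p>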
<p>You can download the complete <code>MSI-dir.sm</code> file
<a href="part3//_pages/static/scripts/part3/MSI_protocol/MSI-dir.sm">here</a>.</p>
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Compiling a SLICC protocol
doc: Learning gem5
parent: part3
permalink: /documentation/learning_gem5/part3/MSIbuilding/
author: Jason Lowe-Power</h2>
<h1 id="compiling-a-slicc-protocol"><a class="header" href="#compiling-a-slicc-protocol">Compiling a SLICC protocol</a></h1>
<h2 id="the-slicc-file"><a class="header" href="#the-slicc-file">The SLICC file</a></h2>
<p>Now that we have finished implementing the protocol, we need to compile
it. You can download the complete SLICC files below:</p>
<ul>
<li><a href="part3//_pages/static/scripts/part3/MSI_protocol/MSI-cache.sm">MSI-cache.sm</a></li>
<li><a href="part3//_pages/static/scripts/part3/MSI_protocol/MSI-dir.sm">MSI-dir.sm</a></li>
<li><a href="part3//_pages/static/scripts/part3/MSI_protocol/MSI-msg.sm">MSI-msg.sm</a></li>
</ul>
<p>Before building the protocol, we need to create one more file:
<code>MSI.slicc</code>. This file tells the SLICC compiler which state machine
files to compile for this protocol. The first line contains the name of
our protocol. Then, the file has a number of <code>include</code> statements. Each
<code>include</code> statement has a file name. This filename can come from any of
the <code>protocol_dirs</code> directories. We declared the current directory as
part of the <code>protocol_dirs</code> in the SConsopts file
(<code>protocol_dirs.append(str(Dir('.').abspath))</code>). The other directory is
<code>src/mem/protocol/</code>. These files are included like C++ header files.
Effectively, all of the files are processed as one large SLICC file.
Thus, any files that declare types that are used in other files must
come before the files they are used in (e.g., <code>MSI-msg.sm</code> must come
before <code>MSI-cache.sm</code> since <code>MSI-cache.sm</code> uses the <code>RequestMsg</code> type).</p>
<pre><code class="language-cpp">protocol &quot;MSI&quot;;
include &quot;RubySlicc_interfaces.slicc&quot;;
include &quot;MSI-msg.sm&quot;;
include &quot;MSI-cache.sm&quot;;
include &quot;MSI-dir.sm&quot;;
</code></pre>
<p>You can download the full file
<a href="part3//_pages/static/scripts/part3/MSI_protocol/MSI.slicc">here</a>.</p>
<h2 id="compiling-a-protocol-with-scons"><a class="header" href="#compiling-a-protocol-with-scons">Compiling a protocol with SCons</a></h2>
<p>Most SCons defaults (found in <code>build_opts/</code>) specify the protocol as
<code>MI_example</code>, an example (but poorly performing) protocol. Therefore, we
cannot simply use a default build name (e.g., <code>X86</code> or <code>ARM</code>). We have
to specify the SCons options on the command line. The command line below
will build our new protocol with the X86 ISA.</p>
<pre><code>scons build/X86_MSI/gem5.opt --default=X86 PROTOCOL=MSI SLICC_HTML=True
</code></pre>
<p>This command will build <code>gem5.opt</code> in the directory <code>build/X86_MSI</code>. You
can specify <em>any</em> directory here. This command line has two new
parameters: <code>--default</code> and <code>PROTOCOL</code>. First, <code>--default</code> specifies
which file to use in <code>build_opts</code> for defaults for all of the SCons
variables (e.g., <code>ISA</code>, <code>CPU_MODELS</code>). Next, <code>PROTOCOL</code> <em>overrides</em> any
default for the <code>PROTOCOL</code> SCons variable in the default specified.
Thus, we are telling SCons to specifically compile our new protocol, not
whichever protocol was specified in <code>build_opts/X86</code>.</p>
<p>There is one more variable on this command line to build gem5:
<code>SLICC_HTML=True</code>. When you specify this on the building command line,
SLICC will generate the HTML tables for your protocol. You can find the
HTML tables in <code>&lt;build directory&gt;/mem/protocol/html</code>. By default, the
SLICC compiler skips generating the HTML tables because doing so slows
gem5 compilation, especially on a network file system.</p>
<p>After gem5 finishes compiling, you will have a gem5 binary with your new
protocol! If you want to build another protocol into gem5, you have to
change the <code>PROTOCOL</code> SCons variable. Thus, it is a good idea to use a
different build directory for each protocol, especially if you will be
comparing protocols.</p>
<p>When building your protocol, you will likely encounter errors in your
SLICC code reported by the SLICC compiler. Most errors include the file
and line number of the error. Sometimes, however, this line number is the
line <em>after</em> the one where the error actually occurred, and it can even
be far below the actual error. For instance, if the curly brackets do not
match correctly, the error will report the last line in the file as the
location.</p>
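<p>Because mismatched braces are reported at the end of the file, a quick
pre-scan of the <code>.sm</code> file can save time. The helper below is a
hypothetical, standalone Python sketch (not part of gem5 or SLICC) that
reports the line of the first unbalanced curly brace. Note that it naively
counts braces everywhere, including inside strings and comments.</p>
<pre><code class="language-python">def first_brace_error(text):
    '''Return the 1-based line of the first unmatched closing brace, or the
    line of the last unclosed opening brace, or None if braces balance.'''
    open_lines = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for ch in line:
            if ch == '{':
                open_lines.append(lineno)
            elif ch == '}':
                if not open_lines:
                    return lineno  # Closing brace with no matching open.
                open_lines.pop()
    # Any leftover entries are braces that were never closed.
    return open_lines[-1] if open_lines else None
</code></pre>
<p>For example, running this over an action block that is missing its final
closing brace points at the line where the unclosed block begins, rather than
the end of the file.</p>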
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Configuring a simple Ruby system
doc: Learning gem5
parent: part3
permalink: /documentation/learning_gem5/part3/configuration/
author: Jason Lowe-Power</h2>
<h1 id="configuring-a-simple-ruby-system"><a class="header" href="#configuring-a-simple-ruby-system">Configuring a simple Ruby system</a></h1>
<p>First, create a new configuration directory in <code>configs/</code>. Just like all
gem5 configuration files, we will have a configuration run script. For
the run script, we can start with <code>simple.py</code> from
simple-config-chapter. Copy this file to <code>simple_ruby.py</code> in your new
directory.</p>
<p>We will make a couple of small changes to this file to use Ruby instead
of directly connecting the CPU to the memory controllers.</p>
<p>First, so we can test our <em>coherence</em> protocol, let's use two CPUs.</p>
<pre><code class="language-python">system.cpu = [TimingSimpleCPU(), TimingSimpleCPU()]
</code></pre>
<p>Next, after the memory controllers have been instantiated, we are going
to create the cache system and set up all of the caches. Add the
following lines <em>after the CPU interrupts have been created, but before
instantiating the system</em>.</p>
<pre><code class="language-python">system.caches = MyCacheSystem()
system.caches.setup(system, system.cpu, [system.mem_ctrl])
</code></pre>
<p>Like the classic cache example in cache-config-chapter, we are going to
create a second file that contains the cache configuration code. In this
file we are going to have a class called <code>MyCacheSystem</code> and we will
create a <code>setup</code> function that takes as parameters the CPUs in the
system and the memory controllers.</p>
<p>You can download the complete run script
<a href="part3/_pages/static/scripts/part3/configs/simple_ruby.py">here</a>.</p>
<h2 id="cache-system-configuration"><a class="header" href="#cache-system-configuration">Cache system configuration</a></h2>
<p>Now, let's create a file <code>msi_caches.py</code>. In this file, we will create
four classes: <code>MyCacheSystem</code> which will inherit from <code>RubySystem</code>,
<code>L1Cache</code> and <code>Directory</code> which will inherit from the SimObjects created
by SLICC from our two state machines, and <code>MyNetwork</code> which will inherit
from <code>SimpleNetwork</code>.</p>
<h3 id="l1-cache"><a class="header" href="#l1-cache">L1 Cache</a></h3>
<p>Let's start with the <code>L1Cache</code>. First, we will inherit from
<code>L1Cache_Controller</code> since we named our L1 cache &quot;L1Cache&quot; in the state
machine file. We also include a special class variable and class method
for tracking the &quot;version number&quot;. For each SLICC state machine type, the
instances must be numbered in ascending order starting from 0, and each
machine of the same type must have a unique version number. This number is
used to differentiate the individual machines. (Hopefully, in the future
this requirement will be removed.)</p>
<pre><code class="language-python">class L1Cache(L1Cache_Controller):

    _version = 0
    @classmethod
    def versionCount(cls):
        cls._version += 1 # Use count for this particular type
        return cls._version - 1
</code></pre>
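<p>The version-counting pattern can be seen in isolation with plain Python
(no gem5 imports). Each controller class keeps its own counter, so instances
of two different machine types are numbered independently. The class names
below are made up for illustration.</p>
<pre><code class="language-python">class VersionCounted:
    _version = 0

    @classmethod
    def versionCount(cls):
        cls._version += 1  # Count instances of this particular type.
        return cls._version - 1

class CacheLike(VersionCounted):
    _version = 0  # Per-class counter, independent of other types.

class DirLike(VersionCounted):
    _version = 0

# CacheLike instances are numbered 0, 1, 2, ... while DirLike restarts at 0.
</code></pre>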
<p>Next, we implement the constructor for the class.</p>
<pre><code class="language-python">def __init__(self, system, ruby_system, cpu):
    super(L1Cache, self).__init__()

    self.version = self.versionCount()
    self.cacheMemory = RubyCache(size = '16kB',
                           assoc = 8,
                           start_index_bit = self.getBlockSizeBits(system))
    self.clk_domain = cpu.clk_domain
    self.send_evictions = self.sendEvicts(cpu)
    self.ruby_system = ruby_system
    self.connectQueues(ruby_system)
</code></pre>
<p>We need the CPU in this function to grab the clock domain, and the system
is needed for the cache block size. Here, we set all of the parameters that
we named in the state machine file (e.g., <code>cacheMemory</code>). We will set
<code>sequencer</code> later. We also hardcode the size and associativity of the
cache. You could add command line parameters for these options if it is
important to vary them at runtime.</p>
<p>Next, we implement a couple of helper functions. First, we need to
figure out how many bits of the address to use for indexing into the
cache, which is a simple log2 operation. We also need to decide whether
to send eviction notices to the CPU. We should forward evictions only when
using the out-of-order CPU or when the ISA is x86 or ARM.</p>
<pre><code class="language-python">def getBlockSizeBits(self, system):
    bits = int(math.log(system.cache_line_size, 2))
    if 2**bits != system.cache_line_size.value:
        panic(&quot;Cache line size not a power of 2!&quot;)
    return bits

def sendEvicts(self, cpu):
    &quot;&quot;&quot;True if the CPU model or ISA requires sending evictions from caches
       to the CPU. Three scenarios warrant forwarding evictions to the CPU:
       1. The O3 model must keep the LSQ coherent with the caches
       2. The x86 mwait instruction is built on top of coherence
       3. The local exclusive monitor in ARM systems
    &quot;&quot;&quot;
    if type(cpu) is DerivO3CPU or \
       buildEnv['TARGET_ISA'] in ('x86', 'arm'):
        return True
    return False
</code></pre>
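<p>As a standalone illustration of the index-bit computation (plain Python,
nothing from gem5): for a 64-byte cache line, log2(64) = 6, so the low 6 bits
of the address select the byte within the block and set indexing starts at
bit 6.</p>
<pre><code class="language-python">import math

def block_size_bits(line_size):
    '''Number of address bits consumed by the byte-in-block offset.'''
    bits = int(math.log(line_size, 2))
    if 2 ** bits != line_size:
        raise ValueError('Cache line size not a power of 2!')
    return bits

# block_size_bits(64) is 6; a non-power-of-2 size such as 48 raises an error.
</code></pre>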
<p>Finally, we need to implement <code>connectQueues</code> to connect all of the
message buffers to the Ruby network. First, we create a message buffer
for the mandatory queue. Since this is an L1 cache and it will have a
sequencer, we need to instantiate this special message buffer. Next, we
instantiate a message buffer for each buffer in the controller. For all of
the &quot;to&quot; buffers, we set the &quot;master&quot; to the network (i.e., the
buffer sends messages into the network), and for all of the &quot;from&quot;
buffers, we set the &quot;slave&quot; to the network. These <em>names</em> are the
same as the gem5 ports, but <em>message buffers are not currently
implemented as gem5 ports</em>. In this protocol, we assume the
message buffers are ordered for simplicity.</p>
<pre><code class="language-python">def connectQueues(self, ruby_system):
    self.mandatoryQueue = MessageBuffer()

    self.requestToDir = MessageBuffer(ordered = True)
    self.requestToDir.master = ruby_system.network.slave
    self.responseToDirOrSibling = MessageBuffer(ordered = True)
    self.responseToDirOrSibling.master = ruby_system.network.slave
    self.forwardFromDir = MessageBuffer(ordered = True)
    self.forwardFromDir.slave = ruby_system.network.master
    self.responseFromDirOrSibling = MessageBuffer(ordered = True)
    self.responseFromDirOrSibling.slave = ruby_system.network.master
</code></pre>
<h3 id="directory"><a class="header" href="#directory">Directory</a></h3>
<p>Now, we can similarly implement the directory. There are three
differences from the L1 cache. First, we need to set the address ranges
for the directory. Since each directory corresponds to a particular
memory controller for (possibly) a subset of the address range, we need
to make sure the ranges match. The default address range for Ruby
controllers is <code>AllMemory</code>.</p>
<p>Next, we need to set the master port <code>memory</code>. This is the port that
sends messages when <code>queueMemoryRead/Write</code> is called in the SLICC code.
We set it to the memory controller's port. Similarly, in
<code>connectQueues</code> we need to instantiate the special message buffer
<code>responseFromMemory</code> like the <code>mandatoryQueue</code> in the L1 cache.</p>
<pre><code class="language-python">class DirController(Directory_Controller):

    _version = 0
    @classmethod
    def versionCount(cls):
        cls._version += 1 # Use count for this particular type
        return cls._version - 1

    def __init__(self, ruby_system, ranges, mem_ctrls):
        &quot;&quot;&quot;ranges are the memory ranges assigned to this controller.
        &quot;&quot;&quot;
        if len(mem_ctrls) &gt; 1:
            panic(&quot;This cache system can only be connected to one mem ctrl&quot;)
        super(DirController, self).__init__()
        self.version = self.versionCount()
        self.addr_ranges = ranges
        self.ruby_system = ruby_system
        self.directory = RubyDirectoryMemory()
        # Connect this directory to the memory side.
        self.memory = mem_ctrls[0].port
        self.connectQueues(ruby_system)

    def connectQueues(self, ruby_system):
        self.requestFromCache = MessageBuffer(ordered = True)
        self.requestFromCache.slave = ruby_system.network.master
        self.responseFromCache = MessageBuffer(ordered = True)
        self.responseFromCache.slave = ruby_system.network.master

        self.responseToCache = MessageBuffer(ordered = True)
        self.responseToCache.master = ruby_system.network.slave
        self.forwardToCache = MessageBuffer(ordered = True)
        self.forwardToCache.master = ruby_system.network.slave

        self.responseFromMemory = MessageBuffer()
</code></pre>
<h3 id="ruby-system"><a class="header" href="#ruby-system">Ruby System</a></h3>
<p>Now, we can implement the Ruby system object. For this object, the
constructor is simple. It just checks the SCons variable <code>PROTOCOL</code> to
be sure that we are using the right configuration file for the protocol
that was compiled. We cannot create the controllers in the constructor
because they require a pointer to this object. If we were to create
them in the constructor, there would be a circular dependence in the
SimObject hierarchy, which would cause infinite recursion when the
system is instantiated with <code>m5.instantiate</code>.</p>
<pre><code class="language-python">class MyCacheSystem(RubySystem):

    def __init__(self):
        if buildEnv['PROTOCOL'] != 'MSI':
            fatal(&quot;This system assumes MSI from learning gem5!&quot;)

        super(MyCacheSystem, self).__init__()
</code></pre>
<p>Instead of creating the controllers in the constructor, we create a new
function, <code>setup</code>, that creates all of the needed objects. First, we
create the network (we will look at this object next). For the network, we
need to set the number of virtual networks in the system.</p>
<p>Next, we instantiate all of the controllers. Here, we use a single
global list of the controllers to make it easier to connect them to the
network later. However, for more complicated cache topologies, it can
make sense to use multiple lists of controllers. We create one L1 cache
for each CPU and one directory for the system.</p>
<p>Then, we instantiate all of the sequencers, one for each CPU. Each
sequencer needs a pointer to the instruction and data cache to simulate
the correct latency when initially accessing the cache. In more
complicated systems, you also have to create sequencers for other
objects like DMA controllers.</p>
<p>After creating the sequencers, we set the sequencer variable on each L1
cache controller.</p>
<p>Then, we connect all of the controllers to the network and call the
<code>setup_buffers</code> function on the network.</p>
<p>We then have to set the &quot;port proxy&quot; for both the Ruby system and the
<code>system</code> for making functional accesses (e.g., loading the binary in SE
mode).</p>
<p>Finally, we connect all of the CPUs to the Ruby system. In this example,
we assume that there are only CPU sequencers, so the first CPU is
connected to the first sequencer, and so on. We also have to connect the
TLBs and interrupt ports (if we are using x86).</p>
<pre><code class="language-python">def setup(self, system, cpus, mem_ctrls):
    self.network = MyNetwork(self)

    self.number_of_virtual_networks = 3
    self.network.number_of_virtual_networks = 3

    self.controllers = \
        [L1Cache(system, self, cpu) for cpu in cpus] + \
        [DirController(self, system.mem_ranges, mem_ctrls)]

    self.sequencers = [RubySequencer(version = i,
                            # I/D cache is combined and grab from ctrl
                            icache = self.controllers[i].cacheMemory,
                            dcache = self.controllers[i].cacheMemory,
                            clk_domain = self.controllers[i].clk_domain,
                            ) for i in range(len(cpus))]

    for i,c in enumerate(self.controllers[0:len(self.sequencers)]):
        c.sequencer = self.sequencers[i]

    self.num_of_sequencers = len(self.sequencers)

    self.network.connectControllers(self.controllers)
    self.network.setup_buffers()

    self.sys_port_proxy = RubyPortProxy()
    system.system_port = self.sys_port_proxy.slave

    for i,cpu in enumerate(cpus):
        cpu.icache_port = self.sequencers[i].slave
        cpu.dcache_port = self.sequencers[i].slave
        isa = buildEnv['TARGET_ISA']
        if isa == 'x86':
            cpu.interrupts[0].pio = self.sequencers[i].master
            cpu.interrupts[0].int_master = self.sequencers[i].slave
            cpu.interrupts[0].int_slave = self.sequencers[i].master
        if isa == 'x86' or isa == 'arm':
            cpu.itb.walker.port = self.sequencers[i].slave
            cpu.dtb.walker.port = self.sequencers[i].slave
</code></pre>
<h3 id="network"><a class="header" href="#network">Network</a></h3>
<p>Finally, the last object we have to implement is the network. The
constructor is simple, but we need to declare an empty list for the list
of network interfaces (<code>netifs</code>).</p>
<p>Most of the code is in <code>connectControllers</code>. This function implements a
<em>very simple, unrealistic</em> point-to-point network. In other words, every
controller has a direct link to every other controller.</p>
<p>The Ruby network is made of three parts: routers that route data from
one router to another or to external controllers, external links that
link a controller to a router, and internal links that link two routers
together. First, we create a router for each controller. Then, we create
an external link from that router to the controller. Finally, we add all
of the &quot;internal&quot; links. Each router is connected to all other routers
to make the point-to-point network.</p>
<pre><code class="language-python">class MyNetwork(SimpleNetwork):

    def __init__(self, ruby_system):
        super(MyNetwork, self).__init__()
        self.netifs = []
        self.ruby_system = ruby_system

    def connectControllers(self, controllers):
        self.routers = [Switch(router_id = i) for i in range(len(controllers))]

        self.ext_links = [SimpleExtLink(link_id=i, ext_node=c,
                                        int_node=self.routers[i])
                          for i, c in enumerate(controllers)]

        link_count = 0
        self.int_links = []
        for ri in self.routers:
            for rj in self.routers:
                if ri == rj: continue # Don't connect a router to itself!
                link_count += 1
                self.int_links.append(SimpleIntLink(link_id = link_count,
                                                    src_node = ri,
                                                    dst_node = rj))
</code></pre>
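<p>As a quick sanity check of the topology built above (plain Python, nothing
from gem5): a fully connected network over n controllers has n routers, n
external links, and n*(n-1) directed internal links, since
<code>connectControllers</code> creates one link for each ordered pair of
distinct routers.</p>
<pre><code class="language-python">def point_to_point_counts(n):
    '''Router, external-link, and internal-link counts for n controllers.'''
    routers = n
    ext_links = n
    int_links = sum(1 for i in range(n) for j in range(n) if i != j)
    return routers, ext_links, int_links

# For the two L1 caches plus one directory in this chapter, n = 3, giving
# 3 routers, 3 external links, and 6 internal links.
</code></pre>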
<p>You can download the complete <code>msi_caches.py</code> file
<a href="part3//_pages/static/scripts/part3/configs/msi_caches.py">here</a>.</p>
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Running the simple Ruby system
doc: Learning gem5
parent: part3
permalink: /documentation/learning_gem5/part3/running/
author: Jason Lowe-Power</h2>
<h1 id="running-the-simple-ruby-system"><a class="header" href="#running-the-simple-ruby-system">Running the simple Ruby system</a></h1>
<p>Now, we can run our system with the MSI protocol!</p>
<p>As a more interesting test, below is a simple multithreaded program (note:
as of this writing, there is a bug in gem5 that prevents this code from
executing).</p>
<pre><code class="language-cpp">#include &lt;iostream&gt;
#include &lt;thread&gt;

using namespace std;

/*
 * c = a + b
 */
void array_add(int *a, int *b, int *c, int tid, int threads, int num_values)
{
    for (int i = tid; i &lt; num_values; i += threads) {
        c[i] = a[i] + b[i];
    }
}


int main(int argc, char *argv[])
{
    unsigned num_values;
    if (argc == 1) {
        num_values = 100;
    } else if (argc == 2) {
        num_values = atoi(argv[1]);
        if (num_values &lt;= 0) {
            cerr &lt;&lt; &quot;Usage: &quot; &lt;&lt; argv[0] &lt;&lt; &quot; [num_values]&quot; &lt;&lt; endl;
            return 1;
        }
    } else {
        cerr &lt;&lt; &quot;Usage: &quot; &lt;&lt; argv[0] &lt;&lt; &quot; [num_values]&quot; &lt;&lt; endl;
        return 1;
    }

    unsigned cpus = thread::hardware_concurrency();

    cout &lt;&lt; &quot;Running on &quot; &lt;&lt; cpus &lt;&lt; &quot; cores. &quot;;
    cout &lt;&lt; &quot;with &quot; &lt;&lt; num_values &lt;&lt; &quot; values&quot; &lt;&lt; endl;

    int *a, *b, *c;
    a = new int[num_values];
    b = new int[num_values];
    c = new int[num_values];

    if (!(a &amp;&amp; b &amp;&amp; c)) {
        cerr &lt;&lt; &quot;Allocation error!&quot; &lt;&lt; endl;
        return 2;
    }

    for (int i = 0; i &lt; num_values; i++) {
        a[i] = i;
        b[i] = num_values - i;
        c[i] = 0;
    }

    thread **threads = new thread*[cpus];

    // NOTE: -1 is required for this to work in SE mode.
    for (int i = 0; i &lt; cpus - 1; i++) {
        threads[i] = new thread(array_add, a, b, c, i, cpus, num_values);
    }
    // Execute the last thread with this thread context to appease SE mode
    array_add(a, b, c, cpus - 1, cpus, num_values);

    cout &lt;&lt; &quot;Waiting for other threads to complete&quot; &lt;&lt; endl;

    for (int i = 0; i &lt; cpus - 1; i++) {
        threads[i]-&gt;join();
    }

    delete[] threads;

    cout &lt;&lt; &quot;Validating...&quot; &lt;&lt; flush;

    int num_valid = 0;
    for (int i = 0; i &lt; num_values; i++) {
        if (c[i] == num_values) {
            num_valid++;
        } else {
            cerr &lt;&lt; &quot;c[&quot; &lt;&lt; i &lt;&lt; &quot;] is wrong.&quot;;
            cerr &lt;&lt; &quot; Expected &quot; &lt;&lt; num_values;
            cerr &lt;&lt; &quot; Got &quot; &lt;&lt; c[i] &lt;&lt; &quot;.&quot; &lt;&lt; endl;
        }
    }

    if (num_valid == num_values) {
        cout &lt;&lt; &quot;Success!&quot; &lt;&lt; endl;
        return 0;
    } else {
        return 2;
    }
}
</code></pre>
<p>With the above code compiled as <code>threads</code>, we can run gem5!</p>
<pre><code>build/MSI/gem5.opt configs/learning_gem5/part6/simple_ruby.py
</code></pre>
<p>The output should be something like the following. Most of the warnings
are unimplemented syscalls in SE mode due to using pthreads and can be
safely ignored for this simple example.</p>
<pre><code>gem5 Simulator System.  http://gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 compiled Sep  7 2017 12:39:51
gem5 started Sep 10 2017 20:56:35
gem5 executing on fuggle, pid 6687
command line: build/MSI/gem5.opt configs/learning_gem5/part6/simple_ruby.py

Global frequency set at 1000000000000 ticks per second
warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)
0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000
0: system.remote_gdb.listener: listening for remote gdb #1 on port 7001
Beginning simulation!
info: Entering event queue @ 0.  Starting simulation...
warn: Replacement policy updates recently became the responsibility of SLICC state machines. Make sure to setMRU() near callbacks in .sm files!
warn: ignoring syscall access(...)
warn: ignoring syscall access(...)
warn: ignoring syscall access(...)
warn: ignoring syscall mprotect(...)
warn: ignoring syscall access(...)
warn: ignoring syscall mprotect(...)
warn: ignoring syscall access(...)
warn: ignoring syscall mprotect(...)
warn: ignoring syscall access(...)
warn: ignoring syscall mprotect(...)
warn: ignoring syscall access(...)
warn: ignoring syscall mprotect(...)
warn: ignoring syscall mprotect(...)
warn: ignoring syscall mprotect(...)
warn: ignoring syscall mprotect(...)
warn: ignoring syscall mprotect(...)
warn: ignoring syscall mprotect(...)
warn: ignoring syscall mprotect(...)
warn: ignoring syscall set_robust_list(...)
warn: ignoring syscall rt_sigaction(...)
      (further warnings will be suppressed)
warn: ignoring syscall rt_sigprocmask(...)
      (further warnings will be suppressed)
info: Increasing stack size by one page.
info: Increasing stack size by one page.
Running on 2 cores. with 100 values
warn: ignoring syscall mprotect(...)
warn: ClockedObject: Already in the requested power state, request ignored
warn: ignoring syscall set_robust_list(...)
Waiting for other threads to complete
warn: ignoring syscall madvise(...)
Validating...Success!
Exiting @ tick 9386342000 because exiting with last active thread context
</code></pre>
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Debugging SLICC Protocols
doc: Learning gem5
parent: part3
permalink: /documentation/learning_gem5/part3/MSIdebugging/
author: Jason Lowe-Power</h2>
<h1 id="debugging-slicc-protocols"><a class="header" href="#debugging-slicc-protocols">Debugging SLICC Protocols</a></h1>
<p>In this section, I present the steps that I took while debugging the MSI
protocol implemented earlier in this chapter. Learning to debug
coherence protocols is a challenge. The best way is by working with
others who have written SLICC protocols in the past. However, since you,
the reader, cannot look over my shoulder while I am debugging a
protocol, I am trying to present the next-best thing.</p>
<p>Here, I first present some high-level suggestions for tackling protocol
errors. Next, I discuss some details about deadlocks and how to
understand the protocol traces that can be used to fix them. Then, I present
my experience debugging the MSI protocol in this chapter in a
stream-of-consciousness style. I will show the error that was generated,
then the solution to the error, sometimes with commentary on the
different tactics I tried while solving it.</p>
<h2 id="general-debugging-tips"><a class="header" href="#general-debugging-tips">General debugging tips</a></h2>
<p>Ruby has many useful debug flags. However, the most useful, by far, is
<code>ProtocolTrace</code>. Below, you will see several examples of using the
protocol trace to debug a protocol. The protocol trace prints every
transition for all controllers. Thus, you can simply trace the entire
execution of the cache system.</p>
<p>Other useful debug flags include:</p>
<p>RubyGenerated
:   Prints information from the generated Ruby code.</p>
<p>RubyPort/RubySequencer
:   See the details of sending/receiving messages into/out of ruby.</p>
<p>RubyNetwork
:   Prints entire network messages including the sender/receiver and the
data within the message for all messages. This flag is useful when
there is a data mismatch.</p>
<p>The first step to debugging a Ruby protocol is to run it with the Ruby
random tester. The random tester issues semi-random requests into the
Ruby system and checks to make sure the returned data is correct. To
make debugging faster, the random tester issues read requests from one
controller for a block and a write request for the same cache block (but
a different byte) from a different controller. Thus, the Ruby random
tester does a good job exercising the transient states and race
conditions in the protocol.</p>
<p>Unfortunately, the random tester's configuration is slightly different
than when using normal CPUs. Thus, we need to use a different
<code>MyCacheSystem</code> than before. You can download this different cache
system file
<a href="part3//_pages/static/scripts/part3/configs/test_caches.py">here</a> and you
can download the modified run script
<a href="part3//_pages/static/scripts/part3/configs/ruby_test.py">here</a>. The test
run script is mostly the same as the simple run script, but creates the
<code>RubyRandomTester</code> instead of CPUs.</p>
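<p>As a rough sketch, the tester setup in the test run script might look
like the following. The class and parameter names here are assumptions
for illustration; consult the downloadable script for the actual code.</p>
<pre><code class="language-python"># Hypothetical sketch of the tester setup in ruby_test.py. The parameter
# names below are illustrative assumptions, not guaranteed to match the
# real script.
tester = RubyRandomTester(num_cpus = 2,            # number of fake 'CPUs'
                          checks_to_complete = 100) # loads to issue

# The rest mirrors simple_ruby.py, except the cache system is given the
# tester's ports instead of the CPUs' cache ports.
system.tester = tester
system.caches = MyCacheSystem()
system.caches.setup(system, [tester], [system.mem_ctrl])
</code></pre>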
<p>It is often a good idea to first run the random tester with a single
&quot;CPU&quot;. Then, increase the number of loads from the default of 100 to
something that takes a few minutes to execute on your host system. Next,
if there are no errors, then increase the number of &quot;CPUs&quot; to two and
reduce the number of loads to 100 again. Then, start increasing the
number of loads. Finally, you can increase the number of CPUs to
something reasonable for the system you are trying to simulate. If you
can run the random tester for 10-15 minutes, you can be slightly
confident that the random tester isn't going to find any other bugs.</p>
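<p>Written out as a concrete schedule, the ramp-up above looks something
like this. The option names (<code>--num-cpus</code>, <code>--num-loads</code>) are hypothetical
stand-ins for whatever options your test script accepts; the point is the
ordering: one &quot;CPU&quot; first, then more loads, then more &quot;CPUs&quot;.</p>
<pre><code class="language-python"># Illustrative ramp-up schedule for the random tester. The command-line
# option names are hypothetical; adapt them to your ruby_test.py.
def ramp_up_runs(gem5='build/MSI/gem5.opt',
                 script='configs/learning_gem5/part3/ruby_test.py'):
    schedule = [(1, 100), (1, 100000), (2, 100), (2, 10000), (16, 10000)]
    return [f'{gem5} {script} --num-cpus={cpus} --num-loads={loads}'
            for cpus, loads in schedule]
</code></pre>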
<p>Once you have your protocol working with the random tester, you can move
on to using real applications. It is likely that real applications will
expose even more bugs in the protocol. If at all possible, it is much
easier to debug your protocol with the random tester than with real
applications!</p>
<h2 id="understanding-protocol-traces"><a class="header" href="#understanding-protocol-traces">Understanding Protocol Traces</a></h2>
<p>Unfortunately, despite extensive efforts to catch them, coherence
protocols (even heavily tested ones) will have bugs. Sometimes these
bugs are relatively simple fixes, while other times they are
insidious and difficult to track down. In the worst case, a bug
manifests as a deadlock: a bug that literally prevents the
application from making progress. A similar problem is livelock,
where the program runs forever due to a cycle somewhere in the system.
Whenever livelocks or deadlocks occur, the next thing to do is generate
a protocol trace. Traces print a running list of every transition that
is happening in the memory system: memory requests starting and
completing, L1 and directory transitions, etc. You can then use these
traces to identify why the deadlock is occurring. However, as we will
discuss in more detail below, debugging deadlocks in protocol traces is
often extremely challenging.</p>
<p>Here, we discuss what appears in the protocol trace to help explain what
is happening. To start with, let's look at a small snippet of a protocol
trace (we will discuss the details of this trace further below):</p>
<pre><code>...
4541   0    L1Cache         Replacement   MI_A&gt;MI_A   [0x4ac0, line 0x4ac0]
4542   0    L1Cache              PutAck   MI_A&gt;I      [0x4ac0, line 0x4ac0]
4549   0  Directory              MemAck   MI_M&gt;I      [0x4ac0, line 0x4ac0]
4641   0        Seq               Begin       &gt;       [0x4aec, line 0x4ac0] LD
4652   0    L1Cache                Load      I&gt;IS_D   [0x4ac0, line 0x4ac0]
4657   0  Directory                GetS      I&gt;S_M    [0x4ac0, line 0x4ac0]
4669   0  Directory             MemData    S_M&gt;S      [0x4ac0, line 0x4ac0]
4674   0        Seq                Done       &gt;       [0x4aec, line 0x4ac0] 33 cycles
4674   0    L1Cache       DataDirNoAcks   IS_D&gt;S      [0x4ac0, line 0x4ac0]
5321   0        Seq               Begin       &gt;       [0x4aec, line 0x4ac0] ST
5322   0    L1Cache               Store      S&gt;SM_AD  [0x4ac0, line 0x4ac0]
5327   0  Directory                GetM      S&gt;M_M    [0x4ac0, line 0x4ac0]
</code></pre>
<p>Every line in this trace has a set pattern in terms of what information
appears on that line. Specifically, the fields are:</p>
<ol>
<li>Current Tick: the tick in which the print occurs</li>
<li>Machine Version: The number of the machine where this request is
coming from. For example, if there are 4 L1 caches, then the numbers
would be 0-3. Assuming you have 1 L1 Cache per core, you can think
of this as representing the core the request is coming from.</li>
<li>Component: which part of the system is doing the print. Generally,
<code>Seq</code> is shorthand for Sequencer, <code>L1Cache</code> represents the L1 Cache,
&quot;Directory&quot; represents the directory, and so on. For L1 caches and
the directory, this represents the name of the machine type (i.e.,
what is after &quot;MachineType:&quot; in the <code>machine()</code> definition).</li>
<li>Action: what the component is doing. For example, &quot;Begin&quot; means the
Sequencer has received a new request, &quot;Done&quot; means that the
Sequencer is completing a previous request, and &quot;DataDirNoAcks&quot;
means that our DataDirNoAcks event is being triggered.</li>
<li>Transition (e.g., MI_A&gt;MI_A): what state transition this action
is doing (format: &quot;currentState&gt;nextState&quot;). If no transition is
happening, this is denoted with &quot;&gt;&quot;.</li>
<li>Address (e.g., [0x4ac0, line 0x4ac0]): the physical address of the
request (format: [wordAddress, lineAddress]). This address will
always be cache-block aligned except for requests from the
<code>Sequencer</code> and <code>mandatoryQueue</code>.</li>
<li>(Optional) Comments: optionally, there is one additional field to
pass comments. For example, the &quot;LD&quot; , &quot;ST&quot;, and &quot;33 cycles&quot; lines
use this extra field to pass additional information to the trace --
such as identifying the request as a load or store. For SLICC
transitions, <code>APPEND_TRANSITION_COMMENT</code> is often used for this, as we
<a href="part3/../cache-actions/">discussed previously</a>.</li>
</ol>
<p>Generally, spaces are used to separate each of these fields (the spaces
between the fields are added implicitly; you do not need to add them).
However, if a field is very long, there may be no spaces, or the line may
be shifted compared to other lines.</p>
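<p>The field layout above is regular enough to parse mechanically, which is
handy when grepping through very large traces. Here is a small illustrative
parser based on that layout; the pattern is an approximation (real lines can
deviate, e.g., with very long fields or missing spaces), not an official
trace grammar.</p>
<pre><code class="language-python">import re

# Approximate pattern for a ProtocolTrace line, following the fields
# described above: tick, machine version, component, action,
# currentState>nextState, [wordAddr, line lineAddr], optional comment.
TRACE_RE = re.compile(
    r'^\s*(\d+)\s+(\d+)\s+(\S+)\s+(\S+)\s+'   # tick, version, component, action
    r'(\S*)>(\S*)\s+'                          # currentState>nextState
    r'\[(0x[0-9a-f]+), line (0x[0-9a-f]+)\]'   # [wordAddress, lineAddress]
    r'(?:\s+(.*))?$')                          # optional trailing comment

def parse_trace_line(line):
    m = TRACE_RE.match(line)
    if m is None:
        return None
    tick, version, component, action, cur, nxt, word, cline, comment = m.groups()
    return dict(tick=int(tick), version=int(version), component=component,
                action=action, transition=(cur, nxt),
                word_addr=word, line_addr=cline, comment=comment)
</code></pre>
<p>For example, feeding it the Store line from the snippet above yields the
tick, the component, and the S to SM_AD transition as separate fields.</p>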
<p>Using this information, let's analyze the above snippet. The first
(tick) field tells us that this trace snippet is showing what was
happening in the memory system between ticks 4541 and 5327. In this
snippet, all of the requests are coming from L1Cache-0 (core 0) and
going to Directory-0 (the first bank of the directory). During this
time, we see several memory requests and state transitions for the cache
line 0x4ac0, both at the L1 caches and the directory. For example, in
tick 5322, the core executes a store to 0x4ac0. However, it currently
does not have that line in Modified in its cache (it is in Shared after
the core loaded it from ticks 4641-4674), so it needs to request
ownership for that line from the directory (which receives this request
in tick 5327). While waiting for ownership, L1Cache-0 transitions from S
(Shared) to SM_AD (a transient state -- was in S, going to M, waiting
for Ack and Data).</p>
<p>To add a print to the protocol trace, you will need to add a print with
these fields with the ProtocolTrace flag. For example, if you look at
<code>src/mem/ruby/system/Sequencer.cc</code>, you can see where the
<code>Seq               Begin</code> and <code>Seq                Done</code> trace prints
come from (search for ProtocolTrace).</p>
<h2 id="errors-i-ran-into-debugging-msi"><a class="header" href="#errors-i-ran-into-debugging-msi">Errors I ran into debugging MSI</a></h2>
<pre><code>gem5.opt: build/MSI/mem/ruby/system/Sequencer.cc:423: void Sequencer::readCallback(Addr, DataBlock&amp;, bool, MachineType, Cycles, Cycles, Cycles): Assertion `m_readRequestTable.count(makeLineAddress(address))' failed.
</code></pre>
<p>I'm an idiot, it was that I called readCallback in externalStoreHit
instead of writeCallback. It's good to start simple!</p>
<pre><code>gem5.opt: build/MSI/mem/ruby/network/MessageBuffer.cc:220: Tick MessageBuffer::dequeue(Tick, bool): Assertion `isReady(current_time)' failed.
</code></pre>
<p>I ran gem5 in GDB to get more information and looked at
<code>L1Cache_Controller::doTransitionWorker</code>. The current transition is:
event=L1Cache_Event_PutAck, state=L1Cache_State_MI_A,
next_state=@0x7fffffffd0a0: L1Cache_State_FIRST. More simply, this is
MI_A-&gt;I on a PutAck, and it happens in popResponseQueue.</p>
<p>The problem is that the PutAck is on the forward network, not the
response network.</p>
<pre><code>panic: Invalid transition
system.caches.controllers0 time: 3594 addr: 3264 event: DataDirAcks state: IS_D
</code></pre>
<p>Hmm. I think this shouldn't have happened. The needed acks should always
be 0 or you get data from the owner. Ah. So I implemented sendDataToReq
at the directory to always send the number of sharers. If we get this
response in IS_D we don't care whether or not there are sharers. Thus,
to keep things simple, I'm just going to transition to S on
DataDirAcks. This is a slight difference from the original
implementation in Sorin et al.</p>
<p>Well, actually, I think it's that we send the request after we add
ourselves to the sharer list. The above is <em>incorrect</em>. Sorin et al.
were not wrong! Let's try not doing that!</p>
<p>So, I fixed this by checking to see if the requestor is the <em>owner</em>
before sending the data to the requestor at the directory. Only if the
requestor is the owner do we include the number of sharers. Otherwise,
it doesn't matter at all and we just set the sharers to 0.</p>
<pre><code>panic: Invalid transition
system.caches.controllers0 time: 5332 addr: 0x4ac0 event: Inv state: SM_AD
</code></pre>
<p>First, let's look at where Inv is triggered: only when we receive an
invalidate. Maybe it's that we are on the sharer list and shouldn't be?</p>
<p>We can use protocol trace and grep to find what's going on.</p>
<pre><code>build/MSI/gem5.opt --debug-flags=ProtocolTrace configs/learning_gem5/part6/ruby_test.py | grep 0x4ac0
</code></pre>
<pre><code>...
4541   0    L1Cache         Replacement   MI_A&gt;MI_A   [0x4ac0, line 0x4ac0]
4542   0    L1Cache              PutAck   MI_A&gt;I      [0x4ac0, line 0x4ac0]
4549   0  Directory              MemAck   MI_M&gt;I      [0x4ac0, line 0x4ac0]
4641   0        Seq               Begin       &gt;       [0x4aec, line 0x4ac0] LD
4652   0    L1Cache                Load      I&gt;IS_D   [0x4ac0, line 0x4ac0]
4657   0  Directory                GetS      I&gt;S_M    [0x4ac0, line 0x4ac0]
4669   0  Directory             MemData    S_M&gt;S      [0x4ac0, line 0x4ac0]
4674   0        Seq                Done       &gt;       [0x4aec, line 0x4ac0] 33 cycles
4674   0    L1Cache       DataDirNoAcks   IS_D&gt;S      [0x4ac0, line 0x4ac0]
5321   0        Seq               Begin       &gt;       [0x4aec, line 0x4ac0] ST
5322   0    L1Cache               Store      S&gt;SM_AD  [0x4ac0, line 0x4ac0]
5327   0  Directory                GetM      S&gt;M_M    [0x4ac0, line 0x4ac0]
</code></pre>
<p>Maybe there is a sharer in the sharers list when there shouldn't be? We
can add a defensive assert in clearOwner and setOwner.</p>
<pre><code class="language-cpp">action(setOwner, &quot;sO&quot;, desc=&quot;Set the owner&quot;) {
    assert(getDirectoryEntry(address).Sharers.count() == 0);
    peek(request_in, RequestMsg) {
        getDirectoryEntry(address).Owner.add(in_msg.Requestor);
    }
}

action(clearOwner, &quot;cO&quot;, desc=&quot;Clear the owner&quot;) {
    assert(getDirectoryEntry(address).Sharers.count() == 0);
    getDirectoryEntry(address).Owner.clear();
}
</code></pre>
<p>Now, I get the following error:</p>
<pre><code>panic: Runtime Error at MSI-dir.sm:301: assert failure.
</code></pre>
<p>This is in setOwner. Well, actually this is OK since we need to have the
sharers still set until we count them to send the ack count to the
requestor. Let's remove that assert and see what happens. Nothing. That
didn't help anything.</p>
<p>When are invalidations sent from the directory? Only on S-&gt;M_M. So,
here, we need to remove ourselves from the invalidation list. I think we
need to keep ourselves in the sharer list since we subtract one when
sending the number of acks.</p>
<p>Note: I'm coming back to this a little later. It turns out that both of
these asserts are wrong. I found this out when running with more than
one CPU below. The sharers are set before clearing the Owner in M-&gt;S_D
on a GetS.</p>
<p>So, onto the next problem!</p>
<pre><code>panic: Deadlock detected: current_time: 56091 last_progress_time: 6090 difference:  50001 processor: 0
</code></pre>
<p>Deadlocks are the worst kind of error. Whatever caused the deadlock is
ancient history (i.e., likely happened many cycles earlier), and often
very hard to track down.</p>
<p>Looking at the tail of the protocol trace (note: sometimes you must
redirect the protocol trace to a file because it grows <em>very</em> big), I see
that there is an address stuck trying to be replaced. Let's start there.</p>
<pre><code>56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]
56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]
56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]
56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]
56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]
56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]
56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]
56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]
56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]
56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]
</code></pre>
<p>Before this replacement got stuck I see the following in the protocol
trace. Note: this is 50000 cycles in the past!</p>
<pre><code>...
5592   0    L1Cache               Store      S&gt;SM_AD  [0x5ac0, line 0x5ac0]
5597   0  Directory                GetM      S&gt;M_M    [0x5ac0, line 0x5ac0]
...
5641   0  Directory             MemData    M_M&gt;M      [0x5ac0, line 0x5ac0]
...
5646   0    L1Cache         DataDirAcks  SM_AD&gt;SM_A   [0x5ac0, line 0x5ac0]
</code></pre>
<p>Ah! This clearly should not be DataDirAcks since we only have a single
CPU! So, we seem to not be subtracting properly. Going back to the
previous error, I was wrong about needing to keep ourselves in the list.
I forgot that we no longer had the -1 thing. So, let's remove ourselves
from the sharing list before sending the invalidations when we
originally get the S-&gt;M request.</p>
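<p>As a sanity check, the fixed directory-side bookkeeping can be modeled in
a few lines. This is purely illustrative, not gem5 code: removing the
requestor from the sharer set before counting means a single-CPU system
correctly expects zero acks.</p>
<pre><code class="language-python"># Toy model of the fix described above: remove the requestor from the
# sharers before counting acks and choosing invalidation targets.
def acks_and_inv_targets(sharers, requestor):
    targets = set(sharers) - {requestor}  # caches that actually get an Inv
    return len(targets), targets
</code></pre>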
<p>So! With those changes the Ruby tester completes with a single core.
Now, to make it harder we need to increase the number of loads we do and
then the number of cores.</p>
<p>And, of course, when I increase it to 10,000 loads there is a deadlock.
Fun!</p>
<p>What I'm seeing at the end of the protocol trace is the following.</p>
<pre><code>144684   0    L1Cache         Replacement   MI_A&gt;MI_A   [0x5bc0, line 0x5bc0]
...
144685   0  Directory                GetM   MI_M&gt;MI_M   [0x54c0, line 0x54c0]
...
144685   0    L1Cache         Replacement   MI_A&gt;MI_A   [0x5bc0, line 0x5bc0]
...
144686   0  Directory                GetM   MI_M&gt;MI_M   [0x54c0, line 0x54c0]
...
144686   0    L1Cache         Replacement   MI_A&gt;MI_A   [0x5bc0, line 0x5bc0]
...
144687   0  Directory                GetM   MI_M&gt;MI_M   [0x54c0, line 0x54c0]
...
</code></pre>
<p>This is repeated for a long time.</p>
<p>It seems that there is a circular dependence or something like that
causing this deadlock.</p>
<p>Well, it seems that I was correct. The order of the in_ports really
matters! In the directory, I previously had the order: request,
response, memory. However, there was a memory packet that was blocked
because the request queue was blocked, which caused the circular
dependence and the deadlock. The order <em>should</em> be memory, response, and
request. I believe the memory/response order doesn't matter since no
responses depend on memory and vice versa.</p>
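<p>The effect of in_port ordering can be illustrated with a toy model.
The key assumption (which matches the behavior observed here) is that a
stalled message at an earlier in_port keeps later in_ports from being
serviced; this is not gem5 code, just a sketch of the dependence.</p>
<pre><code class="language-python">from collections import deque

# Toy model: a GetM at the request port cannot be processed until the
# MemData response has been handled. If the controller polls the request
# port first and its head-of-line message stalls, the memory port is
# never serviced, so (request, memory) never makes progress while
# (memory, request) drains everything.
def drains(port_order, max_cycles=10):
    mem = deque(['MemData'])      # response from the memory controller
    req = deque(['GetM'])         # stalled until MemData is handled
    state = {'mem_handled': False}

    def service(port):
        if port == 'memory':
            if not mem:
                return 'idle'
            mem.popleft()
            state['mem_handled'] = True
            return 'handled'
        if not req:
            return 'idle'
        if state['mem_handled']:
            req.popleft()
            return 'handled'
        return 'stalled'          # blocks the controller this cycle

    for _ in range(max_cycles):
        for port in port_order:
            result = service(port)
            if result != 'idle':
                break             # stop at the first port with a ready message
        if not mem and not req:
            return True           # everything drained: forward progress
    return False
</code></pre>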
<p>Now, let's try with two CPUs. First thing I run into is an assert
failure. I'm seeing the first assert in setState fail.</p>
<pre><code class="language-cpp">void setState(Addr addr, State state) {
    if (directory.isPresent(addr)) {
        if (state == State:M) {
            assert(getDirectoryEntry(addr).Owner.count() == 1);
            assert(getDirectoryEntry(addr).Sharers.count() == 0);
        }
        getDirectoryEntry(addr).DirState := state;
        if (state == State:I)  {
            assert(getDirectoryEntry(addr).Owner.count() == 0);
            assert(getDirectoryEntry(addr).Sharers.count() == 0);
        }
    }
}
</code></pre>
<p>To track this problem down, let's add a debug statement (DPRINTF) and
run with protocol trace. First I added the following line just before
the assert. Note that you are required to use the RubySlicc debug flag.
This is the only debug flag included in the generated SLICC files.</p>
<pre><code class="language-cpp">DPRINTF(RubySlicc, &quot;Owner %s\n&quot;, getDirectoryEntry(addr).Owner);
</code></pre>
<p>Then, I see the following output when running with ProtocolTrace and
RubySlicc.</p>
<pre><code>118   0  Directory             MemData    M_M&gt;M      [0x400, line 0x400]
118: system.caches.controllers2: MSI-dir.sm:160: Owner [NetDest (16) 1 0  -  -  - 0  -  -  -  -  -  -  -  -  -  -  -  -  - ]
118   0  Directory                GetM      M&gt;M      [0x400, line 0x400]
118: system.caches.controllers2: MSI-dir.sm:160: Owner [NetDest (16) 1 1  -  -  - 0  -  -  -  -  -  -  -  -  -  -  -  -  - ]
</code></pre>
<p>It looks like when we process a GetM in state M we need to first
clear the owner before adding the new owner. The other option is that, in
setOwner, we could have set the Owner directly instead of adding it
to the NetDest.</p>
<p>Oooo! This is a new error!</p>
<pre><code>panic: Runtime Error at MSI-dir.sm:229: Unexpected message type..
</code></pre>
<p>What is this message that fails? Let's use the RubyNetwork debug flag to
try to track down what message is causing this error. A few lines above
the error I see the following message whose destination is the
directory.</p>
<p>The destination is a NetDest, which is a bitvector of MachineIDs. These
are split into multiple sections. I know I'm running with two CPUs, so
the first two 0's are for the CPUs, and the other 1 must be for the
directory.</p>
<pre><code>2285: PerfectSwitch-2: Message: [ResponseMsg: addr = [0x8c0, line 0x8c0] Type = InvAck Sender = L1Cache-1 Destination = [NetDest (16) 0 0  -  -  - 1  -  -  -  -  -  -  -  -  -  -  -  -  - ] DataBlk = [ 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0xb1 0xb2 0xb3 0xb4 0xca 0xcb 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 ] MessageSize = Control Acks = 0 ]
</code></pre>
<p>This message has the type InvAck, which is clearly wrong! It seems that
we are setting the requestor wrong when we send the invalidate (Inv)
message to the L1 caches from the directory.</p>
<p>Yes. This is the problem. We need to make the requestor the original
requestor. This was already correct for the FwdGetS/M, but I missed the
invalidate somehow. On to the next error!</p>
<pre><code>panic: Invalid transition
system.caches.controllers0 time: 2287 addr: 0x8c0 event: LastInvAck state: SM_AD
</code></pre>
<p>This seems to be that I am not counting the acks correctly. It could
also be that the directory is much slower than the other caches at
responding since it has to get the data from memory.</p>
<p>If it's the latter (which I should be sure to verify), what we could do
is include an ack requirement for the directory, too. Then, when the
directory sends the data (and the owner, too) decrement the needed acks
and trigger the event based on the new ack count.</p>
<p>Actually, that first hypothesis was not quite right. I printed out the
number of acks whenever we receive an InvAck and what's happening is
that the other cache is responding with an InvAck before the directory
has told it how many acks to expect.</p>
<p>So, what we need to do is something like what I was talking about above.
First of all, we will need to let the acks drop below 0 and add the
total acks to it from the directory message. Then, we are going to have
to complicate the logic for triggering last ack, etc.</p>
<p>Ok. So now we're letting the tbe.Acks drop below 0 and then adding the
directory acks whenever they show up.</p>
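<p>This &quot;let the acks go negative&quot; trick can be sketched abstractly. The
following is an illustration of the counting scheme only (the real logic
lives in the SLICC actions and transitions): InvAcks decrement a counter
that may dip below zero, the directory's data message adds the expected
total, and all acks have effectively arrived once the counter is zero and
the directory's count is known.</p>
<pre><code class="language-python"># Toy model of ack counting when InvAcks can race ahead of the
# directory's data message.
class AckCounter:
    def __init__(self):
        self.acks = 0
        self.dir_count_known = False

    def on_inv_ack(self):
        self.acks -= 1            # may legitimately go below zero
        return self.all_acks_received()

    def on_dir_data(self, expected_acks):
        self.acks += expected_acks
        self.dir_count_known = True
        return self.all_acks_received()

    def all_acks_received(self):
        return self.dir_count_known and self.acks == 0
</code></pre>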
<p>Next error: this is a tough one. The error is now that the data does not
match what it should. As with the deadlock, the data could have been
corrupted in the ancient past. I believe the address is the last one in
the protocol trace.</p>
<pre><code>panic: Action/check failure: proc: 0 address: 19688 data: 0x779e6d0
byte_number: 0 m_value+byte_number: 53 byte: 0 [19688, value: 53,
status: Check_Pending, initiating node: 0, store_count: 4]Time:
5843
</code></pre>
<p>So, it could be something to do with ack counts, though I don't think
this is the issue. Either way, it's a good idea to annotate the protocol
trace with the ack information. To do this, we can add comments to the
transition with APPEND_TRANSITION_COMMENT.</p>
<pre><code class="language-cpp">action(decrAcks, &quot;da&quot;, desc=&quot;Decrement the number of acks&quot;) {
    assert(is_valid(tbe));
    tbe.Acks := tbe.Acks - 1;
    APPEND_TRANSITION_COMMENT(&quot;Acks: &quot;);
    APPEND_TRANSITION_COMMENT(tbe.Acks);
}
</code></pre>
<pre><code>5737   1    L1Cache              InvAck  SM_AD&gt;SM_AD  [0x400, line 0x400] Acks: -1
</code></pre>
<p>For these data issues, the debug flag RubyNetwork is useful because it
prints the value of the data blocks at every point in the network.
For instance, for the address in question above, it looks like the data
block is all 0's after loading from main memory. I believe this should
have valid data. In fact, if we go back in time a bit, we see that there
were some non-zero bytes.</p>
<pre><code>5382   1    L1Cache                 Inv      S&gt;I      [0x4cc0, line 0x4cc0]

5383: PerfectSwitch-1: Message: [ResponseMsg: addr = [0x4cc0, line 0x4cc0] Type = InvAck Sender = L1Cache-1 Destination = [NetDest (16) 1 0  -  -  - 0  -  -  -  -  -  -  -  -  -  -  -  -  - ] DataBlk = [ 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x35 0x36 0x37 0x61 0x6d 0x6e 0x6f 0x70 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 ] MessageSize = Control Acks = 0 ]
...
5389   0  Directory             MemData    M_M&gt;M      [0x4cc0, line 0x4cc0]
5390: PerfectSwitch-2: incoming: 0
5390: PerfectSwitch-2: Message: [ResponseMsg: addr = [0x4cc0, line 0x4cc0] Type = Data Sender = Directory-0 Destination = [NetDest (16) 1 0  -  -  - 0  -  -  -  -  -  -  -  -  -  -  -  -  - ] DataBlk = [ 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 ] MessageSize = Data Acks = 1 ]
</code></pre>
<p>It seems that memory is not being updated correctly on the M-&gt;S
transition. After lots of digging and using the MemoryAccess debug flag
to see exactly what was being read and written to main memory, I found
that in sendDataToMem I was using the request_in. This is right for
PutM, but not for Data. We need another action to send data from the
response queue!</p>
<pre><code>panic: Invalid transition
system.caches.controllers0 time: 44381 addr: 0x7c0 event: Inv state: SM_AD
</code></pre>
<p>Invalid transition is my personal favorite kind of SLICC error. For this
error, you know exactly what address caused it, and it's very easy to
trace through the protocol trace to find what went wrong. However, in
this case, nothing went wrong, I just forgot to put this transition in!
Easy fix!</p>
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: Configuring for a standard protocol
doc: Learning gem5
parent: part3
permalink: /documentation/learning_gem5/part3/simple-MI_example/
author: Jason Lowe-Power</h2>
<h1 id="configuring-for-a-standard-protocol"><a class="header" href="#configuring-for-a-standard-protocol">Configuring for a standard protocol</a></h1>
<p>You can easily adapt the simple example configurations from this part to
the other SLICC protocols in gem5. In this chapter, we will briefly look
at an example with <code>MI_example</code>, though this can be easily extended to
other protocols.</p>
<p>However, these simple configuration files only work in syscall
emulation mode. Full system mode adds some complications, such as DMA
controllers, but these scripts could be extended to support it.</p>
<p>For <code>MI_example</code>, we can use exactly the same runscript as before
(<code>simple_ruby.py</code>), we just need to implement a different
<code>MyCacheSystem</code> (and import that file in <code>simple_ruby.py</code>). Below
are the classes needed for <code>MI_example</code>. There are only a couple of changes
from <code>MSI</code>, mostly due to different naming schemes. You can download the
file
<a href="part3//_pages/static/scripts/part3/configs/ruby_caches_MI_example.py">here</a>.</p>
<pre><code class="language-python">class MyCacheSystem(RubySystem):

    def __init__(self):
        if buildEnv['PROTOCOL'] != 'MI_example':
            fatal(&quot;This system assumes MI_example!&quot;)

        super(MyCacheSystem, self).__init__()

    def setup(self, system, cpus, mem_ctrls):
        &quot;&quot;&quot;Set up the Ruby cache subsystem. Note: This can't be done in the
           constructor because many of these items require a pointer to the
           ruby system (self). This causes infinite recursion in initialize()
           if we do this in the __init__.
        &quot;&quot;&quot;
        # Ruby's global network.
        self.network = MyNetwork(self)

        # MI example uses 5 virtual networks
        self.number_of_virtual_networks = 5
        self.network.number_of_virtual_networks = 5

        # There is a single global list of all of the controllers to make it
        # easier to connect everything to the global network. This can be
        # customized depending on the topology/network requirements.
        # Create one controller for each L1 cache (and the cache mem obj.)
        # Create a single directory controller (Really the memory cntrl)
        self.controllers = \
            [L1Cache(system, self, cpu) for cpu in cpus] + \
            [DirController(self, system.mem_ranges, mem_ctrls)]

        # Create one sequencer per CPU. In many systems this is more
        # complicated since you have to create sequencers for DMA controllers
        # and other controllers, too.
        self.sequencers = [RubySequencer(version = i,
                                # I/D cache is combined and grab from ctrl
                                icache = self.controllers[i].cacheMemory,
                                dcache = self.controllers[i].cacheMemory,
                                clk_domain = self.controllers[i].clk_domain,
                                ) for i in range(len(cpus))]

        for i,c in enumerate(self.controllers[0:len(cpus)]):
            c.sequencer = self.sequencers[i]

        self.num_of_sequencers = len(self.sequencers)

        # Create the network and connect the controllers.
        # NOTE: This is quite different if using Garnet!
        self.network.connectControllers(self.controllers)
        self.network.setup_buffers()

        # Set up a proxy port for the system_port. Used for load binaries and
        # other functional-only things.
        self.sys_port_proxy = RubyPortProxy()
        system.system_port = self.sys_port_proxy.slave

        # Connect the cpu's cache, interrupt, and TLB ports to Ruby
        for i,cpu in enumerate(cpus):
            cpu.icache_port = self.sequencers[i].slave
            cpu.dcache_port = self.sequencers[i].slave
            isa = buildEnv['TARGET_ISA']
            if isa == 'x86':
                cpu.interrupts[0].pio = self.sequencers[i].master
                cpu.interrupts[0].int_master = self.sequencers[i].slave
                cpu.interrupts[0].int_slave = self.sequencers[i].master
            if isa == 'x86' or isa == 'arm':
                cpu.itb.walker.port = self.sequencers[i].slave
                cpu.dtb.walker.port = self.sequencers[i].slave

class L1Cache(L1Cache_Controller):

    _version = 0
    @classmethod
    def versionCount(cls):
        cls._version += 1 # Use count for this particular type
        return cls._version - 1

    def __init__(self, system, ruby_system, cpu):
        &quot;&quot;&quot;CPUs are needed to grab the clock domain and system is needed for
           the cache block size.
        &quot;&quot;&quot;
        super(L1Cache, self).__init__()

        self.version = self.versionCount()
        # This is the cache memory object that stores the cache data and tags
        self.cacheMemory = RubyCache(size = '16kB',
                               assoc = 8,
                               start_index_bit = self.getBlockSizeBits(system))
        self.clk_domain = cpu.clk_domain
        self.send_evictions = self.sendEvicts(cpu)
        self.ruby_system = ruby_system
        self.connectQueues(ruby_system)

    def getBlockSizeBits(self, system):
        bits = int(math.log(system.cache_line_size, 2))
        if 2**bits != system.cache_line_size.value:
            panic(&quot;Cache line size not a power of 2!&quot;)
        return bits

    def sendEvicts(self, cpu):
        &quot;&quot;&quot;True if the CPU model or ISA requires sending evictions from caches
           to the CPU. Three scenarios warrant forwarding evictions to the CPU:
           1. The O3 model must keep the LSQ coherent with the caches
           2. The x86 mwait instruction is built on top of coherence
           3. The local exclusive monitor in ARM systems
        &quot;&quot;&quot;
        if type(cpu) is DerivO3CPU or \
           buildEnv['TARGET_ISA'] in ('x86', 'arm'):
            return True
        return False

    def connectQueues(self, ruby_system):
        &quot;&quot;&quot;Connect all of the queues for this controller.
        &quot;&quot;&quot;
        self.mandatoryQueue = MessageBuffer()
        self.requestFromCache = MessageBuffer(ordered = True)
        self.requestFromCache.master = ruby_system.network.slave
        self.responseFromCache = MessageBuffer(ordered = True)
        self.responseFromCache.master = ruby_system.network.slave
        self.forwardToCache = MessageBuffer(ordered = True)
        self.forwardToCache.slave = ruby_system.network.master
        self.responseToCache = MessageBuffer(ordered = True)
        self.responseToCache.slave = ruby_system.network.master

class DirController(Directory_Controller):

    _version = 0
    @classmethod
    def versionCount(cls):
        cls._version += 1 # Use count for this particular type
        return cls._version - 1

    def __init__(self, ruby_system, ranges, mem_ctrls):
        &quot;&quot;&quot;ranges are the memory ranges assigned to this controller.
        &quot;&quot;&quot;
        if len(mem_ctrls) &gt; 1:
            panic(&quot;This cache system can only be connected to one mem ctrl&quot;)
        super(DirController, self).__init__()
        self.version = self.versionCount()
        self.addr_ranges = ranges
        self.ruby_system = ruby_system
        self.directory = RubyDirectoryMemory()
        # Connect this directory to the memory side.
        self.memory = mem_ctrls[0].port
        self.connectQueues(ruby_system)

    def connectQueues(self, ruby_system):
        self.requestToDir = MessageBuffer(ordered = True)
        self.requestToDir.slave = ruby_system.network.master
        self.dmaRequestToDir = MessageBuffer(ordered = True)
        self.dmaRequestToDir.slave = ruby_system.network.master

        self.responseFromDir = MessageBuffer()
        self.responseFromDir.master = ruby_system.network.slave
        self.dmaResponseFromDir = MessageBuffer(ordered = True)
        self.dmaResponseFromDir.master = ruby_system.network.slave
        self.forwardFromDir = MessageBuffer()
        self.forwardFromDir.master = ruby_system.network.slave
        self.responseFromMemory = MessageBuffer()

class MyNetwork(SimpleNetwork):
    &quot;&quot;&quot;A simple point-to-point network. This doesn't use garnet.
    &quot;&quot;&quot;

    def __init__(self, ruby_system):
        super(MyNetwork, self).__init__()
        self.netifs = []
        self.ruby_system = ruby_system

    def connectControllers(self, controllers):
        &quot;&quot;&quot;Connect all of the controllers to routers and connect the routers
           together in a point-to-point network.
        &quot;&quot;&quot;
        # Create one router/switch per controller in the system
        self.routers = [Switch(router_id = i) for i in range(len(controllers))]

        # Make a link from each controller to the router. The link goes
        # externally to the network.
        self.ext_links = [SimpleExtLink(link_id=i, ext_node=c,
                                        int_node=self.routers[i])
                          for i, c in enumerate(controllers)]

        # Make an &quot;internal&quot; link (internal to the network) between every pair
        # of routers.
        link_count = 0
        self.int_links = []
        for ri in self.routers:
            for rj in self.routers:
                if ri == rj: continue # Don't connect a router to itself!
                link_count += 1
                self.int_links.append(SimpleIntLink(link_id = link_count,
                                                    src_node = ri,
                                                    dst_node = rj))
</code></pre>
<div style="break-before: page; page-break-before: always;"></div><hr />
<h2>layout: documentation
title: gem5 101
doc: Learning gem5
parent: learning_gem5
permalink: /documentation/learning_gem5/gem5_101/
authors: Swapnil Haria</h2>
<h1 id="gem5-101"><a class="header" href="#gem5-101">gem5 101</a></h1>
<p>This is a six-part course that will help you master the basics of gem5 and illustrate some of its common uses. The course is based on assignments from CS 752 and CS 757, graduate computer architecture courses taught at the University of Wisconsin-Madison.</p>
<h2 id="first-steps-with-gem5-and-hello-world"><a class="header" href="#first-steps-with-gem5-and-hello-world">First steps with gem5, and Hello, World!</a></h2>
<p><a href="http://pages.cs.wisc.edu/%7Edavid/courses/cs752/Fall2015/wiki/index.php?n=Main.Homework1">Part 1</a></p>
<p>In the first part, you will learn to download and build gem5 correctly, create a simple configuration script for a simple system, write a simple C program, and run a gem5 simulation. You will then introduce a two-level cache hierarchy into your system (the fun stuff). Finally, you will look at how changing system parameters, such as memory type, processor frequency, and processor complexity, affects the performance of a simple program.</p>
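<p>As a preview of what such a configuration script looks like, here is a minimal SE-mode sketch in the style of the Learning gem5 <code>simple.py</code> (the memory controller class and the path to the <code>hello</code> binary are placeholders for whatever your gem5 build provides):</p>

```python
# Sketch of a minimal SE-mode configuration, assuming an x86 gem5 build:
#   build/X86/gem5.opt simple.py
import m5
from m5.objects import *

system = System()
system.clk_domain = SrcClockDomain(clock='1GHz',
                                   voltage_domain=VoltageDomain())
system.mem_mode = 'timing'
system.mem_ranges = [AddrRange('512MB')]

# A single simple CPU connected straight to the memory bus (no caches yet)
system.cpu = TimingSimpleCPU()
system.membus = SystemXBar()
system.cpu.icache_port = system.membus.slave
system.cpu.dcache_port = system.membus.slave

# x86 needs its interrupt controller wired to the memory system
system.cpu.createInterruptController()
system.cpu.interrupts[0].pio = system.membus.master
system.cpu.interrupts[0].int_master = system.membus.slave
system.cpu.interrupts[0].int_slave = system.membus.master

system.mem_ctrl = DDR3_1600_8x8()   # placeholder memory controller class
system.mem_ctrl.range = system.mem_ranges[0]
system.mem_ctrl.port = system.membus.master
system.system_port = system.membus.slave

# Placeholder path; point this at your statically linked test binary
process = Process(cmd=['tests/test-progs/hello/bin/x86/linux/hello'])
system.cpu.workload = process
system.cpu.createThreads()

root = Root(full_system=False, system=system)
m5.instantiate()
print('Exiting @ tick %i because %s'
      % (m5.curTick(), m5.simulate().getCause()))
```

Adding the two-level cache hierarchy then amounts to inserting L1/L2 cache objects between the CPU ports and the memory bus.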
<h2 id="getting-down-and-dirty"><a class="header" href="#getting-down-and-dirty">Getting down and dirty</a></h2>
<p><a href="http://pages.cs.wisc.edu/%7Edavid/courses/cs752/Fall2015/wiki/index.php?n=Main.Homework2">Part 2</a></p>
<p>In the first part, we used gem5's existing functionality directly. Now we will see gem5's flexibility and usefulness by extending the simulator's functionality. We will walk you through the implementation of an x86 instruction (FSUBR) that is currently missing from gem5. This will introduce you to the language gem5 uses to describe instruction sets and show how instructions are decoded and broken down into the micro-ops that the processor ultimately executes.</p>
<h2 id="pipelining-solves-everything"><a class="header" href="#pipelining-solves-everything">Pipelining solves everything</a></h2>
<p><a href="http://pages.cs.wisc.edu/%7Edavid/courses/cs752/Fall2015/wiki/index.php?n=Main.Homework3">Part 3</a></p>
<p>From the ISA, we now turn to processor microarchitecture. Part 3 introduces the various CPU models implemented in gem5 and analyzes the performance of a pipelined implementation. Specifically, you will see how the latency and bandwidth of the different pipeline stages affect overall performance. As a bonus, it also includes example uses of gem5 pseudo-instructions.</p>
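<p>To give a flavor of the knobs Part 3 explores, the following sketch varies the width and inter-stage delay of gem5's out-of-order model (the parameter names match the O3 model of this era; treat the exact values as placeholders, and assume <code>system</code> comes from an existing configuration script):</p>

```python
# Sketch: tweaking O3 pipeline width and stage-to-stage latency.
from m5.objects import DerivO3CPU

system.cpu = DerivO3CPU()
system.cpu.fetchWidth = 2           # instructions fetched per cycle
system.cpu.decodeWidth = 2          # instructions decoded per cycle
system.cpu.issueWidth = 2           # instructions issued per cycle
system.cpu.fetchToDecodeDelay = 2   # extra cycles between fetch and decode
```

Narrowing a stage or stretching a delay like this, then comparing the resulting CPI, is exactly the kind of experiment the assignment asks for.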
<h2 id="always-be-experimenting"><a class="header" href="#always-be-experimenting">Always be experimenting</a></h2>
<p><a href="http://pages.cs.wisc.edu/%7Edavid/courses/cs752/Fall2015/wiki/index.php?n=Main.Homework4">Part 4</a></p>
<p>Exploiting instruction-level parallelism (ILP) is a useful way to improve single-thread performance. Branch prediction and predication are two common techniques for exploiting ILP. In this part, we use gem5 to test the hypothesis that a branch-avoiding graph algorithm performs better than one that uses branches. This is a useful exercise in understanding how gem5 can fit into your research workflow.</p>
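<p>The branch-avoidance idea being tested can be illustrated outside gem5 with two functionally identical loops, one that branches on the data and one that turns the comparison into arithmetic (a sketch; whether the branchless form actually wins depends on the branch predictor and the input):</p>

```python
def count_less_branchy(values, pivot):
    """Counts with a data-dependent branch in the loop body."""
    count = 0
    for v in values:
        if v < pivot:
            count += 1
    return count

def count_less_branchless(values, pivot):
    """Same result, but the comparison is used as a 0/1 value,
    so there is no data-dependent branch to mispredict."""
    count = 0
    for v in values:
        count += v < pivot
    return count
```

Both functions return the same counts; in a compiled language the second form typically becomes a conditional move or set-on-condition instruction rather than a jump, which is the effect the graph-algorithm hypothesis relies on.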
<h2 id="cold-hard-cache"><a class="header" href="#cold-hard-cache">Cold, hard, cache</a></h2>
<p><a href="http://pages.cs.wisc.edu/%7Edavid/courses/cs752/Fall2015/wiki/index.php?n=Main.Homework5">Part 5</a></p>
<p>After looking at the processor core, we now turn our attention to the cache hierarchy. We continue our focus on experimentation and consider trade-offs in cache design, such as replacement policies and set associativity. Along the way, we learn more about the gem5 simulator itself and create our first SimObject!</p>
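<p>For reference, the Python half of a first SimObject is only a few lines (a skeleton in the spirit of the HelloObject from the Learning gem5 book; the file paths and names here are illustrative, and a matching C++ class must be provided separately):</p>

```python
# SimObject declaration (e.g. src/learning_gem5/HelloObject.py).
# The C++ side must define a HelloObject class in the named header.
from m5.params import *
from m5.SimObject import SimObject

class HelloObject(SimObject):
    type = 'HelloObject'            # must match the C++ class name
    cxx_header = "learning_gem5/hello_object.hh"
```

Once registered in the SConscript and built, the object can be instantiated from any configuration script like the built-in SimObjects above.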
<h2 id="single-core-is-so-2000-and-late"><a class="header" href="#single-core-is-so-2000-and-late">Single-core is so 2000-and-late</a></h2>
<p><a href="http://pages.cs.wisc.edu/%7Emarkhill/cs757/Spring2016/wiki/index.php?n=Main.Homework3">Part 6</a></p>
<p>For the final part, we use multiple cores and full-system simulation at the same time! We analyze the performance of a simple application as it is given more compute resources (cores). We also boot a full, unmodified operating system (Linux) on a gem5-simulated target system. Best of all, we show you how to create your own, simpler version of the intimidating fs.py configuration script, one that you can easily modify.</p>
<h2 id="finished"><a class="header" href="#finished">Finished!</a></h2>
<p>Congratulations, you are now familiar with the basics of gem5. You may now wear the &quot;Bro, do you even gem5?&quot; t-shirt (if you can find one).</p>
<h1 id="credits"><a class="header" href="#credits">Credits</a></h1>
<p>Many people have contributed to the development of these course assignments over the years. If we have missed anyone, please add them here.</p>
<ul>
<li>The Multifacet research group at the University of Wisconsin-Madison</li>
<li>Profs. Mark Hill and David Wood</li>
<li>Jason Lowe-Power</li>
<li>Nilay Vaish</li>
<li>Lena Olson</li>
<li>Swapnil Haria</li>
<li>Jayneel Gandhi</li>
</ul>
<p>Any questions or concerns about this tutorial should be directed to the gem5-users mailing list, not to the individual contacts listed in the assignments.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="part4_gem5_102"><a class="header" href="#part4_gem5_102">part4_gem5_102</a></h1>

                    </main>

                    <nav class="nav-wrapper" aria-label="Page navigation">
                        <!-- Mobile navigation buttons -->
                        <div style="clear: both"></div>
                    </nav>
                </div>
            </div>

            <nav class="nav-wide-wrapper" aria-label="Page navigation">
            </nav>

        </div>

        <script type="text/javascript">
            window.playground_copyable = true;
        </script>
        <script src="elasticlunr.min.js" type="text/javascript" charset="utf-8"></script>
        <script src="mark.min.js" type="text/javascript" charset="utf-8"></script>
        <script src="searcher.js" type="text/javascript" charset="utf-8"></script>
        <script src="clipboard.min.js" type="text/javascript" charset="utf-8"></script>
        <script src="highlight.js" type="text/javascript" charset="utf-8"></script>
        <script src="book.js" type="text/javascript" charset="utf-8"></script>

        <!-- Custom JS scripts -->
        <script type="text/javascript">
        window.addEventListener('load', function() {
            window.setTimeout(window.print, 100);
        });
        </script>
    </body>
</html>
