<!DOCTYPE html>
<html>
<head>
	<!-- Global site tag (gtag.js) - Google Analytics -->
	<script async src="https://www.googletagmanager.com/gtag/js?id=UA-133422980-2"></script>
	<script>
	  window.dataLayer = window.dataLayer || [];
	  function gtag(){dataLayer.push(arguments);}
	  gtag('js', new Date());

	  gtag('config', 'UA-133422980-2');
	</script>

	<meta charset="utf-8">
	<meta http-equiv="x-ua-compatible" content="ie=edge">
	<meta name="viewport" content="width=device-width, initial-scale=1">

	<title>
		gem5: Search 
	</title>

	<!-- SITE FAVICON -->
	<link rel="shortcut icon" type="image/gif" href="/assets/img/gem5ColorVert.gif"/>

	<link rel="canonical" href="https://www.gem5.org/search/">
	<link href='https://fonts.googleapis.com/css?family=Open+Sans:400,300,700,800,600' rel='stylesheet' type='text/css'>
	<link href='https://fonts.googleapis.com/css?family=Muli:400,300' rel='stylesheet' type='text/css'>

	<!-- FONT AWESOME -->
	<link rel="stylesheet" href="//maxcdn.bootstrapcdn.com/font-awesome/4.3.0/css/font-awesome.min.css">

	<!-- BOOTSTRAP -->
	<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">

	<!-- CUSTOM CSS -->
	<link rel="stylesheet" href="/css/main.css">
</head>


<body>
	<nav class="navbar navbar-expand-md navbar-light bg-light">
  <a class="navbar-brand" href="/">
		<img src="/assets/img/gem5ColorLong.gif" alt="gem5" height="55">
	</a>
  <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarNavDropdown" aria-controls="navbarNavDropdown" aria-expanded="false" aria-label="Toggle navigation">
    <span class="navbar-toggler-icon"></span>
  </button>
  <div class="collapse navbar-collapse" id="navbarNavDropdown">
    <!-- LIST FOR NAVBAR -->
    <ul class="navbar-nav ml-auto">
      <!-- HOME -->
      <li class="nav-item ">
        <a class="nav-link" href="/">Home</a>
      </li>

      <!-- ABOUT -->
			<li class="nav-item dropdown ">
				<a class="nav-link dropdown-toggle" href="#" id="navbarDropdownMenuLink" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
					About
				</a>
				<div class="dropdown-menu" aria-labelledby="navbarDropdownMenuLink">
          <a class="dropdown-item" href="/about">About gem5</a>
          <a class="dropdown-item" href="/publications">Publications</a>
          <a class="dropdown-item" href="/governance">Governance</a>
				</div>
			</li>

      <!-- DOCUMENTATION -->
			<li class="nav-item dropdown ">
				<a class="nav-link dropdown-toggle" href="#" id="navbarDropdownDocsLink" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false">
					Documentation
				</a>
				<div class="dropdown-menu" aria-labelledby="navbarDropdownDocsLink">
					<!-- Pull navigation from _data/documentation.yml -->
					
            <a class="dropdown-item" href="/documentation">gem5 documentation</a>
					
            <a class="dropdown-item" href="/documentation/learning_gem5/introduction">Learning gem5</a>
					
            <a class="dropdown-item" href="http://doxygen.gem5.org/release/current/index.html">gem5 Doxygen</a>
					
            <a class="dropdown-item" href="/documentation/reporting_problems">Reporting Problems</a>
					
				</div>
			</li>

      <!-- EVENTS -->
			<li class="nav-item ">
        <a class="nav-link" href="/events/">Events</a>
			</li>

      <!-- CONTRIBUTING -->
      <li class="nav-item ">
        <a class="nav-link" href="/contributing">Contributing</a>
      </li>

      <!-- BLOG -->
      <li class="nav-item ">
        <a class="nav-link" href="/blog">Blog</a>
      </li>

      <!-- SEARCH -->
			<li class="nav-item active">
        <a class="nav-link" href="/search">Search</a>
      </li>
    </ul>
  </div>
</nav>

	<main>
		<br><br>
<div class="container">

  <h1 class="title">Search</h1>
  <br>
  <br>
<div class="search">
  <form action="/search" method="get">
    <label for="search-box"><i class="fa fa-search"></i></label>
    <input type="text" id="search-box" name="query" placeholder="search">
    <button type="submit" value="search" class="btn btn-outline-primary">Search</button>
  </form>
</div>
<br><br>


<ul id="search-results"></ul>

<script>
  window.store = {
    
      "about": {
        "title": "About",
        "content": "The gem5 simulator is a modular platform for computer-system architecture research, encompassing system-level architecture as well as processor microarchitecture. gem5 is an open source computer architecture simulator used in academia and in industry. gem5 has been under development for the past 15 years, initially at the University of Michigan as the m5 project and at the University of Wisconsin as the GEMS project. Since the merger of m5 and GEMS in 2011, gem5 has been cited by over 2900 publications. gem5 is used by many industrial research labs including ARM Research, AMD Research, Google, Micron, Metempsy, HP, Samsung, and others. Features. Multiple interchangeable CPU models. gem5 provides four interpretation-based CPU models: a simple one-CPI CPU; a detailed model of an in-order CPU, and a detailed model of an out-of-order CPU. These CPU models use a common high-level ISA description. In addition, gem5 features a KVM-based CPU that uses virtualisation to accelerate simulation. Event-driven memory system. gem5 features a detailed, event-driven memory system including caches, crossbars, snoop filters, and a fast and accurate DRAM controller model, for capturing the impact of current and emerging memories, e.g. LPDDR3/4/5, DDR3/4, GDDR5, HBM1/2/3, HMC, WideIO1/2. The components can be arranged flexibly, e.g., to model complex multi-level non-uniform cache hierarchies with heterogeneous memories. Multiple ISA support. gem5 decouples ISA semantics from its CPU models, enabling effective support of multiple ISAs. Currently gem5 supports the Alpha, ARM, SPARC, MIPS, POWER, RISC-V and x86 ISAs. However, all guest platforms aren’t supported on all host platforms (most notably Alpha requires little-endian hardware). Homogeneous and heterogeneous multi-core. The CPU models and caches can be combined in arbitrary topologies, creating homogeneous and heterogeneous multi-core systems. 
A MOESI snooping cache coherence protocol keeps the caches coherent. Full-system capability. ARM: gem5 can model up to 64 (heterogeneous) cores of a Realview ARM platform, and boot unmodified Linux and Android with a combination of in-order and out-of-order CPUs. The ARM implementation supports 32 or 64-bit kernels and applications. x86: The gem5 simulator supports a standard PC platform and boots unmodified Linux. RISC-V: Support for the RISC-V privileged ISA spec is a work in progress. SPARC: The gem5 simulator models a single core of an UltraSPARC T1 processor with sufficient detail to boot Solaris in a similar manner as the Sun T1 Architecture simulator tools (building the hypervisor with specific defines and using the HSMID virtual disk driver). Alpha: gem5 models a DEC Tsunami system in sufficient detail to boot unmodified Linux 2.4/2.6, FreeBSD, or L4Ka::Pistachio. We have also booted HP/Compaq’s Tru64 5.1 operating system in the past, though we no longer actively maintain that capability. Application-only support. In application-only (non-full-system) mode, gem5 can execute a variety of architecture/OS binaries with Linux emulation. Multi-system capability. Multiple systems can be instantiated within a single simulation process. In conjunction with full-system modeling, this feature allows simulation of entire client-server networks. Power and energy modeling. gem5’s objects are arranged in OS-visible power and clock domains, enabling a range of experiments in power- and energy-efficiency. With out-of-the-box support for OS-controlled Dynamic Voltage and Frequency Scaling (DVFS), gem5 provides a complete platform for research in future energy-efficient systems. However, the existing DVFS documentation is out of date. 
You can find this page at the old wiki. A trace-based CPU. A CPU model that plays back elastic traces, which are dependency- and timing-annotated traces generated by a probe attached to the out-of-order CPU model. The focus of the Trace CPU model is to achieve memory-system (cache-hierarchy, interconnects and main memory) performance exploration in a fast and reasonably accurate way instead of using the detailed CPU model. Co-simulation with SystemC. gem5 can be included in a SystemC simulation, effectively running as a thread inside the SystemC event kernel, and keeping the events and timelines synchronized between the two worlds. This functionality enables the gem5 components to interoperate with a wide range of System on Chip (SoC) component models, such as interconnects, devices and accelerators. A wrapper for SystemC Transaction Level Modelling (TLM) is provided. A NoMali GPU model. gem5 comes with an integrated NoMali GPU model that is compatible with the Linux and Android GPU driver stack, and thus removes the need for software rendering. The NoMali GPU does not produce any output, but ensures that CPU-centric experiments produce representative results. Licensing. The gem5 simulator is released under a Berkeley-style open source license. Roughly speaking, you are free to use our code however you wish, as long as you leave our copyright on it. For more details, see the LICENSE file included in the source download. Note that the portions of gem5 derived from other sources are also subject to the licensing restrictions of the original sources. Acknowledgments. The gem5 simulator has been developed with generous support from several sources, including the National Science Foundation, AMD, ARM, Hewlett-Packard, IBM, Intel, MIPS, and Sun. Individuals working on gem5 have also been supported by fellowships from Intel, Lucent, and the Alfred P. 
Sloan Foundation. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF) or any other sponsor.",
        "url": "/about/"
      }
      ,
    
      "contributing": {
        "title": "A beginner's guide to contributing",
        "content": "This document serves as a beginner's guide to contributing to gem5. If questions arise while following this guide, we advise consulting CONTRIBUTING.md, which contains more details on how to contribute to gem5. The following subsections outline, in order, the steps involved in contributing to the gem5 project. Determining what you can contribute. The easiest way to see how you can contribute to gem5 is to check our Jira issue tracker: https://gem5.atlassian.net. From Jira you can check open issues. Browse these open issues and see if there are any which you are capable of handling. When you find a task you are happy to carry out, verify no one else is presently assigned, then leave a comment asking if you may assign yourself this task (this will involve creating a Jira account). Though not mandatory, we advise first-time contributors do this so developers more familiar with the task may give advice on how best to implement the necessary changes. Once a developer has replied to your comment (and given any advice they may have), you may officially assign yourself the task. After this you should change the status of the task from Todo to In progress. This helps the gem5 development community understand which parts of the project are presently being worked on. If, for whatever reason, you stop working on a task, please unassign yourself from the task and change the task’s status back to Todo. Obtaining the git repo. The gem5 git repository is hosted at https://gem5.googlesource.com. Please note: contributions made to other gem5 repos (e.g., our GitHub mirror) will not be considered. Please contribute to https://gem5.googlesource.com exclusively. To pull the gem5 git repo: git clone https://gem5.googlesource.com/public/gem5. master-as-stable / develop branch. By default, the git repo will have the master branch checked out. The master branch is considered the gem5 stable release branch, i.e., the HEAD of this branch contains the latest stable release of gem5. 
(Execute git tag on the master branch to see the list of stable releases. A particular release may be checked out by executing git checkout &lt;release&gt;.) As the master branch only contains officially released gem5 code, contributors should not develop changes on top of the master branch; they should instead develop changes on top of the develop branch. To checkout the develop branch: git checkout --track origin/develop. Changes may be made on this branch to incorporate changes assigned to yourself. As the develop branch is frequently updated, regularly obtain the latest develop branch by executing: git pull --rebase. Conflicts may need to be resolved between your local changes and new changes on the develop branch. Making modifications. Different tasks will require the project to be modified in different ways. Though, in all cases, our style guide must be adhered to. The full style guide is outlined here. As a high-level overview: Lines must not exceed 79 characters in length. There should be no trailing white-space on any line. Indentations must be 4 spaces (no tab characters). Class names must use upper camel case (e.g., ThisIsAClass). Class member variables must use lower camel case (e.g., thisIsAMemberVariable). Class member variables with their own public accessor must start with an underscore (e.g., _variableWithAccessor). Local variables must use snake case (e.g., this_is_a_local_variable). Functions must use lower camel case (e.g., thisIsAFunction). Function parameters must use snake case. Macros must be in all caps with underscores (e.g., THIS_IS_A_MACRO). Function declaration return types must be on their own line. Function brackets must be on their own line. for/if/while branching operations must be followed by a white-space before the conditional statement (e.g., for (...)). for/if/while branching operations’ opening bracket must be on the same line, with the closing bracket on its own line (e.g., for (...) {\\n ... \\n}\\n). 
There should be a space between the condition(s) and the opening bracket. C++ access modifiers must be indented by two spaces, with methods/variables defined within indented by four spaces. Below is a simple toy example of how a class should be formatted: #define EXAMPLE_MACRO 7 class ExampleClass { private: int _fooBar; int barFoo; public: int getFooBar() { return _fooBar; } int aFunction(int parameter_one, int parameter_two) { int local_variable = 0; if (true) { int local_variable = parameter_one + parameter_two + barFoo + EXAMPLE_MACRO; } return local_variable; } } Compiling and running tests. The minimum criteria for a change to be submitted is that the code is compilable and the test cases pass. The following command both compiles the project and runs our system-level checks: cd tests; python main.py run. Note: These tests can take several hours to build and execute. main.py may be run on multiple threads with the -j flag, e.g.: python main.py run -j6. The unit tests should also pass. To run the unit tests: scons build/NULL/unittests.opt. To compile an individual gem5 binary: scons build/{ISA}/gem5.opt, where {ISA} is the target ISA. Common ISAs are ARM, MIPS, POWER, RISCV, SPARC, and X86. So, to build gem5 for X86: scons build/X86/gem5.opt. Committing. When you feel your change is done, you may commit. Start by adding the changed files: git add &lt;changed files&gt;. Then commit using: git commit. The commit message must adhere to our style. The first line of the commit is the “header”. The header starts with a tag (or tags, separated by a comma), then a colon. Which tags are used depends on which components of gem5 you have modified. Please refer to MAINTAINERS.md for a comprehensive list of accepted tags. After this colon, a short description of the commit must be provided. This header line must not exceed 65 characters. After this, a more detailed description of the commit can be added. 
This is inserted below the header, separated by an empty line. Including a description is optional but it’s strongly recommended. The description may span multiple lines, and multiple paragraphs. No line in the description may exceed 75 characters. To improve the navigability of the gem5 project we would appreciate it if commit messages include a link to the relevant Jira issue/issues. Below is an example of how a gem5 commit message should be formatted: test,base: This commit tests some classes in the base component. This is a more detailed description of the commit. This can be as long as is necessary to adequately describe the change. A description may span multiple paragraphs if desired. Jira Issue: https://gem5.atlassian.net/browse/GEM5-186. If you feel the need to change your commit, add the necessary files then amend the changes to the commit using: git commit --amend. This will give you the opportunity to edit the commit message. Pushing to Gerrit. Pushing to Gerrit will allow others in the gem5 project to review the change before it is fully merged into the gem5 source. To start this process, execute: git push origin HEAD:refs/for/develop. At this stage you may receive an error if you’re not registered to contribute to our Gerrit. To resolve this issue: Create an account at https://gem5-review.googlesource.com. Go to User Settings. Select Obtain password (under HTTP Credentials). A new tab shall open, explaining how to authenticate your machine to make contributions to Gerrit. Follow these instructions and try pushing again. Gerrit will amend your commit message with a Change-ID. Any commit pushed to Gerrit with this Change-ID is assumed to be part of this change. Code review. Now, at https://gem5-review.googlesource.com, you can view the change you have submitted (Your -&gt; Changes -&gt; Outgoing reviews). We suggest that, at this stage, you mark the corresponding Jira issue as In Review. 
Adding a link to the change on Gerrit as a comment to the issue is also helpful. Through the Gerrit portal we strongly advise you to add reviewers. Gerrit will automatically notify those you assign. The “maintainers” of the components you have modified should be added as reviewers. These should correspond to the tags you included in the commit header. Please consult MAINTAINERS.md to see who maintains which component. As an example, for a commit with a header of tests,arch: This is testing the arch component, the maintainers for both tests and arch should be included as reviewers. Reviewers will then review this change. There are three scores on which the commit shall be evaluated: “Code-Review”, “Maintainer”, and “Verified”. Each reviewer can give a score from -2 to +2 to the “Code-Review” score, where +2 indicates the reviewer is 100% okay with the patch in its current state and -2 that the reviewer is certain they do not want the patch merged in its current state. Maintainers can add +1 or -1 to the “Maintainer” score. A +1 score indicates that the maintainer is okay with the patch. When a maintainer gives a +1, our continuous integration system will process the change. At the time of writing, the continuous integration system will run: scons build/NULL/unittests.opt; cd tests; python main.py run. If this executes successfully (i.e. the project builds and the tests pass) the continuous integration system will give a +1 to the “Verified” score, and a -1 if it did not execute successfully. Gerrit will permit a commit to be merged if at least one reviewer has given a +2 to the “Code-Review” score, one maintainer has given a +1 to the “Maintainer” score, and the continuous integration system has given a +1 to the “Verified” score. For non-trivial changes, it is not unusual for a change to receive feedback from reviewers that they will want incorporated before giving the commit a score necessary for it to be merged. 
This leads to an iterative process. Making iterative improvements based on feedback. A reviewer will ask questions and post suggestions on Gerrit. You should read these comments and answer these questions. All communications between reviewers and contributors should be done in a polite manner. Rude and/or dismissive remarks will not be tolerated. When you understand what changes are required, using the same workspace as before, make the necessary modifications to the gem5 repo, and amend the changes to the commit: git commit --amend. Then push the new changes to Gerrit: git push origin HEAD:refs/for/develop. If for some reason you no longer have your original workspace, you may pull the change by going to your change in Gerrit, clicking Download, and executing one of the listed commands. When your new change is uploaded via the git push command, the reviewers will re-review the change to ensure you have incorporated their suggested improvements. The reviewers may suggest more improvements and, in this case, you will have to incorporate them using the same process as above. This process is therefore iterative, and it may take several cycles until the patch is in a state in which the reviewers are happy. Please do not be deterred; it is very common for a change to require several iterations. Submit and merge. Once this iterative process is complete, the patch may be merged. This is done via Gerrit (simply click Submit within the relevant Gerrit page). As one last step, you should change the corresponding Jira issue status to Done, then link the Gerrit page as a comment on Jira to provide evidence that the task has been completed. Stable releases of gem5 are published three times per year. Therefore, a change successfully submitted to the develop branch will be merged into the master branch within three to four months after submission.",
        "url": "/contributing"
      }
      ,
    
      "documentation-general-docs-architecture-support-arm-implementation": {
        "title": "ARM implementation",
        "content": "ARM Implementation. Note: The information in this page is outdated, and so are the hyperlinks. Supported features and modes. The ARM Architecture models within gem5 support an ARMv8-A profile of the ARM® architecture with multi-processor extensions. This includes both AArch32 and AArch64 state. In AArch32, this includes support for Thumb®, Thumb-2, VFPv3 (32 double register variant), NEON™, and Large Physical Address Extensions (LPAE). Optional features of the architecture that are not currently supported are TrustZone®, ThumbEE, Jazelle®, and Virtualization. Pertinent Non-supported Features. Currently, in the ARMv8-A implementation in gem5, there isn’t support for interworking between AArch32 and AArch64 execution. This limits the ability to run some OSes that expect to execute both 32-bit and 64-bit code, but is expected to be fixed in the short term. Additionally, there has been limited testing of EL2 and EL3 modes in the implementation. Conditional Execution Support. Many instructions within the ARM architecture are predicated. To handle the predication within the gem5 framework and not have to generate N varieties of each instruction for every condition code, the instructions’ constructors determine which, if any, conditional execution flags are set and then conditionally read the condition codes or a “zero register” which is always available and doesn’t insert any dependencies in the dynamic execution of instructions. Special PC management. The PCState object used for ARM® encodes additional execution state information to facilitate the use of the generic gem5 CPU components. In addition to the standard program counter, the Thumb® vs. ARM® instruction state is included as well as the ITSTATE (predication within Thumb® instructions). Boot loader. A simple bootloader for ARM is in the source tree under system/arm/. 
Two boot loaders exist, one for AArch64 (aarch64_bootloader) and another for AArch32 (simple_bootloader). For the AArch64 bootloader: the initial conditions of the boot loader are the same as those for Linux, r0 = device tree blob address; r6 = kernel start address. The boot loader starts the kernel with CPU 0 and places the other CPUs in a WFE spin-loop until the kernel starts them later. For the AArch32 boot loader: the initial conditions of the bootloader running are the same as those for Linux, r0 = 0; r1 = machine number; r2 = atags ptr; and some special registers for the boot loader to use: r3 = start address of kernel; r4 = address of GIC; r5 = address of flags register. The bootloader works by reading the MPIDR register to determine the CPU number. CPU0 jumps immediately to the kernel while CPUn enables its interrupt interface and waits for an interrupt. When CPU0 generates an IPI, CPUn reads the flags register until it is non-zero and then jumps to that address.",
        "url": "/documentation/general_docs/architecture_support/arm_implementation/"
      }
      ,
    
      "documentation-general-docs-architecture-support": {
        "title": "Architecture Support",
        "content": "Architecture Support. Note: The information in this page is outdated, and so are the hyperlinks. Alpha. gem5 models a DEC Tsunami based system. In addition to the normal Tsunami system that supports 4 cores, we have an extension which supports 64 cores (a custom PALcode and patched Linux kernel is required). The simulated system looks like an Alpha 21264, including the BWX, MVI, FIX, and CIX extensions to user level code. For historical reasons the processor executes EV5 based PALcode. It can boot unmodified Linux 2.4/2.6, FreeBSD, or L4Ka::Pistachio as well as applications in syscall emulation mode. Many years ago it was possible to boot HP/Compaq’s Tru64 5.1 operating system. We no longer actively maintain that capability, however, and it does not currently work. ARM. The ARM Architecture models within gem5 support an ARMv8-A profile of the ARM® architecture with multi-processor extensions. This includes both AArch32 and AArch64 state. In AArch32, this includes support for Thumb®, Thumb-2, VFPv3 (32 double register variant), NEON™, and Large Physical Address Extensions (LPAE). Optional features of the architecture that are not currently supported are TrustZone®, ThumbEE, Jazelle®, and Virtualization. In full system mode gem5 is able to boot uni- or multi-processor Linux and bare metal applications built with ARM’s compilers. Newer Linux versions work out of the box (if used with gem5’s DTBs); we also provide gem5-specific Linux kernels with custom configurations and custom drivers. Additionally, statically linked Linux binaries can be run in ARM’s syscall emulation mode. POWER. Support for the POWER ISA within gem5 is currently limited to syscall emulation only and is based on the POWER ISA v2.06 B Book. A big-endian, 32-bit processor is modeled. Most common instructions are available (enough to run all the SPEC CPU2000 integer benchmarks). Floating point instructions are available but support may be patchy. 
In particular, the Floating-Point Status and Control Register (FPSCR) is generally not updated at all. There is no support for vector instructions. Full system support for POWER would require a significant amount of effort and is not currently being developed. However, if there is interest in pursuing this, a set of patches-in-progress that make a start towards this can be obtained from Tim. SPARC. The gem5 simulator models a single core of an UltraSPARC T1 processor (UltraSPARC Architecture 2005). It can boot Solaris like the Sun T1 Architecture simulator tools do (building the hypervisor with specific defines and using the HSMID virtual disk driver). Multiprocessor support was never completed for full-system SPARC. With syscall emulation gem5 supports running Linux or Solaris binaries. New versions of Solaris no longer support generating statically compiled binaries, which gem5 requires. x86. X86 support within the gem5 simulator includes a generic x86 CPU with 64 bit extensions, more similar to AMD’s version of the architecture than Intel’s but not strictly like either. Unmodified versions of the Linux kernel can be booted in UP and SMP configurations, and patches are available for speeding up boot. SSE and 3DNow! are implemented, but the majority of x87 floating point is not. Most effort has been focused on 64 bit mode, but compatibility mode and legacy modes have some support as well. Real mode works enough to bootstrap an AP, but hasn’t been extensively tested. The features of the architecture that are exercised by Linux and standard Linux binaries are implemented and should work, but other areas may not. 64 and 32 bit Linux binaries are supported in syscall emulation mode. MIPS. RISC-V.",
        "url": "/documentation/general_docs/architecture_support/"
      }
      ,
    
      "documentation-general-docs-architecture-support-isa-parser": {
        "title": "ISA Parser",
        "content": "ISA ParserThe gem5 ISA description language is a custom language designed specifically for generating the class definitions and decoder function needed by gem5. This section provides a practical, informal overview of the language itself. A formal grammar for the language is embedded in the “yacc” portion of the parser (look for the functions starting with p_ in isa_parser.py). A second major component of the parser processes C-like code specifications to extract instruction characteristics; this aspect is covered in the section Code parsing.At the highest level, an ISA description file is divided into two parts: a declarations section and a decode section. The decode section specifies the structure of the decoder and defines the specific instructions returned by the decoder. The declarations section defines the global information (classes, instruction formats, templates, etc.) required to support the decoder. Because the decode section is the focus of the description file, we will begin the discussion there.The decode sectionThe decode section of the description is a set of nested decode blocks. A decode block specifies a field of a machine instruction to decode and the result to be provided for particular values of that field. A decode block is similar to a C switch statement in both syntax and semantics. In fact, each decode block in the description file generates a switch statement in the resulting decode function.Let’s begin with a (slightly oversimplified) example:decode OPCODE {  0: add({{ Rc = Ra + Rb; }});  1: sub({{ Rc = Ra - Rb; }});}A decode block begins with the keyword decode followed by the name of the instruction field to decode. The latter must be defined in the declarations section of the file using a bitfield definition (see Bitfield definitions). The remainder of the decode block is a list of statements enclosed in braces. The most common statement is an integer constant and a colon followed by an instruction definition. 
This statement corresponds to a ‘case’ statement in a C switch (but note that the ‘case’ keyword is omitted for brevity). A comma-separated list of integer constants may be used to allow a single decode statement to apply to any of a set of bitfield values.Instruction definitions are similar in syntax to C function calls, with the instruction mnemonic taking the place of the function name. The comma-separated arguments are used when processing the instruction definition. In the example above, the instruction definitions each take a single argument, a ‘‘code literal’’. A code literal is operationally similar to a string constant, but is delimited by double braces ({{ and }}). Code literals may span multiple lines without escaping the end-of-line characters. No backslash escape processing is performed (e.g., \\t is taken literally, and does not produce a tab). The delimiters were chosen so that C-like code contained in a code literal would be formatted nicely by emacs C-mode.A decode statement may specify a nested decode block in place of an instruction definition. In this case, if the bitfield specified by the outer block matches the given value(s), the bitfield specified by the inner block is examined and an additional switch is performed.It is also legal, as in C, to use the keyword default in place of an integer constant to define a default action. However, it is more common to use the decode-block default syntax discussed in the section Decode block defaults below.Specifying instruction formatsWhen the ISA description file is processed, each instruction definition does in fact invoke a function call to generate the appropriate C++ code for the decode file. The function that is invoked is determined by the instruction format. The instruction format determines the number and type of the arguments given to the instruction definition, and how they are processed to generate the corresponding output. 
Note that the term “instruction format” as used in this context refers solely to one of these definition-processing functions, and does not necessarily map one-to-one to the machine instruction formats defined by the ISA.The one oversimplification in the previous example is that no instruction format was specified. As a result, the parser does not know how to process the instruction definitions.Instruction formats can be specified in two ways. An explicit format specification can be given before the mnemonic, separated by a double colon (::), as follows:decode OPCODE {  0: Integer::add({{ Rc = Ra + Rb; }});  1: Integer::sub({{ Rc = Ra - Rb; }});}In this example, both instruction definitions will be processed using the format Integer. A more common approach specifies the format for a set of definitions using a format block, as follows:decode OPCODE {  format Integer {    0: add({{ Rc = Ra + Rb; }});    1: sub({{ Rc = Ra - Rb; }});  }}In this example, the format “Integer” applies to all of the instruction definitions within the inner braces. The two examples are thus functionally equivalent. There are few restrictions on the use of format blocks. A format block may include only a subset of the statements in a decode block. Format blocks and explicit format specifications may be mixed freely, with the latter taking precedence. Format and decode blocks can be nested within each other arbitrarily. Note that a closing brace will always bind with the nearest format or decode block, making it syntactically impossible to generate format or decode blocks that do not nest fully inside the enclosing block.At any point where an instruction definition occurs without an explicit format specification, the format associated with the innermost enclosing format block will be used. 
If a definition occurs with no explicit format and no enclosing format block, a runtime error will be raised.Decode block defaultsDefault cases for decode blocks can be specified by default: labels, as in C switch statements. However, it is common in ISA descriptions that unspecified cases correspond to unknown or illegal instruction encodings. To avoid the requirement of a default: case in every decode block, the language allows an alternate default syntax that specifies a default case for the current decode block and any nested decode block with no explicit default. This alternate default is specified by giving the default keyword and an instruction definition after the bitfield specification (prior to the opening brace). Specifying the outermost decode block as follows:decode OPCODE default Unknown::unknown() {   [...]}is thus (nearly) equivalent to adding default: Unknown::unknown(); inside every decode block that does not otherwise specify a default case.Note: The appropriate format definition (see Format definitions) is invoked each time an instruction definition is encountered.  Thus there is a semantic difference between having a single block-level default and a default within each nested block, which is that the former will invoke the format definition once, while the latter could result in multiple invocations of the format definition.  If the format definition generates header, decoder, or exec output, then that output will be included multiple times in the corresponding files, which typically leads to multiple definition errors when the C++ gets compiled.  If it is absolutely necessary to invoke the format definition for a single instruction multiple times, the format definition should be written to produce only decode-block output, and all needed header, decoder, and exec output should be produced once using output blocks (see Output blocks).Preprocessor directive handlingThe decode block may also contain C preprocessor directives. 
These directives are not processed by the parser; instead, they are passed through to the C++ output to be processed when the C++ decoder is compiled. The parser does not recognize any specific directives; any line with a # in the first column is treated as a preprocessor directive.The directives are copied to all of the output streams (the header, the decoder, and the execute files; see Format definitions). The directives maintain their position relative to the code generated by the instruction definitions within the decode block. The net result is that, for example, #ifdef/#endif pairs that surround a set of instruction definitions will enclose both the declarations generated by those definitions and the corresponding case statements within the decode function. Thus #ifdef and similar constructs can be used to delineate instruction definitions that will be conditionally compiled into the simulator based on preprocessor symbols (e.g., FULL_SYSTEM). It should be emphasized that #ifdef does not affect the ISA description parser. In an #ifdef/#else/#endif construct, all of the instruction definitions in both parts of the conditional will be processed. Only during the subsequent C++ compilation of the decoder will one or the other set of definitions be selected.The declaration sectionAs mentioned above, the decode section of the ISA description (consisting of a single outer decode block) is preceded by the declarations section. 
The primary purpose of the declarations section is to define the instruction formats and other supporting elements that will be used in the decode block, as well as supporting C++ code that is passed almost verbatim to the generated output.This section describes the components that appear in the declaration section: Format definitions, Template definitions, Output blocks, Let blocks, Bitfield definitions, Operand and operand type definitions, and Namespace declaration.Format definitionsAn instruction format is basically a Python function that takes the arguments supplied by an instruction definition (found inside a decode block) and generates up to four pieces of C++ code. The pieces of C++ code are distinguished by where they appear in the generated output.  The ‘‘header output’’ goes in the header file (decoder.hh) that is included in all the generated source files (decoder.cc and all the per-CPU-model execute .cc files). The header output typically contains the C++ class declaration(s) (if any) that correspond to the instruction.  The ‘‘decoder output’’ goes before the decode function in the same source file (decoder.cc). This output typically contains definitions that do not need to be visible to the execute() methods: inline constructor definitions, non-inline method definitions (e.g., for disassembly), etc.  The ‘‘exec output’’ contains per-CPU model definitions, i.e., the execute() methods for the instruction class.  The ‘‘decode block’’ contains a statement or block of statements that go into the decode function (in the body of the corresponding case statement). These statements take control once the bit pattern specified by the decode block is recognized, and are responsible for returning an appropriate instruction object.The syntax for defining an instruction format is as follows:def format FormatName(arg1, arg2) {{    [code omitted]}};In this example, the format is named “FormatName”. 
(By convention, instruction format names begin with a capital letter and use mixed case.) Instruction definitions using this format will be expected to provide two arguments (arg1 and arg2). The language also supports the Python variable-argument mechanism: if the final parameter begins with an asterisk (e.g., *rest), it receives a list of all the otherwise unbound arguments from the call site.Note that the next-to-last syntactic token in the format definition (prior to the semicolon) is simply a code literal (string constant), as described above. In this case, the text within the code literal is a Python code block. This Python code will be called at each instruction definition that uses the specified format.In addition to the explicit arguments, the Python code is supplied with two additional parameters: name, which is bound to the instruction mnemonic, and Name, which is the mnemonic with the first letter capitalized (useful for forming C++ class names based on the mnemonic).The format code block specifies the generated code by assigning strings to four special variables: header_output, decoder_output, exec_output, and decode_block. Assignment is optional; for any of these variables that does not receive a value, no code will be generated for the corresponding section. These strings may be generated by whatever method is convenient. In practice, nearly all instruction formats use the support functions provided by the ISA description parser to specialize code templates based on characteristics extracted automatically from C-like code snippets. Discussion of these features is deferred to the Code parsing page.Although the ISA description is completely independent of any specific simulator CPU model, some C++ code (particularly the exec output) must be specialized slightly for each model. This specialization is handled by automatic substitution of CPU-model-specific symbols. These symbols start with CPU_ and are treated specially by the parser. 
Currently there is only one model-specific symbol, CPU_exec_context, which evaluates to the model’s execution context class name. As with templates (see Template definitions), references to CPU-specific symbols use Python key-based format strings; a reference to the CPU_exec_context symbol thus appears in a string as %(CPU_exec_context)s.If a string assigned to header_output, decoder_output, or decode_block contains a CPU-specific symbol reference, the string is replicated once for each CPU model, and each instance has its CPU-specific symbols substituted according to that model. The resulting strings are then concatenated to form the final output. Strings assigned to exec_output are always replicated and substituted once for each CPU model, regardless of whether they contain CPU-specific symbol references. The instances are not concatenated, but are tracked separately, and are placed in separate per-CPU-model files (e.g., simple_cpu_exec.cc).Template definitionsAs discussed in section Format definitions above, the purpose of an instruction format is to process the arguments of an instruction definition and generate several pieces of C++ code. These code pieces are usually generated by specializing a code template. The description language provides a simple syntax for defining these templates: the keywords def template, the template name, the template body (a code literal), and a semicolon. By convention, template names start with a capital letter, use mixed case, and end with “Declare” (for declaration (header output) templates), “Decode” (for decode-block templates), “Constructor” (for decoder output templates), or “Execute” (for exec output templates).For example, the simplest useful decode template is as follows:def template BasicDecode {{    return new %(class_name)s(machInst);}};An instruction format would specialize this template for a particular instruction by substituting the actual class name for %(class_name)s. 
(Template specialization relies on the Python string format operator %. The term %(class_name)s is an extension of the C %s format string indicating that the value of the symbol class_name should be substituted.) The resulting code would then cause the C++ decode function to create a new object of the specified class when the particular instruction was recognized.Templates are represented in the parser as Python objects. A template is used to generate a string typically by calling the template object’s subst() method. This method takes a single argument that specifies the mapping of substitution symbols in the template (e.g., %(class_name)s) to specific values. If the argument is a dictionary, the dictionary itself specifies the mapping. Otherwise, the argument must be another Python object, and the object’s attributes are used as the mapping. In practice, the argument to subst() is nearly always an instance of the parser’s InstObjParams class; see the InstObjParams class. A template may also reference other templates (e.g., %(BasicDecode)s) in addition to symbols specified by the subst() argument; these will be interpolated into the result by subst() as well.Template references to CPU-model-specific symbols (see Format definitions) are not expanded by subst(), but are passed through intact. This feature allows them to later be expanded appropriately according to whether the result is assigned to exec_output or another output section. However, when a template containing a CPU-model-specific symbol is referenced by another template, then the former template is replicated and expanded into a single string before interpolation, as with templates assigned to header_output or decoder_output. This policy guarantees that only templates directly containing CPU-model-specific symbols will be replicated, never templates that contain such symbols indirectly. 
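The key-based substitution that subst() performs can be sketched in plain Python; the helper below is a hypothetical simplification of the parser's Template class, illustrating only the dict-or-attributes mapping behavior described above:

```python
# Hypothetical sketch of key-based template substitution: a dict is
# used directly as the mapping; any other object supplies the mapping
# through its attributes (as InstObjParams instances do).
BasicDecode = "return new %(class_name)s(machInst);"

def subst(template, mapping):
    if not isinstance(mapping, dict):
        mapping = vars(mapping)  # use the object's attributes
    return template % mapping    # Python %-style key-based formatting

print(subst(BasicDecode, {'class_name': 'Add'}))
# -> return new Add(machInst);
```

An object with a class_name attribute would produce the same result, which is why formats can pass an InstObjParams instance directly.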
This last feature is used to interpolate per-CPU declarations of the execute() method into the instruction class declaration template (see the BasicExecDeclare template in the Alpha ISA description).Output blocksOutput blocks allow the ISA description to include C++ code that is copied nearly verbatim to the output file. These blocks are useful for defining classes and local functions that are shared among multiple instruction objects. An output block has the following format:output &lt;destination&gt; {{    [code omitted]}};The &lt;destination&gt; keyword must be one of header, decoder, or exec. The code within the code literal is treated as if it were assigned to the header_output, decoder_output, or exec_output variable within an instruction format, respectively, including the special processing of CPU-model-specific symbols. The only additional processing performed on the code literal is substitution of bitfield operators, as used in instruction definitions (see Bitfield operators), and interpolation of references to templates.Let blocksLet blocks provide for global Python code. These blocks consist simply of the keyword let followed by a code literal (double-brace delimited string) and a semicolon.The code literal is executed immediately by the Python interpreter. The parser maintains the execution context across let blocks, so that variables and functions defined in one let block will be accessible in subsequent let blocks. This context is also used when executing instruction format definitions. The primary purpose of let blocks is to define shared Python data structures and functions for use in instruction formats. 
The parser exports a limited set of definitions into this execution context, including the set of defined templates (see Template definitions), the InstObjParams and CodeBlock classes (see Code parsing), and the standard Python string and re (regular expression) modules.Bitfield definitionsA bitfield definition provides a name for a bitfield within a machine instruction. These names are typically used as the bitfield specifications in decode blocks. The names are also used within other C++ code in the decoder file, including instruction class definitions and decode code.The bitfield definition syntax is demonstrated in these examples:def bitfield OPCODE &lt;31:26&gt;;def bitfield IMM &lt;12&gt;;def signed bitfield MEMDISP &lt;15:0&gt;;The specified bit range is inclusive on both ends, and bit 0 is the least significant bit; thus the OPCODE bitfield in the example extracts the most significant six bits from a 32-bit instruction. A single index value extracts a one-bit field, IMM. The extracted value is zero-extended by default; with the additional signed keyword, as in the MEMDISP example, the extracted value will be sign extended. The implementation of bitfields is based on preprocessor macros and C++ template functions, so the size of the resulting value will depend on the context.To fully understand where bitfield definitions can be used, we need to go under the hood a bit. A bitfield definition simply generates a C++ preprocessor macro that extracts the specified bitfield from the implicit variable machInst. The machine instruction parameter to the decode function is also called machInst; thus any use of a bitfield name that ends up inside the decode function (such as the argument of a decode block or the decode piece of an instruction format’s output) will implicitly reference the instruction currently being decoded. 
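The extraction a bitfield definition performs can be sketched in Python (the real implementation is a C++ preprocessor macro; the helper name below is made up for illustration):

```python
def bits(mach_inst, first, last=None, signed=False):
    # Inclusive bit range, bit 0 least significant, as in
    # 'def bitfield OPCODE <31:26>'; a single index extracts one bit,
    # and 'signed' sign-extends the result (the MEMDISP case).
    if last is None:
        last = first
    width = first - last + 1
    val = (mach_inst >> last) & ((1 << width) - 1)
    if signed and val & (1 << (width - 1)):
        val -= 1 << width  # sign extend
    return val

inst = 0b111111 << 26                    # OPCODE field all ones
print(bits(inst, 31, 26))                # top six bits -> 63
print(bits(0xFFFF, 15, 0, signed=True))  # MEMDISP of 0xFFFF -> -1
```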
The binary machine instruction stored in the StaticInst object is also named machInst, so any use of a bitfield name in a member function of an instruction object will reference this stored value. This data member is initialized in the StaticInst constructor, so it is safe to use bitfield names even in the constructors of derived objects.Operand and operand type definitionsThese statements specify the operand types that can be used in the code blocks that express the functional operation of instructions. See Operand type qualifiers and Instruction parsing.Namespace declarationThe final component of the declaration section is the namespace declaration, consisting of the keyword namespace followed by an identifier and a semicolon. Exactly one namespace declaration must appear in the declarations section. The resulting C++ decode function, the declarations resulting from the instruction definitions in the decode block, and the contents of any declare statements occurring after the namespace declaration will be placed in a C++ namespace with the specified name. The contents of declare statements occurring before the namespace declaration will be outside the namespace.ISA parserFormatsoperandsdecode treelet blocksmicrocode assemblermicroopsmacroopsdirectivesrom objectLots more stuffCode parsingTo a large extent, the power and flexibility of the ISA description mechanism stem from the fact that the mapping from a brief instruction definition provided in the decode block to the resulting C++ code is performed in a general-purpose programming language (Python). (This function is performed by the “instruction format” definition described above in Format definitions.) Technically, the ISA description language allows any arbitrary Python code to perform this mapping. 
However, the parser provides a library of Python classes and functions designed to automate the process of deducing an instruction’s characteristics from a brief description of its operation, and generating the strings required to populate declaration and decode templates. This library represents roughly half of the code in isa_parser.py.Instruction behaviors are described using C++ with two extensions: bitfield operators and operand type qualifiers. To avoid building a full C++ parser into the ISA description system (or conversely constraining the C++ that could be used for instruction descriptions), these extensions are implemented using regular expression matching and substitution. As a result, there are some syntactic constraints on their usage. The following two sections discuss these extensions in turn. The third section discusses operand parsing, the technique by which the parser automatically infers most instruction characteristics. The final two sections discuss the Python classes through which instruction formats interact with the library: CodeBlock, which analyzes and encapsulates instruction description code; and the instruction object parameter class, InstObjParams, which encapsulates the full set of parameters to be substituted into a template.Bitfield operatorsSimple bitfield extraction can be performed on rvalues using the &lt;:&gt; postfix operator. Bit numbering matches that used in global bitfield definitions (see Bitfield definitions). For example, Ra&lt;7:0&gt; extracts the low 8 bits of register Ra. Single-bit fields can be specified by eliminating the latter operand, e.g. Rb&lt;31:&gt;. Unlike in global bitfield definitions, the colon cannot be eliminated, as it becomes too difficult to distinguish bitfield operators from template arguments. In addition, the bit index parameters must be either identifiers or integer constants; expressions are not allowed. 
The bit operator will apply either to the syntactic token on its left, or, if that token is a closing parenthesis, to the parenthesized expression.Operand type qualifiersThe effective type of an instruction operand (e.g., a register) may be specified by appending a period and a type qualifier to the operand name. The list of type qualifiers is architecture-specific; the def operand_types statement in the ISA description is used to specify it. The specification is in the form of a Python dictionary which maps a type extension to a type name. For example, the Alpha ISA definition is as follows:def operand_types {{    'sb' : 'int8_t',    'ub' : 'uint8_t',    'sw' : 'int16_t',    'uw' : 'uint16_t',    'sl' : 'int32_t',    'ul' : 'uint32_t',    'sq' : 'int64_t',    'uq' : 'uint64_t',    'sf' : 'float',    'df' : 'double'}};Thus the Alpha 32-bit add instruction addl could be defined as:Rc.sl = Ra.sl + Rb.sl;The operations are performed using the types specified; the result will be converted from the specified type to the appropriate register value (in this case by sign-extending the 32-bit result to 64 bits, since Alpha integer registers are 64 bits in size).Type qualifiers are allowed only on recognized instruction operands (see Instruction operands).Instruction operandsMost of the automation provided by the parser is based on its recognition of the operands used in the instruction definition code. Most relevant instruction characteristics can be inferred from the operands: floating-point vs. integer instructions can be recognized by the registers used, an instruction that reads from a memory location is a load, etc. In combination with the bitfield operators and type qualifiers described above, most instructions can be described in a single line of code. 
In addition, most of the differences between simulator CPU models lie in the operand access mechanisms; by generating the code for these accesses automatically, a single description suffices for a variety of situations.The ISA description provides a list of recognized instruction operands and their characteristics via the def operands statement. This statement specifies a Python dictionary that maps operand strings to a five-element tuple.  The elements of the tuple specify the operand as follows:  the operand class, which must be one of the strings “IntReg”, “FloatReg”, “Mem”, “NPC”, or “ControlReg”, indicating an integer register, floating-point register, memory location, the next program counter (NPC), or a control register, respectively.  the default type of the operand (an extension string defined in the def operand_types block),  a specifier indicating how specific instances of the operand are decoded (e.g., a bitfield name),  a string or triple of strings indicating the instruction flags that can be inferred when the operand is used, and  a sort priority used to control the order of operands in disassembly.For example, a simplified subset of the Alpha ISA operand traits map is as follows:def operands {{    'Ra': ('IntReg', 'uq', 'RA', 'IsInteger', 1),    'Rb': ('IntReg', 'uq', 'RB', 'IsInteger', 2),    'Rc': ('IntReg', 'uq', 'RC', 'IsInteger', 3),    'Fa': ('FloatReg', 'df', 'FA', 'IsFloating', 1),    'Fb': ('FloatReg', 'df', 'FB', 'IsFloating', 2),    'Fc': ('FloatReg', 'df', 'FC', 'IsFloating', 3),    'Mem': ('Mem', 'uq', None, ('IsMemRef', 'IsLoad', 'IsStore'), 4),    'NPC': ('NPC', 'uq', None, ( None, None, 'IsControl'), 4)}};The operand named Ra is an integer register, default type uq (unsigned quadword), uses the RA bitfield from the instruction, implies the IsInteger instruction flag, and has a sort priority of 1 (placing it first in any list of operands).For the instruction flag element, a single string (such as 'IsInteger') implies an 
unconditionally inferred instruction flag. If the flag operand is a triple, the first element is unconditional, the second is inferred when the operand is a source, and the third when it is a destination. Thus the ('IsMemRef', 'IsLoad', 'IsStore') element for memory references indicates that any instruction with a memory operand is marked as a memory reference. In addition, if the memory operand is a source, the instruction is marked as a load, while if the operand is a destination, the instruction is marked as a store. Similarly, the (None, None, 'IsControl') tuple for the NPC operand indicates that any instruction that writes to the NPC is a control instruction, but instructions which merely reference NPC as a source do not receive any default flags.Note that description code parsing uses regular expressions, which limits the ability of the parser to infer the nature of a particular operand.  In particular, destination operands are distinguished from source operands solely by testing whether the operand appears on the left-hand side of an assignment operator (=). Destination operands that are assigned to in a different fashion, e.g. by being passed by reference to other functions, must still appear on the left-hand side of an assignment to be properly recognized as destinations.  The parser also does not recognize C compound assignments, e.g., +=.  If an operand is both a source and a destination, it must appear on both the left- and right-hand sides of =.Another limitation of regular-expression-based code parsing is that control flow in the code block is not recognized.  Combined with the details of how register updates are performed in the CPU models, this means that destinations cannot be updated conditionally.  
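The single-string versus triple flag semantics described above can be sketched as a small Python helper (the function name is hypothetical, not part of the parser's API):

```python
def infer_flags(spec, is_source, is_dest):
    # A bare string acts as an unconditional flag; a triple is
    # (unconditional, if-source, if-dest), as with the Mem and NPC
    # operands in the example operand traits map.
    if isinstance(spec, str):
        spec = (spec, None, None)
    uncond, src_flag, dest_flag = spec
    flags = set()
    if uncond:
        flags.add(uncond)
    if is_source and src_flag:
        flags.add(src_flag)
    if is_dest and dest_flag:
        flags.add(dest_flag)
    return flags

# A Mem operand used only as a source yields a memory-reference load.
print(infer_flags(('IsMemRef', 'IsLoad', 'IsStore'), True, False))
```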
If a particular register is recognized as a destination register, that register will always be updated at the end of the execute() method, and thus the code must assign a valid value to that register along each possible code path within the block.The CodeBlock classAn instruction format requests processing of a string containing instruction description code by passing the string to the CodeBlock constructor. The constructor performs all of the needed analysis and processing, storing the results in the returned object. Among the CodeBlock fields are:  orig_code: the original code string.  code: a processed string containing legal C++ code, derived from the original code by substituting in the bitfield operators and munging operand type qualifiers (s/./_/) to make valid C++ identifiers.  constructor: code for the constructor of an instruction object, initializing various C++ object fields including the number of operands and the register indices of the operands.  exec_decl: code to declare the C++ variables corresponding to the operands, for use in an execution emulation function.  *_rd: code to read the actual operand values into the corresponding C++ variables for source operands. The first part of the name indicates the relevant CPU model (currently simple and dtld are supported).  *_wb: code to write the C++ variable contents back to the appropriate register or memory location. Again, the first part of the name reflects the CPU model.  *_mem_rd, *_nonmem_rd, *_mem_wb, *_nonmem_wb: as above, but with memory and non-memory operands segregated.  flags: the set of instruction flags implied by the operands.  
op_class: a basic guess at the instruction’s operation class (see OpClass) based on the operand types alone.The InstObjParams classInstances of the InstObjParams class encapsulate all of the parameters needed to substitute into a code template, to be used as the argument to a template’s subst() method (see Template definitions).class InstObjParams(object):    def __init__(self, parser,                   mnem, class_name, base_class = '',                  snippets = {}, opt_args = []):The first three constructor arguments populate the object’s mnemonic, class_name, and (optionally) base_class members. The fourth (optional) argument is a CodeBlock object; all of the members of the provided CodeBlock object are copied to the new object, making them accessible for template substitution. Any remaining arguments are interpreted as either additional instruction flags (appended to the flags list inherited from the CodeBlock argument, if any), or as an operation class (overriding any op_class from the CodeBlock).",
        "url": "/documentation/general_docs/architecture_support/isa_parser/"
      }
      ,
    
      "documentation-general-docs-architecture-support-x86-microop-isa": {
        "title": "X86 Micro-op ISA",
        "content": "Register OpsThese microops typically take two sources and produce one result. Most have a version that operates on only registers and a version which operates on registers and an immediate value. Some optionally set flags according to their operation. Some of them can be predicated.AddAddition.add Dest, Src1, Src2Dest # Dest &lt;- Src1 + Src2Adds the contents of the Src1 and Src2 registers and puts the result in the Dest register.addi Dest, Src1, ImmDest # Dest &lt;- Src1 + ImmAdds the contents of the Src1 register and the immediate Imm and puts the result in the Dest register.FlagsThis microop optionally sets the CF, ECF, ZF, EZF, PF, AF, SF, and OF flags.            Flag      Meaning                  CF and ECF      The carry out of the most significant bit.              ZF and EZF      Whether the result was zero.              PF      The parity of the result.              AF      The carry from the fourth to fifth bit positions.              SF      The sign of the result.              OF      Whether there was an overflow.      AdcAdd with carry.adc Dest, Src1, Src2Dest # Dest &lt;- Src1 + Src2 + CFAdds the contents of the Src1 and Src2 registers and the carry flag and puts the result in the Dest register.adci Dest, Src1, ImmDest # Dest &lt;- Src1 + Imm + CFAdds the contents of the Src1 register, the immediate Imm, and the carry flag and puts the result in the Dest register.FlagsThis microop optionally sets the CF, ECF, ZF, EZF, PF, AF, SF, and OF flags.            Flag      Meaning                  CF and ECF      The carry out of the most significant bit.              ZF and EZF      Whether the result was zero.              PF      The parity of the result.              AF      The carry from the fourth to fifth bit positions.              SF      The sign of the result.              OF      Whether there was an overflow.      
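The flag results in the tables above can be sketched for a 64-bit add as a simplified Python model (this is an illustration of the flag definitions, not gem5's actual flag-computation code):

```python
MASK64 = (1 << 64) - 1

def add_with_flags(src1, src2, carry_in=0):
    # Dest <- Src1 + Src2 (+ CF for adc), plus the flags described
    # above, modeled for 64-bit operands already masked to 64 bits.
    full = src1 + src2 + carry_in
    res = full & MASK64
    flags = {
        'CF': int(full > MASK64),                        # carry out of MSB
        'ZF': int(res == 0),                             # result was zero
        'PF': int(bin(res & 0xFF).count('1') % 2 == 0),  # even parity, low byte
        'AF': (src1 ^ src2 ^ res) >> 4 & 1,              # carry bit 4 -> 5
        'SF': res >> 63,                                 # sign of the result
        'OF': ((src1 ^ res) & (src2 ^ res)) >> 63 & 1,   # signed overflow
    }
    return res, flags

res, f = add_with_flags(MASK64, 1)   # -1 + 1: wraps to zero
print(res, f['CF'], f['ZF'])         # -> 0 1 1
```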
SubSubtraction.sub Dest, Src1, Src2Dest # Dest &lt;- Src1 - Src2Subtracts the contents of the Src2 register from the Src1 register and puts the result in the Dest register.subi Dest, Src1, ImmDest # Dest &lt;- Src1 - ImmSubtracts the contents of the immediate Imm from the Src1 register and puts the result in the Dest register.FlagsThis microop optionally sets the CF, ECF, ZF, EZF, PF, AF, SF, and OF flags.            Flag      Meaning                  CF and ECF      The borrow into of the most significant bit.              ZF and EZF      Whether the result was zero.              PF      The parity of the result.              AF      The borrow from the fourth to fifth bit positions.              SF      The sign of the result.              OF      Whether there was an overflow.      SbbSubtract with borrow.sbb Dest, Src1, Src2Dest # Dest &lt;- Src1 - Src2 - CFSubtracts the contents of the Src2 register and the carry flag from the Src1 register and puts the result in the Dest register.sbbi Dest, Src1, ImmDest # Dest &lt;- Src1 - Imm - CFSubtracts the immediate Imm and the carry flag from the Src1 register and puts the result in the Dest register.FlagsThis microop optionally sets the CF, ECF, ZF, EZF, PF, AF, SF, and OF flags.            Flag      Meaning                  CF and ECF      The borrow into of the most significant bit.              ZF and EZF      Whether the result was zero.              PF      The parity of the result.              AF      The borrow from the fourth to fifth bit positions.              SF      The sign of the result.              OF      Whether there was an overflow.      
Mul1sSigned multiply.mul1s Src1, Src2ProdHi:ProdLo # Src1 * Src2Multiplies the signed contents of the Src1 and Src2 registers and puts the high and low portions of the product into the internal registers ProdHi and ProdLo, respectively.mul1si Src1, ImmProdHi:ProdLo # Src1 * ImmMultiplies the signed contents of the Src1 register and the immediate Imm and puts the high and low portions of the product into the internal registers ProdHi and ProdLo, respectively.FlagsThis microop does not set any flags.Mul1uUnsigned multiply.mul1u Src1, Src2ProdHi:ProdLo # Src1 * Src2Multiplies the unsigned contents of the Src1 and Src2 registers and puts the high and low portions of the product into the internal registers ProdHi and ProdLo, respectively.mul1ui Src1, ImmProdHi:ProdLo # Src1 * ImmMultiplies the unsigned contents of the Src1 register and the immediate Imm and puts the high and low portions of the product into the internal registers ProdHi and ProdLo, respectively.FlagsThis microop does not set any flags.MulelUnload multiply result low.mulel DestDest # Dest &lt;- ProdLoMoves the value of the internal ProdLo register into the Dest register.FlagsThis microop does not set any flags.MulehUnload multiply result high.muleh DestDest # Dest &lt;- ProdHiMoves the value of the internal ProdHi register into the Dest register.FlagsThis microop optionally sets the CF, ECF, and OF flags.            Flag      Meaning                  CF and ECF      Whether ProdHi is non-zero.              OF      Whether ProdHi is zero.      Div1First stage of division.div1 Src1, Src2Quotient * Src2 + Remainder # Src1Divisor # Src2Begins a division operation where the contents of SrcReg1 is the high part of the dividend and the contents of SrcReg2 is the divisor. The remainder from this partial division is put in the internal register Remainder. The quotient is put in the internal register Quotient. 
The divisor is put in the internal register Divisor.div1i Src1, Imm:Quotient * Imm + Remainder # Src1Divisor # ImmBegins a division operation where the contents of SrcReg1 is the high part of the dividend and the immediate Imm is the divisor. The remainder from this partial division is put in the internal register Remainder. The quotient is put in the internal register Quotient. The divisor is put in the internal register Divisor.FlagsThis microop does not set any flags.Div2Second and later stages of division.div2 Dest, Src1, Src2Quotient * Divisor + Remainder # original Remainder with bits shifted in from Src1Dest # Dest &lt;- Src2 - number of bits shifted in abovePerforms subsequent steps of division following a div1 instruction. The contents of the register Src1 is the low portion of the dividend. The contents of the register Src2 denote the number of bits in Src1 that have not yet been used before this step in the division. Dest is set to the number of bits in Src1 that have not been used after this step. The internal registers Quotient, Divisor, and Remainder are updated by this instruction.If there are no remaining bits in Src1, this instruction does nothing except optionally compute flags.div2i Dest, Src1, ImmQuotient * Divisor + Remainder # original Remainder with bits shifted in from Src1Dest # Dest &lt;- Imm - number of bits shifted in abovePerforms subsequent steps of division following a div1 instruction. The contents of the register Src1 is the low portion of the dividend. The immediate Imm denotes the number of bits in Src1 that have not yet been used before this step in the division. Dest is set to the number of bits in Src1 that have not been used after this step. The internal registers Quotient, Divisor, and Remainder are updated by this instruction.If there are no remaining bits in Src1, this instruction does nothing except optionally compute flags.FlagsThis microop optionally sets the EZF flag.            
Flag      Meaning                  EZF      Whether there are any remaining bits in Src1 after this step.      DivqUnload division quotient.divq DestDest # Dest &lt;- QuotientMoves the value of the internal Quotient register into the Dest register.FlagsThis microop does not set any flags.DivrUnload division remainder.divr DestDest # Dest &lt;- RemainderMoves the value of the internal Remainder register into the Dest register.FlagsThis microop does not set any flags.OrLogical or.or Dest, Src1, Src2Dest # Dest &lt;- Src1 | Src2Computes the bitwise or of the contents of the Src1 and Src2 registers and puts the result in the Dest register.ori Dest, Src1, ImmDest # Dest &lt;- Src1 | ImmComputes the bitwise or of the contents of the Src1 register and the immediate Imm and puts the result in the Dest register.FlagsThis microop optionally sets the CF, ECF, ZF, EZF, PF, AF, SF, and OF flags.There is nothing that prevents computing a value for the AF flag, but its value will be meaningless.            Flag      Meaning                  CF and ECF      Cleared.              ZF and EZF      Whether the result was zero.              PF      The parity of the result.              AF      Undefined.              SF      The sign of the result.              OF      Cleared.      AndLogical and.and Dest, Src1, Src2Dest # Dest &lt;- Src1 &amp; Src2Computes the bitwise and of the contents of the Src1 and Src2 registers and puts the result in the Dest register.andi Dest, Src1, ImmDest # Dest &lt;- Src1 &amp; ImmComputes the bitwise and of the contents of the Src1 register and the immediate Imm and puts the result in the Dest register.FlagsThis microop optionally sets the CF, ECF, ZF, EZF, PF, AF, SF, and OF flags.There is nothing that prevents computing a value for the AF flag, but its value will be meaningless.            Flag      Meaning                  CF and ECF      Cleared.              ZF and EZF      Whether the result was zero.              
PF      The parity of the result.              AF      Undefined.              SF      The sign of the result.              OF      Cleared.      XorLogical exclusive or.xor Dest, Src1, Src2Dest # Dest &lt;- Src1 ^ Src2Computes the bitwise xor of the contents of the Src1 and Src2 registers and puts the result in the Dest register.xori Dest, Src1, ImmDest # Dest &lt;- Src1 ^ ImmComputes the bitwise xor of the contents of the Src1 register and the immediate Imm and puts the result in the Dest register.FlagsThis microop optionally sets the CF, ECF, ZF, EZF, PF, AF, SF, and OF flags.There is nothing that prevents computing a value for the AF flag, but its value will be meaningless.            Flag      Meaning                  CF and ECF      Cleared.              ZF and EZF      Whether the result was zero.              PF      The parity of the result.              AF      Undefined.              SF      The sign of the result.              OF      Cleared.      SllLogical left shift.sll Dest, Src1, Src2Dest # Dest &lt;- Src1 « Src2Shifts the contents of the Src1 register to the left by the value in the Src2 register and writes the result into the Dest register. The shift amount is truncated to either 5 or 6 bits, depending on the operand size.slli Dest, Src1, ImmDest # Dest &lt;- Src1 « ImmShifts the contents of the Src1 register to the left by the value in the immediate Imm and writes the result into the Dest register. The shift amount is truncated to either 5 or 6 bits, depending on the operand size.FlagsThis microop optionally sets the CF, ECF, and OF flags. If the shift amount is zero, no flags are modified.            Flag      Meaning                  CF and ECF      The last bit shifted out of the result.              OF      The exclusive OR of what this instruction would set the CF flag to, if requested, and the most significant bit of the result.      
SrlLogical right shift.srl Dest, Src1, Src2Dest # Dest &lt;- Src1 »(logical) Src2Shifts the contents of the Src1 register to the right by the value in the Src2 register and writes the result into the Dest register. Bits which are shifted in zero extend the result. The shift amount is truncated to either 5 or 6 bits, depending on the operand size.srli Dest, Src1, ImmDest # Dest &lt;- Src1 »(logical) ImmShifts the contents of the Src1 register to the right by the value in the immediate Imm and writes the result into the Dest register. Bits which are shifted in zero extend the result. The shift amount is truncated to either 5 or 6 bits, depending on the operand size.FlagsThis microop optionally sets the CF, ECF, and OF flags. If the shift amount is zero, no flags are modified.            Flag      Meaning                  CF and ECF      The last bit shifted out of the result.              OF      The most significant bit of the original value to shift.      SraArithmetic right shift.sra Dest, Src1, Src2Dest # Dest &lt;- Src1 »(arithmetic) Src2Shifts the contents of the Src1 register to the right by the value in the Src2 register and writes the result into the Dest register. Bits which are shifted in sign extend the result. The shift amount is truncated to either 5 or 6 bits, depending on the operand size.srai Dest, Src1, ImmDest # Dest &lt;- Src1 »(arithmetic) ImmShifts the contents of the Src1 register to the right by the value in the immediate Imm and writes the result into the Dest register. Bits which are shifted in sign extend the result. The shift amount is truncated to either 5 or 6 bits, depending on the operand size.FlagsThis microop optionally sets the CF, ECF, and OF flags. If the shift amount is zero, no flags are modified.            Flag      Meaning                  CF and ECF      The last bit shifted out of the result.              OF      Cleared.      
RorRotate right.ror Dest, Src1, Src2Rotates the contents of the Src1 register to the right by the value in the Src2 register and writes the result into the Dest register. The rotate amount is truncated to either 5 or 6 bits, depending on the operand size.rori Dest, Src1, ImmRotates the contents of the Src1 register to the right by the value in the immediate Imm and writes the result into the Dest register. The rotate amount is truncated to either 5 or 6 bits, depending on the operand size.FlagsThis microop optionally sets the CF, ECF, and OF flags. If the rotate amount is zero, no flags are modified.            Flag      Meaning                  CF and ECF      The most significant bit of the result.              OF      The exclusive OR of the most two significant bits of the original value.      RcrRotate right through carry.rcr Dest, Src1, Src2Rotates the contents of the Src1 register through the carry flag and to the right by the value in the Src2 register and writes the result into the Dest register. The rotate amount is truncated to either 5 or 6 bits, depending on the operand size.rcri Dest, Src1, ImmRotates the contents of the Src1 register through the carry flag and to the right by the value in the immediate Imm and writes the result into the Dest register. The rotate amount is truncated to either 5 or 6 bits, depending on the operand size.FlagsThis microop optionally sets the CF, ECF, and OF flags. If the rotate amount is zero, no flags are modified.            Flag      Meaning                  CF and ECF      The last bit shifted out of the result.              OF      The exclusive OR of the CF flag before the rotate and the most significant bit of the original value.      RolRotate left.rol Dest, Src1, Src2Rotates the contents of the Src1 register to the left by the value in the Src2 register and writes the result into the Dest register. 
The rotate amount is truncated to either 5 or 6 bits, depending on the operand size.roli Dest, Src1, ImmRotates the contents of the Src1 register to the left by the value in the immediate Imm and writes the result into the Dest register. The rotate amount is truncated to either 5 or 6 bits, depending on the operand size.FlagsThis microop optionally sets the CF, ECF, and OF flags. If the rotate amount is zero, no flags are modified.            Flag      Meaning                  CF and ECF      The least significant bit of the result.              OF      The exclusive OR of the most and least significant bits of the result.      RclRotate left through carry.rcl Dest, Src1, Src2Rotates the contents of the Src1 register through the carry flag and to the left by the value in the Src2 register and writes the result into the Dest register. The rotate amount is truncated to either 5 or 6 bits, depending on the operand size.rcli Dest, Src1, ImmRotates the contents of the Src1 register through the carry flag and to the left by the value in the immediate Imm and writes the result into the Dest register. The rotate amount is truncated to either 5 or 6 bits, depending on the operand size.FlagsThis microop optionally sets the CF, ECF, and OF flags. If the rotate amount is zero, no flags are modified.            Flag      Meaning                  CF and ECF      The last bit rotated out of the result.              OF      The exclusive OR of CF before the rotate and the most significant bit of the result.      MovMove.mov Dest, Src1, Src2Dest # Src1 &lt;- Src2Merge the contents of the Src2 register into the contents of Src1 and put the result into the Dest register.movi Dest, Src1, ImmDest # Src1 &lt;- ImmMerge the contents of the immediate Imm into the contents of Src1 and put the results into the Dest register.FlagsThis microop does not set any flags. 
It is optionally predicated.SextSign extend.sext Dest, Src1, ImmDest # Dest &lt;- sign_extend(Src1, Imm)Sign extend the value in the Src1 register starting at the bit position in the immediate Imm, and put the result in the Dest register.FlagsThis microop does not set any flags.ZextZero extend.zext Dest, Src1, ImmDest # Dest &lt;- zero_extend(Src1, Imm)Zero extend the value in the Src1 register starting at the bit position in the immediate Imm, and put the result in the Dest register.FlagsThis microop does not set any flags.RuflagRead user flag.ruflag Dest, ImmReads the user level flag stored in the bit position specified by the immediate Imm and stores it in the register Dest.The mapping between values of Imm and user level flags is shown in the following table.            Imm      Flag                  0      CF (carry flag)              2      PF (parity flag)              3      ECF (emulation carry flag)              4      AF (auxiliary flag)              5      EZF (emulation zero flag)              6      ZF (zero flag)              7      SF (sign flag)              10      DF (direction flag)              11      OF (overflow flag)      FlagsThe EZF flag is always set. In the future this may become optional.RuflagsRead all user flags.ruflags DestDest # user flagsStore the user level flags into the Dest register.FlagsThis microop does not set any flags.WruflagsWrite all user flags.wruflags Src1, Src2user flags # Src1 ^ Src2Set the user level flags to the exclusive or of the Src1 and Src2 registers.wruflagsi Src1, Immuser flags # Src1 ^ ImmSet the user level flags to the exclusive or of the Src1 register and the immediate Imm.FlagsSee above.RdipRead the instruction pointer.rdip DestDest # rIPSet the Dest register to the current value of rIP.FlagsThis microop does not set any flags.WripWrite the instruction pointer.wrip Src1, Src2rIP # Src1 + Src2Set the rIP to the sum of the Src1 and Src2 registers. 
This causes a macroop branch at the end of the current macroop.wripi Src1, ImmrIP # Src1 + ImmSet the rIP to the sum of the Src1 register and immediate Imm. This causes a macroop branch at the end of the current macroop.FlagsThis microop does not set any flags. It is optionally predicated.ChksCheck selector.Not yet implemented.Load/Store OpsLdLoad.ld Data, Seg, Sib, DispLoads the integer register Data from memory.LdfLoad floating point.ldf Data, Seg, Sib, DispLoads the floating point register Data from memory.LdmLoad multimedia.ldm Data, Seg, Sib, DispLoad the multimedia register Data from memory.This is not implemented and may never be.LdstLoad with store check.ldst Data, Seg, Sib, DispLoad the integer register Data from memory while also checking if a store to that location would succeed.This is not implemented currently.LdstlLoad with store check, locked.ldstl Data, Seg, Sib, DispLoad the integer register Data from memory while also checking if a store to that location would succeed, and also provide the semantics of the “LOCK” instruction prefix.This is not implemented currently.StStore.st Data, Seg, Sib, DispStores the integer register Data to memory.StfStore floating point.stf Data, Seg, Sib, DispStores the floating point register Data to memory.StmStore multimedia.stm Data, Seg, Sib, DispStore the multimedia register Data to memory.This is not implemented and may never be.StupdStore with base update.stupd Data, Seg, Sib, DispStore the integer register Data to memory and update the base register.LeaLoad effective address.lea Data, Seg, Sib, DispCalculates the address for this combination of parameters and stores it in Data.CdaCheck data address.cda Seg, Sib, DispCheck whether the data address is valid.This is not implemented currently.CdafCDA with cache line flush.cdaf Seg, Sib, DispCheck whether the data address is valid, and flush cache lines.This is not implemented currently.CiaCheck instruction address.cia Seg, Sib, DispCheck whether the instruction 
address is valid.This is not implemented currently.TiaTLB invalidate address.tia Seg, Sib, DispInvalidate the TLB entry which corresponds to this address.This is not implemented currently.Load immediate OpLimmlimm Dest, ImmStores the 64 bit immediate Imm into the integer register Dest.Floating Point OpsMovfpmovfp Dest, SrcDest # SrcMove the contents of the floating point register Src into the floating point register Dest.This instruction is predicated.Xorfpxorfp Dest, Src1, Src2Dest # Src1 ^ Src2Compute the bitwise exclusive or of the floating point registers Src1 and Src2 and put the result in the floating point register Dest.Sqrtfpsqrtfp Dest, SrcDest # sqrt(Src)Compute the square root of the floating point register Src and put the result in the floating point register Dest.Addfpaddfp Dest, Src1, Src2Dest # Src1 + Src2Compute the sum of the floating point registers Src1 and Src2 and put the result in the floating point register Dest.Subfpsubfp Dest, Src1, Src2Dest # Src1 - Src2Compute the difference of the floating point registers Src1 and Src2 and put the result in the floating point register Dest.Mulfpmulfp Dest, Src1, Src2Dest # Src1 * Src2Compute the product of the floating point registers Src1 and Src2 and put the result in the floating point register Dest.Divfpdivfp Dest, Src1, Src2Dest # Src1 / Src2Divide Src1 by Src2 and put the result in the floating point register Dest.Compfpcompfp Src1, Src2Compare floating point registers Src1 and Src2.Cvtf_i2dcvtf_i2d Dest, SrcConvert integer register Src into a double floating point value and store the result in the lower part of Dest.Cvtf_i2d_hicvtf_i2d_hi Dest, SrcConvert integer register Src into a double floating point value and store the result in the upper part of Dest.Cvtf_d2icvtf_d2i Dest, SrcConvert floating point register Src into an integer value and store the result in the integer register Dest.Special OpsFaultGenerate a fault.fault fault_codeUses the C++ code fault_code to allocate a Fault object to 
return.LddhaSet the default handler for a fault.This is not implemented currently.LdahaSet the alternate handler for a fault.This is not implemented currently.Sequencing OpsThese microops are used for control flow within microcode.BrMicrocode branch. This is never considered the last microop of a sequence. If it appears at the end of a macroop, it is assumed that it branches to microcode in the ROM.br targetmicropc # targetSet the micropc to the 16 bit immediate target.FlagsThis microop does not set any flags. It is optionally predicated.EretReturn from emulation. This instruction is always considered the last microop in a sequence. When executing from the ROM, it is the only way to return to normal instruction decoding.eretReturn from emulation.FlagsThis microop does not set any flags. It is optionally predicated.",
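The flag semantics described above for the add microop can be illustrated with a short model. This is an explanatory sketch only, not gem5 source code; the function name, the `bits` parameter, and the flag formulas are our own illustration of the table's definitions (carry out of the MSB, zero, parity, carry from bit 3 to bit 4, sign, and signed overflow):

```python
def add_flags(src1, src2, bits=32):
    """Illustrative model of the flags an `add` microop computes.

    Not gem5 code: a sketch of the flag definitions in the table above.
    """
    mask = (1 << bits) - 1
    result = (src1 + src2) & mask
    flags = {
        # CF/ECF: carry out of the most significant bit.
        'CF': int(src1 + src2 > mask),
        # ZF/EZF: whether the result was zero.
        'ZF': int(result == 0),
        # PF: parity of the low byte (set when the popcount is even).
        'PF': int(bin(result & 0xFF).count('1') % 2 == 0),
        # AF: carry from the fourth to fifth bit positions.
        'AF': ((src1 ^ src2 ^ result) >> 4) & 1,
        # SF: the sign (most significant) bit of the result.
        'SF': (result >> (bits - 1)) & 1,
        # OF: signed overflow (operands agree in sign, result differs).
        'OF': ((~(src1 ^ src2) & (src1 ^ result)) >> (bits - 1)) & 1,
    }
    return result, flags
```

For example, `add_flags(0xFFFFFFFF, 1)` wraps to zero with CF and ZF set but no signed overflow, while `add_flags(0x7FFFFFFF, 1)` sets SF and OF but not CF.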
        "url": "/documentation/general_docs/architecture_support/x86_microop_isa/"
      }
      ,
    
      "documentation-general-docs-building-extras": {
        "title": "Building EXTRAS",
        "content": "Building EXTRASThe EXTRAS SCons option is a way to add functionality in gem5 without adding your files to the gem5 source tree. Specifically, it allows you to identify one or more directories that will get compiled in with gem5 as if they appeared under the ‘src’ part of the gem5 tree, without requiring the code to be actually located under ‘src’. It’s present to allow users to compile in additional functionality (typically additional SimObject classes) that isn’t or can’t be distributed with gem5. This is useful for maintaining local code that isn’t suitable for incorporating into the gem5 source tree, or third-party code that can’t be incorporated due to an incompatible license. Because the EXTRAS location is completely independent of the gem5 repository, you can keep the code under a different version control system as well.The main drawback of the EXTRAS feature is that, by itself, it only supports adding code to gem5, not modifying any of the base gem5 code.One use of the EXTRAS feature is to support EIO traces. The trace reader for EIO is licensed under the SimpleScalar license, and due to the incompatibility of that license with gem5’s BSD license, the code to read these traces is not included in the gem5 distribution. Instead, the EIO code is distributed via a separate “encumbered” repository.The following examples show how to compile the EIO code. By adding to or modifying the extras path, any other suitable extra could be compiled in. To compile in code using EXTRAS, simply execute the following: scons EXTRAS=/path/to/encumbered build/ALPHA/gem5.optIn the root of this directory you should have a SConscript that uses the Source() and SimObject() scons functions that are used in the rest of gem5 to compile the appropriate sources and add any SimObjects of interest. 
If you want to add more than one directory, you can set EXTRAS to a colon-separated list of paths.Note that EXTRAS is a “sticky” parameter, so after a value is provided to scons once, the value will be reused for future scons invocations targeting the same build directory (build/ALPHA in this case) as long as it is not overridden. Thus you only need to specify EXTRAS the first time you build a particular configuration or if you want to override a previously specified value. To run a regression with EXTRAS use a command line similar to the following: ./util/regress --scons-opts=\"EXTRAS=/path/to/encumbered\" -j 2 quick",
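The SConscript mentioned above can be quite small. A minimal sketch follows; the file and object names are hypothetical examples, and Source(), SimObject(), and DebugFlag() are functions provided by gem5's build system, so this fragment only works inside a gem5 scons build:

```python
# SConscript placed in the root of the EXTRAS directory (hypothetical example).
Import('*')

Source('my_device.cc')      # compile this C++ file into gem5
SimObject('MyDevice.py')    # register the SimObject's Python description
DebugFlag('MyDevice')       # optional: a flag usable with --debug-flags
```

With this in place, `scons EXTRAS=/path/to/extras build/ALPHA/gem5.opt` compiles my_device.cc as if it lived under ‘src’.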
        "url": "/documentation/general_docs/building/EXTRAS"
      }
      ,
    
      "documentation-general-docs-building": {
        "title": "Building gem5",
        "content": "Building gem5Dependencies  git : gem5 uses git for version control.  gcc 4.8+: gcc is used to compile gem5. Version 4.8+ must be used. We do not presently support versions beyond 7. (Note: support for gcc 4 may be dropped, and versions &gt;7 may be supported, in future releases of gem5.)  SCons : gem5 uses SCons as its build environment.  Python 2.7+ : gem5 relies on the Python development libraries. (Due to the retirement of Python 2 we are likely to migrate to Python 3 in future releases of gem5.)  protobuf 2.1+ (Optional): The protobuf library is used for trace generation and playback.  Boost (Optional): The Boost library is a set of general purpose C++ libraries. It is a necessary dependency if you wish to use the SystemC implementation.If compiling gem5 on Debian, Ubuntu, or related Linux distributions, you may install all these dependencies using APT:sudo apt install build-essential git m4 scons zlib1g zlib1g-dev \\    libprotobuf-dev protobuf-compiler libprotoc-dev libgoogle-perftools-dev \\    python-dev python libboost-all-devGetting the codegit clone https://gem5.googlesource.com/public/gem5Building with SConsgem5’s build system is based on SCons, an open source build system implemented in Python. You can find more information about scons at http://www.scons.org.The main scons file is called SConstruct and is found in the root of the source tree. Additional scons files are named SConscript and are found throughout the tree, usually near the files they’re associated with.Within the root of the gem5 directory, gem5 can be built with SCons using:scons build/{ISA}/gem5.{variant} -j {cpus}where {ISA} is the target (guest) Instruction Set Architecture, and {variant} specifies the compilation settings. For most intents and purposes opt is a good target for compilation. The -j flag is optional and allows for parallelization of compilation with {cpus} specifying the number of threads. A single-threaded compilation from scratch can take up to 2 hours on some systems. 
We therefore strongly advise allocating more threads if possible.The valid ISAs are:  ARCH  ARM  NULL  MIPS  POWER  SPARC  X86The valid build variants are:  debug has optimizations turned off. This ensures that variables won’t be optimized out, functions won’t be unexpectedly inlined, and control flow will not behave in surprising ways. That makes this version easier to work with in tools like gdb, but without optimizations this version is significantly slower than the others. You should choose it when using tools like gdb and valgrind and don’t want any details obscured, but otherwise more optimized versions are recommended.  opt has optimizations turned on and debugging functionality like asserts and DPRINTFs left in. This gives a good balance between the speed of the simulation and insight into what’s happening in case something goes wrong. This version is best in most circumstances.  fast has optimizations turned on and debugging functionality compiled out. This pulls out all the stops performance wise, but does so at the expense of run time error checking and the ability to turn on debug output. This version is recommended if you’re very confident everything is working correctly and want to get peak performance from the simulator.  prof is similar to gem5.fast but also includes instrumentation that allows it to be used with the gprof profiling tool. This version is not needed very often, but can be used to identify the areas of gem5 that should be focused on to improve performance.  perf also includes instrumentation, but does so using google perftools, allowing it to be profiled with google-pprof. This profiling version is complementary to gem5.prof, and can probably replace it for all Linux-based systems.These versions are summarized in the following table.            
Build variant      Optimizations      Run time debugging support      Profiling support                  debug             X                     opt      X      X                     fast      X                            prof      X             X              perf      X             X      For example, to build gem5 on 4 threads with opt and targeting x86:scons build/X86/gem5.opt -j 4UsageOnce compiled, gem5 can then be run using:./build/{ISA}/gem5.{variant} [gem5 options] {simulation script} [script options]Running with the --help flag will display all the available options:Usage=====  gem5.opt [gem5 options] script.py [script options]gem5 is copyrighted software; use the --copyright option for details.Options=======--version               show program's version number and exit--help, -h              show this help message and exit--build-info, -B        Show build information--copyright, -C         Show full copyright information--readme, -R            Show the readme--outdir=DIR, -d DIR    Set the output directory to DIR [Default: m5out]--redirect-stdout, -r   Redirect stdout (&amp; stderr, without -e) to file--redirect-stderr, -e   Redirect stderr to file--stdout-file=FILE      Filename for -r redirection [Default: simout]--stderr-file=FILE      Filename for -e redirection [Default: simerr]--listener-mode={on,off,auto}                        Port (e.g., gdb) listener mode (auto: Enable if                        running interactively) [Default: auto]--listener-loopback-only                        Port listeners will only accept connections over the                        loopback device--interactive, -i       Invoke the interactive interpreter after running the                        script--pdb                   Invoke the python debugger before running the script--path=PATH[:PATH], -p PATH[:PATH]                        Prepend PATH to the system path when invoking the                        script--quiet, -q             Reduce verbosity--verbose, -v           
Increase verbosityStatistics Options--------------------stats-file=FILE       Sets the output file for statistics [Default:                        stats.txt]--stats-help            Display documentation for available stat visitorsConfiguration Options-----------------------dump-config=FILE      Dump configuration output file [Default: config.ini]--json-config=FILE      Create JSON output of the configuration [Default:                        config.json]--dot-config=FILE       Create DOT &amp; pdf outputs of the configuration                        [Default: config.dot]--dot-dvfs-config=FILE  Create DOT &amp; pdf outputs of the DVFS configuration                        [Default: none]Debugging Options-------------------debug-break=TICK[,TICK]                        Create breakpoint(s) at TICK(s) (kills process if no                        debugger attached)--debug-help            Print help on debug flags--debug-flags=FLAG[,FLAG]                        Sets the flags for debug output (-FLAG disables a                        flag)--debug-start=TICK      Start debug output at TICK--debug-end=TICK        End debug output at TICK--debug-file=FILE       Sets the output file for debug [Default: cout]--debug-ignore=EXPR     Ignore EXPR sim objects--remote-gdb-port=REMOTE_GDB_PORT                        Remote gdb base port (set to 0 to disable listening)Help Options--------------list-sim-objects      List all built-in SimObjects, their params and default                        valuesUsing EXTRASThe EXTRAS scons variable can be used to build additional directories of source files into gem5 by setting it to a colon-delimited list of paths to these additional directories. EXTRAS is a handy way to build on top of the gem5 code base without mixing your new source with the upstream source. You can then manage your new body of code however you need to independently from the main code base.",
        "url": "/documentation/general_docs/building"
      }
      ,
    
      "documentation-general-docs-checkpoints": {
        "title": "Checkpoints",
        "content": "CheckpointsCheckpoints are essentially snapshots of a simulation. You would want to use a checkpoint when your simulation takes an extremely long time (which is almost always the case) so you can resume from that checkpoint at a later time with the DerivO3CPU.CreationFirst of all, you need to create a checkpoint. Each checkpoint is saved in a new directory named ‘cpt.TICKNUMBER’, where TICKNUMBER refers to the tick value at which this checkpoint was created. There are several ways in which a checkpoint can be created:  After booting the gem5 simulator, execute the command m5 checkpoint. One can execute the command manually using m5term, or include it in a run script to do this automatically after the Linux kernel has booted up.  There is a pseudo instruction that can be used for creating checkpoints. For example, one may include this pseudo instruction in an application program, so that the checkpoint is created when the application has reached a certain state.  The option --take-checkpoints can be provided to the python scripts (fs.py, ruby_fs.py) so that checkpoints are dumped periodically. The option --checkpoint-at-end can be used for creating the checkpoint at the end of the simulation. Take a look at the file configs/common/Options.py for these options.While creating checkpoints with the Ruby memory model, it is necessary to use the MOESI hammer protocol. This is because checkpointing the correct memory state requires that the caches are flushed to the memory. 
This flushing operation is currently supported only with the MOESI hammer protocol.RestoringRestoring from a checkpoint can usually be done easily from the command line, e.g.:  build/ALPHA/gem5.debug configs/example/fs.py -r N  OR  build/ALPHA/gem5.debug configs/example/fs.py --checkpoint-restore=NThe number N is an integer that represents the checkpoint number, which usually starts from 1 and then increases incrementally to 2, 3, 4…By default, gem5 assumes that the checkpoint is to be restored using Atomic CPUs. This may not work if the checkpoint was recorded using a Timing / Detailed / Inorder CPU. One can specify the option --restore-with-cpu &lt;CPU Type&gt; on the command line. The cpu type supplied with this option is then used for restoring from the checkpoint.Detailed example: ParsecIn the following section we describe how checkpoints are created for workloads in the PARSEC benchmark suite. A similar procedure can be followed to create checkpoints for other workloads beyond the PARSEC suite. The following are the high-level steps for creating a checkpoint:  Annotate each workload with the start and end of the Region of Interest and with the start and end of work units in the program.  Take a checkpoint at the start of the Region of Interest.  Simulate the whole program in the Region of Interest and periodically take checkpoints.  Analyse the statistics corresponding to the periodic checkpoints and select the most interesting section of the program execution.  
Take a warm-up cache trace for Ruby before reaching the most interesting portion of the program and take the final checkpoint.In the following sections we explain each of the above steps in more detail.Annotating workloadsAnnotation is required for two purposes: for defining the region of the program beyond its initialization section, and for defining logical units of work in each of the workloads.Workloads in the PARSEC benchmark suite already have annotations demarcating the start and end of the portion of the program excluding the program initialization and finalization sections. We just use gem5-specific annotations for the Region of Interest. The start of the Region of Interest (ROI) is marked by m5_roi_begin() and the end of the ROI is demarcated by m5_roi_end().Due to large simulation times it is not always possible to simulate the whole program. Moreover, unlike single-threaded programs, simulating for a given number of instructions in multi-threaded workloads is not a correct way to simulate a portion of a program, due to the possible presence of instructions spinning on a synchronization variable. Thus it is important to define semantically meaningful logical units of work in each workload. Simulating for a given number of work units in a multi-threaded workload is a reasonable way of simulating a portion of the workload, as it avoids the problem of instructions spinning on synchronization variables.Switchover/FastforwardingSamplingSampling (switching between functional and detailed models) can be implemented via your Python script. In your script you can direct the simulator to switch between two sets of CPUs. To do this, in your script set up a list of tuples of (oldCPU, newCPU). If there are multiple CPUs you wish to switch simultaneously, they can all be added to that list. 
For example:

run_cpu1 = SimpleCPU()
switch_cpu1 = DetailedCPU(switched_out=True)
run_cpu2 = SimpleCPU()
switch_cpu2 = FooCPU(switched_out=True)
switch_cpu_list = [(run_cpu1, switch_cpu1), (run_cpu2, switch_cpu2)]

Note that the CPU that does not immediately run should have the parameter “switched_out=True”. This keeps those CPUs from adding themselves to the list of CPUs to run; they will instead get added when you switch them in.In order for gem5 to instantiate all of your CPUs, you must make the CPUs that will be switched in a child of something that is in the configuration hierarchy. Unfortunately, at the moment some configuration limitations force the switch CPU to be placed outside of the System object. The Root object is the next most convenient place to place the CPU, as shown below:

m5.simulate(500)  # simulate for 500 cycles
m5.switchCpus(switch_cpu_list)
m5.simulate(500)  # simulate another 500 cycles after switching

Note that gem5 may have to simulate for a few cycles prior to switching CPUs due to any outstanding state that may be present in the CPUs being switched out.",
        "url": "/documentation/general_docs/checkpoints/"
      }
      ,
    
      "documentation-general-docs-compiling-workloads": {
        "title": "Compiling Workloads",
        "content": "Compiling WorkloadsCross CompilersA cross compiler is a compiler set up to run on one ISA but generate binaries that run on another. You may need one if you intend to simulate a system which uses a particular ISA, Alpha for instance, but don’t have access to actual Alpha hardware.There are various sources for cross compilers; the following are some of them.  ARM.  RISC-V.QEMUAlternatively, you can use QEMU and a disk image to run the desired ISA in emulation. To create more recent disk images, see this page. The following is a YouTube video of working with image files using QEMU on Ubuntu 12.04 64-bit.",
        "url": "/documentation/general_docs/compiling_workloads/"
      }
      ,
    
      "documentation-general-docs-cpu-models-execution-basics": {
        "title": "Execution Basics",
        "content": "Execution BasicsStaticInstsThe StaticInst class provides all static information and methods for a binary instruction.It holds the following information/methods:  Flags to tell what kind of instruction it is (integer, floating point, branch, memory barrier, etc.)  The op class of the instruction  The number of source and destination registers  The number of integer and FP registers used  A method to decode a binary instruction into a StaticInst  A virtual function execute(), which defines the specific architectural actions taken for an instruction (e.g. read r1 and r2, add them, and store the result in r3)  Virtual functions to handle starting and completing memory operations  Virtual functions to execute the address calculation and memory access separately, for models that split memory operations into two operations  A method to disassemble the instruction, printing it out in a human-readable format (e.g. addq r1 r2 r3)It does not hold dynamic information, such as the PC of the instruction or the values of the source registers or the result. This allows a 1-to-1 mapping of StaticInsts to unique binary machine instructions. We take advantage of this fact by caching the mapping of a binary instruction to a StaticInst in a hash_map, allowing us to decode a binary instruction only once and directly use the cached StaticInst the rest of the time.Each ISA instruction derives from StaticInst and implements its own constructor, the execute() function, and, if it is a memory instruction, the memory access functions. See ISA_description_system for details about how these ISA instructions are specified.DynInstsThe DynInst is used to hold dynamic information about instructions. 
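The decode-caching scheme described above for StaticInsts can be sketched in plain Python (a conceptual sketch, not gem5’s actual C++ implementation; the names here are hypothetical):

```python
class StaticInst:
    """Static, per-encoding information only (no PC, no operand values)."""
    def __init__(self, machine_code):
        self.machine_code = machine_code

_decode_cache = {}

def decode(machine_code):
    # Decode each unique binary encoding only once; afterwards the cached
    # StaticInst is reused, giving a 1-to-1 encoding-to-object mapping.
    inst = _decode_cache.get(machine_code)
    if inst is None:
        inst = StaticInst(machine_code)
        _decode_cache[machine_code] = inst
    return inst

print(decode(0x43E81401) is decode(0x43E81401))  # True: decoded only once
```

Because the StaticInst holds no dynamic state, sharing one object across every dynamic occurrence of the same encoding is safe.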
This is necessary for more detailed models or out-of-order models, both of which may need extra information beyond the StaticInsts in order to correctly execute instructions.Some of the dynamic information that it stores includes:  The PC of the instruction  The renamed register indices of the source and destination registers  The predicted next-PC  The instruction result  The thread number of the instruction  The CPU the instruction is executing on  Whether or not the instruction is squashedAdditionally the DynInst provides the ExecContext interface. When ISA instructions are executed, the DynInst is passed in as the ExecContext, handling all accesses of the ISA to CPU state.Detailed CPU models can derive from DynInst and create their own specific DynInst subclasses that implement any additional state or functions that might be needed. See src/cpu/o3/alpha/dyn_inst.hh for an example of this.Microcode supportExecContextThe ExecContext describes the interface that the ISA uses to access CPU state. Although there is a file src/cpu/exec_context.hh, it is purely for documentation purposes and classes do not derive from it. Instead, ExecContext is an implicit interface that is assumed by the ISA.The ExecContext interface provides methods to:  Read and write PC information  Read and write integer, floating point, and control registers  Read and write memory  Record and return the address of a memory access, prefetching, and trigger a system call  Trigger some full-system mode functionalityExample implementations of the ExecContext interface include:  SimpleCPU  DynInstSee the ISA description page for more details on how an instruction set is implemented.ThreadContextThreadContext is the interface to all state of a thread for anything outside of the CPU. It provides methods to read or write state that might be needed by external objects, such as the PC, next PC, integer and FP registers, and IPRs. 
It also provides functions to get pointers to important thread-related classes, such as the ITB, DTB, System, kernel statistics, and memory ports. It is an abstract base class; the CPU must create its own ThreadContext by either deriving from it, or using the templated ProxyThreadContext class.ProxyThreadContextThe ProxyThreadContext class provides a way to implement a ThreadContext without having to derive from it. ThreadContext is an abstract class, so anything that derives from it and uses its interface will pay the overhead of virtual function calls. This class is created to enable a user-defined Thread object to be used wherever ThreadContexts are used, without paying the overhead of virtual function calls when it is used by itself. The user-defined object must simply provide all the same functions as the normal ThreadContext, and the ProxyThreadContext will forward all calls to the user-defined object. See the code of SimpleThread for an example of using the ProxyThreadContext.Difference vs. ExecContextThe ThreadContext is slightly different from the ExecContext. The ThreadContext provides access to an individual thread’s state; an ExecContext provides ISA access to the CPU (meaning it is implicitly multithreaded on SMT systems). Additionally the ThreadContext is an abstract class that exactly defines the interface; the ExecContext is a more implicit interface that must be implemented so that the ISA can access whatever state it needs. The function calls to access state are slightly different between the two. The ThreadContext provides read/write register methods that take in an architectural register index. The ExecContext provides read/write register methods that take in a StaticInst and an index, where the index refers to the i’th source or destination register of that StaticInst. 
Additionally the ExecContext provides read and write methods to access memory, while the ThreadContext does not provide any methods to access memory.ThreadStateThe ThreadState class is used to hold thread state that is common across CPU models, such as the thread ID, thread status, kernel statistics, memory port pointers, and statistics such as the number of instructions completed. Each CPU model can derive from ThreadState and build upon it, adding in thread state that is deemed appropriate. An example of this is SimpleThread, where all of the thread’s architectural state has been added in. However, it is not necessary (or even feasible in some cases) for all of the thread’s state to be centrally located in a ThreadState derived class. The DetailedCPU keeps register values and rename maps in its own classes outside of ThreadState. ThreadState is only used to provide a more convenient way to centrally locate some state, and provide sharing across CPU models.FaultsRegistersRegister types - float, int, miscIndexing - register spaces stuffSee Register Indexing for a more thorough treatment.A “nickel tour” of flattening and register indexing in the CPU models.First, an instruction has identified that it needs register such and such as determined by its encoding (or the fact that it always uses a certain register, or …). For the sake of argument, let’s say we’re talking about SPARC, the register is %g1, and the second bank of globals is active. From the instruction’s point of view, the unflattened register is %g1, which, likely, is just represented by the index 1.Next, we need to map from the instruction’s view of the register file(s) down to actual storage locations. Think of this like virtual memory. The instruction is working within an index space which is like a virtual address space, and it needs to be mapped down to the flattened space which is like physical memory. 
Here, the index 1 is likely mapped to, say, 9, where 0-7 is the first bank of globals and 8-15 is the second.This is the point where the CPU gets involved. The index 9 refers to an actual register the instruction expects to access, and it’s the CPU’s job to make that happen. Before this point, all the work was done by the ISA with no insight available to the CPU, and beyond this point all the work is done by the CPU with no insight available to the ISA.The CPU is free to provide a register directly like the simple CPU by having an array and just reading and writing the 9th element on behalf of the instruction. The CPU could, alternatively, do something complicated like renaming and mapping the flattened index further into a physical register like O3.One important property of all this, which makes sense if you think about the virtual memory analogy, is that the size of the index space before flattening has nothing to do with the size after. The virtual memory space could be very large (presumably with gaps) and map to a smaller physical space, or it could be small and map to a larger physical space where the extra is for, say, other virtual spaces used at other times. You need to make sure you’re using the right size (post flattening) to size your tables because that’s the space of possible options.One other tricky part comes from the fact that we add offsets into the indices to distinguish ints from floats from miscs. Those offsets might be one thing in the preflattening world, but then need to be something else in the post flattening world to keep things from landing on top of each other without leaving gaps. It’s easy to make a mistake here, and it’s one of the reasons I don’t like this offset idea as a way to keep the different types separate. I’d rather see a two dimensional index where the second coordinate was a register type. 
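The mapping described in the %g1 example above can be sketched as simple arithmetic (the bank size of 8 and the 0-7/8-15 layout are taken from the text; everything else here is a hypothetical illustration, not gem5 code):

```python
def flatten_global(rel_index, bank):
    """Map an unflattened SPARC global register index to its storage slot:
    slots 0-7 hold the first bank of globals, 8-15 the second."""
    return bank * 8 + rel_index

# %g1 (unflattened index 1) with the second bank of globals active:
print(flatten_global(1, 1))  # -> 9
```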
But in the world as it exists today, this is something you have to keep track of.PCsRegister IndexingCPU register indexing in gem5 is complicated by the need to support multiple ISAs with sometimes very different register semantics (register windows, condition codes, mode-based alternate register sets, etc.). In addition, this support has evolved gradually as new ISAs have been added, so older code may not take advantage of newer features or terminology.Types of Register IndicesThere are three types of register indices used internally in the CPU models: relative, unified, and flattened.RelativeA relative register index is the index that is encoded in a machine instruction. There is a separate index space for each class of register (integer, floating point, etc.), starting at 0. The register class is implied by the opcode. Thus a value of “1” in a source register field may mean integer register 1 (e.g., “%r1”) or floating point register 1 (e.g., “%f1”) depending on the type of the instruction.UnifiedWhile relative register indices are good for keeping instruction encodings compact, they are ambiguous, and thus not convenient for things like managing dependencies. To avoid this ambiguity, the decoder maps the relative register indices into a unified register space by adding class-specific offsets to relocate each relative index range into a unique position. Integer registers are unmodified, and continue to start at zero. Floating-point register indices are offset by (at least) the number of integer registers, so that the first FP register (e.g., “%f0”) gets a unified index that is greater than that of the last integer register. Similarly, miscellaneous (a.k.a. control) registers are mapped past the end of the FP register index space.FlattenedUnified register indices provide an unambiguous description of all the registers that are accessible as instruction operands at a given point in the execution. 
Unfortunately, due to the complex features of some ISAs, they do not always unambiguously identify the actual state that the instruction is referencing. For example, in ISAs with register windows (notably SPARC), a particular register identifier such as “%o0” will refer to a different register after a “save” or “restore” operation than it did previously. Several ISAs have registers that are hidden in normal operation, but get mapped on top of ordinary registers when an interrupt occurs (e.g., ARM’s mode-specific registers), or under explicit supervisor control (e.g., SPARC’s “alternate globals”).We solve this problem by maintaining a flattened register space which provides a distinct index for every unique register storage location. For example, the integer portion of the SPARC flattened register space has distinct indices for the globals and the alternate globals, as well as for each of the available register windows. The “flattening” process of translating from a unified or relative register index to a flattened register index varies by ISA. On some ISAs, the mapping is trivial, while others use table lookups to do the translation.A key distinction between the generation of unified and flattened register indices is that the former can always be done statically while the latter often depends on dynamic processor state. That is, the translation from relative to unified indices depends only on the context provided by the instruction itself (which is convenient as the translation is done in the decoder). In contrast, the mapping to a flattened register index may depend on processor state such as the interrupt level or the current window pointer on SPARC.Combining Register Index TypesAlthough the typical progression for modifying register indices is relative -&gt; unified -&gt; flattened, it turns out that relative vs. unified and flattened vs. unflattened are orthogonal attributes. Relative vs. 
unified indicates whether the index is relative to the base register for its register class (integer, FP, or misc) or has the base offset for its class added in. Flattened vs. unflattened indicates whether the index has been adjusted to account for runtime context such as register window adjustments or alternate register file modes. Thus a relative flattened register index is one in which the runtime context has been accounted for, but which is still expressed relative to the base offset for its class.A single set of class-specific offsets is used to generate unified indices from relative indices regardless of whether the indices are flattened or unflattened. Thus the offsets must be large enough to separate the register classes even when flattened addresses are being used. As a result, the unflattened unified register space is often discontiguous.IllustrationsAs an illustration, consider a hypothetical architecture with four integer registers (%r0-%r3), three FP registers (%f0-%f2), and two misc/control registers (%msr0-%msr1). In addition, the architecture supports a complete set of alternate integer and FP registers for fast context switching.The resulting register file layout, along with the unified flattened register file indices, is shown at right. Although the indices in the picture range from 0 to 15, the actual set of valid indices depends on the type of index and (for relative indices) the register class as well:

  Relative unflattened:  Int: 0-3; FP: 0-2; Misc: 0-1
  Unified unflattened:   0-3, 8-10, 14-15
  Relative flattened:    Int: 0-7; FP: 0-5; Misc: 0-1
  Unified flattened:     0-15

In this example, register %f1 in the alternate FP register file could be referred to via the relative flattened index 4 as well as the relative unflattened index 1, the unified unflattened index 9, or the unified flattened index 12. 
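The index arithmetic of this hypothetical architecture can be checked with a small sketch (the offsets are inferred from the ranges above: int base 0, FP base 8, misc base 14, with the alternate file of a class placed immediately after the primary file; this is an illustration, not gem5 code):

```python
CLASS_BASE = {"int": 0, "fp": 8, "misc": 14}    # unified-space class offsets
REGS_PER_FILE = {"int": 4, "fp": 3, "misc": 2}  # primary register file sizes

def unify(rel, cls):
    # relative -> unified: add the class-specific base offset
    return CLASS_BASE[cls] + rel

def flatten(rel, cls, alt_file):
    # unflattened -> flattened: alternate-file registers follow the primaries
    return rel + (REGS_PER_FILE[cls] if alt_file else 0)

rel = 1  # %f1 in the alternate FP file
print(flatten(rel, "fp", True))               # relative flattened  -> 4
print(unify(rel, "fp"))                       # unified unflattened -> 9
print(unify(flatten(rel, "fp", True), "fp"))  # unified flattened   -> 12
```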
Note that the difference between the relative and unified indices is always 8 (regardless of flattening), and the difference between the unflattened and flattened indices is 3 (regardless of relative vs. unified status). Caveats  Although the gem5 code is unfortunately not always clear about which type of register index is expected by a particular function, functions whose name incorporates a register class (e.g., readIntReg()) expect a relative register index, and functions that expect a flattened index often have “flat” in the function name.  Although the general case is complicated, the common case can be deceptively simple. For example, because integer registers start at the beginning of the unified register space, relative and unified register indices are identical for integer registers. Furthermore, in an architecture with no (or rarely-used) alternate integer registers, the unflattened and flattened indices are (almost always) the same as well, meaning that all four types of register indices are interchangeable in this case. While this situation seems to be a simplification, it also tends to hide bugs where the wrong register index type is used.  The description above is intended to illustrate the typical usage of these index types. There may be exceptions that don’t precisely follow this description, but I got tired of writing “typically” in every sentence.  The terms ‘relative’ and ‘unified’ were invented for use in this documentation, so you are unlikely to see them in the code until the code starts catching up with this page.  This discussion pertains only to the architectural registers. An out-of-order CPU model such as O3 adds another layer of complexity by renaming these architectural registers (using the flattened register indices) to an underlying physical register file.",
        "url": "/documentation/general_docs/cpu_models/execution_basics"
      }
      ,
    
      "documentation-general-docs-cpu-models": {
        "title": "gem5's CPU models",
        "content": "",
        "url": "/documentation/general_docs/cpu_models/"
      }
      ,
    
      "documentation-general-docs-cpu-models-minor-cpu": {
        "title": "Minor CPU Model",
        "content": "Minor CPU ModelThis document contains a description of the structure and function of the Minor gem5 in-order processor model. It is recommended reading for anyone who wants to understand Minor’s internal organisation, design decisions, C++ implementation and Python configuration. A familiarity with gem5 and some of its internal structures is assumed. This document is meant to be read alongside the Minor source code and to explain its general structure without being too slavish about naming every function and data type.What is Minor?Minor is an in-order processor model with a fixed pipeline but configurable data structures and execute behaviour. It is intended to be used to model processors with strict in-order execution behaviour and allows visualisation of an instruction’s position in the pipeline through the MinorTrace/minorview.py format/tool. The intention is to provide a framework for micro-architecturally correlating the model with a particular, chosen processor with similar capabilities.Design PhilosophyMultithreadingThe model isn’t currently capable of multithreading but there are THREAD comments in key places where stage data needs to be arrayed to support multithreading.Data structuresDecorating data structures with large amounts of life-cycle information is avoided. Only instructions (MinorDynInst) contain a significant proportion of their data content whose values are not set at construction.All internal structures have fixed sizes on construction. Data held in queues and FIFOs (MinorBuffer, FUPipeline) should have a BubbleIF interface to allow a distinct ‘bubble’/no data value option for each type.Inter-stage ‘struct’ data is packaged in structures which are passed by value. Only MinorDynInst, the line data in ForwardLineData and the memory-interfacing objects Fetch1::FetchRequest and LSQ::LSQRequest are ::new allocated while running the model.Model structureObjects of class MinorCPU are provided by the model to gem5. 
MinorCPU implements the interfaces of (cpu.hh) and can provide data and instruction interfaces for connection to a cache system. The model is configured in a similar way to other gem5 models through Python. That configuration is passed on to MinorCPU::pipeline (of class Pipeline) which actually implements the processor pipeline.The hierarchy of major unit ownership from MinorCPU down looks like this:

MinorCPU
--- Pipeline - container for the pipeline, owns the cyclic 'tick' event mechanism and the idling (cycle skipping) mechanism.
--- --- Fetch1 - instruction fetch unit responsible for fetching cache lines (or parts of lines) from the I-cache interface.
--- --- --- Fetch1::IcachePort - interface to the I-cache from Fetch1.
--- --- Fetch2 - line to instruction decomposition.
--- --- Decode - instruction to micro-op decomposition.
--- --- Execute - instruction execution and data memory interface.
--- --- --- LSQ - load store queue for memory ref. instructions.
--- --- --- LSQ::DcachePort - interface to the D-cache from Execute.

Key data structuresInstruction and line identity: InstId (dyn_inst.hh)
- T/S.P/L - for fetched cache lines
- T/S.P/L/F - for instructions before Decode
- T/S.P/L/F.E - for instructions from Decode onwards
for example:
- 0/10.12/5/6.7

InstId fields are:

  InstId::threadId (symbol T) - generated by Fetch1; checked everywhere the thread number is needed. The thread number (currently always 0).
  InstId::streamSeqNum (symbol S) - generated by Execute; checked by Fetch1, Fetch2 and Execute (to discard lines/insts). The stream sequence number as chosen by Execute. Stream sequence numbers change after changes of PC (branches, exceptions) in Execute and are used to separate pre- and post-branch instruction streams.
  InstId::predictionSeqNum (symbol P) - generated by Fetch2; checked by Fetch2 (while discarding lines after prediction). Prediction sequence numbers represent branch prediction decisions. This is used by Fetch2 to mark lines/instructions according to the last followed branch prediction made by Fetch2. Fetch2 can signal to Fetch1 that it should change its fetch address and mark lines with a new prediction sequence number (which it will only do if the stream sequence number Fetch1 expects matches that of the request).
  InstId::lineSeqNum (symbol L) - generated by Fetch1; (just for debugging). The line fetch sequence number of this cache line or the line this instruction was extracted from.
  InstId::fetchSeqNum (symbol F) - generated by Fetch2; checked by Fetch2 (as the inst. sequence number for branches). The instruction fetch order assigned by Fetch2 when lines are decomposed into instructions.
  InstId::execSeqNum (symbol E) - generated by Decode; checked by Execute (to check instruction identity in queues/FUs/LSQ). The instruction order after micro-op decomposition.

The sequence number fields are all independent of each other and although, for instance, InstId::execSeqNum for an instruction will always be &gt;= InstId::fetchSeqNum, the comparison is not useful. The originating stage of each sequence number field keeps a counter for that field which can be incremented in order to generate new, unique numbers.Instructions: MinorDynInst (dyn_inst.hh)MinorDynInst represents an instruction’s progression through the pipeline. An instruction can be three things:

  A bubble (MinorDynInst::isBubble()) - no instruction at all, just a space-filler
  A fault (MinorDynInst::isFault()) - a fault to pass down the pipeline in an instruction’s clothing
  A decoded instruction (MinorDynInst::isInst()) - instructions are actually passed to the gem5 decoder in Fetch2 and so are created fully decoded. MinorDynInst::staticInst is the decoded instruction form.

Instructions are reference counted using the gem5 RefCountingPtr (base/refcnt.hh) wrapper. 
They therefore usually appear as MinorDynInstPtr in code. Note that, as RefCountingPtr initialises as nullptr rather than as an object that supports BubbleIF::isBubble, passing raw MinorDynInstPtrs to Queues and other similar structures from stage.hh without boxing is dangerous.ForwardLineData (pipe_data.hh)ForwardLineData is used to pass cache lines from Fetch1 to Fetch2. Like MinorDynInsts, they can be bubbles (ForwardLineData::isBubble()), fault-carrying, or can contain a line (or partial line) fetched by Fetch1. The data carried by ForwardLineData is owned by a Packet object returned from memory and is explicitly memory managed, and so must be deleted once processed (by Fetch2 deleting the Packet).ForwardInstData (pipe_data.hh)ForwardInstData can contain up to ForwardInstData::width() instructions in its ForwardInstData::insts vector. This structure is used to carry instructions between Fetch2, Decode and Execute and to store input buffer vectors in Decode and Execute.Fetch1::FetchRequest (fetch1.hh)FetchRequests represent I-cache line fetch requests. They are used in the memory queues of Fetch1 and are pushed into/popped from Packet::senderState while traversing the memory system.FetchRequests contain a memory system Request (mem/request.hh) for that fetch access, a packet (Packet, mem/packet.hh), if the request gets to memory, and a fault field that can be populated with a TLB-sourced prefetch fault (if any).LSQ::LSQRequest (execute.hh)LSQRequests are similar to FetchRequests but for D-cache accesses. They carry the instruction associated with a memory access.The pipeline

------------------------------------------------------------------------------
    Key:
    [] : inter-stage Buffer
    ,--.
    |  | : pipeline stage
    `--'
    ---&gt; : forward communication
    &lt;--- : backward communication
    rv : reservation information for input buffers

             ,------.     ,------.     ,------.     ,-------.
    (from  --[]-v-&gt;|Fetch1|-[]-&gt;|Fetch2|-[]-&gt;|Decode|-[]-&gt;|Execute|--&gt; (to Fetch1
    Execute)    |  |      |&lt;-[]-|      |&lt;-rv-|      |&lt;-rv-|       |     &amp; Fetch2)
             |  `------'&lt;-rv-|      |     |      |     |       |
             `--------------&gt;|      |     |      |     |       |
                             `------'     `------'     `-------'
------------------------------------------------------------------------------

The four pipeline stages are connected together by MinorBuffer FIFO (stage.hh, derived ultimately from TimeBuffer) structures which allow inter-stage delays to be modelled. There is a MinorBuffer between adjacent stages in the forward direction (for example: passing lines from Fetch1 to Fetch2) and, between Fetch2 and Fetch1, a buffer in the backwards direction carrying branch predictions.Stages Fetch2, Decode and Execute have input buffers which, each cycle, can accept input data from the previous stage and can hold that data if the stage is not ready to process it. Input buffers store data in the same form as it is received, and so Decode and Execute’s input buffers contain the output instruction vector (ForwardInstData (pipe_data.hh)) from their previous stages, with the instructions and bubbles in the same positions as a single buffer entry.Stage input buffers provide a Reservable (stage.hh) interface to their previous stages, to allow slots to be reserved in their input buffers, and communicate their input buffer occupancy backwards to allow the previous stage to plan whether it should make an output in a given cycle.Event handling: MinorActivityRecorder (activity.hh, pipeline.hh)Minor is essentially a cycle-callable model with some ability to skip cycles based on pipeline activity. External events are mostly received by callbacks (e.g. 
Fetch1::IcachePort::recvTimingResp) and cause the pipeline to be woken up to service advancing request queues.Ticked (sim/ticked.hh) is a base class bringing together an evaluate member function and a provided SimObject. It provides a Ticked::start/stop interface to start and pause clock events from being periodically issued.Pipeline is a derived class of Ticked.During evaluate calls, stages can signal that they still have work to do in the next cycle by calling either MinorCPU::activityRecorder-&gt;activity() (for non-callback related activity) or MinorCPU::wakeupOnEvent() (for stage callback-related 'wakeup' activity).Pipeline::evaluate contains calls to evaluate for each unit and a test for pipeline idling which can turn off the clock tick if no unit has signalled that it may become active next cycle.Within Pipeline (pipeline.hh), the stages are evaluated in reverse order (and so will ::evaluate in reverse order) and their backwards data can be read immediately after being written in each cycle, allowing output decisions to be ‘perfect’ (allowing synchronous stalling of the whole pipeline). Branch predictions from Fetch2 to Fetch1 can also be transported in 0 cycles, making fetch1ToFetch2BackwardDelay the only configurable delay, which can be set as low as 0 cycles.The MinorCPU::activateContext and MinorCPU::suspendContext interface can be called to start and pause threads (threads in the MT sense) and to start and pause the pipeline. 
Executing instructions can call thisinterface (indirectly through the ThreadContext) to idle the CPU/their threads.Each pipeline stageIn general, the behaviour of a stage (each cycle) is:    evaluate:        push input to inputBuffer        setup references to input/output data slots        do 'every cycle' 'step' tasks        if there is input and there is space in the next stage:            process and generate a new output            maybe re-activate the stage        send backwards data        if the stage generated output to the following FIFO:            signal pipe activity        if the stage has more processable input and space in the next stage:            re-activate the stage for the next cycle        commit the push to the inputBuffer if that data hasn't all been usedThe Execute stage differs from this model as its forward output (branch) datais unconditionally sent to Fetch1 and Fetch2. To allow this behaviour, Fetch1and Fetch2 must be unconditionally receptive to that data.Fetch1 stageFetch1 isresponsible for fetching cache lines or partial cache lines from the I-cacheand passing them on to Fetch2 to be decomposedinto instructions. It can receive ‘change of stream’ indications from bothExecute andFetch2 tosignal that it should change its internal fetch address and tag newly fetchedlines with new stream or prediction sequence numbers. When both Execute andFetch2 signalchanges of stream at the same time, Fetch1 takesExecute’schange.Every line issued by Fetch1 will bear aunique line sequence number which can be used for debugging stream changes.When fetching from the I-cache, Fetch1  will ask fordata from the current fetch address (Fetch1::pc) up to the end of the ‘datasnap’ size set in the parameter fetch1LineSnapWidth. 
Subsequent autonomous line fetches will fetch whole lines at a snap boundary and of size fetch1LineWidth.Fetch1 will only initiate a memory fetch if it can reserve space in Fetch2’s input buffer. That input buffer serves as the fetch queue/LFL for the system.Fetch1 contains two queues, requests and transfers, to handle the stages of translating the address of a line fetch (via the TLB) and accommodating the request/response of fetches to/from memory.Fetch requests from Fetch1 are pushed into the requests queue as newly allocated FetchRequest objects once they have been sent to the ITLB with a call to itb-&gt;translateTiming.A response from the TLB moves the request from the requests queue to the transfers queue. If there is more than one entry in each queue, it is possible to get a TLB response for a request which is not at the head of the requests queue. In that case, the TLB response is marked up as a state change to Translated in the request object, and advancing the request to transfers (and the memory system) is left to calls to Fetch1::stepQueues which is called in the cycle following the receipt of any event.Fetch1::tryToSendToTransfers is responsible for moving requests between the two queues and issuing requests to memory. 
Failed TLB lookups (prefetch aborts) continue to occupy space in thequeues until they are recovered at the head of transfers.Responses from memory change the request object state to Complete andFetch1::evaluatecan pick up response data, package it in the ForwardLineData object,and forward it to Fetch2’s input buffer.As space is always reserved in Fetch2::inputBuffer,setting the input buffer’s size to 1 results in non-prefetching behaviour.When a change of stream occurs, translated requests queue members and completedtransfers queue members can be unconditionally discarded to make way for newtransfers.Fetch2 stageFetch2 receives a line from Fetch1 into its input buffer. The data in the headline in that buffer is iterated over and separated into individual instructionswhich are packed into a vector of instructions which can be passed toDecode.Packing instructions can be aborted early if a fault is found in either theinput line as a whole or a decomposed instruction.Branch predictionFetch2 contains the branch prediction mechanism. This is a wrapper around the branch predictor interface provided by gem5 (cpu/pred/…).Branches are predicted for any control instructions found. If prediction isattempted for an instruction, the MinorDynInst::triedToPredictflag is set on that instruction.When a branch is predicted to take, the MinorDynInst::predictedTaken flag is set and MinorDynInst::predictedTarget is set to the predicted target PC value. The predicted branch instruction is then packed into Fetch2’s output vector, the prediction sequence number is incremented, and the branch is communicated to Fetch1.After signalling a prediction, Fetch2 will discard its input buffer contentsand will reject any new lines which have the same stream sequence number asthat branch but have a different prediction sequence number. 
This allowsfollowing sequentially fetched lines to be rejected without ignoring new linesgenerated by a change of stream indicated from a ‘real’ branch from Execute(which will have a new stream sequence number).The program counter value provided to Fetch2 by Fetch1 packets is only updatedwhen there is a change of stream. Fetch2::havePC indicates whether the PC willbe picked up from the next processed input line. Fetch2::havePC is necessary toallow line-wrapping instructions to be tracked through decode.Branches (and instructions predicted to branch) which are processed by Executewill generate BranchData (pipe_data.hh) data explaining theoutcome of the branch which is sent forwards to Fetch1 and Fetch2. Fetch1 usesthis data to change stream (and update its stream sequence number and addressfor new lines). Fetch2 uses it to update the branch predictor. Minor does notcommunicate branch data to the branch predictor for instructions which arediscarded on the way to commit.BranchData::BranchReason (pipe_data.hh) encodes the possiblebranch scenarios:            Branch enum val.      In Execute      Fetch1 reaction      Fetch2 reaction                  No Branch      (output bubble data)      -      -              CorrectlyPredictedBranch      Predicted, taken      -      Update BP as taken branch              UnpredictedBranch      Not predicted, taken and was taken      New stream      Update BP as taken branch              BadlyPredictedBranch      Predicted, not taken      New stream to restore to old Inst. source      Update BP as not taken branch              BadlyPredictedBranchTarget      Predicted, taken, but to a different target than predicted one      New stream      Update BTB to new target              SuspendThread      Hint to suspend fetch      Suspend fetch for this thread (branch to next inst. 
as wakeup fetch addr)      -              Interrupt      Interrupt detected      New stream      -      Decode StageDecode takes a vector of instructions from Fetch2 (via its input buffer) and decomposes those instructions into micro-ops (if necessary) and packs them into its output instruction vector.The parameter executeInputWidth sets the number of instructions which can be packed into the output per cycle. If the parameter decodeCycleInput is true, Decode can try to take instructions from more than one entry in its input buffer per cycle.Execute StageExecute provides all the instruction execution and memory access mechanisms. An instruction’s passage through Execute can take multiple cycles with its precise timing modelled by a functional unit pipeline FIFO.A vector of instructions (possibly including fault ‘instructions’) is provided to Execute by Decode and can be queued in the Execute input buffer before being issued. Setting the parameter executeCycleInput allows Execute to examine more than one input buffer entry (more than one instruction vector). The number of instructions in the input vector can be set with executeInputWidth and the depth of the input buffer can be set with the parameter executeInputBufferSize.Functional unitsThe Execute stage contains pipelines for each functional unit comprising the computational core of the CPU. Functional units are configured via the executeFuncUnits parameter. 
Each functional unit has a number of instructionclasses it supports, a stated delay between instruction issues, and a delayfrom instruction issue to (possible) commit and an optional timing annotationcapable of more complicated timing.Each active cycle, Execute::evaluateperforms this action:    Execute::evaluate:        push input to inputBuffer        setup references to input/output data slots and branch output slot        step D-cache interface queues (similar to Fetch1)        if interrupt posted:            take interrupt (signalling branch to Fetch1/Fetch2)        else            commit instructions            issue new instructions        advance functional unit pipelines        reactivate Execute if the unit is still active        commit the push to the inputBuffer if that data hasn't all been usedFunctional unit FIFOsFunctional units are implemented as SelfStallingPipelines (stage.hh). These areTimeBuffer FIFOswith two distinct ‘push’ and ‘pop’ wires. They respond toSelfStallingPipeline::advancein the same way as TimeBuffers unless there is data at the far, ‘pop’, end ofthe FIFO. A ‘stalled’ flag is provided for signalling stalling and to allow astall to be cleared. 
The intention is to provide a pipeline for each functional unit which will never advance an instruction out of that pipeline until it has been processed and the pipeline is explicitly unstalled.The actions ‘issue’, ‘commit’, and ‘advance’ act on the functional units.IssueIssuing instructions involves iterating over both the input buffer instructions and the heads of the functional units to try to issue instructions in order.The number of instructions which can be issued each cycle is limited by the parameter executeIssueLimit, how executeCycleInput is set, the availability of pipeline space and the policy used to choose a pipeline in which the instruction can be issued.At present, the only issue policy is strict round-robin visiting of each pipeline with the given instructions in sequence. For greater flexibility, better (and more specific) policies will need to be possible.Memory operation instructions traverse their functional units to perform their EA calculations. On ‘commit’, the ExecContext::initiateAcc execution phase is performed and any memory access is issued (via ExecContext::{read,write}Mem calling LSQ::pushRequest) to the LSQ.Note that faults are issued as if they are instructions and can (currently) be issued to any functional unit.Every issued instruction is also pushed into the Execute::inFlightInsts queue. Memory ref. instructions are also pushed into the Execute::inFUMemInsts queue.CommitInstructions are committed by examining the head of the Execute::inFlightInsts queue (which is decorated with the functional unit number to which the instruction was issued). 
Instructions which can then be found in their functional units are executed and popped from Execute::inFlightInsts.Memory operation instructions are committed into the memory queues (as described above) and exit their functional unit pipeline but are not popped from the Execute::inFlightInsts queue. The Execute::inFUMemInsts queue provides ordering to memory operations as they pass through the functional units (maintaining issue order). On entering the LSQ, instructions are popped from Execute::inFUMemInsts.If the parameter executeAllowEarlyMemoryIssue is set, memory operations can be sent from their FU to the LSQ before reaching the head of Execute::inFlightInsts but after their dependencies are met.MinorDynInst::instToWaitFor is marked up with the latest dependent instruction execSeqNum required to be committed before the memory operation can progress to the LSQ.Once a memory response is available (by testing the head of Execute::inFlightInsts against LSQ::findResponse), commit will process that response (ExecContext::completeAcc) and pop the instruction from Execute::inFlightInsts.Any branch, fault or interrupt will cause a stream sequence number change and signal a branch to Fetch1/Fetch2. Only instructions with the current stream sequence number will be issued and/or committed.AdvanceAll non-stalled pipelines are advanced and may, thereafter, become stalled. Potential activity in the next cycle is signalled if there are any instructions remaining in any pipeline.ScoreboardThe scoreboard (Scoreboard) is used to control instruction issue. It contains a count of the number of in-flight instructions which will write each general purpose CPU integer or float register. 
Instructions will only be issued when the scoreboard contains a count of 0 instructions which will write to any of the instruction’s source registers.Once an instruction is issued, the scoreboard count for each of the instruction’s destination registers will be incremented.The estimated delivery time of the instruction’s result is marked up in the scoreboard by adding the length of the issued-to FU to the current time. The timings parameter on each FU provides a list of additional rules for calculating the delivery time. These are documented in the parameter comments in MinorCPU.py.On commit (for memory operations, memory response commit), the scoreboard counters for an instruction’s destination registers will be decremented.Execute::inFlightInstsThe Execute::inFlightInsts queue will always contain all instructions in flight in Execute in the correct issue order. Execute::issue is the only process which will push an instruction into the queue. Execute::commit is the only process that can pop an instruction.LSQThe LSQ can support multiple outstanding transactions to memory in a number of conservative cases.There are three queues to contain requests: requests, transfers and the store buffer. The requests and transfers queues operate in a similar manner to the queues in Fetch1. The store buffer is used to decouple the delay of completing store operations from following loads.Requests are issued to the DTLB as their instructions leave their functional unit. At the head of requests, cacheable load requests can be sent to memory and on to the transfers queue. Cacheable stores will be passed to transfers unprocessed and progress through that queue maintaining order with other transactions.The conditions in LSQ::tryToSendToTransfers dictate when requests can be sent to memory.All uncacheable transactions, split transactions and locked transactions are processed in order at the head of requests. 
Additionally, store results residing in the store buffer can have their data forwarded to cacheable loads (removing the need to perform a read from memory) but no cacheable load can be issued to the transfers queue until that queue’s stores have drained into the store buffer.At the end of transfers, requests which are LSQ::LSQRequest::Complete (are faulting, are cacheable stores, or have been sent to memory and received a response) can be picked off by Execute and committed (ExecContext::completeAcc) and, for stores, sent to the store buffer.Barrier instructions do not prevent cacheable loads from progressing to memory but do cause a stream change which will discard that load. Stores will not be committed to the store buffer if they are in the shadow of the barrier but before the new instruction stream has arrived at Execute. As all other memory transactions are delayed at the end of the requests queue until they are at the head of Execute::inFlightInsts, they will be discarded by any barrier stream change.After commit, LSQ::BarrierDataRequest requests are inserted into the store buffer to track each barrier until all preceding memory transactions have drained from the store buffer. No further memory transactions will be issued from the ends of FUs until after the barrier has drained.DrainingDraining is mostly handled by the Execute stage. When initiated by calling MinorCPU::drain, Pipeline::evaluate checks the draining status of each unit each cycle and keeps the pipeline active until draining is complete. It is Pipeline that signals the completion of draining. Execute is triggered by MinorCPU::drain and starts stepping through its Execute::DrainState state machine, starting from state Execute::NotDraining, in this order:            State      Meaning              Execute::NotDraining      Not trying to drain, normal execution              Execute::DrainCurrentInst      Draining micro-ops to complete inst.              
Execute::DrainHaltFetch      Halt fetching instructions              Execute::DrainAllInsts      Discarding all instructions presented      When complete, a drained Execute unit will be in the Execute::DrainAllInsts state where it will continue to discard instructions but has no knowledge of the drained state of the rest of the model.Debug optionsThe model provides a number of debug flags which can be passed to gem5 with the --debug-flags option.The available flags are:            Debug flag      Unit which will generate debugging output                  Activity      Debug ActivityMonitor actions              Branch      Fetch2 and Execute branch prediction decisions              MinorCPU      CPU global actions such as wakeup/thread suspension              Decode      Decode              MinorExec      Execute behaviour              Fetch      Fetch1 and Fetch2              MinorInterrupt      Execute interrupt handling              MinorMem      Execute memory interactions              MinorScoreboard      Execute scoreboard activity              MinorTrace      Generate MinorTrace cyclic state trace output (see below)              MinorTiming      MinorTiming instruction timing modification operations      The group flag Minor enables all the flags beginning with Minor.MinorTrace and minorview.pyThe debug flag MinorTrace causes cycle-by-cycle state data to be printed which can then be processed and viewed by the minorview.py tool. 
This output is very verbose and so it is recommended it only be used for small examples.MinorTrace formatThere are three types of line outputted by MinorTrace:MinorTrace - Ticked unit cycle stateFor example: 110000: system.cpu.dcachePort: MinorTrace: state=MemoryRunning in_tlb_mem=0/0For each time step, the MinorTrace flag will cause one MinorTrace line to be printed for every named element in the model.MinorInst - summaries of instructions issued by DecodeFor example: 140000: system.cpu.execute: MinorInst: id=0/1.1/1/1.1 addr=0x5c \\                             inst=\"  mov r0, #0\" class=IntAluMinorInst lines are currently only generated for instructions which are committed.MinorLine - summaries of line fetches issued by Fetch1For example:  92000: system.cpu.icachePort: MinorLine: id=0/1.1/1 size=36 \\                                vaddr=0x5c paddr=0x5cminorview.pyMinorview (util/minorview.py) can be used to visualise the data created by MinorTrace.usage: minorview.py [-h] [--picture picture-file] [--prefix name]                   [--start-time time] [--end-time time] [--mini-views]                   event-fileMinor visualiserpositional arguments:  event-fileoptional arguments:  -h, --help            show this help message and exit  --picture picture-file                        markup file containing blob information (default:                        &lt;minorview-path&gt;/minor.pic)  --prefix name         name prefix in trace for CPU to be visualised                        (default: system.cpu)  --start-time time     time of first event to load from file  --end-time time       time of last event to load from file  --mini-views          show tiny views of the next 10 time stepsRaw debugging output can be passed to minorview.py as the event-file. 
It will pick out the MinorTrace lines; other lines in which units in the simulation are named (such as system.cpu.dcachePort in the above example) will appear as ‘comments’ when those units are clicked on in the visualiser.Clicking on a unit which contains instructions or lines will bring up a speech bubble giving extra information derived from the MinorInst/MinorLine lines.--start-time and --end-time allow only sections of debug files to be loaded.--prefix allows the name prefix of the CPU to be inspected to be supplied. This defaults to system.cpu.In the visualiser, the buttons Start, End, Back, Forward, Play and Stop can be used to control the displayed simulation time.The diagonally striped coloured blocks show the InstId of the instruction or line they represent. Note that lines in Fetch1 and f1ToF2.F only show the id fields of a line and that instructions in Fetch2, f2ToD, and decode.inputBuffer do not yet have execute sequence numbers. The T/S.P/L/F.E buttons can be used to toggle parts of InstId on and off to make it easier to understand the display. Useful combinations are:            Combination      Reason                  E      just show the final execute sequence number              F/E      show the instruction-related numbers              S/P      show just the stream-related numbers (watch the stream sequence change with branches and not change with predicted branches)              S/E      show instructions and their stream      The key to the right shows all the displayable colours (some of the colour choices are quite bad!):            Symbol      Meaning                  U      Unknown data              B      Blocked stage              -      Bubble              E      Empty queue slot              R      Reserved queue slot              F      Fault              r      Read (used as the leftmost stripe on data in the dcachePort)              w      Write “ “              0 to 9      last decimal digit of the corresponding data          ,---------------.    
     .--------------.  *U    | |=|-&gt;|=|-&gt;|=| |         ||=|||-&gt;||-&gt;|| |  *-  &lt;- Fetch queues/LSQ    `---------------'         `--------------'  *R    === ======                                  *w  &lt;- Activity/Stage activity                              ,--------------.  *1    ,--.      ,.      ,.      | ============ |  *3  &lt;- Scoreboard    |  |-\\[]-\\||-\\[]-\\||-\\[]-\\| ============ |  *5  &lt;- Execute::inFlightInsts    |  | :[] :||-/[]-/||-/[]-/| -. --------  |  *7    |  |-/[]-/||  ^   ||      |  | --------- |  *9    |  |      ||  |   ||      |  | ------    |[]-&gt;|  |    -&gt;||  |   ||      |  | ----      |    |  |&lt;-[]&lt;-||&lt;-+-&lt;-||&lt;-[]&lt;-|  | ------    |-&gt;[] &lt;- Execute to Fetch1,    '--`      `'  ^   `'      | -' ------    |        Fetch2 branch data             ---. |  ---.     `--------------'             ---' |  ---'       ^       ^                  |   ^         |       `------------ Execute  MinorBuffer ----' input       `-------------------- Execute input buffer                    bufferStages show the colours of the instructions currently beinggenerated/processed.Forward FIFOs between stages show the data being pushed into them at thecurrent tick (to the left), the data in transit, and the data available attheir outputs (to the right).The backwards FIFO between Fetch2 and Fetch1 shows branchprediction data.In general, all displayed data is correct at the end of a cycle’s activity atthe time indicated but before the inter-stage FIFOs are ticked. Each FIFO has,therefore an extra slot to show the asserted new input data, and all the datacurrently within the FIFO.Input buffers for each stage are shown below the corresponding stage and showthe contents of those buffers as horizontal strips. Strips marked as reserved(cyan by default) are reserved to be filled by the previous stage. 
An input buffer with all reserved or occupied slots will, therefore, block the previous stage from generating output.Fetch queues and LSQ show the lines/instructions in the queues of each interface and show the number of lines/instructions in TLB and memory in the two striped colours of the top of their frames.Inside Execute, the horizontal bars represent the individual FU pipelines. The vertical bar to the left is the input buffer and the bar to the right, the instructions committed this cycle.The background of Execute shows instructions which are being committed this cycle in their original FU pipeline positions.The strip at the top of the Execute block shows the current streamSeqNum that Execute is committing.A similar stripe at the top of Fetch1 shows that stage’s expected streamSeqNum and the stripe at the top of Fetch2 shows its issuing predictionSeqNum.The scoreboard shows the number of instructions in flight which will commit a result to the register in the position shown. The scoreboard contains slots for each integer and floating point register.The Execute::inFlightInsts queue shows all the instructions in flight in Execute with the oldest instruction (the next instruction to be committed) to the right.Stage activity shows the signalled activity (as E/1) for each stage (with CPU miscellaneous activity to the left).Activity shows a count of stage and pipe activity.minor.pic formatThe minor.pic file (src/minor/minor.pic) describes the layout of the model’s blocks on the visualiser. Its format is described in the supplied minor.pic file.",
        "url": "/documentation/general_docs/cpu_models/minor_cpu"
      }
      ,
    
      "documentation-general-docs-cpu-models-o3cpu": {
        "title": "Out of order CPU model",
        "content": "O3CPUTable of Contents  Pipeline stages  Execute-in-execute model  Template Policies  ISA independence  Interaction with ThreadContextThe O3CPU is our new detailed model for the v2.0 release. It is an out of order CPU model loosely based on the Alpha 21264. This page will give you a general overview of the O3CPU model, the pipeline stages and the pipeline resources. We have made efforts to keep the code well documented, so please browse the code for exact details on how each part of the O3CPU works.Pipeline stages      Fetch    Fetches instructions each cycle, selecting which thread to fetch from based on the policy selected. This stage is where the DynInst is first created. Also handles branch prediction.        Decode    Decodes instructions each cycle. Also handles early resolution of PC-relative unconditional branches.        Rename    Renames instructions using a physical register file with a free list. Will stall if there are not enough registers to rename to, or if back-end resources have filled up. Also handles any serializing instructions at this point by stalling them in rename until the back-end drains.        Issue/Execute/Writeback    Our simulator model handles both execute and writeback when the execute() function is called on an instruction, so we have combined these three stages into one stage. This stage (IEW) handles dispatching instructions to the instruction queue, telling the instruction queue to issue instruction, and executing and writing back instructions.        Commit    Commits instructions each cycle, handling any faults that the instructions may have caused. Also handles redirecting the front-end in the case of a branch misprediction.  Execute-in-execute modelFor the O3CPU, we’ve made efforts to make it highly timing accurate. In order to do this, we use a model that actually executes instructions at the execute stage of the pipeline. 
Most simulator models will execute instructions either at the beginning or end of the pipeline; SimpleScalar and our old detailed CPU model both execute instructions at the beginning of the pipeline and then pass it to a timing backend. This presents two potential problems: first, there is the potential for error in the timing backend that would not show up in program results. Second, by executing at the beginning of the pipeline, the instructions are all executed in order and out-of-order load interaction is lost. Our model is able to avoid these deficiencies and provide an accurate timing model.Template PoliciesThe O3CPU makes heavy use of template policies to obtain a level of polymorphism without having to use virtual functions. It uses template policies to pass in an “Impl” to almost all of the classes used within the O3CPU. This Impl has defined within it all of the important classes for the pipeline, such as the specific Fetch class, Decode class, specific DynInst types, the CPU class, etc. It allows any class that uses it as a template parameter to be able to obtain full type information of any of the classes defined within the Impl. By obtaining full type information, there is no need for the traditional virtual functions/base classes which are normally used to provide polymorphism. The main drawback is that the CPU must be entirely defined at compile time, and that the templated classes require manual instantiation. See src/cpu/o3/impl.hh  and src/cpu/o3/cpu_policy.hh for example Impl classes.ISA independenceThe O3CPU has been designed to try to separate code that is ISA dependent and code that is ISA independent. The pipeline stages and resources are all mainly ISA independent, as well as the lower level CPU code. The ISA dependent code implements ISA-specific functions. For example, the AlphaO3CPU implements Alpha-specific functions, such as hardware return from error interrupt (hwrei()) or reading the interrupt flags. 
The lower level CPU, the FullO3CPU, handles orchestrating all of the pipeline stages and handling other ISA-independent actions. We hope this separation makes it easier to implement future ISAs, as hopefully only the high level classes will have to be redefined.Interaction with ThreadContextThe ThreadContext provides interface for external objects to access thread state within the CPU. However, this is slightly complicated by the fact that the O3CPU is an out-of-order CPU. While it is well defined what the architectural state is at any given cycle, it is not well defined what happens if that architectural state is changed. Thus it is feasible to do reads to the ThreadContext without much effort, but doing writes to the ThreadContext and altering register state requires the CPU to flush the entire pipeline. This is because there may be in flight instructions that depend on the register that has been changed, and it is unclear if they should or should not view the register update. Thus accesses to the ThreadContext have the potential to cause slowdown in the CPU simulation.",
        "url": "/documentation/general_docs/cpu_models/O3CPU"
      }
      ,
    
      "documentation-general-docs-cpu-models-simplecpu": {
        "title": "Simple CPU Models",
        "content": "SimpleCPU: The SimpleCPU is a purely functional, in-order model that is suited for cases where a detailed model is not necessary. This can include warm-up periods, client systems that are driving a host, or simply testing that a program works. It has recently been rewritten to support the new memory system, and is now broken up into three classes: BaseSimpleCPU, AtomicSimpleCPU, and TimingSimpleCPU. BaseSimpleCPU: The BaseSimpleCPU serves several purposes: it holds architected state and stats common across the SimpleCPU models; it defines functions for checking for interrupts, setting up a fetch request, handling pre-execute setup, handling post-execute actions, and advancing the PC to the next instruction (these functions are also common across the SimpleCPU models); and it implements the ExecContext interface. The BaseSimpleCPU cannot be run on its own. You must use one of the classes that inherit from BaseSimpleCPU, either AtomicSimpleCPU or TimingSimpleCPU. AtomicSimpleCPU: The AtomicSimpleCPU is the version of SimpleCPU that uses atomic memory accesses (see Memory systems for details). It uses the latency estimates from the atomic accesses to estimate overall cache access time. The AtomicSimpleCPU is derived from BaseSimpleCPU, and implements functions to read and write memory, as well as tick(), which defines what happens every CPU cycle. It defines the port that is used to hook up to memory, and connects the CPU to the cache. TimingSimpleCPU: The TimingSimpleCPU is the version of SimpleCPU that uses timing memory accesses (see Memory systems for details). It stalls on cache accesses and waits for the memory system to respond before proceeding. Like the AtomicSimpleCPU, the TimingSimpleCPU is also derived from BaseSimpleCPU, and implements the same set of functions. It defines the port that is used to hook up to memory, and connects the CPU to the cache. 
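In a gem5 configuration script the two models are interchangeable. A minimal sketch follows; it assumes gem5's m5.objects Python package and a `system` object constructed elsewhere, and exact port names vary across gem5 versions, so treat this as illustrative rather than a complete script:

```python
# Sketch of a gem5 config fragment, not a complete runnable script: it
# assumes gem5's m5.objects package and a System object built elsewhere.
from m5.objects import AtomicSimpleCPU, TimingSimpleCPU

# Both classes inherit from BaseSimpleCPU, so either can be dropped in:
system.cpu = AtomicSimpleCPU()    # fast functional mode with atomic accesses
# system.cpu = TimingSimpleCPU()  # stalls on cache accesses instead
```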
It also defines the necessary functions for handling the response from memory to the accesses sent out.",
        "url": "/documentation/general_docs/cpu_models/SimpleCPU"
      }
      ,
    
      "documentation-general-docs-cpu-models-tracecpu": {
        "title": "Trace CPU Model",
        "content": "TraceCPU: Overview: The Trace CPU model plays back elastic traces, which are dependency- and timing-annotated traces generated by the Elastic Trace Probe attached to the O3 CPU model. The focus of the Trace CPU model is to enable fast and reasonably accurate memory-system (cache-hierarchy, interconnect and main memory) performance exploration, instead of using the detailed but slow O3 CPU model. The traces have been developed for single-threaded benchmarks simulating in both SE and FS mode. They have been correlated for 15 memory-sensitive SPEC 2006 benchmarks and a handful of HPC proxy apps by interfacing the Trace CPU with the classic memory system and varying cache design parameters and DRAM memory type. In general, elastic traces can be ported to other simulation environments. Publication: “Exploring System Performance using Elastic Traces: Fast, Accurate and Portable”, Radhika Jagtap, Stephan Diestelhorst, Andreas Hansson, Matthias Jung and Norbert Wehn, SAMOS 2016. Trace generation and replay methodology: Elastic Trace Generation: The Elastic Trace Probe Listener listens to Probe Points inserted in the O3 CPU pipeline stages. It monitors each instruction and creates a dependency graph by recording data Read-After-Write dependencies and order dependencies between loads and stores. 
It writes the instruction fetch request trace and the elastic data memory request trace as two separate files, as shown below. Trace file formats: The elastic data memory trace and fetch request trace are both encoded using Google protobuf. Elastic Trace fields in protobuf format: required uint64 seq_num: instruction number used as an id for tracking dependencies; required RecordType type: RecordType enum with values INVALID, LOAD, STORE, COMP; optional uint64 p_addr: physical memory address if the instruction is a load/store; optional uint32 size: size in bytes of the data if the instruction is a load/store; optional uint32 flags: flags or attributes of the access, e.g. Uncacheable; required uint64 rob_dep: past instruction number on which there is an order (ROB) dependency; required uint64 comp_delay: execution delay between the completion of the last dependency and the execution of this instruction; repeated uint64 reg_dep: past instruction numbers on which there are RAW data dependencies; optional uint32 weight: accounts for committed instructions that were filtered out; optional uint64 pc: instruction address, i.e. the program counter; optional uint64 v_addr: virtual memory address if the instruction is a load/store; optional uint32 asid: address space id. A decode script in Python is available at util/decode_inst_dep_trace.py that outputs the trace in ASCII format. Example of a trace in ASCII: 1,356521,COMP,8500::2,35656,1,COMP,0:,1:3,35660,1,LOAD,1748752,4,74,500:,2:4,35660,1,COMP,0:,3:5,35664,1,COMP,3000::,46,35666,1,STORE,1748752,4,74,1000:,3:,4,57,35666,1,COMP,3000::,48,35670,1,STORE,1748748,4,74,0:,6,3:,79,35670,1,COMP,500::,7. Each record in the instruction fetch trace has the following fields. 
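The elastic-trace fields above can be summarized as a plain data record. The real records are protobuf messages generated by gem5; the hypothetical class below only mirrors the documented schema, as a convenience for scripting against decoded traces:

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative mirror of the elastic-trace record fields listed above.
# Not part of gem5: the real records are protobuf messages.
@dataclass
class ElasticTraceRecord:
    seq_num: int                      # instruction number, used as a dependency id
    type: str                         # INVALID, LOAD, STORE or COMP
    rob_dep: int = 0                  # instruction this one has an order (ROB) dependency on
    comp_delay: int = 0               # delay after the last dependency completes
    reg_dep: Optional[List[int]] = None  # instructions this one has RAW dependencies on
    p_addr: Optional[int] = None      # physical address, loads/stores only
    v_addr: Optional[int] = None      # virtual address, loads/stores only
    size: Optional[int] = None        # access size in bytes, loads/stores only
    flags: Optional[int] = None       # access attributes, e.g. uncacheable
    weight: int = 1                   # accounts for filtered-out committed instructions
    pc: Optional[int] = None          # program counter
    asid: Optional[int] = None        # address space id

    def __post_init__(self):
        if self.reg_dep is None:
            self.reg_dep = []
```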
Required uint64 tick: timestamp of the access; required uint32 cmd: Read or Write (in this case always Read); required uint64 addr: physical memory address; required uint32 size: size in bytes of the data; optional uint32 flags: flags or attributes of the access; optional uint64 pkt_id: id of the access; optional uint64 pc: instruction address, i.e. the program counter. The decode script in Python at util/decode_packet_trace.py can be used to output the trace in ASCII format. Compile dependencies: you need to install Google protocol buffers, as the traces are recorded using them: sudo apt-get install protobuf-compiler; sudo apt-get install libprotobuf-dev. Scripts and options: SE mode: build/ARM/gem5.opt [gem5.opt options] -d bzip_10Minsts configs/example/se.py [se.py options] --cpu-type=arm_detailed --caches --cmd=$M5_PATH/binaries/arm_arm/linux/bzip2 --options=$M5_PATH/data/bzip2/lgred/input/input.source -I 10000000 --elastic-trace-en --data-trace-file=deptrace.proto.gz --inst-trace-file=fetchtrace.proto.gz --mem-type=SimpleMemory. FS mode: create a checkpoint for your region of interest, then resume from the checkpoint with the O3 CPU model and tracing enabled: build/ARM/gem5.opt --outdir=m5out/bbench ./configs/example/fs.py [fs.py options] --benchmark bbench-ics, followed by build/ARM/gem5.opt --outdir=m5out/bbench/capture_10M ./configs/example/fs.py [fs.py options] --cpu-type=arm_detailed --caches --elastic-trace-en --data-trace-file=deptrace.proto.gz --inst-trace-file=fetchtrace.proto.gz --mem-type=SimpleMemory --checkpoint-dir=m5out/bbench -r 0 --benchmark bbench-ics -I 10000000. Replay with Trace CPU: The execution trace generated above is then consumed by the Trace CPU, as illustrated below. The Trace CPU model inherits from the Base CPU and interfaces with data and instruction L1 caches. 
A diagram of the Trace CPU explaining the major logic and control blocks is shown below. Scripts and options: A trace replay script in the examples folder can be used to play back traces generated in SE and FS mode: build/ARM/gem5.opt [gem5.opt options] -d bzip_10Minsts_replay configs/example/etrace_replay.py [options] --cpu-type=trace --caches --data-trace-file=bzip_10Minsts/deptrace.proto.gz --inst-trace-file=bzip_10Minsts/fetchtrace.proto.gz --mem-size=4GB",
        "url": "/documentation/general_docs/cpu_models/TraceCPU"
      }
      ,
    
      "documentation-general-docs-cpu-models-visualization": {
        "title": "Visualization",
        "content": "Visualization: This page contains information about different types of information visualization that are integrated with, or can be used with, gem5. O3 Pipeline Viewer: The o3 pipeline viewer is a text-based viewer of the out-of-order CPU pipeline. It shows when instructions are fetched (f), decoded (d), renamed (n), dispatched (p), issued (i), completed (c), and retired (r). It is very useful for understanding where the pipeline is stalling or squashing in a reasonably small sequence of code. Next to the colorized view that wraps around are the tick at which the current instruction retired, the pc of that instruction, its disassembly, and the o3 sequence number for that instruction. To generate output like the above you first need to run an experiment with the o3 cpu: ./build/ARM/gem5.opt --debug-flags=O3PipeView --debug-start=&lt;first tick of interest&gt; --debug-file=trace.out configs/example/se.py --cpu-type=detailed --caches -c &lt;path to binary&gt; -m &lt;last cycle of interest&gt;. Then you can run the script to generate a trace similar to the above (500 is the number of ticks per clock (2GHz) in this case): ./util/o3-pipeview.py -c 500 -o pipeview.out --color m5out/trace.out. You can view the output in color by piping the file through less: less -r pipeview.out. Note that if CYCLE_TIME (-c) is wrong, the right square brackets in the output may not align to the same column; the default value of CYCLE_TIME is 1000, so be careful. The script has some additional integrated help (type ./util/o3-pipeview.py --help). Minor Viewer: The new page on the minor viewer is yet to be made; refer to the old page for documentation.",
        "url": "/documentation/general_docs/cpu_models/visualization/"
      }
      ,
    
      "documentation-general-docs-debugging-and-testing-debugging-debugger-based-debugging": {
        "title": "Debugger-based Debugging",
        "content": "Debugger-based Debugging: If traces alone are not sufficient, you’ll need to inspect what gem5 is doing in detail using a debugger (e.g., gdb). You definitely want to use the gem5.debug binary if you reach this point. Ideally, looking at traces should at least allow you to narrow down the range of cycles in which you think something is going wrong. The fastest way to reach that point is to use a DebugEvent, which goes on gem5’s event queue and forces entry into the debugger when the specified cycle is reached by sending the process a SIGTRAP signal. You’ll need to start gem5 under the debugger or have the debugger attached to the gem5 process for this to work. You can create one or more DebugEvents when you invoke gem5 using the --debug-break=100 parameter. You can also create new DebugEvents from the debugger prompt using the schedBreak() function. The following example session illustrates both of these approaches: % gdb m5/build/ALPHA/gem5.debug GNU gdb 6.1 Copyright 2002 Free Software Foundation, Inc. [...] (gdb) run --debug-break=2000 configs/run.py Starting program: /z/stever/bk/m5/build/ALPHA/gem5.debug --debug-break=2000 configs/run.py M5 Simulator System [...] warn: Entering event queue @ 0. Starting simulation... Program received signal SIGTRAP, Trace/breakpoint trap. 0xffffe002 in ?? () (gdb) p curTick $1 = 2000 (gdb) c Continuing. (gdb) call schedBreak(3000) (gdb) c Continuing. Program received signal SIGTRAP, Trace/breakpoint trap. 0xffffe002 in ?? () (gdb) p _curTick $3 = 3000 (gdb) gem5 includes a number of functions specifically intended to be called from the debugger (e.g., using the gdb call command, as in the schedBreak() example above). Many of these are “dump” functions which display internal simulator data structures. For example, eventq_dump() displays the events scheduled on the main event queue. Most of the other dump functions are associated with particular objects, such as the instruction queue and the ROB in the detailed CPU model. 
These include: schedBreak(&lt;tick&gt;): schedule a SIGTRAP to occur at &lt;tick&gt;; setDebugFlag(\"&lt;flag&gt;\"): enable a debug flag from the debugger; clearDebugFlag(\"&lt;flag&gt;\"): disable a debug flag from the debugger; eventqDump(): print out all events on the event queue; takeCheckpoint(&lt;tick&gt;): create a checkpoint at tick &lt;tick&gt;; SimObject::find(\"system.qualified.name\"): return a pointer to the object with the specified name. Debugging Python with PDB: You can debug configuration scripts with the Python debugger (PDB) just as you would other Python scripts. You can enter PDB before your configuration script is executed by giving the --pdb argument to the gem5 binary. Another approach is to put the following line in your configuration script (e.g., fs.py or se.py) wherever you would like to enter the debugger: import pdb; pdb.set_trace(). Note that the Python files under src are compiled into the gem5 binary, so you must rebuild the binary if you add this line (or make other changes) in these files. Alternatively, you can set the M5_OVERRIDE_PY_SOURCE environment variable to “true” (see src/python/importer.py). See the official PDB documentation for more details on using PDB. Using Valgrind: Valgrind is a dynamic analysis tool used (primarily) to profile a target application and detect the source of run-time errors, as well as to detect memory leaks. For Valgrind to function, the target gem5 binary must have been compiled to include debugging information. Therefore, the gem5.debug binaries must be used. To run a check using Valgrind, execute the following: valgrind --leak-check=yes --suppressions=util/valgrind-suppressions build/{Target ISA}/gem5.debug {gem5 arguments}. The above will run gem5 and do two things: give a stack trace if a run-time error is received, and 
give information about potential memory leaks. The util/valgrind-suppressions file contains a set of warnings that are reported by Valgrind but are not considered a problem by gem5 developers. Valgrind is known to produce false positives; util/valgrind-suppressions should be updated as these false positives are identified. More information about suppressing Valgrind warnings can be found in the Valgrind User Manual. If a run-time error is received, Valgrind will produce output which looks like the following (taken from the Valgrind Quick Start Guide): ==19182== Invalid write of size 4 ==19182== at 0x804838F: f (example.c:6) ==19182== by 0x80483AB: main (example.c:11). In this output: 19182 is the process ID; Invalid write is the kind of error; below the error is the stack trace. In this example the error occurred at line 6 in example.c. This line is contained within function f, which was called by the main function at line 11 (also in example.c). 0x804838F is the code address, which is usually not important. Valgrind may also return warnings about memory leaks, such as: ==19182== 40 bytes in 1 blocks are definitely lost in loss record 1 of 1 ==19182== at 0x1B8FF5CD: malloc (vg_replace_malloc.c:130) ==19182== by 0x8048385: f (a.c:5) ==19182== by 0x80483AB: main (a.c:11). The stack trace will tell you where the memory leak occurred. If Valgrind states that a block of memory was “definitely lost”, then there is a memory leak. If Valgrind states that a block was “probably lost”, Valgrind has reason to believe memory is leaking, but perhaps not (this normally happens when the code is doing something complex with pointers). If Valgrind returns output in which a root cause is difficult to determine, try running Valgrind with --track-origins=yes. This will increase execution time but provide more information. The Valgrind User Manual should be consulted for more advanced features.",
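To make the structure of that report concrete, here is a small hypothetical helper (not part of gem5 or Valgrind) that pulls the process id, the error kind, and the stack frames out of a report in the format shown above:

```python
import re

# ==PID== lines: the first gives the error kind, the rest give stack frames.
HEADER_RE = re.compile(r'==(\d+)==\s+(.*)')
FRAME_RE = re.compile(r'==\d+==\s+(?:at|by)\s+0x[0-9A-Fa-f]+:\s+(\S+)\s+\(([^)]+)\)')

def parse_valgrind_error(report):
    """Return (pid, error kind, [(function, file:line), ...])."""
    lines = report.strip().splitlines()
    pid, kind = HEADER_RE.match(lines[0]).groups()
    frames = [FRAME_RE.match(line).groups()
              for line in lines[1:] if FRAME_RE.match(line)]
    return int(pid), kind, frames
```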
        "url": "/documentation/general_docs/debugging_and_testing/debugging/debugger_based_debugging"
      }
      ,
    
      "documentation-general-docs-debugging-and-testing-debugging-debugging-simulated-code": {
        "title": "Debugging Simulated Code",
        "content": "Debugging Simulated Codegem5 has built-in support for gdb’s remote debugger interface. If you areinterested in monitoring what the code on the simulated machine is doing(the kernel, in FS mode, or program, in SE mode) you can fire up gdb on thehost platform and have it talk to the simulated gem5 system as if it were areal machine/process (only better, since gem5 executions are deterministic andgem5’s remote debugger interface is guaranteed not to perturb execution on thesimulated system).If you are simulating a system that uses a different ISA from the host you’rerunning on, you’ll need a cross-architecture gdb; see below for instructions.If you are simulating the native ISA of your host, you can very likely just usethe pre-installed native gdb.When gem5 is run, each CPU listens for a remote debugging connection on a TCPport. The first port allocated is generally 7000, though if a port is in use,the next port will be tried.To attach the remote debugger, it’s necessary to have a copy of the kernel andof the source. Also to view the kernel’s call stack, you must make sure Linuxwas built with the necessary debug configuration parameters enabled. To run theremote debugger, do the following:ziff% gdb-linux-alpha arch/alpha/boot/vmlinuxGNU gdbCopyright 2002 Free Software Foundation, Inc.GDB is free software, covered by the GNU General Public License, and you arewelcome to change it and/or distribute copies of it under certain conditions.Type \"show copying\" to see the conditions.There is absolutely no warranty for GDB.  
Type \"show warranty\" for details.This GDB was configured as \"--host=i686-pc-linux-gnu --target=alpha-linux\"...(no debugging symbols found)...(gdb) set remote Z-packet on                [ This can be put in .gdbinit ](gdb) target remote ziff:7000Remote debugging using ziff:70000xfffffc0000496844 in strcasecmp (a=0xfffffc0000b13a80 \"\", b=0x0)    at arch/alpha/lib/strcasecmp.c:2323              } while (ca == cb &amp;&amp; ca != '\\0');(gdb)The gem5 simulator is already running and the target remote command connects tothe already running simulator and stops it in the middle of execution. You canset breakpoints and use the debugger to debug the kernel. It is also possibleto use the remote debugger to debug console code and palcode. Setting that upis similar, but a how to will be left for future work.If you’re using both the remote debugger and the debugger on the simulator, itis possible to trigger the remote debugger from the main debugger by doing acall debugger(). Before you do this you’ll need to figure out what CPU (thecpu id) you want to debug and set current_debugger to that cpuid. If youonly have one cpu, then it will be cpuid 0, however if there are multiplecpus you will need to match the cpu id with the corresponding port number forthe remote gdb session. 
For example, using the following sample output fromgem5, calling the kernel debugger for cpu 3 requires the kernel debugger to belistening on port 7001.%./build/ALPHA/gem5.debug configs/example/fs.py...making dual systemGlobal frequency set at 1000000000000 ticks per secondListening for testsys connection on port 3456Listening for drivesys connection on port 34570: testsys.remote_gdb.listener: listening for remote gdb #0 on port 70020: testsys.remote_gdb.listener: listening for remote gdb #1 on port 70030: testsys.remote_gdb.listener: listening for remote gdb #2 on port 70000: testsys.remote_gdb.listener: listening for remote gdb #3 on port 70010: drivesys.remote_gdb.listener: listening for remote gdb #4 on port 70040: drivesys.remote_gdb.listener: listening for remote gdb #5 on port 70050: drivesys.remote_gdb.listener: listening for remote gdb #6 on port 70060: drivesys.remote_gdb.listener: listening for remote gdb #7 on port 7007Getting a cross-architecture gdbTo use a remote debugger with gem5, the most important part is that you havegdb compiled to work with the target system you’re simulating (e.g.alpha-linux if simulating an Alpha target, arm-linux if simulating anARM target, etc). It is possible to compile an non-native architecture gdb onan x86 machine for example. All that must be done is add the --target=option to configure when you compile gdb. You may also get pre-compileddebuggers with cross compilers. See Download for links to some cross compilersthat include debuggers.% wget http://ftp.gnu.org/gnu/gdb/gdb-6.3.tar.gz--08:05:33--  http://ftp.gnu.org/gnu/gdb/gdb-6.3.tar.gz           =&gt; `gdb-6.3.tar.gz'Resolving ftp.gnu.org... done.Connecting to ftp.gnu.org[199.232.41.7]:80... connected.HTTP request sent, awaiting response... 
200 OKLength: 17,374,476 [application/x-tar]100%[====================================&gt;] 17,374,476   216.57K/s    ETA 00:0008:06:52 (216.57 KB/s) - `gdb-6.3.tar.gz' saved [17374476/17374476]% tar xfz gdb-6.3.tar.gz% cd gdb-6.3% ./configure --target=alpha-linux&lt;configure output....&gt;% make&lt;make output...this may take a while&gt;The end result is gdb/gdb which will work for remote debugging.Target-specific instructionsARM TargetIf you’re planning to debug an ARM kernel you’ll need a reasonably new versionof gdb (7.1 or greater). Additionally, you’ll have to manually specify thetspecs like this (port number may be different). The tspec file isavailable in the gdb source code:set remote Z-packet onset tdesc filename path/to/features/arm-with-neon.xmlsymbol-file &lt;path to vmlinux used for gem5&gt;target remote &lt;ip addr of host running gem5 or if local host 127.0.0.1&gt;:7000",
        "url": "/documentation/general_docs/debugging_and_testing/debugging/debugging_simulated_code"
      }
      ,
    
      "documentation-general-docs-debugging-and-testing-debugging-trace-based-debugging": {
        "title": "Trace-based Debugging",
        "content": "Trace-based Debugging: Introduction: The simplest method of debugging is to have gem5 print out traces of what it’s doing. The simulator contains many DPRINTF statements that print trace messages describing potentially interesting events. Each DPRINTF is associated with a debug flag (e.g., Bus, Cache, Ethernet, Disk, etc.). To turn on the messages for a particular flag, use the --debug-flags command line argument. Multiple flags can be specified by giving a comma-separated list, e.g.: build/ALPHA/gem5.opt --debug-flags=Bus,Cache configs/examples/fs.py would turn on trace messages from the bus and cache models. (Similarly, --debug-flags=Exec,-ExecTicks turns on a group of debug flags related to instruction execution but leaves out tick (timing) information, which is useful if you want to compare execution between two runs where the same instructions execute but at different rates.) Note that the gem5.fast binary does not support tracing; part of what makes it faster than gem5.opt is that the DPRINTF code is compiled out. The --debug-flags command line option should come after the gem5 executable but before the simulation script. This is because debug flags are handled by gem5 itself, and whether command line options come before or after the simulation script determines whether they are for gem5 or for the script. Debugging Options: --debug-break=TIME[,TIME]: tick(s) at which to create a breakpoint; --debug-help: print help on debug flags; --debug-flags=FLAG[,FLAG]: set the flags for debug output (-FLAG disables a flag); --debug-start=TIME: start debug output at TIME (must be in ticks); --debug-file=FILE: set the output file for debug output [default: cout]; --debug-ignore=EXPR: ignore EXPR sim objects. The complete list of debug/trace flags can be seen by running gem5 with the --debug-help option. If you find that events of interest are not being traced, feel free to add DPRINTFs yourself. 
You can add new debug flags simply by adding a DebugFlag() declaration to any SConscript file (preferably the one nearest where you are using the new flag). If you use a debug flag in a C++ source file, you need to include the header file debug/&lt;name of debug flag&gt;.hh in that file. For more complex bugs, the trace can be useful simply for identifying points in the simulation where more in-depth investigation is needed. The --debug-break option lets you re-run your simulation under a debugger and stop on a particular tick identified from the trace. You can also schedule breakpoints and enable or disable debug flags from within the debugger itself. See the page on Debugger Based Debugging for more information. The Exec debug flag: The Exec compound debug flag is very useful because it turns on instruction tracing in gem5. It makes the simulator print a disassembled version of each instruction as it finishes executing, along with other useful information like the time, the pc, the address if it was a memory instruction, etc. These individual pieces of information can be turned on and off with the base debug flags that Exec controls. For example, you can disable the use of function symbol names in place of absolute PC addresses (if they’re available) by turning off the ExecSymbol flag (e.g., --debug-flags=Exec,-ExecSymbol). If some supposedly innocuous change has caused gem5 to stop working correctly, you can compare trace outputs from before and after the change using the tracediff script in the src/util directory. Comments in the script describe how to use it. Reducing trace file size: Trace files can become very large very quickly, but they also compress very well (e.g. by about 90%). If you’d like to make gem5 output a compressed trace, just add a .gz extension to the output file name. For example, --debug-file=trace.out will produce an uncompressed file as normal, but --debug-file=trace.out.gz will produce a gzip-compressed file. You can use the zcat program and pipes to process the output. 
The editor vim can also uncompress gzip-compressed files in memory. The tracediff and rundiff utilities: The tracediff and rundiff utilities allow simple diffing of two streams of trace data from gem5 to find any differences. This is very handy for debugging why regression tests fail, figuring out why your minor code change seems to cause some unrelated execution problem, or comparing the execution of CPU models. Both utilities are found in the util directory. rundiff is a simple diff-like program. Unlike regular diff, this script does not read in its entire input before comparing, so it can be used on lengthy outputs piped from other programs (e.g., gem5 traces). tracediff is a front end for rundiff that provides an easy way to run two similar copies of gem5 and diff their outputs. It takes a common gem5 command line with embedded alternatives and executes the two alternative commands in separate subdirectories with output piped to rundiff. Script arguments are handled uniformly as follows: If the argument does not contain a ‘|’ character, it is appended to both command lines. If the argument has a ‘|’ character in it, the text on either side of the ‘|’ is appended to the respective command lines. Note that you’ll have to quote the argument or escape the ‘|’ with a backslash so that the shell doesn’t think you’re doing a pipe. Arguments with ‘#’ characters are split at those characters, processed for alternatives (‘|’s) as independent terms, then pasted back into a single argument (without the ‘#’s). 
(This is loosely inspired by the C preprocessor’s ‘##’ token-pasting operator.) In other words, the arguments should look like the command line you want to run, with ‘|’ used to list the alternatives for the parts that you want to differ between the two runs. For example: % tracediff gem5.opt --opt1 '--opt2|--opt3' --opt4 would compare these two runs: gem5.opt --opt1 --opt2 --opt4 and gem5.opt --opt1 --opt3 --opt4. % tracediff 'path1|path2#/gem5.opt' --opt1 --opt2 would compare these two runs: path1/gem5.opt --opt1 --opt2 and path2/gem5.opt --opt1 --opt2. If you want to add arguments to one run only, just put a ‘|’ in with text only on one side (--onlyOn1|). You can do this with multiple arguments together too (|-a -b -c adds three args to the second run only). The -n argument to tracediff allows you to preview the two generated command lines without running them. For tracediff to be useful, some debug flags must be enabled. The most common flags to use with tracediff are --debug-flags=Exec,-ExecTicks, which removes the timestamp from each trace line, making the traces suitable for diffing when slight timing variations are present. Tracediff is also useful for comparing CPU models when one fails and the other doesn’t. In this case it’s best to create a checkpoint before the problem occurs (this can be done by just creating a bunch of checkpoints and finding one that fails). If the failure occurs in kernel code, use the -ExecUser debug flag; on the other hand, if it occurs in user code, try the -ExecKernel debug flag to isolate user code in the trace. You can then compare the traces and see where the execution diverges. Comparing traces across machines: Sometimes gem5 executions differ inexplicably across different environments, and you’d like to use rundiff to help pinpoint where they diverge. 
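The ‘|’ and ‘#’ rules above amount to a small piece of string processing. The following is an illustrative re-implementation of just that splitting logic (the real tracediff script in util/ also runs the commands and pipes their output to rundiff):

```python
def split_alternatives(args):
    """Expand tracediff-style arguments into the two command lines they imply.

    '|' separates the per-run alternatives within a term; '#' splits an
    argument into terms processed independently and then pasted back together.
    """
    first, second = [], []
    for arg in args:
        terms = arg.split('#')
        left = ''.join(t.split('|')[0] for t in terms)
        right = ''.join(t.split('|', 1)[1] if '|' in t else t for t in terms)
        # Text on only one side of a '|' adds the argument to one run only.
        if left:
            first.append(left)
        if right:
            second.append(right)
    return ' '.join(first), ' '.join(second)
```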
Rather thantry and reproduce those environments on the same machine, you can use netcatwith rundiff to compare traces from gem5 instances running on separate systemsacross the network.First, start rundiff running on one machine, configured to compare the traceoutput from a local instance of gem5 with the output of a netcat “server”.Since the network is likely to be the bottleneck, we’ll compress the tracegoing across netcat, which means we need to uncompress it as it arrives. Forexample (choosing port number 33335 arbitrarily):util/rundiff 'gem5.opt --debug-flag=Exec &lt;gem5 args&gt; |' 'nc -d -l 33335 | gunzip -c |' &gt;&amp; tracediff.out &amp;Now go to the second machine, start a copy of gem5 there, and ship itscompressed trace output to the netcat instance running on the first machine.For example:gem5.opt --debug-flag=Exec &lt;gem5 args&gt; |&amp; gzip -c |&amp; nc &lt;hostname&gt; 33335Internal Exec tracing implementation (InstTracer)The “Trace-based debugging” section above talked about how to use the Exectrace flag to print information about each instruction as it completes. Thatfunctionality is actually implemented by an InstTracer object which collectsinformation about instructions as the execute. These objects can be swappedout, and different objects can do different things with the information theycollect. For instance, the IntelTrace object prints out a trace in adifferent format which is compatible with an external tool. The objects canalso do more than just print a trace. NativeTrace objects send informationabout architectural state over a socket to the statetrace tool (describedbelow) instruction by instruction to validate execution. InstTracer objectsare SimObjects which are assigned to the tracer parameter of each CPU. 
If you want to install a different tracer, just assign it to that parameter on the CPU of interest. When writing your own InstTracer, you’ll write at least two different classes, one which inherits from InstTracer and one that inherits from InstRecord. The InstTracer class’s main responsibility is to generate InstRecord objects which are associated with a particular instruction. By subclassing InstTracer, you’ll be able to return your own specialized version of InstRecord, which is the class that really does most of the work. The InstRecord class has a number of fields which hold information about the history of an instruction. For instance, InstRecord records the instruction’s PC, what address it used if it accessed memory, a “data” value which it produced (multiple data values aren’t handled), etc. The InstRecord also has a pointer to a ThreadContext which can be used to read out architectural state. When an instruction is finished executing, the InstRecord’s dump() virtual function is called to process the record. For the default InstTracer, this is where the instruction’s assembly language form, etc., is printed, which is the output you see when you turn on Exec. For NativeTrace, this is where architectural state is gathered up to send to statetrace. Comparing traces with a real machine: The statetrace tool runs alongside gem5 and compares execution of a workload on a real machine with execution in gem5. In the simulator and the real system, the workload is allowed to run one instruction at a time. After each instruction, architectural state is collected and compared, and any differences are reported. 
It can be tricky to get it set up and producing useful results (described below), but it’s an extremely valuable tool for debugging because it tends to quickly pinpoint exactly where a problem is coming from, likely saving many hours of painful debugging per bug. Native Trace: In gem5, a NativeTrace InstTracer object (described above) needs to be installed on the CPU that will run the workload of interest. When execution starts, the tracer will wait for the statetrace utility to connect to it. Then, after each instruction executes, it uses the ThreadContext pointer in the InstRecord object to gather architectural state from the currently running process. It also reads in architectural state gathered by statetrace through the connection they established. The two versions of state are compared, and any meaningful differences are reported. The exact makeup of the state and how it should be compared is very dependent on the ISA, so each ISA defines its own version of NativeTrace. These specialized classes can handle things like expected differences when registers may become undefined, or situations where execution skips ahead for one reason or another. statetrace utility: The statetrace utility is found in the util directory and is responsible for running the workload on the real machine. It uses the ptrace mechanism provided by the Linux kernel to single step the target process and to access its state. It uses scons, but is independent of the scons setup used by the rest of gem5. To build a version of statetrace suitable for a particular ISA, use the build/${ARCH}/statetrace target, where ${ARCH} is replaced by the ISA of interest. Currently recognized values for ${ARCH} are amd64, arm, i686, and sparc. You can override the compiler used for any ISA using the CXX scons argument, and the compiler used for a particular ISA with ${ARCH}CXX. 
For instance, to build an arm version of statetrace, you could run: cd util/statetrace scons ARMCXX=arm-softfloat-linux-gnueabi-g++ build/arm/statetrace statetrace accepts four flags: -h to print the help, --host to specify what ip and port gem5 is listening at, -i to print out what’s on the initial stack frame, and -nt to disable tracing. -nt is typically used with -i to get information about a process’s initial stack without running it. The end of the command line options is marked with two dashes. After that, put the command line you want statetrace to run. The exact text of the program name and arguments matters because these will be passed to the process on its stack. Longer values take up more room on the stack, which displaces other items to different addresses and clogs statetrace up with lots of unimportant differences. For instance, if you need to run a program found in your home directory in a gem5 subdirectory and you run this command: statetrace -- ~/gem5/my_benchmark arg1 arg2 You must also override arg0 in gem5 to be ~/gem5/my_benchmark. Tuning: statetrace is a very sensitive system, and any minor difference between simulated execution and real execution could produce lots and lots of spurious differences. In order to get useful information from statetrace you’ll need to adjust the real system and gem5 so that everything lines up perfectly. I normally create a patch which has all the modifications I’ve made to gem5 for statetrace. Then I can easily remove them or reapply them as I find and fix problems. Mercurial queues is useful for managing that patch and patches for my fixes. The following is an incomplete list of the differences you may have to correct. Address randomization: To improve security, Linux will randomize the address space of processes, moving around their stack and heap areas. This makes it harder for an attacker to predict what memory will look like, but it also thoroughly defeats statetrace. To disable it, echo 0 into /proc/sys/kernel/randomize_va_space. 
You’ll almost certainly need root permissions to do that. argv values: Be sure to use exactly the same text for each argument to your program in gem5 and on the real system. This includes arg0, the program name. File block size: Glibc uses the block size associated with a file to decide how to buffer it. Different behavior will throw off execution and prevent statetrace from working. You can change the block size gem5 reports in the convertStatBuf and convertStat64Buf functions in src/sim/syscall_emul.hh. Initial stack contents: Depending on your version of Linux, the contents of the initial stack may be different. You can use the -i and -nt options to print out the content of the initial stack on the real machine. statetrace attempts to interpret the initial stack so you can more easily see what’s on it. You’ll need to adjust how gem5 sets up the stack to match your real system. This code is typically in a file called process.cc in the appropriate arch directory. gem5’s code has been painstakingly constructed so that it sets up a stack as identically to Linux as possible, but the underlying mechanism may change over time. Also, Linux puts a collection of auxiliary vectors on the initial stack. These are type, value pairs which let the kernel provide extra information to the process as it starts. From time to time Linux introduces a new type of auxiliary vector and adds it to the stack. You may need to dig into the Linux source and emulate any new entries. Caveats: Because statetrace is very sensitive to any changes in execution, it can’t be used with programs that don’t behave in very predictable ways. For instance, if a program reads in a random value from /dev/random and uses that in a calculation (or worse, in control flow), then that program can’t be used. Less obviously, if the program relies on the system time, which is unpredictable, it also can’t be used. Generally speaking, many benchmarks try to be very deterministic so that they can be used to generate reproducible data. 
That makes them work well with statetrace. Statetrace can’t be used at the operating system level for at least three main reasons. First, no system is implemented or will be implemented in the foreseeable future for single stepping an operating system. Second, real operating systems are not deterministic. Interrupts from hardware devices will almost certainly come in at unpredictable times, some devices will return unpredictable data, and gem5 is much less likely to exactly match the behavior of a system at that level, where firmware and other implementation details are no longer abstracted away. Third, the amount of state that’s relevant at the system level is typically larger than at the user level, especially in complex ISAs like x86. Gathering, comparing, and transporting all that extra state would significantly impact performance. Not all implementations of ptrace actually work properly. For instance, when I last used statetrace with ARM, certain functions called into a region of memory set up by the kernel which had kernel specific implementations of various operations. Ptrace relied on software breakpoints, which work by replacing the next instruction in the program with one that will trap. Because the region of memory really belonged to the kernel, ptrace couldn’t modify it to install a breakpoint. The process “escaped” single-stepped execution and quickly ran to completion, leaving gem5 waiting for an update that never came. statetrace isn’t able to track changes to memory. Because memory is very large and there isn’t a convenient way to detect modifications to it, statetrace only tracks register based architectural state. If an instruction changes registers correctly but stores the wrong value to memory and/or to the wrong address, that problem may not be detected for many instructions. Fortunately, those sorts of errors are the exception. To compare execution to a real machine, you ideally need to have a real machine at your disposal. 
It’s still quite possible, however, to run statetrace inside an emulator like qemu. That’s likely a little slower and compares execution against the emulator and not real hardware, but it can still help identify bugs. ISA support: Currently SPARC, ARM, and x86 support statetrace. ARM’s support is currently the most sophisticated, only sending differences in state across the connection, which improves performance, and only printing when differences start or stop, which reduces output and improves readability. Those features are planned to be ported to the other ISAs. Hopefully that code can be factored out and put into the base NativeTrace class so that all ISAs can use it easily.",
        "url": "/documentation/general_docs/debugging_and_testing/debugging/trace_based_debugging"
      }
      ,
    
      "documentation-general-docs-debugging-and-testing-directed-testers-garnet-synthetic-traffic": {
        "title": "Garnet Synthetic Traffic",
        "content": "Garnet Synthetic Traffic: The Garnet Synthetic Traffic provides a framework for simulating the Garnet network with controlled inputs. This is useful for network testing/debugging, or for network-only simulations with synthetic traffic. Note: The garnet synthetic traffic injector only works with the Garnet_standalone coherence protocol. Related Files: configs/example/garnet_synth_traffic.py : file to invoke the network tester; src/cpu/testers/garnet_synthetic_traffic/GarnetSyntheticTraffic.* : files implementing the tester. How to run: First build gem5 with the Garnet_standalone coherence protocol. This protocol is ISA-agnostic, and hence we build it with the NULL ISA. scons build/NULL/gem5.debug PROTOCOL=Garnet_standalone Example command: ./build/NULL/gem5.debug configs/example/garnet_synth_traffic.py \\ --num-cpus=16 \\ --num-dirs=16 \\ --network=garnet2.0 \\ --topology=Mesh_XY \\ --mesh-rows=4 \\ --sim-cycles=1000 \\ --synthetic=uniform_random \\ --injectionrate=0.01 Parameterized Options. System Configuration: --num-cpus : Number of cpus. This is the number of source (injection) nodes in the network. --num-dirs : Number of directories. This is the number of destination (ejection) nodes in the network. --network : Network model: simple or garnet2.0. Use garnet2.0 for running synthetic traffic. --topology : Topology for connecting the cpus and dirs to the network routers/switches. --mesh-rows : The number of rows in the mesh. Only valid when --topology is Mesh* or MeshDirCorners*. Network Configuration: --router-latency : Default number of pipeline stages in the garnet router. Has to be &gt;= 1. Can be over-ridden on a per router basis in the topology file. --link-latency : Default latency of each link in the network. Has to be &gt;= 1. 
Can be over-ridden on a per link basis in the topology file. --vcs-per-vnet : Number of VCs per Virtual Network. --link-width-bits : Width in bits for all links inside the garnet network. Default = 128. Traffic Injection Configuration: --sim-cycles : Total number of cycles for which the simulation should run. --synthetic : The type of synthetic traffic to be injected. The following synthetic traffic patterns are currently supported: uniform_random, tornado, bit_complement, bit_reverse, bit_rotation, neighbor, shuffle, and transpose. --injectionrate : Traffic Injection Rate in packets/node/cycle. It can take any decimal value between 0 and 1. The number of digits of precision after the decimal point can be controlled by --precision, which is set to 3 by default in garnet_synth_traffic.py. --single-sender-id : Only inject from this sender. To send from all nodes, set to -1. --single-dest-id : Only send to this destination. To send to all destinations as specified by the synthetic traffic pattern, set to -1. --num-packets-max : Maximum number of packets to be injected by each cpu node. Default value is -1 (keep injecting till sim-cycles). --inj-vnet : Only inject in this vnet (0, 1 or 2). 0 and 1 are 1-flit, 2 is 5-flit. Set to -1 to inject randomly in all vnets. Implementation of Garnet Synthetic Traffic: The synthetic traffic injector is implemented in GarnetSyntheticTraffic.cc. The sequence of steps involved in generating and sending a packet are as follows. Every cycle, each cpu performs a Bernoulli trial with probability equal to --injectionrate to determine whether to generate a packet or not. If --num-packets-max is non-negative, each cpu stops generating new packets after generating --num-packets-max packets. The injector terminates after --sim-cycles. 
If the cpu has to generate a new packet, it computes the destination for the new packet based on the synthetic traffic type (--synthetic). This destination is embedded into the bits after the block offset in the packet address. The generated packet is randomly tagged as a ReadReq, an INST_FETCH, or a WriteReq, and sent to the Ruby Port (src/mem/ruby/system/RubyPort.hh/cc). The Ruby Port converts the packet into a RubyRequestType:LD, RubyRequestType:IFETCH, or RubyRequestType:ST, respectively, and sends it to the Sequencer, which in turn sends it to the Garnet_standalone cache controller. The cache controller extracts the destination directory from the packet address. The cache controller injects the LD, IFETCH and ST into virtual networks 0, 1 and 2 respectively. LD and IFETCH are injected as control packets (8 bytes), while ST is injected as a data packet (72 bytes). The packet traverses the network and reaches the directory. The directory controller simply drops it.",
        "url": "/documentation/general_docs/debugging_and_testing/directed_testers/garnet_synthetic_traffic/"
      }
      ,
    
      "documentation-general-docs-debugging-and-testing-directed-testers-ruby-random-tester": {
        "title": "Ruby Random Tester",
        "content": "Ruby Random Tester: A cache coherence protocol usually has several different types of state machines, with each state machine having several different states. For example, the MESI CMP directory protocol has four different state machines (L1, L2, directory, dma). Testing such a protocol for functional correctness is a challenging task. gem5 provides a random tester for testing coherence protocols. It is called the Ruby Random Tester. The source files related to the tester are present in the directory src/cpu/testers/rubytest. The file configs/example/ruby_random_test.py is used for configuration and execution of the test. For example, the following command can be used for testing a protocol: ./build/X86/gem5.fast ./configs/example/ruby_random_test.py Though one can specify many different options to the random tester, some of them are noteworthy. Parameters: -n, --num-cpus : Number of cpus injecting load/store requests to the memory system. --num-dirs : Number of directory controllers in the system. -m, --maxtick : Number of cycles to simulate. -l, --checks : Number of loads to be performed. --random_seed : Seed for initialization of the random number generator. Testing a coherence protocol with the random tester is a tedious task and requires patience. First, build gem5 with the protocol to be tested. Then, run the ruby random tester as mentioned above. Initially one should run the tester with a single processor and few loads. It is likely that one will encounter problems. Use the debug flags to get a trace of the events occurring in the system. You may find the flag ProtocolTrace particularly useful. As these are rectified, keep on increasing the number of loads, say by a factor of 10 each time, till one can execute one to ten million loads. 
Once it starts working for a single processor, a similar process now needs to be followed for a two processor system, followed by larger systems. Theoretical approaches exist for verifying coherence protocols, but gem5 currently does not include any testers based on those.",
        "url": "/documentation/general_docs/debugging_and_testing/directed_testers/ruby_random_tester/"
      }
      ,
    
      "documentation-general-docs-debugging-and-testing": {
        "title": "Debugging and Testing",
        "content": "TODO",
        "url": "/documentation/general_docs/debugging_and_testing/"
      }
      ,
    
      "documentation-general-docs-development-coding-style": {
        "title": "Coding Style",
        "content": "Coding Style: We strive to maintain a consistent coding style in the M5 source code to make the source more readable and maintainable. This necessarily involves compromise among the multiple developers who work on this code. We feel that we have been successful in finding such a compromise, as each of the primary M5 developers is annoyed by at least one of the rules below. We ask that you abide by these guidelines as well if you develop code that you would like to contribute back to M5. An Emacs c++-mode style embodying the indentation rules is available in the source tree at util/emacs/m5-c-style.el. Indentation and Line Breaks: Indentation will be 4 spaces per level, though namespaces should not increase the indentation. Exception: labels followed by colons (case and goto labels and public/private/protected modifiers) are indented two spaces from the enclosing context. Indentation should use spaces only (no tabs), as tab widths are not always set consistently, and tabs make output harder to read when used with tools such as diff. Lines must be a maximum of 79 characters long. Braces: For control blocks (if, while, etc.), opening braces must be on the same line as the control keyword with a space between the closing parenthesis and the opening brace. Exception: for multi-line expressions, the opening brace may be placed on a separate line to distinguish the control block from the statements inside the block. if (...) {    ...} // exception case for (...;     ...;     ...) // brace could be up here { // but this is optionally OK *only* when the 'for' spans multiple lines    ...} ‘Else’ keywords should follow the closing ‘if’ brace on the same line, as follows: if (...) {    ...} else if (...) {    ...} else {    ...} Blocks that consist of a single statement that fits on a single line may optionally omit the braces. 
Braces are still required if the single statement spans multiple lines, or if the block is part of an else/if chain where other blocks have braces. // This is OK with or without braces if (a &gt; 0)    --a; // In the following cases, braces are still required if (a &gt; 0) {    obnoxiously_named_function_with_lots_of_args(verbose_arg1,                                                 verbose_arg2,                                                 verbose_arg3);} if (a &gt; 0) {    --a;} else {    underflow = true;    warn(\"underflow on a\");} For function definitions or class declarations, the opening brace must be in the first column of the following line. In function definitions, the return type should be on one line, followed by the function name, left-justified, on the next line. As mentioned above, the opening brace should also be on a separate line following the function name. See examples below: int exampleFunc(...) {    ...} class ExampleClass {  public:    ...}; Functions should be preceded by a block comment describing the function. Inline function declarations longer than one line should not be placed inside class declarations. Most functions longer than one line should not be inline anyway. Spacing: There should be: one space between keywords (if, for, while, etc.) and opening parentheses; one space around binary operators (+, -, &lt;, &gt;, etc.) including assignment operators (=, +=, etc.); no space around ‘=’ when used in parameter/argument lists, either to bind default parameter values (in Python or C++) or to bind keyword arguments (in Python); no space between function names and opening parentheses for arguments; no space immediately inside parentheses, except for very complex expressions. 
Complex expressions are preferentially broken into multiple simpler expressions using temporary variables. For pointer and reference argument declarations, either of the following is acceptable: FooType *fooPtr; FooType &amp;fooRef; or FooType* fooPtr; FooType&amp; fooRef; However, style should be kept consistent within a file. If you are editing an existing file, please keep consistent with the existing code. If you are writing new code in a new file, feel free to choose the style of your preference. Naming: Class and type names are mixed case, start with an uppercase letter, and do not contain underscores (e.g., ClassName). Exception: names that are acronyms should be all upper case (e.g., CPU). Class member names (methods and variables, including const variables) are mixed case, start with a lowercase letter, and do not contain underscores (e.g., aMemberVariable). Class members that have accessor methods should have a leading underscore to indicate that the user should be using an accessor. The accessor functions themselves should have the same name as the variable without the leading underscore. Local variables are lower case, with underscores separating words (e.g., local_variable). Function parameters should use underscores and be lower case. C preprocessor symbols (constants and macros) should be all caps with underscores. 
However, these are deprecated, and should be replaced with const variables and inline functions, respectively, wherever possible. class FooBarCPU{  private:    static const int minLegalFoo = 100;  // consts are formatted just like other vars    int _fooVariable;   // starts with '_' because it has public accessor functions    int barVariable;    // no '_' since it's internal use only  public:    // short inline methods can go all on one line    int fooVariable() const { return _fooVariable; }    // longer inline methods should be formatted like regular functions,    // but indented    void    fooVariable(int new_value)    {        assert(new_value &gt;= minLegalFoo);        _fooVariable = new_value;    }}; #includes: Whenever possible, favor C++ includes over C includes. E.g., choose cstdio, not stdio.h. The block of #includes at the top of the file should be organized. We keep several sorted groups. This makes it easy to find #includes and to avoid duplicate #includes. Always include Python.h first if you need that header. This is mandated by the integration guide. The next header file should be your main header file (e.g., for foo.cc you’d include foo.hh first). Having this header first ensures that it is independent and can be included in other places without missing dependencies. // Include Python.h first if you need it. #include &lt;Python.h&gt; // Include your main header file before any other non-Python headers (i.e., the one with the same name as your cc source file) #include \"main_header.hh\" // C includes in sorted order #include &lt;fcntl.h&gt; #include &lt;sys/time.h&gt; // C++ includes #include &lt;cerrno&gt; #include &lt;cstdio&gt; #include &lt;string&gt; #include &lt;vector&gt; // Shared headers living in include/. 
These are used both in the simulator and in utilities such as the m5 tool. #include &lt;gem5/asm/generic/m5ops.h&gt; // M5 includes #include \"base/misc.hh\" #include \"cpu/base.hh\" #include \"params/BaseCPU.hh\" #include \"sim/system.hh\" File structure and modularity: Source files (.cc files) should never contain extern declarations; instead, include the header file associated with the .cc file in which the object is defined. This header file should contain extern declarations for all objects exported from that .cc file. This header should also be included in the defining .cc file. The key here is that we have a single external declaration in the .hh file that the compiler will automatically check for consistency with the .cc file. (This isn’t as important in C++ as it was in C, since linker name mangling will now catch these errors, but it’s still a good idea.) When sufficient (i.e., when declaring only pointers or references to a class), header files should use forward class declarations instead of including full header files. Header files should never contain using namespace declarations at the top level. This forces all the names in that namespace into the global namespace of any source file including that header file, which basically completely defeats the point of using namespaces. It is OK to use using namespace declarations at the top level of a source (.cc) file since the effect is entirely local to that .cc file. It’s also OK to use them in _impl.hh files, since for practical purposes these are source (not header) files despite their extension. Documenting the code: Each file/class/member should be documented using doxygen style comments. Doxygen allows users to quickly create documentation for our code by extracting the relevant information from the code and comments. It is able to document all the code structures including classes, namespaces, files, members, defines, etc. 
Most of these are quite simple to document; you only need to place a special documentation block before the declaration. The Doxygen documentation within gem5 is processed every night and the following web pages are generated: Doxygen. Using Doxygen: The special documentation blocks take the form of a javadoc style comment. A javadoc comment is a C style comment with 2 *’s at the start, like this: /** * ...documentation... */ The intermediate asterisks are optional, but please use them to clearly delineate the documentation comments. The documentation within these blocks is made up of at least a brief description of the documented structure, which can be followed by a more detailed description and other documentation. The brief description is the first sentence of the comment. It ends with a period followed by white space or a new line. For example: /** * This is the brief description. This is the start of the detailed * description. Detailed Description continued. */ If you need to have a period in the brief description, follow it with a backslash followed by a space. /** * e.g.\\ This is a brief description with an internal period. */ Blank lines within these comments are interpreted as paragraph breaks to help you make the documentation more readable. Special commands: Placing these comments before the declaration works in most cases. For files, however, you need to specify that you are documenting the file. To do this you use the @file special command. To document the file that you are currently in, you just need to use the command followed by your comments. To comment a separate file (we shouldn’t have to do this) you can supply the name directly after the file command. There are some other special commands we will be using quite often. To document functions we will use @param and @return or @retval to document the parameters and the return value. @param takes the name of the parameter and its description. @return just describes the return value, while @retval adds a name to it. 
To specify pre and post conditions you can use @pre and @post. Some other useful commands are @todo and @sa. @todo allows you to place reminders of things to fix/implement and associate them with a specific class or member/function. @sa lets you place references to another piece of documentation (class, member, etc.). This can be useful to provide links to code that would be helpful in understanding the code being documented. Example of Simple Documentation: Here is a simple header file with doxygen comments added. /** * @file * Contains an example of documentation style. */ #include &lt;vector&gt; /** * Adds two numbers together. */ #define DUMMY(a,b) (a+b) /** * A simple class description. This class does really great things in detail. * * @todo Update to new statistics model. */ class foo{  /** This variable stores blah, which does foo and has invariants x,y,z         @warning never set this to 0         @invariant foo    */   int myVar; /**  * This function does something.  * @param a The number of times to do it.  * @param b The thing to do it to.  * @return The number of times it was done.  *  * @sa DUMMY  */ int bar(int a, long b); /**  * A function that does bar.  * @retval true if there is a problem, false otherwise.  */ bool manchu();}; Grouping: Doxygen also allows for groups of classes and members (or other groups) to be declared. We can use these to create a listing of all statistics/global variables, or just to comment about the memory hierarchy as a whole. You define a group using @defgroup and then add to it using @ingroup or @addtogroup. For example: /** * @defgroup statistics Statistics group */ /**  * @defgroup substat1 Statistics subgroup  * @ingroup statistics  */ /** *  A simple class. */ class foo{  /**   * Collects data about blah.   * @ingroup statistics   */  Stat stat1;  /**   * Collects data about the rate of blah.   * @ingroup statistics   */  Stat stat2;  /**   * Collects data about flotsam.   
* @ingroup statistics   */  Stat stat3;  /**   * Collects data about jetsam.   * @ingroup substat1   */  Stat stat4;}; This places stat1-3 in the statistics group and stat4 in the subgroup. There is a shorthand method to place objects in groups. You can use @{ and @} to mark the start and end of group inclusion. The example above can be rewritten as: /** * @defgroup statistics Statistics group */ /**  * @defgroup substat1 Statistics subgroup  * @ingroup statistics  */ /** *  A simple class. */ class foo{  /**   * @ingroup statistics   * @{   */  /** Collects data about blah.*/  Stat stat1;  /** Collects data about the rate of blah. */  Stat stat2;  /** Collects data about flotsam.*/  Stat stat3;  /** @} */  /**   * Collects data about jetsam.   * @ingroup substat1   */  Stat stat4;}; It remains to be seen what groups we can come up with. Other features: Not sure what other doxygen features we want to use. M5 Status Messages. Fatal v. Panic: There are two error functions defined in src/base/misc.hh: panic() and fatal(). While these two functions have roughly similar effects (printing an error message and terminating the simulation process), they have distinct purposes and use cases. The distinction is documented in the comments in the header file, but is repeated here for convenience because people often get confused and use the wrong one. panic() should be called when something happens that should never ever happen regardless of what the user does (i.e., an actual m5 bug). panic() calls abort() which can dump core or enter the debugger. fatal() should be called when the simulation cannot continue due to some condition that is the user’s fault (bad configuration, invalid arguments, etc.) and not a simulator bug. fatal() calls exit(1), i.e., a “normal” exit with an error code. The reasoning behind these definitions is that there’s no need to panic if it’s just a silly user error; we only panic if m5 itself is broken. 
On the other hand, it’s not hard for users to make errors that are fatal, that is, errors that are serious enough that the m5 process cannot continue.Inform, Warn and HackThe file src/base/misc.hh also houses three functions that alert the user to various conditions happening within the simulation: inform(), warn() and hack(). The purpose of these functions is strictly to provide simulation status to the user, so none of these functions will stop the simulator from running.      inform() and inform_once() should be called for informative messages that users should know, but not worry about. inform_once() will only display the status message generated by the inform_once() function the first time it is called.        warn() and warn_once() should be called when some functionality isn’t necessarily implemented correctly, but it might work well enough. The idea behind a warn() is to inform the user that if they see some strange behavior shortly after a warn(), the description might be a good place to go looking for an error.    hack() should be called when some functionality isn’t implemented nearly as well as it could or should be, but for expediency or history’s sake hasn’t been fixed.  inform() provides status messages and normal operating messages to the console for the user to see, without any connotations of incorrect behavior. For example, it’s used when secondary CPUs begin executing code on ALPHA.",
        "url": "/documentation/general_docs/development/coding_style/"
      }
      ,
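The fatal()/panic() distinction described above can be sketched outside of gem5. The following shell fragment is a hedged illustration only (the function names mirror src/base/misc.hh, but this is not gem5 code): it mimics the two termination behaviors, a "normal" exit(1) versus an abort-style death.

```shell
# Sketch, not gem5 code: shell stand-ins for the two error paths.
# fatal(): user error -> print a message, then a "normal" exit(1).
fatal() {
    echo "fatal: $*" >&2
    exit 1
}
# panic(): simulator bug -> print a message, then die abort-style
# (SIGABRT, which can dump core just like abort() in C).
panic() {
    echo "panic: $*" >&2
    kill -ABRT $$
}
# Run fatal() in a subshell so the enclosing script can observe its status.
( fatal "bad configuration" )
STATUS=$?
echo "fatal exited with status $STATUS"
```

Because panic() raises SIGABRT, a shell that calls it directly is killed rather than exiting cleanly, matching the abort() behavior of the real panic().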
    
      "documentation-general-docs-development": {
        "title": "Developing gem5",
        "content": "",
        "url": "/documentation/general_docs/development/"
      }
      ,

      "documentation-general-docs-fullsystem-building-android-m": {
        "title": "Building Android Marshmallow",
        "content": "Building Android MarshmallowThis guide gives detailed step-by-step instructions on building an Android Marshmallow image along with a working kernel and .dtb file that work with gem5.OverviewTo successfully run Android in gem5, an image, a compatible kernel, and a device tree blob (.dtb) file configured for the simulator are necessary. This guide shows how to build the Android Marshmallow 32-bit version using a 3.14 kernel with Mali support. An extra section will be added in the future on how to build the 4.4 kernel with Mali.Pre-requisitesThis guide assumes a 64-bit system running Ubuntu 14.04 LTS. Before starting, it is important to set up the system correctly. To do this, the following packages need to be installed from the shell.Tip: Always check for the up-to-date prerequisites at the Android build page.Update and install all the dependencies. This can be done with the following commands:sudo apt-get updatesudo apt-get install openjdk-7-jdk git-core gnupg flex bison gperf build-essential zip curl zlib1g-dev gcc-multilib g++-multilib libc6-dev-i386 lib32ncurses5-dev x11proto-core-dev libx11-dev lib32z-dev ccache libgl1-mesa-dev libxml2-utils xsltproc unzipAlso, make sure to have repo correctly installed (instructions here).Ensure that the default JDK is OpenJDK 1.7:javac -versionTo cross-compile the kernel (32bit) and for the device tree we will need the following packages to be installed:sudo apt-get install gcc-arm-linux-gnueabihf device-tree-compilerBefore getting started, as a final step make sure to have the gem5 binaries and busybox for 32-bit ARM.For the gem5 binaries just do the following starting from your gem5 directory:cd util/m5make -f Makefile.armcd ../termmakecd ../../system/arm/simple_bootloader/makeFor busybox you can find the guide here.Building AndroidWe build Android Marshmallow using an AOSP build based on the release for the Pixel C. 
The AOSP provides other builds, which are untested with this guide.Tip: Synching with repo will take a long time. Use the -jN flag to speed up the make process, where N is the number of parallel jobs to run.Make a directory and pull the Android repository:mkdir androidcd androidrepo init --depth=1 -u https://android.googlesource.com/platform/manifest -b android-6.0.1_r63repo sync -c -jNBefore you start the AOSP build, you will need to make one change to the build system to enable building libion.so, which is used by the Mali driver. Edit the file aosp/system/core/libion/Android.mk to change LOCAL_MODULE_TAGS for libion from ‘optional’ to ‘debug’. Here is the output of repo diff:  --- a/system/core/libion/Android.mk  +++ b/system/core/libion/Android.mk  @@ -3,7 +3,7 @@ LOCAL_PATH := $(call my-dir)  include $(CLEAR_VARS)  LOCAL_SRC_FILES := ion.c  LOCAL_MODULE := libion  -LOCAL_MODULE_TAGS := optional  +LOCAL_MODULE_TAGS := debug  LOCAL_SHARED_LIBRARIES := liblog  LOCAL_C_INCLUDES := $(LOCAL_PATH)/include $(LOCAL_PATH)/kernel-headers  LOCAL_EXPORT_C_INCLUDE_DIRS := $(LOCAL_PATH)/include  $(LOCAL_PATH)/kernel-headersSource the environment setup and build Android:Tip: For root access and “debuggability” [sic] we choose userdebug. Build can be done in different modes as seen here.Tip: Making Android will take a long time. Use the -jN flag to speed up the make process, where N is the number of parallel jobs to run.Make sure to do this in a bash shell.source build/envsetup.shlunch aosp_arm-userdebugmake -jNCreating an Android imageAfter a successful build, we create an image of Android and add the init files and binaries that configure the system for gem5. 
The following example creates a 3GB image.Tip: If you want to add applications or data, make the image large enough to fit the build and anything else that is meant to be written into it.Create an empty image to flash the Android build and attach the image to a loopback device:dd if=/dev/zero of=myimage.img bs=1M count=2560sudo losetup /dev/loop0 myimage.imgWe now need to create three partitions: AndroidRoot (1.5GB), AndroidData (1GB), and AndroidCache (512MB).First, partition the device:sudo fdisk /dev/loop0Update the partition table:sudo partprobe /dev/loop0Name the partitions / Define filesystem as ext4:sudo mkfs.ext4 -L AndroidRoot /dev/loop0p1sudo mkfs.ext4 -L AndroidData /dev/loop0p2sudo mkfs.ext4 -L AndroidCache /dev/loop0p3Mount the Root partition to a directory:sudo mkdir -p /mnt/androidRootsudo mount /dev/loop0p1 /mnt/androidRootLoad the build to the partition:cd /mnt/androidRootsudo zcat &lt;path/to/build/android&gt;/out/target/product/generic/ramdisk.img | sudo cpio -isudo mkdir cachesudo mkdir /mnt/tmpsudo mount -oro,loop &lt;path/to/build/android&gt;/out/target/product/generic/system.img /mnt/tmpsudo cp -a /mnt/tmp/* system/sudo umount /mnt/tmpDownload and unpack the overlays that are necessary from the gem5 Android KitKat page and make the following changes to the init.gem5.rc file. 
Here is the output of repo diff:  --- /kitkat_overlay/init.gem5.rc  +++ /m_overlay/init.gem5.rc  @@ -1,21 +1,13 @@  +   on early-init       mount debugfs debugfs /sys/kernel/debug     on init  -    export LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/vendor/lib/egl  -  -    # See storage config details at http://source.android.com/tech/storage/  -    mkdir /mnt/media_rw/sdcard 0700 media_rw media_rw  -    mkdir /storage/sdcard 0700 root root  +    # Support legacy paths  +    symlink /sdcard /mnt/sdcard       chmod 0666 /dev/mali0       chmod 0666 /dev/ion  -  -    export EXTERNAL_STORAGE /storage/sdcard  -  -    # Support legacy paths  -    symlink /storage/sdcard /sdcard  -    symlink /storage/sdcard /mnt/sdcard     on fs       mount_all /fstab.gem5  @@ -60,7 +52,6 @@       group root       oneshot    -# fusewrapped external sdcard daemon running as media_rw (1023)  -service fuse_sdcard /system/bin/sdcard -u 1023 -g 1023 -d  /mnt/media_rw/sdcard /storage/sdcard  +service fingerprintd /system/bin/fingerprintd       class late_start  -    disabled  +    user systemAdd the Android overlays and configure their permissions:sudo cp -r &lt;path/to/android/overlays&gt;/* /mnt/androidRoot/sudo chmod ug+x /mnt/androidRoot/init.gem5.rc/mnt/androidRoot/gem5/postboot.shAdd the m5 and busybox binaries under the sbin directory and make them executable:sudo cp &lt;path/to/gem5&gt;/util/m5/m5 /mnt/androidRoot/sbinsudo cp &lt;path/to/busybox&gt;/busybox /mnt/androidRoot/sbinsudo chmod a+x /mnt/androidRoot/sbin/busybox /mnt/androidRoot/sbin/m5Make the directories readable and searchable:sudo chmod a+rx /mnt/androidRoot/sbin/ /mnt/androidRoot/gem5/Remove the boot animation:sudo rm /mnt/androidRoot/system/bin/bootanimationDownload and unpack the Mali drivers, for gem5 Android 4.4, from here. 
Then, make the directories for the drivers and copy them:sudo mkdir -p /mnt/androidRoot/system/vendor/lib/eglsudo mkdir -p /mnt/androidRoot/system/vendor/lib/hwsudo cp &lt;path/to/userspace/Mali/drivers&gt;/lib/egl/libGLES_mali.so /mnt/androidRoot/system/vendor/lib/eglsudo cp &lt;path/to/userspace/Mali/drivers&gt;/lib/hw/gralloc.default.so /mnt/androidRoot/system/vendor/lib/hwChange the permissionssudo chmod 0755 /mnt/androidRoot/system/vendor/lib/hwsudo chmod 0755 /mnt/androidRoot/system/vendor/lib/eglsudo chmod 0644 /mnt/androidRoot/system/vendor/lib/egl/libGLES_mali.sosudo chmod 0644 /mnt/androidRoot/system/vendor/lib/hw/gralloc.default.soUnmount and remove loopback device:cd /..sudo umount /mnt/androidRootsudo losetup -d /dev/loop0Building the Kernel (3.14)After successfully setting up the image, a compatible kernel needs to be built and a .dtb file generated.Clone the repository containing the gem5 specific kernel:git clone -b ll_20140416.0-gem5 https://github.com/gem5/linux-arm-gem5.gitMake the following changes to the kernel gem5 config file at &lt;path/to/kernel/repo&gt;/arch/arm/configs/vexpress_gem5_defconfig. Here is the output of repo diff:  --- a/arch/arm/configs/vexpress_gem5_defconfig  +++ b/arch/arm/configs/vexpress_gem5_defconfig  @@ -200,4 +200,15 @@ CONFIG_EARLY_PRINTK=y  CONFIG_DEBUG_PREEMPT=n  # CONFIG_CRYPTO_ANSI_CPRNG is not set  # CONFIG_CRYPTO_HW is not set  +CONFIG_MALI_MIDGARD=y  +CONFIG_MALI_MIDGARD_DEBUG_SYS=y  +CONFIG_ION=y  +CONFIG_ION_DUMMY=y  CONFIG_BINARY_PRINTF=y  +CONFIG_NET_9P=y  +CONFIG_NET_9P_VIRTIO=y  +CONFIG_9P_FS=y  +CONFIG_9P_FS_POSIX_ACL=y  +CONFIG_9P_FS_SECURITY=y  +CONFIG_VIRTIO_BLK=y  +CONFIG_VMSPLIT_3G=y  +CONFIG_DNOTIFY=y  +CONFIG_FUSE_FS=yFor the device tree, add the Mali GPU device and increase the memory to 1.8GB. Do this with the following changes at &lt;path/to/kernel/repo&gt;/arch/arm/boot/dts/vexpress-v2p-ca15-tc1-gem5.dts. 
Here is the output of repo diff:  --- a/arch/arm/boot/dts/vexpress-v2p-ca15-tc1-gem5.dts  +++ b/arch/arm/boot/dts/vexpress-v2p-ca15-tc1-gem5.dts  @@ -45,7 +45,7 @@             memory@80000000 {                   device_type = \"memory\";  -                reg = &lt;0 0x80000000 0 0x40000000&gt;;  +                reg = &lt;0 0x80000000 0 0x74000000&gt;;           };            hdlcd@2b000000 {  @@ -59,6 +59,14 @@  //                mode = \"3840x2160MR-16@60\"; // UHD4K mode string                    framebuffer = &lt;0 0x8f000000 0 0x01000000&gt;;            };  +  +    gpu@0x2d000000 {  +        compatible = \"arm,mali-midgard\";  +        reg = &lt;0 0x2b400000 0 0x4000&gt;;  +        interrupts = &lt;0 86 4&gt;, &lt;0 87 4&gt;, &lt;0 88 4&gt;;  +        interrupt-names = \"JOB\", \"MMU\", \"GPU\";  +    };  +  /*          memory-controller@2b0a0000 {                    compatible = \"arm,pl341\", \"arm,primecell\";Download and unpack the userspace matching Mali kernel drivers for gem5 from [http://malideveloper.arm.com/resources/drivers/open-source-mali-midgard-gpu-kernel-drivers/ here]. 
Copy them to the gpu driver directory:cp -r &lt;path/to/kernelspace/Mali/drivers&gt;/driver/product/kernel/drivers/gpu/arm/ drivers/gpuChange the following in &lt;path/to/kernelspace/Mali/drivers&gt;/drivers/video/Kconfig and &lt;path/to/kernelspace/Mali/drivers&gt;/drivers/gpu/Makefile based on the following diffs:Here is the output of the Kconfig repo diff:  --- a/drivers/video/Kconfig  +++ b/drivers/video/Kconfig  @@ -23,6 +23,8 @@ source \"drivers/gpu/host1x/Kconfig\"    source \"drivers/gpu/drm/Kconfig\"    +source \"drivers/gpu/arm/Kconfig\"  +   config VGASTATE          tristate          default nHere is the output of the drivers/gpu/Makefile repo diff:  --- a/drivers/gpu/Makefile  +++ b/drivers/gpu/Makefile  @@ -1,2 +1,2 @@  -obj-y                += drm/ vga/  +obj-y                += drm/ vga/ arm/Finally, build the kernel and the .dtb file.Tip: Use the -jN flag to speed up the make process, where N is the number of parallel jobs to run.Build the kernel:make CROSS_COMPILE=arm-linux-gnueabihf- ARCH=arm vexpress_gem5_defconfigmake CROSS_COMPILE=arm-linux-gnueabihf- ARCH=arm vmlinux -jNCreate the .dtb file:dtc -I dts -O dtb arch/arm/boot/dts/vexpress-v2p-ca15-tc1-gem5.dts &gt; vexpress-v2p-ca15-tc1-gem5.dtbTesting the buildMake the following changes to example/fs.py. 
Here is the output repo diff:  --- a/configs/example/fs.py Thu Jun 02 20:34:39 2016 +0100  +++ b/configs/example/fs.py Fri Jun 10 15:37:29 2016 -0700  @@ -144,6 +144,13 @@       if is_kvm_cpu(TestCPUClass) or is_kvm_cpu(FutureClass):           test_sys.vm = KvmVM()    +    test_sys.gpu = NoMaliGpu(  +        gpu_type=\"T760\",  +        ver_maj=0, ver_min=0, ver_status=1,  +        int_job=118, int_mmu=119, int_gpu=120,  +        pio_addr=0x2b400000,  +        pio=test_sys.membus.master)  +      if options.ruby:          # Check for timing mode because ruby does not support atomic accesses          if not (options.cpu_type == \"detailed\" or options.cpu_type == \"timing\"):And the changes to FS config to either enable or disable software rendering.  --- a/configs/common/FSConfig.py Thu Jun 02 20:34:39 2016 +0100  +++ b/configs/common/FSConfig.py Thu Jun 16 10:23:44 2016 -0700  @@ -345,7 +345,7 @@               # release-specific tweaks             if 'kitkat' in mdesc.os_type():  -                cmdline += \" androidboot.hardware=gem5 qemu=1 qemu.gles=0 \" + \\  +                cmdline += \" androidboot.hardware=gem5 qemu=1 qemu.gles=1 \" + \\                            \"android.bootanim=0\"           self.boot_osflags = fillInCmdline(mdesc, cmdlineSet the following M5_PATH:M5_PATH=. build/ARM/gem5.opt configs/example/fs.py --cpu-type=atomic --mem-type=SimpleMemory --os-type=android-kitkat --disk-image=myimage.img --machine-type=VExpress_EMM --dtb-filename=vexpress-v2p-ca15-tc1-gem5.dtb -n 1 --mem-size=1800MBBuilding older versions of Androidgem5 has support for running even older versions of Android like KitKat. The documentation to do so, as well as the necessary drivers and files required, can be found on the old wiki here.",
        "url": "/documentation/general_docs/fullsystem/building_android_m"
      }
      ,
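The image-sizing step in the Android guide above deserves a quick sanity check: the three partitions it creates (AndroidRoot 1.5GB, AndroidData 1GB, AndroidCache 512MB) add up to about 3 GiB, while `dd` with `bs=1M count=2560` only allocates 2560 MiB. The sketch below (hypothetical file name `myimage.img`, treating GB as GiB for round numbers) computes the needed size and creates a sparse file of that size as an alternative to writing zeros:

```shell
# Sum the partition sizes from the guide (assumed GiB-sized for simplicity).
ROOT_MB=1536   # AndroidRoot (1.5 GB)
DATA_MB=1024   # AndroidData (1 GB)
CACHE_MB=512   # AndroidCache (512 MB)
TOTAL_MB=$(( ROOT_MB + DATA_MB + CACHE_MB ))
echo "partitions need at least ${TOTAL_MB} MiB"
# Create a sparse file of that size; seek (instead of count) writes no
# zeros, so the file takes almost no space on the host disk until used.
dd if=/dev/zero of=myimage.img bs=1M count=0 seek=${TOTAL_MB} 2>/dev/null
```

Sizing the image to at least the sum of its partitions avoids fdisk running out of space when creating the third partition.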

      "documentation-general-docs-fullsystem-building-arm-kernel": {
        "title": "Building ARM Kernel",
        "content": "Building ARM KernelThis page contains instructions for building up-to-date kernels for gem5 running on ARM.If you don’t want to build the kernel on your own, you can still download a prebuilt versionPrerequisitesThese instructions are for running headless systems. That is, a more “server”-style system where there is no frame-buffer. The description has been created using the latest known-working tag in the repositories linked below, however the tables in each section list previous tags that are known to work. To build the kernels on an x86 host you’ll need ARM cross compilers and the device tree compiler. If you’re running a reasonably new version of Ubuntu or Debian you can get the required software through apt:apt-get install  gcc-arm-linux-gnueabihf gcc-aarch64-linux-gnu device-tree-compilerIf you can’t use these pre-made compilers, the next easiest way is to obtain the required compilers from Linaro.Depending on the exact source of your cross compilers, the compiler names used below will require small changes.To actually run the kernel, you’ll need to download or compile gem5’s bootloader. See the bootloaders section in this document for details.Linux 4.xNewer gem5 kernels for ARM (v4.x and later) are based on the vanilla Linux kernel and typically have a small number of patches to make them work better with gem5. The patches are optional and you should be able to use a vanilla kernel as well. However, this requires you to configure the kernel yourself. Newer kernels all use the VExpress_GEM5_V1 gem5 platform for both AArch32 and AArch64. The required DTB files to describe the hardware to the OS ship with gem5. To build them, execute this command:make -C system/arm/dtKernel CheckoutTo checkout the kernel, execute the following command:git clone https://gem5.googlesource.com/arm/linuxThe repository contains a tag per gem5 kernel release and working branches for major Linux revisions. 
Check the project page for a list of tags and branches. The clone command will, by default, check out the latest release branch. To checkout the v4.14 branch, execute the following in the repository:git checkout -b gem5/v4.14AArch32To compile the kernel, execute the following commands in the repository:make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- gem5_defconfigmake ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- -j `nproc`Testing the just built kernel:./build/ARM/gem5.opt configs/example/fs.py --kernel=/tmp/linux-arm-gem5/vmlinux --machine-type=VExpress_GEM5_V1 \\    --dtb-file=$PWD/system/arm/dt/armv7_gem5_v1_1cpu.dtbAArch64To compile the kernel, execute the following commands in the repository:make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- gem5_defconfigmake ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j `nproc`Testing the just built kernel:./build/ARM/gem5.opt configs/example/fs.py --kernel=/tmp/linux-arm-gem5/vmlinux --machine-type=VExpress_GEM5_V1 \\    --dtb-file=$PWD/system/arm/dt/armv8_gem5_v1_1cpu.dtb --disk-image=linaro-minimal-aarch64.imgLegacy kernels (pre v4.x)Older gem5 kernels for ARM (pre v4.x) are based on Linaro’s Linux kernel for ARM. These kernels use either the VExpress_EMM (AArch32) or VExpress_EMM64 (AArch64)  gem5 platform. Unlike the newer kernels, there is a separate AArch32 and AArch64 kernel repository and the device tree files are shipped with the kernel.32 bit kernel (AArch32)These are instructions to generate a 32-bit ARM Linux binary.To checkout the aarch32 kernel, execute the following command:git clone https://gem5.googlesource.com/arm/linux-arm-legacyThe repository contains a tag per gem5 kernel release. Check the project page for a list of branches and release tags. 
To checkout a tag, execute the following in the repository:git checkout -b TAGNAMETo compile the kernel, execute the following commands in the repository:make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- vexpress_gem5_server_defconfigmake ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- -j `nproc`Testing the just built kernel:./build/ARM/gem5.opt configs/example/fs.py  --kernel=/tmp/linux-arm-gem5/vmlinux \\   --machine-type=VExpress_EMM --dtb-file=/tmp/linux-arm-gem5/arch/arm/boot/dts/vexpress-v2p-ca15-tc1-gem5.dtb 64 bit kernel (AArch64)These are instructions to generate a 64-bit ARM Linux binary.To checkout the aarch64 kernel, execute the following command:git clone https://gem5.googlesource.com/arm/linux-arm64-legacyThe repository contains a tag per gem5 kernel release. Check the project page for a list of branches and release tags. To checkout a tag, execute the following in the repository:git checkout -b TAGNAMETo compile the kernel, execute the following commands in the repository:make ARCH=arm64 CROSS_COMPILE=aarch64-none-elf- gem5_defconfigmake ARCH=arm64 CROSS_COMPILE=aarch64-none-elf- -j4Testing the just built kernel:./build/ARM/gem5.opt configs/example/fs.py --kernel=/tmp/linux-arm64-gem5/vmlinux --machine-type=VExpress_EMM64 \\    --dtb-file=/tmp/linux-arm64-gem5/arch/arm64/boot/dts/aarch64_gem5_server.dtb --disk-image=linaro-minimal-aarch64.imgBootloadersThere are two different bootloaders for gem5. One for 32-bit kernels and one for 64-bit kernels. They can be compiled using the following commands:make -C system/arm/bootloader/armmake -C system/arm/bootloader/arm64Once you have compiled the binaries, put them in the binaries directory in your M5_PATH.",
        "url": "/documentation/general_docs/fullsystem/building_arm_kernel"
      }
      ,
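The kernel build recipes above all follow the same pattern: set `ARCH` and `CROSS_COMPILE`, build the defconfig target, then build `vmlinux` with parallel jobs. A small dry-run sketch (it only echoes the commands, since actually building a kernel is outside the scope of a snippet; the toolchain prefix is the one used in the AArch64 instructions and may need adjusting for your setup) makes the pattern explicit:

```shell
# Dry-run sketch: print the two make invocations used by the AArch64
# recipe rather than executing them.
ARCH=arm64
CROSS_COMPILE=aarch64-linux-gnu-   # adjust to your toolchain's prefix
JOBS=$(nproc)                      # number of parallel jobs (-jN)
CMD1="make ARCH=${ARCH} CROSS_COMPILE=${CROSS_COMPILE} gem5_defconfig"
CMD2="make ARCH=${ARCH} CROSS_COMPILE=${CROSS_COMPILE} vmlinux -j ${JOBS}"
echo "$CMD1"
echo "$CMD2"
```

Swapping `ARCH=arm` and `CROSS_COMPILE=arm-linux-gnueabihf-` with the corresponding defconfig gives the AArch32 variant.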

      "documentation-general-docs-fullsystem-devices": {
        "title": "Devices",
        "content": "Devices in full system modeI/O Device Base ClassesThe base classes in src/dev/*_device.* allow devices to be created with reasonable ease.The classes and virtual functions that must be implemented are listed below.Before reading the following it will help to be familiar with the Memory_System.PioPortThe PioPort class is a programmed I/O port that all devices that are sensitive to an address range use.The port takes all the memory access types and rolls them into one read() and write() call that the device must respond to.The device must also provide the addressRanges() function with which it returns the address ranges it is interested in.If desired a device could have more than one PIO port.However, in the normal case it would only have one port and return multiple ranges when the addressRanges() function is called. The only time multiple PIO ports would be desirable is if your device wanted to have separate connections to two memory objects.PioDeviceThis is the base class which all devices sensitive to an address range inherit from.There are three pure virtual functions which all devices must implement addressRanges(), read(), and write().The magic to choose which mode we are in, etc., is handled by the PioPort so the device doesn’t have to bother.Parameters for each device should be in a Params struct derived from PioDevice::Params.BasicPioDeviceSince most PioDevices only respond to one address range BasicPioDevice provides an addressRanges() and parameters for the normal pio delay and the address to which the device responds to.Since the size of the device normally isn’t configurable a parameter is not used for this and anything that inherits from this class is expected to write its size into pioSize in its constructor.DmaPortThe DmaPort (in dma_device.hh) is used only for device mastered accesses.The recvTimingResp() method must be available to receive responses (nacked or not) to requests it makes.The port has a public method, dmaPending(), which returns whether the DMA port is busy (e.g. it is still trying to send out all the pieces of the last request).All the code to break requests up into suitably sized chunks, collect the potentially multiple responses and respond to the device is accessed through dmaAction().A command, start address, size, completion event, and possibly data is handed to the function which will then execute the completion event’s process() method when the request has been completed.Internally the code uses DmaReqState to manage what blocks it has received and to know when to execute the completion event.DmaDeviceThis is the base class from which a non-PCI DMA device would inherit, however none of those exist currently within M5. The class does have some methods, dmaWrite() and dmaRead(), that select the appropriate command for a DMA read or write operation.NIC DevicesThe gem5 simulator has two different Network Interface Card (NIC) devices that can be used to connect together two simulation instances over a simulated ethernet link.Getting a list of packets on the ethernet linkYou can get a list of the packets on the ethernet link by creating an Etherdump object, setting its file parameter, and setting the dump parameter on the EtherLink to it.This is easily accomplished with our fs.py example configuration by adding the command line option --etherdump=&lt;filename&gt;. The resulting file will be named &lt;filename&gt; and be in a standard pcap format.This file can be read with wireshark or anything else that understands the pcap format.PCI devicesTo do: Explanation of platforms and systems, how they’re related, and what they’re each for",
        "url": "/documentation/general_docs/fullsystem/devices"
      }
      ,
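As the Devices entry above notes, the Etherdump output is a standard pcap file. One quick way to sanity-check a capture without firing up wireshark is to look at the 4-byte pcap magic number 0xa1b2c3d4, which is stored in host byte order (so on a little-endian machine it appears on disk as d4 c3 b2 a1). The file name below is a hypothetical stand-in, not something gem5 produces:

```shell
# Write the 4 magic bytes a pcap file starts with on a little-endian host
# (octal escapes for 0xd4 0xc3 0xb2 0xa1) into a stand-in file, then read
# them back as hex the way you might inspect a real Etherdump trace.
printf '\324\303\262\241' > fake-trace.pcap
MAGIC=$(od -An -tx1 -N4 fake-trace.pcap | tr -d ' \n')
echo "magic: $MAGIC"
```

Running the same `od` command on a real `--etherdump` output file should show either byte order of the magic, depending on the host that wrote it.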
    
      "documentation-general-docs-fullsystem-disks": {
        "title": "Creating disk images",
        "content": "Creating disk images for full system modeIn full-system mode, gem5 relies on a disk image with an installed operating system to run simulations.A disk device in gem5 gets its initial contents from disk image.The disk image file stores all the bytes present on the disk just as you would find them on an actual device.Some other systems also use disk images which are in more complicated formats and which provide compression, encryption, etc. gem5 currently only supports raw images, so if you have an image in one of those other formats, you’ll have to convert it into a raw image before you can use it in a simulation.There are often tools available which can convert between the different formats.There are multiple ways of creating a disk image which can be used with gem5.Following are four different methods to build disk images:  Using gem5 utils to create a disk image  Using gem5 utils and chroot to create a disk image  Using QEMU to create a disk image  Using Packer to create a disk imageAll of these methods are independent of each other.Next, we will discuss each of these methods one by one.1) Using gem5 utils to create a disk imageDisclaimer: This is from the old website and some of the stuff in this method can be out-dated.Because a disk image represents all the bytes on the disk itself, it contains more than just a file system.For hard drives on most systems, the image starts with a partition table.Each of the partitions in the table (frequently only one) is also in the image.If you want to manipulate the entire disk you’ll use the entire image, but if you want to work with just one partition and/or the file system on it, you’ll need to specifically select that part of the image.The losetup command (discussed below) has a -o option which lets you specify where to start in an image.A youtube video of working with image files using qemu on Ubuntu 12.04 64bit. 
Video resolution can be set to 1080Creating an empty imageYou can use the ./util/gem5img.py script provided with gem5 to build the disk image.It’s a good idea to understand how to build an image in case something goes wrong or you need to do something in an unusual way.However, in this method, we are using the gem5img.py script to go through the process of building and formatting an image.If you want to understand the guts of what it’s doing see below.Running gem5img.py may require you to enter the sudo password.You should never run commands as the root user that you don’t understand! You should look at the file util/gem5img.py and ensure that it isn’t going to do anything malicious to your computer!You can use the “init” option with gem5img.py to create an empty image, “new”, “partition”, or “format” to perform those parts of init independently, and “mount” or “umount” to mount or unmount an existing image.Mounting an imageTo mount a file system on your image file, first find a loopback device and attach it to your image with an appropriate offset as will be described further in the Formatting section.mount -o loop,offset=32256 foo.imgA youtube video of adding a file using mount on Ubuntu 12.04 64bit. 
Video resolution can be set to 1080UnmountingTo unmount an image, use the umount command like you normally would.umountImage ContentsNow that you can create an image file and mount its file system, you’ll want to actually put some files in it.You’re free to use whatever files you want, but the gem5 developers have found that Gentoo stage3 tarballs are a great starting point.They’re essentially an almost bootable and fairly minimal Linux installation and are available for a number of architectures.If you choose to use a Gentoo tarball, first extract it into your mounted image.The /etc/fstab file will have placeholder entries for the root, boot, and swap devices.You’ll want to update this file as appropriate, deleting any entries you aren’t going to use (the boot partition, for instance).Next, you’ll want to modify the inittab file so that it uses the m5 utility program (described elsewhere) to read in the init script provided by the host machine and to run that.If you allow the normal init scripts to run, the workload you’re interested in may take much longer to get started; you’ll have no way to inject your own init script to dynamically control what benchmarks are started, for instance; and you’ll have to interact with the simulation through a simulated terminal which introduces non-determinism.ModificationsBy default gem5 does not store modifications to the disk back to the underlying image file.Any changes you make will be stored in an intermediate COW layer and thrown away at the end of the simulation.You can turn off the COW layer if you want to modify the underlying disk.Kernel and bootloaderAlso, generally speaking, gem5 skips over the bootloader portion of boot and loads the kernel into simulated memory itself. 
This means that there’s no need to install a bootloader like grub to your disk image, and that you don’t have to put the kernel you’re going to boot from on the image either.The kernel is provided separately and can be changed out easily without having to modify the disk image.Manipulating images with loopback devicesLoopback devicesLinux supports loopback devices which are devices backed by files.By attaching one of these to your disk image, you can use standard Linux commands on it which normally run on real disk devices.You can use the mount command with the “loop” option to set up a loopback device and mount it somewhere.Unfortunately you can’t specify an offset into the image, so that would only be useful for a file system image, not a disk image which is what you need.You can, however, use the lower level losetup command to set up a loopback device yourself and supply the proper offset.Once you’ve done that, you can use the mount command on it like you would on a disk partition, format it, etc.If you don’t supply an offset the loopback device will refer to the whole image, and you can use your favorite program to set up the partitions on it.Working with image filesTo create an empty image from scratch, you’ll need to create the file itself, partition it, and format (one of) the partition(s) with a file system.Create the actual fileFirst, decide how large you want your image to be.It’s a good idea to make it large enough to hold everything you know you’ll need on it, plus some breathing room.If you find out later it’s too small, you’ll have to create a new larger image and move everything over.If you make it too big, you’ll take up actual disk space unnecessarily and make the image harder to work with.Once you’ve decided on a size you’ll want to actually create the file.Basically, all you need to do is create a file of a certain size that’s full of zeros.One approach is to use the dd command to copy the right number of bytes from /dev/zero into the new 
file.Alternatively you could create the file, seek in it to the last byte, and write one zero byte.All of the space you skipped over will become part of the file and is defined to read as zeroes, but because you didn’t explicitly write any data there, most file systems are smart enough to not actually store that to disk.You can create a large image that way but take up very little space on your physical disk.Once you start writing to the file later that will change, and also if you’re not careful, copying the file may expand it to its full size.PartitioningFirst, find an available loopback device using the losetup command with the -f option.losetup -fNext, use losetup to attach that device to your image.If the available device was /dev/loop0 and your image is foo.img, you would use a command like this.losetup /dev/loop0 foo.img/dev/loop0 (or whatever other device you’re using) will now refer to your entire image file.Use whatever partitioning program you like on it to set up one (or more) partitions.For simplicity it’s probably a good idea to create only one partition that takes up the entire image.We say it takes up the entire image, but really it takes up all the space except for the partition table itself at the beginning of the file, and possibly some wasted space after that for DOS/bootloader compatibility.From now on we’ll want to work with the new partition we created and not the whole disk, so we’ll free up the loopback device using losetup’s -d optionlosetup -d /dev/loop0FormattingFirst, find an available loopback device like we did in the partitioning step above using losetup’s -f option.losetup -fWe’ll attach our image to that device again, but this time we only want to refer to the partition we’re going to put a file system on.For PC and Alpha systems, that partition will typically be one track in, where one track is 63 sectors and each sector is 512 bytes, or 63 * 512 = 32256 bytes.The correct value for you may be different, depending on the geometry and 
layout of your image. In any case, you should set up the loopback device with the -o option so that it represents the partition you’re interested in: losetup -o 32256 /dev/loop0 foo.img. Next, use an appropriate formatting command, often mke2fs, to put a file system on the partition: mke2fs /dev/loop0. You’ve now successfully created an empty image file. You can leave the loopback device attached to it if you intend to keep working with it (likely, since it’s still empty), or don’t forget to clean it up with the losetup -d command: losetup -d /dev/loop0. 2) Using gem5 utils and chroot to create a disk image: The discussion in this section assumes that you have already checked out a version of gem5 and can build and run gem5 in full-system mode. We will use the x86 ISA for gem5 in this discussion, but it is mostly applicable to other ISAs as well. Creating a blank disk image: The first step is to create a blank disk image (usually a .img file). This is similar to what we did in the first method. We can use the gem5img.py script provided by the gem5 developers. To create a blank disk image, which is formatted with ext2 by default, simply run the following: &gt; util/gem5img.py init ubuntu-14.04.img 4096. This command creates a new image called “ubuntu-14.04.img” that is 4096 MB. It may require you to enter the sudo password if you don’t have permission to create loopback devices. You should never run commands as the root user that you don’t understand! Look at the file util/gem5img.py and ensure that it isn’t going to do anything malicious to your computer! We will be using util/gem5img.py heavily throughout this section, so you may want to understand it better. If you just run util/gem5img.py, it displays all of the possible commands. Usage: %s [command] &lt;command arguments&gt; where [command] is one of    init: Create an image with an empty file system.    
mount: Mount the first partition in the disk image.    umount: Unmount the first partition in the disk image.    new: File creation part of \"init\".    partition: Partition part of \"init\".    format: Formatting part of \"init\". Watch for orphaned loopback devices and delete them with losetup -d. Mounted images will belong to root, so you may need to use sudo to modify their contents. Copying root files to the disk: Now that we have created a blank disk, we need to populate it with all of the OS files. Ubuntu distributes a set of files explicitly for this purpose. You can find the Ubuntu core distribution for 14.04 at http://cdimage.ubuntu.com/releases/14.04/release/. Since we are simulating an x86 machine, we will use ubuntu-core-14.04-core-amd64.tar.gz. Download whatever image is appropriate for the system you are simulating. Next, we need to mount the blank disk and copy all of the files onto it: mkdir mnt; ../../util/gem5img.py mount ubuntu-14.04.img mnt; wget http://cdimage.ubuntu.com/ubuntu-core/releases/14.04/release/ubuntu-core-14.04-core-amd64.tar.gz; sudo tar xzvf ubuntu-core-14.04-core-amd64.tar.gz -C mnt. The next step is to copy a few required files from your working system onto the disk so we can chroot into the new disk. We need to copy /etc/resolv.conf onto the new disk: sudo cp /etc/resolv.conf mnt/etc/. Setting up gem5-specific files. Create a serial terminal: By default, gem5 uses the serial port to allow communication from the host system to the simulated system. 
To use this, we need to create a serial tty. Since Ubuntu uses upstart to control the init process, we need to add a file to /etc/init which will initialize our terminal. In this file we will also add some code to detect whether a script was passed to the simulated system; if there is a script, we will execute it instead of creating a terminal. Put the following code into a file called /etc/init/tty-gem5.conf: # ttyS0 - getty # This service maintains a getty on ttyS0 from the point the system is started until it is shut down again, unless there is a script passed to gem5. If there is a script, the script is executed, then simulation is stopped. start on stopped rc RUNLEVEL=[12345]; stop on runlevel [!12345]; console owner; respawn; script   # Create the serial tty if it doesn't already exist   if [ ! -c /dev/ttyS0 ]   then      mknod -m 660 /dev/ttyS0 c 4 64   fi   # Try to read in the script from the host system   /sbin/m5 readfile &gt; /tmp/script   chmod 755 /tmp/script   if [ -s /tmp/script ]   then      # If there is a script, execute the script and then exit the simulation      exec su root -c '/tmp/script' # gives script full privileges as root user in multi-user mode      /sbin/m5 exit   else      # If there is no script, log in the root user and drop to a console      # Use m5term to connect to this console      exec /sbin/getty --autologin root -8 38400 ttyS0   fi end script. Set up localhost: We also need to set up the localhost loopback device if we are going to use any applications that rely on it. To do this, add the following to the /etc/hosts file: 127.0.0.1 localhost; ::1 localhost ip6-localhost ip6-loopback; fe00::0 ip6-localnet; ff00::0 ip6-mcastprefix; ff02::1 ip6-allnodes; ff02::2 ip6-allrouters; ff02::3 ip6-allhosts. Update fstab: Next, we need to create an entry in /etc/fstab for each partition we want to be able to access from the simulated system. 
Only one partition is absolutely required (/); however, you may want to add additional partitions, like a swap partition. The following should appear in the file /etc/fstab: # /etc/fstab: static file system information. # Use 'blkid' to print the universally unique identifier for a device; this may be used with UUID= as a more robust way to name devices that works even if disks are added and removed. See fstab(5). # &lt;file system&gt;    &lt;mount point&gt;   &lt;type&gt;  &lt;options&gt;   &lt;dump&gt;  &lt;pass&gt; /dev/hda1      /       ext3        noatime     0 1. Copy the m5 binary to the disk: gem5 comes with an extra binary application that executes pseudo-instructions to allow the simulated system to interact with the host system. To build this binary, run make -f Makefile.&lt;isa&gt; in the gem5/util/m5 directory, where &lt;isa&gt; is the ISA that you are simulating (e.g., x86). After this, you should have an m5 binary file. Copy this file to /sbin on your newly created disk. After updating the disk with all of the gem5-specific files, unless you are going on to add more applications or copy additional files, unmount the disk image: &gt; util/gem5img.py umount mnt. Install new applications: The easiest way to install new applications onto your disk is to use chroot. This program logically changes the root directory (“/”) to a different directory, mnt in this case. Before you can change the root, you first have to set up the special directories in your new root. 
To do this, we use mount -o bind: &gt; sudo /bin/mount -o bind /sys mnt/sys &gt; sudo /bin/mount -o bind /dev mnt/dev &gt; sudo /bin/mount -o bind /proc mnt/proc. After binding those directories, you can now chroot: &gt; sudo /usr/sbin/chroot mnt /bin/bash. At this point you will see a root prompt and you will be in the / directory of your new disk. You should update your repository information: &gt; apt-get update. You may want to add the universe repositories to your list with the following commands. Note: the first command is required in 14.04. &gt; apt-get install software-properties-common &gt; add-apt-repository universe &gt; apt-get update. Now you are able to install any applications you could install on a native Ubuntu machine via apt-get. Remember, after you exit you need to unmount all of the directories we used bind on: &gt; sudo /bin/umount mnt/sys &gt; sudo /bin/umount mnt/proc &gt; sudo /bin/umount mnt/dev. 3) Using QEMU to create a disk image: This method is a follow-up to the previous method of creating a disk image. We will see how to create, edit, and set up a disk image using qemu instead of relying on the gem5 tools. This section assumes that you have installed qemu on your system. In Ubuntu, this can be done with: sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils. Step 1: Create an empty disk. Using the qemu disk tools, create a blank raw disk image. In this case, I chose to create a disk named “ubuntu-test.img” that is 8GB: qemu-img create ubuntu-test.img 8G. Step 2: Install Ubuntu with qemu. Now that we have a blank disk, we are going to use qemu to install Ubuntu on it. We encourage you to use the server version of Ubuntu, since gem5 does not have great support for displays; the desktop environment isn’t very useful. First, you need to download the installation CD image from the Ubuntu website. Next, use qemu to boot off of the CD image, and set the disk in the system to be the blank disk you created above. Ubuntu needs at least 1GB of memory to install 
correctly, so be sure to configure qemu to use at least 1GB of memory: qemu-system-x86_64 -hda ../gem5-fs-testing/ubuntu-test.img -cdrom ubuntu-16.04.1-server-amd64.iso -m 1024 -enable-kvm -boot d. With this, you can simply follow the on-screen directions to install Ubuntu to the disk image. The only gotcha in the installation is that gem5’s IDE drivers don’t seem to play nicely with logical partitions. Thus, during the Ubuntu install, be sure to manually partition the disk and remove any logical partitions. You don’t need any swap space on the disk anyway, unless you’re doing something specifically with swap space. Step 3: Boot up and install needed software. Once you have installed Ubuntu on the disk, quit qemu and remove the -boot d option so that you are no longer booting off of the CD. Now you can again boot off of the main disk image you have installed Ubuntu on. Since we’re using qemu, you should have a network connection (although ping won’t work). When booting in qemu, you can just use sudo apt-get install to install any software you need on your disk: qemu-system-x86_64 -hda ../gem5-fs-testing/ubuntu-test.img -cdrom ubuntu-16.04.1-server-amd64.iso -m 1024 -enable-kvm. Step 4: Update init script. By default, gem5 expects a modified init script which loads a script off of the host to execute in the guest. To use this feature, you need to follow the steps below. Alternatively, you can install the precompiled binaries for x86 found on this website. From qemu, you can run the following, which completes the above steps for you: wget http://cs.wisc.edu/~powerjg/files/gem5-guest-tools-x86.tgz; tar xzvf gem5-guest-tools-x86.tgz; cd gem5-guest-tools/; sudo ./install. Now you can use the system.readfile parameter in your Python config scripts. This file will automatically be loaded (by the gem5init script) and executed. Manually installing the gem5 init script: First, build the m5 binary on the host: cd util/m5; make -f Makefile.x86. Then, copy this binary to the guest and put it in /sbin. 
Also, create a link from /sbin/gem5. Then, to get the init script to execute when gem5 boots, create the file /lib/systemd/system/gem5.service with the following: [Unit] Description=gem5 init script Documentation=http://gem5.org After=getty.target [Service] Type=idle ExecStart=/sbin/gem5init StandardOutput=tty StandardInput=tty-force StandardError=tty [Install] WantedBy=default.target. Enable the gem5 service and disable the ttyS0 service: systemctl enable gem5.service. Finally, create the init script that is executed by the service, in /sbin/gem5init: #!/bin/bash - CPU=`cat /proc/cpuinfo | grep vendor_id | head -n 1 | cut -d ' ' -f2-` echo \"Got CPU type: $CPU\" if [ \"$CPU\" != \"M5 Simulator\" ]; then    echo \"Not in gem5. Not loading script\"    exit 0 fi # Try to read in the script from the host system /sbin/m5 readfile &gt; /tmp/script chmod 755 /tmp/script if [ -s /tmp/script ] then    # If there is a script, execute the script and then exit the simulation    su root -c '/tmp/script' # gives script full privileges as root user in multi-user mode    sync    sleep 10    /sbin/m5 exit fi echo \"No script found\". Problems and (some) solutions: You might run into some problems while following this method. Some of the issues and solutions are discussed on this page. 4) Using Packer to create a disk image: This section discusses an automated way of creating gem5-compatible disk images with Ubuntu server installed. We use packer to do this, which relies on a .json template file to build and configure a disk image. The template file can be configured to build a disk image with specific benchmarks installed. The mentioned template file can be found here. Building a Simple Disk Image with Packer. a. 
How It Works, Briefly: We use Packer and QEMU to automate the process of disk creation. Essentially, QEMU is responsible for setting up a virtual machine and all interactions with the disk image during the building process. The interactions include installing Ubuntu Server on the disk image, copying files from your machine to the disk image, and running scripts on the disk image after Ubuntu is installed. However, we will not use QEMU directly. Packer provides a simpler way to interact with QEMU using a JSON script, which is more expressive than driving QEMU from the command line. b. Install Required Software/Dependencies: If not already installed, QEMU can be installed using: sudo apt-get install qemu. Download the Packer binary from the official website. c. Customize the Packer Script: The default packer script template.json should be modified and adapted according to the required disk image and the available resources for the build process. We will rename the default template to [disk-name].json. The variables that should be modified appear at the end of the [disk-name].json file, in the variables section. The configuration files that are used to build the disk image, and the directory structure, are shown below: disk-image/    [disk-name].json: packer script;    any experiment-specific post-installation script;    post-installation.sh: generic shell script that is executed after Ubuntu is installed;    preseed.cfg: preseeded configuration to install Ubuntu. i. Customizing the VM (Virtual Machine): In [disk-name].json, the following variables are available to customize the VM: vm_cpus (should be modified): number of host CPUs used by the VM (e.g. “2”: 2 CPUs are used by the VM); vm_memory (should be modified): amount of VM memory, in MB (e.g. “2048”: 2 GB of RAM are used by the VM); vm_accelerator (should be modified): accelerator used by the VM (e.g. “kvm”: KVM will be used). ii. 
Customizing the Disk Image: In [disk-name].json, the disk image size can be customized using the following variables: image_size (should be modified): size of the disk image, in megabytes (e.g. “8192”: the image has a size of 8 GB); [image_name]: name of the built disk image (e.g. “boot-exit”). iii. File Transfer: While building a disk image, users may need to move their files (benchmarks, data sets, etc.) to the disk image. To do this file transfer, in [disk-name].json under provisioners, you could add the following: {    \"type\": \"file\",    \"source\": \"post_installation.sh\",    \"destination\": \"/home/gem5/\",    \"direction\": \"upload\" }. The above example copies the file post_installation.sh from the host to /home/gem5/ in the disk image. This method is also capable of copying a folder from the host to the disk image and vice versa. It is important to note that a trailing slash affects the copying process (more details). The following are some notable examples of the effect of using a slash at the end of the paths: source foo.txt, destination /home/gem5/bar.txt, direction upload: copy file (host) to file (image); source foo.txt, destination bar/, direction upload: copy file (host) to folder (image); source /foo, destination /tmp, direction upload: mkdir /tmp/foo (image), then cp -r /foo/* (host) /tmp/foo/ (image); source /foo/, destination /tmp, direction upload: cp -r /foo/* (host) /tmp/ (image). If direction is download, the files will be copied from the image to the host. Note: This is a way to run a script once after installing Ubuntu without copying it to the disk image. iv. 
Install Benchmark Dependencies: To install the dependencies, you can use a bash script post_installation.sh, which will be run after the Ubuntu installation and file copying are done. For example, if we want to install gfortran, add the following to post_installation.sh: echo '12345' | sudo -S apt-get install gfortran;. In the above example, we assume that the user password is 12345. This is essentially a bash script that is executed on the VM after the file copying is done; you could modify it to fit any purpose. v. Running Other Scripts on the Disk Image: In [disk-name].json, we could add more scripts to provisioners. Note that the files are on the host, but the effects are on the disk image. For example, the following runs post_installation.sh after Ubuntu is installed: {    \"type\": \"shell\",    \"execute_command\": \"echo '{{ user `ssh_password` }}' | {{.Vars}} sudo -E -S bash '{{.Path}}'\",    \"scripts\":    [        \"post-installation.sh\"    ] }. d. Build the Disk Image. i. Build: To build a disk image, the template file is first validated using: ./packer validate [disk-name].json. Then, the template file can be used to build the disk image: ./packer build [disk-name].json. On a fairly recent machine, the building process should take no more than 15 minutes to complete. The disk image with the user-defined name (image_name) will be produced in a folder called [image_name]-image. We recommend using a VNC viewer to inspect the building process. ii. Inspect the Building Process: While the building of the disk image takes place, Packer will run a VNC (Virtual Network Computing) server, and you will be able to watch the building process by connecting to the VNC server from a VNC client. There are plenty of choices for a VNC client. When you run the Packer script, it will tell you which port is used by the VNC server. 
For example, if it says qemu: Connecting to VM via VNC (127.0.0.1:5932), the VNC port is 5932. To connect to the VNC server from the VNC client, use the address 127.0.0.1:5932 for port number 5932. If you need to forward the VNC port from a remote machine to your local machine, use SSH tunneling: ssh -L 5932:127.0.0.1:5932 &lt;username&gt;@&lt;host&gt;. This command forwards port 5932 from the host machine to your machine, and then you will be able to connect to the VNC server using the address 127.0.0.1:5932 from your VNC viewer. Note: While Packer is installing Ubuntu, the terminal screen will display “waiting for SSH” without any update for a long time. This is not an indicator of whether the Ubuntu installation produced any errors. Therefore, we strongly recommend using a VNC viewer at least once to inspect the image building process.",
        "url": "/documentation/general_docs/fullsystem/disks"
      }
      ,
    
      "documentation-general-docs-fullsystem-guest-binaries": {
        "title": "Guest Binaries",
        "content": "  Manual Download  Google Cloud Utilities (gsutil)We provide a set of useful prebuilt binaries users can download (in case they don’t want torecompile them from scratch).There are two ways of downloading them:  Via Manual Download  Via Google Cloud UtilitiesManual DownloadHere follows a list of prebuilt binaries to be downloaded by just clicking the link:Arm FS Binaries (Kernel/Disk images)Latest Linux Kernel/Disk Image (recommended)  http://dist.gem5.org/dist/current/arm/aarch-system-201901106.tar.bz2Old Linux Kernel/Disk ImageThese images are not supported. If you run into problems, we will do our best to help, but there is no guarantee these will work with the latest gem5 version  http://dist.gem5.org/dist/current/arm/aarch-system-20170616.tar.xz  http://dist.gem5.org/dist/current/arm/aarch-system-20180409.tar.xz  http://dist.gem5.org/dist/current/arm/arm-system-dacapo-2011-08.tgz  http://dist.gem5.org/dist/current/arm/arm-system.tar.bz2  http://dist.gem5.org/dist/current/arm/arm64-system-02-2014.tgz  http://dist.gem5.org/dist/current/arm/kitkat-overlay.tar.bz2  http://dist.gem5.org/dist/current/arm/linux-arm-arch.tar.bz2  http://dist.gem5.org/dist/current/arm/vmlinux-emm-pcie-3.3.tar.bz2  http://dist.gem5.org/dist/current/arm/vmlinux.arm.smp.fb.3.2.tar.gzGoogle Cloud Utilities (gsutil)gsutil is a Python application that lets you access Cloud Storage from the command line.Please have a look at the following documentation which will guide you through the processof installing the utility  gsutil toolOnce installed (NOTE: It require you to provide a valid google account) it will be possible to inspect/download gem5 binaries via the following command line.gsutil cp -r gs://dist.gem5.org/dist/&lt;binary&gt;",
        "url": "/documentation/general_docs/fullsystem/guest_binaries"
      }
      ,
    
      "documentation-general-docs-fullsystem": {
        "title": "Full system support",
        "content": "",
        "url": "/documentation/general_docs/fullsystem/"
      }
      ,
    
      "documentation-general-docs-fullsystem-m5term": {
        "title": "m5 term",
        "content": "m5 termThe m5term program allows the user to connect to the simulated console interface that full-system gem5 provides. Simply change into the util/term directory and build m5term:% cd gem5/util/term% makegcc  -o m5term term.c% make installsudo install -o root -m 555 m5term /usr/local/binThe usage of m5term is:./m5term &lt;host&gt; &lt;port&gt;&lt;host&gt; is the host that is running gem5&lt;port&gt; is the console port to connect to. gem5 defaults tousing port 3456, but if the port is used, it will try the nexthigher port until it finds one available.If there are multiple systems running within one simulation,there will be a console for each one.  (The first system'sconsole will be on 3456 and the second on 3457 for example)m5term uses '~' as an escape character.  If you enterthe escape character followed by a '.', the m5term programwill exit.m5term can be used to interactively work with the simulator, though users must often set various terminal settings to get things to workA slightly shortened example of m5term in action:% m5term localhost 3456==== m5 slave console: Console 0 ====M5 consoleGot Configuration 127memsize 8000000 pages 4000First free page after ROM 0xFFFFFC0000018000HWRPB 0xFFFFFC0000018000 l1pt 0xFFFFFC0000040000 l2pt 0xFFFFFC0000042000 l3pt_rpb 0xFFFFFC0000044000 l3pt_kernel 0xFFFFFC0000048000 l2reserv 0xFFFFFC0000046000CPU Clock at 2000 MHz IntrClockFrequency=1024Booting with 1 processor(s)......VFS: Mounted root (ext2 filesystem) readonly.Freeing unused kernel memory: 480k freedinit started:  BusyBox v1.00-rc2 (2004.11.18-16:22+0000) multi-call binaryPTXdist-0.7.0 (2004-11-18T11:23:40-0500)mounting filesystems...EXT2-fs warning: checktime reached, running e2fsck is recommendedloading script...Script from M5 readfile is empty, starting bash shell...# lsbenchmarks  etc         lib         mnt         sbin        usrbin         floppy      lost+found  modules     sys         vardev         home        man         proc        
tmp         z#",
        "url": "/documentation/general_docs/fullsystem/m5term"
      }
      ,
    
      "documentation-general-docs-m5ops": {
        "title": "M5ops",
        "content": "M5opsThis page explains the special opcodes that can be used in M5 to do checkpoints etc. The m5 utility program (on our disk image and in util/m5/*) provides some of this functionality on the command line. In many cases it is best to insert the operation directly in the source code of your application of interest. You should be able to link with the appropriate m5op_ARCH.o file and the m5op.h header file has prototypes for all the functions.The m5 Utility (FS mode)he m5 utility (see util/m5/) can be used in FS mode to issue special instructions to trigger simulation specific functionality. It currently offers the following options:  ivlb: Deprecated, present only for old binary compatibility  ivle: Deprecated, present only for old binary compatibility  initparam: Deprecated, present only for old binary compatibility  sw99param: Deprecated, present only for old binary compatibility  exit [delay]: Stop the simulation in delay nanoseconds.  resetstats [delay [period]]: Reset simulation statistics in delay nanoseconds; repeat this every period nanoseconds.  dumpstats [delay [period]]: Save simulation statistics to a file in delay nanoseconds; repeat this every period nanoseconds.  dumpresetstats [delay [period]]: same as dumpstats; resetstats  checkpoint [delay [period]]: Create a checkpoint in delay nanoseconds; repeat this every period nanoseconds.  readfile: Print the file specified by the config parameter system.readfile. This is how the the rcS files are copied into the simulation environment.  debugbreak: Call debug_break() in the simulator (causes simulator to get SIGTRAP signal, useful if debugging with GDB).  switchcpu: Cause an exit event of type, “switch cpu,” allowing the Python to switch to a different CPU model if desired.Other M5 opsThese are other M5 ops that aren’t useful in command line form.  
quiesce: De-schedule the CPU's tick() call until some asynchronous event wakes it (an interrupt).  quiesceNS: Same as above, but automatically wakes after a number of nanoseconds if it’s not woken up prior.  quiesceCycles: Same as above, but with CPU cycles instead of nanoseconds.  quiesceTime: The amount of time the CPU was quiesced for.  addsymbol: Add a symbol to the simulator's symbol table, for example when a kernel module is loaded. Using gem5 ops in Java code: These ops can also be called from within Java programs, like the following: import jni.gem5Op;public  class HelloWorld {   public static void main(String[] args) {       gem5Op gem5 = new gem5Op();       System.out.println(\"Rpns0:\" + gem5.rpns());       System.out.println(\"Rpns1:\" + gem5.rpns());   }   static {       System.loadLibrary(\"gem5OpJni\");   }}. When building, you need to make sure the classpath includes gem5OpJni.jar: javac -classpath $CLASSPATH:/path/to/gem5OpJni.jar HelloWorld.java, and when running you need to make sure both the java and library paths are set: java -classpath $CLASSPATH:/path/to/gem5OpJni.jar -Djava.library.path=/path/to/libgem5OpJni.so HelloWorld. Using gem5 ops with Fortran code: gem5’s special opcodes (pseudo instructions) can be used with Fortran programs. In the Fortran code, one can add calls to C functions that invoke the special opcode. While creating the final binary, compile the object files for the Fortran program and the C program (for opcodes) together. I found the documentation provided here useful. Read the section -- Compiling a mixed C-Fortran program.",
        "url": "/documentation/general_docs/m5ops/"
      }
      ,
    
      "documentation-general-docs-memory-system-classic-coherence-protocol": {
        "title": "Classic memory system coherence",
        "content": "Classic Memory System coherenceM5 2.0b4 introduced a substantially rewritten and streamlined cachemodel, including a new coherence protocol. (The old pre-2.0 cache modelhad been patched up to work with the new MemorySystem introduced in 2.0beta, but notrewritten to take advantage of the new memory system’s features.)The key feature of the new coherence protocol is that it is designed towork with more-or-less arbitrary cache hierarchies (multiple caches eachon multiple levels). In contrast, the old protocol restricted sharing toa single bus.In the real world, a system architecture will have limits on the numberor configuration of caches that the protocol can be designed toaccommodate. It’s not practical to design a protocol that’s fullyrealistic and yet efficient for arbitrary configurations. In order toenable our protocol to work on (nearly) arbitrary configurations, wecurrently sacrifice a little bit of realism and a little bit ofconfigurability. Our intent is that this protocol is adequate forresearchers studying aspects of system behavior other than coherencemechanisms. Researchers studying coherence specifically will probablywant to replace the default coherence mechanism with implementations ofthe specific protocols under investigation.The protocol is a MOESI snooping protocol. Inclusion is notenforced; in a CMP configuration where you have several L1s whose totalcapacity is a significant fraction of the capacity of the common L2 theyshare, inclusion can be very inefficient.Requests from upper-level caches (those closer to the CPUs) propagatetoward memory in the expected fashion: an L1 miss is broadcast on thelocal L1/L2 bus, where it is snooped by the other L1s on that bus and(if none respond) serviced by the L2. 
If the request misses in the L2, then after some delay (currently set equal to the L2 hit latency), the L2 will issue the request on its memory-side bus, where it will possibly be snooped by other L2s and then be issued to an L3 or memory. Unfortunately, propagating snoop requests incrementally back up the hierarchy in a similar fashion is a source of myriad nearly intractable race conditions. Real systems don’t typically do this anyway; in general you want a single snoop operation at the L2 bus to tell you the state of the block in the whole L1/L2 hierarchy. There are a handful of methods for this:  just snoop the L2, but enforce inclusion so that the L2 has all the info you need about the L1s as well—an idea we’ve already rejected above;  keep an extra set of tags for all the L1s at the L2 so those can be snooped at the same time (see the Compaq Piranha)—reasonable, if your hierarchy’s not too deep, but now you’ve got to size the tags in the lower-level caches based on the number, size, and configuration of the upper-level caches, which is a configuration pain;  snoop the L1s in parallel with the L2, something that’s not hard if they’re all on the same die (I believe Intel started doing this with the Pentium Pro; not sure if they still do with the Core2 chips or not, or if AMD does this as well, but I suspect so)—also reasonable, but adding explicit paths for these snoops would make for a very cumbersome configuration process. We solve this dilemma by introducing “express snoops”, which are special snoop requests that get propagated up the hierarchy instantaneously and atomically (much like the atomic-mode accesses described on the Memory System page), even when the system is running in timing mode. Functionally this behaves very much like options 2 or 3 above, but because the snoops propagate along the regular bus interconnects, there’s no additional configuration overhead. 
There issome timing inaccuracy introduced, but if we assume that there arededicated paths in the real hardware for these snoops (or formaintaining the additional copies of the upper-level tags at thelower-level caches) then the differences are probably minor.(More to come: how does a cache know when its request is completed? andother fascinating questions…)Note: there are still some bugs in this protocol as of 2.0b4,particularly if you have multiple L2s each with multiple L1s behind it,but I believe it works for any configuration that worked in 2.0b3.",
        "url": "/documentation/general_docs/memory_system/classic-coherence-protocol/"
      }
      ,
    
      "documentation-general-docs-memory-system-classic-caches": {
        "title": "Classic caches",
        "content": "Classic CachesThe default cache is a non-blocking cache with MSHR (miss status holdingregister) and WB (Write Buffer) for read and write misses. The Cache canalso be enabled with prefetch (typically in the last level of cache).There are multiple possible replacement policies and indexingpolicies implemented in gem5. These define, respectively, the possibleblocks that can be used for a block replacement given an address, andhow to use the address information to find a block's location. Bydefault the cache lines are replaced using LRU (least recently used),and indexed with the Set Associative policy.InterconnectsCrossbarsThe two types of traffic in the crossbar are memory-mapped packets andsnooping packets. The memory-mapped requests go down the memoryhierarchy, and responses go up the memory hierarchy (same route back).The snooping requests go horizontally and up the cache hierarchy,snooping responses go horizontally and down the hierarchy (same routeback). Normal snoops go horizontally and express snoops go up the cachehierarchy.BridgesOthers…DebuggingThere is a feature in the classic memory system for displaying the coherence state of a particular block from within the debugger (e.g., gdb). This feature is built on the classic memory system’s support for functional accesses. (Note that this feature is currently rarely used and may have bugs.)If you inject a functional request with the command set to PrintReq, the packet traverses the memory system (like a regular functional request) but on any object that matches (other queued packet, cache block, etc.) it simply prints out some information about that object.There’s a helper method on Port called printAddr() that takes an address and builds an appropriate PrintReq packet and injects it. Since it propagates using the same mechanism as a normal functional request, it needs to be injected from a port where it will propagate through the whole memory system, such as at a CPU. 
There are helper printAddr() methods on MemTest, AtomicSimpleCPU, and TimingSimpleCPU objects that simply call printAddr() on their respective cache ports. (Caveat: the latter two are untested.)Putting it all together, you can do this:(gdb) set print object(gdb) call SimObject::find(\" system.physmem.cache0.cache0.cpu\")$4 = (MemTest *) 0xf1ac60(gdb) p (MemTest*)$4$5 = (MemTest *) 0xf1ac60(gdb) call $5-&gt;printAddr(0x107f40)system.physmem.cache0.cache0  MSHRs    [107f40:107f7f] Fill   state:      Targets:        cpu: [107f40:107f40] ReadReqsystem.physmem.cache1.cache1  blk VEMsystem.physmem  0xd0… which says that cache0.cache0 has an MSHR allocated for that address to serve a target ReadReq from the CPU, but it’s not in service yet (else it would be marked as such); the block is valid, exclusive, and modified in cache1.cache1, and the byte has a value of 0xd0 in physical memory.Obviously it’s not necessarily all the info you’d want, but it’s pretty useful. Feel free to extend. There’s also a verbosity parameter that’s currently not used that could be exploited to have different levels of output.Note that the extra “p (MemTest*)$4” is needed since although “set print object” displays the derived type, internally gdb still considers the pointer to be of the base type, so if you try and call printAddr directly on the $4 pointer you get this:(gdb) call $4-&gt;printAddr(0x400000)Couldn't find method SimObject::printAddr",
        "url": "/documentation/general_docs/memory_system/classic_caches/"
      }
      ,
    
      "documentation-general-docs-memory-system-gem5-memory-system": {
        "title": "gem5_memory_syste",
        "content": "The gem5 Memory SystemThe document describes memory subsystem in gem5 with focus on program flowduring CPU’s simple memory transactions (read or write).Model HierarchyModel that is used in this document consists of two out-of-order (O3) ARM v7CPUs with corresponding L1 data caches and Simple Memory. It is created byrunning gem5 with the following parameters:configs/example/fs.py –-caches –-cpu-type=arm_detailed –-num-cpus=2Gem5 uses Simulation Objects derived objects as basic blocks for buildingmemory system. They are connected via ports with established master/slavehierarchy. Data flow is initiated on master port while the response messagesand snoop queries appear on the slave port.CPUData Cache objectimplements a standard cache structure:It is not in the scope of this document to describe O3 CPU model in details, sohere are only a few relevant notes about the model:Read access is initiated by sending message to the port towards DCacheobject. If DCache rejects the message (for being blocked or busy) CPU willflush the pipeline and the access will be re-attempted later on. The access iscompleted upon receiving reply message (ReadRep) from DCache.Write access is initiated by storing the request into store buffer whosecontext is emptied and sent to DCache on every tick. DCache may also reject therequest. Write access is completed when write reply (WriteRep) message isreceived from DCache.Load &amp; store buffers (for read and write access) don’t impose any restrictionon the number of active memory accesses. Therefore, the maximum number ofoutstanding CPU’s memory access requests is not limited by CPU SimulationObject but by underlying memory system model.Split memory access is implemented.The message that is sent by CPU contains memory type (Normal, Device, StronglyOrdered and cachebility) of the accessed region. 
However, this is not beingused by the rest of the model that takes more simplified approach towardsmemory types.Data Cache ObjectData Cache objectimplements a standard cache structure:Cached memory reads that match particular cache tag (with Valid &amp; Readflags) will be completed (by sending ReadResp to CPU) after a configurabletime. Otherwise, the request is forwarded to Miss Status and Handling Register(MSHR) block.Cached memory writes that match particular cache tag (with Valid, Read &amp;Write flags) will be completed (by sending WriteResp CPU) after the sameconfigurable time. Otherwise, the request is forwarded to Miss Status andHandling Register(MSHR) block.Uncached memory reads are forwarded to MSHR block.Uncached memory writes are forwarded to WriteBuffer block.Evicted (&amp; dirty) cache lines are forwarded to WriteBuffer block.CPU’s access to Data Cache is blocked if any of thefollowing is true:  MSHR block is full.(The size of MSHR’s buffer is configurable.)  Writeback block is full. (The size of the block’s buffer is configurable.)  The number of outstanding memory accesses against the same memory cache linehas reached configurable threshold value – see MSHR and Write Buffer fordetails.Data Cache in blockstate will reject any request from slave port (from CPU) regardless of whetherit would result in cache hit or miss. Note that incoming messages on masterport (response messages and snoop requests) are never rejected.Cache hit on uncachablememory region (unpredicted behaviour according to ARM ARM) will invalidatecache line and fetch data from memory.Tags &amp; Data BlockCache lines (referred asblocks in source code) are organised into sets with configurable associativityand size. They have the following status flags:  Valid. It holds data. Address tag is valid  Read. No read request will be accepted without this flag being set. Forexample, cache line is valid and unreadable when it waits for write flag tocomplete write access.  Write. 
It may accept writes. Cache line with Write flags identifiesUnique state – no other cache memory holds the copy.  Dirty. It needs Writeback when evicted.Read access will hit cache line if address tags match and Valid and Read flagsare set. Write access will hit cache line if address tags match and Valid, Readand Write flags are set.MSHR and Write Buffer QueuesMiss Status and Handling Register (MSHR) queue holds the list ofCPU’s outstanding memory requests that require read access to lower memorylevel. They are:  Cached Read misses.  Cached Write misses.  Uncached reads.WriteBuffer queue holds the following memory requests:  Uncached writes.  Writeback from evicted (&amp; dirty) cache lines.Each memory request is assigned to corresponding MSHR object (READ or WRITE ondiagram above) that represents particular block (cache line) of memory that hasto be read or written in order to complete the command(s). As shown on gigureabove, cached read/writes against the same cache line have a common MSHR object and will becompleted with a single memory access.The size of the block (and therefore the size of read/write access to lowermemory) is:  The size of cache line for cached access &amp; writeback;  As specified in CPU instruction for uncached access.In general, Data Cachemodel distinguishes between just two memory types:  Normal Cached memory. It is always treated as write back, read and writeallocate.  Normal uncached, Device and Strongly Ordered types are treated equally (asuncached memory)Memory Access OrderingAn unique order number is assigned to each CPU read/write request(as theyappear on slave port). Order numbers of MSHR objects are copied from thefirst assigned read/write.Memory read/writes from each of these two queues are executed in order(according to the assigned order number). When both queues are not empty themodel will execute memory read from MSHR block unless WriteBuffer isfull. 
It will, however, always preserve the order of read/writes on the same(or overlapping) memory cache line (block).In summary:  Order of accesses to cached memory is not preserved unless they target thesame cache line. For example, the accesses #1, #5 &amp; #10 will completesimultaneously in the same tick (still in order). The access #5 will completebefore #3.  Order of all uncached memory writes is preserved. Write#6 always completesbefore Write#13.  Order to all uncached memory reads is preserved. Read#2 always completesbefore Read#8.  The order of a read and a write uncached access is not necessarily preservedunless their access regions overlap. Therefore, Write#6 always completes beforeRead#8 (they target the same memory block). However, Write#13 may completebefore Read#8.Coherent Bus ObjectCoherent Bus object provides basic support for snoop protocol:All requests on the slave port are forwarded to the appropriate master port.Requests for cached memory regions are also forwarded to other slave ports (assnoop requests).Master port replies are forwarded to the appropriate slave port.Master port snoop requests are forwarded to all slave ports.Slave port snoop replies are forwarded to the port that was the source of therequest. (Note that the source of snoop request can be either slave or masterport.)The bus declares itself blocked for a configurable period of time after any ofthe following events:  A packet is sent (or failed to be sent) to a slave port.  A reply message is sent to a master port.  Snoop response from one slave port is sent to another slave port.The bus in blocked state rejects the following incoming messages:  Slave port requests.  Master port replies.  Master port snoop requests.Simple Memory ObjectIt never blocks the access on slave port.Memory read/write takes immediate effect. 
(Read or write is performed when therequest is received).Reply message is sent after a configurable period of time .Message FlowMemory Access OrderingThe following diagram shows read access that hits Data Cache line with Validand Read flags:Cache miss read access will generate the following sequence of messages:Note that bus object never gets response from both DCache2 and Memory object.It sends the very same ReadReq package (message) object to memory and datacache. When Data Cache wants to reply on snoop request it marks the messagewith MEM_INHIBIT flag that tells Memory object not to process the message.Memory Access OrderingThe following diagram shows write access that hits DCache1 cache line withValid &amp; Write flags:Next figure shows write access that hits DCache1 cache line with Valid but noWrite flags – which qualifies as write miss. DCache1 issues UpgradeReq toobtain write permission. DCache2::snoopTiming will invalidate cache line thathas been hit. Note that UpgradeResp message doesn’t carry data.The next diagram shows write miss in DCache. ReadExReq invalidates cache linein DCache2. ReadExResp carries the content of memory cache line.",
        "url": "/documentation/general_docs/memory_system/gem5_memory_system/"
      }
      ,
    
      "documentation-general-docs-memory-system": {
        "title": "Memory system",
        "content": "Memory systemM5’s new memory system (introduced in the first 2.0 beta release) wasdesigned with the following goals:  Unify timing and functional accesses in timing mode. With the oldmemory system the timing accesses did not have data and justaccounted for the time it would take to do an operation. Then aseparate functional access actually made the operation visible tothe system. This method was confusing, it allowed simulatedcomponents to accidentally cheat, and prevented the memory systemfrom returning timing-dependent values, which isn’t reasonable foran execute-in-execute CPU model.  Simplify the memory system code – remove the huge amount oftemplating and duplicate code.  Make changes easier, specifically to allow other memoryinterconnects besides a shared bus.For details on the new coherence protocol, introduced (along with asubstantial cache model rewrite) in 2.0b4, see CoherenceProtocol.MemObjectsAll objects that connect to the memory system inherit from MemObject.This class adds the pure virtual functions getMasterPort(conststd::string &amp;name, PortID idx) and getSlavePort(const std::string&amp;name, PortID idx) which returns a port corresponding to the given nameand index. This interface is used to structurally connect the MemObjectstogether.PortsThe next large part of the memory system is the idea of ports. Ports areused to interface memory objects to each other. They will always come inpairs, with a MasterPort and a SlavePort, and we refer to the other portobject as the peer. These are used to make the design more modular. Withports a specific interface between every type of object doesn’t have tobe created. Every memory object has to have at least one port to beuseful. A master module, such as a CPU, has one or more MasterPortinstances. A slave module, such as a memory controller, has one or moreSlavePorts. 
An interconnect component, such as a cache, bridge or bus,has both MasterPort and SlavePort instances.There are two groups of functions in the port object. The send*functions are called on the port by the object that owns that port. Forexample to send a packet in the memory system a CPU would callmyPort-&gt;sendTimingReq(pkt) to send a packet. Each send function has acorresponding recv function that is called on the ports peer. So theimplementation of the sendTimingReq() call above would simply bepeer-&gt;recvTimingReq(pkt) on the slave port. Using this method we onlyhave one virtual function call penalty but keep generic ports that canconnect together any memory system objects.Master ports can send requests and receive responses, whereas slaveports receive requests and send responses. Due to the coherenceprotocol, a slave port can also send snoop requests and receive snoopresponses, with the master port having the mirrored interface.ConnectionsIn Python, Ports are first-class attributes of simulation objects, muchlike Params. Two objects can specify that their ports should beconnected using the assignment operator. Unlike a normal variable orparameter assignment, port connections are symmetric: A.port1 =B.port2 has the same meaning as B.port2 = A.port1. The notion ofmaster and slave ports exists in the Python objects as well, and a checkis done when the ports are connected together.Objects such as busses that have a potentially unlimited number of portsuse “vector ports”. An assignment to a vector port appends the peer to alist of connections rather than overwriting a previous connection.In C++, memory ports are connected together by the python code after allobjects are instantiated.RequestA request object encapsulates the original request issued by a CPU orI/O device. The parameters of this request are persistent throughout thetransaction, so a request object’s fields are intended to be written atmost once for a given request. 
There are a handful of constructors andupdate methods that allow subsets of the object’s fields to be writtenat different times (or not at all). Read access to all request fields isprovided via accessor methods which verify that the data in the fieldbeing read is valid.The fields in the request object are typically not available to devicesin a real system, so they should normally be used only for statistics ordebugging and not as architectural values.Request object fields include:  Virtual address. This field may be invalid if the request was issueddirectly on a physical address (e.g., by a DMA I/O device).  Physical address.  Data size.  Time the request was created.  The ID of the CPU/thread that caused this request. May be invalid ifthe request was not issued by a CPU (e.g., a device access or acache writeback).  The PC that caused this request. Also may be invalid if the requestwas not issued by a CPU.PacketA Packet is used to encapsulate a transfer between two objects in thememory system (e.g., the L1 and L2 cache). This is in contrast to aRequest where a single Request travels all the way from the requester tothe ultimate destination and back, possibly being conveyed by severaldifferent Packets along the way.Read access to many packet fields is provided via accessor methods whichverify that the data in the field being read is valid.A packet contains the following all of which are accessed by accessorsto be certain the data is valid:  The address. This is the address that will be used to route thepacket to its target (if the destination is not explicitly set) andto process the packet at the target. It is typically derived fromthe request object’s physical address, but may be derived from thevirtual address in some situations (e.g., for accessing a fullyvirtual cache before address translation has been performed). 
It maynot be identical to the original request address: for example, on acache miss, the packet address may be the address of the block tofetch and not the request address.  The size. Again, this size may not be the same as that of theoriginal request, as in the cache miss scenario.  A pointer to the data being manipulated.          Set by dataStatic(), dataDynamic(), and dataDynamicArray()which control if the data associated with the packet is freedwhen the packet is, not, with delete, and with delete []respectively.      Allocated if not set by one of the above methods allocate()and the data is freed when the packet is destroyed. (Always safeto call).      A pointer can be retrived by calling getPtr()      get() and set() can be used to manipulate the data in thepacket. The get() method does a guest-to-host endian conversionand the set method does a host-to-guest endian conversion.        A status indicating Success, BadAddress, Not Acknowleged, andUnknown.  A list of command attributes associated with the packet          Note: There is some overlap in the data in the status field andthe command attributes. This is largely so that a packet an beeasily reinitialized when nacked or easily reused with atomic orfunctional accesses.        A SenderState pointer which is a virtual base opaque structureused to hold state associated with the packet but specific to thesending device (e.g., an MSHR). A pointer to this state is returnedin the packet’s response so that the sender can quickly look up thestate needed to process it. A specific subclass would be derivedfrom this to carry state specific to a particular sending device.  A CoherenceState pointer which is a virtual base opaque structureused to hold coherence-related state. A specific subclass would bederived from this to carry state specific to a particular coherenceprotocol.  A pointer to the request.Access TypesThere are three types of accesses supported by the ports.  
Timing - Timing accesses are the most detailed access. Theyreflect our best effort for realistic timing and include themodeling of queuing delay and resource contention. Once a timingrequest is successfully sent at some point in the future the devicethat sent the request will either get the response or a NACK if therequest could not be completed (more below). Timing and Atomicaccesses can not coexist in the memory system.  Atomic - Atomic accesses are a faster than detailed access. Theyare used for fast forwarding and warming up caches and return anapproximate time to complete the request without any resourcecontention or queuing delay. When a atomic access is sent theresponse is provided when the function returns. Atomic and timingaccesses can not coexist in the memory system.  Functional - Like atomic accesses functional accesses happeninstantaneously, but unlike atomic accesses they can coexist in thememory system with atomic or timing accesses. Functional accessesare used for things such as loading binaries, examining/changingvariables in the simulated system, and allowing a remote debugger tobe attached to the simulator. The important note is when afunctional access is received by a device, if it contains a queue ofpackets all the packets must be searched for requests or responsesthat the functional access is effecting and they must be updated asappropriate. The Packet::intersect() and fixPacket() methods canhelp with this.Packet allocation protocolThe protocol for allocation and deallocation of Packet objects variesdepending on the access type. (We’re talking about low-level C++new/delete issues here, not anything related to the coherenceprotocol.)  Atomic and Functional : The Packet object is owned by therequester. The responder must overwrite the request packet with theresponse (typically using the Packet::makeResponse() method).There is no provision for having multiple responders to a singlerequest. 
Since the response is always generated beforesendAtomic() or sendFunctional() returns, the requester canallocate the Packet object statically or on the stack.  Timing : Timing transactions are composed of two one-way messages,a request and a response. In both cases, the Packet object must bedynamically allocated by the sender. Deallocation is theresponsibility of the receiver (or, for broadcast coherence packets,the target device, typically memory). In the case where the receiverof a request is generating a response, it may choose to reuse therequest packet for its response to save the overhead of callingdelete and then new (and gain the convenience of usingmakeResponse()). However, this optimization is optional, and therequester must not rely on receiving the same Packet object back inresponse to a request. Note that when the responder is not thetarget device (as in a cache-to-cache transfer), then the targetdevice will still delete the request packet, and thus the respondingcache must allocate a new Packet object for its response. Also,because the target device may delete the request packet immediatelyon delivery, any other memory device wishing to reference abroadcast packet past point where the packet is delivered must makea copy of that packet, as the pointer to the packet that isdelivered cannot be relied upon to stay valid.Timing Flow controlTiming requests simulate a real memory system, so unlike functional andatomic accesses their response is not instantaneous. Because the timingrequests are not instantaneous, flow control is needed. When a timingpacket is sent via sendTiming() the packet may or may not be accepted,which is signaled by returning true or false. If false is returned theobject should not attempt to sent anymore packets until it receives arecvRetry() call. At this time it should again try to callsendTiming(); however the packet may again be rejected. 
Note: Theoriginal packet does not need to be resent, a higher priority packet canbe sent instead. Once sendTiming() returns true, the packet may stillnot be able to make it to its destination. For packets that require aresponse (i.e. pkt-&gt;needsResponse() is true), any memory object canrefuse to acknowledge the packet by changing its result to Nacked andsending it back to its source. However, if it is a response packet, thiscan not be done. The true/false return is intended to be used for localflow control, while nacking is for global flow control. In both cases aresponse can not be nacked.Response and Snoop rangesRanges in the memory system are handled by having devices that aresensitive to an address range provide an implementation forgetAddrRanges in their slave port objects. This method returns anAddrRangeList of addresses it responds to. When these ranges change(e.g. from PCI configuration taking place) the device should callsendRangeChange() on its slave port so that the new ranges arepropagated to the entire hierarchy. This is precisely what happensduring init(); all memory objects call sendRangeChange(), and aflurry of range updates occur until everyones ranges have beenpropagated to all busses in the system.",
        "url": "/documentation/general_docs/memory_system/"
      }
      ,
    
      "documentation-general-docs-memory-system-indexing-policies": {
        "title": "Indexing Policies",
        "content": "Indexing PoliciesIndexing policies determine the locations to which a block is mappedbased on its address.The most important methods of indexing policies are getPossibleEntries()and regenerateAddr():  getPossibleEntries() determines the list of entries a given addresscan be mapped to.  regenerateAddr() uses the address information stored in an entry todetermine its full original address.For further information on Cache Indexing Policies, please refer to thewikipedia articles on Placement Policies andAssociativity.Set AssociativeThe set associative indexing policy is the standard for table-likestructures, and can be further divided into Direct-Mapped (or 1-wayset-associative), Set-Associative and Full-Associative (N-wayset-associative, where N is the number of table entries).A set associative cache can be seen as a skewed associative cache whoseskewing function maps to the same value for every way.Skewed AssociativeThe skewed associative indexing policy has a variable mapping based on ahash function, so a value x can be mapped to different sets, based onthe way being used. Gem5 implements skewed caches as described in“Skewed-AssociativeCaches”, from Seznec et al.Note that there are only a limited number of implemented hashingfunctions, so if the number of ways is higher than that number then asub-optimal automatically generated hash function is used.",
        "url": "/documentation/general_docs/memory_system/indexing_policies/"
      }
      ,
    
      "documentation-general-docs-memory-system-replacement-policies": {
        "title": "Replacement Policies",
        "content": "Replacement PoliciesGem5 has multiple implemented replacement policies. Each one uses itsspecific replacement data to determine a replacement victim onevictions.All of the replacement policies prioritize victimizing invalid blocks.A replacement policy consists of a reset(), touch(), invalidate() andgetVictim() methods. Each of which handles the replacement datadifferently.  reset() is used to initialize a replacement data (i.e., validate).It should be called only on entry insertion, and must not be calledagain until invalidation. The first touch to an entry must always bea reset().  touch() is used on accesses to the replacement data, and as suchshould be called on entry accesses. It updates the replacement data.  invalidate() is called whenever an entry is invalidated, possiblydue to coherence handling. It makes the entry as likely to beevicted as possible on the next victim search. An entry does notneed to be invalidated before a reset() is done. When the simulationstarts all entries are invalid.  getVictim() is called when there is a miss, and an eviction must bedone. It searches among all replacement candidates for an entry withthe worst replacement data, generally prioritizing the eviction ofinvalid entries.We briefly describe the replacement policies implemented in Gem5. 
Iffurther information is required, the Cache Replacement PoliciesWikipedia page, or the respective papers can be studied.RandomThe simplest replacement policy; it does not need replacement data, asit randomly selects a victim among the candidates.Least Recently Used (LRU)Its replacement data consists of a last touch timestamp, and the victimis chosen based on it: the oldest it is, the more likely its respectiveentry is to be victimized.Tree Pseudo Least Recently Used (TreePLRU)A variation of the LRU that uses a binary tree to keep track of therecency of use of the entries through 1-bit pointers.Bimodal Insertion Policy (BIP)The Bimodal Insertion Policy is similar to the LRU, however, blockshave a probability of being inserted as the MRU, according to a bimodalthrottle parameter (btp). The highest btp is, the highest is thelikelihood of a new block being inserted as MRU.LRU Insertion Policy (LIP)The LRU Insertion Policy consists of a LRUreplacement policy that instead of inserting blocks with the most recentlast touch timestamp, it inserts them as the LRU entry. On subsequenttouches to the block, its timestamp is updated to be the MRU, as in LRU.It can also be seen as a BIP where the likelihood of inserting a newblock as the most recently used is 0%.Most Recently Used (MRU)The Most Recently Used policy chooses replacement victims by theirrecency, however, as opposed to LRU, the newest the entry is, the morelikely it is to be victimized.Least Frequently Used (LFU)The victim is chosen using the reference frequency. The least referencedentry is chosen to be evicted, regardless of the amount of times it hasbeen touched, or how long has passed since its last touch.First-In, First-Out (FIFO)The victim is chosen using the insertion timestamp. 
If no invalidentries exist, the oldest one is victimized, regardless of the amount oftimes it has been touched.Second-ChanceThe Second-Chance replacement policy is similar to FIFO, howeverentries are given a second chance before being victimized. If an entrywould have been the next to be victimized, but its second chance bit isset, this bit is cleared, and the entry is re-inserted at the end of theFIFO. Following a miss, an entry is inserted with its second chance bitcleared.Not Recently Used (NRU)Not Recently Used (NRU) is an approximation of LRU that uses a singlebit to determine if a block is going to be re-referenced in the near ordistant future. If the bit is 1, it is likely to not be referenced soon,so it is chosen as the replacement victim. When a block is victimized,all its co-replacement candidates have their re-reference bitincremented.Re-Reference Interval Prediction (RRIP)Re-Reference Interval Prediction (RRIP) is an extension of NRU thatuses a re-reference prediction value to determine if blocks are going tobe re-used in the near future or not. The higher the value of the RRPV,the more distant the block is from its next access. From the originalpaper, this implementation of RRIP is also called Static RRIP (SRRIP),as it always inserts blocks with the same RRPV.Bimodal Re-Reference Interval Prediction (BRRIP)Bimodal Re-Reference Interval Prediction(BRRIP) is an extension ofRRIP that has a probability of not inserting blocks as the LRU, as inthe Bimodal Insertion Policy. This probability is controlled by thebimodal throtle parameter (btp).",
        "url": "/documentation/general_docs/memory_system/replacement_policies/"
      }
      ,
    
      "documentation-general-docs-ruby-garnet-standalone": {
        "title": "Garnet standalone",
        "content": "Garnet StandaloneThis is a dummy cache coherence protocol that is used to operate Garnetin a standalone manner. This protocol works in conjunction with theGarnet Synthetic Trafficinjector.Related Files  src/mem/protocols          Garnet_standalone-cache.sm: cache controller specification      Garnet_standalone-dir.sm: directory controllerspecification      Garnet_standalone-msg.sm: message type specification      Garnet_standalone.slicc: container file      Cache HierarchyThis protocol assumes a 1-level cache hierarchy. The role of the cacheis to simply send messages from the cpu to the appropriate directory(based on the address), in the appropriate virtual network (based on themessage type). It does not track any state. Infact, no CacheMemory iscreated unlike other protocols. The directory receives the messages fromthe caches, but does not send any back. The goal of this protocol is toenable simulation/testing of just the interconnection network.Stable States and Invariants            States      Invariants                  I      Default state of all cache blocks      Cache controller  Requests, Responses, Triggers:          Load, Instruction fetch, Store from the core.      The network tester (in src/cpu/testers/networktest/networktest.cc)generates packets of the type ReadReq, INST_FETCH, andWriteReq, which are converted into RubyRequestType:LD,RubyRequestType:IFETCH, and RubyRequestType:ST, respectively, bythe RubyPort (in src/mem/ruby/system/RubyPort.hh/cc). These messagesreach the cache controller via the Sequencer. The destination for thesemessages is determined by the traffic type, and embedded in the address.More details can be found here.  Main Operation:          The goal of the cache is only to act as a source node in theunderlying interconnection network. It does not track anystates.      
On a LD from the core:                  it returns a hit, and          maps the address to a directory, and issues a message for itof type MSG, and size Control (8 bytes) in therequest vnet (0).          Note: vnet 0 could also be made to broadcast, instead ofsending a directed message to a particular directory, byuncommenting the appropriate line in the a_issueRequestaction in Network_test-cache.sm                    On a IFETCH from the core:                  it returns a hit, and          maps the address to a directory, and issues a message for itof type MSG, and size Control (8 bytes) in theforward vnet (1).                    On a ST from the core:                  it returns a hit, and          maps the address to a directory, and issues a message for itof type MSG, and size Data (72 bytes) in theresponse vnet (2).                    Note: request, forward and response are just used todifferentiate the vnets, but do not have any physicalsignificance in this protocol.      Directory controller  Requests, Responses, Triggers:          MSG from the cores        Main Operation:          The goal of the directory is only to act as a destination nodein the underlying interconnection network. It does not track anystates.      The directory simply pops its incoming queue upon receiving themessage.      Other featuresThis protocol assumes only 3 vnets.  It should only be used when running Garnet Synthetic    Traffic.",
        "url": "/documentation/general_docs/ruby/Garnet_standalone/"
      }
      ,
    
      "documentation-general-docs-ruby-mesi-two-level": {
        "title": "MESI two level",
        "content": "MESI Two Level  Protocol Overview  This protocol models a two-level cache hierarchy. The L1 cache is private to a core, while the L2 cache is shared among the cores. The L1 cache is split into instruction and data caches.  Inclusion is maintained between the L1 and L2 caches.  At a high level the protocol has four stable states: M, E, S and I. A block in the M state is writable (i.e. has exclusive permission) and has been dirtied (i.e. it is the only valid copy on-chip). The E state represents a cache block with exclusive permission (i.e. writable) that has not yet been written. The S state means the cache block is only readable, and multiple copies of it may exist in multiple private caches as well as in the shared cache. I means that the cache block is invalid.  On-chip cache coherence is maintained through a directory coherence scheme, where the directory information is co-located with the corresponding cache blocks in the shared L2 cache.  The protocol has four types of controllers – the L1 cache controller, L2 cache controller, directory controller and DMA controller. The L1 cache controller is responsible for managing the L1 instruction and L1 data caches; the number of L1 cache controller instantiations equals the number of cores in the simulated system. The L2 cache controller is responsible for managing the shared L2 cache and for maintaining coherence of on-chip data through the directory coherence scheme. The directory controller acts as the interface to the memory controller/off-chip main memory and is also responsible for coherence across multiple chips and for external coherence requests from the DMA controller. The DMA controller is responsible for satisfying coherent DMA requests.  One of the primary optimizations in this protocol is that when an L1 cache requests a data block, even with only read permission, the L2 cache controller returns the cache block with exclusive permission if it finds that no other core has the block. 
This optimization is done in anticipation that a cache block that has been read will soon be written by the same core, thus saving an extra request. This is exactly why the E state exists (i.e. a cache block that is writable but not yet written).  The protocol supports silent eviction of clean cache blocks from the private L1 caches. This means that a cache block which has not been written to and has read-only permission can be dropped from the private L1 cache without informing the L2 cache. This optimization helps reduce write-back traffic to the L2 cache controller.  Related Files  src/mem/protocols          MESI_CMP_directory-L1cache.sm: L1 cache controller specification      MESI_CMP_directory-L2cache.sm: L2 cache controller specification      MESI_CMP_directory-dir.sm: directory controller specification      MESI_CMP_directory-dma.sm: dma controller specification      MESI_CMP_directory-msg.sm: coherence message type specifications. This defines the different fields of the different types of messages used by the given protocol      MESI_CMP_directory.slicc: container file      Controller Description  L1 cache controller            States      Invariants and Semantic/Purpose of the state                  M      The cache block is held in exclusive state by only one L1 cache. There are no sharers of this block. The data is potentially the only valid copy in the system. The copy of the cache block is writable as well as readable.              E      The cache block is held with exclusive permission by exactly one L1 cache. The difference from the M state is that the cache block is writable (and readable) but not yet written.              S      The cache block is held in shared state by 1 or more L1 caches and/or by the L2 cache. The block is only readable. No cache can have the cache block with exclusive permission.              I / NP      The cache block is invalid.              IS      Transient state. 
This means that a GETS (read) request has been issued for the cache block and a response is awaited. The cache block is neither readable nor writable.              IM      Transient state. This means that a GETX (write) request has been issued for the cache block and a response is awaited. The cache block is neither readable nor writable.              SM      Transient state. This means the cache block was originally in the S state, an UPGRADE (write) request was then issued to get exclusive permission for the block, and a response is awaited. The cache block is readable.              IS_I      Transient state. This means that while in the IS state the cache controller received an Invalidation from the L2 cache’s directory. This happens due to a race condition caused by a write to the same cache block by another core while the given core was trying to get the same cache block for reading. The cache block is neither readable nor writable.              M_I      Transient state. This state indicates that the cache is trying to replace a cache block in the M state, and the write-back (PUTX) to the L2 cache’s directory has been issued but the write-back acknowledgement is awaited.              SINK_WB_ACK      Transient state. This state is reached when, while waiting for a write-back acknowledgement from the L2 cache’s directory, the L1 cache receives an intervention (a forwarded request from another core). This indicates that a race has occurred between the issued write-back and a request from another cache, and that the write-back lost the race (i.e. before it reached the L2 cache’s directory, the other core’s request reached the L2). This state is essential to avoid the complicated race conditions that could occur if write-backs were silently dropped at the directory.                            L2 cache controller  Recall that the on-chip directory is co-located with the corresponding cache blocks in the L2 cache. 
Thus the following states of an L2 cache block encode information about the status and permissions of the cache block in the L2 cache, as well as the coherence status of the cache block that may be present in one or more private L1 caches. Beyond the coherence states, two more important fields per cache block aid in taking proper coherence actions. The first is the Sharers field, which can be thought of as a bit-vector indicating which of the private L1 caches potentially have the given cache block. The other is the Owner field, which is the identity of the private L1 cache in case the cache block is held with exclusive permission in an L1 cache.            States      Invariants and Semantic/Purpose of the state                  NP      The cache block is not present in the on-chip cache hierarchy.              SS      The cache block is present in potentially multiple private caches in read-only mode (i.e. in the “S” state in private caches). The corresponding “Sharers” vector for the block gives the identity of the private caches which possibly have the cache block. The cache block in the L2 cache is valid and readable.              M      The cache block is present ONLY in the L2 cache and has exclusive permission. L1 cache read/write requests (GETS/GETX) can be satisfied directly from the L2 cache.              MT      The cache block is in ONE of the private L1 caches with exclusive permission. The data in the L2 cache is potentially stale. The identity of the L1 cache which has the block can be found in the “Owner” field associated with the cache block. Any request for read/write (GETS/GETX) from other cores/private L1 caches needs to be forwarded to the owner of the cache block; the L2 cannot service such requests itself.              M_I      It is a transient state. 
This state indicates that the cache is trying to replace the cache block from its cache, and the write-back (PUTX/PUTS) to the directory controller (which acts as the interface to main memory) has been issued but the write-back acknowledgement is awaited. The data is neither readable nor writable.              MT_I      It is a transient state. This state indicates that the cache is trying to replace a cache block in the MT state from its cache. An Invalidation has been issued to the current owner (private L1 cache) of the cache block, and a write-back from the owner L1 cache is awaited. Note that this Invalidation (called a back-invalidation) is instrumental in making sure that inclusion is maintained between the L1 and L2 caches. The data is neither readable nor writable.              MCT_I      It is a transient state, the same as MT_I, except that it is known that the data in the L2 cache is clean. The data is neither readable nor writable.              I_I      It is a transient state. The L2 cache is trying to replace a cache block in the SS state, and the cache block in the L2 is clean. Invalidations have been sent to all potential sharers (L1 caches) of the cache block. The L2 cache’s directory is waiting for all the required acknowledgements to arrive from the L1 caches. Note that this Invalidation (called a back-invalidation) is instrumental in making sure that inclusion is maintained between the L1 and L2 caches. The data is neither readable nor writable.              S_I      It is a transient state, the same as I_I, except that the data in the L2 cache for the cache block is dirty. This means that, unlike in the case of I_I, the data needs to be sent to main memory. The cache block is neither readable nor writable.              ISS      It is a transient state. The L2 has received a GETS (read) request from one of the private L1 caches for a cache block that is not present in the on-chip caches. 
A read request has been sent to main memory (the directory controller) and the response from memory is awaited. This state is reached only when the request is for a data cache block (not an instruction cache block). The purpose of this state is that if it is found that only one L1 cache has requested the cache block, then the block is returned to the requester with exclusive permission (although it was requested with read permission). The cache block is neither readable nor writable.              IS      It is a transient state, similar to ISS, except that if the requested cache block is an instruction cache block, or more than one core requests the same cache block while waiting for the response from memory, this state is reached instead of ISS. Once the requested cache block arrives from main memory, the block is sent to the requester(s) with read-only permission. The cache block is neither readable nor writable in this state.              IM      It is a transient state. This state is reached when an L1 GETX (write) request is received by the L2 cache for a cache block that is not present in the on-chip cache hierarchy. The request for the cache block in exclusive mode has been issued to main memory but the response is yet to arrive. The cache block is neither readable nor writable in this state.              SS_MB      It is a transient state. In general, any state whose name ends with “B” (like this one) is a blocking coherence state. This means the directory is awaiting some response from a private L1 cache, and until it receives the desired response no other request is entertained (i.e. requests are effectively serialized). This particular state is reached when an L1 cache requests a cache block with exclusive permission (i.e. GETX or UPGRADE) and the cache block was in the SS state, meaning that the requested cache block potentially has readable copies in the private L1 caches. 
Thus, before giving exclusive permission to the requester, all the readable copies in the L1 caches need to be invalidated. This state indicates that the required invalidations have been sent to the potential sharers (L1 caches) and the requester has been informed of the number of Invalidation Acknowledgements it needs before it can have exclusive permission for the cache block. Once the requester L1 cache gets the required number of Invalidation Acknowledgements, it informs the directory of this with an UNBLOCK message, which allows the directory to move out of this blocking coherence state and thereafter resume entertaining other requests for the given cache block. The cache block is neither readable nor writable in this state.              MT_MB      It is a transient state and also a blocking state. This state is reached when the L2 cache’s directory has sent out a cache block with exclusive permission to a requester L1 cache but has yet to receive an UNBLOCK from the requester L1 cache acknowledging receipt of the exclusive permission. The cache block is neither readable nor writable in this state.              MT_IIB      It is a transient state and also a blocking state. This state is reached when a read (GETS) request is received for a cache block which is currently held with exclusive permission in another private L1 cache (i.e. the directory state is MT). On such a request, the L2 cache’s directory forwards the request to the current owner L1 cache and transitions to this state. Two events need to happen before this cache block can be unblocked (and thus start entertaining further requests): the current owner L1 cache needs to send a write-back to the L2 cache to update the L2’s copy with the latest value, and the requester L1 cache needs to send an UNBLOCK to the L2 cache indicating that it has received the requested cache block with the desired coherence permissions. 
The cache block is neither readable nor writable in this state in the L2 cache.              MT_IB      It is a transient state and also a blocking state. This state is reached when, in the MT_IIB state, the L2 cache controller receives the UNBLOCK from the requester L1 cache but has yet to receive the write-back from the previous owner L1 cache of the block. The cache block is neither readable nor writable in this state in the L2 cache.              MT_SB      It is a transient state and also a blocking state. This state is reached when, in the MT_IIB state, the L2 cache controller receives the write-back from the previous owner L1 cache of the block, while it has yet to receive the UNBLOCK from the current requester of the cache block. The cache block is neither readable nor writable in this state in the L2 cache.      ",
        "url": "/documentation/general_docs/ruby/MESI_Two_Level/"
      }
      ,
    
      "documentation-general-docs-ruby-mi-example": {
        "title": "MI example",
        "content": "MI Example  Protocol Overview  This is a simple cache coherence protocol that is used to illustrate protocol specification using SLICC.  This protocol assumes a 1-level cache hierarchy. The cache is private to each node. The caches are kept coherent by a directory controller. Since the hierarchy is only 1-level, there is no inclusion/exclusion requirement.  This protocol does not differentiate between loads and stores.  This protocol cannot implement the semantics of LL/SC instructions, because external GETS requests that hit a block within an LL/SC sequence steal exclusive permissions, thus causing the SC instruction to fail.  Related Files  src/mem/protocols          MI_example-cache.sm: cache controller specification      MI_example-dir.sm: directory controller specification      MI_example-dma.sm: dma controller specification      MI_example-msg.sm: message type specification      MI_example.slicc: container file      Stable States and Invariants            States      Invariants                  M      The cache block has been accessed (read/written) by this node. No other node holds a copy of the cache block.              I      The cache block at this node is invalid.      The notation used in the controller FSM diagrams is described here.  Cache controller  Requests, Responses, Triggers:          Load, Instruction fetch, Store from the core      Replacement from self      Data from the directory controller      Forwarded request (intervention) from the directory controller      Writeback acknowledgement from the directory controller      Invalidations from the directory controller (on DMA activity)        Main Operation:          On a load/Instruction fetch/Store request from the core:                  it checks whether the corresponding block is present in the M state. 
If so, it returns a hit          otherwise, if in the I state, it initiates a GETX request to the directory controller                    On a replacement trigger from self:                  it evicts the block and issues a writeback request to the directory controller          it waits for an acknowledgement from the directory controller (to prevent races)                    On a forwarded request from the directory controller:                  This means that the block was in the M state at this node when the request was generated by some other node          It sends the block directly to the requesting node (cache-to-cache transfer)          It evicts the block from this node                    Invalidations are similar to replacements      Directory controller  Requests, Responses, Triggers:          GETX from the cores, Forwarded GETX to the cores      Data from memory, Data to the cores      Writeback requests from the cores, Writeback acknowledgements to the cores      DMA read, write requests from the DMA controllers        Main Operation:          The directory keeps track of which core has a block in the M state. It designates this core as the owner of the block.      
On a GETX request from a core:                  If the block is not present, a memory fetch request is initiated          If the block is already present, then it means the request was generated by some other core                          In this case, a forwarded request is sent to the original owner              Ownership of the block is transferred to the requestor                                          On a writeback request from a core:                  If the core is the owner, the data is written to memory and an acknowledgement is sent back to the core          If the core is not the owner, a NACK is sent back                          This can happen in a race condition              The core evicted the block while a forwarded request from some other core was on the way, and the directory had already changed ownership of the block              The evicting core holds the data until the forwarded request arrives                                          On DMA accesses (read/write)                  An Invalidation is sent to the owner node (if any). Otherwise data is fetched from memory.          This ensures that the most recent data is available.                    Other features  MI protocols don’t support LL/SC semantics. A load from a remote core will invalidate the cache block.  This protocol has no timeout mechanisms.",
        "url": "/documentation/general_docs/ruby/MI_example/"
      }
      ,
    
      "documentation-general-docs-ruby-moesi-cmp-directory": {
        "title": "MOESI CMP directory",
        "content": "MOESI CMP Directory  Protocol Overview  TODO: cache hierarchy  In contrast with the MESI protocol, the MOESI protocol introduces an additional Owned state.  The MOESI protocol also includes many coalescing optimizations not available in the MESI protocol.  Related Files  src/mem/protocols          MOESI_CMP_directory-L1cache.sm: L1 cache controller specification      MOESI_CMP_directory-L2cache.sm: L2 cache controller specification      MOESI_CMP_directory-dir.sm: directory controller specification      MOESI_CMP_directory-dma.sm: dma controller specification      MOESI_CMP_directory-msg.sm: message type specification      MOESI_CMP_directory.slicc: container file      L1 Cache Controller  Stable States and Invariants            States      Invariants                  MM      The cache block is held exclusively by this node and is potentially modified (similar to the conventional “M” state).              MM_W      The cache block is held exclusively by this node and is potentially modified (similar to the conventional “M” state). Replacements and DMA accesses are not allowed in this state. The block automatically transitions to the MM state after a timeout.              O      The cache block is owned by this node. It has not been modified by this node. No other node holds this block in exclusive mode, but sharers potentially exist.              M      The cache block is held in exclusive mode, but not written to (similar to the conventional “E” state). No other node holds a copy of this block. Stores are not allowed in this state.              M_W      The cache block is held in exclusive mode, but not written to (similar to the conventional “E” state). No other node holds a copy of this block. Only loads and stores are allowed. A silent upgrade to the MM_W state happens on a store. Replacements and DMA accesses are not allowed in this state. The block automatically transitions to the M state after a timeout.              
S      The cache block is held in shared state by 1 or more nodes. Stores are not allowed in this state.              I      The cache block is invalid.      FSM Abstraction  The notation used in the controller FSM diagrams is described here.  Optimizations            States      Description                  SM      A GETX has been issued to get exclusive permission for an impending store to the cache block, but an old copy of the block is still present. Stores and Replacements are not allowed in this state.              OM      A GETX has been issued to get exclusive permission for an impending store to the cache block, and the data has been received, but all expected acknowledgments have not yet arrived. Stores and Replacements are not allowed in this state.      The notation used in the controller FSM diagrams is described here.  L2 Cache Controller  Stable States and Invariants            Intra-chip Inclusion      Inter-chip Exclusion      States      Description                  Not in any L1 or L2 at this chip      May be present at other chips      NP/I      The cache block at this chip is invalid.              Not in L2, but in 1 or more L1s at this chip      May be present at other chips      ILS      The cache block is not present at L2 on this chip. It is shared locally by L1 nodes in this chip.              ILO      The cache block is not present at L2 on this chip. Some L1 node in this chip is an owner of this cache block.                            ILOS      The cache block is not present at L2 on this chip. Some L1 node in this chip is an owner of this cache block. There are also L1 sharers of this cache block in this chip.                            Not present at any other chip      ILX      The cache block is not present at L2 on this chip. It is held in exclusive mode by some L1 node in this chip.                     ILOX      The cache block is not present at L2 on this chip. 
It is held exclusively by this chip, and some L1 node in this chip is an owner of the block.                            ILOSX      The cache block is not present at L2 on this chip. It is held exclusively by this chip. Some L1 node in this chip is an owner of the block. There are also L1 sharers of this cache block in this chip.                            In L2, but not in any L1 at this chip      May be present at other chips      S      The cache block is not present at L1 on this chip. It is held in shared mode at L2 on this chip and is also potentially shared across chips.              O      The cache block is not present at L1 on this chip. It is held in owned mode at L2 on this chip. It is also potentially shared across chips.                            Not present at any other chip      M      The cache block is not present at L1 on this chip. It is present at L2 on this chip and is potentially modified.                     Both in L2, and 1 or more L1s at this chip      May be present at other chips      SLS      The cache block is present at L2 in shared mode on this chip. There exist local L1 sharers of the block on this chip. It is also potentially shared across chips.              OLS      The cache block is present at L2 in owned mode on this chip. There exist local L1 sharers of the block on this chip. It is also potentially shared across chips.                            Not present at any other chip      OLSX      The cache block is present at L2 in owned mode on this chip. There exist local L1 sharers of the block on this chip. It is held exclusively by this chip.             FSM Abstraction  The controller is described in 2 parts. The first picture shows transitions between all “intra-chip inclusion” categories and within categories 1, 3 and 4. Transitions within category 2 (Not in L2, but in 1 or more L1s at this chip) are shown in the second picture.  The notation used in the controller FSM diagrams is described here. 
Transitions involving other chips are annotated in brown.  The second picture below expands the central hexagonal portion of the above picture to show transitions within category 2 (Not in L2, but in 1 or more L1s at this chip). The notation used in the controller FSM diagrams is described here. Transitions involving other chips are annotated in brown.  Directory Controller  Stable States and Invariants            States      Invariants                  M      The cache block is held in exclusive state by only 1 node (which is also the owner). There are no sharers of this block. The data is potentially different from that in memory.              O      The cache block is owned by exactly 1 node. There may be sharers of this block. The data is potentially different from that in memory.              S      The cache block is held in shared state by 1 or more nodes. No node has ownership of the block. The data is consistent with that in memory (Check).              I      The cache block is invalid.      FSM Abstraction  The notation used in the controller FSM diagrams is described here.  Other features  Timeouts:",
        "url": "/documentation/general_docs/ruby/MOESI_CMP_directory/"
      }
      ,
    
      "documentation-general-docs-ruby-moesi-cmp-token": {
        "title": "MOESI CMP token",
        "content": "MOESI CMP token  Protocol Overview  This protocol also models a 2-level cache hierarchy.  It maintains coherence permissions by explicitly exchanging and counting tokens.  A fixed number of tokens is assigned to each cache block in the beginning; the number of tokens remains unchanged.  To write a block, the processor must have all the tokens for that block. For reading, at least one token is required.  The protocol also has persistent message support to avoid starvation.  Related Files  src/mem/protocols          MOESI_CMP_token-L1cache.sm: L1 cache controller specification      MOESI_CMP_token-L2cache.sm: L2 cache controller specification      MOESI_CMP_token-dir.sm: directory controller specification      MOESI_CMP_token-dma.sm: dma controller specification      MOESI_CMP_token-msg.sm: message type specification      MOESI_CMP_token.slicc: container file      Controller Description  L1 Cache            States      Invariants                  MM      The cache block is held exclusively by this node and is potentially modified (similar to the conventional “M” state).              MM_W      The cache block is held exclusively by this node and is potentially modified (similar to the conventional “M” state). Replacements and DMA accesses are not allowed in this state. The block automatically transitions to the MM state after a timeout.              O      The cache block is owned by this node. It has not been modified by this node. No other node holds this block in exclusive mode, but sharers potentially exist.              M      The cache block is held in exclusive mode, but not written to (similar to the conventional “E” state). No other node holds a copy of this block. Stores are not allowed in this state.              M_W      The cache block is held in exclusive mode, but not written to (similar to the conventional “E” state). No other node holds a copy of this block. Only loads and stores are allowed. A silent upgrade to the MM_W state happens on a store. 
Replacements and DMA accesses are not allowed in this state. The block automatically transitions to the M state after a timeout.              S      The cache block is held in shared state by 1 or more nodes. Stores are not allowed in this state.              I      The cache block is invalid.      L2 cache            States      Invariants                  NP      The cache block is not present in this cache.              O      The cache block is owned by this node. It has not been modified by this node. No other node holds this block in exclusive mode, but sharers potentially exist.              M      The cache block is held in exclusive mode, but not written to (similar to the conventional “E” state). No other node holds a copy of this block. Stores are not allowed in this state.              S      The cache line holds the most recent, correct copy of the data. Other processors in the system may hold copies of the data in the shared state as well. The cache line can be read, but not written, in this state.              I      The cache line is invalid and does not hold a valid copy of the data.      Directory controller            States      Invariants                  O      Owner.              NO      Not Owner.              L      Locked.      ",
        "url": "/documentation/general_docs/ruby/MOESI_CMP_token/"
      }
      ,
    
      "documentation-general-docs-ruby-moesi-hammer": {
        "title": "MOESI hammer",
        "content": "MOESI Hammer  This is an implementation of AMD’s Hammer protocol, which is used in AMD’s Hammer chip (also known as the Opteron or Athlon 64). The protocol implements both the original HyperTransport protocol and the more recent ProbeFilter protocol. The protocol also includes a full-bit directory mode.  Related Files  src/mem/protocols          MOESI_hammer-cache.sm: cache controller specification      MOESI_hammer-dir.sm: directory controller specification      MOESI_hammer-dma.sm: dma controller specification      MOESI_hammer-msg.sm: message type specification      MOESI_hammer.slicc: container file      Cache Hierarchy  This protocol implements a 2-level private cache hierarchy. It assigns separate instruction and data L1 caches, and a unified L2 cache, to each core. These caches are private to each core and are controlled with one shared cache controller. This protocol enforces exclusion between the L1 and L2 caches.  Stable States and Invariants            States      Invariants                  MM      The cache block is held exclusively by this node and is potentially locally modified (similar to the conventional “M” state).              O      The cache block is owned by this node. It has not been modified by this node. No other node holds this block in exclusive mode, but sharers potentially exist.              M      The cache block is held in exclusive mode, but not written to (similar to the conventional “E” state). No other node holds a copy of this block. Stores are not allowed in this state.              S      The cache line holds the most recent, correct copy of the data. Other processors in the system may hold copies of the data in the shared state as well. The cache line can be read, but not written, in this state.              I      The cache line is invalid and does not hold a valid copy of the data.      Cache controller  The notation used in the controller FSM diagrams is described here.  MOESI_hammer supports cache flushing. 
To flush a cache line, the cache controller first issues a GETF request to the directory to block the line until the flushing is completed. It then issues a PUTF and writes back the cache line.  Directory controller  The MOESI_hammer memory module, unlike a typical directory protocol, does not contain any directory state and instead broadcasts requests to all the processors in the system. In parallel, it fetches the data from the DRAM and forwards the response to the requesters.  probe filter: TODO  Stable States and Invariants            States      Invariants                  NX      Not Owner, probe filter entry exists, block in O at Owner.              NO      Not Owner, probe filter entry exists, block in E/M at Owner.              S      Data clean, probe filter entry exists pointing to the current owner.              O      Data clean, probe filter entry exists.              E      Exclusive Owner, no probe filter entry.      Controller  The notation used in the controller FSM diagrams is described here.",
        "url": "/documentation/general_docs/ruby/MOESI_hammer/"
      }
      ,
    
      "documentation-general-docs-ruby-cache-coherence-protocols": {
        "title": "Cache Coherence Protocols",
        "content": "Cache Coherence ProtocolsCommon Notations and Data StructuresCoherence MessagesThese are described in the &lt;protocol-name&gt;-msg.sm file for eachprotocol.            Message      Description                  ACK/NACK      positive/negative acknowledgement for requests that wait for the direction of resolution before deciding on the next action. Examples are writeback requests, exclusive requests.              GETS      request for shared permissions to satisfy a CPU’s load or IFetch.              GETX      request for exclusive access.              INV      invalidation request. This can be triggered by the coherence protocol itself, or by the next cache level/directory to enforce inclusion or to trigger a writeback for a DMA access so that the latest copy of data is obtained.              PUTX      request for writeback of cache block. Some protocols (e.g. MOESI_CMP_directory) may use this only for writeback requests of exclusive data.              PUTS      request for writeback of cache block in shared state.              PUTO      request for writeback of cache block in owned state.              PUTO_Sharers      request for writeback of cache block in owned state but other sharers of the block exist.              UNBLOCK      message to unblock next cache level/directory for blocking protocols.      AccessPermissionsThese are associated with each cache block and determine what operationsare permitted on that block. It is closely correlated with coherenceprotocolstates.            Permissions      Description                  Invalid      The cache block is invalid. The block must first be obtained (from elsewhere in the memory hierarchy) before loads/stores can be performed. No action on invalidates (except maybe sending an ACK). No action on replacements. The associated coherence protocol states are I or NP and are stable states in every protocol.              
Busy: TODO. Read_Only: The only operations permitted are loads, writebacks, and invalidates. Stores cannot be performed before transitioning to some other state. Read_Write: Loads, stores, writebacks, and invalidations are allowed. Usually indicates that the block is dirty. Data Structures: Message Buffers: TODO. TBE Table: TODO. Timer Table: This maintains a map of address-based timers. For each target address, a timeout value can be associated and added to the Timer Table. This data structure is used, for example, by the L1 cache controller implementation of the MOESI_CMP_directory protocol to trigger separate timeouts for cache blocks. Internally, the Timer Table uses the event queue to schedule the timeouts. The TimerTable supports a polling-based interface, isReady(), to check if a timeout has occurred. Timeouts on addresses can be set using the set() method and removed using the unset() method. Related Files: src/mem/ruby/system/TimerTable.hh: declares the TimerTable class; src/mem/ruby/system/TimerTable.cc: implementation of the methods of the TimerTable class that deal with setting addresses &amp; timeouts and scheduling events using the event queue. Coherence controller FSM Diagrams: The finite state machines show only the stable states. Transitions are annotated using the notation “Event list” or “Event list : Action list” or “Event list : Action list : Event list”. For example, Store : GETX indicates that on a Store event, a GETX message was sent, whereas GETX : Mem Read indicates that on receiving a GETX message, a memory read request was sent. Only the main triggers and actions are listed. Optional actions (e.g. writebacks depending on whether or not the block is dirty) are enclosed within [ ]. In the diagrams, the transition labels are associated with the arc that cuts across the transition label or the closest arc.",
        "url": "/documentation/general_docs/ruby/cache-coherence-protocols/"
      }
      ,
    
      "documentation-general-docs-ruby-garnet-2": {
        "title": "Garnet 2.0",
        "content": "More details of the gem5 Ruby Interconnection Network arehere.Garnet2.0: An On-Chip Network Model for Heterogeneous SoCsGarnet2.0 is a detailed interconnection network model inside gem5. It isin active development, and patches with more features will beperiodically pushed into gem5. Additional garnet-related patches andtool support under development (not part of the repo) can be found atthe Garnet page at GeorgiaTech.Garnet2.0 builds upon the original Garnet model which was published in2009.If your use of Garnet contributes to a published paper, please cite thefollowing paper:    @inproceedings{garnet,      title={GARNET: A detailed on-chip network model inside a full-system simulator},      author={Agarwal, Niket and Krishna, Tushar and Peh, Li-Shiuan and Jha, Niraj K},      booktitle={Performance Analysis of Systems and Software, 2009. ISPASS 2009. IEEE International Symposium on},      pages={33--42},      year={2009},      organization={IEEE}    }Garnet2.0 provides a cycle-accurate micro-architectural implementationof an on-chip network router. It leverages the Topology and Routing frastructureprovided by gem5’s ruby memory system model. The default router is astate-of-the-art 1-cycle pipeline. There is support to add additionaldelay of any number of cycles in any router, by specifying it within thetopology.Garnet2.0 can also be used to model an off-chip interconnection networkby setting appropriate delays in the routers and links.  Related Files:          src/mem/ruby/network/Network.py      src/mem/ruby/network/garnet2.0/GarnetNetwork.py      src/mem/ruby/network/Topology.cc      InvocationThe garnet networks can be enabled by adding –network=garnet2.0.ConfigurationGarnet2.0 uses the generic network parameters in Network.py:  number_of_virtual_networks: This is the maximum number ofvirtual networks. The actual number of active virtual networksis determined by the protocol.  
control_msg_size: The size of control messages in bytes. Default is 8. m_data_msg_size in Network.cc is set to the block size in bytes + control_msg_size. Additional parameters are specified in garnet2.0/GarnetNetwork.py: ni_flit_size: flit size in bytes. Flits are the granularity at which information is sent from one router to the other. Default is 16 (=&gt; 128 bits). [This default value of 16 results in control messages fitting within 1 flit, and data messages fitting within 5 flits]. Garnet requires the ni_flit_size to be the same as the bandwidth_factor (in network/BasicLink.py) as it does not model variable bandwidth within the network. This can also be set from the command line with --link-width-bits. vcs_per_vnet: number of virtual channels (VCs) per virtual network. Default is 4. This can also be set from the command line with --vcs-per-vnet. buffers_per_data_vc: number of flit-buffers per VC in the data message class. Since data messages occupy 5 flits, this value can lie between 1-5. Default is 4. buffers_per_ctrl_vc: number of flit-buffers per VC in the control message class. Since control messages occupy 1 flit, and a VC can only hold one message at a time, this value has to be 1. Default is 1. routing_algorithm: 0: Weight-based table (default), 1: XY, 2: Custom. More details below. Topology: Garnet2.0 leverages the Topology infrastructure provided by gem5’s ruby memory system model. Any heterogeneous topology can be modeled. Each router in the topology file can be given an independent latency, which overrides the default. In addition, each link has 2 optional parameters: src_outport and dst_inport, which are strings with names of the output and input ports of the source and destination routers for each link. These can be used inside garnet2.0 to implement custom routing algorithms, as described next. For instance, in a Mesh, the west-to-east links have src_outport set to “west” and dst_inport set to “east”. 
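The message- and flit-size defaults above imply the 1-flit control / 5-flit data packet sizes quoted earlier. A small worked example (the 64-byte cache block size is an assumption based on gem5’s usual default, not stated in this passage):

```python
import math

# Defaults described above; block_size is an assumption (gem5's usual default).
control_msg_size = 8                           # bytes
block_size = 64                                # bytes (assumed)
data_msg_size = block_size + control_msg_size  # m_data_msg_size = 72 bytes
ni_flit_size = 16                              # bytes per flit (=> 128-bit links)

def flits_per_message(nbytes):
    # A message is segmented into ceil(size / flit size) flits.
    return math.ceil(nbytes / ni_flit_size)

flits_per_message(control_msg_size)   # -> 1: control messages fit in one flit
flits_per_message(data_msg_size)      # -> 5: data messages occupy five flits
```

This also shows why buffers_per_data_vc ranges from 1 to 5: a data VC never needs to hold more than one 5-flit packet.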
Network Components: GarnetNetwork: This is the top-level object that instantiates all network interfaces, routers, and links. Topology.cc calls the methods to add “external links” between NIs and routers, and “internal links” between routers. NetworkInterface: Each NI connects to one coherence controller via MsgBuffer interfaces on one side. It has a link to a router on the other. Every protocol message is put into a one-flit control or multi-flit (default=5) data packet (depending on its vnet), and injected into the router. Multiple NIs can connect to the same router (e.g., in the Mesh topology, cache and dir controllers connect via individual NIs to the same router). Router: The router manages arbitration for output links, and flow control between routers. NetworkLink: Network links carry flits. They can be of one of 3 types: EXT_OUT_ (router to NI), EXT_IN_ (NI to router), and INT_ (internal router to router). CreditLink: Credit links carry VC/buffer credits between routers for flow control. Routing: Garnet2.0 leverages the Routing infrastructure provided by gem5’s ruby memory system model. The default routing algorithm is a deterministic table-based routing algorithm with shortest paths. Link weights can be used to prioritize certain links over others. See src/mem/ruby/network/Topology.cc for details about how the routing table is populated. Custom Routing: To model custom routing algorithms, say adaptive, we provide a framework to name each link with a src_outport and dst_inport direction, and use these inside garnet to implement routing algorithms. For instance, in a Mesh, West-first can be implemented by sending a flit along the “west” outport link till the flit no longer has any X- hops remaining, and then randomly (or based on next-router VC availability) choosing one of the remaining links. See how outportComputeXY() is implemented in src/mem/ruby/network/garnet2.0/RoutingUnit.cc. 
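In the spirit of the outportComputeXY() logic just described, dimension-order routing can be sketched as follows (an illustrative Python model, not the C++ code in RoutingUnit.cc; the coordinate convention and port names are assumptions):

```python
# Illustrative XY (dimension-order) routing: route fully in X first,
# then in Y, which is deadlock-free on a Mesh. Port names are assumed.
def outport_compute_xy(my_x, my_y, dest_x, dest_y):
    if dest_x > my_x:
        return 'east'
    if dest_x < my_x:
        return 'west'
    if dest_y > my_y:
        return 'north'
    if dest_y < my_y:
        return 'south'
    return 'local'   # flit has arrived at its destination router

outport_compute_xy(0, 0, 2, 1)   # -> 'east': exhaust X-hops before turning
```

A West-first variant would differ only in that, once no X-hops toward 'west' remain, the remaining output ports may be chosen adaptively rather than in a fixed order.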
Similarly, outportComputeCustom() can be implemented, and invoked by adding --routing-algorithm=2 on the command line. Multicast messages: The modeled network does not have hardware multicast support within the network. A multicast message gets broken into multiple unicast messages at the Network Interface. Flow Control: Virtual channel flow control is used in the design. Each VC can hold one packet. There are two kinds of VCs in the design - control and data. The buffer depth in each can be independently controlled from GarnetNetwork.py. The default values are 1-flit-deep control VCs and 4-flit-deep data VCs. The default size of control packets is 1 flit, and of data packets is 5 flits. Router Microarchitecture: The garnet2.0 router performs the following actions: Buffer Write (BW): The incoming flit gets buffered in its VC. Route Compute (RC): The buffered flit computes its output port, and this information is stored in its VC. Switch Allocation (SA): All buffered flits try to reserve the switch ports for the next cycle. [The allocation occurs in a separable manner: first, each input chooses one input VC, using input arbiters, which places a switch request. Then, each output port breaks conflicts via output arbiters]. All arbiters in ordered virtual networks are queueing, to maintain point-to-point ordering. All other arbiters are round-robin. VC Selection (VS): The winner of SA selects a free VC (if a HEAD/HEAD_TAIL flit) from its output port. Switch Traversal (ST): Flits that won SA traverse the crossbar switch. Link Traversal (LT): Flits from the crossbar traverse links to reach the next routers. In the default design, BW, RC, SA, VS, and ST all happen in 1 cycle. LT happens in the next cycle. Multi-cycle Router: Multi-cycle routers can be modeled by specifying a per-router latency in the topology file, or by changing the default router latency in src/mem/ruby/network/BasicRouter.py. 
This is implemented by making a buffered flit wait in the router for (latency-1) cycles before becoming eligible for SA. Buffer Management: Each router input port has number_of_virtual_networks vnets, each with vcs_per_vnet VCs. VCs in control vnets have a depth of buffers_per_ctrl_vc (default = 1) and VCs in data vnets have a depth of buffers_per_data_vc (default = 4). Credits are used to relay information about free VCs and the number of buffers within each VC. Lifecycle of a Network Traversal: NetworkInterface.cc::wakeup(): Every NI is connected to one coherence protocol controller on one end, and one router on the other. It receives messages from the coherence protocol buffer in the appropriate vnet and converts them into network packets and sends them into the network. garnet2.0 adds the ability to capture a network trace at this point [under development]. It receives flits from the network, extracts the protocol message and sends it to the coherence protocol buffer in the appropriate vnet. It manages flow control (i.e., credits) with its attached router. The consuming flit/credit output link of the NI is put in the global event queue with a timestamp set to the next cycle. The event queue calls the wakeup function in the consumer. NetworkLink.cc::wakeup(): Receives flits from the NI/router and sends them to the NI/router after an m_latency-cycle delay. The default latency value for every link can be set from the command line (see configs/network/Network.py). Per-link latency can be overridden in the topology file. The consumer of the link (NI/router) is put in the global event queue with a timestamp set after m_latency cycles. The event queue calls the wakeup function in the consumer. 
Router.cc::wakeup(): Loop through all InputUnits and call their wakeup(). Loop through all OutputUnits and call their wakeup(). Call the SwitchAllocator’s wakeup(). Call the CrossbarSwitch’s wakeup(). The router’s wakeup function is called whenever any of its modules (InputUnit, OutputUnit, SwitchAllocator, CrossbarSwitch) have a ready flit/credit to act upon this cycle. InputUnit.cc::wakeup(): Read the input flit from the upstream router if it is ready for this cycle. For HEAD/HEAD_TAIL flits, perform route computation, and update the route in the VC. Buffer the flit for (m_latency - 1) cycles and mark it valid for SwitchAllocation starting that cycle. The default latency for every router can be set from the command line (see configs/network/Network.py). Per-router latency (i.e., the number of pipeline stages) can be set in the topology file. OutputUnit.cc::wakeup(): Read the input credit from the downstream router if it is ready for this cycle. Increment the credit in the appropriate output VC state. Mark the output VC as free if the credit carries is_free_signal as true. SwitchAllocator.cc::wakeup(): Note: the SwitchAllocator performs VC arbitration and selection within it. SA-I (or SA-i): Loop through all input VCs at every input port, and select one in a round-robin manner. For HEAD/HEAD_TAIL flits, only select an input VC whose output port has at least one free output VC. For BODY/TAIL flits, only select an input VC that has credits in its output VC. Place a request for the output port from this VC. SA-II (or SA-o): Loop through all output ports, and select one input VC (that placed a request during SA-I) as the winner for this output port in a round-robin manner. For HEAD/HEAD_TAIL flits, perform outvc allocation (i.e., select a free VC from the output port). For BODY/TAIL flits, decrement a credit in the output VC. 
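The separable SA-I/SA-II round-robin arbitration described above can be sketched as follows (an illustrative Python model with hypothetical data structures, not the gem5 SwitchAllocator code; the VC-eligibility checks and credit updates are omitted):

```python
def switch_allocate(requests, input_rr, output_rr):
    # requests: {input_port: [(vc, output_port), ...]} switch requests this cycle.
    # input_rr / output_rr: per-port round-robin pointers, updated in place.
    stage1 = {}
    # SA-I: each input port independently picks one of its requesting VCs.
    for inport, cands in requests.items():
        if not cands:
            continue
        start = input_rr.get(inport, 0)
        stage1[inport] = cands[start % len(cands)]
        input_rr[inport] = (start + 1) % len(cands)
    # Group the SA-I winners by the output port they request.
    by_output = {}
    for inport, (vc, outport) in stage1.items():
        by_output.setdefault(outport, []).append((inport, vc))
    # SA-II: each output port breaks conflicts among its contenders.
    winners = {}
    for outport, contenders in by_output.items():
        contenders.sort()
        start = output_rr.get(outport, 0)
        winners[outport] = contenders[start % len(contenders)]
        output_rr[outport] = (start + 1) % len(contenders)
    return winners

switch_allocate({'N': [(0, 'E')], 'S': [(1, 'E')]}, {}, {})
# -> {'E': ('N', 0)}: both inputs request port E; N wins this cycle
```

Because the two stages arbitrate independently, an input that wins SA-I can still lose at SA-II; the advancing round-robin pointers give the loser priority in a later cycle.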
        Read the flit out from the input VC, and send it to the CrossbarSwitch. Send an increment_credit signal to the upstream router for this input VC. For HEAD_TAIL/TAIL flits, mark is_free_signal as true in the credit. The input unit sends the credit out on the credit link to the upstream router. Reschedule the Router to wake up next cycle for any flits ready for SA next cycle. CrossbarSwitch.cc::wakeup(): Loop through all input ports, and send the winning flit out of its output port onto the output link. The consuming flit output link of the router is put in the global event queue with a timestamp set to the next cycle. The event queue calls the wakeup function in the consumer. NetworkLink.cc::wakeup(): Receives flits from the NI/router and sends them to the NI/router after an m_latency-cycle delay. The default latency value for every link can be set from the command line (see configs/network/Network.py). Per-link latency can be overridden in the topology file. The consumer of the link (NI/router) is put in the global event queue with a timestamp set after m_latency cycles. The event queue calls the wakeup function in the consumer. Running Garnet2.0 with Synthetic Traffic: Garnet2.0 can be run in a standalone manner and fed with synthetic traffic. The details are described here: Garnet Synthetic Traffic",
        "url": "/documentation/general_docs/ruby/garnet-2/"
      }
      ,
    
      "documentation-general-docs-ruby-garnet-synthetic-traffic": {
        "title": "Garnet Synthetic Traffic",
        "content": "Garnet Synthetic TrafficThe Garnet Synthetic Traffic provides a framework for simulating the Garnet network with controlled inputs. This is useful for network testing/debugging, or for network-only simulations with synthetic traffic.Note: The garnet synthetic traffic injector only works with the Garnet_standalone coherence protocol.Related files  configs/example/garnet_synth_traffic.py: file to invoke the network tester  src/cpu/testers/garnet_synthetic_traffic: files implementing the tester.          GarnetSyntheticTraffic.py      GarnetSyntheticTraffic.hh      GarnetSyntheticTraffic.cc      How to runFirst build gem5 with the Garnet_standalone coherence protocol. The Garnet_standalone protocol is ISA-agnostic, and hence we build it with the NULL ISA.scons build/NULL/gem5.debug PROTOCOL=Garnet_standaloneExample command:./build/NULL/gem5.debug configs/example/garnet_synth_traffic.py  \\        --num-cpus=16 \\        --num-dirs=16 \\        --network=garnet2.0 \\        --topology=Mesh_XY \\        --mesh-rows=4  \\        --sim-cycles=1000 \\        --synthetic=uniform_random \\        --injectionrate=0.01Parameterized Options            System Configuration      Description                  –num-cpus      Number of cpus. This is the number of source (injection) nodes in the network.              –num-dirs      Number of directories. This is the number of destination (ejection) nodes in the network.              –network      Network model: simple or garnet2.0. Use garnet2.0 for running synthetic traffic.              –topology      Topology for connecting the cpus and dirs to the network routers/switches. More detail about different topologies can be found (here)[Interconnection_Network#Topology].              –mesh-rows      The number of rows in the mesh. Only valid when ‘’–topology’’ is ‘‘Mesh_’’ or ‘‘MeshDirCorners_’’.                  
Network Configuration / Description: --router-latency: Default number of pipeline stages in the garnet router. Has to be &gt;= 1. Can be overridden on a per-router basis in the topology file. --link-latency: Default latency of each link in the network. Has to be &gt;= 1. Can be overridden on a per-link basis in the topology file. --vcs-per-vnet: Number of VCs per virtual network. --link-width-bits: Width in bits for all links inside the garnet network. Default = 128. Traffic Injection / Description: --sim-cycles: Total number of cycles for which the simulation should run. --synthetic: The type of synthetic traffic to be injected. The following synthetic traffic patterns are currently supported: ‘uniform_random’, ‘tornado’, ‘bit_complement’, ‘bit_reverse’, ‘bit_rotation’, ‘neighbor’, ‘shuffle’, and ‘transpose’. --injectionrate: Traffic injection rate in packets/node/cycle. It can take any decimal value between 0 and 1. The number of digits of precision after the decimal point can be controlled by --precision, which is set to 3 by default in garnet_synth_traffic.py. --single-sender-id: Only inject from this sender. To send from all nodes, set to -1. --single-dest-id: Only send to this destination. To send to all destinations as specified by the synthetic traffic pattern, set to -1. --num-packets-max: Maximum number of packets to be injected by each cpu node. Default value is -1 (keep injecting till sim-cycles). --inj-vnet: Only inject in this vnet (0, 1 or 2). 0 and 1 are 1-flit, 2 is 5-flit. Set to -1 to inject randomly in all vnets. Implementation of Garnet synthetic traffic: The synthetic traffic injector is implemented in GarnetSyntheticTraffic.cc. The sequence of steps involved in generating and sending a packet is as follows. 
Every cycle, each cpu performs a Bernoulli trial with probability equal to --injectionrate to determine whether to generate a packet or not. If --num-packets-max is non-negative, each cpu stops generating new packets after generating --num-packets-max packets. The injector terminates after --sim-cycles. If the cpu has to generate a new packet, it computes the destination for the new packet based on the synthetic traffic type (--synthetic). This destination is embedded into the bits after the block offset in the packet address. The generated packet is randomly tagged as a ReadReq, an INST_FETCH, or a WriteReq, and sent to the Ruby Port (src/mem/ruby/system/RubyPort.hh/cc). The Ruby Port converts the packet into a RubyRequestType:LD, RubyRequestType:IFETCH, or RubyRequestType:ST, respectively, and sends it to the Sequencer, which in turn sends it to the Garnet_standalone cache controller. The cache controller extracts the destination directory from the packet address. The cache controller injects the LD, IFETCH and ST into virtual networks 0, 1 and 2 respectively. LD and IFETCH are injected as control packets (8 bytes), while ST is injected as a data packet (72 bytes). The packet traverses the network and reaches the directory. The directory controller simply drops it.",
        "url": "/documentation/general_docs/ruby/garnet_synthetic_traffic/"
      }
      ,
    
      "documentation-general-docs-ruby": {
        "title": "Introduction",
        "content": "RubyRuby implements a detailed simulation model for the memory subsystem. Itmodels inclusive/exclusive cache hierarchies with various replacementpolicies, coherence protocol implementations, interconnection networks,DMA and memory controllers, various sequencers that initiate memoryrequests and handle responses. The models are modular, flexible andhighly configurable. Three key aspects of these models are:  Separation of concerns – for example, the coherence protocolspecifications are separate from the replacement policies and cacheindex mapping, the network topology is specified separately from theimplementation.  Rich configurability – almost any aspect affecting the memoryhierarchy functionality and timing can be controlled.  Rapid prototyping – a high-level specification language, SLICC, isused to specify functionality of various controllers.The following picture, taken from the GEMS tutorial in ISCA 2005, showsa high-level view of the main components in Ruby.For a tutorial-based approach to Ruby see Part III of Learning gem5SLICC + Coherence protocols:SLICC stands for Specification Language forImplementing Cache Coherence. It is a domain specific language that isused for specifying cache coherence protocols. In essence, a cachecoherence protocol behaves like a state machine. SLICC is used forspecifying the behavior of the state machine. Since the aim is to modelthe hardware as close as possible, SLICC imposes constraints on thestate machines that can be specified. For example, SLICC can imposerestrictions on the number of transitions that can take place in asingle cycle. Apart from protocol specification, SLICC also combinestogether some of the components in the memory model. 
As can be seen in the following picture, the state machine takes its input from the input ports of the interconnection network and queues the output at the output ports of the network, thus tying together the cache / memory controllers with the interconnection network itself. The following cache coherence protocols are supported: MI_example: example protocol, 1-level cache. MESI_Two_Level: single chip, 2-level caches, strictly-inclusive hierarchy. MOESI_CMP_directory: multiple chips, 2-level caches, non-inclusive (neither strictly inclusive nor exclusive) hierarchy. MOESI_CMP_token: 2-level caches. TODO. MOESI_hammer: single chip, 2-level private caches, strictly-exclusive hierarchy. Garnet_standalone: protocol to run the Garnet network in a standalone manner. MESI Three Level: 3-level caches, strictly-inclusive hierarchy. Based on MESI Two Level with an extra L0 cache. Commonly used notations and data structures in the protocols have been described in detail here. Protocol independent memory components: Sequencer; Cache Memory; Replacement Policies; Memory Controller. In general, the cache-coherence-protocol-independent components comprise the Sequencer, the Cache Memory structure, the Cache Replacement policies and the Memory Controller. The Sequencer class is responsible for feeding the memory subsystem (including the caches and the off-chip memory) with load/store/atomic memory requests from the processor. Every memory request, when completed by the memory subsystem, also sends a response back to the processor via the Sequencer. There is one Sequencer for each hardware thread (or core) simulated in the system. The Cache Memory models a set-associative cache structure with parameterizable size, associativity and replacement policy. The L1, L2 and L3 caches (if they exist) in the system are instances of Cache Memory. The Cache Replacement policies are kept modular from the Cache Memory, so that different instances of Cache Memory can use different replacement policies of their choice. 
Currently two replacement policies – LRU and Pseudo-LRU – are distributed with the release. The Memory Controller is responsible for simulating and servicing any request that misses in all the on-chip caches of the simulated system. The Memory Controller is currently simple, but it models DRAM bank contention and DRAM refresh faithfully. It also models the close-page policy for the DRAM buffer. Interconnection Network: The interconnection network connects the various components of the memory hierarchy (cache, memory, dma controllers) together. The key components of an interconnection network are: Topology; Routing; Flow Control; Router Microarchitecture. More details about the network model implementation are described here. Alternatively, the interconnection network can be replaced with the external simulator TOPAZ. This simulator is ready to run within gem5 and adds a significant number of features over the original ruby network simulator. It includes new advanced router micro-architectures, new topologies, precision-performance adjustable router models, mechanisms to speed up network simulation, etc. Life of a memory request in Ruby: In this section we provide a high-level overview of how a memory request is serviced by Ruby as a whole and what components in Ruby it goes through. For detailed operations within each component, refer to the previous sections describing each component in isolation. A memory request from a core or hardware context of gem5 enters the jurisdiction of Ruby through the RubyPort::recvTiming interface (in src/mem/ruby/system/RubyPort.hh/cc). The number of RubyPort instantiations in the simulated system is equal to the number of hardware thread contexts or cores (in the case of non-multithreaded cores). A port on the side of each core is tied to a corresponding RubyPort. The memory request arrives as a gem5 packet, and RubyPort is responsible for converting it to a RubyRequest object that is understood by the various components of Ruby. 
It also determines whether or not the request is a PIO access, and steers the packet to the correct PIO if so. Finally, once it has generated the corresponding RubyRequest object and ascertained that the request is a normal memory request (not a PIO access), it passes the request to the Sequencer::makeRequest interface of the Sequencer object attached to the port (the variable ruby_port holds the pointer to it). Observe that the Sequencer class itself is a class derived from the RubyPort class. As mentioned in the section describing the Sequencer class of Ruby, there are as many Sequencer objects in a simulated system as the number of hardware thread contexts (which is also equal to the number of RubyPort objects in the system), and there is a one-to-one mapping between the Sequencer objects and the hardware thread contexts. Once a memory request arrives at Sequencer::makeRequest, the Sequencer does various accounting and resource allocation for the request and finally pushes the request into Ruby’s coherent cache hierarchy, while accounting for the delay in servicing it. The request is pushed into the cache hierarchy by enqueueing it into the mandatory queue after accounting for the L1 cache access latency. The mandatory queue (variable name m_mandatory_q_ptr) effectively acts as the interface between the Sequencer and the SLICC-generated cache coherence files. The L1 cache controllers (generated by SLICC according to the coherence protocol specifications) dequeue requests from the mandatory queue, look up the cache, make the necessary coherence state transitions and/or push the request to the next level of the cache hierarchy as required. The different controllers and components of the SLICC-generated Ruby code communicate among themselves through instantiations of Ruby’s MessageBuffer class (src/mem/ruby/buffers/MessageBuffer.cc/hh), which can act as an ordered or unordered buffer or queue. 
The delays in servicing the different steps for satisfying a memory request are also accounted for by scheduling the enqueueing and dequeueing operations accordingly. If the requested cache block is found in the L1 cache with the required coherence permissions, then the request is satisfied and immediately returned. Otherwise the request is pushed to the next level of the cache hierarchy through a MessageBuffer. A request can go all the way up to Ruby’s Memory Controller (also called the Directory in many protocols). Once the request gets satisfied, it is pushed upwards in the hierarchy through MessageBuffers. The MessageBuffers also act as the entry point of coherence messages into the modeled on-chip interconnect. The MessageBuffers are connected according to the specified interconnect topology, and the coherence messages travel through this on-chip interconnect accordingly. Once the requested cache block is available at the L1 cache with the desired coherence permissions, the L1 cache controller informs the corresponding Sequencer object by calling its readCallback or writeCallback method, depending on the type of the request. Note that by the time these methods on the Sequencer are called, the latency of servicing the request has been implicitly accounted for. The Sequencer then clears up the accounting information for the corresponding request and calls the RubyPort::ruby_hit_callback method. This ultimately returns the result of the request to the corresponding port of the core/hardware context of the frontend (gem5). Directory Structure: src/mem/: protocols: SLICC specifications for coherence protocols; slicc: implementation of the SLICC parser and code generator; ruby: common: frequently used data structures, e.g. 
Address (with bit-manipulation methods), histogram, data block; filters: various Bloom filters (stale code from GEMS); network: interconnect implementation, sample topology specifications, network power calculations, message buffers used for connecting controllers; profiler: profiling for cache events and memory controller events; recorder: cache warmup and access trace recording; slicc_interface: message data structures, various mappings (e.g. address to directory node), utility functions (e.g. conversion between address &amp; int, conversion of an address to a cache line address); structures: protocol-independent memory components – CacheMemory, DirectoryMemory; system: glue components – Sequencer, RubyPort, RubySystem",
        "url": "/documentation/general_docs/ruby/"
      }
      ,
    
      "documentation-general-docs-ruby-interconnection-network": {
        "title": "Interconnection network",
        "content": "Interconnection Network  The various components of the interconnection network model inside gem5’s ruby memory system are described here.  How to invoke the network  Simple network:  ./build/ALPHA/gem5.debug \\                      configs/example/ruby_random_test.py \\                      --num-cpus=16 \\                      --num-dirs=16 \\                      --network=simple \\                      --topology=Mesh_XY \\                      --mesh-rows=4  The default network is simple, and the default topology is crossbar.  Garnet network:  ./build/ALPHA/gem5.debug \\                      configs/example/ruby_random_test.py \\                      --num-cpus=16 \\                      --num-dirs=16 \\                      --network=garnet2.0 \\                      --topology=Mesh_XY \\                      --mesh-rows=4  Topology  The connections between the various controllers are specified via python files. All external links (between controllers and routers) are bi-directional. All internal links (between routers) are uni-directional – this allows a per-direction weight on each link to bias routing decisions.  Related Files:          src/mem/ruby/network/topologies/Crossbar.py      src/mem/ruby/network/topologies/CrossbarGarnet.py      src/mem/ruby/network/topologies/Mesh_XY.py      src/mem/ruby/network/topologies/Mesh_westfirst.py      src/mem/ruby/network/topologies/MeshDirCorners_XY.py      src/mem/ruby/network/topologies/Pt2Pt.py      src/mem/ruby/network/Network.py      src/mem/ruby/network/BasicLink.py      src/mem/ruby/network/BasicRouter.py        Topology Descriptions:          Crossbar: Each controller (L1/L2/Directory) is connected to a simple switch. Each switch is connected to a central switch (modeling the crossbar). This can be invoked from the command line with --topology=Crossbar.      CrossbarGarnet: Each controller (L1/L2/Directory) is connected to every other controller via one garnet router (which internally models the crossbar and allocator).
This can be invoked from the command line with --topology=CrossbarGarnet.      Mesh_*: This topology requires the number of directories to be equal to the number of cpus. The number of routers/switches is equal to the number of cpus in the system. Each router/switch is connected to one L1, one L2 (if present), and one Directory. The number of rows in the mesh has to be specified with --mesh-rows. This parameter also enables the creation of non-symmetrical meshes.                  Mesh_XY: Mesh with XY routing. All x-directional links are biased with a weight of 1, while all y-directional links are biased with a weight of 2. This forces all messages to use X-links first, before using Y-links. It can be invoked from the command line with --topology=Mesh_XY.          Mesh_westfirst: Mesh with west-first routing. All west-directional links are biased with a weight of 1; all other links are biased with a weight of 2. This forces all messages to use west-directional links first, before using other links. It can be invoked from the command line with --topology=Mesh_westfirst.                    MeshDirCorners_XY: This topology requires the number of directories to be equal to 4. The number of routers/switches is equal to the number of cpus in the system. Each router/switch is connected to one L1 and one L2 (if present). Each corner router/switch is connected to one Directory. It can be invoked from the command line with --topology=MeshDirCorners_XY. The number of rows in the mesh has to be specified with --mesh-rows. The XY routing algorithm is used.      Pt2Pt: Each controller (L1/L2/Directory) is connected to every other controller via a direct link (an all-to-all point-to-point connection). This can be invoked from the command line with --topology=Pt2Pt.      In each topology, each link and each router can independently be passed a parameter that overrides the defaults (in BasicLink.py and BasicRouter.py):  Link Parameters:          latency: latency of traversal within the link.      weight: weight associated with this link.
This parameter is used by the routing table when deciding routes, as explained next in Routing.      bandwidth_factor: Only used by the simple network, to specify the width of the link in bytes. This translates to a bandwidth multiplier (simple/SimpleLink.cc), and the individual link bandwidth becomes bandwidth multiplier x endpoint_bandwidth (specified in SimpleNetwork.py). In garnet, the bandwidth is specified by ni_flit_size in GarnetNetwork.py.        Internal Link Parameters:          src_outport: String naming the output port of the source router.      dst_inport: String naming the input port of the destination router.      These two parameters can be used by routers to implement custom routing algorithms in garnet2.0.  Router Parameters:          latency: latency of each router. Only supported by garnet2.0.      Routing  Table-based Routing (Default): Based on the topology, shortest-path graph traversals are used to populate the routing tables at each router/switch. This is done in src/mem/ruby/network/Topology.cc. The default routing algorithm is table-based and tries to choose the route with the minimum number of link traversals. Links can be given weights in the topology files to model different routing algorithms. For example, in Mesh_XY.py and MeshDirCorners_XY.py, Y-direction links are given weights of 2, while X-direction links are given weights of 1, resulting in XY traversals. In Mesh_westfirst.py, the west links are given weights of 1, and all other links are given weights of 2.
In garnet2.0, the routing algorithm randomly chooses between links with equal weights; in the simple network, it statically chooses between links with equal weights.  Custom Routing algorithms: In garnet2.0, we provide additional support for implementing custom (including adaptive) routing algorithms (see outportComputeXY() in src/mem/ruby/network/garnet2.0/RoutingUnit.cc). The src_outport and dst_inport fields of the links can be used to give custom names to each link (e.g., directions in a mesh), and these can be used inside garnet to implement any routing algorithm. A custom routing algorithm can be selected from the command line by setting --routing-algorithm=2. See configs/network/Network.py and src/mem/ruby/network/garnet2.0/GarnetNetwork.py.  Flow-Control and Router Microarchitecture  Ruby supports two network models, Simple and Garnet, which trade off simulation speed against detailed modeling, respectively.  Simple Network  The default network model in Ruby is the simple network.  Related Files:          src/mem/ruby/network/Network.py      src/mem/ruby/network/simple      src/mem/ruby/network/simple/SimpleNetwork.py      Configuration  The simple network uses the generic network parameters in Network.py:  number_of_virtual_networks: This is the maximum number of virtual networks. The actual number of active virtual networks is determined by the protocol.  control_msg_size: The size of control messages in bytes. Default is 8. m_data_msg_size in Network.cc is set to the block size in bytes + control_msg_size.  Additional parameters are specified in simple/SimpleNetwork.py:  buffer_size: Size of the buffers at each switch input and output port. A value of 0 implies infinite buffering.  endpoint_bandwidth: Bandwidth at the end points of the network, in 1000ths of a byte.  adaptive_routing: This enables adaptive routing based on the occupancy of output buffers.  Switch Model  The simple network models hop-by-hop network traversal, but abstracts
The switches are modeled insimple/PerfectSwitch.cc while the links are modeled insimple/Throttle.cc. The flow-control is implemented by monitoring theavailable buffers and available bandwidth in output links beforesending.Garnet2.0Details of the new (2016) Garnet2.0 network arehere.Running the Network with Synthetic TrafficThe interconnection networks can be run in a standalone manner and fedwith synthetic traffic. We recommend doing this with garnet2.0.Running Garnet Standalone with Synthetic Traffic",
        "url": "/documentation/general_docs/ruby/interconnection-network/"
      }
      ,
    
      "documentation-general-docs-ruby-slicc": {
        "title": "SLICC",
        "content": "SLICC  SLICC is a domain-specific language for specifying cache coherence protocols. The SLICC compiler generates C++ code for the different controllers, which can work in tandem with other parts of Ruby. The compiler also generates an HTML specification of the protocol. HTML generation is turned off by default. To enable HTML output, pass the option SLICC_HTML=True to scons when compiling.  Input To the Compiler  The SLICC compiler takes as input files that specify the controllers involved in the protocol. The .slicc file specifies the different files used by the particular protocol under consideration. For example, when specifying the MI protocol in SLICC, we may use MI.slicc as the file that lists all the files necessary for the protocol. The files necessary for specifying a protocol include the definitions of the state machines for the different controllers, and of the network messages that are passed between these controllers.  The files have a syntax similar to that of C++. The compiler, written using PLY (Python Lex-Yacc), parses these files to create an Abstract Syntax Tree (AST). The AST is then traversed to build some of the internal data structures. Finally, the compiler outputs the C++ code by traversing the tree again. The AST represents the hierarchy of the different structures present within a state machine. We describe these structures next.  Protocol State Machines  In this section we take a closer look at what goes into a file containing the specification of a state machine.  Specifying Data Members  Each state machine is described using SLICC’s machine datatype. Each machine has several different types of members. Machines for cache and directory controllers include cache memory and directory memory data members, respectively. We will use the MI protocol available in src/mem/protocol as our running example.
So here is how you might want to start writing a state machine:  machine(MachineType:L1Cache, \"MI Example L1 Cache\")  : Sequencer * sequencer,    CacheMemory * cacheMemory,    int cache_response_latency = 12,    int issue_latency = 2 {      // Add rest of the stuff    }  In order to let the controller receive messages from the different entities in the system, the machine has a number of MessageBuffers. These act as input and output ports for the machine. Here is an example specifying the output ports:  MessageBuffer requestFromCache, network=\"To\", virtual_network=\"2\", ordered=\"true\"; MessageBuffer responseFromCache, network=\"To\", virtual_network=\"4\", ordered=\"true\";  Note that Message Buffers have some attributes that need to be specified correctly. Another example, this time for specifying the input ports:  MessageBuffer forwardToCache, network=\"From\", virtual_network=\"3\", ordered=\"true\"; MessageBuffer responseToCache, network=\"From\", virtual_network=\"4\", ordered=\"true\";  Next, the machine includes a declaration of the states that the machine can possibly reach. In a cache coherence protocol, states can be of two types – stable and transient. A cache block is said to be in a stable state if, in the absence of any activity (an incoming request for the block from another controller, for example), the cache block would remain in that state forever. Transient states are required for transitioning between stable states. They are needed whenever the transition between two stable states cannot be done in an atomic fashion. Next is an example that shows how states are declared.
SLICC has a keyword state_declaration that has to be used for declaring states:  state_declaration(State, desc=\"Cache states\") {   I, AccessPermission:Invalid, desc=\"Not Present/Invalid\";   II, AccessPermission:Busy, desc=\"Not Present/Invalid, issued PUT\";   M, AccessPermission:Read_Write, desc=\"Modified\";   MI, AccessPermission:Busy, desc=\"Modified, issued PUT\";   MII, AccessPermission:Busy, desc=\"Modified, issued PUTX, received nack\";   IS, AccessPermission:Busy, desc=\"Issued request for LOAD/IFETCH\";   IM, AccessPermission:Busy, desc=\"Issued request for STORE/ATOMIC\";}  The states I and M are the only stable states in this example. Again, note that certain attributes have to be specified with the states.  The state machine needs to specify the events it can handle and thus transition from one state to another. SLICC provides the keyword enumeration, which can be used for specifying the set of possible events. An example to shed more light on this:  enumeration(Event, desc=\"Cache events\") {   // From processor   Load,       desc=\"Load request from processor\";   Ifetch,     desc=\"Ifetch request from processor\";   Store,      desc=\"Store request from processor\";   Data,       desc=\"Data from network\";   Fwd_GETX,        desc=\"Forward from network\";   Inv,        desc=\"Invalidate request from dir\";   Replacement,  desc=\"Replace a block\";   Writeback_Ack,   desc=\"Ack from the directory for a writeback\";   Writeback_Nack,   desc=\"Nack from the directory for a writeback\";}  While developing a protocol machine, we may need to define structures that represent different entities in the memory system. SLICC provides the keyword structure for this purpose.
An example follows:  structure(Entry, desc=\"...\", interface=\"AbstractCacheEntry\") {   State CacheState,        desc=\"cache state\";   bool Dirty,              desc=\"Is the data dirty (different than memory)?\";   DataBlock DataBlk,       desc=\"Data in the block\";}  The cool thing about using SLICC’s structure is that it automatically generates the get and set functions on the different fields for you. It also writes a nice print function and overloads the &lt;&lt; operator. But in case you would prefer to do everything on your own, you can make use of the keyword external in the declaration of the structure. This prevents SLICC from generating C++ code for the structure:  structure(TBETable, external=\"yes\") {   TBE lookup(Address);   void allocate(Address);   void deallocate(Address);   bool isPresent(Address);}  In fact, many predefined types exist in the src/mem/protocol/RubySlicc_*.sm files. You can make use of them, or if you need new types, you can define new ones as well. You can also use the keyword interface to make use of the inheritance features available in C++. Note that currently SLICC supports public inheritance only.  We can also declare and define functions as we do in C++. There are certain functions that the compiler expects to always be defined by the controller. These include  getState()  setState()  Input for the Machine  Since a protocol is a state machine, we need to specify how the machine transitions from one state to another on receiving inputs. As mentioned before, each machine has several input and output ports. For each input port, the in_port keyword is used for specifying the behavior of the machine when a message is received on that input port.
An example follows that shows the syntax for declaring an input port:  in_port(mandatoryQueue_in, RubyRequest, mandatoryQueue, desc=\"...\") {  if (mandatoryQueue_in.isReady()) {    peek(mandatoryQueue_in, RubyRequest, block_on=\"LineAddress\") {      Entry cache_entry := getCacheEntry(in_msg.LineAddress);      if (is_invalid(cache_entry) &amp;&amp;          cacheMemory.cacheAvail(in_msg.LineAddress) == false ) {        // make room for the block        trigger(Event:Replacement, cacheMemory.cacheProbe(in_msg.LineAddress),                getCacheEntry(cacheMemory.cacheProbe(in_msg.LineAddress)),                TBEs[cacheMemory.cacheProbe(in_msg.LineAddress)]);      }      else {        trigger(mandatory_request_type_to_event(in_msg.Type), in_msg.LineAddress,                cache_entry, TBEs[in_msg.LineAddress]);      }    }  }}  As you can see, in_port takes multiple arguments. The first argument, mandatoryQueue_in, is the identifier for the in_port that is used in the file. The next argument, RubyRequest, is the type of the messages that this input port receives. Each input port uses a queue to store its messages; the name of the queue is the third argument.  The keyword peek is used to extract messages from the queue of the input port. The use of this keyword implicitly declares a variable in_msg, which is of the same type as specified in the input port’s declaration. This variable points to the message at the head of the queue. It can be used for accessing the fields of the message, as shown in the code above.  Once the incoming message has been analyzed, it is time to use this message to take some appropriate action and change the state of the machine. This is done using the keyword trigger. The trigger function is actually used only in SLICC code and is not present in the generated code. Instead, this call is converted into a call to the doTransition() function, which appears in the generated code.
The doTransition() function is automatically generated by SLICC for each of the state machines. The number of arguments to trigger depends on the machine itself. In general, the input arguments for trigger are the type of the message that needs to be processed, the address the message is meant for, and the cache and transaction buffer entries for that address. trigger also increments a counter that is checked before a transition is made. In one ruby cycle, there is a limit on the number of transitions that can be carried out. This is done to more closely resemble a hardware-based state machine. @TODO: What happens if there are no more transitions left? Does the wakeup abort?  Actions  In this section we will go over how the actions that a state machine can carry out are defined. These actions are called into action when the state machine receives some input message, which is then used to make a transition. Let’s go over an example of how the keyword action can be used:  action(a_issueRequest, \"a\", desc=\"Issue a request\") {   enqueue(requestNetwork_out, RequestMsg, latency=issue_latency) {   out_msg.Address := address;     out_msg.Type := CoherenceRequestType:GETX;     out_msg.Requestor := machineID;     out_msg.Destination.add(map_Address_to_Directory(address));     out_msg.MessageSize := MessageSizeType:Control;   }}  The first input argument is the name of the action, the next argument is the abbreviation used for generating the documentation, and the last one is the description of the action, which is used in the HTML documentation and as a comment in the C++ code.  Each action is converted into a C++ function of that name. The generated C++ code implicitly includes up to three input parameters in the function header, again depending on the machine. These arguments are the memory address on which the action is being taken, and the cache and transaction buffer entries pertaining to this address.  The next useful thing to look at is the enqueue keyword.
This keyword is used for queuing a message, generated as a result of the action, to an output port. The keyword takes three input arguments, namely the name of the output port, the type of the message to be queued, and the latency after which this message can be dequeued. Note that if randomization is enabled, the specified latency is ignored. The use of the keyword implicitly declares a variable out_msg, which is populated by the statements that follow.  Transitions  A transition function is a mapping from the cross product of the set of states and the set of events to the set of states. SLICC provides the keyword transition for specifying the transition function for state machines. An example follows:  transition(IM, Data, M) {   u_writeDataToCache;   sx_store_hit;   w_deallocateTBE;   n_popResponseQueue;}  In this example, the initial state is IM. If an event of type Data occurs in that state, then the final state is M. Before making the transition, the state machine can perform certain actions on the structures that it maintains. In the given example, u_writeDataToCache is an action. All these operations are performed in an atomic fashion, i.e. no other event can occur before the set of actions specified with the transition has been completed.  For ease of use, sets of events and states can be provided as input to transition. The cross product of these sets will map to the same final state. Note that the final state cannot be a set. If, for a particular event, the final state is the same as the initial state, then the final state can be omitted:  transition({IS, IM, MI, II}, {Load, Ifetch, Store, Replacement}) {   z_stall;}  Special Functions  Stalling/Recycling/Waiting input ports  One of the more complicated internal features of SLICC and the resulting state machines is how they deal with the situation where events cannot be processed because the cache block is in a transient state. There are several possible ways to deal with this situation, and each solution has different tradeoffs.
This sub-section attempts to explain the differences. Please email the gem5-user list for further follow-up.  Stalling the input port  The simplest way to handle events that can’t be processed is to simply stall the input port. The correct way to do this is to include the z_stall action within the transition statement:  transition({IS, IM, MI, II}, {Load, Ifetch, Store, Replacement}) {   z_stall;}  Internally, SLICC will return a ProtocolStall for this transition, and no subsequent messages from the associated input port will be processed until the stalled message is processed. However, the other input ports will still be analyzed for ready messages and processed in parallel. While this is a relatively simple solution, one may notice that stalling unrelated messages on the same input port will cause excessive and unnecessary stalls.  One thing to note: do NOT leave the transition statement blank, like so:  transition({IS, IM, MI, II}, {Load, Ifetch, Store, Replacement}) {   // stall the input port by simply not popping the message}  This will cause SLICC to return success for this transition, and SLICC will continue to repeatedly analyze the same input port. The result is eventual deadlock.  Recycling the input port  A better-performing but less realistic solution is to recycle the stalled message on the input port. The way to do this is to use the zz_recycleMandatoryQueue action:  action(zz_recycleMandatoryQueue, \"\\z\", desc=\"Send the head of the mandatory queue to the back of the queue.\") {   mandatoryQueue_in.recycle();}  transition({IS, IM, MI, II}, {Load, Ifetch, Store, Replacement}) {   zz_recycleMandatoryQueue;}  The result of this action is that the transition returns a ProtocolStall and the offending message is moved to the back of the FIFO input port. Therefore, other unrelated messages on the same input port can be processed.
The problem with this solution is that recycled messages may be analyzed and reanalyzed every cycle until the address changes state.  Stall and wait the input port  An even better, but more complicated, solution is to “stall and wait” the offending input message. The way to do this is to use the z_stallAndWaitMandatoryQueue action:  action(z_stallAndWaitMandatoryQueue, \"\\z\", desc=\"recycle L1 request queue\") {   stall_and_wait(mandatoryQueue_in, address);}  transition({IS, IM, IS_I, M_I, SM, SINK_WB_ACK}, {Load, Ifetch, Store, L1_Replacement}) {   z_stallAndWaitMandatoryQueue;}  The result of this action is that the transition returns success, which is ok because stall_and_wait moves the offending message off the input port and into a side table associated with the input port. The message will not be analyzed again until it is woken up. In the meantime, other unrelated messages can be processed.  The complicated part of stall and wait is that stalled messages must be explicitly woken up by other messages/transitions. In particular, transitions that move an address to a base state should wake up potentially stalled messages waiting for that address:  action(kd_wakeUpDependents, \"kd\", desc=\"wake-up dependents\") {   wakeUpBuffers(address);}  transition(M_I, WB_Ack, I) {   s_deallocateTBE;   o_popIncomingResponseQueue;   kd_wakeUpDependents;}  Replacements are particularly complicated, since stalled addresses are not associated with the same address they are actually waiting to change. In those situations, all waiting messages must be woken up:  action(ka_wakeUpAllDependents, \"ka\", desc=\"wake-up all dependents\") {   wakeUpAllBuffers();}  transition(I, L2_Replacement) {   rr_deallocateL2CacheBlock;   ka_wakeUpAllDependents;}  Other Compiler Features      SLICC supports conditional statements in the form of if and else. Note that SLICC does not support else if.        Each function has a return type, which can be void as well. Returned values cannot be ignored.        
SLICC has limited support for pointer variables. The is_valid() and is_invalid() operations are supported for testing whether a given pointer is not NULL or is NULL, respectively. The keyword OOD, which stands for Out of Domain, plays the role of the keyword NULL used in C++.        SLICC does not support ! (the not operator).        Static type casting is supported in SLICC. The keyword static_cast has been provided for this purpose. For example, in the following piece of code, a variable of type AbstractCacheEntry is being cast to a variable of type Entry:     Entry L1Dcache_entry := static_cast(Entry, \"pointer\", L1DcacheMemory[addr]);  SLICC Internals  C++ to Slicc Interface - @note: What do each of these files do/define???  src/mem/protocol/RubySlicc_interfaces.sm          RubySlicc_Exports.sm      RubySlicc_Defines.sm      RubySlicc_Profiler.sm      RubySlicc_Types.sm      RubySlicc_MemControl.sm      RubySlicc_ComponentMapping.sm      Variable Assignments  Use the := operator to assign members in a class (e.g. a member defined in RubySlicc_Types.sm):          an automatic m_ is added to the name mentioned in the SLICC file.      ",
        "url": "/documentation/general_docs/ruby/slicc/"
      }
      ,
    
      "documentation-general-docs-statistics": {
        "title": "Statistics",
        "content": "Stats Package  The philosophy of the stats package at the moment is to have a single base class called Stat, which is merely a hook into every other aspect of a stat that may be important. Thus, this Stat base class has virtual functions to name, set precision for, set flags for, and initialize size for all the stats. For all Vector-based stats, it is very important to do the initialization before using the stat so that appropriate storage allocation can occur. For all other stats, naming and flag setting are also important, but not as important for the actual proper execution of the binary. The way this is set up in the code is to have a regStats() pass in which all stats can be registered in the stats database and initialized. Thus, to add your own stats, just add them to the appropriate class’ data member list, and be sure to initialize/register them in that class’ regStats function.  Here is a list of the various initialization functions. Note that all of these return a Stat&amp; reference, thus enabling a clean-looking way of calling them all.  init(various args) //this differs for different types of stats.          Average: does not have an init()      Vector: init(size_t) //indicates size of vector      AverageVector: init(size_t) //indicates size of vector      Vector2d: init(size_t x, size_t y) //rows, columns      Distribution: init(min, max, bkt) //min refers to the minimum value, max the maximum value, and bkt the size of the buckets. In other words, if you have min=0, max=15, and bkt=8, then 0-7 will go into bucket 0, and 8-15 will go into bucket 1.      StandardDeviation: does not have an init()      AverageDeviation: does not have an init()      VectorDistribution: init(size, min, max, bkt) //the size refers to the size of the vector; the rest are the same as for Distributions.      
VectorStandardDeviation: init(size) //size refers to size of the vector      VectorAverageDeviation: init(size) //size refers to size of the vector      Formula: does not have an init()        name(const std::string name) //the name of the stat  desc(const std::string desc) //a brief description of the stat  precision(int p) //p refers to how many places after the decimal point to go. p=0 will force rounding to integers.  prereq(const Stat &amp;prereq) //this indicates that this stat should not be printed unless prereq has a non-zero value. (e.g. if there are 0 cache accesses, don’t print cache misses, hits, etc.)  subname(int index, const std::string subname) //this is for Vector-based stats, to give a subname to each index of the vector.  subdesc(int index, const std::string subname) //also for Vector-based stats, to give each index a subdesc. For 2d Vectors, the subname goes to each of the rows (x’s). The y’s can be named using a Vector2d member function ysubname; see code for details.  flags(FormatFlags f) //these are various flags you can pass to the stat, which I’ll describe below.  none – no special formatting  total – this is for Vector-based stats; if this flag is set, the total across the Vector will be printed at the end (for those stats where this is supported).  pdf – This will print the probability distribution of a stat  nozero – This will not print the stat if its value is zero  nonan – This will not print the stat if it’s Not a Number (nan).  cdf – This will print the cumulative distribution of a stat  Below is an example of how to initialize a VectorDistribution:    vector_dist.init(4,0,5,2)        .name(\"Dummy Vector Dist\")        .desc(\"there are 4 distributions with buckets 0-1, 2-3, 4-5\")        .flags(nonan | pdf)        ;  Stat Types  Scalar  The most basic stat is the Scalar. This embodies the basic counting stat. It is a templatized stat and takes two parameters, a type and a bin. The default type is a Counter, and the default bin is NoBin (i.e.
there is no binning on this stat). Its usage is straightforward: to assign a value to it, just say foo = 10;, or to increment it, just use ++ or += as for any other type.  Average  This is a “special use” stat, geared toward calculating the average of something over the number of cycles in the simulation. This stat is best explained by example. If you wanted to know the average occupancy of the load-store queue over the course of the simulation, you’d need to accumulate the number of instructions in the LSQ each cycle and at the end divide it by the number of cycles. For this stat, there may be many cycles where there is no change in the LSQ occupancy. Thus, you can use this stat, where you only need to explicitly update it when there is a change in the LSQ occupancy; the stat takes care of itself for cycles where there is no change. This stat can be binned and is also templatized the same way Stat is.  Vector  A Vector is just what it sounds like, a vector of type T in the template parameters. It can also be binned. The most natural use of a Vector is for something like tracking some stat over a number of SMT threads. A Vector of size n can be declared just by saying Vector&lt;&gt; foo; and later initializing the size to n. At that point, foo can be accessed as if it were a regular vector or array, like foo[7]++.  AverageVector  An AverageVector is just a Vector of Averages.  Vector2d  A Vector2d is a 2-dimensional vector. It can be named in both the x and y directions, though the primary name is given across the x-dimension. To name in the y-dimension, use the special ysubname function only available to Vector2d’s.  Distribution  This is essentially a Vector, but with minor differences. Whereas in a Vector the index maps to the item of interest for that bucket, in a Distribution you can map different ranges of interest to a bucket.
Basically, if you had the bkt parameter of init for a Distribution = 1, you might as well use a Vector.  StandardDeviation  This stat calculates the standard deviation over the number of cycles in the simulation. It’s similar to Average in that it has behavior built into it, but it needs to be updated every cycle.  AverageDeviation  This stat also calculates the standard deviation, but it does not need to be updated every cycle, much like Average. It handles cycles where there is no change itself.  VectorDistribution  This is just a vector of distributions.  VectorStandardDeviation  This is just a vector of standard deviations.  VectorAverageDeviation  This is just a vector of AverageDeviations.  Histogram  This stat puts each sampled value into one bin out of a configurable number of bins. All bins form a contiguous interval and are of equal length. The length of the bins is dynamically extended if there is a sample value which does not fit into one of the existing bins.  SparseHistogram  This stat is similar to a histogram, except that it can only sample natural numbers. SparseHistogram is suitable, for example, for counting the number of accesses to memory addresses.  Formula  This is a Formula stat. This is for anything that requires calculations at the end of the simulation, for example something that is a rate. An example of defining a Formula would be:    Formula foo = bar + 10 / num;  There are a few subtleties to Formula. If bar and num are both stats (including the Formula type), then there is no problem. If bar or num are regular variables, then they must be qualified with constant(bar). This is essentially a cast. If you want to use the value of bar or num at the moment of definition, then use constant(). If you want to use the value of bar or num at the moment the formula is calculated (i.e. at the end), define num as a Scalar. If num is a Vector, use sum(num) to calculate its sum for the formula. The operation scalar(num), which cast a regular variable to a Scalar, no longer exists.",
        "url": "/documentation/general_docs/statistics/"
      }
      ,
    
      "documentation-general-docs-thermal-model": {
        "title": "Power and Thermal Model",
        "content": "Power and Thermal ModelThis document gives an overview of the power and thermal modellinginfrastructure in Gem5.The purpose is to give a high level view of all the pieces involved and howthey interact with each other and the simulator.Class overviewClasses involved in the power model are:  PowerModel:Represents a power model for a hardware component.  PowerModelState: Represents apower model for a hardware component in a certain power state. It is anabstract class that defines an interface that must be implemented for eachmodel.  MathExprPowerModel: Simpleimplementation of PowerModelState that assumesthat power can be modeled using a simple power.Classes involved in the thermal model are:  ThermalModel:Contains the system thermal model logic and state. It performs the power queryand temperature update. It also enables gem5 to query for temperature (for OSreporting).  ThermalDomain:Represents an entity that generates heat. It’s essentially a group ofSimObjects groupedunder a SubSystem component that have its own thermal behaviour.  ThermalNode:Represents a node in the thermal circuital equivalent. The node has atemperature and interacts with other nodes through connections (thermalresistors and capacitors).  ThermalReference: Temperaturereference for the thermal model (essentially a thermal node with a fixedtemperature), can be used to model air or any other constant temperaturedomains.  ThermalEntity:A thermal component that connects two thermal nodes and models a thermalimpedance between them. This class is just an abstract interface.  ThermalResistor: ImplementsThermalEntity tomodel a thermal resistance between the two nodes it connects. Thermalresistances model the capacity of a material to transfer heat (units in K/W).  ThermalCapacitor: ImplementsThermalEntity tomodel a thermal capacitance. 
Thermal capacitors are used to model a material’s thermal capacitance, that is, the ability to change a certain material’s temperature (units in J/K). Thermal model: The thermal model works by creating a circuit equivalent of the simulated platform. Each node in the circuit has a temperature (as the voltage equivalent), and power flows between nodes (as current does in a circuit). To build this equivalent temperature model, the platform is required to group the power actors (any component that has a power model) under SubSystems and attach ThermalDomains to those subsystems. Other components might also be created (like ThermalReferences) and all connected together by creating thermal entities (capacitors and resistors). The last step to conclude the thermal model is to create the ThermalModel instance itself and attach all the instances used to it, so it can properly update them at runtime. Only one thermal model instance is supported right now, and it will automatically report temperature when appropriate (i.e., to platform sensor devices). Power model: Every ClockedObject has a power model associated with it. If this power model is non-null, power will be calculated at every stats dump (although it might be possible to force power evaluation at any other point, if the power model uses the stats it is a good idea to keep both events in sync). The definition of a power model is quite vague in the sense that it is as flexible as users want it to be. The only constraint enforced so far is that a power model has several power state models, one for each possible power state for that hardware block. When it comes to computing power consumption, the power is just the weighted average over the power state models. A power state model is essentially an interface that allows us to define two power functions, for dynamic and static power. As an example implementation, a class called MathExprPowerModel has been provided. This implementation allows the user to define a power model as an equation involving several statistics. 
There are also some automatic (or “magic”) variables, such as “temp”, which reports the temperature.",
        "url": "/documentation/general_docs/thermal_model"
      }
      ,
    
      "documentation": {
        "title": "gem5 documentation",
        "content": "gem5 DocumentationLearning gem5Learning gem5 gives a prose-heavy introduction to using gem5 for computer architecture research written by Jason Lowe-Power.This is a great resource for junior researchers who plan on using gem5 heavily for a research project.It covers details of how gem5 works starting with how to create configuration scripts.It then goes on to describe how to modify and extend gem5 for your research including creating SimObjects, using gem5’s event-driven simulation infrastructure, and adding memory system objects.In Learning gem5 Part 3 the Ruby cache coherence model is discussed in detail including a full implementation of an MSI cache coherence protocol.More Learning gem5 parts are coming soon including:  CPU models and ISAs  Debugging gem5  Your idea here!Note: this has been migrated from learning.gem5.org and there are minor problems due to this migration (e.g., missing links, bad formatting).Please contact Jason (jason@lowepower.com) or create a PR if you find any errors!gem5 101gem5 101 is a set of assignments mostly from Wisconsin’s graduate computer architecture classes (CS 752, CS 757, and CS 758) which will help you learn to use gem5 for research.gem5 API documentationYou can find the doxygen-based documentation here: https://gem5.github.io/gem5-doxygen/Other general gem5 documentationSee the navigation on the left side of the page!",
        "url": "/documentation/"
      }
      ,
    
      "documentation-learning-gem5-gem5-101": {
        "title": "gem5 101",
        "content": "gem5 101This is a six part course which will help you pick up the basics of gem5, andillustrate some common uses. This course is based around the assignments from aparticular offering of architecture courses, CS 752 and CS 757, taught at theUniversity of Wisconsin-Madison.First steps with gem5, and Hello World!Part IIn part I, you will first learn to download and build gem5 correctly, create a simple configuration script for a simple system, write a simple C program and run a gem5 simulation. You will then introduce a two-level cache hierarchy in your system (fun stuff). Finally, you get to view the effect of changing system parameters such as memory types, processor frequency and complexity on the performance of your simple program.Getting down and dirtyPart IIFor part II, we had used gem5 capabilities straight out of the box. Now, we will witness the flexibility and usefulness of gem5 by extending the simulator functionality. We walk you through the implementation of an x86 instruction (FSUBR), which is currently missing from gem5. This will introduce you to gem5’s language for describing instruction sets, and illustrate how instructions are decoded and broken down into micro-ops which are ultimately executed by the processor.Pipelining solves everythingPart IIIFrom the ISA, we now move on to the processor micro-architecture. Part III introduces the various different cpu models implemented in gem5, and analyzes the performance of a pipelined implementation. Specifically, you will learn how the latency and bandwidth of different pipeline stages affect overall performance. Also, a sample usage of gem5 pseudo-instructions is also included at no additional cost.Always be experimentingPart IVExploiting instruction-level parallelism (ILP) is a useful way of improving single-threaded performance. Branch prediction and predication are two common techniques of exploiting ILP. 
In this part, we use gem5 to verify the hypothesis that graph algorithms that avoid branches perform better than algorithms that use branches. This is a useful exercise in understanding how to incorporate gem5 into your research process. Cold, hard, cache. Part V: After looking at the processor core, we now turn our attention to the cache hierarchy. We continue our focus on experimentation, and consider tradeoffs in cache design such as replacement policies and set-associativity. Furthermore, we also learn more about the gem5 simulator, and create our first SimObject! Single-core is so two-thousand and late. Part VI: For this last part, we go both multi-core and full-system at the same time! We analyze the performance of a simple application when giving it more computational resources (cores). We also boot a full-fledged unmodified operating system (Linux) on the target system simulated by gem5. Most importantly, we teach you how to create your own, simpler version of the dreaded fs.py configuration script, one that you can feel comfortable modifying. Complete! Congrats, you are now familiar with the fundamentals of gem5. You are now allowed to wear the “Bro, do you even gem5?” t-shirt (if you manage to find one). Credits: A lot of people have been involved over the years in developing the assignments for these courses. If we have missed anyone, please add them here.  Multifacet research group at the University of Wisconsin-Madison  Profs Mark Hill, David Wood  Jason Lowe-Power  Nilay Vaish  Lena Olson  Swapnil Haria  Jayneel Gandhi Any questions or queries regarding this tutorial should be directed to the gem5-users mailing list, and not the individual contacts listed in the assignment.",
        "url": "/documentation/learning_gem5/gem5_101/"
      }
      ,
    
      "documentation-learning-gem5-introduction": {
        "title": "Learning gem5",
        "content": "IntroductionThe goal of this document is to give you a thoroughintroduction on how to use gem5 and the gem5 codebase. The purpose ofthis document is not to provide a detailed description of every featurein gem5. After reading this document, you should feel comfortable usinggem5 in the classroom and for computer architecture research.Additionally, you should be able to modify and extend gem5 and thencontribute your improvements to the main gem5 repository.This document is colored by my personal experiences with gem5 over thepast six years as a graduate student at the University ofWisconsin-Madison. The examples presented are just one way to do it.Unlike Python, whose mantra is “There should be one– and preferablyonly one –obvious way to do it.” (from The Zen of Python. SeeThe Zen of Python), in gem5 there are a number of different ways toaccomplish the same thing. Thus, many of the examples presented in thisbook are my opinion of the best way to do things.One important lesson I have learned (the hard way) is when using complextools like gem5, it is important to actually understand how it worksbefore using it.Finish the previous paragraph about how it is a good idea to understandwhat your tools are actually doing.should add a list of terms. Things like “simulated system” vs “hostsystem”, etc.You can find the source for this book on our athttps://gem5.googlesource.com/public/gem5-website/+/refs/heads/master/_pages/documentation/learning_gem5/.What is gem5?gem5 is a modular discrete event driven computer system simulator platform. That means that:  gem5’s components can be rearranged, parameterized, extended or replaced easily to suit your needs.  It simulates the passing of time as a series of discrete events.  Its intended use is to simulate one or more computer systems in various ways.  
It’s more than just a simulator; it’s a simulator platform that lets you use as many of its premade components as you want to build up your own simulation system. gem5 is written primarily in C++ and Python, and most components are provided under a BSD-style license. It can simulate a complete system with devices and an operating system in full-system mode (FS mode), or user-space programs only, where system services are provided directly by the simulator, in syscall emulation mode (SE mode). There are varying levels of support for executing Alpha, ARM, MIPS, Power, SPARC, RISC-V, and 64-bit x86 binaries on CPU models including two simple single-CPI models, an out-of-order model, and an in-order pipelined model. A memory system can be flexibly built out of caches and crossbars, or with the Ruby simulator, which provides even more flexible memory system modeling. There are many components and features not mentioned here, but from just this partial list it should be obvious that gem5 is a sophisticated and capable simulation platform. Even with all gem5 can do today, active development continues through the support of individuals and some companies, and new features are added and existing features improved on a regular basis. Capabilities out of the box: gem5 is designed for use in computer architecture research, but if you’re trying to research something new and novel it probably won’t be able to evaluate your idea out of the box. If it could, that probably means someone has already evaluated a similar idea and published about it. To get the most out of gem5, you’ll most likely need to add new capabilities specific to your project’s goals. gem5’s modular design should help you make modifications without having to understand every part of the simulator. As you add the new features you need, please consider contributing your changes back to gem5. 
That way others can take advantage of your hard work, and gem5 can become an even better simulator. Asking for help: gem5 has two main mailing lists where you can ask for help or advice. gem5-dev is for developers who are working on the main version of gem5. This is the version that’s distributed from the website and most likely what you’ll base your own work off of. gem5-users is a larger mailing list and is for people working on their own projects which are not, at least initially, going to be distributed as part of the official version of gem5. Most of the time, gem5-users is the right mailing list to use. Most of the people on gem5-dev are also on gem5-users, including all the main developers, and in addition many other members of the gem5 community will see your post. That helps you because they might be able to answer your question, and it also helps them because they’ll be able to see the answers people send you. To find more information about the mailing lists, to sign up, or to look through archived posts, visit Mailing Lists. Before reporting a problem on the mailing list, please read Reporting Problems.",
        "url": "/documentation/learning_gem5/introduction/"
      }
      ,
    
      "documentation-learning-gem5-part1-building": {
        "title": "Building gem5",
        "content": "Building gem5This chapter covers the details of how to set up a gem5 developmmentenvironment and build gem5.Requirements for gem5See gem5 requirementsfor more details.On Ubuntu, you can install all of the required dependencies with thefollowing command. The requirements are detailed below.sudo apt install build-essential git m4 scons zlib1g zlib1g-dev libprotobuf-dev protobuf-compiler libprotoc-dev libgoogle-perftools-dev python-dev python            git (Git):      The gem5 project uses Git for versioncontrol. Git is a distributed versioncontrol system. More information aboutGit can be found by following the link.Git should be installed by default on most platforms. However,to install Git in Ubuntu use        sudo apt install git                                gcc 4.8+      You may need to use environment variables to point to anon-default version of gcc.        On Ubuntu, you can install a development environment with        sudo apt install build-essential                                SCons      gem5 uses SCons as its build environment. SCons is like make onsteroids and uses Python scripts for all aspects of the buildprocess. This allows for a very flexible (if slow) build system.        To get SCons on Ubuntu use        sudo apt install scons                                Python 2.7+      gem5 relies on the Python development libraries. To installthese on Ubuntu use        sudo apt install python-dev                                protobuf 2.1+      “Protocol buffers are a language-neutral, platform-neutralextensible mechanism for serializing structured data.” In gem5,the protobuflibrary is used for trace generation and playback.protobuf isnot a required package, unless you plan on using it for tracegeneration and playback.        sudo apt install libprotobuf-dev python-protobuf protobuf-compiler libgoogle-perftools-dev                          Boost (Optional) : The Boost library is a set     of general purpose C++ libraries. 
It is a necessary dependency if you wish to use the SystemC implementation. sudo apt install libboost-all-dev  Getting the code: Change directories to where you want to download the gem5 source. Then, to clone the repository, use the git clone command. git clone https://gem5.googlesource.com/public/gem5 You can now change directories to gem5, which contains all of the gem5 code. Your first gem5 build: Let’s start by building a basic x86 system. Currently, you must compile gem5 separately for every ISA that you want to simulate. Additionally, if using Ruby (see ruby-intro-chapter), you have to have separate compilations for every cache coherence protocol. To build gem5, we will use SCons. SCons uses the SConstruct file (gem5/SConstruct) to set up a number of variables, and then uses the SConscript file in every subdirectory to find and compile all of the gem5 source. SCons automatically creates a gem5/build directory when first executed. In this directory you’ll find the files generated by SCons, the compiler, etc. There will be a separate directory for each set of options (ISA and cache coherence protocol) that you use to compile gem5. There are a number of default compilation options in the build_opts directory. These files specify the parameters passed to SCons when initially building gem5. We’ll use the X86 defaults and specify that we want to compile all of the CPU models. You can look at the file build_opts/X86 to see the default values for the SCons options. You can also specify these options on the command line to override any default. scons build/X86/gem5.opt -j9  gem5 binary types: The SCons scripts in gem5 currently have 5 different binaries you can build for gem5: debug, opt, fast, prof, and perf. These names are mostly self-explanatory, but are detailed below.  debug: Built with no optimizations and debug symbols. This binary is useful when using a debugger, in case the variables you need to view are optimized out in the opt version of gem5. 
Running with debug is slow compared to the other binaries.  opt: This binary is built with most optimizations on (e.g., -O3), but with debug symbols included. This binary is much faster than debug, but still contains enough debug information to be able to debug most problems.  fast: Built with all optimizations on (including link-time optimizations on supported platforms) and with no debug symbols. Additionally, any asserts are removed, but panics and fatals are still included. fast is the highest-performing binary, and is much smaller than opt. However, fast is only appropriate when you feel that it is unlikely your code has major bugs.  prof and perf: These two binaries are built for profiling gem5. prof includes profiling information for the GNU profiler (gprof), and perf includes profiling information for the Google performance tools (gperftools).  The main argument passed to SCons is what you want to build, build/X86/gem5.opt. In this case, we are building gem5.opt (an optimized binary with debug symbols). We want to build gem5 in the directory build/X86. Since this directory currently doesn’t exist, SCons will look in build_opts to find the default parameters for X86. (Note: I’m using -j9 here to execute the build on 9 threads across the 8 cores on my machine. You should choose an appropriate number for your machine, usually cores+1.) The output should look something like below: Checking for C header file Python.h... yes Checking for C library pthread... yes Checking for C library dl... yes Checking for C library util... yes Checking for C library m... yes Checking for C library python2.7... yes Checking for accept(0,0,0) in C++ library None... yes Checking for zlibVersion() in C++ library z... yes Checking for GOOGLE_PROTOBUF_VERIFY_VERSION in C++ library protobuf... yes Checking for clock_nanosleep(0,0,NULL,NULL) in C library None... yes Checking for timer_create(CLOCK_MONOTONIC, NULL, NULL) in C library None... no Checking for timer_create(CLOCK_MONOTONIC, NULL, NULL) in C library rt... 
yes Checking for C library tcmalloc... yes Checking for backtrace_symbols_fd((void*)0, 0, 0) in C library None... yes Checking for C header file fenv.h... yes Checking for C header file linux/kvm.h... yes Checking size of struct kvm_xsave... yes Checking for member exclude_host in struct perf_event_attr... yes Building in /local.chinook/gem5/gem5-tutorial/gem5/build/X86 Variables file /local.chinook/gem5/gem5-tutorial/gem5/build/variables/X86 not found, using defaults in /local.chinook/gem5/gem5-tutorial/gem5/build_opts/X86 scons: done reading SConscript files. scons: Building targets ... [ISA DESC] X86/arch/x86/isa/main.isa -&gt; generated/inc.d [NEW DEPS] X86/arch/x86/generated/inc.d -&gt; x86-deps [ENVIRONS] x86-deps -&gt; x86-environs [     CXX] X86/sim/main.cc -&gt; .o .... .... &lt;lots of output&gt; .... [   SHCXX] nomali/lib/mali_midgard.cc -&gt; .os [   SHCXX] nomali/lib/mali_t6xx.cc -&gt; .os [   SHCXX] nomali/lib/mali_t7xx.cc -&gt; .os [      AR]  -&gt; drampower/libdrampower.a [   SHCXX] nomali/lib/addrspace.cc -&gt; .os [   SHCXX] nomali/lib/mmu.cc -&gt; .os [  RANLIB]  -&gt; drampower/libdrampower.a [   SHCXX] nomali/lib/nomali_api.cc -&gt; .os [      AR]  -&gt; nomali/libnomali.a [  RANLIB]  -&gt; nomali/libnomali.a [     CXX] X86/base/date.cc -&gt; .o [    LINK]  -&gt; X86/gem5.opt scons: done building targets. When compilation is finished you should have a working gem5 executable at build/X86/gem5.opt. The compilation can take a very long time, often 15 minutes or more, especially if you are compiling on a remote file system like AFS or NFS. Common errors: Wrong gcc version: Error: gcc version 4.8 or newer required.       Installed version: 4.4.7 Update your environment variables to point to the right gcc version, or install a more up-to-date version of gcc. 
See building-requirements-section. Python in a non-default location: If you use a non-default version of Python (e.g., version 2.7 when 2.5 is your default), there may be problems when using SCons to build gem5. The RHEL6 version of SCons uses a hardcoded location for Python, which causes the issue. gem5 often builds successfully in this case, but may not be able to run. Below is one possible error you may see when you run gem5. Traceback (most recent call last):  File \"........../gem5-stable/src/python/importer.py\", line 93, in &lt;module&gt;    sys.meta_path.append(importer) TypeError: 'dict' object is not callable To fix this, you can force SCons to use your environment’s Python version by running python `which scons` build/X86/gem5.opt instead of scons build/X86/gem5.opt. More information on this can be found on the gem5 wiki about non-default Python locations: Using a non-default Python installation. M4 macro processor not installed: If the M4 macro processor isn’t installed you’ll see an error similar to this: ... Checking for member exclude_host in struct perf_event_attr... yes Error: Can't find version of M4 macro processor.  Please install M4 and try again. Just installing the M4 macro package may not solve this issue. You may need to also install all of the autoconf tools. On Ubuntu, you can use the following command. sudo apt-get install automake",
        "url": "/documentation/learning_gem5/part1/building/"
      }
      ,
    
      "documentation-learning-gem5-part1-cache-config": {
        "title": "Adding cache to configuration script",
        "content": "Adding cache to the configuration scriptUsing the previous configuration script as a starting point,this chapter will walk through a more complex configuration. We will adda cache hierarchy to the system as shown inthe figure below. Additionally, this chapterwill cover understanding the gem5 statistics output and adding commandline parameters to your scripts.Creating cache objectsWe are going to use the classic caches, instead of ruby-intro-chapter,since we are modeling a single CPU system and we don’t care aboutmodeling cache coherence. We will extend the Cache SimObject andconfigure it for our system. First, we must understand the parametersthat are used to configure Cache objects.  Classic caches and Ruby  gem5 currently has two completely distinct subsystems to model theon-chip caches in a system, the “Classic caches” and “Ruby”. Thehistorical reason for this is that gem5 is a combination of m5 fromMichigan and GEMS from Wisconsin. GEMS used Ruby as its cache model,whereas the classic caches came from the m5 codebase (hence“classic”). The difference between these two models is that Ruby isdesigned to model cache coherence in detail. Part of Ruby is SLICC, alanguage for defining cache coherence protocols. On the other hand,the classic caches implement a simplified and inflexible MOESIcoherence protocol.  To choose which model to use, you should ask yourself what you aretrying to model. If you are modeling changes to the cache coherenceprotocol or the coherence protocol could have a first-order impact onyour results, use Ruby. Otherwise, if the coherence protocol isn’timportant to you, use the classic caches.  A long-term goal of gem5 is to unify these two cache models into asingle holistic model.CacheThe Cache SimObject declaration can be found in src/mem/cache/Cache.py.This Python file defines the parameters which you can set of theSimObject. 
Under the hood, when the SimObject is instantiated, these parameters are passed to the C++ implementation of the object. The Cache SimObject inherits from the BaseCache object shown below. Within the BaseCache class, there are a number of parameters. For instance, assoc is an integer parameter. Some parameters, like write_buffers, have a default value, 8 in this case. The default value is the first argument to Param.*, unless the first argument is a string. The string argument of each of the parameters is a description of what the parameter is (e.g., tag_latency = Param.Cycles(\"Tag lookup latency\") means that tag_latency controls “Tag lookup latency”). Many of these parameters do not have defaults, so we are required to set these parameters before calling m5.instantiate(). Now, to create caches with specific parameters, we are first going to create a new file, caches.py, in the same directory as simple.py, configs/tutorial. The first step is to import the SimObject(s) we are going to extend in this file. from m5.objects import Cache Next, we can treat the BaseCache object just like any other Python class and extend it. We can name the new cache anything we want. Let’s start by making an L1 cache. class L1Cache(Cache):    assoc = 2    tag_latency = 2    data_latency = 2    response_latency = 2    mshrs = 4    tgts_per_mshr = 20 Here, we are setting some of the parameters of the BaseCache that do not have default values. To see all of the possible configuration options, and to find which are required and which are optional, you have to look at the source code of the SimObject. In this case, we are using BaseCache. We have extended BaseCache and set most of the parameters that do not have default values in the BaseCache SimObject. 
Next, let’s create two more sub-classes of L1Cache, an L1DCache and an L1ICache: class L1ICache(L1Cache):    size = '16kB' class L1DCache(L1Cache):    size = '64kB' Let’s also create an L2 cache with some reasonable parameters. class L2Cache(Cache):    size = '256kB'    assoc = 8    tag_latency = 20    data_latency = 20    response_latency = 20    mshrs = 20    tgts_per_mshr = 12 Now that we have specified all of the necessary parameters required for BaseCache, all we have to do is instantiate our sub-classes and connect the caches to the interconnect. However, connecting lots of objects up to complex interconnects can make configuration files quickly grow and become unreadable. Therefore, let’s first add some helper functions to our sub-classes of Cache. Remember, these are just Python classes, so we can do anything with them that you can do with a Python class. To the L1 cache let’s add two functions, connectCPU to connect a CPU to the cache and connectBus to connect the cache to a bus. We need to add the following code to the L1Cache class. def connectCPU(self, cpu):    # need to define this in a base class!    raise NotImplementedError def connectBus(self, bus):    self.mem_side = bus.slave Next, we have to define a separate connectCPU function for the instruction and data caches, since the I-cache and D-cache ports have different names. 
Our L1ICache and L1DCache classes now become: class L1ICache(L1Cache):    size = '16kB'    def connectCPU(self, cpu):        self.cpu_side = cpu.icache_port class L1DCache(L1Cache):    size = '64kB'    def connectCPU(self, cpu):        self.cpu_side = cpu.dcache_port Finally, let’s add functions to the L2Cache to connect to the CPU-side and memory-side buses, respectively. def connectCPUSideBus(self, bus):    self.cpu_side = bus.master def connectMemSideBus(self, bus):    self.mem_side = bus.slave The full file can be found in the gem5 source at gem5/configs/learning_gem5/part1/caches.py. Adding caches to the simple config file: Now, let’s add the caches we just created to the configuration script we created in the last chapter &lt;simple-config-chapter&gt;. First, let’s copy the script to a new name. cp ./configs/tutorial/simple.py ./configs/tutorial/two_level.py Next, we need to import the names from the caches.py file into the namespace. We can add the following to the top of the file (after the m5.objects import), as you would with any Python source. from caches import * Now, after creating the CPU, let’s create the L1 caches: system.cpu.icache = L1ICache() system.cpu.dcache = L1DCache() And connect the caches to the CPU ports with the helper function we created. system.cpu.icache.connectCPU(system.cpu) system.cpu.dcache.connectCPU(system.cpu) You need to remove the following two lines, which connected the cache ports directly to the memory bus. system.cpu.icache_port = system.membus.slave system.cpu.dcache_port = system.membus.slave We can’t directly connect the L1 caches to the L2 cache since the L2 cache only expects a single port to connect to it. Therefore, we need to create an L2 bus to connect our L1 caches to the L2 cache. 
Then, we can use our helper function to connect the L1 caches to the L2 bus. system.l2bus = L2XBar() system.cpu.icache.connectBus(system.l2bus) system.cpu.dcache.connectBus(system.l2bus) Next, we can create our L2 cache and connect it to the L2 bus and the memory bus. system.l2cache = L2Cache() system.l2cache.connectCPUSideBus(system.l2bus) system.l2cache.connectMemSideBus(system.membus) Everything else in the file stays the same! Now we have a complete configuration with a two-level cache hierarchy. If you run the current file, hello should now finish in 58513000 ticks. The full script can be found in the gem5 source at gem5/configs/learning_gem5/part1/two_level.py. Adding parameters to your script: When performing experiments with gem5, you don’t want to edit your configuration script every time you want to test the system with different parameters. To get around this, you can add command-line parameters to your gem5 configuration script. Again, because the configuration script is just Python, you can use the Python libraries that support argument parsing. Although optparse is officially deprecated, many of the configuration scripts that ship with gem5 use it instead of argparse, since gem5’s minimum Python version used to be 2.5. The minimum Python version is now 2.7, so argparse is a better option when writing new scripts that don’t need to interact with the current gem5 scripts. 
To get started using optparse, you can consult the online Python documentation. To add options to our two-level cache configuration, after importing our caches, let’s add some options. from optparse import OptionParser parser = OptionParser() parser.add_option('--l1i_size', help=\"L1 instruction cache size\") parser.add_option('--l1d_size', help=\"L1 data cache size\") parser.add_option('--l2_size', help=\"Unified L2 cache size\") (options, args) = parser.parse_args() Now, you can run build/X86/gem5.opt configs/tutorial/two_level.py --help, which will display the options you just added. Next, we need to pass these options on to the caches that we create in the configuration script. To do this, we’ll simply change two_level.py to pass the options into the caches as a parameter to their constructors, and add an appropriate constructor next. system.cpu.icache = L1ICache(options) system.cpu.dcache = L1DCache(options) ... system.l2cache = L2Cache(options) In caches.py, we need to add constructors (__init__ functions in Python) to each of our classes. Starting with our base L1 cache, we’ll just add an empty constructor, since we don’t have any parameters which apply to the base L1 cache. However, we can’t forget to call the super class’s constructor in this case. If the call to the super class constructor is skipped, gem5’s SimObject attribute finding function will fail, and the result will be “RuntimeError: maximum recursion depth exceeded” when you try to instantiate the cache object. So, in L1Cache we need to add the following after the static class members. def __init__(self, options=None):    super(L1Cache, self).__init__()    pass Next, in the L1ICache, we need to use the option that we created (l1i_size) to set the size. In the following code, there are guards for the cases where options is not passed to the L1ICache constructor and where no option was specified on the command line. 
In these cases, we’ll just use the default we’ve already specified for the size. def __init__(self, options=None):    super(L1ICache, self).__init__(options)    if not options or not options.l1i_size:        return    self.size = options.l1i_size We can use the same code for the L1DCache: def __init__(self, options=None):    super(L1DCache, self).__init__(options)    if not options or not options.l1d_size:        return    self.size = options.l1d_size And the unified L2Cache: def __init__(self, options=None):    super(L2Cache, self).__init__()    if not options or not options.l2_size:        return    self.size = options.l2_size With these changes, you can now pass the cache sizes into your script from the command line like below. build/X86/gem5.opt configs/tutorial/two_level.py --l2_size='1MB' --l1d_size='128kB' gem5 Simulator System.  http://gem5.org gem5 is copyrighted software; use the --copyright option for details. gem5 compiled Sep  6 2015 14:17:02 gem5 started Sep  6 2015 15:06:51 gem5 executing on galapagos-09.cs.wisc.edu command line: build/X86/gem5.opt ../tutorial/_static/scripts/part1/two_level_opts.py --l2_size=1MB --l1d_size=128kB Global frequency set at 1000000000000 ticks per second warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes) 0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000 Beginning simulation! info: Entering event queue @ 0.  Starting simulation... Hello world! Exiting @ tick 56742000 because target called exit() The full scripts can be found in the gem5 source at gem5/configs/learning_gem5/part1/caches.py and gem5/configs/learning_gem5/part1/two_level.py.",
        "url": "/documentation/learning_gem5/part1/cache_config/"
      }
      ,
    
      "documentation-learning-gem5-part1-example-configs": {
        "title": "Using the default configuration scripts",
        "content": "Using the default configuration scriptsIn this chapter, we’ll explore using the default configuration scriptsthat come with gem5. gem5 ships with many configuration scripts thatallow you to use gem5 very quickly. However, a common pitfall is to usethese scripts without fully understanding what is being simulated. It isimportant when doing computer architecture research with gem5 to fullyunderstand the system you are simulating. This chapter will walk youthrough some important options and parts of the default configurationscripts.In the last few chapters you have created your own configuration scriptsfrom scratch. This is very powerful, as it allows you to specify everysingle system parameter. However, some systems are very complex to setup (e.g., a full-system ARM or x86 machine). Luckily, the gem5developers have provided many scripts to bootstrap the process ofbuilding systems.A tour of the directory structureAll of gem5’s configuration files can be found in configs/. Thedirectory structure is shown below:configs/boot:ammp.rcS            halt.sh                micro_tlblat2.rcS              netperf-stream-udp-local.rcS...configs/common:Benchmarks.py     cpu2000.py     Options.pyCaches.py         FSConfig.py    O3_ARM_v7a.py     SysPaths.pyCacheConfig.py    CpuConfig.py   MemConfig.py      Simulation.pyconfigs/dram:sweep.pyconfigs/example:fs.py       read_config.py       ruby_mem_test.py      ruby_random_test.pymemtest.py  ruby_direct_test.py  ruby_network_test.py  se.pyconfigs/ruby:MESI_Three_Level.py  MI_example.py           MOESI_CMP_token.py  Network_test.pyMESI_Two_Level.py    MOESI_CMP_directory.py  MOESI_hammer.py     Ruby.pyconfigs/splash2:cluster.py  run.pyconfigs/topologies:BaseTopology.py  Cluster.py  Crossbar.py  MeshDirCorners.py  Mesh.py  Pt2Pt.py  Torus.pyEach directory is briefly described below:  boot/  These are rcS files which are used in full-system mode. 
These files are loaded by the simulator after Linux boots and are executed by the shell. Most of these are used to control benchmarks when running in full-system mode. Some are utility functions, like hack_back_ckpt.rcS. These files are covered in more depth in the chapter on full-system simulation.  common/  This directory contains a number of helper scripts and functions to create simulated systems. For instance, Caches.py is similar to the caches.py and caches_opts.py files created in previous chapters.    Options.py contains a variety of options that can be set on the command line, like the number of CPUs, the system clock, and many, many more. This is a good place to look to see if the option you want to change already has a command line parameter.    CacheConfig.py contains the options and functions for setting cache parameters for the classic memory system.    MemConfig.py provides some helper functions for setting the memory system.    FSConfig.py contains the necessary functions to set up full-system simulation for many different kinds of systems. Full-system simulation is discussed further in its own chapter.    Simulation.py contains many helper functions to set up and run gem5. A lot of the code contained in this file manages saving and restoring checkpoints. The example configuration files below in examples/ use the functions in this file to execute the gem5 simulation. This file is quite complicated, but it also allows a lot of flexibility in how the simulation is run.    dram/  Contains scripts to test DRAM.  example/  This directory contains some example gem5 configuration scripts that can be used out-of-the-box to run gem5. Specifically, se.py and fs.py are quite useful. More on these files can be found in the next section. There are also some other utility configuration scripts in this directory.  ruby/  This directory contains the configuration scripts for Ruby and its included cache coherence protocols. More details can be found in the chapter on Ruby.  
splash2/  This directory contains scripts to run the splash2 benchmark suite with a few options to configure the simulated system.  topologies/  This directory contains the implementation of the topologies that can be used when creating the Ruby cache hierarchy. More details can be found in the chapter on Ruby. Using se.py and fs.py In this section, I’ll discuss some of the common options that can be passed on the command line to se.py and fs.py. More details on how to run full-system simulation can be found in the full-system simulation chapter. Here I’ll discuss the options that are common to the two files. Most of the options discussed in this section are found in Options.py and are registered in the function addCommonOptions. This section does not detail all of the options. To see all of the options, run the configuration script with --help, or read the script’s source code. First, let’s simply run the hello world program without any parameters: build/X86/gem5.opt configs/example/se.py --cmd=tests/test-progs/hello/bin/x86/linux/hello And we get the following as output: gem5 Simulator System.  http://gem5.org gem5 is copyrighted software; use the --copyright option for details. gem5 compiled Jan 14 2015 16:11:34 gem5 started Feb  2 2015 15:22:24 gem5 executing on mustardseed.cs.wisc.edu command line: build/X86/gem5.opt configs/example/se.py --cmd=tests/test-progs/hello/bin/x86/linux/hello Global frequency set at 1000000000000 ticks per second warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes) 0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000 **** REAL SIMULATION **** info: Entering event queue @ 0.  Starting simulation... Hello world! Exiting @ tick 5942000 because target called exit() However, this isn’t a very interesting simulation at all! By default, gem5 uses the atomic CPU and uses atomic memory accesses, so there’s no real timing data reported! To confirm this, you can look at m5out/config.ini. 
The CPU is shown on line 46: [system.cpu] type=AtomicSimpleCPU children=apic_clk_domain dtb interrupts isa itb tracer workload branchPred=Null checker=Null clk_domain=system.cpu_clk_domain cpu_id=0 do_checkpoint_insts=true do_quiesce=true do_statistics_insts=true To actually run gem5 in timing mode, let’s specify a CPU type. While we’re at it, we can also specify sizes for the L1 caches. build/X86/gem5.opt configs/example/se.py --cmd=tests/test-progs/hello/bin/x86/linux/hello --cpu-type=TimingSimpleCPU --l1d_size=64kB --l1i_size=16kB gem5 Simulator System.  http://gem5.org gem5 is copyrighted software; use the --copyright option for details. gem5 compiled Jan 14 2015 16:11:34 gem5 started Feb  2 2015 15:26:57 gem5 executing on mustardseed.cs.wisc.edu command line: build/X86/gem5.opt configs/example/se.py --cmd=tests/test-progs/hello/bin/x86/linux/hello --cpu-type=TimingSimpleCPU --l1d_size=64kB --l1i_size=16kB Global frequency set at 1000000000000 ticks per second warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes) 0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000 **** REAL SIMULATION **** info: Entering event queue @ 0.  Starting simulation... Hello world! Exiting @ tick 344986500 because target called exit() Now, let’s check the config.ini file and make sure that these options propagated correctly to the final system. If you search m5out/config.ini for “cache”, you’ll find that no caches were created! Even though we specified the size of the caches, we didn’t specify that the system should use caches, so they weren’t created. The correct command line should be: build/X86/gem5.opt configs/example/se.py --cmd=tests/test-progs/hello/bin/x86/linux/hello --cpu-type=TimingSimpleCPU --l1d_size=64kB --l1i_size=16kB --caches gem5 Simulator System.  
http://gem5.org gem5 is copyrighted software; use the --copyright option for details. gem5 compiled Jan 14 2015 16:11:34 gem5 started Feb  2 2015 15:29:20 gem5 executing on mustardseed.cs.wisc.edu command line: build/X86/gem5.opt configs/example/se.py --cmd=tests/test-progs/hello/bin/x86/linux/hello --cpu-type=TimingSimpleCPU --l1d_size=64kB --l1i_size=16kB --caches Global frequency set at 1000000000000 ticks per second warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes) 0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000 **** REAL SIMULATION **** info: Entering event queue @ 0.  Starting simulation... Hello world! Exiting @ tick 29480500 because target called exit() On the last line, we see that the total time went from 344986500 ticks to 29480500, much faster! Looks like caches are probably enabled now. But, it’s always a good idea to double check the config.ini file. [system.cpu.dcache] type=BaseCache children=tags addr_ranges=0:18446744073709551615 assoc=2 clk_domain=system.cpu_clk_domain demand_mshr_reserve=1 eventq_index=0 forward_snoops=true hit_latency=2 is_top_level=true max_miss_count=0 mshrs=4 prefetch_on_access=false prefetcher=Null response_latency=2 sequential_access=false size=65536 system=system tags=system.cpu.dcache.tags tgts_per_mshr=20 two_queue=false write_buffers=8 cpu_side=system.cpu.dcache_port mem_side=system.membus.slave[2] Some common options for se.py and fs.py All of the possible options are printed when you run: build/X86/gem5.opt configs/example/se.py --help Below are a few important options from that list.",
        "url": "/documentation/learning_gem5/part1/example_configs/"
      }
      ,
    
      "extending-configs": {
        "title": "Extending gem5 to run ARM binaries",
        "content": "Extending gem5 for ARMThis chapter assumes you’ve already built a basic x86 system withgem5 and created a simple configuration script.Downloading ARM BinariesLet’s start by downloading some ARM benchmark binaries. Beginfrom the root of the gem5 folder:mkdir -p cpu_tests/benchmarks/bin/armcd cpu_tests/benchmarks/bin/armwget gem5.org/dist/current/gem5/cpu_tests/benchmarks/bin/arm/Bubblesortwget gem5.org/dist/current/gem5/cpu_tests/benchmarks/bin/arm/FloatMMWe’ll use these to further test our ARM system.Building gem5 to run ARM BinariesJust as we did when we first built our basic x86 system, we runthe same command, except this time we want it to compile with thedefault ARM configurations. To do so, we just replace x86 with ARM:scons build/ARM/gem5.opt -j20When compilation is finished you should have a working gem5 executableat build/ARM/gem5.opt.Modifying simple.py to run ARM BinariesBefore we can run any ARM binaries with our new system, we’ll haveto make a slight tweak to our simple.py.If you recall when we created our simple configuration script, it wasnoted that we did not have to connect the PIO and interrupt ports tothe memory bus for any ISA other than for an x86 system. So let’sremove those 3 lines:system.cpu.createInterruptController()#system.cpu.interrupts[0].pio = system.membus.master#system.cpu.interrupts[0].int_master = system.membus.slave#system.cpu.interrupts[0].int_slave = system.membus.mastersystem.system_port = system.membus.slaveYou can either delete or comment them out as above. 
Next, let’s set the process’s command to one of our ARM benchmark binaries: process.cmd = ['cpu_tests/benchmarks/bin/arm/Bubblesort'] If you’d like to test a simple hello program as before, just replace x86 with arm: process.cmd = ['tests/test-progs/hello/bin/arm/linux/hello'] Running gem5 Simply run it as before, except replace X86 with ARM: build/ARM/gem5.opt configs/tutorial/simple.py If you set your process to be the Bubblesort benchmark, your output should look like this: gem5 Simulator System.  http://gem5.org gem5 is copyrighted software; use the --copyright option for details. gem5 compiled Oct  3 2019 16:02:35 gem5 started Oct  6 2019 13:22:25 gem5 executing on amarillo, pid 77129 command line: build/ARM/gem5.opt configs/tutorial/simple.py Global frequency set at 1000000000000 ticks per second warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes) 0: system.remote_gdb: listening for remote gdb on port 7002 Beginning simulation! info: Entering event queue @ 0.  Starting simulation... info: Increasing stack size by one page. warn: readlink() called on '/proc/self/exe' may yield unexpected results in various settings.      Returning '/home/jtoya/gem5/cpu_tests/benchmarks/bin/arm/Bubblesort' -50000 Exiting @ tick 258647411000 because exiting with last active thread context",
        "url": "/extending_configs/"
      }
      ,
    
      "documentation-learning-gem5-part1-gem5-stats": {
        "title": "Understanding gem5 statistics and output",
        "content": "Understanding gem5 statistics and outputIn addition to any information which your simulation script prints out,after running gem5, there are three files generated in a directorycalled m5out:  config.ini  Contains a list of every SimObject created for the simulation andthe values for its parameters.  config.json  The same as config.ini, but in json format.  stats.txt  A text representation of all of the gem5 statistics registered forthe simulation.Where these files are created can be controlled byconfig.iniThis file is the definitive version of what was simulated. All of theparameters for each SimObject that is simulated, whether they were setin the configuration scripts or the defaults were used, are shown inthis file.Below is pulled from the config.ini generated when the simple.pyconfiguration file from simple-config-chapter is run.[root]type=Rootchildren=systemeventq_index=0full_system=falsesim_quantum=0time_sync_enable=falsetime_sync_period=100000000000time_sync_spin_threshold=100000000[system]type=Systemchildren=clk_domain cpu dvfs_handler mem_ctrl membusboot_osflags=acache_line_size=64clk_domain=system.clk_domaindefault_p_state=UNDEFINEDeventq_index=0exit_on_work_items=falseinit_param=0kernel=kernel_addr_check=truekernel_extras=kvm_vm=Nullload_addr_mask=18446744073709551615load_offset=0mem_mode=timing...[system.membus]type=CoherentXBarchildren=snoop_filterclk_domain=system.clk_domaindefault_p_state=UNDEFINEDeventq_index=0forward_latency=4frontend_latency=3p_state_clk_gate_bins=20p_state_clk_gate_max=1000000000000p_state_clk_gate_min=1000point_of_coherency=truepoint_of_unification=truepower_model=response_latency=2snoop_filter=system.membus.snoop_filtersnoop_response_latency=4system=systemuse_default_range=falsewidth=16master=system.cpu.interrupts.pio system.cpu.interrupts.int_slave system.mem_ctrl.portslave=system.cpu.icache_port system.cpu.dcache_port system.cpu.interrupts.int_master 
system.system_port [system.membus.snoop_filter] type=SnoopFilter eventq_index=0 lookup_latency=1 max_capacity=8388608 system=system Here we see that at the beginning of the description of each SimObject is its name as created in the configuration file, surrounded by square brackets (e.g., [system.membus]). Next, every parameter of the SimObject is shown with its value, including parameters not explicitly set in the configuration file. For instance, the configuration file sets the clock domain to be 1 GHz (1000 ticks in this case). However, it did not set the cache line size (which is 64 in the System object). The config.ini file is a valuable tool for ensuring that you are simulating what you think you’re simulating. There are many possible ways to set default values, and to override default values, in gem5. It is a “best-practice” to always check the config.ini as a sanity check that values set in the configuration file are propagated to the actual SimObject instantiation. stats.txt gem5 has a flexible statistics generating system. gem5 statistics are covered in some detail on the gem5 wiki site. Each instantiation of a SimObject has its own statistics. 
At the end of simulation, or when specialstatistic-dumping commands are issued, the current state of thestatistics for all SimObjects is dumped to a file.First, the statistics file contains general statistics about theexecution:---------- Begin Simulation Statistics ----------sim_seconds                                  0.000346                       # Number of seconds simulatedsim_ticks                                   345518000                       # Number of ticks simulatedfinal_tick                                  345518000                       # Number of ticks from beginning of simulation (restored from checkpoints and never reset)sim_freq                                 1000000000000                       # Frequency of simulated tickshost_inst_rate                                 144400                       # Simulator instruction rate (inst/s)host_op_rate                                   260550                       # Simulator op (including micro ops) rate (op/s)host_tick_rate                             8718625183                       # Simulator tick rate (ticks/s)host_mem_usage                                 778640                       # Number of bytes of host memory usedhost_seconds                                     0.04                       # Real time elapsed on the hostsim_insts                                        5712                       # Number of instructions simulatedsim_ops                                         10314                       # Number of ops (including micro ops) simulated———- Begin Simulation Statistics ———-sim_seconds 0.000508# Number of seconds simulated sim_ticks 507841000 # Number of tickssimulated final_tick 507841000 # Number of ticks from beginning ofsimulation (restored from checkpoints and never reset) sim_freq1000000000000 # Frequency of simulated ticks host_inst_rate 157744 #Simulator instruction rate (inst/s) host_op_rate 284736 # Simulatorop (including micro ops) rate (op/s) host_tick_rate 
14017997125 #Simulator tick rate (ticks/s) host_mem_usage 642808 # Number of bytesof host memory used host_seconds 0.04 # Real time elapsed on the hostsim_insts 5712 # Number of instructions simulated sim_ops 10313 #Number of ops (including micro ops) simulatedThe statistic dump begins with---------- Begin Simulation Statistics ----------. There may bemultiple of these in a single file if there are multiple statistic dumpsduring the gem5 execution. This is common for long running applications,or when restoring from checkpoints.Each statistic has a name (first column), a value (second column), and adescription (last column preceded by #).Most of the statistics are self explanatory from their descriptions. Acouple of important statistics are sim_seconds which is the totalsimulated time for the simulation, sim_insts which is the number ofinstructions committed by the CPU, and host_inst_rate which tells youthe performance of gem5.Next, the SimObjects’ statistics are printed. For instance, the memorycontroller statistics. 
This has information like the bytes read by eachcomponent and the average bandwidth used by those components.system.clk_domain.voltage_domain.voltage            1                       # Voltage in Voltssystem.clk_domain.clock                          1000                       # Clock period in tickssystem.mem_ctrl.pwrStateResidencyTicks::UNDEFINED    507841000                       # Cumulative time (in ticks) in various power statessystem.mem_ctrl.bytes_read::cpu.inst            58264                       # Number of bytes read from this memorysystem.mem_ctrl.bytes_read::cpu.data             7167                       # Number of bytes read from this memorysystem.mem_ctrl.bytes_read::total               65431                       # Number of bytes read from this memorysystem.mem_ctrl.bytes_inst_read::cpu.inst        58264                       # Number of instructions bytes read from this memorysystem.mem_ctrl.bytes_inst_read::total          58264                       # Number of instructions bytes read from this memorysystem.mem_ctrl.bytes_written::cpu.data          7160                       # Number of bytes written to this memorysystem.mem_ctrl.bytes_written::total             7160                       # Number of bytes written to this memorysystem.mem_ctrl.num_reads::cpu.inst              7283                       # Number of read requests responded to by this memorysystem.mem_ctrl.num_reads::cpu.data              1084                       # Number of read requests responded to by this memorysystem.mem_ctrl.num_reads::total                 8367                       # Number of read requests responded to by this memorysystem.mem_ctrl.num_writes::cpu.data              941                       # Number of write requests responded to by this memorysystem.mem_ctrl.num_writes::total                 941                       # Number of write requests responded to by this memorysystem.mem_ctrl.bw_read::cpu.inst           114728823                       # 
Total read bandwidth from this memory (bytes/s)system.mem_ctrl.bw_read::cpu.data            14112685                       # Total read bandwidth from this memory (bytes/s)system.mem_ctrl.bw_read::total              128841507                       # Total read bandwidth from this memory (bytes/s)system.mem_ctrl.bw_inst_read::cpu.inst      114728823                       # Instruction read bandwidth from this memory (bytes/s)system.mem_ctrl.bw_inst_read::total         114728823                       # Instruction read bandwidth from this memory (bytes/s)system.mem_ctrl.bw_write::cpu.data           14098901                       # Write bandwidth from this memory (bytes/s)system.mem_ctrl.bw_write::total              14098901                       # Write bandwidth from this memory (bytes/s)system.mem_ctrl.bw_total::cpu.inst          114728823                       # Total bandwidth to/from this memory (bytes/s)system.mem_ctrl.bw_total::cpu.data           28211586                       # Total bandwidth to/from this memory (bytes/s)system.mem_ctrl.bw_total::total             142940409                       # Total bandwidth to/from this memory (bytes/s)Later in the file is the CPU statistics, which contains information onthe number of syscalls, the number of branches, total committedinstructions, etc.system.cpu.dtb.walker.pwrStateResidencyTicks::UNDEFINED    507841000                       # Cumulative time (in ticks) in various power statessystem.cpu.dtb.rdAccesses                        1084                       # TLB accesses on read requestssystem.cpu.dtb.wrAccesses                         941                       # TLB accesses on write requestssystem.cpu.dtb.rdMisses                             9                       # TLB misses on read requestssystem.cpu.dtb.wrMisses                             7                       # TLB misses on write requestssystem.cpu.apic_clk_domain.clock                16000                       # Clock period in 
tickssystem.cpu.interrupts.pwrStateResidencyTicks::UNDEFINED    507841000                       # Cumulative time (in ticks) in various power statessystem.cpu.itb.walker.pwrStateResidencyTicks::UNDEFINED    507841000                       # Cumulative time (in ticks) in various power statessystem.cpu.itb.rdAccesses                           0                       # TLB accesses on read requestssystem.cpu.itb.wrAccesses                        7284                       # TLB accesses on write requestssystem.cpu.itb.rdMisses                             0                       # TLB misses on read requestssystem.cpu.itb.wrMisses                            31                       # TLB misses on write requestssystem.cpu.workload.numSyscalls                    11                       # Number of system callssystem.cpu.pwrStateResidencyTicks::ON       507841000                       # Cumulative time (in ticks) in various power statessystem.cpu.numCycles                           507841                       # number of cpu cycles simulatedsystem.cpu.numWorkItemsStarted                      0                       # number of work items this cpu startedsystem.cpu.numWorkItemsCompleted                    0                       # number of work items this cpu completedsystem.cpu.committedInsts                        5712                       # Number of instructions committedsystem.cpu.committedOps                         10313                       # Number of ops (including micro ops) committedsystem.cpu.num_int_alu_accesses                 10204                       # Number of integer alu accessessystem.cpu.num_fp_alu_accesses                      0                       # Number of float alu accessessystem.cpu.num_vec_alu_accesses                     0                       # Number of vector alu accessessystem.cpu.num_func_calls                         221                       # number of times a function call or return 
occuredsystem.cpu.num_conditional_control_insts          986                       # number of instructions that are conditional controlssystem.cpu.num_int_insts                        10204                       # number of integer instructionssystem.cpu.num_fp_insts                             0                       # number of float instructionssystem.cpu.num_vec_insts                            0                       # number of vector instructionssystem.cpu.num_int_register_reads               19293                       # number of times the integer registers were readsystem.cpu.num_int_register_writes               7976                       # number of times the integer registers were writtensystem.cpu.num_fp_register_reads                    0                       # number of times the floating registers were readsystem.cpu.num_fp_register_writes                   0                       # number of times the floating registers were writtensystem.cpu.num_vec_register_reads                   0                       # number of times the vector registers were readsystem.cpu.num_vec_register_writes                  0                       # number of times the vector registers were writtensystem.cpu.num_cc_register_reads                 7020                       # number of times the CC registers were readsystem.cpu.num_cc_register_writes                3825                       # number of times the CC registers were writtensystem.cpu.num_mem_refs                          2025                       # number of memory refssystem.cpu.num_load_insts                        1084                       # Number of load instructionssystem.cpu.num_store_insts                        941                       # Number of store instructionssystem.cpu.num_idle_cycles                          0                       # Number of idle cyclessystem.cpu.num_busy_cycles                     507841                       # Number of busy cyclessystem.cpu.not_idle_fraction    
                    1                       # Percentage of non-idle cyclessystem.cpu.idle_fraction                            0                       # Percentage of idle cyclessystem.cpu.Branches                              1306                       # Number of branches fetched",
        "url": "/documentation/learning_gem5/part1/gem5_stats/"
      }
      ,
    
      "documentation-learning-gem5-part1-simple-config": {
        "title": "Creating a simple configuration script",
        "content": "Creating a simple configuration scriptThis chapter of the tutorial will walk you through how to set up asimple simulation script for gem5 and to run gem5 for the first time.It’s assumed that you’ve completed the first chapter of the tutorial andhave successfully built gem5 with an executable build/X86/gem5.opt.Our configuration script is going to model a very simple system. We’llhave just one simple CPU core. This CPU core will be connected to asystem-wide memory bus. And we’ll have a single DDR3 memory channel,also connected to the memory bus.gem5 configuration scriptsThe gem5 binary takes, as a parameter, a python script which sets up andexecutes the simulation. In this script, you create a system tosimulate, create all of the components of the system, and specify all ofthe parameters for the system components. Then, from the script, you canbegin the simulation.This script is completely user-defined. You can choose to use any validPython code in the configuration scripts. This book provides on exampleof a style that relies heavily classes and inheritance in Python. As agem5 user, it’s up to you how simple or complicated to make yourconfiguration scripts.There are a number of example configuration scripts that ship with gem5in configs/examples. Most of these scripts are all-encompassing andallow users to specify almost all options on the command line. Insteadof starting with these complex script, in this book we are going tostart with the most simple script that can run gem5 and build fromthere. Hopefully, by the end of this section you’ll have a good idea ofhow simulation scripts work.  An aside on SimObjects  gem5’s modular design is built around the SimObject type. Most ofthe components in the simulated system are SimObjects: CPUs, caches,memory controllers, buses, etc. gem5 exports all of these objects fromtheir C++ implementation to python. 
Thus, from the python configuration script you can create any SimObject, set its parameters, and specify the interactions between SimObjects.  See http://www.gem5.org/SimObjects for more information. Creating a config file Let’s start by creating a new config file and opening it: mkdir configs/tutorial touch configs/tutorial/simple.py This is just a normal python file that will be executed by the embedded python in the gem5 executable. Therefore, you can use any features and libraries available in python. The first thing we’ll do in this file is import the m5 library and all SimObjects that we’ve compiled. import m5 from m5.objects import * Next, we’ll create the first SimObject: the system that we are going to simulate. The System object will be the parent of all the other objects in our simulated system. The System object contains a lot of functional (not timing-level) information, like the physical memory ranges, the root clock domain, the root voltage domain, the kernel (in full-system simulation), etc. To create the system SimObject, we simply instantiate it like a normal python class: system = System() Now that we have a reference to the system we are going to simulate, let’s set the clock on the system. We first have to create a clock domain. Then we can set the clock frequency on that domain. Setting parameters on a SimObject is exactly the same as setting members of an object in python, so we can simply set the clock to 1 GHz, for instance. Finally, we have to specify a voltage domain for this clock domain. Since we don’t care about system power right now, we’ll just use the default options for the voltage domain. system.clk_domain = SrcClockDomain() system.clk_domain.clock = '1GHz' system.clk_domain.voltage_domain = VoltageDomain() Once we have a system, let’s set up how the memory will be simulated. We are going to use timing mode for the memory simulation. 
You will almost always use timing mode for the memory simulation, except in special cases like fast-forwarding and restoring from a checkpoint. We will also set up a single memory range of size 512 MB, a very small system. Note that in the python configuration scripts, whenever a size is required you can specify that size in common vernacular and units like '512MB'. Similarly, with time you can use time units (e.g., '5ns'). These will automatically be converted to a common representation. system.mem_mode = 'timing' system.mem_ranges = [AddrRange('512MB')] Now, we can create a CPU. We’ll start with the simplest timing-based CPU in gem5, TimingSimpleCPU. This CPU model executes each instruction in a single clock cycle, except memory requests, which flow through the memory system. To create the CPU you can simply instantiate the object: system.cpu = TimingSimpleCPU() Next, we’re going to create the system-wide memory bus: system.membus = SystemXBar() Now that we have a memory bus, let’s connect the cache ports on the CPU to it. In this case, since the system we want to simulate doesn’t have any caches, we will connect the I-cache and D-cache ports directly to the membus. In this example system, we have no caches. system.cpu.icache_port = system.membus.slave system.cpu.dcache_port = system.membus.slave  An aside on gem5 ports  To connect memory system components together, gem5 uses a port abstraction. Each memory object can have two kinds of ports, master ports and slave ports. Requests are sent from a master port to a slave port, and responses are sent from a slave port to a master port. When connecting ports, you must connect a master port to a slave port.  Connecting ports together is easy to do from the python configuration files. You can simply set the master port = to the slave port and they will be connected. For instance:  memobject1.master = memobject2.slave    The master and slave can be on either side of the = and the same connection will be made. 
After making the connection, the master can send requests to the slave port. There is a lot of magic going on behind the scenes to set up the connection, the details of which are unimportant for most users.  We will discuss ports and MemObject in more detail in memoryobject-chapter. Next, we need to connect up a few other ports to make sure that our system will function correctly. We need to create an I/O controller on the CPU and connect it to the memory bus. Also, we need to connect a special port in the system up to the membus. This port is a functional-only port that allows the system to read and write memory. Connecting the PIO and interrupt ports to the memory bus is an x86-specific requirement. Other ISAs (e.g., ARM) do not require these 3 extra lines. system.cpu.createInterruptController() system.cpu.interrupts[0].pio = system.membus.master system.cpu.interrupts[0].int_master = system.membus.slave system.cpu.interrupts[0].int_slave = system.membus.master system.system_port = system.membus.slave Next, we need to create a memory controller and connect it to the membus. For this system, we’ll use a simple DDR3 controller and it will be responsible for the entire memory range of our system. system.mem_ctrl = DDR3_1600_8x8() system.mem_ctrl.range = system.mem_ranges[0] system.mem_ctrl.port = system.membus.master After those final connections, we’ve finished instantiating our simulated system! Our system should look like simple-config-fig. Next, we need to set up the process we want the CPU to execute. Since we are executing in syscall emulation mode (SE mode), we will just point the CPU at the compiled executable. We’ll execute a simple “Hello world” program. There’s already one compiled that ships with gem5, so we’ll use that. You can specify any application built for x86 that’s been statically compiled.  Full system vs syscall emulation  gem5 can run in two different modes called “syscall emulation” and “full system”, or SE and FS modes. 
In full system mode (covered later in full-system-part), gem5 emulates the entire hardware system and runs an unmodified kernel. Full system mode is similar to running a virtual machine.  Syscall emulation mode, on the other hand, does not emulate all of the devices in a system and focuses on simulating the CPU and memory system. Syscall emulation is much easier to configure since you are not required to instantiate all of the hardware devices required in a real system. However, syscall emulation only emulates Linux system calls, and thus only models user-mode code.  If you do not need to model the operating system for your research questions, and you want extra performance, you should use SE mode. However, if you need high fidelity modeling of the system, or OS interaction like page table walks is important, then you should use FS mode. First, we have to create the process (another SimObject). Then we set the process’s command to the command we want to run. This is a list similar to argv, with the executable in the first position and the arguments to the executable in the rest of the list. Then we set the CPU to use the process as its workload, and finally create the functional execution contexts in the CPU. process = Process() process.cmd = ['tests/test-progs/hello/bin/x86/linux/hello'] system.cpu.workload = process system.cpu.createThreads() The final thing we need to do is instantiate the system and begin execution. First, we create the Root object. Then we instantiate the simulation. The instantiation process goes through all of the SimObjects we’ve created in python and creates the C++ equivalents. As a note, you don’t have to instantiate the python class and then specify the parameters explicitly as member variables. You can also pass the parameters as named arguments, like the Root object below. root = Root(full_system = False, system = system) m5.instantiate() Finally, we can kick off the actual simulation! 
As a side note, gem5 now uses Python 3-style print functions, so print is no longer a statement and must be called as a function. print(\"Beginning simulation!\") exit_event = m5.simulate() And once simulation finishes, we can inspect the state of the system. print('Exiting @ tick {} because {}'      .format(m5.curTick(), exit_event.getCause())) Running gem5 Now that we’ve created a simple simulation script (the full version of which can be found at gem5/configs/learning_gem5/part1/simple.py) we’re ready to run gem5. gem5 can take many parameters, but requires just one positional argument, the simulation script. So, we can simply run gem5 from the root gem5 directory as: build/X86/gem5.opt configs/tutorial/simple.py The output should be: gem5 Simulator System.  http://gem5.org gem5 is copyrighted software; use the --copyright option for details. gem5 compiled Mar 16 2018 10:24:24 gem5 started Mar 16 2018 15:53:27 gem5 executing on amarillo, pid 41697 command line: build/X86/gem5.opt configs/tutorial/simple.py Global frequency set at 1000000000000 ticks per second warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes) 0: system.remote_gdb: listening for remote gdb on port 7000 Beginning simulation! info: Entering event queue @ 0.  Starting simulation... Hello world! Exiting @ tick 507841000 because exiting with last active thread context Parameters in the configuration file can be changed and the results should be different. For instance, if you double the system clock, the simulation should finish faster. Or, if you change the DDR controller to DDR4, the performance should be better. Additionally, you can change the CPU model to MinorCPU to model an in-order CPU, or DerivO3CPU to model an out-of-order CPU. 
However, note that DerivO3CPU currently does not work with simple.py, because DerivO3CPU requires a system with separate instruction and data caches (DerivO3CPU does work with the configuration in the next section). Next, we will add caches to our configuration file to model a more complex system.",
        "url": "/documentation/learning_gem5/part1/simple_config/"
      }
      ,
    
      "documentation-learning-gem5-part2-debugging": {
        "title": "Debugging gem5",
        "content": "Debugging gem5In the previous chapters we covered how tocreate a very simple SimObject. In this chapter, we will replace thesimple print to stdout with gem5’s debugging support.gem5 provides support for printf-style tracing/debugging of your codevia debug flags. These flags allow every component to have manydebug-print statements, without all of them enabled at the same time.When running gem5, you can specify which debug flags to enable from thecommand line.Using debug flagsFor instance, when running the first simple.py script fromsimple-config-chapter, if you enable the DRAM debug flag, you get thefollowing output. Note that this generates a lot of output to theconsole (about 7 MB).    build/X86/gem5.opt --debug-flags=DRAM configs/learning_gem5/part1/simple.py | head -n 50gem5 Simulator System.  http://gem5.orgDRAM device capacity (gem5 is copyrighted software; use the --copyright option for details.gem5 compiled Jan  3 2017 16:03:38gem5 started Jan  3 2017 16:09:53gem5 executing on chinook, pid 19223command line: build/X86/gem5.opt --debug-flags=DRAM configs/learning_gem5/part1/simple.pyGlobal frequency set at 1000000000000 ticks per second      0: system.mem_ctrl: Memory capacity 536870912 (536870912) bytes      0: system.mem_ctrl: Row buffer size 8192 bytes with 128 columns per row buffer      0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000Beginning simulation!info: Entering event queue @ 0.  Starting simulation...      
0: system.mem_ctrl: recvTimingReq: request ReadReq addr 400 size 8      0: system.mem_ctrl: Read queue limit 32, current size 0, entries needed 1      0: system.mem_ctrl: Address: 400 Rank 0 Bank 0 Row 0      0: system.mem_ctrl: Read queue limit 32, current size 0, entries needed 1      0: system.mem_ctrl: Adding to read queue      0: system.mem_ctrl: Request scheduled immediately      0: system.mem_ctrl: Single request, going to a free rank      0: system.mem_ctrl: Timing access to addr 400, rank/bank/row 0 0 0      0: system.mem_ctrl: Activate at tick 0      0: system.mem_ctrl: Activate bank 0, rank 0 at tick 0, now got 1 active      0: system.mem_ctrl: Access to 400, ready at 46250 bus busy until 46250.  46250: system.mem_ctrl: processRespondEvent(): Some req has reached its readyTime  46250: system.mem_ctrl: number of read entries for rank 0 is 0  46250: system.mem_ctrl: Responding to Address 400..   46250: system.mem_ctrl: Done  77000: system.mem_ctrl: recvTimingReq: request ReadReq addr 400 size 8  77000: system.mem_ctrl: Read queue limit 32, current size 0, entries needed 1  77000: system.mem_ctrl: Address: 400 Rank 0 Bank 0 Row 0  77000: system.mem_ctrl: Read queue limit 32, current size 0, entries needed 1  77000: system.mem_ctrl: Adding to read queue  77000: system.mem_ctrl: Request scheduled immediately  77000: system.mem_ctrl: Single request, going to a free rank  77000: system.mem_ctrl: Timing access to addr 400, rank/bank/row 0 0 0  77000: system.mem_ctrl: Access to 400, ready at 101750 bus busy until 101750. 101750: system.mem_ctrl: processRespondEvent(): Some req has reached its readyTime 101750: system.mem_ctrl: number of read entries for rank 0 is 0 101750: system.mem_ctrl: Responding to Address 400..  
101750: system.mem_ctrl: Done 132000: system.mem_ctrl: recvTimingReq: request ReadReq addr 400 size 8 132000: system.mem_ctrl: Read queue limit 32, current size 0, entries needed 1 132000: system.mem_ctrl: Address: 400 Rank 0 Bank 0 Row 0 132000: system.mem_ctrl: Read queue limit 32, current size 0, entries needed 1 132000: system.mem_ctrl: Adding to read queue 132000: system.mem_ctrl: Request scheduled immediately 132000: system.mem_ctrl: Single request, going to a free rank 132000: system.mem_ctrl: Timing access to addr 400, rank/bank/row 0 0 0 132000: system.mem_ctrl: Access to 400, ready at 156750 bus busy until 156750. 156750: system.mem_ctrl: processRespondEvent(): Some req has reached its readyTime 156750: system.mem_ctrl: number of read entries for rank 0 is 0 Or, you may want to debug based on the exact instruction the CPU is executing. For this, the Exec debug flag may be useful. This debug flag shows details of how each instruction is executed by the simulated CPU.    build/X86/gem5.opt --debug-flags=Exec configs/learning_gem5/part1/simple.py | head -n 50 gem5 Simulator System.  http://gem5.org gem5 is copyrighted software; use the --copyright option for details. gem5 compiled Jan  3 2017 16:03:38 gem5 started Jan  3 2017 16:11:47 gem5 executing on chinook, pid 19234 command line: build/X86/gem5.opt --debug-flags=Exec configs/learning_gem5/part1/simple.py Global frequency set at 1000000000000 ticks per second      0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000 warn: ClockedObject: More than one power state change request encountered within the same simulation tick Beginning simulation! info: Entering event queue @ 0.  Starting simulation...  
77000: system.cpu T0 : @_start    : xor   rbp, rbp  77000: system.cpu T0 : @_start.0  :   XOR_R_R : xor   rbp, rbp, rbp : IntAlu :  D=0x0000000000000000 132000: system.cpu T0 : @_start+3    : mov r9, rdx 132000: system.cpu T0 : @_start+3.0  :   MOV_R_R : mov   r9, r9, rdx : IntAlu :  D=0x0000000000000000 187000: system.cpu T0 : @_start+6    : pop rsi 187000: system.cpu T0 : @_start+6.0  :   POP_R : ld   t1, SS:[rsp] : MemRead :  D=0x0000000000000001 A=0x7fffffffee30 250000: system.cpu T0 : @_start+6.1  :   POP_R : addi   rsp, rsp, 0x8 : IntAlu :  D=0x00007fffffffee38 250000: system.cpu T0 : @_start+6.2  :   POP_R : mov   rsi, rsi, t1 : IntAlu :  D=0x0000000000000001 360000: system.cpu T0 : @_start+7    : mov rdx, rsp 360000: system.cpu T0 : @_start+7.0  :   MOV_R_R : mov   rdx, rdx, rsp : IntAlu :  D=0x00007fffffffee38 415000: system.cpu T0 : @_start+10    : and    rax, 0xfffffffffffffff0 415000: system.cpu T0 : @_start+10.0  :   AND_R_I : limm   t1, 0xfffffffffffffff0 : IntAlu :  D=0xfffffffffffffff0 415000: system.cpu T0 : @_start+10.1  :   AND_R_I : and   rsp, rsp, t1 : IntAlu :  D=0x0000000000000000 470000: system.cpu T0 : @_start+14    : push   rax 470000: system.cpu T0 : @_start+14.0  :   PUSH_R : st   rax, SS:[rsp + 0xfffffffffffffff8] : MemWrite :  D=0x0000000000000000 A=0x7fffffffee28 491000: system.cpu T0 : @_start+14.1  :   PUSH_R : subi   rsp, rsp, 0x8 : IntAlu :  D=0x00007fffffffee28 546000: system.cpu T0 : @_start+15    : push   rsp 546000: system.cpu T0 : @_start+15.0  :   PUSH_R : st   rsp, SS:[rsp + 0xfffffffffffffff8] : MemWrite :  D=0x00007fffffffee28 A=0x7fffffffee20 567000: system.cpu T0 : @_start+15.1  :   PUSH_R : subi   rsp, rsp, 0x8 : IntAlu :  D=0x00007fffffffee20 622000: system.cpu T0 : @_start+16    : mov    r15, 0x40a060 622000: system.cpu T0 : @_start+16.0  :   MOV_R_I : limm   r8, 0x40a060 : IntAlu :  D=0x000000000040a060 732000: system.cpu T0 : @_start+23    : mov    rdi, 0x409ff0 732000: system.cpu T0 : @_start+23.0  :   MOV_R_I : 
limm   rcx, 0x409ff0 : IntAlu :  D=0x0000000000409ff0 842000: system.cpu T0 : @_start+30    : mov    rdi, 0x400274 842000: system.cpu T0 : @_start+30.0  :   MOV_R_I : limm   rdi, 0x400274 : IntAlu :  D=0x0000000000400274 952000: system.cpu T0 : @_start+37    : call   0x9846 952000: system.cpu T0 : @_start+37.0  :   CALL_NEAR_I : limm   t1, 0x9846 : IntAlu :  D=0x0000000000009846 952000: system.cpu T0 : @_start+37.1  :   CALL_NEAR_I : rdip   t7, %ctrl153,  : IntAlu :  D=0x00000000004001ba 952000: system.cpu T0 : @_start+37.2  :   CALL_NEAR_I : st   t7, SS:[rsp + 0xfffffffffffffff8] : MemWrite :  D=0x00000000004001ba A=0x7fffffffee18 973000: system.cpu T0 : @_start+37.3  :   CALL_NEAR_I : subi   rsp, rsp, 0x8 : IntAlu :  D=0x00007fffffffee18 973000: system.cpu T0 : @_start+37.4  :   CALL_NEAR_I : wrip   , t7, t1 : IntAlu : 1042000: system.cpu T0 : @__libc_start_main    : push   r15 1042000: system.cpu T0 : @__libc_start_main.0  :   PUSH_R : st   r15, SS:[rsp + 0xfffffffffffffff8] : MemWrite :  D=0x0000000000000000 A=0x7fffffffee10 1063000: system.cpu T0 : @__libc_start_main.1  :   PUSH_R : subi   rsp, rsp, 0x8 : IntAlu :  D=0x00007fffffffee10 1118000: system.cpu T0 : @__libc_start_main+2    : movsxd   rax, rsi 1118000: system.cpu T0 : @__libc_start_main+2.0  :   MOVSXD_R_R : sexti   rax, rsi, 0x1f : IntAlu :  D=0x0000000000000001 1173000: system.cpu T0 : @__libc_start_main+5    : mov  r15, r9 1173000: system.cpu T0 : @__libc_start_main+5.0  :   MOV_R_R : mov   r15, r15, r9 : IntAlu :  D=0x0000000000000000 1228000: system.cpu T0 : @__libc_start_main+8    : push r14 In fact, the Exec flag is actually an agglomeration of multiple debug flags. You can see this, and all of the available debug flags, by running gem5 with the --debug-help parameter.    
build/X86/gem5.opt --debug-help Base Flags: Activity: None AddrRanges: None Annotate: State machine annotation debugging AnnotateQ: State machine annotation queue debugging AnnotateVerbose: Dump all state machine annotation details BaseXBar: None Branch: None Bridge: None CCRegs: None CMOS: Accesses to CMOS devices Cache: None CachePort: None CacheRepl: None CacheTags: None CacheVerbose: None Checker: None Checkpoint: None ClockDomain: None ... Compound Flags: AnnotateAll: All Annotation flags    Annotate, AnnotateQ, AnnotateVerbose CacheAll: None    Cache, CachePort, CacheRepl, CacheVerbose, HWPrefetch DiskImageAll: None    DiskImageRead, DiskImageWrite ... XBar: None    BaseXBar, CoherentXBar, NoncoherentXBar, SnoopFilter Adding a new debug flag In the previous chapters, we used a simple std::cout to print from our SimObject. While it is possible to use the normal C/C++ I/O in gem5, it is highly discouraged. So, we are now going to replace this and use gem5’s debugging facilities instead. When creating a new debug flag, we first have to declare it in a SConscript file. Add the following to the SConscript file in the directory with your hello object code (src/learning_gem5/). DebugFlag('Hello') This declares a debug flag of “Hello”. Now, we can use this in debug statements in our SimObject. By declaring the flag in the SConscript file, a debug header is automatically generated that allows us to use the debug flag. The header file is in the debug directory and has the same name (and capitalization) as what we declare in the SConscript file. Therefore, we need to include the automatically generated header file in any files where we plan to use the debug flag. In the hello_object.cc file, we need to include the header file. #include \"debug/Hello.hh\" Now that we have included the necessary header file, let’s replace the std::cout call with a debug statement like so. DPRINTF(Hello, \"Created the hello object\\n\"); DPRINTF is a C++ macro. 
The first parameter is a debug flag that has been declared in a SConscript file. We can use the flag Hello since we declared it in the src/learning_gem5/SConscript file. The rest of the arguments are variable and can be anything you would pass to a printf statement. Now, if you recompile gem5 and run it with the “Hello” debug flag, you get the following result.    build/X86/gem5.opt --debug-flags=Hello configs/learning_gem5/part2/run_hello.py gem5 Simulator System.  http://gem5.org gem5 is copyrighted software; use the --copyright option for details. gem5 compiled Jan  4 2017 09:40:10 gem5 started Jan  4 2017 09:41:01 gem5 executing on chinook, pid 29078 command line: build/X86/gem5.opt --debug-flags=Hello configs/learning_gem5/part2/run_hello.py Global frequency set at 1000000000000 ticks per second      0: hello: Created the hello object Beginning simulation! info: Entering event queue @ 0.  Starting simulation... Exiting @ tick 18446744073709551615 because simulate() limit reached You can find the updated SConscript file here and the updated hello object code here. Debug output For each dynamic DPRINTF execution, three things are printed to stdout. First, the current tick when the DPRINTF is executed. Second, the name of the SimObject that called DPRINTF. This name is usually the Python variable name from the Python config file. However, the name is whatever the SimObject name() function returns. Finally, you see whatever format string you passed to the DPRINTF function. You can control where the debug output goes with the --debug-file parameter. By default, all of the debugging output is printed to stdout. However, you can redirect the output to any file. The file is stored relative to the main gem5 output directory, not the current working directory. Using functions other than DPRINTF DPRINTF is the most commonly used debugging function in gem5. However, gem5 provides a number of other functions that are useful in specific circumstances.  
These functions are like the previous functions DDUMP, DPRINTF, and DPRINTFR, except they do not take a flag as a parameter. Therefore, these statements will always print whenever debugging is enabled. All of these functions are only enabled if you compile gem5 in “opt” or “debug” mode. All other modes use empty placeholder macros for the above functions. Therefore, if you want to use debug flags, you must use either “gem5.opt” or “gem5.debug”.",
        "url": "/documentation/learning_gem5/part2/debugging/"
      }
      ,
    
      "documentation-learning-gem5-part2-environment": {
        "title": "Setting up your development environment",
        "content": "Setting up your development environmentThis is going to talk about getting started developing gem5.gem5-style guidelinesWhen modifying any open source project, it is important to follow theproject’s style guidelines. Details on gem5 style can be found on thegem5 wiki page.To help you conform to the style guidelines, gem5 includes a scriptwhich runs whenever you commit a changeset in git. This script should beautomatically added to your .git/config file by SCons the first time youbuild gem5. Please do not ignore these warnings/errors. However, in therare case where you are trying to commit a file that doesn’t conform tothe gem5 style guidelines (e.g., something from outside the gem5 sourcetree) you can use the git option --no-verify to skip running the stylechecker.The key takeaways from the style guide are:  Use 4 spaces, not tabs  Sort the includes  Use capitalized camel case for class names, camel case for membervariables and functions, and snake case for local variables.  Document your codegit branchesMost people developing with gem5 use the branch feature of git to tracktheir changes. This makes it quite simple to commit your changes back togem5. Additionally, using branches can make it easier to update gem5with new changes that other people make while keeping your own changesseparate. The Git book has a greatchapterdescribing the details of how to use branches.",
        "url": "/documentation/learning_gem5/part2/environment/"
      }
      ,
    
      "documentation-learning-gem5-part2-events": {
        "title": "Event-driven programming",
        "content": "Event-driven programminggem5 is an event-driven simulator. In this chapter, we will explore howto create and schedule events. We will be building from the simpleHelloObject from hello-simobject-chapter.Creating a simple event callbackIn gem5’s event-driven model, each event has a callback function inwhich the event is processed. Generally, this is a class that inheritsfrom :cppEvent. However, gem5 provides a wrapper function for creatingsimple events.In the header file for our HelloObject, we simply need to declare anew function that we want to execute every time the event fires(processEvent()). This function must take no parameters and returnnothing.Next, we add an Event instance. In this case, we will use anEventFunctionWrapper which allows us to execute any function.We also add a startup() function that will be explained below.class HelloObject : public SimObject{  private:    void processEvent();    EventFunctionWrapper event;  public:    HelloObject(HelloObjectParams *p);    void startup();};Next, we must construct this event in the constructor of HelloObject.The EventFuntionWrapper takes two parameters, a function to executeand a name. The name is usually the name of the SimObject that owns theevent. When printing the name, there will be an automatic“.wrapped_function_event” appended to the end of the name.The first parameter is simply a function that takes no parameters andhas no return value (std::function&lt;void(void)&gt;). Usually, this is asimple lambda function that calls a member function. However, it can beany function you want. Below, we captute this in the lambda ([this])so we can call member functions of the instance of the class.HelloObject::HelloObject(HelloObjectParams *params) :    SimObject(params), event([this]{processEvent();}, name()){    DPRINTF(Hello, \"Created the hello object\\n\");}We also must define the implementation of the process function. 
In this case, we’ll simply print something if we are debugging. void HelloObject::processEvent() {    DPRINTF(Hello, \"Hello world! Processing the event!\\n\"); } Scheduling events Finally, for the event to be processed, we first have to schedule the event. For this we use the schedule function. This function schedules some instance of an Event for some time in the future (event-driven simulation does not allow events to execute in the past). We will initially schedule the event in the startup() function we added to the HelloObject class. The startup() function is where SimObjects are allowed to schedule internal events. It does not get executed until the simulation begins for the first time (i.e., the simulate() function is called from a Python config file). void HelloObject::startup() {    schedule(event, 100); } Here, we simply schedule the event to execute at tick 100. Normally, you would use some offset from curTick(), but since we know the startup() function is called when the time is currently 0, we can use an explicit tick value. The output when you run gem5 with the “Hello” debug flag is now gem5 Simulator System.  http://gem5.org gem5 is copyrighted software; use the --copyright option for details. gem5 compiled Jan  4 2017 11:01:46 gem5 started Jan  4 2017 13:41:38 gem5 executing on chinook, pid 1834 command line: build/X86/gem5.opt --debug-flags=Hello configs/learning_gem5/part2/run_hello.py Global frequency set at 1000000000000 ticks per second      0: hello: Created the hello object Beginning simulation! info: Entering event queue @ 0.  Starting simulation...    100: hello: Hello world! Processing the event! Exiting @ tick 18446744073709551615 because simulate() limit reached More event scheduling We can also schedule new events within an event process action. For instance, we are going to add a latency parameter to the HelloObject and a parameter for how many times to fire the event. 
In the next chapter we will make these parameters accessible from the Python config files. To the HelloObject class declaration, add a member variable for the latency and number of times to fire. class HelloObject : public SimObject {  private:    void processEvent();    EventFunctionWrapper event;    Tick latency;    int timesLeft;  public:    HelloObject(HelloObjectParams *p);    void startup(); }; Then, in the constructor add default values for the latency and timesLeft. HelloObject::HelloObject(HelloObjectParams *params) :    SimObject(params), event([this]{processEvent();}, name()),    latency(100), timesLeft(10) {    DPRINTF(Hello, \"Created the hello object\\n\"); } Finally, update startup() and processEvent(). void HelloObject::startup() {    schedule(event, latency); } void HelloObject::processEvent() {    timesLeft--;    DPRINTF(Hello, \"Hello world! Processing the event! %d left\\n\", timesLeft);    if (timesLeft &lt;= 0) {        DPRINTF(Hello, \"Done firing!\\n\");    } else {        schedule(event, curTick() + latency);    } } Now, when we run gem5, the event should fire 10 times, and the simulation will end after 1000 ticks. The output should now look like the following. gem5 Simulator System.  http://gem5.org gem5 is copyrighted software; use the --copyright option for details. gem5 compiled Jan  4 2017 13:53:35 gem5 started Jan  4 2017 13:54:11 gem5 executing on chinook, pid 2326 command line: build/X86/gem5.opt --debug-flags=Hello configs/learning_gem5/part2/run_hello.py Global frequency set at 1000000000000 ticks per second      0: hello: Created the hello object Beginning simulation! info: Entering event queue @ 0.  Starting simulation...    100: hello: Hello world! Processing the event! 9 left    200: hello: Hello world! Processing the event! 8 left    300: hello: Hello world! Processing the event! 7 left    400: hello: Hello world! Processing the event! 6 left    500: hello: Hello world! Processing the event! 5 left    600: hello: Hello world! Processing the event! 
4 left    700: hello: Hello world! Processing the event! 3 left    800: hello: Hello world! Processing the event! 2 left    900: hello: Hello world! Processing the event! 1 left   1000: hello: Hello world! Processing the event! 0 left   1000: hello: Done firing! Exiting @ tick 18446744073709551615 because simulate() limit reached You can find the updated header file here and the implementation file here.",
        "url": "/documentation/learning_gem5/part2/events/"
      }
      ,
    
      "documentation-learning-gem5-part2-helloobject": {
        "title": "Creating a very simple SimObject",
        "content": "Creating a very simple SimObjectAlmost all objects in gem5 inherit from the base SimObject type.SimObjects export the main interfaces to all objects in gem5. SimObjectsare wrapped C++ objects that are accessible from the Pythonconfiguration scripts.SimObjects can have many parameters, which are set via the Pythonconfiguration files. In addition to simple parameters like integers andfloating point numbers, they can also have other SimObjects asparameters. This allows you to create complex system hierarchies, likereal machines.In this chapter, we will walk through creating a simple “HelloWorld”SimObject. The goal is to introduce you to how SimObjects are createdand the required boilerplate code for all SimObjects. We will alsocreate a simple Python configuration script which instantiates ourSimObject.In the next few chapters, we will take this simple SimObject and expandon it to include debugging support, dynamicevents, and parameters.  Using git branches  It is common to use a new git branch for each new feature you add togem5.  The first step when adding a new feature or modifying something ingem5, is to create a new branch to store your changes. Details on gitbranches can be found in the Git book_.  git checkout -b hello-simobject  Step 1: Create a Python class for your new SimObjectEach SimObject has a Python class which is associated with it. ThisPython class describes the parameters of your SimObject that can becontrolled from the Python configuration files. For our simpleSimObject, we are just going to start out with no parameters. 
Thus, we simply need to declare a new class for our SimObject and set its name and the C++ header that will define the C++ class for the SimObject. We can create a file, HelloObject.py, in src/learning_gem5. from m5.params import * from m5.SimObject import SimObject class HelloObject(SimObject):    type = 'HelloObject'    cxx_header = \"learning_gem5/hello_object.hh\" It is not required that the type be the same as the name of the class, but it is convention. The type is the C++ class that you are wrapping with this Python SimObject. Only in special circumstances should the type and the class name be different. The cxx_header is the file that contains the declaration of the class used as the type parameter. Again, the convention is to use the name of the SimObject with all lowercase and underscores, but this is only convention. You can specify any header file here. Step 2: Implement your SimObject in C++ Next, we need to create hello_object.hh and hello_object.cc which will implement the hello object. We’ll start with the header file for our C++ object. By convention, gem5 wraps all header files in #ifndef/#endif with the name of the file and the directory it’s in so there are no circular includes. The only thing we need to do in the file is to declare our class. Since HelloObject is a SimObject, it must inherit from the C++ SimObject class. Most of the time, your SimObject’s parent will be a subclass of SimObject, not SimObject itself. The SimObject class specifies many virtual functions. However, none of these functions are pure virtual, so in the simplest case, there is no need to implement any functions except for the constructor. The constructor for all SimObjects assumes it will take a parameter object. This parameter object is automatically created by the build system and is based on the Python class for the SimObject, like the one we created above. The name for this parameter type is generated automatically from the name of your object. 
For our “HelloObject” theparameter type’s name is “HelloObject*Params*”.The code required for our simple header file is listed below.#ifndef __LEARNING_GEM5_HELLO_OBJECT_HH__#define __LEARNING_GEM5_HELLO_OBJECT_HH__#include \"params/HelloObject.hh\"#include \"sim/sim_object.hh\"class HelloObject : public SimObject{  public:    HelloObject(HelloObjectParams *p);};#endif // __LEARNING_GEM5_HELLO_OBJECT_HH__Next, we need to implement two functions in the .cc file, not justone. The first function, is the constructor for the HelloObject. Herewe simply pass the parameter object to the SimObject parent and print“Hello world!”Normally, you would never use std::cout in gem5. Instead, youshould use debug flags. In the next chapter, wewill modify this to use debug flags instead. However, for now, we’llsimply use std::cout because it is simple.#include \"learning_gem5/hello_object.hh\"#include &lt;iostream&gt;HelloObject::HelloObject(HelloObjectParams *params) :    SimObject(params){    std::cout &lt;&lt; \"Hello World! From a SimObject!\" &lt;&lt; std::endl;}There is another function that we have to implement as well for theSimObject to be complete. We must implement one function for theparameter type that is implicitly created from the SimObject Pythondeclaration, namely, the create function. This function simply returnsa new instantiation of the SimObject. Usually this function is verysimple (as below).HelloObject*HelloObjectParams::create(){    return new HelloObject(this);}If you forget to add the create function for your SimObject, you willget a linker error when you compile. 
It will look something like thefollowing.build/X86/python/m5/internal/param_HelloObject_wrap.o: In function `_wrap_HelloObjectParams_create':/local.chinook/gem5/gem5-tutorial/gem5/build/X86/python/m5/internal/param_HelloObject_wrap.cc:3096: undefined reference to `HelloObjectParams::create()'collect2: error: ld returned 1 exit statusscons: *** [build/X86/gem5.opt] Error 1scons: building terminated because of errors.This undefined reference to `HelloObjectParams::create()' meansyou need to implement the create function for your SimObject.Step 3: Register the SimObject and C++ fileIn order for the C++ file to be compiled and the Python file to beparsed we need to tell the build system about these files. gem5 usesSCons as the build system, so you simply have to create a SConscriptfile in the directory with the code for the SimObject. If there isalready a SConscript file for that directory, simply add the followingdeclarations to that file.This file is simply a normal Python file, so you can write anyPython code you want in this file. Some of the scripting can becomequite complicated. gem5 leverages this to automatically create code forSimObjects and to compile the domain-specific languages like SLICC andthe ISA language.In the SConscript file, there are a number of functions automaticallydefined after you import them. See the section on that…To get your new SimObject to compile, you simply need to create a newfile with the name “SConscript” in the src/learning_gem5 directory. Inthis file, you have to declare the SimObject and the .cc file. 
Belowis the required code.Import('*')SimObject('HelloObject.py')Source('hello_object.cc')Step 4: (Re)-build gem5To compile and link your new files you simply need to recompile gem5.The below example assumes you are using the x86 ISA, but nothing in ourobject requires an ISA so, this will work with any of gem5’s ISAs.scons build/X86/gem5.optStep 5: Create the config scripts to use your new SimObjectNow that you have implemented a SimObject, and it has been compiled intogem5, you need to create or modify a Python config file to instantiateyour object. Since your object is very simple a system object is notrequired! CPUs are not needed, or caches, or anything, except a Rootobject. All gem5 instances require a Root object.Walking through creating a very simple configuration script, first,import m5 and all of the objects you have compiled.import m5from m5.objects import *Next, you have to instantiate the Root object, as required by all gem5instances.root = Root(full_system = False)Now, you can instantiate the HelloObject you created. All you need todo is call the Python “constructor”. Later, we will look at how tospecify parameters via the Python constructor. In addition to creatingan instantiation of your object, you need to make sure that it is achild of the root object. Only SimObjects that are children of theRoot object are instantiated in C++.root.hello = HelloObject()Finally, you need to call instantiate on the m5 module and actuallyrun the simulation!m5.instantiate()print(\"Beginning simulation!\")exit_event = m5.simulate()print('Exiting @ tick {} because {}'      .format(m5.curTick(), exit_event.getCause()))The output should look something like the followinggem5 Simulator System.  
http://gem5.orggem5 is copyrighted software; use the --copyright option for details.gem5 compiled May  4 2016 11:37:41gem5 started May  4 2016 11:44:28gem5 executing on mustardseed.cs.wisc.edu, pid 22480command line: build/X86/gem5.opt configs/learning_gem5/run_hello.pyGlobal frequency set at 1000000000000 ticks per secondHello World! From a SimObject!Beginning simulation!info: Entering event queue @ 0.  Starting simulation...Exiting @ tick 18446744073709551615 because simulate() limit reachedCongrats! You have written your first SimObject. In the next chapters,we will extend this SimObject and explore what you can do withSimObjects.",
        "url": "/documentation/learning_gem5/part2/helloobject/"
      }
      ,
    
      "documentation-learning-gem5-part2-memoryobject": {
        "title": "Creating SimObjects in the memory system",
        "content": "Creating SimObjects in the memory systemIn this chapter, we will create a simple memory object that sits betweenthe CPU and the memory bus. In the next chapterwe will take this simple memory object and add some logic to it to makeit a very simple blocking uniprocessor cache.gem5 master and slave portsBefore diving into the implementation of a memory object, we shouldfirst understand gem5’s master and slave port interface. As previouslydiscussed in simple-config-chapter, all memory objects are connectedtogether via ports. These ports provide a rigid interface between thesememory objects.These ports implement three different memory system modes: timing,atomic, and functional. The most important mode is timing mode. Timingmode is the only mode that produces correct simulation results. Theother modes are only used in special circumstances.Atomic mode is useful for fastforwarding simulation to a region ofinterest and warming up the simulator. This mode assumes that no eventswill be generated in the memory system. Instead, all of the memoryrequests execute through a single long callchain. It is not required toimplement atomic accesses for a memory object unless it will be usedduring fastforward or during simulator warmup.Functional mode is better described as debugging mode. Functionalmode is used for things like reading data from the host into thesimulator memory. It is used heavily in syscall emulation mode. Forinstance, functional mode is used to load the binary in theprocess.cmd from the host into the simulated system’s memory so thesimulated system can access it. Functional accesses should return themost up-to-date data on a read, no matter where the data is, and shouldupdate all possible valid data on a write (e.g., in a system with cachesthere may be multiple valid cache blocks with the same address).PacketsIn gem5, Packets are sent across ports. A Packet is made up of aMemReq which is the memory request object. 
The MemReq holdsinformation about the original request that initiated the packet such asthe requestor, the address, and the type of request (read, write, etc.).Packets also have a MemCmd, which is the current command of thepacket. This command can change throughout the life of the packet (e.g.,requests turn into responses once the memory command is satisfied). Themost common MemCmd are ReadReq (read request), ReadResp (readresponse), WriteReq (write request), WriteResp (write response).There are also writeback requests (WritebackDirty, WritebackClean)for caches and many other command types.Packets also either keep the data for the request, or a pointer to thedata. There are options when creating the packet whether the data isdynamic (explicitly allocated and deallocated), or static (allocated anddeallocated by the packet object).Finally, packets are used in the classic caches as the unit to trackcoherency. Therefore, much of the packet code is specific to the classiccache coherence protocol. However, packets are used for allcommunication between memory objects in gem5, even if they are notdirectly involved in coherence (e.g., DRAM controllers and the CPUmodels).All of the port interface functions accept a Packet pointer as aparameter. Since this pointer is so common, gem5 includes a typedef forit: PacketPtr.Port interfaceThere are two types of ports in gem5: master ports and slave ports.Whenever you implement a memory object, you will implement at least oneof these types of ports. To do this, you create a new class thatinherits from either MasterPort or SlavePort for master and slaveports, respectively. Master ports send requests (and receive response),and slave ports receive requests (and send responses).master-slave-1-fig outlines the simplest interaction between a masterand slave port. This figure shows the interaction in timing mode. 
The other modes are much simpler and use a simple callchain between the master and the slave. As mentioned above, all of the port interfaces require a PacketPtr as a parameter. Each of these functions (sendTimingReq, recvTimingReq, etc.) accepts a single parameter, a PacketPtr. This packet is the request or response to send or receive. To send a request packet, the master calls sendTimingReq. In turn (and in the same callchain), the function recvTimingReq is called on the slave with the same PacketPtr as its sole parameter. The recvTimingReq has a return type of bool. This boolean return value is directly returned to the calling master. A return value of true signifies that the packet was accepted by the slave. A return value of false, on the other hand, means that the slave was unable to accept and the request must be retried sometime in the future. In master-slave-1-fig, first, the master sends a timing request by calling sendTimingReq, which in turn calls recvTimingReq. The slave returns true from recvTimingReq, which is returned from the call to sendTimingReq. The master continues executing, and the slave does whatever is necessary to complete the request (e.g., if it is a cache, it looks up the tags to see if there is a match to the address in the request). Once the slave completes the request, it can send a response to the master. The slave calls sendTimingResp with the response packet (this should be the same PacketPtr as the request, but it should now be a response packet). In turn, the master function recvTimingResp is called. The master’s recvTimingResp function returns true, which is the return value of sendTimingResp in the slave. Thus, the interaction for that request is complete. Later in master-slave-example-section we will show the example code for these functions. It is possible that the master or slave is busy when they receive a request or a response. 
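To make the timing handshake and retry protocol concrete, here is a toy sketch in plain Python. This is not gem5 code and not the gem5 port API; the class and method names below are hypothetical stand-ins that merely mirror the interface described above (recv_timing_req playing the role of recvTimingReq, and so on):

```python
# Toy model of the timing-mode handshake: the master sends a request,
# the slave may reject it (returns False), and later issues a retry
# callback so the master can resend. Names are illustrative only.

class ToySlave:
    def __init__(self):
        self.busy = True          # start busy to force one retry
        self.master = None
        self.received = None

    def recv_timing_req(self, pkt):
        if self.busy:
            return False          # reject: master must hold the packet
        self.received = pkt
        return True

    def free_up(self):
        # Slave becomes ready and signals the master to retry,
        # analogous to calling sendRetryReq in gem5.
        self.busy = False
        self.master.recv_req_retry()

class ToyMaster:
    def __init__(self, slave):
        self.slave = slave
        self.blocked_pkt = None
        slave.master = self

    def send_timing_req(self, pkt):
        # The sender, not the receiver, tracks a rejected packet.
        if not self.slave.recv_timing_req(pkt):
            self.blocked_pkt = pkt

    def recv_req_retry(self):
        pkt, self.blocked_pkt = self.blocked_pkt, None
        self.slave.recv_timing_req(pkt)

slave = ToySlave()
master = ToyMaster(slave)
master.send_timing_req('ReadReq@0x190')   # rejected: slave is busy
assert master.blocked_pkt is not None
slave.free_up()                            # retry succeeds this time
assert slave.received == 'ReadReq@0x190'
```

Note that the retry callback runs in a single call stack, which is exactly why the prose above warns about infinite-recursion bugs in real memory objects.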
master-slave-2-fig shows the case where the slave is busy when the original request was sent. In this case, the slave returns false from the recvTimingReq function. When a master receives false after calling sendTimingReq, it must wait until its recvReqRetry function is executed. Only when this function is called is the master allowed to retry calling sendTimingReq. The above figure shows the timing request failing once, but it could fail any number of times. Note: it is up to the master to track the packet that fails, not the slave. The slave does not keep the pointer to the packet that fails. Similarly, master-slave-3-fig shows the case when the master is busy at the time the slave tries to send a response. In this case, the slave cannot call sendTimingResp until it receives a recvRespRetry. Importantly, in both of these cases, the retry codepath can be a single call stack. For instance, when the master calls sendRespRetry, recvTimingReq can also be called in the same call stack. Therefore, it is easy to incorrectly create an infinite recursion bug, or other bugs. It is important that, before a memory object sends a retry, it is ready at that instant to accept another packet. Simple memory object example In this section, we will build a simple memory object. Initially, it will simply pass requests through from the CPU-side (a simple CPU) to the memory-side (a simple memory bus). See Figure simple-memobj-figure. It will have a single master port, to send requests to the memory bus, and two CPU-side ports for the instruction and data cache ports of the CPU. In the next chapter &lt;simplecache-chapter&gt;, we will add the logic to make this object a cache. Declare the SimObject Just like when we were creating the simple SimObject in hello-simobject-chapter, the first step is to create a SimObject Python file. 
We will call this simple memory object SimpleMemobj and createthe SimObject Python file in src/learning_gem5/simple_memobj.from m5.params import *from m5.proxy import *from MemObject import MemObjectclass SimpleMemobj(MemObject):    type = 'SimpleMemobj'    cxx_header = \"learning_gem5/simple_memobj/simple_memobj.hh\"    inst_port = SlavePort(\"CPU side port, receives requests\")    data_port = SlavePort(\"CPU side port, receives requests\")    mem_side = MasterPort(\"Memory side port, sends requests\")For this object, we inherit from MemObject, not SimObject since weare creating an object that will interact with the memory system. TheMemObject class has two pure virtual functions that we will have todefine in our C++ implementation, getMasterPort and getSlavePort.This object’s parameters are three ports. Two ports for the CPU toconnect the instruction and data ports and a port to connect to thememory bus. These ports do not have a default value, and they have asimple description.It is important to remember the names of these ports. We will explicitlyuse these names when implementing SimpleMemobj and defining thegetMasterPort and getSlavePort functions.You can download the SimObject filehere.Of course, you also need to create a SConscript file in the newdirectory as well that declares the SimObject Python file. You candownload the SConscript filehere.Define the SimpleMemobj classNow, we create a header file for SimpleMemobj.class SimpleMemobj : public MemObject{  private:  public:    /** constructor     */    SimpleMemobj(SimpleMemobjParams *params);};Define a slave port typeNow, we need to define classes for our two kinds of ports: the CPU-sideand the memory-side ports. For this, we will declare these classesinside the SimpleMemobj class since no other object will ever usethese classes.Let’s start with the slave port, or the CPU-side port. We are going toinherit from the SlavePort class. 
The following is the required codeto override all of the pure virtual functions in the SlavePort class.class CPUSidePort : public SlavePort{  private:    SimpleMemobj *owner;  public:    CPUSidePort(const std::string&amp; name, SimpleMemobj *owner) :        SlavePort(name, owner), owner(owner)    { }    AddrRangeList getAddrRanges() const override;  protected:    Tick recvAtomic(PacketPtr pkt) override { panic(\"recvAtomic unimpl.\"); }    void recvFunctional(PacketPtr pkt) override;    bool recvTimingReq(PacketPtr pkt) override;    void recvRespRetry() override;};This object requires five functions to be defined.This object also has a single member variable, its owner, so it can callfunctions on that object.Define a master port typeNext, we need to define a master port type. This will be the memory-sideport which will forward request from the CPU-side to the rest of thememory system.class MemSidePort : public MasterPort{  private:    SimpleMemobj *owner;  public:    MemSidePort(const std::string&amp; name, SimpleMemobj *owner) :        MasterPort(name, owner), owner(owner)    { }  protected:    bool recvTimingResp(PacketPtr pkt) override;    void recvReqRetry() override;    void recvRangeChange() override;};This class only has three pure virtual functions that we must override.Defining the MemObject interfaceNow that we have defined these two new types CPUSidePort andMemSidePort, we can declare our three ports as part of SimpleMemobj.We also need to declare the two pure virtual functions in theMemObject class, getMasterPort and getSlavePort. 
These twofunctions are used by gem5 during the initialization phase to connectmemory objects together via ports.class SimpleMemobj : public MemObject{  private:    &lt;CPUSidePort declaration&gt;    &lt;MemSidePort declaration&gt;    CPUSidePort instPort;    CPUSidePort dataPort;    MemSidePort memPort;  public:    SimpleMemobj(SimpleMemobjParams *params);    BaseMasterPort&amp; getMasterPort(const std::string&amp; if_name,                                  PortID idx = InvalidPortID) override;    BaseSlavePort&amp; getSlavePort(const std::string&amp; if_name,                                PortID idx = InvalidPortID) override;};You can download the header file for the SimpleMemobjhere.Implementing basic MemObject functionsFor the constructor of SimpleMemobj, we will simply call theMemObject constructor. We also need to initialize all of the ports.Each port’s constructor takes two parameters: the name and a pointer toits owner, as we defined in the header file. The name can be any string,but by convention, it is the same name as in the Python SimObject file.SimpleMemobj::SimpleMemobj(SimpleMemobjParams *params) :    MemObject(params),    instPort(params-&gt;name + \".inst_port\", this),    dataPort(params-&gt;name + \".data_port\", this),    memPort(params-&gt;name + \".mem_side\", this){}Next, we need to implement the interfaces to get the ports. Thisinterface is made of two functions getMasterPort and getSlavePort.These functions take two parameters. The if_name is the Pythonvariable name of the interface for this object. In the case of themaster port it will be mem_side since this is what we declared as aMasterPort in the Python SimObject file.To implement getMasterPort, we compare the if_name and check to seeif it is mem_side as specified in our Python SimObject file. If it is,then we return the memPort object. If not, then we pass the requestname to our parent. 
However, it will be an error if we try to connect aslave port to any other named port since the parent class has no portsdefined.BaseMasterPort&amp;SimpleMemobj::getMasterPort(const std::string&amp; if_name, PortID idx){    if (if_name == \"mem_side\") {        return memPort;    } else {        return MemObject::getMasterPort(if_name, idx);    }}To implement getSlavePort, we similarly check if the if_name matcheseither of the names we defined for our slave ports in the PythonSimObject file. If the name is \"inst_port\", then we return theinstPort, and if the name is data_port we return the data port.BaseSlavePort&amp;SimpleMemobj::getSlavePort(const std::string&amp; if_name, PortID idx){    if (if_name == \"inst_port\") {        return instPort;    } else if (if_name == \"data_port\") {        return dataPort;    } else {        return MemObject::getSlavePort(if_name, idx);    }}Implementing slave and master port functionsThe implementation of both the slave and master port is relativelysimple. For the most part, each of the port functions just forwards theinformation to the main memory object (SimpleMemobj).Starting with two simple functions, getAddrRanges and recvFunctionalsimply call into the SimpleMemobj.AddrRangeListSimpleMemobj::CPUSidePort::getAddrRanges() const{    return owner-&gt;getAddrRanges();}voidSimpleMemobj::CPUSidePort::recvFunctional(PacketPtr pkt){    return owner-&gt;handleFunctional(pkt);}The implementation of these functions in the SimpleMemobj are equallysimple. These implementations just pass through the request to thememory side. 
We can use DPRINTF calls here to track what is happeningfor debug purposes as well.voidSimpleMemobj::handleFunctional(PacketPtr pkt){    memPort.sendFunctional(pkt);}AddrRangeListSimpleMemobj::getAddrRanges() const{    DPRINTF(SimpleMemobj, \"Sending new ranges\\n\");    return memPort.getAddrRanges();}Similarly for the MemSidePort, we need to implement recvRangeChangeand forward the request through the SimpleMemobj to the slave port.voidSimpleMemobj::MemSidePort::recvRangeChange(){    owner-&gt;sendRangeChange();}voidSimpleMemobj::sendRangeChange(){    instPort.sendRangeChange();    dataPort.sendRangeChange();}Implementing receiving requestsThe implementation of recvTimingReq is slightly more complicated. Weneed to check to see if the SimpleMemobj can accept the request. TheSimpleMemobj is a very simple blocking structure; we only allow asingle request outstanding at a time. Therefore, if we get a requestwhile another request is outstanding, the SimpleMemobj will block thesecond request.To simplify the implementation, the CPUSidePort stores all of theflow-control information for the port interface. Thus, we need to add anextra member variable, needRetry, to the CPUSidePort, a boolean thatstores whether we need to send a retry whenever the SimpleMemobjbecomes free. Then, if the SimpleMemobj is blocked on a request, weset that we need to send a retry sometime in the future.boolSimpleMemobj::CPUSidePort::recvTimingReq(PacketPtr pkt){    if (!owner-&gt;handleRequest(pkt)) {        needRetry = true;        return false;    } else {        return true;    }}To handle the request for the SimpleMemobj, we first check if theSimpleMemobj is already blocked waiting for a response to anotherrequest. If it is blocked, then we return false to signal the callingmaster port that we cannot accept the request right now. Otherwise, wemark the port as blocked and send the packet out of the memory port. 
For this, we can define a helper function in the MemSidePort object to hide the flow control from the SimpleMemobj implementation. We will assume the memPort handles all of the flow control and always return true from handleRequest since we were successful in consuming the request. bool SimpleMemobj::handleRequest(PacketPtr pkt){    if (blocked) {        return false;    }    DPRINTF(SimpleMemobj, \"Got request for addr %#x\\n\", pkt-&gt;getAddr());    blocked = true;    memPort.sendPacket(pkt);    return true;} Next, we need to implement the sendPacket function in the MemSidePort. This function will handle the flow control in case its peer slave port cannot accept the request. For this, we need to add a member to the MemSidePort to store the packet in case it is blocked. It is the responsibility of the sender to store the packet if the receiver cannot receive the request (or response). This function simply sends the packet by calling the function sendTimingReq. If the send fails, then this object stores the packet in the blockedPacket member variable so it can send the packet later (when it receives a recvReqRetry). This function also contains some defensive code to make sure there is not a bug and we never try to overwrite the blockedPacket variable incorrectly. void SimpleMemobj::MemSidePort::sendPacket(PacketPtr pkt){    panic_if(blockedPacket != nullptr, \"Should never try to send if blocked!\");    if (!sendTimingReq(pkt)) {        blockedPacket = pkt;    }} Next, we need to implement the code to resend the packet. In this function, we try to resend the packet by calling the sendPacket function we wrote above. void SimpleMemobj::MemSidePort::recvReqRetry(){    assert(blockedPacket != nullptr);    PacketPtr pkt = blockedPacket;    blockedPacket = nullptr;    sendPacket(pkt);} Implementing receiving responses The response codepath is similar to the receiving codepath. 
When the MemSidePort gets a response, we forward the response through the SimpleMemobj to the appropriate CPUSidePort. bool SimpleMemobj::MemSidePort::recvTimingResp(PacketPtr pkt){    return owner-&gt;handleResponse(pkt);} In the SimpleMemobj, the object should always be blocked when we receive a response, since the object is blocking. Before sending the packet back to the CPU side, we need to mark that the object is no longer blocked. This must be done before calling sendTimingResp. Otherwise, it is possible to get stuck in an infinite loop as it is possible that the master port has a single callchain between receiving a response and sending another request. After unblocking the SimpleMemobj, we check to see if the packet is an instruction or data packet and send it back across the appropriate port. Finally, since the object is now unblocked, we may need to notify the CPU side ports that they can now retry their requests that failed. bool SimpleMemobj::handleResponse(PacketPtr pkt){    assert(blocked);    DPRINTF(SimpleMemobj, \"Got response for addr %#x\\n\", pkt-&gt;getAddr());    blocked = false;    // Forward the response to the appropriate CPU-side port    if (pkt-&gt;req-&gt;isInstFetch()) {        instPort.sendPacket(pkt);    } else {        dataPort.sendPacket(pkt);    }    instPort.trySendRetry();    dataPort.trySendRetry();    return true;} Similar to how we implemented a convenience function for sending packets in the MemSidePort, we can implement a sendPacket function in the CPUSidePort to send the responses to the CPU side. This function calls sendTimingResp which will in turn call recvTimingResp on the peer master port. If this call fails and the peer port is currently blocked, then we store the packet to be sent later. void SimpleMemobj::CPUSidePort::sendPacket(PacketPtr pkt){    panic_if(blockedPacket != nullptr, \"Should never try to send if blocked!\");    if (!sendTimingResp(pkt)) {        blockedPacket = pkt;    }} We will send this blocked packet later when we receive a recvRespRetry. 
This function is exactly the same as the recvReqRetryabove and simply tries to resend the packet, which may be blocked again.voidSimpleMemobj::CPUSidePort::recvRespRetry(){    assert(blockedPacket != nullptr);    PacketPtr pkt = blockedPacket;    blockedPacket = nullptr;    sendPacket(pkt);}Finally, we need to implement the extra function trySendRetry for theCPUSidePort. This function is called by the SimpleMemobj wheneverthe SimpleMemobj may be unblocked. trySendRetry checks to see if aretry is needed which we marked in recvTimingReq whenever theSimpleMemobj was blocked on a new request. Then, if the retry isneeded, this function calls sendRetryReq, which in turn callsrecvReqRetry on the peer master port (the CPU in this case).voidSimpleMemobj::CPUSidePort::trySendRetry(){    if (needRetry &amp;&amp; blockedPacket == nullptr) {        needRetry = false;        DPRINTF(SimpleMemobj, \"Sending retry req for %d\\n\", id);        sendRetryReq();    }}You can download the implementation for the SimpleMemobjhere.The following figure, memobj-api-figure, shows the relationships betweenthe CPUSidePort, MemSidePort, and SimpleMemobj. This figure showshow the peer ports interact with the implementation of theSimpleMemobj. Each bold function is one that we had to implement, andthe non-bold functions are the port interfaces to the peer ports. Thecolors highlight one API path through the object (e.g., receiving arequest or updating the memory ranges).For this simple memory object, packets are just forwarded from theCPU-side to the memory side. However, by modifying handleRequest andhandleResponse, we can create rich featureful objects, like a cache inthe next chapter.Create a config fileThis is all of the code needed to implement a simple memory object! Inthe next chapter, we will take this frameworkand add some caching logic to make this memory object into a simplecache. 
However, before that, let’s look at the config file to add the SimpleMemobj to your system. This config file builds off of the simple config file in simple-config-chapter. However, instead of connecting the CPU directly to the memory bus, we are going to instantiate a SimpleMemobj and place it between the CPU and the memory bus. import m5 from m5.objects import * system = System() system.clk_domain = SrcClockDomain() system.clk_domain.clock = '1GHz' system.clk_domain.voltage_domain = VoltageDomain() system.mem_mode = 'timing' system.mem_ranges = [AddrRange('512MB')] system.cpu = TimingSimpleCPU() system.memobj = SimpleMemobj() system.cpu.icache_port = system.memobj.inst_port system.cpu.dcache_port = system.memobj.data_port system.membus = SystemXBar() system.memobj.mem_side = system.membus.slave system.cpu.createInterruptController() system.cpu.interrupts[0].pio = system.membus.master system.cpu.interrupts[0].int_master = system.membus.slave system.cpu.interrupts[0].int_slave = system.membus.master system.mem_ctrl = DDR3_1600_8x8() system.mem_ctrl.range = system.mem_ranges[0] system.mem_ctrl.port = system.membus.master system.system_port = system.membus.slave process = Process() process.cmd = ['tests/test-progs/hello/bin/x86/linux/hello'] system.cpu.workload = process system.cpu.createThreads() root = Root(full_system = False, system = system) m5.instantiate() print(\"Beginning simulation!\") exit_event = m5.simulate() print('Exiting @ tick %i because %s' % (m5.curTick(), exit_event.getCause())) You can download this config script here. Now, when you run this config file you get the following output. gem5 Simulator System.  
http://gem5.orggem5 is copyrighted software; use the --copyright option for details.gem5 compiled Jan  5 2017 13:40:18gem5 started Jan  9 2017 10:17:17gem5 executing on chinook, pid 5138command line: build/X86/gem5.opt configs/learning_gem5/part2/simple_memobj.pyGlobal frequency set at 1000000000000 ticks per secondwarn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000warn: CoherentXBar system.membus has no snooping ports attached!warn: ClockedObject: More than one power state change request encountered within the same simulation tickBeginning simulation!info: Entering event queue @ 0.  Starting simulation...Hello world!Exiting @ tick 507841000 because target called exit()If you run with the SimpleMemobj debug flag, you can see all of thememory requests and responses from and to the CPU.gem5 Simulator System.  http://gem5.orggem5 is copyrighted software; use the --copyright option for details.gem5 compiled Jan  5 2017 13:40:18gem5 started Jan  9 2017 10:18:51gem5 executing on chinook, pid 5157command line: build/X86/gem5.opt --debug-flags=SimpleMemobj configs/learning_gem5/part2/simple_memobj.pyGlobal frequency set at 1000000000000 ticks per secondBeginning simulation!info: Entering event queue @ 0.  Starting simulation...      0: system.memobj: Got request for addr 0x190  77000: system.memobj: Got response for addr 0x190  77000: system.memobj: Got request for addr 0x190 132000: system.memobj: Got response for addr 0x190 132000: system.memobj: Got request for addr 0x190 187000: system.memobj: Got response for addr 0x190 187000: system.memobj: Got request for addr 0x94e30 250000: system.memobj: Got response for addr 0x94e30 250000: system.memobj: Got request for addr 0x190 ...You may also want to change the CPU model to the out-of-order model(DerivO3CPU). 
When using the out-of-order CPU, you will potentially see a different address stream since it allows multiple memory requests outstanding at once. There will also be many stalls because the SimpleMemobj is blocking.",
        "url": "/documentation/learning_gem5/part2/memoryobject/"
      }
      ,
    
      "documentation-learning-gem5-part2-parameters": {
        "title": "Adding parameters to SimObjects and more events",
        "content": "Adding parameters to SimObjects and more eventsOne of the most powerful parts of gem5’s Python interface is the abilityto pass parameters from Python to the C++ objects in gem5. In thischapter, we will explore some of the kinds of parameters for SimObjectsand how to use them building off of the simple HelloObject from theprevious chapters &lt;events-chapter&gt;.Simple parametersFirst, we will add parameters for the latency and number of times tofire the event in the HelloObject. To add a parameter, modify theHelloObject class in the SimObject Python file(src/learning_gem5/HelloObject.py). Parameters are set by adding newstatements to the Python class that include a Param type.For instance, the following code has a parameter time_to_wait which isa “Latency” parameter and number_of_fires which is an integerparameter.class HelloObject(SimObject):    type = 'HelloObject'    cxx_header = \"learning_gem5/hello_object.hh\"    time_to_wait = Param.Latency(\"Time before firing the event\")    number_of_fires = Param.Int(1, \"Number of times to fire the event before \"                                   \"goodbye\")Param.&lt;TypeName&gt; declares a parameter of type TypeName. Common typesare Int for integers, Float for floats, etc. These types act likeregular Python classes.Each parameter declaration takes one or two parameters. When given twoparameters (like number_of_fires above), the first parameter is thedefault value for the parameter. In this case, if you instantiate aHelloObject in your Python config file without specifying any valuefor number_of_fires, it will take the default value of 1.The second parameter to the parameter declaration is a short descriptionof the parameter. This must be a Python string. If you only specify asingle parameter to the parameter declaration, it is the description (asfor time_to_wait).gem5 also supports many complex parameter types that are not justbuiltin types. For instance, time_to_wait is a Latency. 
Latency takes a time value as a string and converts it into simulator ticks. For instance, with the default tick period of 1 picosecond (10^12 ticks per second, or 1 THz), \"1ns\" is automatically converted to 1000 ticks. There are other convenience parameters like Percent, Cycles, MemorySize, and many more. Once you have declared these parameters in the SimObject file, you need to copy their values to your C++ class in its constructor. The following code shows the changes to the HelloObject constructor. HelloObject::HelloObject(HelloObjectParams *params) :    SimObject(params),    event(*this),    myName(params-&gt;name),    latency(params-&gt;time_to_wait),    timesLeft(params-&gt;number_of_fires){    DPRINTF(Hello, \"Created the hello object with the name %s\\n\", myName);} Here, we use the parameters’ values as the initial values of latency and timesLeft. Additionally, we store the name from the parameter object in the member variable myName to use later. Each params instantiation has a name, which comes from the Python config file when it is instantiated. However, assigning the name here is just an example of using the params object. For all SimObjects, there is a name() function that always returns the name, so there is never a need to store the name as above. To the HelloObject class declaration, add a member variable for the name. class HelloObject : public SimObject{  private:    void processEvent();    EventWrapper&lt;HelloObject, &amp;HelloObject::processEvent&gt; event;    std::string myName;    Tick latency;    int timesLeft;  public:    HelloObject(HelloObjectParams *p);    void startup();}; When we run gem5 with the above, we get the following error: gem5 Simulator System.  
http://gem5.org gem5 is copyrighted software; use the --copyright option for details. gem5 compiled Jan  4 2017 14:46:36 gem5 started Jan  4 2017 14:46:52 gem5 executing on chinook, pid 3422 command line: build/X86/gem5.opt --debug-flags=Hello configs/learning_gem5/part2/run_hello.py Global frequency set at 1000000000000 ticks per second fatal: hello.time_to_wait without default or user set value This is because the time_to_wait parameter does not have a default value. Therefore, we need to update the Python config file (run_hello.py) to specify this value. root.hello = HelloObject(time_to_wait = '2us') Or, we can specify time_to_wait as a member variable. Either option is exactly the same because the C++ objects are not created until m5.instantiate() is called. root.hello = HelloObject() root.hello.time_to_wait = '2us' The output of this simple script is the following when running with the Hello debug flag. gem5 Simulator System.  http://gem5.org gem5 is copyrighted software; use the --copyright option for details. gem5 compiled Jan  4 2017 14:46:36 gem5 started Jan  4 2017 14:50:08 gem5 executing on chinook, pid 3455 command line: build/X86/gem5.opt --debug-flags=Hello configs/learning_gem5/part2/run_hello.py Global frequency set at 1000000000000 ticks per second      0: hello: Created the hello object with the name hello Beginning simulation! info: Entering event queue @ 0.  Starting simulation... 2000000: hello: Hello world! Processing the event! 0 left 2000000: hello: Done firing! Exiting @ tick 18446744073709551615 because simulate() limit reached You can also modify the config script to fire the event multiple times. Other SimObjects as parameters: You can also specify other SimObjects as parameters. To demonstrate this, we are going to create a new SimObject, GoodbyeObject. This object is going to have a simple function that says “Goodbye” to another SimObject. 
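As an aside, the time-string-to-ticks conversion used for time_to_wait can be illustrated outside gem5. The following is a rough Python sketch, not gem5's actual parser; the unit table and the function name latency_to_ticks are invented for illustration, assuming the default 10^12 ticks per second described earlier:

```python
# Hypothetical sketch (not gem5's actual parser) of how a Latency
# string such as '1ns' becomes ticks, assuming the default tick rate
# of 10^12 ticks per second (1 THz).
TICKS_PER_SECOND = 10**12

UNITS = {'s': 1.0, 'ms': 1e-3, 'us': 1e-6, 'ns': 1e-9, 'ps': 1e-12}

def latency_to_ticks(value):
    # Try the longest unit suffixes first so 'ns' wins over plain 's'.
    for suffix in sorted(UNITS, key=len, reverse=True):
        if value.endswith(suffix):
            seconds = float(value[:-len(suffix)]) * UNITS[suffix]
            return int(round(seconds * TICKS_PER_SECOND))
    raise ValueError('unrecognized time value: ' + repr(value))

print(latency_to_ticks('1ns'))  # 1000, matching the example above
print(latency_to_ticks('2us'))  # 2000000, the '2us' used in run_hello.py
```

This matches the output above: the '2us' latency resolves to 2000000 ticks, which is exactly when the event fires in the log.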
To make it a little more interesting, the GoodbyeObject is going to have a buffer to write the message into, and a limited bandwidth to write the message. First, declare the SimObject in the SConscript file: Import('*') SimObject('HelloObject.py') Source('hello_object.cc') Source('goodbye_object.cc') DebugFlag('Hello') The new SConscript file can be downloaded here. Next, you need to declare the new SimObject in a SimObject Python file. Since the GoodbyeObject is highly related to the HelloObject, we will use the same file. You can add the following code to HelloObject.py. This object has two parameters, both with default values. The first parameter is the size of a buffer and is a MemorySize parameter. The second is write_bandwidth, which specifies the speed at which to fill the buffer. Once the buffer is full, the simulation will exit. class GoodbyeObject(SimObject):    type = 'GoodbyeObject'    cxx_header = \"learning_gem5/goodbye_object.hh\"    buffer_size = Param.MemorySize('1kB',                                   \"Size of buffer to fill with goodbye\")    write_bandwidth = Param.MemoryBandwidth('100MB/s', \"Bandwidth to fill \"                                            \"the buffer\") The updated HelloObject.py file can be downloaded here. Now, we need to implement the GoodbyeObject. #ifndef __LEARNING_GEM5_GOODBYE_OBJECT_HH__ #define __LEARNING_GEM5_GOODBYE_OBJECT_HH__ #include &lt;string&gt; #include \"params/GoodbyeObject.hh\" #include \"sim/sim_object.hh\" class GoodbyeObject : public SimObject{  private:    void processEvent();    /**     * Fills the buffer for one iteration. If the buffer isn't full, this     * function will enqueue another event to continue filling.     
*/    void fillBuffer();    EventWrapper&lt;GoodbyeObject, &amp;GoodbyeObject::processEvent&gt; event;    /// The bytes processed per tick    float bandwidth;    /// The size of the buffer we are going to fill    int bufferSize;    /// The buffer we are putting our message in    char *buffer;    /// The message to put into the buffer.    std::string message;    /// The amount of the buffer we've used so far.    int bufferUsed;  public:    GoodbyeObject(GoodbyeObjectParams *p);    ~GoodbyeObject();    /**     * Called by an outside object. Starts off the events to fill the buffer     * with a goodbye message.     *     * @param name the name of the object we are saying goodbye to.     */    void sayGoodbye(std::string name);};#endif // __LEARNING_GEM5_GOODBYE_OBJECT_HH__#include \"learning_gem5/goodbye_object.hh\"#include \"debug/Hello.hh\"#include \"sim/sim_exit.hh\"GoodbyeObject::GoodbyeObject(GoodbyeObjectParams *params) :    SimObject(params), event(*this), bandwidth(params-&gt;write_bandwidth),    bufferSize(params-&gt;buffer_size), buffer(nullptr), bufferUsed(0){    buffer = new char[bufferSize];    DPRINTF(Hello, \"Created the goodbye object\\n\");}GoodbyeObject::~GoodbyeObject(){    delete[] buffer;}voidGoodbyeObject::processEvent(){    DPRINTF(Hello, \"Processing the event!\\n\");    fillBuffer();}voidGoodbyeObject::sayGoodbye(std::string other_name){    DPRINTF(Hello, \"Saying goodbye to %s\\n\", other_name);    message = \"Goodbye \" + other_name + \"!! \";    fillBuffer();}voidGoodbyeObject::fillBuffer(){    // There better be a message    assert(message.length() &gt; 0);    // Copy from the message to the buffer per byte.    
int bytes_copied = 0;    for (auto it = message.begin();         it &lt; message.end() &amp;&amp; bufferUsed &lt; bufferSize - 1;         it++, bufferUsed++, bytes_copied++) {        // Copy the character into the buffer        buffer[bufferUsed] = *it;    }    if (bufferUsed &lt; bufferSize - 1) {        // Wait for the next copy for as long as it would have taken        DPRINTF(Hello, \"Scheduling another fillBuffer in %d ticks\\n\",                bandwidth * bytes_copied);        schedule(event, curTick() + bandwidth * bytes_copied);    } else {        DPRINTF(Hello, \"Goodbye done copying!\\n\");        // Be sure to take into account the time for the last bytes        exitSimLoop(buffer, 0, curTick() + bandwidth * bytes_copied);    }} GoodbyeObject* GoodbyeObjectParams::create(){    return new GoodbyeObject(this);} The header file can be downloaded here and the implementation can be downloaded here. The interface to this GoodbyeObject is simple: a function sayGoodbye that takes a string as a parameter. When this function is called, the simulator builds the message and saves it in a member variable. Then, we begin filling the buffer. To model the limited bandwidth, each time we write the message to the buffer, we pause for the latency it takes to write the message. We use a simple event to model this pause. Since we used a MemoryBandwidth parameter in the SimObject declaration, the bandwidth variable is automatically converted into ticks per byte, so calculating the latency is simply the bandwidth times the number of bytes we want to write to the buffer. Finally, when the buffer is full, we call the function exitSimLoop, which will exit the simulation. This function takes three parameters: the first is the message to return to the Python config script (exit_event.getCause()), the second is the exit code, and the third is when to exit. Adding the GoodbyeObject as a parameter to the HelloObject: First, we will add a GoodbyeObject as a parameter to the HelloObject. 
To do this, you simply specify the SimObject class name as the TypeName of the Param. You can have a default, or not, just like a normal parameter. class HelloObject(SimObject):    type = 'HelloObject'    cxx_header = \"learning_gem5/hello_object.hh\"    time_to_wait = Param.Latency(\"Time before firing the event\")    number_of_fires = Param.Int(1, \"Number of times to fire the event before \"                                   \"goodbye\")    goodbye_object = Param.GoodbyeObject(\"A goodbye object\") The updated HelloObject.py file can be downloaded here. Second, we will add a pointer to a GoodbyeObject to the HelloObject class. class HelloObject : public SimObject{  private:    void processEvent();    EventWrapper&lt;HelloObject, &amp;HelloObject::processEvent&gt; event;    /// Pointer to the corresponding GoodbyeObject. Set via Python    GoodbyeObject* goodbye;    /// The name of this object in the Python config file    const std::string myName;    /// Latency between calling the event (in ticks)    const Tick latency;    /// Number of times left to fire the event before goodbye    int timesLeft;  public:    HelloObject(HelloObjectParams *p);    void startup();}; Then, we need to update the constructor and the process event function of the HelloObject. We also add a check in the constructor to make sure the goodbye pointer is valid. It is possible to pass a null pointer as a SimObject via the parameters by using the NULL special Python SimObject. 
We should panic when this happens since it is not a case this object has been coded to accept. #include \"learning_gem5/part2/hello_object.hh\" #include \"base/misc.hh\" #include \"debug/Hello.hh\" HelloObject::HelloObject(HelloObjectParams *params) :    SimObject(params),    event(*this),    goodbye(params-&gt;goodbye_object),    myName(params-&gt;name),    latency(params-&gt;time_to_wait),    timesLeft(params-&gt;number_of_fires){    DPRINTF(Hello, \"Created the hello object with the name %s\\n\", myName);    panic_if(!goodbye, \"HelloObject must have a non-null GoodbyeObject\");} Once we have processed the number of events specified by the parameter, we should call the sayGoodbye function in the GoodbyeObject. void HelloObject::processEvent(){    timesLeft--;    DPRINTF(Hello, \"Hello world! Processing the event! %d left\\n\", timesLeft);    if (timesLeft &lt;= 0) {        DPRINTF(Hello, \"Done firing!\\n\");        goodbye-&gt;sayGoodbye(myName);    } else {        schedule(event, curTick() + latency);    }} You can find the updated header file here and the implementation file here. Updating the config script: Lastly, we need to add the GoodbyeObject to the config script. Create a new config script, hello_goodbye.py, and instantiate both the hello and the goodbye objects. For instance, one possible script is the following. import m5 from m5.objects import * root = Root(full_system = False) root.hello = HelloObject(time_to_wait = '2us', number_of_fires = 5) root.hello.goodbye_object = GoodbyeObject(buffer_size='100B') m5.instantiate() print(\"Beginning simulation!\") exit_event = m5.simulate() print('Exiting @ tick %i because %s' % (m5.curTick(), exit_event.getCause())) You can download this script here. Running this script generates the following output. gem5 Simulator System.  
http://gem5.org gem5 is copyrighted software; use the --copyright option for details. gem5 compiled Jan  4 2017 15:17:14 gem5 started Jan  4 2017 15:18:41 gem5 executing on chinook, pid 3838 command line: build/X86/gem5.opt --debug-flags=Hello configs/learning_gem5/part2/hello_goodbye.py Global frequency set at 1000000000000 ticks per second      0: hello.goodbye_object: Created the goodbye object      0: hello: Created the hello object Beginning simulation! info: Entering event queue @ 0.  Starting simulation... 2000000: hello: Hello world! Processing the event! 4 left 4000000: hello: Hello world! Processing the event! 3 left 6000000: hello: Hello world! Processing the event! 2 left 8000000: hello: Hello world! Processing the event! 1 left 10000000: hello: Hello world! Processing the event! 0 left 10000000: hello: Done firing! 10000000: hello.goodbye_object: Saying goodbye to hello 10000000: hello.goodbye_object: Scheduling another fillBuffer in 152592 ticks 10152592: hello.goodbye_object: Processing the event! 10152592: hello.goodbye_object: Scheduling another fillBuffer in 152592 ticks 10305184: hello.goodbye_object: Processing the event! 10305184: hello.goodbye_object: Scheduling another fillBuffer in 152592 ticks 10457776: hello.goodbye_object: Processing the event! 10457776: hello.goodbye_object: Scheduling another fillBuffer in 152592 ticks 10610368: hello.goodbye_object: Processing the event! 10610368: hello.goodbye_object: Scheduling another fillBuffer in 152592 ticks 10762960: hello.goodbye_object: Processing the event! 10762960: hello.goodbye_object: Scheduling another fillBuffer in 152592 ticks 10915552: hello.goodbye_object: Processing the event! 10915552: hello.goodbye_object: Goodbye done copying! Exiting @ tick 10944163 because Goodbye hello!! Goodbye hello!! Goodbye hello!! Goodbye hello!! Goodbye hello!! Goodbye hello!! Goo You can modify the parameters to these two SimObjects and see how the overall execution time (Exiting @ tick 10944163) changes. 
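The regular gaps between fillBuffer events in the log come from the ticks-per-byte arithmetic described earlier. A back-of-the-envelope Python sketch follows; it assumes decimal megabytes (10^6 bytes), which is an assumption on our part, and gem5's own unit handling evidently differs slightly, which is why the log shows 152592 ticks rather than this round figure:

```python
# Illustrative arithmetic only: how fillBuffer's delay scales with the
# message size. Assumes 10^12 ticks per second and decimal megabytes;
# gem5's own unit handling differs slightly (see the log above).
TICKS_PER_SECOND = 10**12
bandwidth_bytes_per_sec = 100 * 10**6            # the '100MB/s' parameter
ticks_per_byte = TICKS_PER_SECOND / bandwidth_bytes_per_sec

message = 'Goodbye hello!! '                     # 16 bytes per fill
delay = int(ticks_per_byte * len(message))       # wait before the next fill
print(ticks_per_byte, delay)                     # 10000.0 160000
```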
To run these tests, you may want to remove the debug flag so there is less output to the terminal. In the next chapters, we will create a more complex and more useful SimObject, culminating with a simple blocking uniprocessor cache implementation.",
        "url": "/documentation/learning_gem5/part2/parameters/"
      }
      ,
    
      "documentation-learning-gem5-part2-simplecache": {
        "title": "Creating a simple cache object",
        "content": "Creating a simple cache objectIn this chapter, we will take the framework for a memory object wecreated in the last chapter and add cachinglogic to it.SimpleCache SimObjectAfter creating the SConscript file, that you can downloadhere, we can createthe SimObject Python file. We will call this simple memory objectSimpleCache and create the SimObject Python file insrc/learning_gem5/simple_cache.from m5.params import *from m5.proxy import *from MemObject import MemObjectclass SimpleCache(MemObject):    type = 'SimpleCache'    cxx_header = \"learning_gem5/simple_cache/simple_cache.hh\"    cpu_side = VectorSlavePort(\"CPU side port, receives requests\")    mem_side = MasterPort(\"Memory side port, sends requests\")    latency = Param.Cycles(1, \"Cycles taken on a hit or to resolve a miss\")    size = Param.MemorySize('16kB', \"The size of the cache\")    system = Param.System(Parent.any, \"The system this cache is part of\")There are a couple of differences between this SimObject file and theone from the previous chapter. First, we have acouple of extra parameters. Namely, a latency for cache accesses and thesize of the cache. parameters-chapter goes into more detail about thesekinds of SimObject parameters.Next, we include a System parameter, which is a pointer to the mainsystem this cache is connected to. This is needed so we can get thecache block size from the system object when we are initializing thecache. To reference the system object this cache is connected to, we usea special proxy parameter. In this case, we use Parent.any.In the Python config file, when a SimpleCache is instantiated, thisproxy parameter searches through all of the parents of the SimpleCacheinstance to find a SimObject that matches the System type. 
Since we often use a System as the root SimObject, you will often see a system parameter resolved with this proxy parameter. The third and final difference between the SimpleCache and the SimpleMemobj is that instead of having two named CPU ports (inst_port and data_port), the SimpleCache uses another special parameter: the VectorPort. VectorPorts behave similarly to regular ports (e.g., they are resolved via getMasterPort and getSlavePort), but they allow this object to connect to multiple peers. Then, in the resolution functions, the parameter we ignored before (PortID idx) is used to differentiate between the different ports. By using a vector port, this cache can be connected into the system more flexibly than the SimpleMemobj. Implementing the SimpleCache: Most of the code for the SimpleCache is the same as the SimpleMemobj. There are a couple of changes in the constructor and the key memory object functions. First, we need to create the CPU side ports dynamically in the constructor and initialize the extra member variables based on the SimObject parameters. SimpleCache::SimpleCache(SimpleCacheParams *params) :    MemObject(params),    latency(params-&gt;latency),    blockSize(params-&gt;system-&gt;cacheLineSize()),    capacity(params-&gt;size / blockSize),    memPort(params-&gt;name + \".mem_side\", this),    blocked(false), outstandingPacket(nullptr), waitingPortId(-1){    for (int i = 0; i &lt; params-&gt;port_cpu_side_connection_count; ++i) {        cpuPorts.emplace_back(name() + csprintf(\".cpu_side[%d]\", i), i, this);    }} In this function, we use the cacheLineSize from the system parameter to set the blockSize for this cache. We also initialize the capacity based on the block size and the size parameter, and initialize the other member variables we will need below. Finally, we must create a number of CPUSidePorts based on the number of connections to this object. 
Since the cpu_side port was declared as a VectorSlavePort in the SimObject Python file, the parameter automatically has a variable port_cpu_side_connection_count. This is based on the Python name of the parameter. For each of these connections we add a new CPUSidePort to a cpuPorts vector declared in the SimpleCache class. We also add one extra member variable to the CPUSidePort to save its id, and we add this as a parameter to its constructor. Next, we need to implement getMasterPort and getSlavePort. The getMasterPort is exactly the same as in the SimpleMemobj. For getSlavePort, we now need to return the port based on the id requested. BaseSlavePort&amp; SimpleCache::getSlavePort(const std::string&amp; if_name, PortID idx){    if (if_name == \"cpu_side\" &amp;&amp; idx &lt; cpuPorts.size()) {        return cpuPorts[idx];    } else {        return MemObject::getSlavePort(if_name, idx);    }} The implementation of the CPUSidePort and the MemSidePort is almost the same as in the SimpleMemobj. The only difference is that we need to add an extra parameter to handleRequest: the id of the port from which the request originated. Without this id, we would not be able to forward the response to the correct port. The SimpleMemobj knew which port to send replies to based on whether the original request was an instruction or data access. However, this information is not useful to the SimpleCache since it uses a vector of ports rather than named ports. The new handleRequest function does two things differently from the handleRequest function in the SimpleMemobj. First, it stores the port id of the request as discussed above. Since the SimpleCache is blocking and only allows a single request outstanding at a time, we only need to save a single port id. Second, it takes time to access a cache. Therefore, we need to take into account the latency to access the cache tags and the cache data for a request. 
We added an extra parameter to the cache object for this, and in handleRequest we now use an event to stall the request for the needed amount of time. We schedule a new event for latency cycles in the future. The clockEdge function returns the tick on which the nth cycle in the future occurs. bool SimpleCache::handleRequest(PacketPtr pkt, int port_id){    if (blocked) {        return false;    }    DPRINTF(SimpleCache, \"Got request for addr %#x\\n\", pkt-&gt;getAddr());    blocked = true;    waitingPortId = port_id;    schedule(new AccessEvent(this, pkt), clockEdge(latency));    return true;} The AccessEvent is a little more complicated than the EventWrapper we used in the events chapter. Instead of using an EventWrapper, in the SimpleCache we will use a new class. The reason we cannot use an EventWrapper is that we need to pass the packet (pkt) from handleRequest to the event handler function. The following code is the AccessEvent class. We only need to implement the process function, which calls the function we want to use as our event handler, in this case accessTiming. We also pass the flag AutoDelete to the event constructor so we do not need to worry about freeing the memory for the dynamically created object. The event code will automatically delete the object after the process function has executed. class AccessEvent : public Event{  private:    SimpleCache *cache;    PacketPtr pkt;  public:    AccessEvent(SimpleCache *cache, PacketPtr pkt) :        Event(Default_Pri, AutoDelete), cache(cache), pkt(pkt)    { }    void process() override {        cache-&gt;accessTiming(pkt);    }}; Now, we need to implement the event handler, accessTiming. void SimpleCache::accessTiming(PacketPtr pkt){    bool hit = accessFunctional(pkt);    if (hit) {        pkt-&gt;makeResponse();        sendResponse(pkt);    } else {        &lt;miss handling&gt;    }} This function first functionally accesses the cache. 
The function accessFunctional (described below) performs the functional access of the cache: it either reads or writes the cache on a hit, or returns that the access was a miss. If the access is a hit, we simply need to respond to the packet. To respond, you first must call the function makeResponse on the packet. This converts the packet from a request packet to a response packet. For instance, if the memory command in the packet was a ReadReq, this gets converted into a ReadResp. Writes behave similarly. Then, we can send the response back to the CPU. The sendResponse function does the same things as the handleResponse function in the SimpleMemobj, except that it uses the waitingPortId to send the packet to the right port. In this function, we need to mark the SimpleCache unblocked before calling sendPacket in case the peer on the CPU side immediately calls sendTimingReq. Then, we try to send retries to the CPU side ports if the SimpleCache can now receive requests and the ports need to be sent retries. void SimpleCache::sendResponse(PacketPtr pkt){    int port = waitingPortId;    blocked = false;    waitingPortId = -1;    cpuPorts[port].sendPacket(pkt);    for (auto&amp; port : cpuPorts) {        port.trySendRetry();    }} Back in the accessTiming function, we now need to handle the cache miss case. On a miss, we first have to check whether the missing packet targets an entire cache block. If the packet is aligned and the size of the request is the size of a cache block, then we can simply forward the request to memory, just like in the SimpleMemobj. However, if the packet is smaller than a cache block, then we need to create a new packet to read the entire cache block from memory. Here, whether the packet is a read or a write request, we send a read request to memory to load the data for the cache block into the cache. 
In the case of a write, the write will occur in the cache after we have loaded the data from memory. Then, we create a new packet that is blockSize in size, and we call the allocate function to allocate memory in the Packet object for the data that we will read from memory. Note: this memory is freed when we free the packet. We use the original request object in the packet so the memory-side objects know the original requestor and the original request type for statistics. Finally, we save the original packet pointer (pkt) in a member variable outstandingPacket so we can recover it when the SimpleCache receives a response. Then, we send the new packet across the memory side port. void SimpleCache::accessTiming(PacketPtr pkt){    bool hit = accessFunctional(pkt);    if (hit) {        pkt-&gt;makeResponse();        sendResponse(pkt);    } else {        Addr addr = pkt-&gt;getAddr();        Addr block_addr = pkt-&gt;getBlockAddr(blockSize);        unsigned size = pkt-&gt;getSize();        if (addr == block_addr &amp;&amp; size == blockSize) {            DPRINTF(SimpleCache, \"forwarding packet\\n\");            memPort.sendPacket(pkt);        } else {            DPRINTF(SimpleCache, \"Upgrading packet to block size\\n\");            panic_if(addr - block_addr + size &gt; blockSize,                     \"Cannot handle accesses that span multiple cache lines\");            assert(pkt-&gt;needsResponse());            MemCmd cmd;            if (pkt-&gt;isWrite() || pkt-&gt;isRead()) {                cmd = MemCmd::ReadReq;            } else {                panic(\"Unknown packet type in upgrade size\");            }            PacketPtr new_pkt = new Packet(pkt-&gt;req, cmd, blockSize);            new_pkt-&gt;allocate();            outstandingPacket = pkt;            memPort.sendPacket(new_pkt);        }    }} On a response from memory, we know that this was caused by a cache miss. The first step is to insert the responding packet into the cache. Then, either there is an outstandingPacket, in 
which case we need to forward that packet to the original requestor, or there is no outstandingPacket, which means we should forward the pkt in the response to the original requestor. If the packet we are receiving as a response was an upgrade packet because the original request was smaller than a cache line, then we need to copy the new data to the outstandingPacket packet or write to the cache on a write. Then, we need to delete the new packet that we made in the miss handling logic. bool SimpleCache::handleResponse(PacketPtr pkt){    assert(blocked);    DPRINTF(SimpleCache, \"Got response for addr %#x\\n\", pkt-&gt;getAddr());    insert(pkt);    if (outstandingPacket != nullptr) {        accessFunctional(outstandingPacket);        outstandingPacket-&gt;makeResponse();        delete pkt;        pkt = outstandingPacket;        outstandingPacket = nullptr;    } // else, pkt contains the data it needs    sendResponse(pkt);    return true;} Functional cache logic: Now, we need to implement two more functions: accessFunctional and insert. These two functions make up the key components of the cache logic. First, to functionally update the cache, we need storage for the cache contents. The simplest possible cache storage is a map (hashtable) that maps from addresses to data. Thus, we will add the following member to the SimpleCache. std::unordered_map&lt;Addr, uint8_t*&gt; cacheStore; To access the cache, we first check to see if there is an entry in the map which matches the address in the packet. We use the getBlockAddr function of the Packet type to get the block-aligned address. Then, we simply search for that address in the map. If we do not find the address, then this function returns false, the data is not in the cache, and it is a miss. Otherwise, if the packet is a write request, we need to update the data in the cache. To do this, we write the data from the packet to the cache. 
We use the writeDataToBlock function, which writes the data in the packet at the correct offset within a (potentially larger) block of data. This function takes a pointer to the block and the block size (as a parameter), computes the offset within the block, and writes the packet’s data at that offset into the pointer passed as the first parameter. If the packet is a read request, we need to update the packet’s data with the data from the cache. The setDataFromBlock function performs the same offset calculation as the writeDataToBlock function, but fills the packet with the data from the pointer in the first parameter. bool SimpleCache::accessFunctional(PacketPtr pkt){    Addr block_addr = pkt-&gt;getBlockAddr(blockSize);    auto it = cacheStore.find(block_addr);    if (it != cacheStore.end()) {        if (pkt-&gt;isWrite()) {            pkt-&gt;writeDataToBlock(it-&gt;second, blockSize);        } else if (pkt-&gt;isRead()) {            pkt-&gt;setDataFromBlock(it-&gt;second, blockSize);        } else {            panic(\"Unknown packet type!\");        }        return true;    }    return false;} Finally, we also need to implement the insert function. This function is called every time the memory side port responds to a request. The first step is to check if the cache is currently full. If the cache has more entries (blocks) than the capacity of the cache as set by the SimObject parameter, then we need to evict something. The following code evicts a random entry by leveraging the hashtable implementation of the C++ unordered_map. On an eviction, we need to write the data back to the backing memory in case it has been updated. For this, we create a new Request-Packet pair. The packet uses a new memory command: MemCmd::WritebackDirty. Then, we send the packet across the memory side port (memPort) and erase the entry in the cache storage map. Then, after a block has potentially been evicted, we add the new address to the cache. For this we simply allocate space for the block and add an entry to the map. 
Finally, we write the data from the response packet into the newly allocated block. This data is guaranteed to be the size of the cache block, since we made sure to make a new packet in the cache miss logic if the packet was smaller than a cache block. void SimpleCache::insert(PacketPtr pkt){    if (cacheStore.size() &gt;= capacity) {        // Select random thing to evict. This is a little convoluted since we        // are using a std::unordered_map. See http://bit.ly/2hrnLP2        int bucket, bucket_size;        do {            bucket = random_mt.random(0, (int)cacheStore.bucket_count() - 1);        } while ( (bucket_size = cacheStore.bucket_size(bucket)) == 0 );        auto block = std::next(cacheStore.begin(bucket),                               random_mt.random(0, bucket_size - 1));        RequestPtr req = new Request(block-&gt;first, blockSize, 0, 0);        PacketPtr new_pkt = new Packet(req, MemCmd::WritebackDirty, blockSize);        new_pkt-&gt;dataDynamic(block-&gt;second); // This will be deleted later        DPRINTF(SimpleCache, \"Writing packet back %s\\n\", pkt-&gt;print());        memPort.sendTimingReq(new_pkt);        cacheStore.erase(block-&gt;first);    }    uint8_t *data = new uint8_t[blockSize];    cacheStore[pkt-&gt;getAddr()] = data;    pkt-&gt;writeDataToBlock(data, blockSize);} Creating a config file for the cache: The last step in our implementation is to create a new Python config script that uses our cache. We can use the outline from the last chapter as a starting point. 
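The storage plus random-eviction policy of insert above can be sketched with a plain Python dict standing in for std::unordered_map. This is a toy model, not gem5 code; gem5 picks a random element of the hashtable, and random.choice over the dict's keys plays the same role here:

```python
# Toy sketch of the storage and random-eviction policy described above:
# a map from block address to data, evicting a random block when full.
import random

class SimpleCacheStore:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}    # block_addr -> data bytes
        self.evicted = []  # (addr, data) pairs 'written back' to memory

    def insert(self, block_addr, data):
        if len(self.store) >= self.capacity:
            victim = random.choice(list(self.store))  # random eviction
            self.evicted.append((victim, self.store.pop(victim)))
        self.store[block_addr] = data

cache = SimpleCacheStore(capacity=2)
for addr in (0x000, 0x040, 0x080):       # three 64-byte-aligned blocks
    cache.insert(addr, bytes(64))
print(len(cache.store), len(cache.evicted))  # 2 1
```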
The only difference is that we may want to set the parameters of this cache (e.g., set the size of the cache to 1kB), and instead of using the named ports (data_port and inst_port), we just use the cpu_side port twice. Since cpu_side is a VectorPort, it will automatically create multiple port connections. import m5 from m5.objects import * ... system.cache = SimpleCache(size='1kB') system.cpu.icache_port = system.cache.cpu_side system.cpu.dcache_port = system.cache.cpu_side system.membus = SystemXBar() system.cache.mem_side = system.membus.slave ... The Python config file can be downloaded here. Running this script should produce the expected output from the hello binary. gem5 Simulator System.  http://gem5.org gem5 is copyrighted software; use the --copyright option for details. gem5 compiled Jan 10 2017 17:38:15 gem5 started Jan 10 2017 17:40:03 gem5 executing on chinook, pid 29031 command line: build/X86/gem5.opt configs/learning_gem5/part2/simple_cache.py Global frequency set at 1000000000000 ticks per second warn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes) 0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000 warn: CoherentXBar system.membus has no snooping ports attached! warn: ClockedObject: More than one power state change request encountered within the same simulation tick Beginning simulation! info: Entering event queue @ 0.  Starting simulation... Hello world! Exiting @ tick 56082000 because target called exit() Modifying the size of the cache, for instance to 128 KB, should improve the performance of the system. gem5 Simulator System.  
http://gem5.orggem5 is copyrighted software; use the --copyright option for details.gem5 compiled Jan 10 2017 17:38:15gem5 started Jan 10 2017 17:41:10gem5 executing on chinook, pid 29037command line: build/X86/gem5.opt configs/learning_gem5/part2/simple_cache.pyGlobal frequency set at 1000000000000 ticks per secondwarn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000warn: CoherentXBar system.membus has no snooping ports attached!warn: ClockedObject: More than one power state change request encountered within the same simulation tickBeginning simulation!info: Entering event queue @ 0.  Starting simulation...Hello world!Exiting @ tick 32685000 because target called exit()Adding statistics to the cacheKnowing the overall execution time of the system is one importantmetric. However, you may want to include other statistics as well, suchas the hit and miss rates of the cache. To do this, we need to add somestatistics to the SimpleCache object.First, we need to declare the statistics in the SimpleCache object.They are part of the Stats namespace. In this case, we’ll make fourstatistics. The number of hits and the number of misses are justsimple Scalar counts. We will also add a missLatency which is ahistogram of the time it takes to satisfy a miss. Finally, we’ll add aspecial statistic called a Formula for the hitRatio that is acombination of other statistics (the number of hits and misses).class SimpleCache : public MemObject{  private:    ...    Tick missTime; // To track the miss latency    Stats::Scalar hits;    Stats::Scalar misses;    Stats::Histogram missLatency;    Stats::Formula hitRatio;  public:    ...    
void regStats() override;}; Next, we have to define the function to override the regStats function so the statistics are registered with gem5’s statistics infrastructure. Here, for each statistic, we give it a name based on the “parent” SimObject name and a description. For the histogram statistic, we also need to initialize it with how many buckets we want in the histogram. Finally, for the formula, we simply need to write the formula down in code. void SimpleCache::regStats() {    // If you don't do this you get errors about uninitialized stats.    MemObject::regStats();    hits.name(name() + \".hits\")        .desc(\"Number of hits\")        ;    misses.name(name() + \".misses\")        .desc(\"Number of misses\")        ;    missLatency.name(name() + \".missLatency\")        .desc(\"Ticks for misses to the cache\")        .init(16) // number of buckets        ;    hitRatio.name(name() + \".hitRatio\")        .desc(\"The ratio of hits to the total accesses to the cache\")        ;    hitRatio = hits / (hits + misses); } Finally, we need to update the statistics in our code. In the accessTiming function, we can increment the hits and misses on a hit and miss, respectively. Additionally, on a miss, we save the current time so we can measure the latency. void SimpleCache::accessTiming(PacketPtr pkt) {    bool hit = accessFunctional(pkt);    if (hit) {        hits++; // update stats        pkt-&gt;makeResponse();        sendResponse(pkt);    } else {        misses++; // update stats        missTime = curTick();        ... Then, when we get a response, we need to add the measured latency to our histogram. For this, we use the sample function. This adds a single point to the histogram. 
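The effect of sample() can be illustrated with a simplified, fixed-width histogram. This is a sketch only (the function name and fixed bucket width are mine; gem5’s Stats::Histogram additionally resizes its buckets on the fly, which this does not model):

```python
def bucketize(samples, num_buckets=16, bucket_width=32768):
    # Simplified model of Histogram::sample(): count how many
    # latency samples fall into each fixed-width bucket.
    counts = [0] * num_buckets
    for s in samples:
        counts[min(s // bucket_width, num_buckets - 1)] += 1
    return counts
```

For example, a miss latency of 53334 ticks lands in the second bucket (32768-65535), which is also where the largest share of samples falls in the stats.txt output shown below.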
This histogram automatically resizes the bucketsto fit the data it receives.boolSimpleCache::handleResponse(PacketPtr pkt){    insert(pkt);    missLatency.sample(curTick() - missTime);    ...The complete code for the SimpleCache header file can be downloadedhere, and thecomplete code for the implementation of the SimpleCache can bedownloadedhere.Now, if we run the above config file, we can check on the statistics inthe stats.txt file. For the 1 KB case, we get the followingstatistics. 91% of the accesses are hits and the average miss latency is53334 ticks (or 53 ns).system.cache.hits                                8431                       # Number of hitssystem.cache.misses                               877                       # Number of missessystem.cache.missLatency::samples                 877                       # Ticks for misses to the cachesystem.cache.missLatency::mean           53334.093501                       # Ticks for misses to the cachesystem.cache.missLatency::gmean          44506.409356                       # Ticks for misses to the cachesystem.cache.missLatency::stdev          36749.446469                       # Ticks for misses to the cachesystem.cache.missLatency::0-32767                 305     34.78%     34.78% # Ticks for misses to the cachesystem.cache.missLatency::32768-65535             365     41.62%     76.40% # Ticks for misses to the cachesystem.cache.missLatency::65536-98303             164     18.70%     95.10% # Ticks for misses to the cachesystem.cache.missLatency::98304-131071             12      1.37%     96.47% # Ticks for misses to the cachesystem.cache.missLatency::131072-163839            17      1.94%     98.40% # Ticks for misses to the cachesystem.cache.missLatency::163840-196607             7      0.80%     99.20% # Ticks for misses to the cachesystem.cache.missLatency::196608-229375             0      0.00%     99.20% # Ticks for misses to the cachesystem.cache.missLatency::229376-262143             0      
0.00%     99.20% # Ticks for misses to the cachesystem.cache.missLatency::262144-294911             2      0.23%     99.43% # Ticks for misses to the cachesystem.cache.missLatency::294912-327679             4      0.46%     99.89% # Ticks for misses to the cachesystem.cache.missLatency::327680-360447             1      0.11%    100.00% # Ticks for misses to the cachesystem.cache.missLatency::360448-393215             0      0.00%    100.00% # Ticks for misses to the cachesystem.cache.missLatency::393216-425983             0      0.00%    100.00% # Ticks for misses to the cachesystem.cache.missLatency::425984-458751             0      0.00%    100.00% # Ticks for misses to the cachesystem.cache.missLatency::458752-491519             0      0.00%    100.00% # Ticks for misses to the cachesystem.cache.missLatency::491520-524287             0      0.00%    100.00% # Ticks for misses to the cachesystem.cache.missLatency::total                   877                       # Ticks for misses to the cachesystem.cache.hitRatio                        0.905780                       # The ratio of hits to the total accessAnd when using a 128 KB cache, we get a slightly higher hit ratio. 
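The reported hitRatio values can be reproduced directly from the hit and miss counts with the Formula from regStats; a quick sketch (the helper name is mine):

```python
def hit_ratio(hits, misses):
    # The Formula stat: hitRatio = hits / (hits + misses)
    return hits / (hits + misses)
```

For the 1 kB cache, hit_ratio(8431, 877) gives about 0.905780, and for the 128 KB cache, hit_ratio(8944, 364) gives about 0.960894, matching the stats.txt output.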
Itseems like our cache is working as expected!system.cache.hits                                8944                       # Number of hitssystem.cache.misses                               364                       # Number of missessystem.cache.missLatency::samples                 364                       # Ticks for misses to the cachesystem.cache.missLatency::mean           64222.527473                       # Ticks for misses to the cachesystem.cache.missLatency::gmean          61837.584812                       # Ticks for misses to the cachesystem.cache.missLatency::stdev          27232.443748                       # Ticks for misses to the cachesystem.cache.missLatency::0-32767                   0      0.00%      0.00% # Ticks for misses to the cachesystem.cache.missLatency::32768-65535             254     69.78%     69.78% # Ticks for misses to the cachesystem.cache.missLatency::65536-98303             106     29.12%     98.90% # Ticks for misses to the cachesystem.cache.missLatency::98304-131071              0      0.00%     98.90% # Ticks for misses to the cachesystem.cache.missLatency::131072-163839             0      0.00%     98.90% # Ticks for misses to the cachesystem.cache.missLatency::163840-196607             0      0.00%     98.90% # Ticks for misses to the cachesystem.cache.missLatency::196608-229375             0      0.00%     98.90% # Ticks for misses to the cachesystem.cache.missLatency::229376-262143             0      0.00%     98.90% # Ticks for misses to the cachesystem.cache.missLatency::262144-294911             2      0.55%     99.45% # Ticks for misses to the cachesystem.cache.missLatency::294912-327679             1      0.27%     99.73% # Ticks for misses to the cachesystem.cache.missLatency::327680-360447             1      0.27%    100.00% # Ticks for misses to the cachesystem.cache.missLatency::360448-393215             0      0.00%    100.00% # Ticks for misses to the cachesystem.cache.missLatency::393216-425983             0   
   0.00%    100.00% # Ticks for misses to the cachesystem.cache.missLatency::425984-458751             0      0.00%    100.00% # Ticks for misses to the cachesystem.cache.missLatency::458752-491519             0      0.00%    100.00% # Ticks for misses to the cachesystem.cache.missLatency::491520-524287             0      0.00%    100.00% # Ticks for misses to the cachesystem.cache.missLatency::total                   364                       # Ticks for misses to the cachesystem.cache.hitRatio                        0.960894                       # The ratio of hits to the total access",
        "url": "/documentation/learning_gem5/part2/simplecache/"
      }
      ,
    
      "documentation-learning-gem5-part3-msibuilding": {
        "title": "Compiling a SLICC protocol",
        "content": "Compiling a SLICC protocolThe SLICC fileNow that we have finished implementing the protocol, we need to compileit. You can download the complete SLICC files below:  MSI-cache.sm  MSI-dir.sm  MSI-msg.smBefore building the protocol, we need to create one more file:MSI.slicc. This file tells the SLICC compiler which state machinefiles to compile for this protocol. The first line contains the name ofour protocol. Then, the file has a number of include statements. Eachinclude statement has a file name. This filename can come from any ofthe protocol_dirs directories. We declared the current directory aspart of the protocol_dirs in the SConsopts file(protocol_dirs.append(str(Dir('.').abspath))). The other directory issrc/mem/protocol/. These files are included like C++h header files.Effectively, all of the files are processed as one large SLICC file.Thus, any files that declare types that are used in other files mustcome before the files they are used in (e.g., MSI-msg.sm must comebefore MSI-cache.sm since MSI-cache.sm uses the RequestMsg type).protocol \"MSI\";include \"RubySlicc_interfaces.slicc\";include \"MSI-msg.sm\";include \"MSI-cache.sm\";include \"MSI-dir.sm\";You can download the fill filehere.Compiling a protocol with SConsMost SCons defaults (found in build_opts/) specify the protocol asMI_example, an example, but poor performing protocol. Therefore, wecannot simply use a default build name (e.g., X86 or ARM). We haveto specify the SCons options on the command line. The command line belowwill build our new protocol with the X86 ISA.scons build/X86_MSI/gem5.opt --default=X86 PROTOCOL=MSI SLICC_HTML=TrueThis command will build gem5.opt in the directory build/X86_MSI. Youcan specify any directory here. This command line has two newparameters: --default and PROTOCOL. First, --default specifieswhich file to use in build_opts for defaults for all of the SConsvariables (e.g., ISA, CPU_MODELS). 
Next, PROTOCOL overrides anydefault for the PROTOCOL SCons variable in the default specified.Thus, we are telling SCons to specifically compile our new protocol, notwhichever protocol was specified in build_opts/X86.There is one more variable on this command line to build gem5:SLICC_HTML=True. When you specify this on the building command line,SLICC will generate the HTML tables for your protocol. You can find theHTML tables in &lt;build directory&gt;/mem/protocol/html. By default, theSLICC compiler skips building the HTML tables because it impacts theperformance of compiling gem5, especially when compiling on a networkfile system.After gem5 finishes compiling, you will have a gem5 binary with your newprotocol! If you want to build another protocol into gem5, you have tochange the PROTOCOL SCons variable. Thus, it is a good idea to use adifferent build directory for each protocol, especially if you will becomparing protocols.When building your protocol, you will likely encounter errors in yourSLICC code reported by the SLICC compiler. Most errors include the fileand line number of the error. Sometimes, this line number is the lineafter the error occurs. In fact, the line number can be far below theactual error. For instance, if the curly brackets do not matchcorrectly, the error will report the last line in the file as thelocation.",
        "url": "/documentation/learning_gem5/part3/MSIbuilding/"
      }
      ,
    
      "documentation-learning-gem5-part3-msidebugging": {
        "title": "Debugging SLICC Protocols",
        "content": "Debugging SLICC ProtocolsIn this section, I present the steps that I took while debugging the MSIprotocol implemented earlier in this chapter. Learning to debugcoherence protocols is a challenge. The best way is by working withothers who have written SLICC protocols in the past. However, since you,the reader, cannot look over my shoulder while I am debugging aprotocol, I am trying to present the next-best thing.Here, I first present some high-level suggestions to tackling protocolerrors. Next, I discuss some details about deadlocks, and how tounderstand protocol traces that can be used to fix them. Then, I presentmy experience debugging the MSI protocol in this chapter in astream-of-consciousness style. I will show the error that was generated,then the solution to the error, sometimes with some commentary of thedifferent tactics I tried to solve the error.General debugging tipsRuby has many useful debug flags. However, the most useful, by far, isProtocolTrace. Below, you will see several examples of using theprotocol trace to debug a protocol. The protocol trace prints everytransition for all controllers. Thus, you can simply trace the entireexecution of the cache system.Other useful debug flags include:  RubyGenerated  Prints a bunch of stuff from the ruby generated code.  RubyPort/RubySequencer  See the details of sending/receiving messages into/out of ruby.  RubyNetwork  Prints entire network messages including the sender/receiver and thedata within the message for all messages. This flag is useful whenthere is a data mismatch.The first step to debugging a Ruby protocol is to run it with the Rubyrandom tester. The random tester issues semi-random requests into theRuby system and checks to make sure the returned data is correct. Tomake debugging faster, the random tester issues read requests from onecontroller for a block and a write request for the same cache block (buta different byte) from a different controller. 
Thus, the Ruby randomtester does a good job exercising the transient states and raceconditions in the protocol.Unfortunately, the random tester’s configuration is slightly differentthan when using normal CPUs. Thus, we need to use a differentMyCacheSystem than before. You can download this different cachesystem filehere and youcan download the modified run scripthere. The testrun script is mostly the same as the simple run script, but creates theRubyRandomTester instead of CPUs.It is often a good idea to first run the random tester with a single“CPU”. Then, increase the number of loads from the default of 100 tosomething that takes a few minutes to execute on your host system. Next,if there are no errors, then increase the number of “CPUs” to two andreduce the number of loads to 100 again. Then, start increasing thenumber of loads. Finally, you can increase the number of CPUs tosomething reasonable for the system you are trying to simulate. If youcan run the random tester for 10-15 minutes, you can be slightlyconfident that the random tester isn’t going to find any other bugs.Once you have your protocol working with the random tester, you can moveon to using real applications. It is likely that real applications willexpose even more bugs in the protocol. If at all possible, it is mucheasier to debug your protocol with the random tester than with realapplications!Understanding Protocol TracesUnfortunately, despite extensive effort to catch bugs in them, coherenceprotocols (even heavily tested ones) will have bugs. Sometimes thesebugs are relatively simple fixes, while other times the bugs will bevery insidious and difficult to track down. In the worst case, the bugswill manifest themselves as deadlocks: bugs that literally prevent theapplication from making progress. Another similar problem is livelocks:where the program runs forever due to a cycle somewhere in the system.Whenever livelocks or deadlocks occur, the next thing to do is generatea protocol trace. 
Traces print a running list of every transition that is happening in the memory system: memory requests starting and completing, L1 and directory transitions, etc. You can then use these traces to identify why the deadlock is occurring. However, as we will discuss in more detail below, debugging deadlocks in protocol traces is often extremely challenging. Here, we discuss what appears in the protocol trace to help explain what is happening. To start with, let’s look at a small snippet of a protocol trace (we will discuss the details of this trace further below): ... 4541   0    L1Cache         Replacement   MI_A&gt;MI_A   [0x4ac0, line 0x4ac0] 4542   0    L1Cache              PutAck   MI_A&gt;I      [0x4ac0, line 0x4ac0] 4549   0  Directory              MemAck   MI_M&gt;I      [0x4ac0, line 0x4ac0] 4641   0        Seq               Begin       &gt;       [0x4aec, line 0x4ac0] LD 4652   0    L1Cache                Load      I&gt;IS_D   [0x4ac0, line 0x4ac0] 4657   0  Directory                GetS      I&gt;S_M    [0x4ac0, line 0x4ac0] 4669   0  Directory             MemData    S_M&gt;S      [0x4ac0, line 0x4ac0] 4674   0        Seq                Done       &gt;       [0x4aec, line 0x4ac0] 33 cycles 4674   0    L1Cache       DataDirNoAcks   IS_D&gt;S      [0x4ac0, line 0x4ac0] 5321   0        Seq               Begin       &gt;       [0x4aec, line 0x4ac0] ST 5322   0    L1Cache               Store      S&gt;SM_AD  [0x4ac0, line 0x4ac0] 5327   0  Directory                GetM      S&gt;M_M    [0x4ac0, line 0x4ac0] Every line in this trace has a set pattern in terms of what information appears on that line. Specifically, the fields are:  Current Tick: the tick in which the print occurs.  Machine Version: the number of the machine where this request is coming from. For example, if there are 4 L1 caches, then the numbers would be 0-3. Assuming you have 1 L1 cache per core, you can think of this as representing the core the request is coming from.  
Component: which part of the system is doing the print. Generally,Seq is shorthand for Sequencer, L1Cache represents the L1 Cache,“Directory” represents the directory, and so on. For L1 caches andthe directory, this represents the name of the machine type (i.e.,what is after “MachineType:” in the machine() definition).  Action: what the component is doing. For example, “Begin” means theSequencer has received a new request, “Done” means that theSequencer is completing a previous request, and “DataDirNoAcks”means that our DataDirNoAcks event is being triggered.  Transition (e.g., MI_A&gt;MI_A): what state transition this actionis doing (format: “currentState&gt;nextState”). If no transition ishappening, this is denoted with “&gt;”.  Address (e.g., [0x4ac0, line 0x4ac0]): the physical address of therequest (format: [wordAddress, lineAddress]). This address willalways be cache-block aligned except for requests from theSequencer and mandatoryQueue.  (Optional) Comments: optionally, there is one additional field topass comments. For example, the “LD” , “ST”, and “33 cycles” linesuse this extra field to pass additional information to the trace –such as identifying the request as a load or store. For SLICCtransitions, APPEND_TRANSITION_COMMENT often use this, as wediscussed previously.Generally, spaces are used to separate each of these fields (the spacebetween the fields are added implicitly, you do not need to add them).However, sometimes if a field is very long, there may be no spaces orthe line may be shifted compared to other lines.Using this information, let’s analyze the above snippet. The first(tick) field tells us that this trace snippet is showing what washappening in the memory system between ticks 4541 and 5327. In thissnippet, all of the requests are coming from L1Cache-0 (core 0) andgoing to Directory-0 (the first bank of the directory). 
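As an aside, because these fields appear in a fixed order, ProtocolTrace lines are easy to post-process with a script. A parsing sketch (the helper name is mine; it assumes the common line shape shown in the snippet):

```python
def parse_trace_line(line):
    # Fields: tick, machine version, component, action, transition,
    # then '[wordAddr, line lineAddr]' and an optional trailing comment.
    head, _, rest = line.partition('[')
    fields = head.split()
    addr_part, _, comment = rest.partition(']')
    word_addr, line_addr = [p.strip() for p in addr_part.split(',')]
    return {
        'tick': int(fields[0]),
        'version': int(fields[1]),
        'component': fields[2],
        'action': fields[3],
        'transition': fields[4],
        'addr': word_addr,
        'line_addr': line_addr.replace('line ', ''),
        'comment': comment.strip(),
    }
```

Piping a trace through a helper like this makes it easy to, say, filter all transitions for one cache line when hunting a deadlock.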
During thistime, we see several memory requests and state transitions for the cacheline 0x4ac0, both at the L1 caches and the directory. For example, intick 5322, the core executes a store to 0x4ac0. However, it currentlydoes not have that line in Modified in its cache (it is in Shared afterthe core loaded it from ticks 4641-4674), so it needs to requestownership for that line from the directory (which receives this requestin tick 5327). While waiting for ownership, L1Cache-0 transitions from S(Shared) to SM_AD (a transient state – was in S, going to M, waitingfor Ack and Data).To add a print to the protocol trace, you will need to add a print withthese fields with the ProtocolTrace flag. For example, if you look atsrc/mem/ruby/system/Sequencer.cc, you can see where theSeq               Begin and Seq                Done trace printscome from (search for ProtocolTrace).Errors I ran into debugging MSIgem5.opt: build/MSI/mem/ruby/system/Sequencer.cc:423: void Sequencer::readCallback(Addr, DataBlock&amp;, bool, MachineType, Cycles, Cycles, Cycles): Assertion `m_readRequestTable.count(makeLineAddress(address))' failed.I’m an idiot, it was that I called readCallback in externalStoreHitinstead of writeCallback. It’s good to start simple!gem5.opt: build/MSI/mem/ruby/network/MessageBuffer.cc:220: Tick MessageBuffer::dequeue(Tick, bool): Assertion `isReady(current_time)' failed.I ran gem5 in GDB to get more information. Look atL1Cache_Controller::doTransitionWorker. The current transition is:event=L1Cache_Event_PutAck, state=L1Cache_State_MI_A,&lt;next_state=@0x7fffffffd0a0&gt;: L1Cache_State_FIRST This is more simplyMI_A-&gt;I on a PutAck See it’s in popResponseQueue.The problem is that the PutAck is on the forward network, not theresponse network.panic: Invalid transitionsystem.caches.controllers0 time: 3594 addr: 3264 event: DataDirAcks state: IS_DHmm. I think this shouldn’t have happened. The needed acks should alwaysbe 0 or you get data from the owner. Ah. 
So I implemented sendDataToReqat the directory to always send the number of sharers. If we get thisresponse in IS_D we don’t care whether or not there are sharers. Thus,to make things more simple, I’m just going to transition to S onDataDirAcks. This is a slight difference from the originalimplementation in Sorin et al.Well, actually, I think it’s that we send the request after we addourselves to the sharer list. The above is incorrect. Sorin et al.were not wrong! Let’s try not doing that!So, I fixed this by checking to see if the requestor is the ownerbefore sending the data to the requestor at the directory. Only if therequestor is the owner do we include the number of sharers. Otherwise,it doesn’t matter at all and we just set the sharers to 0.panic: Invalid transition system.caches.controllers0 time: 5332addr: 0x4ac0 event: Inv state: SM\\_ADFirst, let’s look at where Inv is triggered. If you get an invalidate…only then. Maybe it’s that we are on the sharer list and shouldn’t be?We can use protocol trace and grep to find what’s going on.build/MSI/gem5.opt --debug-flags=ProtocolTrace configs/learning_gem5/part6/ruby_test.py | grep 0x4ac0...4541   0    L1Cache         Replacement   MI_A&gt;MI_A   [0x4ac0, line 0x4ac0]4542   0    L1Cache              PutAck   MI_A&gt;I      [0x4ac0, line 0x4ac0]4549   0  Directory              MemAck   MI_M&gt;I      [0x4ac0, line 0x4ac0]4641   0        Seq               Begin       &gt;       [0x4aec, line 0x4ac0] LD4652   0    L1Cache                Load      I&gt;IS_D   [0x4ac0, line 0x4ac0]4657   0  Directory                GetS      I&gt;S_M    [0x4ac0, line 0x4ac0]4669   0  Directory             MemData    S_M&gt;S      [0x4ac0, line 0x4ac0]4674   0        Seq                Done       &gt;       [0x4aec, line 0x4ac0] 33 cycles4674   0    L1Cache       DataDirNoAcks   IS_D&gt;S      [0x4ac0, line 0x4ac0]5321   0        Seq               Begin       &gt;       [0x4aec, line 0x4ac0] ST5322   0    L1Cache               Store    
  S&gt;SM_AD  [0x4ac0, line 0x4ac0]5327   0  Directory                GetM      S&gt;M_M    [0x4ac0, line 0x4ac0]Maybe there is a sharer in the sharers list when there shouldn’t be? Wecan add a defensive assert in clearOwner and setOwner.action(setOwner, \"sO\", desc=\"Set the owner\") {    assert(getDirectoryEntry(address).Sharers.count() == 0);    peek(request_in, RequestMsg) {        getDirectoryEntry(address).Owner.add(in_msg.Requestor);    }}action(clearOwner, \"cO\", desc=\"Clear the owner\") {    assert(getDirectoryEntry(address).Sharers.count() == 0);    getDirectoryEntry(address).Owner.clear();}Now, I get the following error:panic: Runtime Error at MSI-dir.sm:301: assert failure.This is in setOwner. Well, actually this is OK since we need to have thesharers still set until we count them to send the ack count to therequestor. Let’s remove that assert and see what happens. Nothing. Thatdidn’t help anything.When are invalidations sent from the directory? Only on S-&gt;M_M. So,here, we need to remove ourselves from the invalidation list. I think weneed to keep ourselves in the sharer list since we subtract one whensending the number of acks.Note: I’m coming back to this a little later. It turns out that both ofthese asserts are wrong. I found this out when running with more thanone CPU below. The sharers are set before clearing the Owner in M-&gt;S_Don a GetS.So, onto the next problem!panic: Deadlock detected: current_time: 56091 last_progress_time: 6090 difference:  50001 processor: 0Deadlocks are the worst kind of error. Whatever caused the deadlock isancient history (i.e., likely happened many cycles earlier), and oftenvery hard to track down.Looking at the tail of the protocol trace (note: sometimes you must putthe protocol trace into a file because it grows very big) I see thatthere is an address that is trying to be replaced. 
Let’s start there.56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]56091   0    L1Cache         Replacement   SM_A&gt;SM_A   [0x5ac0, line 0x5ac0]Before this replacement got stuck I see the following in the protocoltrace. Note: this is 50000 cycles in the past!...5592   0    L1Cache               Store      S&gt;SM_AD  [0x5ac0, line 0x5ac0]5597   0  Directory                GetM      S&gt;M_M    [0x5ac0, line 0x5ac0]...5641   0  Directory             MemData    M_M&gt;M      [0x5ac0, line 0x5ac0]...5646   0    L1Cache         DataDirAcks  SM_AD&gt;SM_A   [0x5ac0, line 0x5ac0]Ah! This clearly should not be DataDirAcks since we only have a singleCPU! So, we seem to not be subtracting properly. Going back to theprevious error, I was wrong about needing to keep ourselves in the list.I forgot that we no longer had the -1 thing. So, let’s remove ourselvesfrom the sharing list before sending the invalidations when weoriginally get the S-&gt;M request.So! 
With those changes the Ruby tester completes with a single core.Now, to make it harder we need to increase the number of loads we do andthen the number of cores.And, of course, when I increase it to 10,000 loads there is a deadlock.Fun!What I’m seeing at the end of the protocol trace is the following.144684   0    L1Cache         Replacement   MI_A&gt;MI_A   [0x5bc0, line 0x5bc0]...144685   0  Directory                GetM   MI_M&gt;MI_M   [0x54c0, line 0x54c0]...144685   0    L1Cache         Replacement   MI_A&gt;MI_A   [0x5bc0, line 0x5bc0]...144686   0  Directory                GetM   MI_M&gt;MI_M   [0x54c0, line 0x54c0]...144686   0    L1Cache         Replacement   MI_A&gt;MI_A   [0x5bc0, line 0x5bc0]...144687   0  Directory                GetM   MI_M&gt;MI_M   [0x54c0, line 0x54c0]...This is repeated for a long time.It seems that there is a circular dependence or something like thatcausing this deadlock.Well, it seems that I was correct. The order of the in_ports reallymatters! In the directory, I previously had the order: request,response, memory. However, there was a memory packet that was blockedbecause the request queue was blocked, which caused the circulardependence and the deadlock. The order should be memory, response, andrequest. I believe the memory/response order doesn’t matter since noresponses depend on memory and vice versa.Now, let’s try with two CPUs. First thing I run into is an assertfailure. 
I’m seeing the first assert in setState fail.void setState(Addr addr, State state) {    if (directory.isPresent(addr)) {        if (state == State:M) {            assert(getDirectoryEntry(addr).Owner.count() == 1);            assert(getDirectoryEntry(addr).Sharers.count() == 0);        }        getDirectoryEntry(addr).DirState := state;        if (state == State:I)  {            assert(getDirectoryEntry(addr).Owner.count() == 0);            assert(getDirectoryEntry(addr).Sharers.count() == 0);        }    }}To track this problem down, let’s add a debug statement (DPRINTF) andrun with protocol trace. First I added the following line just beforethe assert. Note that you are required to use the RubySlicc debug flag.This is the only debug flag included in the generated SLICC files.DPRINTF(RubySlicc, \"Owner %s\\n\", getDirectoryEntry(addr).Owner);Then, I see the following output when running with ProtocolTrace andRubySlicc.118   0  Directory             MemData    M_M&gt;M      [0x400, line 0x400]118: system.caches.controllers2: MSI-dir.sm:160: Owner [NetDest (16) 1 0  -  -  - 0  -  -  -  -  -  -  -  -  -  -  -  -  - ]118   0  Directory                GetM      M&gt;M      [0x400, line 0x400]118: system.caches.controllers2: MSI-dir.sm:160: Owner [NetDest (16) 1 1  -  -  - 0  -  -  -  -  -  -  -  -  -  -  -  -  - ]It looks like when we process the GetM when in state M we need to firstclear the owner before adding the new owner. The other options is insetOwner we could have Set the Owner specifically instead of adding itto the NetDest.Oooo! This is a new error!panic: Runtime Error at MSI-dir.sm:229: Unexpected message type..What is this message that fails? Let’s use the RubyNetwork debug flag totry to track down what message is causing this error. A few lines abovethe error I see the following message whose destination is thedirectory.The destination is a NetDest which is a bitvector of MachineIDs. Theseare split into multiple sections. 
I know I’m running with two CPUs, so the first two 0’s are for the CPUs, and the other 1 must be for the directory. 2285: PerfectSwitch-2: Message: [ResponseMsg: addr = [0x8c0, line 0x8c0] Type = InvAck Sender = L1Cache-1 Destination = [NetDest (16) 0 0  -  -  - 1  -  -  -  -  -  -  -  -  -  -  -  -  - ] DataBlk = [ 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0xb1 0xb2 0xb3 0xb4 0xca 0xcb 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 ] MessageSize = Control Acks = 0 ] This message has the type InvAck, which is clearly wrong! It seems that we are setting the requestor wrong when we send the invalidate (Inv) message to the L1 caches from the directory. Yes. This is the problem. We need to make the requestor the original requestor. This was already correct for the FwdGetS/M, but I missed the invalidate somehow. On to the next error! panic: Invalid transition system.caches.controllers0 time: 2287 addr: 0x8c0 event: LastInvAck state: SM_AD This seems to be that I am not counting the acks correctly. It could also be that the directory is much slower than the other caches at responding since it has to get the data from memory. If it’s the latter (which I should be sure to verify), what we could do is include an ack requirement for the directory, too. Then, when the directory sends the data (and the owner, too) decrement the needed acks and trigger the event based on the new ack count. Actually, that first hypothesis was not quite right. I printed out the number of acks whenever we receive an InvAck and what’s happening is that the other cache is responding with an InvAck before the directory has told it how many acks to expect. So, what we need to do is something like what I was talking about above. First of all, we will need to let the acks drop below 0 and add the total acks to it from the directory message. 
Then, we are going to have to complicate the logic for triggering the last ack, etc.Ok. So now we’re letting the tbe.Acks drop below 0 and then adding the directory acks whenever they show up.Next error: This is a tough one. The error is now that the data doesn’t match as it should. Kind of like the deadlock, the data could have been corrupted in the ancient past. I believe the address is the last one in the protocol trace.panic: Action/check failure: proc: 0 address: 19688 data: 0x779e6d0 byte_number: 0 m_value+byte_number: 53 byte: 0 [19688, value: 53, status: Check_Pending, initiating node: 0, store_count: 4]Time: 5843So, it could be something to do with ack counts, though I don’t think this is the issue. Either way, it’s a good idea to annotate the protocol trace with the ack information. To do this, we can add comments to the transition with APPEND_TRANSITION_COMMENT.action(decrAcks, \"da\", desc=\"Decrement the number of acks\") {    assert(is_valid(tbe));    tbe.Acks := tbe.Acks - 1;    APPEND_TRANSITION_COMMENT(\"Acks: \");    APPEND_TRANSITION_COMMENT(tbe.Acks);}5737   1    L1Cache              InvAck  SM_AD&gt;SM_AD  [0x400, line 0x400] Acks: -1For these data issues, the debug flag RubyNetwork is useful because it prints the value of the data blocks at every point it is in the network.For instance, for the address in question above, it looks like the data block is all 0’s after loading from main memory. I believe this should have valid data. 
In fact, if we go back in time a bit, we see that there were some non-zero elements.5382   1    L1Cache                 Inv      S&gt;I      [0x4cc0, line 0x4cc0]5383: PerfectSwitch-1: Message: [ResponseMsg: addr = [0x4cc0, line 0x4cc0] Type = InvAck Sender = L1Cache-1 Destination = [NetDest (16) 1 0 - - - 0 - - - - - - - - - - - - - ] DataBlk = [ 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x35 0x36 0x37 0x61 0x6d 0x6e 0x6f 0x70 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 ] MessageSize = Control Acks = 0 ] ... ... ... 5389 0 Directory MemData M_M&gt;M [0x4cc0, line 0x4cc0]5390: PerfectSwitch-2: incoming: 0 5390: PerfectSwitch-2: Message: [ResponseMsg: addr = [0x4cc0, line 0x4cc0] Type = Data Sender = Directory-0 Destination = [NetDest (16) 1 0 - - - 0 - - - - - - - - - - - - - ] DataBlk = [ 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 ] MessageSize = Data Acks = 1 ]It seems that memory is not being updated correctly on the M-&gt;S transition. After lots of digging and using the MemoryAccess debug flag to see exactly what was being read from and written to main memory, I found that in sendDataToMem I was using the request_in. This is right for PutM, but not right for Data. We need another action to send data from the response queue!panic: Invalid transition system.caches.controllers0 time: 44381 addr: 0x7c0 event: Inv state: SM_ADInvalid transition is my personal favorite kind of SLICC error. For this error, you know exactly what address caused it, and it’s very easy to trace through the protocol trace to find what went wrong. 
However, in this case, nothing went wrong; I just forgot to put this transition in! Easy fix!",
        "url": "/documentation/learning_gem5/part3/MSIdebugging/"
      }
      ,
    
      "documentation-learning-gem5-part3-msiintro": {
        "title": "Introduction to Ruby",
        "content": "Introduction to RubyRuby comes from the multifacet GEMS project. Ruby provides detailed cache memory and cache coherence models as well as a detailed network model (Garnet).Ruby is flexible. It can model many different kinds of coherence implementations, including broadcast, directory, token, and region-based coherence, and it is simple to extend to new coherence models.Ruby is a mostly drop-in replacement for the classic memory system. There are interfaces between the classic gem5 MemObjects and Ruby, but for the most part, the classic caches and Ruby are not compatible.In this part of the book, we will go through creating an example protocol, from the protocol description to debugging and running the protocol.Before diving into a protocol, we will first talk about some of the architecture of Ruby. The most important structure in Ruby is the controller, or state machine. Controllers are implemented by writing a SLICC state machine file.SLICC is a domain-specific language (Specification Language including Cache Coherence) for specifying coherence protocols. SLICC files end in “.sm” because they are state machine files. Each file describes states, transitions from a beginning state to an end state on some event, and actions to take during the transition.Each coherence protocol is made up of multiple SLICC state machine files. These files are compiled with the SLICC compiler, which is written in Python and is part of the gem5 source. The SLICC compiler takes the state machine files and outputs a set of C++ files that are compiled with all of gem5’s other files. These files include the SimObject declaration file as well as implementation files for SimObjects and other C++ objects.Currently, gem5 supports compiling only a single coherence protocol at a time. For instance, you can compile MI_example into gem5 (the default, poor-performance protocol), or you can use MESI_Two_Level. 
But, to use MESI_Two_Level, you have to recompile gem5 so the SLICC compiler can generate the correct files for the protocol. We discuss this further in the compilation section &lt;MSI-building-section&gt;.Now, let’s dive into implementing our first coherence protocol!",
        "url": "/documentation/learning_gem5/part3/MSIintro/"
      }
      ,
    
      "documentation-learning-gem5-part3-cache-actions": {
        "title": "Action code blocks",
        "content": "Action code blocksThe next section of the state machine file is the action blocks. Theaction blocks are executed during a transition from one state toanother, and are called by the transition code blocks (which we willdiscuss in the next section &lt;MSI-transitions-section&gt;). Actions aresingle action blocks. Some examples are “send a message to thedirectory” and “pop the head of the buffer”. Each action should be smalland only perform a single action.The first action we will implement is an action to send a GetS requestto the directory. We need to send a GetS request to the directorywhenever we want to read some data that is not in the Modified or Sharedstates in our cache. As previously mentioned, there are three variablesthat are automatically populated inside the action block (like thein_msg in peek blocks). address is the address that was passedinto the trigger function, cache_entry is the cache entry passedinto the trigger function, and tbe is the TBE passed into thetrigger function.action(sendGetS, 'gS', desc=\"Send GetS to the directory\") {    enqueue(request_out, RequestMsg, 1) {        out_msg.addr := address;        out_msg.Type := CoherenceRequestType:GetS;        out_msg.Destination.add(mapAddressToMachine(address,                                MachineType:Directory));        // See mem/protocol/RubySlicc_Exports.sm for possible sizes.        out_msg.MessageSize := MessageSizeType:Control;        // Set that the requestor is this machine so we get the response.        out_msg.Requestor := machineID;    }}When specifying the action block, there are two parameters: adescription and a “shorthand”. These two parameters are used in the HTMLtable generation. The shorthand shows up in the transition cell, so itshould be as short as possible. SLICC provides a special syntax to allowfor bold (‘’), superscript (‘\\^’), and spaces (‘_’) in the shorthand tohelp keep them short. 
Second, the description also shows up in the HTMLtable when you click on a particular action. The description can belonger and help explain what the action does.Next, in this action we are going to send a message to the directory onthe request_out port as declared above the in_port blocks. Theenqueue function is similar to the peek function since it requires acode block. enqueue, however, has the special variable out_msg. Inthe enqueue block, you can modify the out_msg with the current data.The enqueue block takes three parameters, the message buffer to sendthe message, the type of the message, and a latency. This latency (1cycle in the example above and throughout this cache controller) is thecache latency. This is where you specify the latency of accessing thecache, in this case for a miss. Below we will see that specifying thelatency for a hit is similar.Inside the enqueue block is where the message data is populated. Forthe address of the request, we can use the automatically populatedaddress variable. We are sending a GetS message, so we use thatmessage type. Next, we need to specify the destination of the message.For this, we use the mapAddressToMachine function that takes theaddress and the machine type we are sending to. This will look up in thecorrect MachineID based on the address. 
We call Destination.add because Destination is a NetDest object, or a bitmap of all MachineIDs.Finally, we need to specify the message size (from mem/protocol/RubySlicc_Exports.sm) and set ourselves as the requestor. Setting this machineID as the requestor allows the directory to respond to this cache or to forward the request to another cache that will respond.Similarly, we can create actions for sending other get and put requests. Note that get requests represent requests for data, and put requests represent requests where we are downgrading or evicting our copy of the data.action(sendGetM, \"gM\", desc=\"Send GetM to the directory\") {    enqueue(request_out, RequestMsg, 1) {        out_msg.addr := address;        out_msg.Type := CoherenceRequestType:GetM;        out_msg.Destination.add(mapAddressToMachine(address,                                MachineType:Directory));        out_msg.MessageSize := MessageSizeType:Control;        out_msg.Requestor := machineID;    }}action(sendPutS, \"pS\", desc=\"Send PutS to the directory\") {    enqueue(request_out, RequestMsg, 1) {        out_msg.addr := address;        out_msg.Type := CoherenceRequestType:PutS;        out_msg.Destination.add(mapAddressToMachine(address,                                MachineType:Directory));        out_msg.MessageSize := MessageSizeType:Control;        out_msg.Requestor := machineID;    }}action(sendPutM, \"pM\", desc=\"Send putM+data to the directory\") {    enqueue(request_out, RequestMsg, 1) {        out_msg.addr := address;        out_msg.Type := CoherenceRequestType:PutM;        out_msg.Destination.add(mapAddressToMachine(address,                                MachineType:Directory));        out_msg.DataBlk := cache_entry.DataBlk;        out_msg.MessageSize := MessageSizeType:Data;        out_msg.Requestor := machineID;    }}Next, we need to specify an action to send data to another cache in the case that we get a forwarded request from the directory for another cache. 
In this case, we have to peek into the request queue to get otherdata from the requesting message. This peek code block is exactly thesame as the ones in the in_port. When you nest an enqueue block in apeek block both in_msg and out_msg variables are available. Thisis needed so we know which other cache to send the data to.Additionally, in this action we use the cache_entry variable to getthe data to send to the other cache.action(sendCacheDataToReq, \"cdR\", desc=\"Send cache data to requestor\") {    assert(is_valid(cache_entry));    peek(forward_in, RequestMsg) {        enqueue(response_out, ResponseMsg, 1) {            out_msg.addr := address;            out_msg.Type := CoherenceResponseType:Data;            out_msg.Destination.add(in_msg.Requestor);            out_msg.DataBlk := cache_entry.DataBlk;            out_msg.MessageSize := MessageSizeType:Data;            out_msg.Sender := machineID;        }    }}Next, we specify actions for sending data to the directory and sendingan invalidation ack to the original requestor on a forward request whenthis cache does not have the data.action(sendCacheDataToDir, \"cdD\", desc=\"Send the cache data to the dir\") {    enqueue(response_out, ResponseMsg, 1) {        out_msg.addr := address;        out_msg.Type := CoherenceResponseType:Data;        out_msg.Destination.add(mapAddressToMachine(address,                                MachineType:Directory));        out_msg.DataBlk := cache_entry.DataBlk;        out_msg.MessageSize := MessageSizeType:Data;        out_msg.Sender := machineID;    }}action(sendInvAcktoReq, \"iaR\", desc=\"Send inv-ack to requestor\") {    peek(forward_in, RequestMsg) {        enqueue(response_out, ResponseMsg, 1) {            out_msg.addr := address;            out_msg.Type := CoherenceResponseType:InvAck;            out_msg.Destination.add(in_msg.Requestor);            out_msg.DataBlk := cache_entry.DataBlk;            out_msg.MessageSize := MessageSizeType:Control;            out_msg.Sender := 
machineID;        }    }}Another required action is to decrement the number of acks we are waiting for. This is used when we get an invalidation ack from another cache, to track the total number of acks. For this action, we assume that there is a valid TBE and modify the implicit tbe variable in the action block.Additionally, we have another example of making debugging easier in protocols: APPEND_TRANSITION_COMMENT. This function takes a string, or something that can easily be converted to a string (e.g., int), as a parameter. It modifies the protocol trace output, which we will discuss in the debugging section. On each protocol trace line that executes this action, it will print the total number of acks this cache is still waiting on. This is useful since the number of remaining acks is part of the cache block state.action(decrAcks, \"da\", desc=\"Decrement the number of acks\") {    assert(is_valid(tbe));    tbe.AcksOutstanding := tbe.AcksOutstanding - 1;    APPEND_TRANSITION_COMMENT(\"Acks: \");    APPEND_TRANSITION_COMMENT(tbe.AcksOutstanding);}We also need an action to store the acks when we receive a message from the directory with an ack count. For this action, we peek into the directory’s response message to get the number of acks and store them in the (required to be valid) TBE.action(storeAcks, \"sa\", desc=\"Store the needed acks to the TBE\") {    assert(is_valid(tbe));    peek(response_in, ResponseMsg) {        tbe.AcksOutstanding := in_msg.Acks + tbe.AcksOutstanding;    }    assert(tbe.AcksOutstanding &gt; 0);}The next set of actions responds to CPU requests on hits and misses. For these actions, we need to notify the sequencer (the interface between Ruby and the rest of gem5) of the new data. 
In thecase of a store, we give the sequencer a pointer to the data block andthe sequencer updates the data in-place.action(loadHit, \"Lh\", desc=\"Load hit\") {    assert(is_valid(cache_entry));    cacheMemory.setMRU(cache_entry);    sequencer.readCallback(address, cache_entry.DataBlk, false);}action(externalLoadHit, \"xLh\", desc=\"External load hit (was a miss)\") {    assert(is_valid(cache_entry));    peek(response_in, ResponseMsg) {        cacheMemory.setMRU(cache_entry);        // Forward the type of machine that responded to this request        // E.g., another cache or the directory. This is used for tracking        // statistics.        sequencer.readCallback(address, cache_entry.DataBlk, true,                               machineIDToMachineType(in_msg.Sender));    }}action(storeHit, \"Sh\", desc=\"Store hit\") {    assert(is_valid(cache_entry));    cacheMemory.setMRU(cache_entry);    // The same as the read callback above.    sequencer.writeCallback(address, cache_entry.DataBlk, false);}action(externalStoreHit, \"xSh\", desc=\"External store hit (was a miss)\") {    assert(is_valid(cache_entry));    peek(response_in, ResponseMsg) {        cacheMemory.setMRU(cache_entry);        sequencer.writeCallback(address, cache_entry.DataBlk, true,                               // Note: this could be the last ack.                               machineIDToMachineType(in_msg.Sender));    }}action(forwardEviction, \"e\", desc=\"sends eviction notification to CPU\") {    if (send_evictions) {        sequencer.evictionCallback(address);    }}In each of these actions, it is vital that we call setMRU on the cacheentry. The setMRU function is what allows the replacement policy toknow which blocks are most recently accessed. If you leave out thesetMRU call, the replacement policy will not operate correctly!On loads and stores, we call the read/writeCallback function on thesequencer. This notifies the sequencer of the new data or allows it towrite the data into the data block. 
These functions take four parameters (the last parameter is optional): the address, the data block, a boolean for whether the original request was a miss, and finally, an optional MachineType. The final optional parameter is used for tracking statistics on where the data for the request was found. It allows you to track whether the data comes from cache-to-cache transfers or from memory.Finally, we also have an action to forward evictions to the CPU. This is required for gem5’s out-of-order models to squash speculative loads if the cache block is evicted before the load is committed. We use the parameter specified at the top of the state machine file to check if this is needed or not.Next, we have a set of cache management actions that allocate and free cache entries and TBEs. To create a new cache entry, we must have space in the CacheMemory object. Then, we can call the allocate function. This allocate function doesn’t actually allocate the host memory for the cache entry, since this controller specializes the Entry type, which is why we need to pass a new Entry to the allocate function.Additionally, in these actions we call set_cache_entry, unset_cache_entry, and similar functions for the TBE. These set and unset the implicit variables that were passed in via the trigger function. For instance, when allocating a new cache block, we call set_cache_entry, and in all actions following allocateCacheBlock the cache_entry variable will be valid.There is also an action that copies the data from the cache data block to the TBE. 
This allows us to keep the data around even after removing the cache block, until we are sure that this cache is no longer responsible for the data.action(allocateCacheBlock, \"a\", desc=\"Allocate a cache block\") {    assert(is_invalid(cache_entry));    assert(cacheMemory.cacheAvail(address));    set_cache_entry(cacheMemory.allocate(address, new Entry));}action(deallocateCacheBlock, \"d\", desc=\"Deallocate a cache block\") {    assert(is_valid(cache_entry));    cacheMemory.deallocate(address);    // clear the cache_entry variable (now it's invalid)    unset_cache_entry();}action(writeDataToCache, \"wd\", desc=\"Write data to the cache\") {    peek(response_in, ResponseMsg) {        assert(is_valid(cache_entry));        cache_entry.DataBlk := in_msg.DataBlk;    }}action(allocateTBE, \"aT\", desc=\"Allocate TBE\") {    assert(is_invalid(tbe));    TBEs.allocate(address);    // this updates the tbe variable for other actions    set_tbe(TBEs[address]);}action(deallocateTBE, \"dT\", desc=\"Deallocate TBE\") {    assert(is_valid(tbe));    TBEs.deallocate(address);    // this makes the tbe variable invalid    unset_tbe();}action(copyDataFromCacheToTBE, \"Dct\", desc=\"Copy data from cache to TBE\") {    assert(is_valid(cache_entry));    assert(is_valid(tbe));    tbe.DataBlk := cache_entry.DataBlk;}The next set of actions is for managing the message buffers. We need to add actions to pop the head message off of the buffers after the message has been satisfied. The dequeue function takes a single parameter, the time for the dequeue to take place. 
Delaying the dequeue for a cycleprevents the in_port logic from consuming another message from thesame message buffer in a single cycle.action(popMandatoryQueue, \"pQ\", desc=\"Pop the mandatory queue\") {    mandatory_in.dequeue(clockEdge());}action(popResponseQueue, \"pR\", desc=\"Pop the response queue\") {    response_in.dequeue(clockEdge());}action(popForwardQueue, \"pF\", desc=\"Pop the forward queue\") {    forward_in.dequeue(clockEdge());}Finally, the last action is a stall. Below, we are using a “z_stall”,which is the simplest kind of stall in SLICC. By leaving the actionblank, it generates a “protocol stall” in the in_port logic whichstalls all messages from being processed in the current message bufferand all lower priority message buffer. Protocols using “z_stall” areusually simpler, but lower performance since a stall on a high prioritybuffer can stall many requests that may not need to be stalled.action(stall, \"z\", desc=\"Stall the incoming request\") {    // z_stall}There are two other ways to deal with messages that cannot currently beprocessed that can improve the performance of protocols. (Note: We willnot be using these more complicated techniques in this simple exampleprotocol.) The first is recycle. The message buffers have a recyclefunction that moves the request on the head of the queue to the tail.This allows other requests in the buffer or requests in other buffers tobe processed immediately. recycle actions often improve theperformance of protocols significantly.However, recycle is not very realistic when compared to realimplementations of cache coherence. For a more realistichigh-performance solution to stalling messages, Ruby provides thestall_and_wait function on message buffers. This function takes thehead request and moves it into a separate structure tagged by anaddress. The address is user-specified, but is usually the request’saddress. 
Later, when the blocked request can be handled, there isanother function wakeUpBuffers(address) which will wake up allrequests stalled on address and wakeUpAllBuffers() that wakes up allof the stalled requests. When a request is “woken up” it is placed backinto the message buffer to be subsequently processed.",
        "url": "/documentation/learning_gem5/part3/cache-actions/"
      }
      ,
    
      "documentation-learning-gem5-part3-cache-declarations": {
        "title": "Declaring a state machine",
        "content": "Declaring a state machineLet’s start on our first state machine file! First, we will create the L1 cache controller for our MSI protocol.Create a file called MSI-cache.sm; the following code declares the state machine.machine(MachineType:L1Cache, \"MSI cache\")    : &lt;parameters&gt;{    &lt;All state machine code&gt;}The first thing you’ll notice about the state machine code is that it looks very C++-like. The state machine file is like creating a C++ object in a header file, with all of the code included there as well. When in doubt, C++ syntax will probably work in SLICC. However, there are many cases where C++ syntax is incorrect syntax for SLICC, as well as cases where SLICC extends the syntax.With MachineType:L1Cache, we are naming this state machine L1Cache. SLICC will generate many different objects for us from the state machine using that name. For instance, once this file is compiled, there will be a new SimObject, L1Cache_Controller, that is the cache controller. Also included in this declaration is a description of this state machine: “MSI cache”.There are many cases in SLICC where you must include a description to go along with a variable. The reason for this is that SLICC was originally designed to just describe, not implement, coherence protocols. Today, these extra descriptions serve two purposes. First, they act as comments on what the author intended each variable, state, or event to be used for. Second, many of them are still exported into HTML when building the HTML tables for the SLICC protocol. Thus, while browsing the HTML table, you can see the more detailed comments from the author of the protocol. It is important to be clear with these descriptions since coherence protocols can get quite complicated.State machine parametersFollowing the machine() declaration is a colon, after which all of the parameters to the state machine are declared. 
These parameters are directly exported to the SimObject that is generated by the state machine.For our MSI L1 cache, we have the following parameters:machine(MachineType:L1Cache, \"MSI cache\"): Sequencer *sequencer;  CacheMemory *cacheMemory;  bool send_evictions;  &lt;Message buffer declarations&gt;  {  }First, we have a Sequencer. This is a special class that is implemented in Ruby to interface with the rest of gem5. The Sequencer is a gem5 MemObject with a slave port, so it can accept memory requests from other objects. The sequencer accepts requests from a CPU (or other master port) and converts the gem5 packet into a RubyRequest. Finally, the RubyRequest is pushed onto the mandatoryQueue of the state machine. We will revisit the mandatoryQueue in the in-port section.Next, there is a CacheMemory object. This is what holds the cache data (i.e., cache entries). The exact implementation, size, etc., is configurable at runtime.Finally, we can specify any other parameters we would like, similar to a general SimObject. In this case, we have a boolean variable send_evictions. This is used for out-of-order core models to notify the load-store queue if an address is evicted after a load, so that a speculative load can be squashed.Next, also in the parameter block (i.e., before the first open bracket), we need to declare all of the message buffers that this state machine will use. Message buffers are the interface between the state machine and the Ruby network. Messages are sent and received via the message buffers. Thus, for each virtual channel in our protocol we need a separate message buffer.The MSI protocol needs three different virtual networks. Virtual networks are needed to prevent deadlock (e.g., it is bad if a response gets stuck behind a stalled request). In this protocol, the highest priority is responses (virtual network 2), followed by forwarded requests (virtual network 1); requests have the lowest priority (virtual network 0). See Sorin et al. 
for details on why these three virtual networks are needed.The following code declares all of the needed message buffers.machine(MachineType:L1Cache, \"MSI cache\"): Sequencer *sequencer;  CacheMemory *cacheMemory;  bool send_evictions;  MessageBuffer * requestToDir, network=\"To\", virtual_network=\"0\", vnet_type=\"request\";  MessageBuffer * responseToDirOrSibling, network=\"To\", virtual_network=\"2\", vnet_type=\"response\";  MessageBuffer * forwardFromDir, network=\"From\", virtual_network=\"1\", vnet_type=\"forward\";  MessageBuffer * responseFromDirOrSibling, network=\"From\", virtual_network=\"2\", vnet_type=\"response\";  MessageBuffer * mandatoryQueue;{}We have five different message buffers: two “To”, two “From”, and one special message buffer. The “To” message buffers are similar to master ports in gem5. These are the message buffers that this controller uses to send messages to other controllers in the system. The “From” message buffers are like slave ports. This controller receives messages on “From” buffers from other controllers in the system.We have two different “To” buffers, one for low-priority requests and one for high-priority responses. The priority of the networks is not inherent. The priority is based on the order in which other controllers look at the message buffers. It is a good idea to number the virtual networks so that higher numbers mean higher priority, but the virtual network number is ignored by Ruby, except that messages on network 2 can only go to other message buffers on network 2 (i.e., messages can’t jump from one network to another).Similarly, there are two different ways this cache can receive messages: either as a forwarded request from the directory (e.g., another cache requests a writable block and we have a readable copy) or as a response to a request this controller made. The response is higher priority than the forwarded requests.Finally, there is a special message buffer, the mandatoryQueue. 
Thismessage buffer is used by the Sequencer to convert gem5 packets intoRuby requests. Unlike the other message buffers, mandatoryQueue doesnot connect to the Ruby network. Note: the name of this message bufferis hard-coded and must be exactly “mandatoryQueue”.As previously mentioned, this parameter block is converted into theSimObject description file. Any parameters you put in this block will beSimObject parameters that are accessible from the Python configurationfiles. If you look at the generated file L1Cache_Controller.py, it willlook very familiar. Note: This is a generated file and you should nevermodify generated files directly!from m5.params import *from m5.SimObject import SimObjectfrom Controller import RubyControllerclass L1Cache_Controller(RubyController):    type = 'L1Cache_Controller'    cxx_header = 'mem/protocol/L1Cache_Controller.hh'    sequencer = Param.RubySequencer(\"\")    cacheMemory = Param.RubyCache(\"\")    send_evictions = Param.Bool(\"\")    requestToDir = Param.MessageBuffer(\"\")    responseToDirOrSibling = Param.MessageBuffer(\"\")    forwardFromDir = Param.MessageBuffer(\"\")    responseFromDirOrSibling = Param.MessageBuffer(\"\")    mandatoryQueue = Param.MessageBuffer(\"\")State declarationsThe next part of the state machine is the state declaration. Here, weare going to declare all of the stable and transient states for thestate machine. We will follow the naming convention in Sorin et al. Forinstance, the transient state “IM_AD” corresponds to moving fromInvalid to Modified waiting on acks and data. 
These states come directly from the left column of Table 8.3 in Sorin et al.state_declaration(State, desc=\"Cache states\") {    I,      AccessPermission:Invalid,                desc=\"Not present/Invalid\";    // States moving out of I    IS_D,   AccessPermission:Invalid,                desc=\"Invalid, moving to S, waiting for data\";    IM_AD,  AccessPermission:Invalid,                desc=\"Invalid, moving to M, waiting for acks and data\";    IM_A,   AccessPermission:Busy,                desc=\"Invalid, moving to M, waiting for acks\";    S,      AccessPermission:Read_Only,                desc=\"Shared. Read-only, other caches may have the block\";    // States moving out of S    SM_AD,  AccessPermission:Read_Only,                desc=\"Shared, moving to M, waiting for acks and 'data'\";    SM_A,   AccessPermission:Read_Only,                desc=\"Shared, moving to M, waiting for acks\";    M,      AccessPermission:Read_Write,                desc=\"Modified. Read &amp; write permissions. Owner of block\";    // States moving to Invalid    MI_A,   AccessPermission:Busy,                desc=\"Was modified, moving to I, waiting for put ack\";    SI_A,   AccessPermission:Busy,                desc=\"Was shared, moving to I, waiting for put ack\";    II_A,   AccessPermission:Invalid,                desc=\"Sent valid data before receiving put ack. Waiting for put ack.\";}Each state has an associated access permission: “Invalid”, “NotPresent”, “Busy”, “Read_Only”, or “Read_Write”. The access permission is used for functional accesses to the cache. Functional accesses are debug-like accesses where the simulator wants to read or update the data immediately. One example of this is reading in files in SE mode, which are directly loaded into memory.For functional accesses, all caches are checked to see if they have a corresponding block with a matching address. 
For functional reads, allof the blocks with a matching address that have read-only or read-writepermission are accessed (they should all have the same data). Forfunctional writes, all blocks are updated with new data if they havebusy, read-only, or read-write permission.Event declarationsNext, we need to declare all of the events that are triggered byincoming messages for this cache controller. These events come directlyfrom the first row in Table 8.3 in Sorin et al.enumeration(Event, desc=\"Cache events\") {    // From the processor/sequencer/mandatory queue    Load,           desc=\"Load from processor\";    Store,          desc=\"Store from processor\";    // Internal event (only triggered from processor requests)    Replacement,    desc=\"Triggered when block is chosen as victim\";    // Forwarded request from other cache via dir on the forward network    FwdGetS,        desc=\"Directory sent us a request to satisfy GetS. We must have the block in M to respond to this.\";    FwdGetM,        desc=\"Directory sent us a request to satisfy GetM. We must have the block in M to respond to this.\";    Inv,            desc=\"Invalidate from the directory.\";    PutAck,         desc=\"Response from directory after we issue a put. This must be on the fwd network to avoid deadlock.\";    // Responses from directory    DataDirNoAcks,  desc=\"Data from directory (acks = 0)\";    DataDirAcks,    desc=\"Data from directory (acks &gt; 0)\";    // Responses from other caches    DataOwner,      desc=\"Data from owner\";    InvAck,         desc=\"Invalidation ack from other cache after Inv\";    // Special event to simplify implementation    LastInvAck,     desc=\"Triggered after the last ack is received\";}User-defined structuresNext, we need to define some structures that we will use in other placesin this controller. The first one we will define is Entry. This is thestructure that is stored in the CacheMemory. 
It only needs to containdata and a state, but it may contain any other data you want. Note: Thestate that this structure is storing is the State type that wasdefined above, not a hardcoded state type.You can find the abstract version of this class (AbstractCacheEntry)in src/mem/ruby/slicc_interface/AbstractCacheEntry.hh. If you want touse any of the member functions of AbstractCacheEntry, you need todeclare them here (this isn’t used in this protocol).structure(Entry, desc=\"Cache entry\", interface=\"AbstractCacheEntry\") {    State CacheState,        desc=\"cache state\";    DataBlock DataBlk,       desc=\"Data in the block\";}Another structure we will need is a TBE. TBE is the “transaction bufferentry”. This stores information needed during transient states. This islike an MSHR. It functions as an MSHR in this protocol, but the entryis also allocated for other uses. In this protocol, it will store thestate (usually needed), data (also usually needed), and the number ofacks that this block is currently waiting for. The AcksOutstanding isused for the transitions where other controllers send acks instead ofthe data.structure(TBE, desc=\"Entry for transient requests\") {    State TBEState,         desc=\"State of block\";    DataBlock DataBlk,      desc=\"Data for the block. Needed for MI_A\";    int AcksOutstanding, default=0, desc=\"Number of acks left to receive.\";}Next, we need a place to store all of the TBEs. This is an externallydefined class; it is defined in C++ outside of SLICC. Therefore, we needto declare that we are going to use it, and also declare any of thefunctions that we will call on it. You can find the code for theTBETable in src/mem/ruby/structures/TBETable.hh. 
It is templatized on the TBE structure defined above, which gets a little confusing, as we will see. structure(TBETable, external=\"yes\") {  TBE lookup(Addr);  void allocate(Addr);  void deallocate(Addr);  bool isPresent(Addr);} The external=\"yes\" tells SLICC not to look for the definition of this structure. This is similar to declaring a variable extern in C/C++. Other declarations and definitions required Finally, we are going to go through some boilerplate of declaring variables, declaring functions in AbstractController that we will use in this controller, and defining abstract functions in AbstractController. First, we need to have a variable that stores a TBE table. We have to do this in SLICC because it is not until this time that we know the true type of the TBE table, since the TBE type was defined above. This is some particularly tricky (or nasty) code to get SLICC to generate the right C++ code. The difficulty is that we want to templatize TBETable based on the TBE type above. The key is that SLICC mangles the names of all types declared in the machine with the machine’s name. For instance, TBE is actually L1Cache_TBE in C++. We also want to pass a parameter to the constructor of the TBETable. This is a parameter that is actually part of the AbstractController; thus we need to use the C++ name for the variable since it doesn’t have a SLICC name. TBETable TBEs, template=\"&lt;L1Cache_TBE&gt;\", constructor=\"m_number_of_TBEs\"; If you can understand the above code, then you are an official SLICC ninja! Next, any functions that are part of AbstractController need to be declared if we are going to use them in the rest of the file. In this case, we are only going to use clockEdge(): Tick clockEdge(); There are a few other functions we’re going to use in actions. These functions are used in actions to set and unset implicit variables available in action code-blocks. Action code blocks will be explained in detail in the action section &lt;MSI-actions-section&gt;. 
These may beneeded when a transition has many actions.void set_cache_entry(AbstractCacheEntry a);void unset_cache_entry();void set_tbe(TBE b);void unset_tbe();Another useful function is mapAddressToMachine. This allows us tochange the address mappings for banked directories or caches at runtimeso we don’t have to hardcode them in the SLICC file.MachineID mapAddressToMachine(Addr addr, MachineType mtype);Finally, you can also add any functions you may want to use in the fileand implement them here. For instance, it is convenient to access cacheblocks by address with a single function. Again, in this function thereis some SLICC trickery. We need to access “by pointer” since the cacheblock is something that we need to be mutable later (“by reference”would have been a better name). The cast is also necessary since wedefined a specific Entry type in the file, but the CacheMemory holdsthe abstract type.// Convenience function to look up the cache entry.// Needs a pointer so it will be a reference and can be updated in actionsEntry getCacheEntry(Addr address), return_by_pointer=\"yes\" {    return static_cast(Entry, \"pointer\", cacheMemory.lookup(address));}The next set of boilerplate code rarely changes between differentprotocols. There’s a set of functions that are pure-virtual inAbstractController that we must implement.  getState  Given a TBE, cache entry, and address return the state of the block.This is called on the block to decide which transition to executewhen an event is triggered. Usually, you return the state in the TBEor cache entry, whichever is valid.  setState  Given a TBE, cache entry, and address make sure the state is setcorrectly on the block. This is called at the end of the transitionto set the final state on the block.  getAccessPermission  Get the access permission of a block. This is used during functionalaccess to decide whether or not to functionally access the block. 
Itis similar to getState, get the information from the TBE if valid,cache entry, if valid, or the block is not present.  setAccessPermission  Like getAccessPermission, but sets the permission.  functionalRead  Functionally read the data. It is possible the TBE has moreup-to-date information, so check that first. Note: testAndRead/Writedefined in src/mem/ruby/slicc_interface/Util.hh  functionalWrite  Functionally write the data. Similarly, you may need to update thedata in both the TBE and the cache entry.State getState(TBE tbe, Entry cache_entry, Addr addr) {    // The TBE state will override the state in cache memory, if valid    if (is_valid(tbe)) { return tbe.TBEState; }    // Next, if the cache entry is valid, it holds the state    else if (is_valid(cache_entry)) { return cache_entry.CacheState; }    // If the block isn't present, then it's state must be I.    else { return State:I; }}void setState(TBE tbe, Entry cache_entry, Addr addr, State state) {  if (is_valid(tbe)) { tbe.TBEState := state; }  if (is_valid(cache_entry)) { cache_entry.CacheState := state; }}AccessPermission getAccessPermission(Addr addr) {    TBE tbe := TBEs[addr];    if(is_valid(tbe)) {        return L1Cache_State_to_permission(tbe.TBEState);    }    Entry cache_entry := getCacheEntry(addr);    if(is_valid(cache_entry)) {        return L1Cache_State_to_permission(cache_entry.CacheState);    }    return AccessPermission:NotPresent;}void setAccessPermission(Entry cache_entry, Addr addr, State state) {    if (is_valid(cache_entry)) {        cache_entry.changePermission(L1Cache_State_to_permission(state));    }}void functionalRead(Addr addr, Packet *pkt) {    TBE tbe := TBEs[addr];    if(is_valid(tbe)) {        testAndRead(addr, tbe.DataBlk, pkt);    } else {        testAndRead(addr, getCacheEntry(addr).DataBlk, pkt);    }}int functionalWrite(Addr addr, Packet *pkt) {    int num_functional_writes := 0;    TBE tbe := TBEs[addr];    if(is_valid(tbe)) {        num_functional_writes := 
num_functional_writes +            testAndWrite(addr, tbe.DataBlk, pkt);        return num_functional_writes;    }    num_functional_writes := num_functional_writes +            testAndWrite(addr, getCacheEntry(addr).DataBlk, pkt);    return num_functional_writes;}",
        "url": "/documentation/learning_gem5/part3/cache-declarations/"
      }
      ,
    
      "documentation-learning-gem5-part3-cache-in-ports": {
        "title": "In port code blocks",
        "content": "In port code blocksAfter declaring all of the structures we need in the state machine file,the first “functional” part of the file are the “in ports”. This sectionspecifies what events to trigger on different incoming messages.However, before we get to the in ports, we must declare our out ports.out_port(request_out, RequestMsg, requestToDir);out_port(response_out, ResponseMsg, responseToDirOrSibling);This code essentially just renames requestToDir andresponseToDirOrSibling to request_out and response_out. Later inthe file, when we want to enqueue messages to these message buffers wewill use the new names request_out and response_out. This alsospecifies the exact implementation of the messages that we will sendacross these ports. We will look at the exact definition of these typesbelow in the file MSI-msg.sm.Next, we create an in port code block. In SLICC, there are many caseswhere there are code blocks that look similar to if blocks, but theyencode specific information. For instance, the code inside anin_port() block is put in a special generated file:L1Cache_Wakeup.cc.All of the in_port code blocks are executed in order (or based on thepriority if it is specified). On each active cycle for the controller,the first in_port code is executed. If it is successful, it isre-executed to see if there are other messages that can be consumed onthe port. If there are no messages or no events are triggered, then thenext in_port code block is executed.There are three different kinds of stalls that can be generated whenexecuting in_port code blocks. First, there is a parameterized limitfor the number of transitions per cycle at each controller. If thislimit is reached (i.e., there are more messages on the message buffersthan the transition per cycle limit), then all in_port will stopprocessing and wait to continue until the next cycle. Second, therecould be a resource stall. This happens if some needed resource isunavailable. 
For instance, if using the BankedArray bandwidth model,the needed bank of the cache may be currently occupied. Third, therecould be a protocol stall. This is a special kind of action thatcauses the state machine to stall until the next cycle.It is important to note that protocol stalls and resource stalls preventall in_port blocks from executing. For instance, if the firstin_port block generates a protocol stall, none of the other ports willbe executed, blocking all messages. This is why it is important to usethe correct number and ordering of virtual networks.Below, is the full code for the in_port block for the highest prioritymessages to our L1 cache controller, the response from directory orother caches. Next we will break the code block down to explain eachsection.in_port(response_in, ResponseMsg, responseFromDirOrSibling) {    if (response_in.isReady(clockEdge())) {        peek(response_in, ResponseMsg) {            Entry cache_entry := getCacheEntry(in_msg.addr);            TBE tbe := TBEs[in_msg.addr];            assert(is_valid(tbe));            if (machineIDToMachineType(in_msg.Sender) ==                        MachineType:Directory) {                if (in_msg.Type != CoherenceResponseType:Data) {                    error(\"Directory should only reply with data\");                }                assert(in_msg.Acks + tbe.AcksOutstanding &gt;= 0);                if (in_msg.Acks + tbe.AcksOutstanding == 0) {                    trigger(Event:DataDirNoAcks, in_msg.addr, cache_entry,                            tbe);                } else {                    trigger(Event:DataDirAcks, in_msg.addr, cache_entry,                            tbe);                }            } else {                if (in_msg.Type == CoherenceResponseType:Data) {                    trigger(Event:DataOwner, in_msg.addr, cache_entry,                            tbe);                } else if (in_msg.Type == CoherenceResponseType:InvAck) {                    DPRINTF(RubySlicc, \"Got 
inv ack. %d left\\n\",                            tbe.AcksOutstanding);                    if (tbe.AcksOutstanding == 1) {                        trigger(Event:LastInvAck, in_msg.addr, cache_entry,                                tbe);                    } else {                        trigger(Event:InvAck, in_msg.addr, cache_entry,                                tbe);                    }                } else {                    error(\"Unexpected response from other cache\");                }            }        }    }}First, like the out_port above “response_in” is the name we’ll uselater when we refer to this port, and “ResponseMsg” is the type ofmessage we expect on this port (since this port processes responses toour requests). The first step in all in_port code blocks is to checkthe message buffer to see if there are any messages to be processed. Ifnot, then this in_port code block is skipped and the next one isexecuted.in_port(response_in, ResponseMsg, responseFromDirOrSibling) {    if (response_in.isReady(clockEdge())) {        . . .    }}Assuming there is a valid message in the message buffer, next, we grabthat message by using the special code block peek. Peek is a specialfunction. Any code inside a peek statement has a special variabledeclared and populated: in_msg. This contains the message (of typeResponseMsg in this case as specified by the second parameter of thepeek call) at the head of the port. Here, response_in is the port wewant to peek into.Then, we need to grab the cache entry and the TBE for the incomingaddress. (We will look at the other parameters in response messagebelow.) Above, we implemented getCacheEntry. It will return either thevalid matching entry for the address, or an invalid entry if there isnot a matching cache block.For the TBE, since this is a response to a request this cache controllerinitiated, there must be a valid TBE in the TBE table. Hence, we seeour first debug statement, an assert. 
This is one of the ways to easedebugging of cache coherence protocols. It is encouraged to use assertsliberally to make debugging easier.peek(response_in, ResponseMsg) {    Entry cache_entry := getCacheEntry(in_msg.addr);    TBE tbe := TBEs[in_msg.addr];    assert(is_valid(tbe));    . . .}Next, we need to decide what event to trigger based on the message. Forthis, we first need to discuss what data response messages are carrying.To declare a new message type, first create a new file for all of themessage types: MSI-msg.sm. In this file, you can declare anystructures that will be globally used across all of the SLICC filesfor your protocol. We will include this file in all of the state machinedefinitions via the MSI.slicc file later. This is similar to includingglobal definitions in header files in C/C++.In the MSI-msg.sm file, add the following code block:structure(ResponseMsg, desc=\"Used for Dir-&gt;Cache and Fwd message responses\",          interface=\"Message\") {    Addr addr,                   desc=\"Physical address for this response\";    CoherenceResponseType Type,  desc=\"Type of response\";    MachineID Sender,            desc=\"Node who is responding to the request\";    NetDest Destination,         desc=\"Multicast destination mask\";    DataBlock DataBlk,           desc=\"data for the cache line\";    MessageSizeType MessageSize, desc=\"size category of the message\";    int Acks,                    desc=\"Number of acks required from others\";    // This must be overridden here to support functional accesses    bool functionalRead(Packet *pkt) {        if (Type == CoherenceResponseType:Data) {            return testAndRead(addr, DataBlk, pkt);        }        return false;    }    bool functionalWrite(Packet *pkt) {        // No check on message type required since the protocol should read        // data block from only those messages that contain valid data        return testAndWrite(addr, DataBlk, pkt);    }}The message is just another SLICC 
structure, similar to the structures we’ve defined before. However, this time, we have a specific interface that it is implementing: Message. Within this message, we can add any members that we need for our protocol. In this case, we first have the address. Note, a common “gotcha” is that you cannot use “Addr” with a capital “A” for the name of the member since it is the same name as the type! Next, we have the type of response. In our case, there are two types of response: data, and invalidation acks from other caches after they have invalidated their copy. Thus, we need to define an enumeration, the CoherenceResponseType, to use it in this message. Add the following code before the ResponseMsg declaration in the same file. enumeration(CoherenceResponseType, desc=\"Types of response messages\") {    Data,       desc=\"Contains the most up-to-date data\";    InvAck,     desc=\"Message from another cache that they have inv. the blk\";} Next, in the response message type, we have the MachineID which sent the response. MachineID is the specific machine that sent the response. For instance, it might be directory 0 or cache 12. The MachineID contains both the MachineType (e.g., we have been creating an L1Cache as declared in the first machine()) and the specific version of that machine type. We will come back to machine version numbers when configuring the system. Next, all messages need a destination and a size. The destination is specified as a NetDest, which is a bitmap of all the MachineIDs in the system. This allows messages to be broadcast to a flexible set of receivers. The message also has a size. You can find the possible message sizes in src/mem/protocol/RubySlicc_Exports.sm. This message may also contain a data block and the number of acks that are expected. Thus, we can include these in the message definition as well. Finally, we also have to define functional read and write functions. These are used by Ruby to inspect in-flight messages on functional reads and writes. 
Note: This functionality currently is very brittle and ifthere are messages in-flight for an address that is functionally read orwritten the functional access may fail.You can download the complete MSI-msg.sm file here.Now that we have defined the data in the response message, we can lookat how we choose which action to trigger in the in_port for responseto the cache.// If it's from the directory...if (machineIDToMachineType(in_msg.Sender) ==            MachineType:Directory) {    if (in_msg.Type != CoherenceResponseType:Data) {        error(\"Directory should only reply with data\");    }    assert(in_msg.Acks + tbe.AcksOutstanding &gt;= 0);    if (in_msg.Acks + tbe.AcksOutstanding == 0) {        trigger(Event:DataDirNoAcks, in_msg.addr, cache_entry,                tbe);    } else {        trigger(Event:DataDirAcks, in_msg.addr, cache_entry,                tbe);    }} else {    // This is from another cache.    if (in_msg.Type == CoherenceResponseType:Data) {        trigger(Event:DataOwner, in_msg.addr, cache_entry,                tbe);    } else if (in_msg.Type == CoherenceResponseType:InvAck) {        DPRINTF(RubySlicc, \"Got inv ack. %d left\\n\",                tbe.AcksOutstanding);        if (tbe.AcksOutstanding == 1) {            // If there is exactly one ack remaining then we            // know it is the last ack.            trigger(Event:LastInvAck, in_msg.addr, cache_entry,                    tbe);        } else {            trigger(Event:InvAck, in_msg.addr, cache_entry,                    tbe);        }    } else {        error(\"Unexpected response from other cache\");    }}First, we check to see if the message comes from the directory oranother cache. If it comes from the directory, we know that it must bea data response (the directory will never respond with an ack).Here, we meet our second way to add debug information to protocols: theerror function. 
This function breaks simulation and prints out thestring parameter similar to panic.Next, when we receive data from the directory, we expect that the numberof acks we are waiting for will never be less than 0. The number of ackswe’re waiting for is the current acks we have received(tbe.AcksOutstanding) and the number of acks the directory has told usto be waiting for. We need to check it this way because it is possiblethat we have received acks from other caches before we get the messagefrom the directory that we need to wait for acks.There are two possibilities for the acks, either we have alreadyreceived all of the acks and now we are getting the data (data from diracks==0 in Table 8.3), or we need to wait for more acks. Thus, we checkthis condition and trigger two different events, one for eachpossibility.When triggering transitions, you need to pass four parameters. The firstparameter is the event to trigger. These events were specified earlierin the Event declaration. The next parameter is the (physical memory)address of the cache block to operate on. Usually this is the same asthe address of the in_msg, but it may be different, for instance, on areplacement the address is for the block being replaced. Next is thecache entry and the TBE for the block. These may be invalid if there areno valid entries for the address in the cache or there is not a validTBE in the TBE table.When we implement actions below, we will see how these last threeparameters are used. They are passed into the actions as implicitvariables: address, cache_entry, and tbe.If the trigger function is executed, after the transition is complete,the in_port logic is executed again, assuming there have been fewertransitions than that maximum transitions per cycle. If there are othermessages in the message buffer more transitions can be triggered.If the response is from another cache instead of the directory, thenother events are triggered, as shown in the code above. 
These eventscome directly from Table 8.3 in Sorin et al.Importantly, you should use the in_port logic to check all conditions.After an event is triggered, it should only have a single code path.I.e., there should be no if statements in any action blocks. If youwant to conditionally execute actions, you should use different statesor different events in the in_port logic.The reason for this constraint is the way Ruby checks resources beforeexecuting a transition. In the generated code from the in_port blocksbefore the transition is actually executed all of the resources arechecked. In other words, transitions are atomic and either execute allof the actions or none. Conditional statements inside the actionsprevents the SLICC compiler from correctly tracking the resource usageand can lead to strange performance, deadlocks, and other bugs.After specifying the in_port logic for the highest priority network,the response network, we need to add the in_port logic for the forwardrequest network. However, before specifying this logic, we need todefine the RequestMsg type and the CoherenceRequestType whichcontains the types of requests. 
These two definitions go in theMSI-msg.sm file not in MSI-cache.sm since they are globaldefinitions.It is possible to implement this as two different messages and requesttype enumerations, one for forward and one for normal requests, but itsimplifies the code to use a single message and type.enumeration(CoherenceRequestType, desc=\"Types of request messages\") {    GetS,       desc=\"Request from cache for a block with read permission\";    GetM,       desc=\"Request from cache for a block with write permission\";    PutS,       desc=\"Sent to directory when evicting a block in S (clean WB)\";    PutM,       desc=\"Sent to directory when evicting a block in M\";    // \"Requests\" from the directory to the caches on the fwd network    Inv,        desc=\"Probe the cache and invalidate any matching blocks\";    PutAck,     desc=\"The put request has been processed.\";}structure(RequestMsg, desc=\"Used for Cache-&gt;Dir and Fwd messages\",  interface=\"Message\") {    Addr addr,                   desc=\"Physical address for this request\";    CoherenceRequestType Type,   desc=\"Type of request\";    MachineID Requestor,         desc=\"Node who initiated the request\";    NetDest Destination,         desc=\"Multicast destination mask\";    DataBlock DataBlk,           desc=\"data for the cache line\";    MessageSizeType MessageSize, desc=\"size category of the message\";    bool functionalRead(Packet *pkt) {        // Requests should never have the only copy of the most up-to-date data        return false;    }    bool functionalWrite(Packet *pkt) {        // No check on message type required since the protocol should read        // data block from only those messages that contain valid data        return testAndWrite(addr, DataBlk, pkt);    }}Now, we can specify the logic for the forward network in_port. 
Thislogic is straightforward and triggers a different event for each requesttype.in_port(forward_in, RequestMsg, forwardFromDir) {    if (forward_in.isReady(clockEdge())) {        peek(forward_in, RequestMsg) {            // Grab the entry and tbe if they exist.            Entry cache_entry := getCacheEntry(in_msg.addr);            TBE tbe := TBEs[in_msg.addr];            if (in_msg.Type == CoherenceRequestType:GetS) {                trigger(Event:FwdGetS, in_msg.addr, cache_entry, tbe);            } else if (in_msg.Type == CoherenceRequestType:GetM) {                trigger(Event:FwdGetM, in_msg.addr, cache_entry, tbe);            } else if (in_msg.Type == CoherenceRequestType:Inv) {                trigger(Event:Inv, in_msg.addr, cache_entry, tbe);            } else if (in_msg.Type == CoherenceRequestType:PutAck) {                trigger(Event:PutAck, in_msg.addr, cache_entry, tbe);            } else {                error(\"Unexpected forward message!\");            }        }    }}The final in_port is for the mandatory queue. This is the lowestpriority queue, so it must be lowest in the state machine file. Themandatory queue has a special message type: RubyRequest. This type isspecified in src/mem/protocol/RubySlicc_Types.sm It contains twodifferent addresses, the LineAddress which is cache-block aligned andthe PhysicalAddress which holds the original request’s address and maynot be cache-block aligned. It also has other members that may be usefulin some protocols. 
However, for this simple protocol we only need theLineAddress.in_port(mandatory_in, RubyRequest, mandatoryQueue) {    if (mandatory_in.isReady(clockEdge())) {        peek(mandatory_in, RubyRequest, block_on=\"LineAddress\") {            Entry cache_entry := getCacheEntry(in_msg.LineAddress);            TBE tbe := TBEs[in_msg.LineAddress];            if (is_invalid(cache_entry) &amp;&amp;                    cacheMemory.cacheAvail(in_msg.LineAddress) == false ) {                Addr addr := cacheMemory.cacheProbe(in_msg.LineAddress);                Entry victim_entry := getCacheEntry(addr);                TBE victim_tbe := TBEs[addr];                trigger(Event:Replacement, addr, victim_entry, victim_tbe);            } else {                if (in_msg.Type == RubyRequestType:LD ||                        in_msg.Type == RubyRequestType:IFETCH) {                    trigger(Event:Load, in_msg.LineAddress, cache_entry,                            tbe);                } else if (in_msg.Type == RubyRequestType:ST) {                    trigger(Event:Store, in_msg.LineAddress, cache_entry,                            tbe);                } else {                    error(\"Unexpected type from processor\");                }            }        }    }}There are a couple of new concepts shown in this code block. First, weuse block_on=\"LineAddress\" in the peek function. What this does isensure that any other requests to the same cache line will be blockeduntil the current request is complete.Next, we check if the cache entry for this line is valid. If not, andthere are no more entries available in the set, then we need to evictanother entry. To get the victim address, we can use the cacheProbefunction on the CacheMemory object. This function uses theparameterized replacement policy and returns the physical (line) addressof the victim.Importantly, when we trigger the Replacement event we use the addressof the victim block and the victim cache entry and tbe. 
Thus, when we take actions in the replacement transitions we will be acting on the victim block, not the requesting block. Additionally, we need to remember not to remove the requesting message from the mandatory queue (pop) until it has been satisfied. The message should not be popped after the replacement is complete. If the cache block was found to be valid, then we simply trigger the Load or Store event.",
        "url": "/documentation/learning_gem5/part3/cache-in-ports/"
      }
      ,
    
      "documentation-learning-gem5-part3-cache-intro": {
        "title": "MSI example cache protocol",
        "content": "MSI example cache protocolBefore we implement a cache coherence protocol, it is important to havea solid understanding of cache coherence. This section leans heavily onthe great book A Primer on Memory Consistency and Cache Coherence byDaniel J. Sorin, Mark D. Hill, and David A. Wood which was published aspart of the Synthesis Lectures on Computer Architecture in 2011(DOI:10.2200/S00346ED1V01Y201104CAC016).If you are unfamiliar with cache coherence, I strongly advise reading that book before continuing.In this chapter, we will be implementing an MSI protocol.(An MSI protocol has three stable states, modified with read-write permission, shared with read-only permission, and invalid with no permissions.)We will implement this as a three-hop directory protocol (i.e., caches can send data directly to other caches without going through the directory).Details for the protocol can be found in Section 8.2 of A Primer on Memory Consistency and Cache Coherence (pages 141-149).It will be helpful to print out Section 8.2 to reference as you are implementing the protocol.You can download an exceprt of Sorin et al. that contains Section 8.2 here.First steps to writing a protocolLet’s start by creating a new directory for our protocol at src/learning_gem5/MSI_protocol.In this directory, like in all gem5 source directories, we need to create a file for SCons to know what to compile.However, this time, instead of creating a SConscript file, we aregoing to create a SConsopts file. (The SConsopts files are processedbefore the SConscript files and we need to run the SLICC compilerbefore SCons executes.)We need to create a SConsopts file with the following:Import('*')all_protocols.extend(['MSI',])protocol_dirs.append(str(Dir('.').abspath))We do two things in this file. First, we register the name of ourprotocol ('MSI'). 
Since we have named our protocol MSI, SCons will assume that there is a file named MSI.slicc which specifies all of the state machine files and auxiliary files. We will create that file after writing all of our state machine files. Second, the SConsopts file tells SCons to look in the current directory for files to pass to the SLICC compiler. You can download the SConsopts file here. Writing a state machine file The next step, and most of the effort in writing a protocol, is to create the state machine files. State machine files generally follow this outline:  Parameters  These are the parameters for the SimObject that will be generated from the SLICC code.  Declaring required structures and functions  This section declares the states, events, and many other required structures for the state machine.  In port code blocks  Contain code that looks at incoming messages from the (in_port) message buffers and determines what events to trigger.  Actions  These are simple one-effect code blocks (e.g., send a message) that are executed when going through a transition.  Transitions  Specify actions to execute given a starting state, an event, and the final state. This is the meat of the state machine definition.",
        "url": "/documentation/learning_gem5/part3/cache-intro/"
      }
      ,
    
      "documentation-learning-gem5-part3-cache-transitions": {
        "title": "Transition code blocks",
        "content": "Transition code blocksFinally, we’ve reached the final section of the state machine file! Thissection contains the details for all of the transitions between statesand what actions to execute during the transition.So far in this chapter we have written the state machine top to bottomone section at a time. However, in most cache coherence implementationsyou will find that you need to move around between sections. Forinstance, when writing the transitions you will realize you forgot toadd an action, or you notice that you actually need another transientstate to implement the protocol. This is the normal way to writeprotocols, but for simplicity this chapter goes through the file top tobottom.Transition blocks consist of two parts. First, the first line of atransition block contains the begin state, event to transition on, andend state (the end state may not be required, as we will discuss below).Second, the transition block contains all of the actions to execute onthis transition. For instance, a simple transition in the MSI protocolis transitioning out of Invalid on a Load.transition(I, Load, IS_D) {    allocateCacheBlock;    allocateTBE;    sendGetS;    popMandatoryQueue;}First, you specify the transition as the “parameters” to thetransition statement. In this case, if the initial state is I andthe event is Load then transition to IS_D (was invalid, going toshared, waiting for data). This transition is straight out of Table 8.3in Sorin et al.Then, inside the transition code block, all of the actions that willexecute are listed in order. For this transition first we allocate thecache block. Remember that in the allocateCacheBlock action the newlyallocated entry is set to the entry that will be used in the rest of theactions. After allocating the cache block, we also allocate a TBE. Thiscould be used if we need to wait for acks from other caches. 
Next, wesend a GetS request to the directory, and finally we pop the head entryoff of the mandatory queue since we have fully handled it.transition(IS_D, {Load, Store, Replacement, Inv}) {    stall;}In this transition, we use slightly different syntax. According to Table8.3 from Sorin et al., we should stall if the cache is in IS_D onloads, stores, replacements, and invalidates. We can specify a singletransition statement for this by including multiple events in curlybrackets as above. Additionally, the final state isn’t required. If thefinal state isn’t specified, then the transition is executed and thestate is not updated (i.e., the block stays in its beginning state). Youcan read the above transition as “If the cache block is in state IS_Dand there is a load, store, replacement, or invalidate stall theprotocol and do not transition out of the state.” You can also use curlybrackets for beginning states, as shown in some of the transitionsbelow.Below is the rest of the transitions needed to implement the L1 cachefrom the MSI protocol.transition(IS_D, {DataDirNoAcks, DataOwner}, S) {    writeDataToCache;    deallocateTBE;    externalLoadHit;    popResponseQueue;}transition({IM_AD, IM_A}, {Load, Store, Replacement, FwdGetS, FwdGetM}) {    stall;}transition({IM_AD, SM_AD}, {DataDirNoAcks, DataOwner}, M) {    writeDataToCache;    deallocateTBE;    externalStoreHit;    popResponseQueue;}transition(IM_AD, DataDirAcks, IM_A) {    writeDataToCache;    storeAcks;    popResponseQueue;}transition({IM_AD, IM_A, SM_AD, SM_A}, InvAck) {    decrAcks;    popResponseQueue;}transition({IM_A, SM_A}, LastInvAck, M) {    deallocateTBE;    externalStoreHit;    popResponseQueue;}transition({S, SM_AD, SM_A, M}, Load) {    loadHit;    popMandatoryQueue;}transition(S, Store, SM_AD) {    allocateTBE;    sendGetM;    popMandatoryQueue;}transition(S, Replacement, SI_A) {    sendPutS;    forwardEviction;}transition(S, Inv, I) {    sendInvAcktoReq;    deallocateCacheBlock;    forwardEviction;   
 popForwardQueue;}transition({SM_AD, SM_A}, {Store, Replacement, FwdGetS, FwdGetM}) {    stall;}transition(SM_AD, Inv, IM_AD) {    sendInvAcktoReq;    forwardEviction;    popForwardQueue;}transition(SM_AD, DataDirAcks, SM_A) {    writeDataToCache;    storeAcks;    popResponseQueue;}transition(M, Store) {    storeHit;    popMandatoryQueue;}transition(M, Replacement, MI_A) {    sendPutM;    forwardEviction;}transition(M, FwdGetS, S) {    sendCacheDataToReq;    sendCacheDataToDir;    popForwardQueue;}transition(M, FwdGetM, I) {    sendCacheDataToReq;    deallocateCacheBlock;    popForwardQueue;}transition({MI_A, SI_A, II_A}, {Load, Store, Replacement}) {    stall;}transition(MI_A, FwdGetS, SI_A) {    sendCacheDataToReq;    sendCacheDataToDir;    popForwardQueue;}transition(MI_A, FwdGetM, II_A) {    sendCacheDataToReq;    popForwardQueue;}transition({MI_A, SI_A, II_A}, PutAck, I) {    deallocateCacheBlock;    popForwardQueue;}transition(SI_A, Inv, II_A) {    sendInvAcktoReq;    popForwardQueue;}You can download the complete MSI-cache.sm filehere.",
        "url": "/documentation/learning_gem5/part3/cache-transitions/"
      }
      ,
    
      "documentation-learning-gem5-part3-configuration": {
        "title": "Configuring a simple Ruby system",
        "content": "Configuring a simple Ruby systemFirst, create a new configuration directory in configs/. Just like allgem5 configuration files, we will have a configuration run script. Forthe run script, we can start with simple.py fromsimple-config-chapter. Copy this file to simple_ruby.py in your newdirectory.We will make a couple of small changes to this file to use Ruby insteadof directly connecting the CPU to the memory controllers.First, so we can test our coherence protocol, let’s use two CPUs.system.cpu = [TimingSimpleCPU(), TimingSimpleCPU()]Next, after the memory controllers have been instantiated, we are goingto create the cache system and set up all of the caches. Add thefollowing lines after the CPU interrupts have been created, but beforeinstantiating the system.system.caches = MyCacheSystem()system.caches.setup(system, system.cpu, [system.mem_ctrl])Like the classic cache example in cache-config-chapter, we are going tocreate a second file that contains the cache configuration code. In thisfile we are going to have a class called MyCacheSystem and we willcreate a setup function that takes as parameters the CPUs in thesystem and the memory controllers.You can download the complete run scripthere.Cache system configurationNow, let’s create a file msi_caches.py. In this file, we will createfour classes: MyCacheSystem which will inherit from RubySystem,L1Cache and Directory which will inherit from the SimObjects createdby SLICC from our two state machines, and MyNetwork which will inheritfrom SimpleNetwork.L1 CacheLet’s start with the L1Cache. First, we will inherit fromL1Cache_Controller since we named our L1 cache “L1Cache” in the statemachine file. We also include a special class variable and class methodfor tracking the “version number”. For each SLICC state machine, youhave to number them in ascending order from 0. Each machine of the sametype should have a unique version number. This is used to differentiatethe individual machines. 
(Hopefully, in the future this requirement will be removed.) class L1Cache(L1Cache_Controller):    _version = 0    @classmethod    def versionCount(cls):        cls._version += 1 # Use count for this particular type        return cls._version - 1 Next, we implement the constructor for the class. def __init__(self, system, ruby_system, cpu):    super(L1Cache, self).__init__()    self.version = self.versionCount()    self.cacheMemory = RubyCache(size = '16kB',                           assoc = 8,                           start_index_bit = self.getBlockSizeBits(system))    self.clk_domain = cpu.clk_domain    self.send_evictions = self.sendEvicts(cpu)    self.ruby_system = ruby_system    self.connectQueues(ruby_system) We need the CPUs in this function to grab the clock domain, and system is needed for the cache block size. Here, we set all of the parameters that we named in the state machine file (e.g., cacheMemory). We will set sequencer later. We also hardcode the size and associativity of the cache. You could add command line parameters for these options, if it is important to vary them at runtime. Next, we implement a couple of helper functions. First, we need to figure out how many bits of the address to use for indexing into the cache, which is a simple log operation. We also need to decide whether to send eviction notices to the CPU. We should forward evictions only if we are using the out-of-order CPU or if the ISA is x86 or ARM. def getBlockSizeBits(self, system):    bits = int(math.log(system.cache_line_size, 2))    if 2**bits != system.cache_line_size.value:        panic(\"Cache line size not a power of 2!\")    return bits def sendEvicts(self, cpu):    \"\"\"True if the CPU model or ISA requires sending evictions from caches       to the CPU. Three scenarios warrant forwarding evictions to the CPU:       1. The O3 model must keep the LSQ coherent with the caches       2. The x86 mwait instruction is built on top of coherence       3. 
The local exclusive monitor in ARM systems    \"\"\"    if type(cpu) is DerivO3CPU or \\       buildEnv['TARGET_ISA'] in ('x86', 'arm'):        return True    return False Finally, we need to implement connectQueues to connect all of the message buffers to the Ruby network. First, we create a message buffer for the mandatory queue. Since this is an L1 cache and it will have a sequencer, we need to instantiate this special message buffer. Next, we instantiate a message buffer for each buffer in the controller. For all of the “to” buffers we must set the “master” to the network (i.e., the buffer will send messages into the network), and for all of the “from” buffers we must set the “slave” to the network. These names are the same as the gem5 ports, but message buffers are not currently implemented as gem5 ports. In this protocol, we are assuming the message buffers are ordered for simplicity. def connectQueues(self, ruby_system):    self.mandatoryQueue = MessageBuffer()    self.requestToDir = MessageBuffer(ordered = True)    self.requestToDir.master = ruby_system.network.slave    self.responseToDirOrSibling = MessageBuffer(ordered = True)    self.responseToDirOrSibling.master = ruby_system.network.slave    self.forwardFromDir = MessageBuffer(ordered = True)    self.forwardFromDir.slave = ruby_system.network.master    self.responseFromDirOrSibling = MessageBuffer(ordered = True)    self.responseFromDirOrSibling.slave = ruby_system.network.master Directory Now, we can similarly implement the directory. There are three differences from the L1 cache. First, we need to set the address ranges for the directory. Since each directory (possibly) corresponds to a particular memory controller for a subset of the address range, we need to make sure the ranges match. The default address range for Ruby controllers is AllMemory. Next, we need to set the master port memory. This is the port that sends messages when queueMemoryRead/Write is called in the SLICC code. We set it to the memory controller port. 
Similarly, in connectQueues we need to instantiate the special message buffer responseFromMemory, like the mandatoryQueue in the L1 cache. class DirController(Directory_Controller):    _version = 0    @classmethod    def versionCount(cls):        cls._version += 1 # Use count for this particular type        return cls._version - 1    def __init__(self, ruby_system, ranges, mem_ctrls):        \"\"\"ranges are the memory ranges assigned to this controller.        \"\"\"        if len(mem_ctrls) &gt; 1:            panic(\"This cache system can only be connected to one mem ctrl\")        super(DirController, self).__init__()        self.version = self.versionCount()        self.addr_ranges = ranges        self.ruby_system = ruby_system        self.directory = RubyDirectoryMemory()        # Connect this directory to the memory side.        self.memory = mem_ctrls[0].port        self.connectQueues(ruby_system)    def connectQueues(self, ruby_system):        self.requestFromCache = MessageBuffer(ordered = True)        self.requestFromCache.slave = ruby_system.network.master        self.responseFromCache = MessageBuffer(ordered = True)        self.responseFromCache.slave = ruby_system.network.master        self.responseToCache = MessageBuffer(ordered = True)        self.responseToCache.master = ruby_system.network.slave        self.forwardToCache = MessageBuffer(ordered = True)        self.forwardToCache.master = ruby_system.network.slave        self.responseFromMemory = MessageBuffer() Ruby System Now, we can implement the Ruby system object. For this object, the constructor is simple. It just checks the SCons variable PROTOCOL to be sure that we are using the right configuration file for the protocol that was compiled. We cannot create the controllers in the constructor because they require a pointer to this object. 
If we were to create them in the constructor, there would be a circular dependence in the SimObject hierarchy, which would cause infinite recursion when the system is instantiated with m5.instantiate. class MyCacheSystem(RubySystem):    def __init__(self):        if buildEnv['PROTOCOL'] != 'MSI':            fatal(\"This system assumes MSI from learning gem5!\")        super(MyCacheSystem, self).__init__() Instead of creating the controllers in the constructor, we create a new function to create all of the needed objects: setup. First, we create the network. We will look at this object next. With the network, we need to set the number of virtual networks in the system. Next, we instantiate all of the controllers. Here, we use a single global list of the controllers to make it easier to connect them to the network later. However, for more complicated cache topologies, it can make sense to use multiple lists of controllers. We create one L1 cache for each CPU and one directory for the system. Then, we instantiate all of the sequencers, one for each CPU. Each sequencer needs a pointer to the instruction and data cache to simulate the correct latency when initially accessing the cache. In more complicated systems, you also have to create sequencers for other objects like DMA controllers. After creating the sequencers, we set the sequencer variable on each L1 cache controller. Then, we connect all of the controllers to the network and call the setup_buffers function on the network. We then have to set the “port proxy” for both the Ruby system and the system for making functional accesses (e.g., loading the binary in SE mode). Finally, we connect all of the CPUs to the Ruby system. In this example, we assume that there are only CPU sequencers, so the first CPU is connected to the first sequencer, and so on. 
We also have to connect theTLBs and interrupt ports (if we are using x86).def setup(self, system, cpus, mem_ctrls):    self.network = MyNetwork(self)    self.number_of_virtual_networks = 3    self.network.number_of_virtual_networks = 3    self.controllers = \\        [L1Cache(system, self, cpu) for cpu in cpus] + \\        [DirController(self, system.mem_ranges, mem_ctrls)]    self.sequencers = [RubySequencer(version = i,                            # I/D cache is combined and grab from ctrl                            icache = self.controllers[i].cacheMemory,                            dcache = self.controllers[i].cacheMemory,                            clk_domain = self.controllers[i].clk_domain,                            ) for i in range(len(cpus))]    for i,c in enumerate(self.controllers[0:len(self.sequencers)]):        c.sequencer = self.sequencers[i]    self.num_of_sequencers = len(self.sequencers)    self.network.connectControllers(self.controllers)    self.network.setup_buffers()    self.sys_port_proxy = RubyPortProxy()    system.system_port = self.sys_port_proxy.slave    for i,cpu in enumerate(cpus):        cpu.icache_port = self.sequencers[i].slave        cpu.dcache_port = self.sequencers[i].slave        isa = buildEnv['TARGET_ISA']        if isa == 'x86':            cpu.interrupts[0].pio = self.sequencers[i].master            cpu.interrupts[0].int_master = self.sequencers[i].slave            cpu.interrupts[0].int_slave = self.sequencers[i].master        if isa == 'x86' or isa == 'arm':            cpu.itb.walker.port = self.sequencers[i].slave            cpu.dtb.walker.port = self.sequencers[i].slaveNetworkFinally, the last object we have to implement is the network. Theconstructor is simple, but we need to declare an empty list for the listof network interfaces (netifs).Most of the code is in connectControllers. This function implements avery simple, unrealistic point-to-point network. 
In other words, everycontroller has a direct link to every other controller.The Ruby network is made of three parts: routers that route data fromone router to another or to external controllers, external links thatlink a controller to a router, and internal links that link two routerstogether. First, we create a router for each controller. Then, we createan external link from that router to the controller. Finally, we add allof the “internal” links. Each router is connected to all other routersto make the point-to-point network.class MyNetwork(SimpleNetwork):    def __init__(self, ruby_system):        super(MyNetwork, self).__init__()        self.netifs = []        self.ruby_system = ruby_system    def connectControllers(self, controllers):        self.routers = [Switch(router_id = i) for i in range(len(controllers))]        self.ext_links = [SimpleExtLink(link_id=i, ext_node=c,                                        int_node=self.routers[i])                          for i, c in enumerate(controllers)]        link_count = 0        self.int_links = []        for ri in self.routers:            for rj in self.routers:                if ri == rj: continue # Don't connect a router to itself!                link_count += 1                self.int_links.append(SimpleIntLink(link_id = link_count,                                                    src_node = ri,                                                    dst_node = rj))You can download the complete msi_caches.py filehere.",
        "url": "/documentation/learning_gem5/part3/configuration/"
      }
      ,
    
      "documentation-learning-gem5-part3-directory": {
        "title": "MSI Directory implementation",
        "content": "MSI Directory implementationImplementing a directory controller is very similar to the L1 cachecontroller, except using a different state machine table. The statemachine fore the directory can be found in Table 8.2 in Sorin et al.Since things are mostly similar to the L1 cache, this section mostlyjust discusses a few more SLICC details and a few differences betweendirectory controllers and cache controllers. Let’s dive straight in andstart modifying a new file MSI-dir.sm.machine(MachineType:Directory, \"Directory protocol\"):  DirectoryMemory * directory;  Cycles toMemLatency := 1;MessageBuffer *forwardToCache, network=\"To\", virtual_network=\"1\",      vnet_type=\"forward\";MessageBuffer *responseToCache, network=\"To\", virtual_network=\"2\",      vnet_type=\"response\";MessageBuffer *requestFromCache, network=\"From\", virtual_network=\"0\",      vnet_type=\"request\";MessageBuffer *responseFromCache, network=\"From\", virtual_network=\"2\",      vnet_type=\"response\";MessageBuffer *responseFromMemory;{. . .}First, there are two parameter to this directory controller,DirectoryMemory and a toMemLatency. The DirectoryMemory is alittle weird. It is allocated at initialization time such that it cancover all of physical memory, like a complete directory not adirectory cache. I.e., there are pointers in the DirectoryMemoryobject for every 64-byte block in physical memory. However, the actualentries (as defined below) are lazily created via getDirEntry(). We’llsee more details about DirectoryMemory below.Next, is the toMemLatency parameter. This will be used in theenqueue function when enqueuing requests to model the directorylatency. We didn’t use a parameter for this in the L1 cache, but it issimple to make the controller latency parameterized. This parameterdefaults to 1 cycle. It is not required to set a default here. 
The default is propagated to the generated SimObject description file as the default for the SimObject parameter. Next, we have the message buffers for the directory. Importantly, these need to have the same virtual network numbers as the message buffers in the L1 cache. These virtual network numbers are how the Ruby network directs messages between controllers. There is also one more special message buffer: responseFromMemory. This is similar to the mandatoryQueue, except instead of being like a slave port for CPUs it is like a master port. The responseFromMemory buffer will deliver responses sent across the memory port, as we will see below in the action section. After the parameters and message buffers, we need to declare all of the states, events, and other local structures. state_declaration(State, desc=\"Directory states\",                  default=\"Directory_State_I\") {    // Stable states.    // NOTE: These are \"cache-centric\" states like in Sorin et al.    // However, the access permissions are memory-centric.    
I, AccessPermission:Read_Write,  desc=\"Invalid in the caches.\";    S, AccessPermission:Read_Only,   desc=\"At least one cache has the blk\";    M, AccessPermission:Invalid,     desc=\"A cache has the block in M\";    // Transient states    S_D, AccessPermission:Busy,      desc=\"Moving to S, but need data\";    // Waiting for data from memory    S_m, AccessPermission:Read_Write, desc=\"In S waiting for mem\";    M_m, AccessPermission:Read_Write, desc=\"Moving to M waiting for mem\";    // Waiting for write-ack from memory    MI_m, AccessPermission:Busy,       desc=\"Moving to I waiting for ack\";    SS_m, AccessPermission:Busy,       desc=\"Moving to I waiting for ack\";}enumeration(Event, desc=\"Directory events\") {    // Data requests from the cache    GetS,         desc=\"Request for read-only data from cache\";    GetM,         desc=\"Request for read-write data from cache\";    // Writeback requests from the cache    PutSNotLast,  desc=\"PutS and the block has other sharers\";    PutSLast,     desc=\"PutS and the block has no other sharers\";    PutMOwner,    desc=\"Dirty data writeback from the owner\";    PutMNonOwner, desc=\"Dirty data writeback from non-owner\";    // Cache responses    Data,         desc=\"Response to fwd request with data\";    // From Memory    MemData,      desc=\"Data from memory\";    MemAck,       desc=\"Ack from memory that write is complete\";}structure(Entry, desc=\"...\", interface=\"AbstractEntry\") {    State DirState,         desc=\"Directory state\";    NetDest Sharers,        desc=\"Sharers for this block\";    NetDest Owner,          desc=\"Owner of this block\";}In the state_declaration we define a default. For many things in SLICCyou can specify a default. However, this default must use the C++ name(mangled SLICC name). For the state below you have to use the controllername and the name we use for states. 
In this case, since the name of the machine is “Directory”, the name for “I” is “Directory”+”State” (for the name of the structure)+”I”. Note that the permissions in the directory are “memory-centric”, whereas all of the states are cache-centric, as in Sorin et al. In the Entry definition for the directory, we use a NetDest for both the sharers and the owner. This makes sense for the sharers, since we want a full bitvector for all L1 caches that may be sharing the block. The reason we also use a NetDest for the owner is to simply copy the structure into the message we send as a response, as shown below. In this implementation, we use a few more transient states than in Table 8.2 in Sorin et al. to deal with the fact that the memory latency is unknown. In Sorin et al., the authors assume that the directory state and memory data are stored together in main memory to simplify the protocol. Similarly, we also include new actions: the responses from memory. Next, we have the functions that need to be overridden and declared. The function getDirectoryEntry either returns the valid directory entry, or, if it hasn’t been allocated yet, allocates the entry. Implementing it this way may save some host memory since this is lazily populated. Tick clockEdge(); Entry getDirectoryEntry(Addr addr), return_by_pointer = \"yes\" {    Entry dir_entry := static_cast(Entry, \"pointer\", directory[addr]);    if (is_invalid(dir_entry)) {        // The first time we see this address, allocate an entry for it.        
dir_entry := static_cast(Entry, \"pointer\",                                 directory.allocate(addr, new Entry));    }    return dir_entry;}State getState(Addr addr) {    if (directory.isPresent(addr)) {        return getDirectoryEntry(addr).DirState;    } else {        return State:I;    }}void setState(Addr addr, State state) {    if (directory.isPresent(addr)) {        if (state == State:M) {            DPRINTF(RubySlicc, \"Owner %s\\n\", getDirectoryEntry(addr).Owner);            assert(getDirectoryEntry(addr).Owner.count() == 1);            assert(getDirectoryEntry(addr).Sharers.count() == 0);        }        getDirectoryEntry(addr).DirState := state;        if (state == State:I)  {            assert(getDirectoryEntry(addr).Owner.count() == 0);            assert(getDirectoryEntry(addr).Sharers.count() == 0);        }    }}AccessPermission getAccessPermission(Addr addr) {    if (directory.isPresent(addr)) {        Entry e := getDirectoryEntry(addr);        return Directory_State_to_permission(e.DirState);    } else  {        return AccessPermission:NotPresent;    }}void setAccessPermission(Addr addr, State state) {    if (directory.isPresent(addr)) {        Entry e := getDirectoryEntry(addr);        e.changePermission(Directory_State_to_permission(state));    }}void functionalRead(Addr addr, Packet *pkt) {    functionalMemoryRead(pkt);}int functionalWrite(Addr addr, Packet *pkt) {    if (functionalMemoryWrite(pkt)) {        return 1;    } else {        return 0;    }Next, we need to implement the ports for the cache. First we specify theout_port and then the in_port code blocks. The only differencebetween the in_port in the directory and in the L1 cache is that thedirectory does not have a TBE or cache entry. 
Thus, we do not passeither into the trigger function.out_port(forward_out, RequestMsg, forwardToCache);out_port(response_out, ResponseMsg, responseToCache);in_port(memQueue_in, MemoryMsg, responseFromMemory) {    if (memQueue_in.isReady(clockEdge())) {        peek(memQueue_in, MemoryMsg) {            if (in_msg.Type == MemoryRequestType:MEMORY_READ) {                trigger(Event:MemData, in_msg.addr);            } else if (in_msg.Type == MemoryRequestType:MEMORY_WB) {                trigger(Event:MemAck, in_msg.addr);            } else {                error(\"Invalid message\");            }        }    }}in_port(response_in, ResponseMsg, responseFromCache) {    if (response_in.isReady(clockEdge())) {        peek(response_in, ResponseMsg) {            if (in_msg.Type == CoherenceResponseType:Data) {                trigger(Event:Data, in_msg.addr);            } else {                error(\"Unexpected message type.\");            }        }    }}in_port(request_in, RequestMsg, requestFromCache) {    if (request_in.isReady(clockEdge())) {        peek(request_in, RequestMsg) {            Entry e := getDirectoryEntry(in_msg.addr);            if (in_msg.Type == CoherenceRequestType:GetS) {                trigger(Event:GetS, in_msg.addr);            } else if (in_msg.Type == CoherenceRequestType:GetM) {                trigger(Event:GetM, in_msg.addr);            } else if (in_msg.Type == CoherenceRequestType:PutS) {                assert(is_valid(e));                // If there is only a single sharer (i.e., the requestor)                if (e.Sharers.count() == 1) {                    assert(e.Sharers.isElement(in_msg.Requestor));                    trigger(Event:PutSLast, in_msg.addr);                } else {                    trigger(Event:PutSNotLast, in_msg.addr);                }            } else if (in_msg.Type == CoherenceRequestType:PutM) {                assert(is_valid(e));                if (e.Owner.isElement(in_msg.Requestor)) {                    
trigger(Event:PutMOwner, in_msg.addr);                } else {                    trigger(Event:PutMNonOwner, in_msg.addr);                }            } else {                error(\"Unexpected message type.\");            }        }    }} The next part of the state machine file is the actions. First, we define actions for queuing memory reads and writes. For this, we will use a special function defined in the AbstractController: queueMemoryRead. This function takes an address, converts it to a gem5 request and packet, and sends it across the port that is connected to this controller. We will see how to connect this port in the configuration section &lt;MSI-config-section&gt;. Note that we need two different actions to send data to memory for both requests and responses since there are two different message buffers (virtual networks) that data might arrive on. action(sendMemRead, \"r\", desc=\"Send a memory read request\") {    peek(request_in, RequestMsg) {        queueMemoryRead(in_msg.Requestor, address, toMemLatency);    }} action(sendDataToMem, \"w\", desc=\"Write data to memory\") {    peek(request_in, RequestMsg) {        DPRINTF(RubySlicc, \"Writing memory for %#x\\n\", address);        DPRINTF(RubySlicc, \"Writing %s\\n\", in_msg.DataBlk);        queueMemoryWrite(in_msg.Requestor, address, toMemLatency,                         in_msg.DataBlk);    }} action(sendRespDataToMem, \"rw\", desc=\"Write data to memory from resp\") {    peek(response_in, ResponseMsg) {        DPRINTF(RubySlicc, \"Writing memory for %#x\\n\", address);        DPRINTF(RubySlicc, \"Writing %s\\n\", in_msg.DataBlk);        queueMemoryWrite(in_msg.Sender, address, toMemLatency,                         in_msg.DataBlk);    }} In this code, we also see the last way to add debug information to SLICC protocols: DPRINTF. 
This is exactly the same as a DPRINTF in gem5,except in SLICC only the RubySlicc debug flag is available.Next, we specify actions to update the sharers and owner of a particularblock.action(addReqToSharers, \"aS\", desc=\"Add requestor to sharer list\") {    peek(request_in, RequestMsg) {        getDirectoryEntry(address).Sharers.add(in_msg.Requestor);    }}action(setOwner, \"sO\", desc=\"Set the owner\") {    peek(request_in, RequestMsg) {        getDirectoryEntry(address).Owner.add(in_msg.Requestor);    }}action(addOwnerToSharers, \"oS\", desc=\"Add the owner to sharers\") {    Entry e := getDirectoryEntry(address);    assert(e.Owner.count() == 1);    e.Sharers.addNetDest(e.Owner);}action(removeReqFromSharers, \"rS\", desc=\"Remove requestor from sharers\") {    peek(request_in, RequestMsg) {        getDirectoryEntry(address).Sharers.remove(in_msg.Requestor);    }}action(clearSharers, \"cS\", desc=\"Clear the sharer list\") {    getDirectoryEntry(address).Sharers.clear();}action(clearOwner, \"cO\", desc=\"Clear the owner\") {    getDirectoryEntry(address).Owner.clear();}The next set of actions send invalidates and forward requests to cachesthat the directory cannot deal with alone.action(sendInvToSharers, \"i\", desc=\"Send invalidate to all sharers\") {    peek(request_in, RequestMsg) {        enqueue(forward_out, RequestMsg, 1) {            out_msg.addr := address;            out_msg.Type := CoherenceRequestType:Inv;            out_msg.Requestor := in_msg.Requestor;            out_msg.Destination := getDirectoryEntry(address).Sharers;            out_msg.MessageSize := MessageSizeType:Control;        }    }}action(sendFwdGetS, \"fS\", desc=\"Send forward getS to owner\") {    assert(getDirectoryEntry(address).Owner.count() == 1);    peek(request_in, RequestMsg) {        enqueue(forward_out, RequestMsg, 1) {            out_msg.addr := address;            out_msg.Type := CoherenceRequestType:GetS;            out_msg.Requestor := in_msg.Requestor;            
out_msg.Destination := getDirectoryEntry(address).Owner;            out_msg.MessageSize := MessageSizeType:Control;        }    }}action(sendFwdGetM, \"fM\", desc=\"Send forward getM to owner\") {    assert(getDirectoryEntry(address).Owner.count() == 1);    peek(request_in, RequestMsg) {        enqueue(forward_out, RequestMsg, 1) {            out_msg.addr := address;            out_msg.Type := CoherenceRequestType:GetM;            out_msg.Requestor := in_msg.Requestor;            out_msg.Destination := getDirectoryEntry(address).Owner;            out_msg.MessageSize := MessageSizeType:Control;        }    }}Now we have responses from the directory. Here we are peeking into thespecial buffer responseFromMemory. You can find the definition ofMemoryMsg in src/mem/protocol/RubySlicc_MemControl.sm.action(sendDataToReq, \"d\", desc=\"Send data from memory to requestor. May need to send sharer number, too\") {    peek(memQueue_in, MemoryMsg) {        enqueue(response_out, ResponseMsg, 1) {            out_msg.addr := address;            out_msg.Type := CoherenceResponseType:Data;            out_msg.Sender := machineID;            out_msg.Destination.add(in_msg.OriginalRequestorMachId);            out_msg.DataBlk := in_msg.DataBlk;            out_msg.MessageSize := MessageSizeType:Data;            Entry e := getDirectoryEntry(address);            // Only need to include acks if we are the owner.            
if (e.Owner.isElement(in_msg.OriginalRequestorMachId)) {                out_msg.Acks := e.Sharers.count();            } else {                out_msg.Acks := 0;            }            assert(out_msg.Acks &gt;= 0);        }    }}action(sendPutAck, \"a\", desc=\"Send the put ack\") {    peek(request_in, RequestMsg) {        enqueue(forward_out, RequestMsg, 1) {            out_msg.addr := address;            out_msg.Type := CoherenceRequestType:PutAck;            out_msg.Requestor := machineID;            out_msg.Destination.add(in_msg.Requestor);            out_msg.MessageSize := MessageSizeType:Control;        }    }}Then, we have the queue management and stall actions.action(popResponseQueue, \"pR\", desc=\"Pop the response queue\") {    response_in.dequeue(clockEdge());}action(popRequestQueue, \"pQ\", desc=\"Pop the request queue\") {    request_in.dequeue(clockEdge());}action(popMemQueue, \"pM\", desc=\"Pop the memory queue\") {    memQueue_in.dequeue(clockEdge());}action(stall, \"z\", desc=\"Stall the incoming request\") {    // Do nothing.}Finally, we have the transition section of the state machine file. Thesemostly come from Table 8.2 in Sorin et al., but there are some extratransitions to deal with the unknown memory latency.transition({I, S}, GetS, S_m) {    sendMemRead;    addReqToSharers;    popRequestQueue;}transition(I, {PutSNotLast, PutSLast, PutMNonOwner}) {    sendPutAck;    popRequestQueue;}transition(S_m, MemData, S) {    sendDataToReq;    popMemQueue;}transition(I, GetM, M_m) {    sendMemRead;    setOwner;    popRequestQueue;}transition(M_m, MemData, M) {    sendDataToReq;    clearSharers; // NOTE: This isn't *required* in some cases.    
popMemQueue;}transition(S, GetM, M_m) {    sendMemRead;    removeReqFromSharers;    sendInvToSharers;    setOwner;    popRequestQueue;}transition({S, S_D, SS_m, S_m}, {PutSNotLast, PutMNonOwner}) {    removeReqFromSharers;    sendPutAck;    popRequestQueue;}transition(S, PutSLast, I) {    removeReqFromSharers;    sendPutAck;    popRequestQueue;}transition(M, GetS, S_D) {    sendFwdGetS;    addReqToSharers;    addOwnerToSharers;    clearOwner;    popRequestQueue;}transition(M, GetM) {    sendFwdGetM;    clearOwner;    setOwner;    popRequestQueue;}transition({M, M_m, MI_m}, {PutSNotLast, PutSLast, PutMNonOwner}) {    sendPutAck;    popRequestQueue;}transition(M, PutMOwner, MI_m) {    sendDataToMem;    clearOwner;    sendPutAck;    popRequestQueue;}transition(MI_m, MemAck, I) {    popMemQueue;}transition(S_D, {GetS, GetM}) {    stall;}transition(S_D, PutSLast) {    removeReqFromSharers;    sendPutAck;    popRequestQueue;}transition(S_D, Data, SS_m) {    sendRespDataToMem;    popResponseQueue;}transition(SS_m, MemAck, S) {    popMemQueue;}// If we get another request for a block that's waiting on memory,// stall that request.transition({MI_m, SS_m, S_m, M_m}, {GetS, GetM}) {    stall;}You can download the complete MSI-dir.sm filehere.",
        "url": "/documentation/learning_gem5/part3/directory/"
      }
      ,
    
      "documentation-learning-gem5-part3-running": {
        "title": "Running the simple Ruby system",
        "content": "Running the simple Ruby systemNow, we can run our system with the MSI protocol!As something interesting, below is a simple multithreaded program (note:as of this writing there is a bug in gem5 preventing this code fromexecuting).#include &lt;iostream&gt;#include &lt;thread&gt;using namespace std;/* * c = a + b */void array_add(int *a, int *b, int *c, int tid, int threads, int num_values){    for (int i = tid; i &lt; num_values; i += threads) {        c[i] = a[i] + b[i];    }}int main(int argc, char *argv[]){    unsigned num_values;    if (argc == 1) {        num_values = 100;    } else if (argc == 2) {        num_values = atoi(argv[1]);        if (num_values &lt;= 0) {            cerr &lt;&lt; \"Usage: \" &lt;&lt; argv[0] &lt;&lt; \" [num_values]\" &lt;&lt; endl;            return 1;        }    } else {        cerr &lt;&lt; \"Usage: \" &lt;&lt; argv[0] &lt;&lt; \" [num_values]\" &lt;&lt; endl;        return 1;    }    unsigned cpus = thread::hardware_concurrency();    cout &lt;&lt; \"Running on \" &lt;&lt; cpus &lt;&lt; \" cores. \";    cout &lt;&lt; \"with \" &lt;&lt; num_values &lt;&lt; \" values\" &lt;&lt; endl;    int *a, *b, *c;    a = new int[num_values];    b = new int[num_values];    c = new int[num_values];    if (!(a &amp;&amp; b &amp;&amp; c)) {        cerr &lt;&lt; \"Allocation error!\" &lt;&lt; endl;        return 2;    }    for (int i = 0; i &lt; num_values; i++) {        a[i] = i;        b[i] = num_values - i;        c[i] = 0;    }    thread **threads = new thread*[cpus];    // NOTE: -1 is required for this to work in SE mode.    
for (int i = 0; i &lt; cpus - 1; i++) {        threads[i] = new thread(array_add, a, b, c, i, cpus, num_values);    }    // Execute the last thread with this thread context to appease SE mode    array_add(a, b, c, cpus - 1, cpus, num_values);    cout &lt;&lt; \"Waiting for other threads to complete\" &lt;&lt; endl;    for (int i = 0; i &lt; cpus - 1; i++) {        threads[i]-&gt;join();    }    delete[] threads;    cout &lt;&lt; \"Validating...\" &lt;&lt; flush;    int num_valid = 0;    for (int i = 0; i &lt; num_values; i++) {        if (c[i] == num_values) {            num_valid++;        } else {            cerr &lt;&lt; \"c[\" &lt;&lt; i &lt;&lt; \"] is wrong.\";            cerr &lt;&lt; \" Expected \" &lt;&lt; num_values;            cerr &lt;&lt; \" Got \" &lt;&lt; c[i] &lt;&lt; \".\" &lt;&lt; endl;        }    }    if (num_valid == num_values) {        cout &lt;&lt; \"Success!\" &lt;&lt; endl;        return 0;    } else {        return 2;    }}With the above code compiled as threads, we can run gem5!build/MSI/gem5.opt configs/learning_gem5/part6/simple_ruby.pyThe output should be something like the following. Most of the warningsare unimplemented syscalls in SE mode due to using pthreads and can besafely ignored for this simple example.gem5 Simulator System.  http://gem5.orggem5 is copyrighted software; use the --copyright option for details.gem5 compiled Sep  7 2017 12:39:51gem5 started Sep 10 2017 20:56:35gem5 executing on fuggle, pid 6687command line: build/MSI/gem5.opt configs/learning_gem5/part6/simple_ruby.pyGlobal frequency set at 1000000000000 ticks per secondwarn: DRAM device capacity (8192 Mbytes) does not match the address range assigned (512 Mbytes)0: system.remote_gdb.listener: listening for remote gdb #0 on port 70000: system.remote_gdb.listener: listening for remote gdb #1 on port 7001Beginning simulation!info: Entering event queue @ 0.  
Starting simulation...warn: Replacement policy updates recently became the responsibility of SLICC state machines. Make sure to setMRU() near callbacks in .sm files!warn: ignoring syscall access(...)warn: ignoring syscall access(...)warn: ignoring syscall access(...)warn: ignoring syscall mprotect(...)warn: ignoring syscall access(...)warn: ignoring syscall mprotect(...)warn: ignoring syscall access(...)warn: ignoring syscall mprotect(...)warn: ignoring syscall access(...)warn: ignoring syscall mprotect(...)warn: ignoring syscall access(...)warn: ignoring syscall mprotect(...)warn: ignoring syscall mprotect(...)warn: ignoring syscall mprotect(...)warn: ignoring syscall mprotect(...)warn: ignoring syscall mprotect(...)warn: ignoring syscall mprotect(...)warn: ignoring syscall mprotect(...)warn: ignoring syscall set_robust_list(...)warn: ignoring syscall rt_sigaction(...)      (further warnings will be suppressed)warn: ignoring syscall rt_sigprocmask(...)      (further warnings will be suppressed)info: Increasing stack size by one page.info: Increasing stack size by one page.Running on 2 cores. with 100 valueswarn: ignoring syscall mprotect(...)warn: ClockedObject: Already in the requested power state, request ignoredwarn: ignoring syscall set_robust_list(...)Waiting for other threads to completewarn: ignoring syscall madvise(...)Validating...Success!Exiting @ tick 9386342000 because exiting with last active thread context",
        "url": "/documentation/learning_gem5/part3/running/"
      }
      ,
    
      "documentation-learning-gem5-part3-simple-mi-example": {
        "title": "Configuring for a standard protocol",
        "content": "Configuring for a standard protocolYou can easily adapt the simple example configurations from this part tothe other SLICC protocols in gem5. In this chapter, we will briefly lookat an example with MI_example, though this can be easily extended toother protocols.However, these simple configuration files will only work in syscallemulation mode. Full system mode adds some complications such as DMAcontrollers. These scripts can be extended to full system.For MI_example, we can use exactly the same runscript as before(simple_ruby.py), we just need to implement a differentMyCacheSystem (and import that file in simple_ruby.py). Below, isthe classes needed for MI_example. There are only a couple of changesfrom MSI, mostly due to different naming schemes. You can download thefilehere.class MyCacheSystem(RubySystem):    def __init__(self):        if buildEnv['PROTOCOL'] != 'MI_example':            fatal(\"This system assumes MI_example!\")        super(MyCacheSystem, self).__init__()    def setup(self, system, cpus, mem_ctrls):        \"\"\"Set up the Ruby cache subsystem. Note: This can't be done in the           constructor because many of these items require a pointer to the           ruby system (self). This causes infinite recursion in initialize()           if we do this in the __init__.        \"\"\"        # Ruby's global network.        self.network = MyNetwork(self)        # MI example uses 5 virtual networks        self.number_of_virtual_networks = 5        self.network.number_of_virtual_networks = 5        # There is a single global list of all of the controllers to make it        # easier to connect everything to the global network. This can be        # customized depending on the topology/network requirements.        # Create one controller for each L1 cache (and the cache mem obj.)        
# Create a single directory controller (Really the memory cntrl)        self.controllers = \\            [L1Cache(system, self, cpu) for cpu in cpus] + \\            [DirController(self, system.mem_ranges, mem_ctrls)]        # Create one sequencer per CPU. In many systems this is more        # complicated since you have to create sequencers for DMA controllers        # and other controllers, too.        self.sequencers = [RubySequencer(version = i,                                # I/D cache is combined and grab from ctrl                                icache = self.controllers[i].cacheMemory,                                dcache = self.controllers[i].cacheMemory,                                clk_domain = self.controllers[i].clk_domain,                                ) for i in range(len(cpus))]        for i,c in enumerate(self.controllers[0:len(cpus)]):            c.sequencer = self.sequencers[i]        self.num_of_sequencers = len(self.sequencers)        # Create the network and connect the controllers.        # NOTE: This is quite different if using Garnet!        self.network.connectControllers(self.controllers)        self.network.setup_buffers()        # Set up a proxy port for the system_port. Used for load binaries and        # other functional-only things.        
self.sys_port_proxy = RubyPortProxy()        system.system_port = self.sys_port_proxy.slave        # Connect the cpu's cache, interrupt, and TLB ports to Ruby        for i,cpu in enumerate(cpus):            cpu.icache_port = self.sequencers[i].slave            cpu.dcache_port = self.sequencers[i].slave            isa = buildEnv['TARGET_ISA']            if isa == 'x86':                cpu.interrupts[0].pio = self.sequencers[i].master                cpu.interrupts[0].int_master = self.sequencers[i].slave                cpu.interrupts[0].int_slave = self.sequencers[i].master            if isa == 'x86' or isa == 'arm':                cpu.itb.walker.port = self.sequencers[i].slave                cpu.dtb.walker.port = self.sequencers[i].slaveclass L1Cache(L1Cache_Controller):    _version = 0    @classmethod    def versionCount(cls):        cls._version += 1 # Use count for this particular type        return cls._version - 1    def __init__(self, system, ruby_system, cpu):        \"\"\"CPUs are needed to grab the clock domain and system is needed for           the cache block size.        \"\"\"        super(L1Cache, self).__init__()        self.version = self.versionCount()        # This is the cache memory object that stores the cache data and tags        self.cacheMemory = RubyCache(size = '16kB',                               assoc = 8,                               start_index_bit = self.getBlockSizeBits(system))        self.clk_domain = cpu.clk_domain        self.send_evictions = self.sendEvicts(cpu)        self.ruby_system = ruby_system        self.connectQueues(ruby_system)    def getBlockSizeBits(self, system):        bits = int(math.log(system.cache_line_size, 2))        if 2**bits != system.cache_line_size.value:            panic(\"Cache line size not a power of 2!\")        return bits    def sendEvicts(self, cpu):        \"\"\"True if the CPU model or ISA requires sending evictions from caches           to the CPU. 
Two scenarios warrant forwarding evictions to the CPU:           1. The O3 model must keep the LSQ coherent with the caches           2. The x86 mwait instruction is built on top of coherence           3. The local exclusive monitor in ARM systems        \"\"\"        if type(cpu) is DerivO3CPU or \\           buildEnv['TARGET_ISA'] in ('x86', 'arm'):            return True        return False    def connectQueues(self, ruby_system):        \"\"\"Connect all of the queues for this controller.        \"\"\"        self.mandatoryQueue = MessageBuffer()        self.requestFromCache = MessageBuffer(ordered = True)        self.requestFromCache.master = ruby_system.network.slave        self.responseFromCache = MessageBuffer(ordered = True)        self.responseFromCache.master = ruby_system.network.slave        self.forwardToCache = MessageBuffer(ordered = True)        self.forwardToCache.slave = ruby_system.network.master        self.responseToCache = MessageBuffer(ordered = True)        self.responseToCache.slave = ruby_system.network.masterclass DirController(Directory_Controller):    _version = 0    @classmethod    def versionCount(cls):        cls._version += 1 # Use count for this particular type        return cls._version - 1    def __init__(self, ruby_system, ranges, mem_ctrls):        \"\"\"ranges are the memory ranges assigned to this controller.        \"\"\"        if len(mem_ctrls) &gt; 1:            panic(\"This cache system can only be connected to one mem ctrl\")        super(DirController, self).__init__()        self.version = self.versionCount()        self.addr_ranges = ranges        self.ruby_system = ruby_system        self.directory = RubyDirectoryMemory()        # Connect this directory to the memory side.        
self.memory = mem_ctrls[0].port        self.connectQueues(ruby_system)    def connectQueues(self, ruby_system):        self.requestToDir = MessageBuffer(ordered = True)        self.requestToDir.slave = ruby_system.network.master        self.dmaRequestToDir = MessageBuffer(ordered = True)        self.dmaRequestToDir.slave = ruby_system.network.master        self.responseFromDir = MessageBuffer()        self.responseFromDir.master = ruby_system.network.slave        self.dmaResponseFromDir = MessageBuffer(ordered = True)        self.dmaResponseFromDir.master = ruby_system.network.slave        self.forwardFromDir = MessageBuffer()        self.forwardFromDir.master = ruby_system.network.slave        self.responseFromMemory = MessageBuffer()class MyNetwork(SimpleNetwork):    \"\"\"A simple point-to-point network. This does not use Garnet.    \"\"\"    def __init__(self, ruby_system):        super(MyNetwork, self).__init__()        self.netifs = []        self.ruby_system = ruby_system    def connectControllers(self, controllers):        \"\"\"Connect all of the controllers to routers and connect the routers           together in a point-to-point network.        \"\"\"        # Create one router/switch per controller in the system        self.routers = [Switch(router_id = i) for i in range(len(controllers))]        # Make a link from each controller to the router. The link goes        # externally to the network.        self.ext_links = [SimpleExtLink(link_id=i, ext_node=c,                                        int_node=self.routers[i])                          for i, c in enumerate(controllers)]        # Make an \"internal\" link (internal to the network) between every pair        # of routers.        link_count = 0        self.int_links = []        for ri in self.routers:            for rj in self.routers:                if ri == rj: continue # Don't connect a router to itself!                
link_count += 1                self.int_links.append(SimpleIntLink(link_id = link_count,                                                    src_node = ri,                                                    dst_node = rj))",
        "url": "/documentation/learning_gem5/part3/simple-MI_example/"
      }
      ,
    
      "documentation-reporting-problems": {
        "title": "Reporting Problems",
        "content": "Many of the people on the gem5-users mailing list are happy to help when someone has a problem or something doesn’t work. However, please keep in mind that those working on gem5 have other commitments, so we’d appreciate, prior to reporting, if users could put in some effort to solve their own problems, or, at least, gather enough information to help others resolve the issue. Below we outline some general advice on issue reporting. Prior to reporting a problem The most important thing to do prior to reporting a problem is to investigate the issue as much as possible. This may lead you to a solution, or enable you to provide more information to the gem5 community regarding the problem. Below are a series of steps/checks we’d advise you to carry out before reporting an issue:      Please check if a similar question has already been asked on our mailing lists (check the archives), or reported in our Jira Issue Tracking system.        Ensure you’re compiling and running the latest version of gem5. The issue may have already been resolved.        Check changes currently under review on our Gerrit system. It’s possible a fix to your issue is already on its way to being merged into the project.        Make sure you’re running with gem5.opt or gem5.debug, not gem5.fast. The gem5.fast binary compiles out assertion checking for speed, so a problem that causes a crash or an error on gem5.fast may result in a more informative assertion failure with gem5.opt or gem5.debug.        If it seems appropriate, enable some debug flags (e.g., --debug-flags=Foo via the CLI). For more information on debug flags, please consult our debugging tutorial.        Don’t be afraid to debug using GDB if your problem is occurring on the C++ side.  Reporting a problem Once you believe you have gathered enough information about your problem, feel free to report it.      
If you have reason to believe your problem is a bug, then please report the issue on gem5’s Jira Issue Tracking system. Please include any information which may aid someone else in reproducing this bug on their system. Include the command line arguments used, any relevant system information (as a minimum, what OS are you using, and how did you compile gem5?), error messages received, program outputs, stack traces, etc.        If you choose to ask a question on the gem5-users mailing list, please provide any information which may be helpful. If you have a theory about what the problem might be, please let us know, but include enough basic information so others can decide whether your theory is correct or not.  Solving the problem If you have solved a problem that you reported, please let the community know about your solution as a follow-up (either on the mailing list or in the Jira Issue Tracking system). If you have fixed a bug, we’d appreciate it if you could submit the fix to the gem5 source. Please see our beginners’ guide to contributing on how to do this. If your issue is with the content of a gem5 document/tutorial being incorrect, then please consider submitting a change. Please consult our README for more information on how to make contributions to the gem5 website.",
        "url": "/documentation/reporting_problems/"
      }
      ,
    
      "events-dist-gem5": {
        "title": "ISCA2017 - distributed gem5",
        "content": "Title: dist-gem5: Modeling and Simulating a Distributed Computer System Using Multiple Simulation Sunday, June 25, 9:00 to 12:30 44th International Symposium on Computer Architecture, June 24-28, 2017, Toronto, ON, Canada  List of organisers/presenters  Abstract  Objectives  Slides  Publications  Pre-requisites  Previous tutorials List of organisers/presenters  Nam Sung Kim, University of Illinois, Urbana-Champaign  Mohammad Alian, University of Illinois, Urbana-Champaign  Nikos Nikoleris, ARM Ltd.  Radhika Jagtap, ARM Ltd.  Gabor Dozsa, ARM Ltd.  Stephan Diestelhorst, ARM Ltd. Abstract The single-thread performance improvement of processors has been sluggish for the past decade as Dennard’s scaling is approaching its fundamental physical limit. Thus, the importance of efficiently running applications on a parallel/distributed computer system has continued to increase, and diverse applications based on parallel/distributed computing models such as MapReduce and MPI have thrived. In a parallel/distributed computing system, the complex interplay amongst processor, node, and network architectures strongly affects the performance and power efficiency. In particular, we observe that all the hardware and software aspects of the network, which encompasses interface technology, switch/router capability, link bandwidth, topology, traffic patterns, and protocols, significantly impact the processor and node activities. Therefore, to maximize performance and power efficiency, it is critical to develop various optimization strategies cutting across processor, node, and network architectures, as well as their software stacks, necessitating full-system simulation. However, our community lacks a proper research infrastructure to study the interplay of these subsystems. Facing such a challenge, we have released a gem5-based simulation infrastructure dubbed dist-gem5 to support full-system simulation of a parallel/distributed computer system using multiple simulation hosts. 
This tutorial will cover an introduction to dist-gem5, including relevant background knowledge. Objectives More specifically, the tutorial will provide the following.  Introduction of parallel/distributed system architecture.  Details of enhanced gem5 components to enable simulation of a parallel/distributed computer system.          Network interface and switch models to connect multiple simulated nodes (as shown in the Figure).      Synchronization amongst multiple simulated nodes running across multiple simulation hosts.      Simulating a region of interest of a given benchmark using checkpoint creation/restoration enhanced for simulating multiple simulated nodes using multiple simulation hosts.        Examples of modeling parallel/distributed computer systems using a few network topologies.                                      09:00 – 10:00      Introduction (60 min)              10:00 – 10:15      Break (15 min)              10:15 – 11:15      dist-gem5 deep dive (60 min)              11:15 – 11:30      Break (15 min)              11:30 – 12:00      dist-gem5 examples (30 min)      Program for the tutorial Slides  The slides from the tutorial can be downloaded here. Publications  Mohammad Alian, Gabor Dozsa, Umur Darbaz, Stephan Diestelhorst, Daehoon Kim, and Nam Sung Kim. “dist-gem5: Distributed Simulation of Computer Clusters”, IEEE International Symposium on Performance Analysis of Systems (ISPASS), April 2017 (Nominated for the Best Paper Award)  Mohammad Alian, Daehoon Kim, and Nam Sung Kim. “pd-gem5: Simulation Infrastructure for Parallel/Distributed Computer Systems”, IEEE Computer Architecture Letters (CAL), Jan 2016 paper  dist-gem5 website Pre-requisites  Basic knowledge of computer architecture  No prior experience with simulators is required Previous tutorials  dist-gem5 tutorial at MICRO 2015  gem5 tutorial at ASPLOS 2017",
        "url": "/events/dist-gem5"
      }
      ,
    
      "events-arm-summit-2017": {
        "title": "ARM research summit 2017",
        "content": "The ARM Research Summit isan academic summit to discuss future trends and disruptive technologiesacross all sectors of computing. On the first day of the Summit, ARMResearch will host a gem5 workshop to give a brief overview of gem5 forcomputer engineers who are new to gem5 and dive deeper into some ofgem5’s more advanced capabilities. The attendees will learn what gem5can and cannot do, how to use and extend gem5, as well as how tocontribute back to gem5.The ARM Research Summit will take place in Cambridge (UK) over the daysof 11-13 September 2017. The gem5 workshop will be a full day event onthe 11th September.Streaming &amp; Offline viewingThe workshop is being streamed live and all talks will be available onYouTube after the workshop. See the main summitpage fordetails.Target AudienceThe primary audience is researchers who are using, or planning to use,gem5 for architecture research.Prerequisites: Attendees are expected to have a working knowledge ofC++, Python, and computer systems.RegistrationSee the main ARM Research Summitwebsite for details aboutregistration.ScheduleThe workshop will take place on Monday the 11th September 2017 atRobinson College in Cambridge (UK). The workshop starts at 9.00 and runsin parallel with the main Summit program until 16.30 when it joins themainprogram.            
Time      Topic                  09.00-09.30      Welcome and introduction to gem5 — slides              09.30-09.45      Interacting with gem5 using workload-automation &amp; devlib — slides              09.45-10.00      ARM Research Starter Kit: System Modeling using gem5 — slides              10.00-10.15      Break              10.15-10.30      Debugging a target-agnostic JIT compiler with GEM5              10.30-11.00      Learning gem5: Modeling Cache Coherence with gem5 — slides              11.00-11.15      Break (overlaps with main program break)              11.15-11.45      A Detailed On-Chip Network Model inside a Full-System Simulator — slides              11.45-12.00      Integrating and quantifying the impact of low power modes in the DRAM controller in gem5 — slides              12.00-12.15      Break              12.15-12.45      CPU power estimation using PMCs and its application in gem5 — slides              12.45-13.00      gem5: empowering the masses — slides              13.00-14.15      Lunch              14.15-14.45      Trace-driven simulation of multithreaded applications in gem5 — slides              14.45-15.00      Generating Synthetic Traffic for Heterogeneous Architectures — slides              15:00-15:15      Break              15:15-16:45      System Simulation with gem5, SystemC and other Tools — slides              15:45-16:00      COSSIM: An Integrated Solution to Address the Simulator Gap for Parallel Heterogeneous Systems — slides              16:00-16:15      Simulation of Complex Systems Incorporating Hardware Accelerators — slides              16:15-16:30      Break              16:30-18:15      Introduction to ARM Research              18:20-20.00      Poster Session &amp; Pre-Dinner Drinks              20.00-21.30      Buffet Dinner      TalksTrace-driven simulation of multithreaded applications in gem5The gem5 modular simulator provides a rich set of CPU models whichpermits balancing simulation speed and accuracy. 
The growing interest inusing gem5 for design-space exploration however requires highersimulation speeds so as to enable scalability analysis with systemscomprising tens to hundreds of cores. One relevant approach for enablingsignificant speedups lies in using trace-driven simulation, in which CPUcores are abstracted away thereby enabling to refocus simulation efforton memory/interconnect subsystems which play a key role on performance.This talk describes some of the work carried out on the Mont-Blanceuropean projects on trace-driven simulation and discusses the relatedchallenges for multicore architectures in which trace injection requiresto account for the API synchronization of the underlying runningapplication. The ElasticSimMATE tool is presented as an initiativetowards combining Elastic Traces and SimMATE so as to enable fast andaccurate simulation of multithreaded applications on ARM multicoresystems.  Dr Gilles Sassatelli is a CNRS senior scientist at LIRMM, aCNRS-University of Montpellier academic research unit with a staff ofover 400. He is vice-head of the microelectronics department and leadsa group of 20 researchers working in the area of smart embeddeddigital systems. He has authored over 200 peer-reviewed papers and hasoccupied key roles in a number of international conferences. Most ofhis research is conducted in the frame of international EU-fundedprojects such as the DreamCloud and Mont-Blanc projects.  Alejandro Nocua received the Ph.D. degree in Microelectronics fromthe University of Montpellier, France, in 2016. Currently, he is apostdoctoral researcher at the French National Center for ScientificResearch (CNRS). His research interests include the analysis ofhigh-performance and energy-efficiency design methodologies. 
He received his Master’s degree in Science from the National Institute of Astrophysics, Optics and Electronics (INAOE), Mexico, in 2013. Alejandro was awarded his BS degree in Electronics Engineering from Industrial University of Santander (UIS), Colombia in 2011.  Florent Bruguier received the M.S. and Ph.D. degrees in microelectronics from the University of Montpellier, France, in 2009 and 2012, respectively. From 2012 to 2015, he was a Scientific Assistant with the Montpellier Laboratory of Informatics, Robotics, and Microelectronics, University of Montpellier. Since 2015, he has been a Permanent Associate Professor. He has co-authored over 30 publications. His research interests are focused on self-adaptive and secure approaches for embedded systems.  Anastasiia Butko, Ph.D. is a Postdoctoral Fellow in the Computational Research Division at Lawrence Berkeley National Laboratory (LBNL), CA. Her research interests lie in the general area of computer architecture, with particular emphasis on high-performance computing, emerging and heterogeneous technologies, and associated parallel programming and architectural simulation techniques. Broadly, her research addresses the question of how alternative technologies can provide continuing performance scaling in the approaching Post-Moore’s Law era. Her primary research projects include development of EDA tools for fast superconducting logic design, development of the classical ISA for quantum processor control, and development of fast and flexible System-on-Chip generators using the Chisel DSL. Dr. Butko co-leads the Open Source Supercomputing project and is a technical committee member of the RISC-V Foundation.  Dr. Butko received her Ph.D. in Microelectronics from the University of Montpellier, France (2015). Her doctoral thesis developed fast and accurate simulation techniques for many-core architecture exploration. 
Her graduate work was conducted within the European project Mont-Blanc, which aims to design a new supercomputer architecture using low-power embedded technologies.  Dr. Butko received her MSc degree in Microelectronics from UM2, France, and MSc and BSc degrees in Digital Electronics from NTUU “KPI”, Ukraine. During her Master's she participated in the international double-diploma program between the Montpellier and Kiev universities.&lt;/span&gt;Modeling Cache Coherence with gem5. Correctly implementing cache coherence protocols is hard, and these implementation details can affect the system’s performance. Therefore, it is important to robustly model the detailed cache coherence implementation. The popular computer architecture simulator gem5 uses Ruby as its cache coherence model, providing higher-fidelity cache coherence modeling than many other simulators. In this talk, I will give a brief overview of Ruby, including SLICC: the domain-specific language Ruby uses to specify cache protocols. I will show the extreme flexibility of this model and the details of a simple cache coherence protocol. After this talk, you will be able to dive in and begin writing your own coherence protocols!  Jason Lowe-Power is an Assistant Professor at the University of California, Davis in the Computer Science department. Jason’s research focuses on increasing the energy efficiency and performance of end-to-end applications like the analytic database operations used by Amazon, Google, Target, etc. One important aspect of this research is adding hardware mechanisms to systems that enable all programmers to use emerging hardware accelerators like GPUs. Additionally, Jason is a leader of the open-source architectural simulator gem5, used by over 1500 academic papers. Jason received his PhD from the University of Wisconsin-Madison in Summer 2017. 
He was awarded the Wisconsin Distinguished Graduate Fellowship Cisco Computer Sciences Award in 2014 and 2015.&lt;/span&gt;A Detailed On-Chip Network Model inside a Full-System Simulator. Compute systems are ubiquitous, with form factors ranging from smartphones at the edge to datacenters in the cloud. Chips in all these systems today comprise 10s to 100s of homogeneous/heterogeneous cores or processing elements. The growing emphasis on parallelism, distributed computing, heterogeneity, and energy efficiency across all these systems makes the design of the Network-on-Chip (NoC) fabric connecting the cores critical to both high performance and low power consumption. It is imperative to model the details of the NoC when architecting and exploring the design space of a complex many-core system. If ignored, an inaccurate NoC model could lead to over-design or under-design due to incorrect trade-off choices, causing performance losses at runtime. To this end, we have designed and integrated a detailed on-chip network model called Garnet inside the gem5 (www.gem5.org) full-system architectural simulator, which is being used extensively by both industry and academia. Together with Garnet, gem5 provides plug-and-play models of cores, caches, cache coherence protocols, NoC, memory controller, and DRAM, with varying levels of detail, enabling computer architects and designers to trade off simulation speed and accuracy. In this talk, we will first introduce the basic building blocks of NoCs and present the state-of-the-art used in chips today. We will then present Garnet and demonstrate, via case studies and code snippets, how it faithfully models the state-of-the-art while also offering immense flexibility in modifying various parts of the microarchitecture to serve the needs of both homogeneous many-cores and heterogeneous accelerator-based systems of the future. Finally, we will demonstrate how Garnet works within the entire gem5 ecosystem.  
Tushar Krishna is an Assistant Professor in the Schools of ECE and CS at Georgia Tech. He received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology. Prior to that, he received an M.S.E from Princeton University in 2009, and a B.Tech from the Indian Institute of Technology (IIT) Delhi in 2007, both in Electrical Engineering. Before joining Georgia Tech in 2015, Dr. Krishna was a post-doctoral researcher in the VSSAD Group at Intel, Massachusetts, and then at the Singapore-MIT Alliance for Research and Technology at MIT.  Dr. Krishna’s research interests are in computer architecture, interconnection networks, networks-on-chip, deep learning accelerators, and FPGAs.&lt;/span&gt;System Simulation with gem5, SystemC and other Tools. SystemC TLM-based virtual prototypes have become the main tool in industry and research for concurrent hardware and software development, as well as hardware design-space exploration. However, there exists a lack of accurate, free, modifiable, and realistic SystemC models of modern CPUs. Therefore, many researchers use the cycle-accurate open-source system simulator gem5, which has been developed in parallel to the SystemC standard. In this tutorial we present the coupling of gem5 with SystemC, which offers full interoperability between both simulation frameworks and therefore enables a huge set of possibilities for system-level design-space exploration. Furthermore, we show several examples of coupling gem5 with SystemC and other tools.  Matthias Jung received his PhD degree in Electrical Engineering from the University of Kaiserslautern, Germany, in 2017. His research interests are SystemC-based virtual prototypes, especially with a focus on the modeling of memory systems and memory controller design. Since May 2017 he has been a researcher at Fraunhofer IESE, Kaiserslautern, Germany.  
Christian Menard received a Diploma degree in Information Systems Technology from TU Dresden in Germany in 2016 and joined the chair for compiler construction as a Ph.D. student within the excellence cluster cfaed at TU Dresden. His current research includes system-level modeling of widely heterogeneous hardware as well as dataflow compilers for heterogeneous MPSoC platforms.&lt;/span&gt;CPU power estimation using PMCs and its application in gem5. Fast and accurate estimation of CPU power consumption is necessary to inform run-time power management approaches and allow effective design-space exploration. Power simulators, combined with a full-system architectural simulator such as gem5, enable power-performance trade-offs to be investigated early in the design of a system. However, the accuracy of existing power simulators is known to be low, and this can lead to incorrect conclusions. In this talk, I will present our statistically rigorous methodology for building accurate run-time power models using Performance Monitoring Counters (PMCs) for mobile and embedded devices, and demonstrate how our models make more efficient use of limited training data and better adapt to unseen scenarios by uniquely considering stability. Models built using the methodology for both ARM Cortex-A7 and Cortex-A15 CPUs exhibit a 3.8% and 2.8% average error, respectively. I will also present online resources that we have made available from the work, including software tools, documentation, raw data, and further results. I will also present results from an investigation into the correlation between gem5 activity statistics and hardware PMCs. Based on this, a gem5 power model for a simulated quad-core ARM Cortex-A15 has been created, built using the above methodology, and its accuracy compared against experimental results obtained from hardware.  Geoff Merrett is an Associate Professor in the Department of Electronics and Computer Science at the University of Southampton. 
He received the BEng (1st, Hons) and PhD degrees in Electronic Engineering from Southampton in 2004 and 2009, respectively. His research interests are in energy-aware and self-powered computing systems, with applications across the spectrum from highly constrained IoT devices to many-core mobile and embedded systems. He has published over 100 peer-reviewed articles in these areas, and given invited talks at a number of international events. Dr Merrett is a Co-Investigator on the EPSRC-funded £5.6M PRiME Programme Grant (where he leads the applications and cross-layer interaction theme), “Continuous on-line adaptation in many-core systems: From graceful degradation to graceful amelioration”, and deputy-lead on the “Wearable and Autonomous Computing for Future Smart Cities” Platform Grant. He is technical manager of Southampton’s ARM-ECS Research Centre, an award-winning industry-academia collaboration between the University of Southampton and ARM. He coordinates IoT research at the University, and leads the wireless sensing theme of its Pervasive Systems Centre. He is an Associate Editor for the IET CDS journal, serves as a reviewer for a number of leading journals, and sits on TPCs for a range of conferences. He co-manages the UK’s Energy Harvesting Network, was General Chair of the ACM Workshop on Energy-Harvesting and Energy-Neutral Sensing Systems in 2013, 2014, and 2015, and was General Chair of the European Workshop on Microelectronics Education 2016. He is a member of the IEEE and the IET, and a Fellow of the HEA.&lt;/span&gt;Short Talks. Debugging a target-agnostic JIT compiler with GEM5. Author: Boris Shingarov - LabWare. We explain how GEM5 enabled us to develop a target-agnostic JIT compiler, in which no knowledge about the target ISA is coded by the human programmer; instead, the backend is inferred, using logic programming, from a formal machine description written in a Processor Description Language. 
Debugging such a JIT presents some challenges which cannot be addressed using traditional approaches. One such challenge is the impedance mismatch between the high-level abstractions in the PDL and the low-level inferred implementation. In this talk, we present a new debugger based on simulating the execution of the target runtime VM in GEM5; the debugger frontend connects to this simulation using the RSP wire protocol.&lt;/span&gt;COSSIM: An Integrated Solution to Address the Simulator Gap for Parallel Heterogeneous Systems. In an era of complex networked heterogeneous systems, simulating independently only parts, components, or attributes of a system-under-design is not a viable, accurate, or efficient option. The interactions are too many and too complicated to produce meaningful results, and the optimization opportunities are severely limited when considering each part of a system in an isolated manner. COSSIM offers a framework that can handle the simulation of a complete system-of-systems, including processors, peripherals, and networks, in an integrated way that can appeal to designers of parallel (heterogeneous) systems and to application developers. The framework is based on gem5 as the main simulation engine for processor-based systems and extends its capabilities by integrating it with the OMNET++ network simulator. This integration allows independent gem5 instances to be networked with all network protocols and hierarchies that can be supported by OMNET++, thus creating a very flexible solution. The integration of the two main simulation tools is realized through the IEEE 1516 High-Level Architecture (HLA) standard, through which all communication tasks are performed. Through HLA and custom libraries, a two-level (per-node and global) synchronization scheme is also implemented to ensure a coherent notion of time between all nodes. Since HLA is IP-based, all gem5 instances and OMNET++ can be executed on the same physical machine or on any distributed system (or any combination in between). 
The overall framework – the set of gem5 nodes, the OMNET++ simulator, and the CERTI HLA – is integrated into a unified Eclipse-based GUI that has been developed to provide easy simulation set-up, execution, and visualization of results. McPAT is also integrated in a semi-automated way through the GUI in order to provide power and energy estimations for each node, while OMNET++ provides power estimations for networking-related components (NICs and network devices).  Andreas Brokalakis is a senior hardware engineer at Synelixis Solutions Ltd. At the same time, he is pursuing a PhD degree at the Technical University of Crete, Greece. He holds a Bachelor's degree in Computer Engineering from the University of Patras, Greece, and a Master's degree in Hardware/Software Co-design from the same university. His current work and research interests involve computer architecture and arithmetic, as well as the design of ASIC and FPGA systems and accelerators.  Nikolaos Tampouratzis is a PhD student at the Technical University of Crete, working on simulation tools for computing systems. He joined the Telecommunication Systems Institute, Technical University of Crete, in October 2012 as a research associate, providing research and development services to several EU-funded research projects. He received his Computer Science diploma from the University of Crete (UOC, Greece), with a specialization in Hardware Design and FPGAs. He continued his studies at the Technical University of Crete (TUC, Greece), where he received his Master's Diploma in Electronic and Computer Engineering, in which he specialized in Computer Architecture and Hardware Design.&lt;/span&gt;Simulation of Complex Systems Incorporating Hardware Accelerators. The breakdown of Dennard scaling, coupled with persistently growing transistor counts, has increased the importance of application-specific hardware acceleration; such an approach offers significant performance and energy benefits compared to general-purpose solutions. 
In order to thoroughly evaluate such architectures, the designer should perform quite extensive design-space exploration so as to evaluate the trade-offs across the entire system. The design, until recently, has predominantly been done using Register Transfer Level languages such as Verilog and VHDL, which, however, lead to a prohibitively long and costly design effort. In order to reduce the design time, a wide range of both commercial and academic High-Level Synthesis (HLS) tools have emerged; most of these tools handle hardware accelerators that are described in synthesizable SystemC. The problem today, however, is that most simulators used for evaluating complete user applications (i.e. full-system CPU/Mem/Peripheral simulators) lack any type of SystemC accelerator support. Within this context, we extend gem5 to support the simulation of generic SystemC accelerators. We introduce a novel flow that enables us to rapidly prototype synthesisable SystemC hardware accelerators in conjunction with gem5. The proposed solution handles all communication and synchronisation issues automatically. Compared to a standard gem5 system, several changes at different levels are required, from the OS and device-driver level down to the implementation of a device model in the gem5 simulator. Instead of using files to write data for an external accelerator, perform the simulation, and then read back the results, our approach communicates with the SystemC simulator through programmed I/Os and DMA engines, supporting full global synchronisation. Apart from the apparent benefits concerning the implementation and simulation accuracy, the proposed solution is also orders of magnitude faster.  Nikolaos Tampouratzis is a PhD student at the Technical University of Crete, working on simulation tools for computing systems. 
He joined the Telecommunication Systems Institute, Technical University of Crete, in October 2012 as a research associate, providing research and development services to several EU-funded research projects. He received his Computer Science diploma from the University of Crete (UOC, Greece), with a specialization in Hardware Design and FPGAs. He continued his studies at the Technical University of Crete (TUC, Greece), where he received his Master's Diploma in Electronic and Computer Engineering, in which he specialized in Computer Architecture and Hardware Design.&lt;/span&gt;Generating Synthetic Traffic for Heterogeneous Architectures. Modern system-on-chip architectures consist of many heterogeneous processing elements. The communication fabric and memory hierarchy supporting these processing elements heavily influence the system’s overall performance. Exploring the design space of these heterogeneous architectures with detailed models of each processing element can be time-consuming. Statistical simulation has been shown to be an effective tool for quickly evaluating architectures by abstracting away complexity. This talk describes work done on modelling the spatial and temporal behaviour of a processing element’s address stream. We present a methodology that can automatically characterize a processing element by observing its reads and writes. Using these characteristics, we can stimulate a communication fabric connecting many different processing elements by synthetically recreating their addresses. These addresses arrive at their destination in the memory hierarchy, spawning new messages and responses to read and write requests. Architects can now combine synthetic processing elements that represent various different components of current and future systems-on-chip to evaluate the impact of changes to the interconnection network and memory hierarchy.  Mario Badr is a PhD Candidate at the University of Toronto working under the supervision of Dr. Natalie Enright Jerger. He received his B.A.Sc. 
and M.A.Sc from the University of Toronto in Electrical Engineering and Computer Engineering, respectively. He has interned with Qualcomm Research Silicon Valley and received the Roberto Padovani Scholarship for his outstanding technical contributions. In addition, he has been recognized at the university and departmental levels for excellence as a teaching assistant. His research interests include performance evaluation in computer architecture, heterogeneous architectures, and multi-threaded workloads.&lt;/span&gt;ARM Research Starter Kit: System Modeling using gem5. ARM Research Enablement aims to enhance computing research by enabling researchers worldwide to easily access ARM-based IP and technologies, and helping them to increase their research impact. As part of our research enablement activities, we provide a System Modeling Research Starter Kit using gem5. We have released a High Performance In-order (HPI) CPU timing model based on ARMv8-A in gem5. I will present a high-level overview of the released system, its documentation, and benchmark scripts. This talk will target those who are new to gem5 as well as those who would like to promote gem5 in research.  Ashkan Tousi is a Senior Research Engineer at ARM Cambridge and an Honorary Lecturer at the University of Glasgow. He received his PhD in computing science (parallel computing) in 2015. He currently leads research enablement activities at ARM, which cover a range of different research areas from SoC design to IoT and data science.&lt;/span&gt;Interacting with gem5 using workload-automation &amp; devlib. Running workloads on gem5 is often not straightforward. This talk will discuss workload-automation and devlib, two new open-source tools for interacting with gem5. These frameworks, written to interact with various hardware platforms, have recently been extended to include gem5 as a platform. We will discuss use cases and the advantages/disadvantages of each tool, and show how they can make your gem5 work easier.  
Anouk Van Laer is a Modelling Engineer in the Architecture: Systems &amp; Technology group at ARM. She obtained her PhD at University College London, where she investigated the effects of optical interconnects on the performance of chip multiprocessors, using gem5.&lt;/span&gt;gem5: empowering the masses. This talk will give an overview of the state of power modelling in gem5. After discussing the basic power modelling infrastructure, it will cover the state of CPU DVFS as well as recent improvements in how CPU power states are controlled for the ARM architecture in gem5. The talk will cover these improvements in power modelling, highlighting the way in which the accuracy and versatility of the simulator have been improved.  Sascha Bischoff is a Senior Software Engineer in the Architecture: Systems &amp; Technology group at ARM in Cambridge. Whilst completing his PhD with the University of Southampton, he spent 3.5 years based in ARM Research in Cambridge. He has spent a large part of the last six years working with gem5, typically with a focus on power management, ideally without impacting the delivered performance.&lt;/span&gt;Integrating and quantifying the impact of low power modes in the DRAM controller in gem5. Across applications, DRAM is a significant contributor to overall system power, with the DRAM access energy per bit up to three orders of magnitude higher than that of on-chip memory accesses. To improve power efficiency, DRAM technology incorporates multiple low power modes, each with different trade-offs between achievable power savings and performance impact due to entry and exit delay requirements. Accurate modeling of these low power modes, and of their entry and exit control, is crucial to analyze the trade-offs across controller configurations and workloads with varied memory access characteristics. In this talk, we will give an overview of the decision-making logic we added to the DRAM controller in gem5 that triggers transitions to/from the power-down modes. 
Integrating this functionality makes gem5 the first publicly available DRAM low power full-system simulator, providing the research community with a tool for DRAM power analysis for a breadth of use cases. We will conclude with simulation data that characterises the low power behaviour and shows energy and performance trade-offs for realistic workloads. Note: This talk is based on a paper accepted at MEMSYS 17. Authors from ARM: Radhika Jagtap, Wendy Elsasser and Andreas Hansson. Authors from the University of Kaiserslautern: Matthias Jung and Norbert Wehn.  Radhika Jagtap is a Senior Research Engineer working in the Memory &amp; Systems research group. She has plenty of experience with gem5 (elastic traces, interconnect, memory controller) and is involved in several collaborative research projects, especially with academics. Currently she is exploring the problem of energy-efficient data movement for sparse data workloads.&lt;/span&gt;",
        "url": "/events/arm-summit-2017"
      }
      ,
    
      "events-asplos-2008": {
        "title": "ASPLOS 2008",
        "content": "Using the M5 Simulator. ASPLOS 2008 Tutorial, Sunday, March 2nd, 2008. Introduction: This half-day tutorial will introduce participants to the M5 simulator system. M5 is a modular platform for computer system architecture research, encompassing system-level architecture as well as processor microarchitecture. We will be discussing version 2.0 of the M5 simulator and specifically its new features, including:  Multiple ISA support (Alpha, ARM, MIPS, and SPARC)  An execute-in-execute, out-of-order SMT CPU timing model, with no SimpleScalar license encumbrance  A message-oriented interface for memory system objects, designed to simplify the development of non-bus interconnects  New cache models that are easier to modify  A new multi-level bus-based coherence protocol  More extensive Python integration and scripting support  Performance improvements  Generating checkpoints for SimPoints. Because the primary focus of the M5 development team has been simulation of network-oriented server workloads, M5 incorporates several features not commonly found in other simulators:  Full-system simulation using unmodified Linux 2.4/2.6, FreeBSD, or Solaris (more are on the way)  Detailed timing of I/O device accesses and DMA operations  Accurate, deterministic simulation of multiple networked systems  Flexible, script-driven configuration to simplify the specification of complex multi-system configurations  Included network workloads such as Apache, NAT, and NFS  Support for storing results from multiple simulations in a unified database (e.g. 
MySQL) for automated reporting and graph generation. M5 also integrates a number of other desirable features, including pervasive object orientation, multiple interchangeable CPU models, an event-driven memory system model, and multiprocessor capability. Additionally, M5 is capable of application-only simulation using syscall emulation. M5 is freely distributable under a BSD-style license, and does not depend on any commercial or restricted-license software. Intended Audience: Researchers in academia or industry looking for a free, open-source, full-system simulation environment for processor, system, or platform architecture studies. Please register via the ASPLOS 2008 web page. Tentative Topics: The following topics will be discussed in detail during the tutorial:  M5 structure  Object structures  Specifying configurations  Object serialization (checkpoints)  Events  CPU models  Memory/cache models  I/O devices  Full-system modeling  Statistics  Debugging techniques  ISA description language  Future directions. Speakers:  Ali G. Saidi is a Ph.D. candidate in the EECS Department at the University of Michigan, and wrote much of the platform code for Linux full-system simulation. He received a BS in electrical engineering from the University of Texas at Austin and an MSE in computer science and engineering from the University of Michigan.  Steven K. Reinhardt is an associate professor in the EECS Department at the University of Michigan, and a principal developer of M5. He received a BS from Case Western Reserve University and an MS from Stanford University, both in electrical engineering, and a PhD in computer science from the University of Wisconsin-Madison. While at Wisconsin, he was the principal developer of the Wisconsin Wind Tunnel parallel architecture simulator.  Nathan L. Binkert is currently a Senior Research Scientist with HP Labs and a principal developer of M5. He received a BSE in electrical engineering and an MS and a PhD in computer science, both from the University of Michigan. 
As an intern at Compaq VSSAD, he was a principal developer of the ASIM simulator, currently in widespread use at Intel.  Steve Hines is a Ph.D. candidate in the CS Department at Florida State University, and created the ARM port of M5. He received a BS from Illinois Institute of Technology and an MS from Florida State University.",
        "url": "/events/asplos-2008"
      }
      ,
    
      "events-asplos-2017": {
        "title": "ASPLOS 2017",
        "content": "Architectural Exploration with gem5. Abstract: This tutorial will give a brief introduction to gem5 for computer engineers who are new to gem5. The attendees will learn what gem5 can and cannot do, how to use and extend gem5, as well as how to contribute back to gem5. Target Audience: The primary audience is junior computer architecture engineers (e.g., first- or second-year graduate students, as well as junior engineers) who are planning on using gem5 for future architecture research. We also invite others who want a high-level idea of how gem5 works and its applicability to architecture research. The tutorial is free to attend (no registration fee required); registration is required via ASPLOS. Prerequisites: Attendees are expected to have a working knowledge of C++, Python, and computer systems. Slides: The slides from the tutorial can be downloaded here. Schedule: The tutorial is scheduled for the afternoon of Sunday, 9th April 2017, at The Westin Xi’an hotel: Introduction, 13:00-13:10; Getting started with gem5, 13:10-13:30; Advanced configurations, 13:30-13:55; Debug &amp; Trace, 13:55-14:05; Creating SimObjects, 14:05-14:30; Break, 14:30-15:00; Introduction to memory subsystems, 15:00-15:45; Introduction to CPU models, 15:45-16:10; Advanced gem5 features and capabilities, 16:10-16:40; How to contribute to gem5, 16:40-17:00. Presenters: This tutorial is organised by Andreas Sandberg, Stephan Diestelhorst and William Wang of ARM Research",
        "url": "/events/asplos-2017"
      }
      ,
    
      "events-isca-2011": {
        "title": "ISCA 2011",
        "content": "Call for Participation: ISCA 2011 Tutorial. gem5: A Multiple-ISA Full System Simulator with Detailed Memory Modeling. Sunday, June 5, 2011, 8:30 am. http://www.gem5.org. The gem5 simulator is a merger of two of the computer architecture community’s most popular, open source simulators: M5 and GEMS. The best features of each simulator have been combined to provide an infrastructure capable of simulating multiple ISAs, CPU models, memory system components, cache coherence protocols, and interconnection networks. The gem5 simulation team invites users, developers, and all other interested parties to participate in a tutorial that will highlight the key aspects of the gem5 simulator. The first half of this full-day tutorial will be an organized presentation focusing on gem5 usage and capabilities. The second half is intended to be more free-form, where we will answer audience questions on specific usage, including modification of the simulator to enable new features. Topics to be discussed include:  Multiple ISA support (e.g. ARM and x86)  Detailed and simple CPU models, including “execute-in-execute” in-order and out-of-order pipeline models  Cache coherence protocols using SLICC  Interconnection network modeling (crossbar, mesh, etc.)  Checkpointing and fast-forwarding. We look forward to your participation in the gem5 tutorial and hope that by the end of the tutorial you’ll be able to utilize the gem5 infrastructure in your future research. Thanks, The gem5 Simulation Team",
        "url": "/events/isca-2011"
      }
      ,
    
      "events-isca-2015": {
        "title": "ISCA 2015 - 2nd User Workshop",
        "content": "Second gem5 User Workshop. June 14th, 2015; Portland, OR. Following up from a successful 2012 workshop, it is time for the 2015 edition of the gem5 user workshop. The primary objective of this workshop is to bring together groups across the community who are actively using gem5. Discussion topics will include the activity of the gem5 community, how we can best leverage each other’s contributions, and how we continue to make gem5 a successful, community-supported simulation framework. Those who will get the most out of the workshop are current users of gem5, although anyone is welcome to attend. The key part of the workshop is a set of presentations from the community about how individuals or groups are using the simulator, any features you have added that might be useful to others, any major pain points, and what can be done to make gem5 better and more broadly adopted. The hope is that this will provide a forum for people with similar uses or needs to connect with each other. Final Program: Introduction &amp; Overview of Changes, 9:00 AM, Steve Reinhardt (AMD); Classic Memory System Re-visited, 9:30 AM, Andreas Hansson (ARM); User Perspectives: AMD’s gem5 APU Simulator, 10:00 AM, Brad Beckmann (AMD); NoMali: Understanding the Impact of Software Rendering Using a Stub GPU, 10:15 AM, Andreas Sandberg (ARM); Cycle-Accurate STT-MRAM model in gem5, 10:30 AM, Cong Ma (University of Minnesota); An Accurate and Detailed Prefetching Simulation Framework for gem5, 10:45 AM, Martí Torrents Lapuerta (Polytechnic University of Catalonia); Break, 11:00 AM; Supporting Native PThreads in SE Mode, 11:30 AM, Brandon Potter (AMD); Dynamically Linked Executables in SE Mode, 11:45 AM, Brandon Potter (AMD); Coupling gem5 with SystemC TLM 2.0 Virtual Platforms, 12:00 PM, Matthias Jung (University of Kaiserslautern); SST/gem5 Integration, 12:15 PM, Simon D. Hammond (Sandia); Lunch, 12:30 PM; Full-System Simulation at Near Native Speed, 1:30 PM, Trevor Carlson (Uppsala University); Enabling x86 KVM-Based CPU Model in Syscall Emulation Mode, 1:45 PM, Alexandru Dutu (AMD); Parallel gem5 Simulation of Many-Core Systems with Software-Programmable Memories, 2:00 PM, Bryan Donyanavard (UC Irvine); Infrastructure for AVF Modeling, 2:15 PM, Mark Wilkening (AMD); gem5-Aladdin Integration for Heterogeneous SoC Modeling, 2:30 PM, Y. Sophia Shao (Harvard University); Experiences Implementing Tinuso in gem5, 2:45 PM, Maxwell Walter (Technical University of Denmark); Experiences with gem5, 3:00 PM, Miquel Moretó Planas (BSC/UPC); Little Shop of gem5 Horrors (see also Jason’s blog post), 3:15 PM, Jason Power (University of Wisconsin); Break, 3:30 PM; Breakout Sessions, 4:00 PM, Breakout Groups; Wrap-Up, 5:00 PM, Everyone; Conclusions, 5:30 PM, Ali Saidi (ARM)",
        "url": "/events/isca-2015"
      }
      ,
    
      "events-isca-2018": {
        "title": "ISCA 2018",
        "content": "AMD gem5 APU Simulator: Modeling GPUs Using the MachineISAHeld in conjunction with [ISCA 2018](http://iscaconf.org/isca2018/).June 2nd, 2018.Important DatesThe tutorial will be held on day one of the conference - June 2nd, 2018ISCA 2018 early registration and hotel reservation deadline - April16th, 2018AbstractAMD Research has developed an APU (Accelerated Processing Unit) modelthat extends gem5 [1] with a GPU timing model that executes the GCN(Graphics Core Next) generation 3 machine ISA [2, 3]. In addition tosupporting a modern machine ISA, the model supports running theopen-source Radeon Open Compute platform (ROCm) stack withoutmodification. This allows users to run a wide variety of applicationswritten in several high-level languages, including C++, HIP, OpenMP, andOpenCL. This provides researchers the ability to evaluate many differenttypes of workloads, from traditional compute applications to emergingmodern GPU workloads, such as task parallel and machine learningapplications. The resulting AMD gem5 APU simulator is a cycle-level,flexible research model that is capable of representing many differentAPU configurations, on-chip cache hierarchies, and system designs. OurAPU extensions allow researchers to model both CPU and GPU memoryrequests and the interactions between them. In particular, the modeluses SLICC and Ruby to implement a wide variety of coherence andsynchronization solutions, which is a critical research area inheterogeneous computing. The model has been used in several top-tiercomputer architecture publications in the last several years [MICRO2013, HPCA 2014, ASPLOS 2014, ISCA 2014, HPCA 2015, ASPLOS 2015, MICRO2016, HPCA 2017, ISCA 2017, HPCA 2018].In this tutorial, we will describe the capabilities of the AMD gem5 APUsimulator that will be publically released with a liberal BSD licensebefore ISCA 2018. We will detail the simulated APU architecture, reviewthe execution flow, and describe how the simulator has been used. 
Thepresentation will also discuss key design decisions and tradeoffs. Forexample, we use the system-call emulation mode to avoid running a fullOS and kernel driver, therefore we will describe the simulator’ssystem-call emulation interface, and how the ROCm runtime and user spacedrivers interact with it. Also, our GPU model now directly executesnative machine ISA instructions rather than the HSAIL intermediatelanguage representation. Previously relying on executing theintermediate language simplified workload compilation, but was lessaccurate when modeling hardware behavior. In this tutorial, we willhighlight many of the improvements enabled by executing the GCN3 ISA.[1]. Nathan Binkert et al. The gem5Simulator. In SIGARCH ComputerArchitecture News, vol. 39, no. 2, pp. 1-7, Aug. 2011.[2]. AMD. AMD GCN3 ISA ArchitectureManual[3]. Anthony Gutierrez et al. Lost in Abstraction: Pitfalls ofAnalyzing GPUs at the Intermediate LanguageLevel. In HPCA 2018.SlidesSchedule            Topic      Presenter      Time                  Background      Tony      8:00-8:15 am              ROCm Stack, GCN3 ISA, and uArch      Tony      8:15-9:15 am              HSA Queuing      Sooraj      9:15-10:00 am              Break      10:00-10:30 am                     Ruby and GPU Protocol Tester      Tuan      10:30-11:15 am              Demo/Workloads and Q+A      TBD      11:15-12:00 pm      PresentersTony Gutierrez (AMD Research)Sooraj Puthoor (AMD Research)Brad Beckmann (AMD Research)Tuan Ta (Cornell)",
        "url": "/events/isca-2018"
      }
      ,
    
      "events-micro-2012": {
        "title": "MICRO 2012 - 1st user workshop",
        "content": "First gem5 User WorkshopDecember 2012; Vancouver, BCThe primary objective of this workshop is to bring together groupsacross the community who are actively using gem5, discuss what is goingon in the gem5 community, how we can best leverage each otherscontributions, and how we continue to make gem5 a successfulcommunity-supported simulation framework. Those who will get the mostout of the conference are current users of gem5, although anyone iswelcome toattend.Program            Topic      Time      Presenter      Affiliation                  Introduction      8:30 AM      Ali Saidi      ARM              Recent Contributions                                   Memory System Enhancements      8:45 AM      Andreas Hannson      ARM              Visualizing stats via Streamline      9:05 AM      Dam Sunwoo      ARM              User Perspectives                                   HAsim: FPGA-Based Micro-Architecture Simulator      9:20 AM      Michael Adler      Intel              VLIW DSPs/MIPS FS mode      9:35 AM      Deyuan Guo and Hu He      Tsinghua Univ.              Eclipse Integration      9:50 AM      Deyuan Guo and Hu He      Tsinghua Univ.              Break      10:05 AM                            Full-System Workloads and Asymmetric Multi-Core Simulation      10:30 AM      Anthony Gutierrez      Univ. of Michigan              ARM SoC exploration      10:45 AM      Alexandre Romana and Abhilash Nair      Texas Instruments              SystemC integration      11:00 AM      Alexandre Romana      Texas Instruments              Composite Cores      11:15 AM      Shruti Padmanabha and Andrew Lukefahr      Univ. of Michigan              Customized InOrder CPU Modeling      11:30 AM      Korey Sewell      Univ. of Michigan (now at Qualcomm)              Cross-Cutting Infrastructure for Evaluating Managed Languages and Future Architectures      11:45 AM      Paul Gratz      Texas A\\&amp;M Univ.              
Lunch      12:00 PM                            Simplifying SLICC via Atomic Messages      1:00 PM      Brad Beckmann      AMD              Accelerating Simulation with Virtual Machines      1:15 PM      Ali Saidi      ARM              gem5-gpu: A Simulator for Heterogeneous Processors      1:30 PM      Jason Power and Marc Orr      Univ. of Wisconsin-Madison              Breakout Sessions                                   Breakout Sessions      1:45 PM      Breakout Groups                     Break      3:00 PM                            Wrap-Up/Next Steps      3:30 PM      Everyone                                                         Conclusions      4:00 PM      Steve Reinhardt      AMD      LocationThe workshop is co-located withMICRO-45 in Vancouver, BC.DateSunday December 2nd from 8:30 - 16:30.NOTOC",
        "url": "/events/micro-2012"
      }
      ,
    
      "events-asplos-2018": {
        "title": "ASPLOS 2018",
        "content": "Learning gem5 Tutorial at ASPLOS 2018Thanks to all of those who attended the tutorial! Links to the slides and videos are below.  Part 1: Slides and Video  Part 2: Slides and Video 1 Video 2  Part 3: Slides and Video  Part 4: Slides and Video  Part N: SlidesWe will be hosting a Learning gem5 tutorial at ASPLOS 2018 in Williamsburg, VA on March 24th.gem5 is used by an incredible number of architecture researchers. The gem5 paper has been cited over 2000 times according to Google Scholar. However, gem5 is a unique software infrastructure; as a user, you also have to be a developer. Currently, there are few resources for young computer architects to learn how to productively use gem5.This tutorial builds off of the Learning gem5 book and will introduce junior architecture students to the inner workings of gem5 so they can be more productive in their future research. The goal of the “tutorial” section of this tutorial is not to introduce attendees to every feature of gem5, but to give them a framework to succeed when using gem5 in their future research.After spending the morning learning about the basics of how gem5 works, the afternoon will be a series of invited talks from users who have experience using gem5 on “gem5 best practices”. This will cover a variety of topics including the basics of computer architecture research, software development practices, and how to contribute to the gem5 open source project.This tutorial is perfect for beginning graduate students or other computer architecture researchers to get started using one of the architecture communities most popular too.This page is under development. It will be updated often leading up to the day of the tutorial. Hope to see you there!Preparing for the tutorialTo get the most out of this tutorial, you are encouraged to bring a laptop to work along. This will be an interactive tutorial, with many coding examples. 
Additionally, by bringing a laptop, you will be able to easily participate in the afternoon coding sprint.While this tutorial is appropriate for you even if you’ve never used gem5 before, you’ll get more out of it if you familiarize yourself with gem5 before coming. Specifically, by downloading gem5 and making sure it builds on your system you will save yourself a lot of time. Reading and completing the first chapter from the the Learning gem5 book before coming to the tutorial is strongly encouraged.AudienceThe primary audience is junior computer architecture researchers (e.g., first or second year graduate students) who are planning on using gem5 for future architecture research. We also invite others who want a high-level idea of how gem5 works and its applicability to architecture research.ScheduleMorning Schedule: Learning gem5 8:30 – 10:00  Breakfast 7:00 – 8:30  What is gem5 and history  Getting started with gem5          Overall (software) architecture of gem5      Compiling gem5      Simple introduction script      First time running gem5      Interpreting gem5’s output      Simple assembly example to show debug trace of everything        Extending gem5          Structure of C++ code      Writing a simple SimObject        BREAK 10:00 – 10:30          Discrete event simulation programming      SimObject parameters      gem5 memory system      Overview of simple cache implementation      Lunch (Provided) 12:00 – 1:30Advanced Learning gem5 topics 1:30 – 3:30  Building a CPU model in gem5          ISAs and CPU model ISA relation      Overview of different CPU models      Building a simple CPU model        Coherence protocols with Ruby          Intro to Ruby      Simple MSI protocol      Configuring Ruby      Debugging Ruby protocols        Quick overview of other gem5 topics          Overview of full system simulation      Briefly gem5’s other features      gem5 limitations        BREAK 3:30 – 4:00gem5 Best Practices 4:00 – 5:00      Developing and 
contributing to gem5    This will cover an quick introduction to git, best practices for contributing, how to test gem5, and how to use gem5’s code review site.        Ryota Shioya: Visualizing the out-of-order CPU model    Konata is a new CPU pipeline viewer and has many useful features not in the previous text-based viewer. This talk will explain how to use the new viewer and best practices in gem5. [https://github.com/shioyadan/Konata/releases]    Link to presentation        Éder F. Zulian: Using gem5 for Memory Research    This talk provides an overview of our experiences with the gem5 simulator at the Microelectronic System Design Research Group of the TU Kaiserslautern. It begins with our motivation and use cases for applying gem5. Then we jump ahead to a brief description of innovations introduced by our research group and partners.    The span of topics covers the DRAM power model used by gem5 (DRAMPower), which is being currently extended and maintained by our group. Furthermore, we show how a simple HMC memory model can be built from native objects provided by gem5, the configuration parameters are generated by our DRAMSpec tool.    Moreover, we present how gem5 can be coupled to SystemC/TLM2.0 based modules, an interesting approach for industry to reuse in-house and third-party SystemC modules together with gem5. Finally, we close the session showing a bunch of useful scripts, called gem5 Tips and Tricks, for setting up and breaking the ice with gem5.    Link to presentation  Open forum for questions and feedback 5:00 – 5:30",
        "url": "/events/asplos-2018"
      }
      ,
    
      "events-hpca-2017": {
        "title": "HPCA 2017",
        "content": "Learning gem5 Tutorial and Coding Sprint at HPCA 2017We will be hosting a Learning gem5 tutorial at HPCA 17 in Austin, TX. This tutorial will consist of two parts. In the morning, we will cover an introduction to gem5. Namely, I will be giving a series of lectures following the Learning gem5 book.In the afternoon, we will have a gem5 coding sprint. You can find more information about coding sprints on Wikipedia. We are planning on pairing junior developers, including those who attend the morning Learning gem5 tutorial, with more senior developers and squashing some gem5 bugs or adding small new features. We will have a list of small gem5 projects that can be knocked out in an afternoon. Hopefully, through this sprint, we will be able to expand the developers of gem5.This page is under development. It will be updated often leading up to the day of the tutorial. Hope to see you there!Preparing for the tutorialTo get the most out of this tutorial, you are encouraged to bring a laptop to work along. This will be an interactive tutorial, with many coding examples. Additionally, by bringing a laptop, you will be able to easily participate in the afternoon coding sprint.While this tutorial is appropriate for you even if you’ve never used gem5 before, you’ll get more out of it if you familiarize yourself with gem5 before coming. Specifically, by downloading gem5 and making sure it builds on your system you will save yourself a lot of time. Reading and completing the first chapter from the the Learning gem5 book before coming to the tutorial is strongly encouraged.About this tutorialgem5 is used by an incredible number of architecture researchers. The gem5 paper was cited by more than 800 papers last year (2015) alone according to Google Scholar. However, gem5 is a unique software infrastructure; as a user, you also have to be a developer. Currently, there are few resources for young computer architects to learn how to productively use gem5. 
Building off of a book, Learning gem5, this tutorial will introduce junior architecture students to the inner workings of gem5 so they can be more productive in their future research. The goal of the “tutorial” section of this tutorial is not to introduce attendees to every feature of gem5, but to give them a framework to succeed when using gem5 in their future research.After spending the morning learning about how gem5 works, the afternoon will be a hands-on “code sprint”. Members of the gem5 development community will introduce new gem5 contributors to the code submission and review process. We will spend the afternoon in small groups squashing simple bugs in gem5. This exercise will both help junior architects be more productive in their future work and improve gem5 at the same time.AudienceThe primary audience is junior computer architecture researchers (e.g., first or second year graduate students) who are planning on using gem5 for future architecture research. We also invite others who want a high-level idea of how gem5 works and its applicability to architecture research.For the afternoon coding sprint, we invite all gem5 developers to participate. 
The more participation we have from experienced developers, the more we can get done!ScheduleMore details to come soon.Morning Schedule: Learning gem5  8:30 — What is gem5 and history  8:40 — Getting started with gem5          Overall (software) architecture of gem5      Compiling gem5      Simple introduction script      First time running gem5      Interpreting gem5’s output      Simple assembly example to show debug trace of everything        9:15 — Extending gem5          Structure of C++ code      Writing a simple SimObject        BREAK 10:00 — 10:30          10:30 — Discrete event simulation programming      SimObject parameters      10:50 — gem5 memory system        11:40 — Quick overview of other gem5 topics          Overview of full system simulation      Overview of Ruby      Briefly gem5’s other features      gem5 limitations      12:00 — 1:30 Lunch (Will be provided!)Afternoon Schedule: gem5 Coding Sprint      Developing and contributing to gem5 — Andreas Sandberg [~30 minutes]    Andreas will describe the process of writing code and contributing changes to mainline gem5. He will go over the code submission and code review process. This will cover gem5’s new submission and code review process using Gerrit!    Split into small groups to work on code!  Closing statements, recap, and feedback.",
        "url": "/events/hpca-2017"
      }
      ,
    
      "events-ics-2018": {
        "title": "ISC-2018 Vector Architecture Exploration",
        "content": "Vector Architecture Exploration with gem5 (Arm)AbstractThe Arm Scalable Vector Extension (SVE) is a key enabling technology toaccelerate HPC and machine learning workloads on future Arm-basedprocessors. SVE does not set a specific vector length, which ismicroarchitecture-specific. This vector-length agnosticism increasesdesign space complexity and exacerbates the importance of havingflexible and accurate modeling tools.gem5 is an open-source full-system microarchitectural simulator that iswidely used in academia and industry. Arm is a major contributor to gem5and has developed and upstreamed many features and models. SVE supportin gem5 is being finalized to be made publicly available to enable usersto simulate multi-core architectures with SVE using Arm-provided timingmodels.This tutorial covers the features of SVE, the trade-offs of designing amulti-core that uses vectors, and the publicly available tools to modelthe performance of such vector architectures, with an emphasis on gem5with SVE support. In addition to gem5, the tutorial will also coverother analysis tools for SVE, such as the Arm Instruction Emulator,which will be made available to the participants through docker imagesto provide a quick start in these environments.Target AudienceThe primary audience are computer architect engineers both in academia(e.g., graduate students) and in industry who want to learn about theArm Scalable Vector Extension (SVE) and the Arm tools for SVE, or areplanning to use gem5 for architecture research, especially if they planto explore Arm vector architectures. 
The tutorial is also expected to beuseful as a high-level introduction to gem5 and how it can be used forarchitecture research.Prerequisites: working knowledge of computer systems, vectorarchitectures, C++ and Python is recommended.Schedule (tentative)            Topic      Time                  Introduction      10 min              The Arm Scalable Vector Extension      30 min              Vector Architecture Design and Tools      30 min              Introduction to gem5      15 min                                    Break      30 min              gem5 Basics      45 min              gem5 Advanced Features      45 min              SVE gem5 Simulation      30 min              Closing      5 min      OrganizersTutorial organized by Alex Rico and Jose Joao of Arm",
        "url": "/events/ics-2018"
      }
      ,
    
      "events": {
        "title": "Events",
        "content": "  ISCA 2020: 3rd gem5 Users’ Workshop  ISCA 2020: Learning gem5 Tutorial  ICS 2018: Vector Architecture Exploration with gem5  ASPLOS 2018: Learning gem5  Arm Research Starter Kit on System Modeling using gem5  ISCA 45: AMD gem5 APU Model  Arm Research Summit 2017: gem5 workshop  gem5 Tutorial and Coding Sprint at HPCA 2017  dist-gem5 at ISCA-44 (Toronto, 2017)  ASPLOS 22  HiPEAC Computer Systems Week  ISCA 38  ASPLOS-13  ISCA-33  ISCA-32We have held a handful of tutorials on M5/gem5s at various conferences. Thoughthe material in these tutorials can be out of date, the tutorialmaterials present a more organized (and in some cases more in-depth)overview than the wiki documentation. We highly recommend taking a lookat the most recent tutorial as a complement to the documentation on thewiki.The slides and handouts are the same material except that the handoutsare formatted with two slides per page.ISCA 2020: 3rd gem5 Users’ WorkshopMore information on the workshop page.The goal of the workshop is to provide a forum to discuss what is going on in the community, how we can best leverage each other’s contributions, and how we can continue to make gem5 a successful community-supported simulation framework. 
The workshop will be a half day in the afternoon on May 30.Details on how to submit an abstract for a presentation can be found on the workshop page.ISCA 2020: Learning gem5 TutorialMore information on the tutorial page.This tutorial builds off of Learning gem5 and will introduce architecture researchers to the inner workings of gem5.The goal of the tutorial is not to introduce attendees to every feature of gem5, but to give them a framework to succeed when using gem5 in their future research.ICS 2018: Vector Architecture Exploration with gem5Vector Architecture Exploration withgem5International Conference on Supercomputing, Beijing (China), June 2018This tutorial covers the Arm Scalable Vector Extension (SVE) and how touse gem5 to explore system architecture designs of microarchitecturesimplementing SVE.ASPLOS 2018: Learning gem5Full-day gem5 tutorial at ASPLOS 2018This tutorial covers the basics of building gem5, running it, extending and contributing to gem5, and other advanced gem5 topics.Arm Research Starter Kit on System Modeling using gem5https://github.com/arm-university/arm-gem5-rskGetting started instructions and an overview of the HPI model.ISCA 45: AMD gem5 APU ModelAMD gem5 APU Simulator: Modeling GPUs Using the MachineISAThis tutorial covers the gem5 APU model in detail. 
In particular, wediscuss the model’s support for executing GPU machine ISA instructionsand the full user space ROCm stack.Arm Research Summit 2017: gem5 workshopARM Research Summit 2017Workshop covers manyadvanced topics in gem5 such as Ruby, Garnet, and SystemC.gem5 Tutorial and Coding Sprint at HPCA 2017This tutorial introduces gem5 topics covered in the Learning gem5 book and paired junior software developers with seniors developers in a coding sprint to add features and bug fixes to the gem5 codebase using Gerrit.dist-gem5 at ISCA-44 (Toronto, 2017)dist-gem5 is a gem5-based simulation infrastructure which enablesfull-system simulation of a parallel/distributed computer system usingmultiple simulation hosts.  Tutorial websiteASPLOS 22Full day tutorial on gem5 atASPLOS 2017HiPEAC Computer Systems WeekThis tutorial was held in Gothenburg, Sweden in April 2012. It coversgem5 although for information about Ruby you should look at the ISCA 38tutorial. We recorded video of the tutorial which is available    below.  Slides  Overview  Introduction  Basics  RunningExperiments  Debugging  Memory  CPUModels  CommonTasks  Configuration  ConclusionISCA 38This tutorial, held in June 2011 at ISCA-38, it covered gem5 (the mergerbetween M5 and GEMS). It was extremely well attended with 65 peopleparticipating.ISCA 2011  Slides  Podcasts/video coming soon provided there are no technicaldifficultiesASPLOS-13This tutorial, held in March 2008 at ASPLOS XIII in Seattle, covered M52.0 and included several small examples on creating SimObjects andadding parameters.  
Slides  Handouts  Video          Introduction– A brief overview of M5, its capabilities and concepts      Running –How to compile and run M5      FullSystem –Full system benchmarks, disk images, and scripts      Objects – Anoverview of the various object models that are available out ofthe box      Extending– M5 internals, defining new objects &amp; parameters, statistics,ISA descriptions, ARM &amp; X86 support, future development      Debugging– Facilities in M5 to aid debugging        DescriptionISCA-33This tutorial, held in June 2006 at ISCA 33 in Boston, was the first oneto cover M5 2.0.  Slides  Handouts  DescriptionISCA-32Our first tutorial, held in June 2005 at ISCA 32 in Madison, is ratherdated as it covered M5 1.X and not 2.0.  Slides  Handouts",
        "url": "/events/"
      }
      ,
    
      "events-isca-2006": {
        "title": "ISCA 2006",
        "content": "Using the M5 Simulator ISCA 2006 Tutorial Sunday June 18th, 2006IntroductionThis half-day tutorial will introduce participants to the M5 simulatorsystem. M5 is a modular platform for computersystem architecture research, encompassing system-level architecture aswell as processor microarchitecture.We will be releasing version 2.0 of M5 in conjunction with thistutorial. Features new in 2.0 include:  Multiple ISA support (Alpha, MIPS, and SPARC)  An all-new, execute-in-execute out-of-order SMT CPU timing model,with no SimpleScalar license encumbrance  All-new, message-oriented interface for memory system objects,designed to simplify the development of non-bus interconnects  More extensive Python integration and scripting supportBecause the primary focus of the M5 development team has been simulationof network-oriented server workloads, M5 incorporates several featuresnot commonly found in other simulators.  Full-system simulation using unmodified Linux 2.4/2.6, HP Tru64 5.1,or L4Ka::Pistachio) (Alphaonly at this time… coming in the future for MIPS and SPARC)  Detailed timing of I/O device accesses and DMA operations  Accurate, deterministic simulation of multiple networked systems  Flexible, script-driven configuration to simplify specification ofcomplex multi-system configurations  Included network workloads such as Apache, NAT, and NFS  Support for storing results from multiple simulations in a unifieddatabase (e.g. 
MySQL) for automated reporting and graph generationM5 also integrates a number of other desirable features, includingpervasive object orientation, multiple interchangeable CPU models, anevent-driven memory system model, and multiprocessor capability.Additionally, M5 is also capable of application-only simulation usingsyscall emulation.M5 is freely distributable under a BSD-style license, and does notdepend on any commercial or restricted-license software.Intended AudienceResearchers in academia or industry looking for a free, open-source,full-system simulation environment for processor, system, or platformarchitecture studies. Please register via theISCA 2006 web page.Tentative Outline  M5 structure          Object structure                  Intro to SimObjects          Object builder          Configuration language          Specialization using C++ templates          Object serialization (checkpointing)                    Events        CPU models          Simple functional model      Detailed out-of-order model      Sampling and warm-up support        Memory &amp; I/O system overview          Cache models      Interconnect models (busses, point-to-point networks)      Coherence support      I/O modeling                  Programmed I/O (uncached accesses)          DMA I/O                    Ethernet model                  NIC device models          Linux driver          Link layer model                      Full-system modeling          Building disk images      Console and PAL code      Running benchmarks via system init scripts      Target kernel introspection support        Statistics          Built-in statistics types      Adding new statistics      Using the database back end                  Setting up a results database          Using scripts to generate reports and graphs from thedatabase                      Debugging techniques          Built-in debugging support                  Tracing          Runtime checking          Gdb hooks                    
Debugging target code (including kernels) using remote gdb        ISA description language          Adding your own instructions to the ISA      Adding support for new ISAs      Speakers  Steven K. Reinhardt is an associate professor in the EECS Departmentat the University of Michigan, and a principal developer of M5. Hereceived a BS from Case Western Reserve University and an MS fromStanford University, both in electrical engineering, and a PhD incomputer science from the University of Wisconsin-Madison. While atWisconsin, he was the principal developer of the Wisconsin WindTunnel parallel architecture simulator.  Nathan L. Binkert received his Ph.D. candidate from the EECSDepartment at the University of Michigan, and a principal developerof M5. He received a BSE in electrical engineering and MS incomputer science both from the University of Michigan. As an internat Compaq VSSAD, he was a principal developer of the ASIM simulator,currently in use at Intel and is currently with Arbor Networks.  Ronald G. Dreslinski is a Ph.D. student in the EECS Department atthe University of Michigan, and a developer of M5’s memory system.He received a BSE in electrical engineering, a BSE in computerengineering, and a MSE in computer science and engineering all fromthe University of Michigan.  Kevin T. Lim is a Ph.D. student in the EECS Department at theUniversity of Michigan, and the developer of M5’s detailed CPUmodel. He received a BSE in computer engineering and an MSE incomputer science and engineering from the University of Michigan.  Ali G. Saidi is a Ph.D. candidate in the EECS Department at theUniversity of Michigan, and wrote much of the platform code forLinux full-system simulation. He received a BS in electricalengineering from the University of Texas at Austin and an MSE incomputer science and engineering from the University of Michigan.NOTOC",
        "url": "/events/isca-2006"
      }
      ,
    
      "events-isca-2020": {
        "title": "ISCA 2020: Learning gem5 Tutorial and gem5 Users' Workshop",
        "content": "  Learning gem5 Tutorial and gem5 Users’ Workshop          Learning gem5 Tutorial      Schedule      3rd gem5 Users’ Workshop (Afternoon)      Learning gem5 Tutorial and gem5 Users’ WorkshopIn conjunction with ISCA 2020, we will be holding a gem5 Tutorial and Workshop on May 30th in Valencia, Spain.In the morning, we will be running a Learning gem5 Tutorial, and in the afternoon we will have a gem5 users’ workshop.The workshop will begin with a keynote detailing the recent changes in gem5 and announcing the first stable version of gem5, gem5-20.The workshop will also include a number of community-contributed talks.See below for details on how to submit an abstract for a talk.Note: You are not required to register for ISCA to register for this workshop.We hope to see you in Valencia!Learning gem5 TutorialThis tutorial builds off of Learning gem5 and will introduce architecture researchers to the inner workings of gem5.The goal of the tutorial is not to introduce attendees to every feature of gem5, but to give them a framework to succeed when using gem5 in their future research.This tutorial is perfect for beginning graduate students or other computer architecture researchers to get started using one of the architecture communities most popular tool.Preparing for the tutorialTo get the most out of this tutorial, you are encouraged to bring a laptop to work along. This will be an interactive tutorial, with many coding examples. Additionally, by bringing a laptop, you will be able to easily participate in the afternoon coding sprint.While this tutorial is appropriate for you even if you’ve never used gem5 before, you’ll get more out of it if you familiarize yourself with gem5 before coming. Specifically, by downloading gem5 and making sure it builds on your system you will save yourself a lot of time. 
Reading and completing the first chapter from Learning gem5 before coming to the tutorial is strongly encouraged.AudienceThe primary audience is computer architecture researchers that wish to learn how to use gem5, one of the architecture communities most popular and powerful simulatorsThis includes junior computer architecture researchers (e.g., first or second year graduate students) who are planning on using gem5 for future architecture research.We also invite others who want a high-level idea of how gem5 works and its applicability to architecture research.ScheduleLearning gem5 8:30 – 10:00  What is gem5 and history  Getting started with gem5          Overall (software) architecture of gem5      Compiling gem5      Simple introduction script      First time running gem5      Interpreting gem5’s output      Simple assembly example to show debug trace of everything        Extending gem5          Structure of C++ code      Writing a simple SimObject        BREAK          Discrete event simulation programming      SimObject parameters      gem5 memory system      Overview of simple cache implementation        Quick overview of other gem5 topics          Overview of full system simulation      Briefly gem5’s other features      gem5 limitations      3rd gem5 Users’ Workshop (Afternoon)Call for presentationsThe gem5 community is excited to announce the 3rd gem5 Users’ workshop held in conjunction with ISCA 2020 in Valencia, Spain. The goal of the workshop is to provide a forum to discuss what is going on in the community, how we can best leverage each other’s contributions, and how we can continue to make gem5 a successful community-supported simulation framework. 
The workshop will be a half day in the afternoon of May 30.The workshop will follow a half-day “Learning gem5” tutorial.The workshop will include a keynote presentation “RE-gem5 and gem5-20: Past, Present, and Future of the gem5 Community Infrastructure.”We invite the gem5 community to submit abstracts (1-2 paragraphs) for short presentations. The scope of this workshop is broadly the gem5 user and development community. Topics of interest include:  New features added to gem5  New models added to gem5  Extensions and integrations with other simulators  Experience using gem5  Validation of gem5 modelsWe encourage presenters of accepted talks to post a full paper to arXiv or another archival repository in order to give other users a citable source for the contribution. These sources may be cited in future gem5 release notes.Please submit your abstracts via this Google Form. The deadline to submit an abstract is April 10th, and we will send notifications by April 14th, before the ISCA early registration deadline (April 16th). Due to the proximity of other deadlines/conferences and the early registration deadline, there will not be any extension.Form for abstract submission: https://forms.gle/UnpFXRvpLEFKJBb46 More information can be found on the gem5 website: https://www.gem5.org/events/isca-2020 Looking forward to seeing you in Valencia!Draft Agenda for Workshop            Time      Event                  1-1:45      Keynote: RE-gem5 and gem5-20: Past, Present, and Future of the gem5 Community Infrastructure              1:45-2      Community feedback              2-3      Community presentations              3-3:30      Break              3:30-4:30      Community presentations              4:30-5      Wrap up and more feedback      ",
        "url": "/events/isca-2020"
      }
      ,
    
      "getting-started": {
        "title": "Getting Started with gem5",
        "content": "Getting Started with gem5First stepsThe gem5 simulator is most useful for research when you build new models and new features on top of the current codebase.Thus, the most common way to use gem5 is to download the source and build it yourself.To download gem5, you can use git to check out the current stable branch.If you’re not familiar with version control or git, The git book (available online for free) is a great way to learn more about git and become more comfortable using version control.The canonical version of gem5 is hosted by Google on googlesource.com.However, there is a GitHub mirror as well.It is strongly suggested to use the googlesource version of gem5, and it is required if you want to contribute any changes back to the gem5 mainline.git clone https://gem5.googlesource.com/public/gem5After cloning the source code, you can build gem5 by using scons.Building gem5 can take anywhere from a few minutes on a large server to 45 minutes on a laptop.gem5 must be built on a Unix platform.Linux is tested on every commit, and some people have been able to use MacOS as well, though it is not regularly tested.It is strongly suggested not to compile gem5 when running in a virtual machine.When running with a VM on a laptop, gem5 can take over an hour just to compile.The building gem5 page provides more details on building gem5 and its dependencies.cd gem5; scons build/X86/gem5.opt -j &lt;NUMBER OF CPUs ON YOUR PLATFORM&gt;Now that you have a gem5 binary, you can run your first simulation!gem5’s interface is Python scripts.The gem5 binary reads in and executes the provided Python script, which creates the system under test and executes the simulator.In this example, the script creates a very simple system and executes a “hello world” binary.More information about the script can be found in the Simple Config chapter of the Learning gem5 book.build/X86/gem5.opt configs/learning_gem5/part1/simple.pyAfter running this command, you’ll see gem5’s output as 
well as Hello world, which comes from the hello world binary!Now, you can start digging into how to use and extend gem5!Next steps  Learning gem5 is a work-in-progress book describing how to use and develop with gem5. It contains details on how to create configuration files, extend gem5 with new models, gem5’s cache coherence model, and more.  gem5 Events frequently occur alongside computer architecture conferences and at other locations.  You can get help on gem5’s mailing lists or by following the gem5 tag on Stack Overflow.  The contributing guide describes how to contribute your code changes and other ways to contribute to gem5.Tips for Using gem5 in ResearchWhat version of gem5 should I use?The gem5 git repository has two branches: develop and master. The develop branch contains the very latest gem5 changes but is not stable. It is frequently updated. The develop branch should only be used when contributing to the gem5 project (please see our Contributing Guide for more information on how to submit code to gem5).The master branch contains stable gem5 code. The HEAD of the master branch points towards the latest gem5 release. We would advise researchers to use the latest stable release of gem5 and report which version was used when publishing results (use git describe to see the latest gem5 release version number).If replicating previous work, please find which version of gem5 was used. This version should be tagged on the master branch and can thereby be checked out on a new branch using git checkout -b {branch} {version}.E.g., to check out v19.0.0 on a new branch called version19: git checkout -b version19 v19.0.0. A complete list of released gem5 versions can be determined by executing git tag on the master branch.How should I cite gem5?You should always cite the gem5 paper.The gem5 Simulator. Nathan Binkert, Bradford Beckmann, Gabriel Black, Steven K. Reinhardt, Ali Saidi, Arkaprava Basu, Joel Hestness, Derek R. 
Hower, Tushar Krishna, Somayeh Sardashti, Rathijit Sen, Korey Sewell, Muhammad Shoaib, Nilay Vaish, Mark D. Hill, and David A. Wood. May 2011, ACM SIGARCH Computer Architecture News.You should also specify the version of gem5 you use in your methodology section.If you didn’t use a specific stable version of gem5 (e.g., gem5-20.1.3), you should state the commit hash as shown on https://gem5.googlesource.com/.If you use the GPU model, the DRAM model, or any of the other models in gem5 that have been published, you’re encouraged to cite those works as well.See the publications page for a list of models that have been contributed to gem5 beyond the original paper.How should I refer to gem5?“gem5” should always have a lowercase “g”. If it makes you uncomfortable beginning a sentence with a lowercase letter or your editor requires a capital letter, you can instead refer to gem5 as “The gem5 Simulator”.Can I use the gem5 logo?Absolutely!The gem5 logo was created by Nicole Hill and put into the public domain under the CC0 license.You can download the full-sized logo from these links:  Vertical color  Horizontal color  All logos (svg)Please follow the gem5 logo style guide when using the gem5 logo.More details and more versions of the logo can be found in the source for gem5’s documentation.",
        "url": "/getting_started/"
      }
      ,
    
      "governance": {
        "title": "Governance",
        "content": "  Overview  Philosophy  gem5 Roadmap  Roles And Responsibilities          Users      Contributors      Committers      Project management committee      PMC Chair        Support  Contribution Process          Reviewing Patches        Decision Making Process          Lazy consensus      Voting      Overviewgem5 is a meritocratic, consensus-based community project. Anyone with an interest in the project can join the community, contribute to the project design and participate in the decision-making process. Historically, gem5 development has been carried out both in industry and in academia. This document describes how that participation takes place and how to set about earning merit within the project community.The document is broken into a number of sections. Philosophy describes the ideas behind the gem5 community. The Roadmap section points to the roadmap document for gem5’s development. Users and Responsibilities describes the classes of users that use gem5, the types of gem5 contributors, and their responsibilities. Support describes how the community supports users and the Contribution process describes how to contribute. Finally, the Decision Process describes how decisions are made and then we conclude.PhilosophyThe goal of gem5 is to provide a tool to further the state of the art in computer architecture. gem5 can be used for (but is not limited to) computer-architecture research, advanced development, system-level performance analysis and design-space exploration, hardware-software co-design, and low-level software performance analysis. Another goal of gem5 is to be a common framework for computer architecture. A common framework in the academic community makes it easier for other researchers to share workloads as well as models and to compare and contrast with other architectural techniques.The gem5 community strives to balance the needs of its three user types (academic researchers, industry researchers, and students, detailed below). 
For instance, gem5 strives to balance adding new features (important to researchers) and a stable code base (important for students). Specific user needs important to the community are enumerated below:  Effectively and efficiently emulate the behavior of modern processors in a way that balances simulation performance and accuracy  Serve as a malleable baseline infrastructure that can easily be adapted to emulate the desired behaviors  Provide a core set of APIs and features that remain relatively stable  Incorporate features that make it easy for companies and research groups to stay up to date with the tip and continue contributing to the projectAdditionally, the gem5 community is committed to openness, transparency, and inclusiveness. Participants in the gem5 community of all backgrounds should feel welcome and encouraged to contribute.gem5 RoadmapThe roadmap for gem5 can be found on Roadmap page. The roadmap document details the short and long term goals for the gem5 software. Users of all types are encouraged to contribute to this document and shape the future of gem5. Users are especially encouraged to update the roadmap (and get consensus) before submitting large changes to gem5.Roles And ResponsibilitiesUsersUsers are community members who have a need for the project. They are the most important members of the community and without them the project would have no purpose. Anyone can be a user; there are no special requirements. There are currently three main categories of gem5 users: academic researchers, industry researchers, and students. Individuals may transition between categories, e.g., when a graduate student takes an industry internship, then returns to school; or when a student graduates and takes a job in industry. These three users are described below.Academic ResearchersThis type of user primarily encompasses individuals that use gem5 in academic research. 
Examples include, but are not limited to, graduate students, research scientists, and post-graduates. This user often uses gem5 as a tool to discover and invent new computer architecture mechanisms. Academic Researchers often are first exposed to gem5 as Students (see below) and transition from Students to Academic Researchers over time.Because of these users’ goals, they primarily add new features to gem5. It is important to the gem5 community to encourage these users to contribute their work to the mainline gem5 repository. By encouraging these users to commit their research contributions, gem5 will make it much easier for other researchers to compare and contrast with other architectural techniques (see Philosophy section).Industry ResearchersThis type of user primarily encompasses individuals working for companies that use gem5. These users are distinguished from academic researchers in two ways. First, industry researchers are often part of a larger team, rather than working individually on gem5. Second, industry researchers often want to incorporate proprietary information into private branches of gem5. Therefore, industry researchers tend to have rather sophisticated software infrastructures built around gem5. For these users, the stability of gem5 features and baseline source code is important. Another key consideration is the fidelity of the models, and their ability to accurately reflect realistic implementations. To enable industry participation, it is critical to maintain licensing terms that do not restrict or burden the use of gem5 in conjunction with proprietary IP.StudentsThis type of user primarily encompasses individuals that are using gem5 in a classroom setting. These users typically have some foundation in computer architecture, but they have little or no background using simulation tools. 
Additionally, these users may not use gem5 for an extended period of time, after finishing their short-term goals (e.g., a semester-long class).The project asks its users to participate in the project and community as much as possible. User contributions enable the project team to ensure that they are satisfying the needs of those users. Common user contributions include (but are not limited to):  evangelising about the project (e.g., a link on a website and word-of-mouth awareness raising)  informing developers of strengths and weaknesses from a new user perspective  providing moral support (a ‘thank you’ goes a long way)  providing financial support (the software is open source, but its developers need to eat)Users who continue to engage with the project and its community will often become more and more involved. Such users may find themselves becoming contributors, as described in the next section.ContributorsContributors are community members who contribute in concrete ways to the project. Anyone can become a contributor, and contributions can take many forms. There are no specific skill requirements and no selection process.  There is only one expectation of commitment to the project: contributors must be respectful to each other during the review process and work together to reach compromises. 
See the “Reviewing Patches” section for more on the process of contributing.In addition to their actions as users, contributors may also find themselves doing one or more of the following:  answering questions on the mailing lists, particularly the “easy” questions from new users (existing users are often the best people to support new users), or those that relate to the particular contributor’s experiences  reporting bugs  identifying requirements  providing graphics and web design  programming  assisting with project infrastructure  writing documentation  fixing bugs  adding features  acting as an ambassador and helping to promote the projectContributors engage with the project through the Review Board and mailing list, or by writing or editing documentation. They submit changes to the project source code via patches submitted to Review Board, which will be considered for inclusion in the project by existing committers (see next section). The developer mailing list is the most appropriate place to ask for help when making that first contribution.As contributors gain experience and familiarity with the project, their profile within, and commitment to, the community will increase. At some stage, they may find themselves being nominated for committership.CommittersCommitters are community members who have shown that they are committed to the continued development of the project through ongoing engagement with the community. Committership allows contributors to more easily carry on with their project related activities by giving them direct access to the project’s resources. That is, they can make changes directly to project outputs, although they still have to submit code changes via Review Board. Additionally, committers are expected to have an ongoing record of contributions in terms of code, reviews, and/or discussion.Committers have no more authority over the project than contributors. 
While committership indicates a valued member of the community who has demonstrated a healthy respect for the project’s aims and objectives, their work continues to be reviewed by the community. The key difference between a committer and a contributor is committers have the extra responsibility of pushing patches to the mainline. Additionally, committers are expected to contribute to discussions on the gem5-dev list and review patches.Anyone can become a committer. The only expectation is that a committer has demonstrated an ability to participate in the project as a team player. Specifically, refer to the 2nd paragraph of the Contributors section.Typically, a potential committer will need to show that they have an understanding of the project, its objectives and its strategy (see Philosophy section). They will also have provided valuable contributions to the project over a period of time.New committers can be nominated by any existing committer. Once they have been nominated, there will be a vote by the project management committee (PMC; see below). Committer nomination and voting is one of the few activities that takes place on the project’s private management list. This is to allow PMC members to freely express their opinions about a nominee without causing embarrassment. Once the vote has been held, the nominee is notified of the result. The nominee is entitled to request an explanation of any ‘no’ votes against them, regardless of the outcome of the vote. This explanation will be provided by the PMC Chair (see below) and will be anonymous and constructive in nature.Nominees may decline their appointment as a committer. However, this is unusual, as the project does not expect any specific time or resource commitment from its community members. 
The intention behind the role of committer is to allow people to contribute to the project more easily, not to tie them into the project in any formal way.It is important to recognise that committership is a privilege, not a right. That privilege must be earned, and once earned it can be removed by the PMC (see next section) in extreme circumstances. However, under normal circumstances committership exists for as long as the committer wishes to continue engaging with the project.A committer who shows an above-average level of contribution to the project, particularly with respect to its strategic direction and long-term health, may be nominated to become a member of the PMC. This role is described below.Project management committeeThe project management committee consists of those individuals identified as ‘project owners’ on the development site. The PMC has additional responsibilities over and above those of a committer. These responsibilities ensure the smooth running of the project. PMC members are expected to review code contributions, participate in strategic planning, approve changes to the governance model and manage how the software is distributed and licensed.Some PMC members are responsible for specific components of the gem5 project. This includes gem5 source modules (e.g., classic caches, O3CPU model, etc.) and project assets (e.g., the website). A list of the current components and the responsible members can be found within the MAINTAINERS document.Members of the PMC do not have significant authority over other members of the community, although it is the PMC that votes on new committers. It also makes decisions when community consensus cannot be reached. In addition, the PMC has access to the project’s private mailing list. This list is used for sensitive issues, such as votes for new committers and legal matters that cannot be discussed in public. 
It is never used for project management or planning.Membership of the PMC is by invitation from the existing PMC members. A nomination will result in discussion and then a vote by the existing PMC members. PMC membership votes are subject to consensus approval of the current PMC members. Additions to the PMC require unanimous agreement of the PMC members. Removing someone from the PMC requires N-1 positive votes, where N is the number of PMC members not including the individual who is being voted out.Members  Ali Saidi  Andreas Hansson  Andreas Sandberg  Anthony Gutierrez  Brad Beckmann  Gabe Black  Giacomo Travaglini  Jason Lowe-Power (chair)  Nathan Binkert  Steve ReinhardtPMC ChairThe PMC Chair is a single individual, voted for by the PMC members. Once someone has been appointed Chair, they remain in that role until they choose to retire, or the PMC casts a two-thirds majority vote to remove them.The PMC Chair has no additional authority over other members of the PMC: the role is one of coordinator and facilitator. The Chair is also expected to ensure that all governance processes are adhered to, and has the casting vote when any project decision fails to reach consensus.SupportAll participants in the community are encouraged to provide support for new users within the project management infrastructure. This support is provided as a way of growing the community. Those seeking support should recognise that all support activity within the project is voluntary and is therefore provided as and when time allows.Contribution ProcessAnyone capable of showing respect to others can contribute to the project, regardless of their skills, as there are many ways to contribute. For instance, a contributor might be active on the project mailing list and issue tracker, or might supply patches. 
The various ways of contributing are described in more detail in a separate document Submitting Contributions.The developer mailing list is the most appropriate place for a contributor to ask for help when making their first contribution. See the Submitting Contributions page on the gem5 wiki for details of the gem5 contribution process. Each new contribution should be submitted as a patch to our Review Board site. Then, other gem5 developers will review your patch, possibly asking for minor changes. After the patch has received consensus (see Decision Making Process), the patch is ready to be committed to the gem5 tree. For committers, this is as simple as pushing the changeset. For contributors, a committer should push the changeset for you. If a committer does not push the changeset within a reasonable window (a couple of days), send a friendly reminder email to the gem5-dev list. Before a patch is committed to gem5, it must receive at least 2 “Ship its” from reviewboard. If there are no reviews on a patch, users should send follow up emails to the gem5-dev list asking for reviews.Reviewing PatchesAn important part of the contribution process is providing feedback on patches that other developers submit. The purpose of reviewing patches is to weed out obvious bugs and to ensure that the code in gem5 is of sufficient quality.All users are encouraged to review the contributions that are posted on Review Board. If you are an active gem5 user, it’s a good idea to keep your eye on the contributions that are posted there (typically by subscribing to the gem5-dev mailing list) so you can speak up when you see a contribution that could impact your use of gem5. It is far more effective to contribute your opinion in a review before a patch gets committed than to complain after the patch is committed, you update your repository, and you find that your simulations no longer work.We greatly value the efforts of reviewers to maintain gem5’s code quality and consistency. 
However, it is important that reviews balance the desire to maintain the quality of the code in gem5 with the need to be open to accepting contributions from a broader community. People will base their desire to contribute (or continue contributing) on how they and other contributors are received. With that in mind, here are some guidelines for reviewers:  Remember that submitting a contribution is a generous act, and is very rarely a requirement for the person submitting it. It’s always a good idea to start a review with something like “thank you for submitting this contribution”. A thank-you is particularly important for new or occasional submitters.  Overall, the attitude of a reviewer should be “how can we take this contribution and put it to good use”, not “what shortcomings in this work must the submitter address before the contribution can be considered worthy”.  As the saying goes, “the perfect is the enemy of the good”. While we don’t want gem5 to deteriorate, we also don’t want to bypass useful functionality or improvements simply because they are not optimal. If the optimal solution is not likely to happen, then accepting a suboptimal solution may be preferable to having no solution. A suboptimal solution can always be replaced by the optimal solution later. Perhaps the suboptimal solution can be incrementally improved to reach that point.  When asking a submitter for additional changes, consider the cost-benefit ratio of those changes. In particular, reviewers should not discount the costs of requested changes just because the cost to the reviewer is near zero. Asking for extensive changes, particularly from someone who is not a long-time gem5 developer, may be imposing a significant burden on someone who is just trying to be helpful by submitting their code. If you as a reviewer really feel that some extensive reworking of a patch is necessary, consider volunteering to make the changes yourself.  
Not everyone uses gem5 in the same way or has the same needs. It’s easy to reject a solution due to its flaws when it solves a problem you don’t have—so there’s no loss to you if we end up with no solution. That’s probably not an acceptable result for the person submitting the patch, though. Another way to look at this point is as the flip side of the previous item: just as your cost-benefit analysis should not discount the costs to the submitter of making changes, just because the costs to you are low, it should also not discount the benefits to the submitter of accepting the submission, just because the benefits to you are low.  Be independent and unbiased while commenting on review requests. Do not support a patch just because you or your organization will benefit from it or oppose it because you will need to do more work. Whether you are an individual or someone working with an organization, think about the patch from the community’s perspective.  Try to keep the arguments technical and the language simple. If you make some claim about a patch, substantiate it.Decision Making ProcessDecisions about the future of the project are made through discussion with all members of the community, from the newest user to the most experienced PMC member. All non-sensitive project management discussion takes place on the gem5-dev mailing list. Occasionally, sensitive discussion occurs on a private list.In order to ensure that the project is not bogged down by endless discussion and continual voting, the project operates a policy of lazy consensus. This allows the majority of decisions to be made without resorting to a formal vote.Lazy consensusDecision making typically involves the following steps:  Proposal  Discussion  Vote (if consensus is not reached through discussion)  DecisionAny community member can make a proposal for consideration by the community. 
In order to initiate a discussion about a new idea, they should send an email to the gem5-dev list or submit a patch implementing the idea to Review Board. This will prompt a review and, if necessary, a discussion of the idea. The goal of this review and discussion is to gain approval for the contribution. Since most people in the project community have a shared vision, there is often little need for discussion in order to reach consensus.In general, as long as nobody explicitly opposes a proposal, it is recognised as having the support of the community. This is called lazy consensus—that is, those who have not stated their opinion explicitly have implicitly agreed to the implementation of the proposal.Lazy consensus is a very important concept within the project. It is this process that allows a large group of people to efficiently reach consensus, as someone with no objections to a proposal need not spend time stating their position, and others need not spend time reading such mails.For lazy consensus to be effective, it is necessary to allow at least two weeks before assuming that there are no objections to the proposal. This requirement ensures that everyone is given enough time to read, digest and respond to the proposal. This time period is chosen so as to be as inclusive as possible of all participants, regardless of their location and time commitments. For Review Board requests, if there are no reviews after two weeks, the submitter should send a reminder email to the mailing list. Reviewers may ask patch submitters to delay submitting a patch when they have a desire to review a patch and need more time to do so. As discussed in the Contributing Section, each patch should have at least two “Ship its” before it is committed.VotingNot all decisions can be made using lazy consensus. Issues such as those affecting the strategic direction or legal standing of the project must gain explicit approval in the form of a vote. 
Every member of the community is encouraged to express their opinions in all discussion and all votes. However, only project committers and/or PMC members (as defined above) have binding votes for the purposes of decision making. A separate document on the voting within a meritocratic governance model (http://oss-watch.ac.uk/resources/meritocraticgovernancevoting) describes in more detail how voting is conducted in projects following the practice established within the Apache Software Foundation.This document is based on the example (http://oss-watch.ac.uk/resources/meritocraticgovernancemodel) by Ross Gardler and Gabriel Hanganu and is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License",
        "url": "/governance/"
      }
      ,
    
      "mailing-lists": {
        "title": "Mailing Lists",
        "content": "There are two mailing lists for gem5:  gem5-dev (subscribe) — Discussions regarding gem5 development. Bug reports should be submitted to our Jira Issue Tracker, though the gem5-dev mailing list can be used to discuss bugs in greater detail.  gem5-users (subscribe) — General discussions about gem5 and its use.Mail ArchiveWe maintain archives of our mailing lists.The gem5-dev mail archive can be found here.The gem5-users mail archive can be found here.Alternative communication channelsStack Overflow can be used to crowd-source solutions to gem5-related problems. The Stack Overflow gem5 tag should be used: https://stackoverflow.com/questions/tagged/gem5. Please make sure that your question complies with the site guidelines before posting. Interested users can opt to receive email notifications for such questions as explained at https://meta.stackexchange.com/questions/25224/email-notifications-for-new-questions-matching-specific-tags.",
        "url": "/mailing_lists/"
      }
      ,
    
      "publications": {
        "title": "Publications",
        "content": "  Original Paper  Special Features of gem5          GPUs      DRAM Controller, DRAM Power Estimation      KVM      Elastic Traces      SystemC Coupling        Derivative projects          gem5-gpu      MV5        Other Publications related to gem5  Publications using gem5 / m5          2017      2016      2015      2014      2013      2012      2011      2010      2009      2008      2007      2006      2005      2004      2003      2002      If you use gem5 in your research, we would appreciate a citation to the original paper in any publications you produce. Moreover, we would appreciate it if you also cite the special features of gem5 which have been developed and contributed to the mainline since the publication of the original paper in 2011. In other words, if you use feature X, please also cite the corresponding paper Y from the list below.Original Paper  The gem5 Simulator. Nathan Binkert, Bradford Beckmann, Gabriel Black, Steven K. Reinhardt, Ali Saidi, Arkaprava Basu, Joel Hestness, Derek R. Hower, Tushar Krishna, Somayeh Sardashti, Rathijit Sen, Korey Sewell, Muhammad Shoaib, Nilay Vaish, Mark D. Hill, and David A. Wood. May 2011, ACM SIGARCH Computer Architecture News.Special Features of gem5GPUs      Lost in Abstraction: Pitfalls of Analyzing GPUs at the Intermediate Language Level. Anthony Gutierrez, Bradford M. Beckmann, Alexandru Dutu, Joseph Gross, John Kalamatianos, Onur Kayiran, Michael LeBeane, Matthew Poremba, Brandon Potter, Sooraj Puthoor, Matthew D. Sinclair, Mark Wyse, Jieming Yin, Xianwei Zhang, Akshay Jain, Timothy G. Rogers. In Proceedings of the 24th IEEE International Symposium on High-Performance Computer Architecture (HPCA), February 2018.        NoMali: Simulating a realistic graphics driver stack using a stub GPU. René de Jong, Andreas Sandberg. In Proceedings of the International Symposium on Performance Analysis of Systems and Software (ISPASS), March 2016.        gem5-gpu: A Heterogeneous CPU-GPU Simulator. 
Jason Power, Joel Hestness, Marc S. Orr, Mark D. Hill, David A. Wood. Computer Architecture Letters vol. 13, no. 1, Jan 2014  DRAM Controller, DRAM Power Estimation      Simulating DRAM controllers for future system architecture exploration. Andreas Hansson, Neha Agarwal, Aasheesh Kolli, Thomas Wenisch and Aniruddha N. Udipi. In Proceedings of the International Symposium on Performance Analysis of Systems and Software (ISPASS), March 2014.        DRAMPower: Open-source DRAM Power &amp; Energy Estimation Tool. Karthik Chandrasekar, Christian Weis, Yonghui Li, Sven Goossens, Matthias Jung, Omar Naji, Benny Akesson, Norbert Wehn, and Kees Goossens, URL: http://www.drampower.info.  KVM  Full Speed Ahead: Detailed Architectural Simulation at Near-Native Speed. Andreas Sandberg, Nikos Nikoleris, Trevor E. Carlson, Erik Hagersten, Stefanos Kaxiras, David Black-Schaffer. 2015 IEEE International Symposium on Workload CharacterizationElastic Traces  Exploring system performance using elastic traces: Fast, accurate and portable. Radhika Jagtap, Matthias Jung, Stephan Diestelhorst, Andreas Hansson, Norbert Wehn. IEEE International Conference on Embedded Computer Systems: Architectures, Modeling and Simulation (SAMOS), 2016SystemC Coupling  System Simulation with gem5 and SystemC: The Keystone for Full Interoperability. C. Menard, M. Jung, J. Castrillon, N. Wehn. IEEE International Conference on Embedded Computer Systems: Architectures, Modeling and Simulation (SAMOS), July, 2017Derivative projectsBelow is a list of projects that are based on gem5, are extensions of gem5, or use gem5.gem5-gpu  Merges 2 popular simulators: gem5 and GPGPU-Sim  Simulates CPUs, GPUs, and the interactions between them  Models a flexible memory system with support for heterogeneous processors and coherence  Supports full-system simulation through GPU driver emulationResources  Home Page  Overview slides  Mailing listMV5  MV5 is a reconfigurable simulator for heterogeneous multicore architectures. 
It is based on M5v2.0 beta 4.  Typical usage: simulating data-parallel applications on SIMT cores that operate over directory-based cache hierarchies. You can also add out-of-order cores to have a heterogeneous system, and all different types of cores can operate under the same address space through the same cache hierarchy.  Research projects based on MV5 have been published in ISCA’10, ICCD’09, and IPDPS’10.Features  Single-Instruction, Multiple-Threads (SIMT) cores  Directory-based Coherence Cache: MESI/MSI. (Not based on gems/ruby)  Interconnect: Fully connected and 2D Mesh. (Not based on gems/ruby)  Threading API/library in system emulation mode (No support for full-system simulation. A benchmark suite using the thread API is provided)Resources  Home Page  Tutorial at ISPASS ‘11  Google groupOther Publications related to gem5      Enabling Realistic Logical Device Interface and Driver for NVM Express Enabled Full System Simulations. Donghyun Gouk, Jie Zhang and Myoungsoo Jung. IFIP International Conference on Network and Parallel Computing (NPC) and Invited for International Journal of Parallel Programming (IJPP), 2017        SimpleSSD: Modeling Solid State Drives for Holistic System Simulation. Myoungsoo Jung, Jie Zhang, Ahmed Abulila, Miryeong Kwon, Narges Shahidi, John Shalf, Nam Sung Kim and Mahmut Kandemir. IEEE Computer Architecture Letters (CAL), 2017        dist-gem5: Distributed Simulation of Computer Clusters. Mohammad Alian, Gabor Dozsa, Umur Darbaz, Stephan Diestelhorst, Daehoon Kim, and Nam Sung Kim. IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), April 2017        pd-gem5: Simulation Infrastructure for Parallel/Distributed Computer Systems. Mohammad Alian, Daehoon Kim, and Nam Sung Kim. Computer Architecture Letters (CAL), 2016.        A Full-System Approach to Analyze the Impact of Next-Generation Mobile Flash Storage. Rene de Jong and Andreas Hansson. 
In Proceedings of the International Symposium on Performance Analysis of Systems and Software (ISPASS), March 2015.        Sources of Error in Full-System Simulation. A. Gutierrez, J. Pusdesris, R.G. Dreslinski, T. Mudge, C. Sudanthi, C.D. Emmons, M. Hayenga, and N. Paver. In Proceedings of the International Symposium on Performance Analysis of Systems and Software (ISPASS), March 2014.        Introducing DVFS-Management in a Full-System Simulator. Vasileios Spiliopoulos, Akash Bagdia, Andreas Hansson, Peter Aldworth and Stefanos Kaxiras. In Proceedings of the 21st International Symposium on Modeling, Analysis &amp; Simulation of Computer and Telecommunication Systems (MASCOTS), August 2013.    Accuracy Evaluation of GEM5 Simulator System. A. Butko, R. Garibotti, L. Ost, and G. Sassatelli. In the proceedings of the IEEE International Workshop on Reconfigurable Communication-centric Systems-on-Chip (ReCoSoC), York, United Kingdom, July 2012.  The M5 Simulator: Modeling Networked Systems. N. L. Binkert, R. G. Dreslinski, L. R. Hsu, K. T. Lim, A. G. Saidi, S. K. Reinhardt. IEEE Micro, vol. 26, no. 4, pp. 52-60, July/August, 2006.  Multifacet’s General Execution-driven Multiprocessor Simulator (GEMS) Toolset. Milo M.K. Martin, Daniel J. Sorin, Bradford M. Beckmann, Michael R. Marty, Min Xu, Alaa R. Alameldeen, Kevin E. Moore, Mark D. Hill, and David A. Wood. Computer Architecture News (CAN), September 2005.Publications using gem5 / m5  2017      An Integrated Simulation Tool for Computer Architecture and Cyber-Physical Systems (https://chess.eecs.berkeley.edu/pubs/1194/KimEtAl_CyPhy17.pdf). Hokeun Kim, Armin Wasicek, and Edward A. Lee. In Proceedings of the 6th Workshop on Design, Modeling and Evaluation of Cyber-Physical Systems (CyPhy’17), Seoul, Korea, October 19, 2017.        Efficient Programming for Multicore Processor Heterogeneity: OpenMP versus OmpSs (http://www.lirmm.fr/~sassate/ADAC/wp-content/uploads/2017/06/opensuco17.pdf). 
Anastasiia Butko, Florent Bruguier, Abdoulaye Gamatié and Gilles Sassatelli. In Open Source Supercomputing (OpenSuCo’17) Workshop co-located with ISC’17, June 2017.        MAGPIE: System-level Evaluation of Manycore Systems with Emerging Memory Technologies (https://hal-lirmm.ccsd.cnrs.fr/lirmm-01467328). Thibaud Delobelle, Pierre-Yves Péneau, Abdoulaye Gamatié, Florent Bruguier, Sophiane Senni, Gilles Sassatelli and Lionel Torres, 2nd International Workshop on Emerging Memory Solutions (EMS) co-located with DATE’17, March 2017.  2016      An Agile Post-Silicon Validation Methodology for the Address Translation Mechanisms of Modern Microprocessors (http://ieeexplore.ieee.org/document/7776838). G. Papadimitriou, A. Chatzidimitriou, D. Gizopoulos, R. Morad, IEEE Transactions on Device and Materials Reliability (TDMR 2016), Volume: PP, Issue: 99, December 2016.        Unveiling Difficult Bugs in Address Translation Caching Arrays for Effective Post-Silicon Validation (http://ieeexplore.ieee.org/document/7753339). G. Papadimitriou, D. Gizopoulos, A. Chatzidimitriou, T. Kolan, A. Koyfman, R. Morad, V. Sokhin, IEEE International Conference on Computer Design (ICCD 2016), Phoenix, AZ, USA, October 2016.        Loop optimization in presence of STT-MRAM caches: A study of performance-energy tradeoffs (http://ieeexplore.ieee.org/document/7833682/). Pierre-Yves Péneau, Rabab Bouziane, Abdoulaye Gamatié, Erven Rohou, Florent Bruguier, Gilles Sassatelli, Lionel Torres and Sophiane Senni, 26th International Workshop on Power and Timing Modeling, Optimization and Simulation (PATMOS), September 21-23 2016.        Full-System Simulation of big.LITTLE Multicore Architecture for Performance and Energy Exploration (http://ieeexplore.ieee.org/abstract/document/7774439). Anastasiia Butko, Florent Bruguier, Abdoulaye Gamatié, Gilles Sassatelli, David Novo, Lionel Torres and Michel Robert. 
Embedded Multicore/Many-core Systems-on-Chip (MCSoC), 2016 IEEE 10th International Symposium on, September 21-23, 2016.        Exploring MRAM Technologies for Energy Efficient Systems-On-Chip (http://ieeexplore.ieee.org/document/7448986). Sophiane Senni, Lionel Torres, Gilles Sassatelli, Abdoulaye Gamatié and Bruno Mussard, IEEE Journal on Emerging and Selected Topics in Circuits and Systems, Volume: 6, Issue: 3, Sept. 2016.        Architectural exploration of heterogeneous memory systems (https://cpc2016.infor.uva.es/wp-content/uploads/2016/06/CPC2016_paper_11.pdf). Marcos Horro, Gabriel Rodríguez, Juan Touriño and Mahmut T. Kandemir. 19th Workshop on Compilers for Parallel Computing (CPC), July 2016.        ISA-Independent Post-Silicon Validation for the Address Translation Mechanisms of Modern Microprocessors (http://ieeexplore.ieee.org/document/7604675). G. Papadimitriou, A. Chatzidimitriou, D. Gizopoulos and R. Morad, IEEE International Symposium on On-Line Testing and Robust System Design (IOLTS 2016), Sant Feliu de Guixols, Spain, July 2016.        Anatomy of microarchitecture-level reliability assessment: Throughput and accuracy. A. Chatzidimitriou, D. Gizopoulos, IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), Uppsala, Sweden, April 2016.        Agave: A benchmark suite for exploring the complexities of the Android software stack. Martin Brown, Zachary Yannes, Michael Lustig, Mazdak Sanati, Sally A. McKee, Gary S. Tyson, Steven K. Reinhardt, IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), Uppsala, Sweden, April 2016.  2015      Differential Fault Injection on Microarchitectural Simulators (http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7314163). M. Kaliorakis, S. Tselonis, A. Chatzidimitriou, N. Foutris, D. Gizopoulos, IEEE International Symposium on Workload Characterization (IISWC), Atlanta, GA, USA, October 2015.        
Live Introspection of Target-Agnostic JIT in Simulation. B. Shingarov. International Workshop IWST’15 in cooperation with ACM, Brescia, Italy, 2015.        Security in MPSoCs: A NoC Firewall and an Evaluation Framework. M.D. Grammatikakis, K. Papadimitriou, P. Petrakis, A. Papagrigoriou, G. Kornaros, I. Christoforakis, O. Tomoutzoglou, G. Tsamis and M. Coppola. In IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), vol.34, no.8, pp.1344-1357, Aug. 2015        DPCS: Dynamic Power/Capacity Scaling for SRAM Caches in the Nanoscale Era. Mark Gottscho, Abbas BanaiyanMofrad, Nikil Dutt, Alex Nicolau, and Puneet Gupta. ACM Transactions on Architecture and Code Optimization (TACO), Vol. 12, No. 3, Article 27. Pre-print June 2015, published August 2015, print October 2015.        A predictable and command-level priority-based DRAM controller for mixed-criticality systems. Hokeun Kim, David Broman, Edward A. Lee, Michael Zimmer, Aviral Shrivastava, Junkwang Oh. Proceedings of the 21st IEEE Real-Time and Embedded Technology and Application Symposium (RTAS), Seattle, WA, USA, April, 2015.        Security Enhancements for Building Saturation-free, Low-Power NoC-based MPSoCs. Kyprianos Papadimitriou, Polydoros Petrakis, Miltos Grammatikakis, Marcello Coppola. In IEEE Conference on Communications and Network Security (CNS) - 1st IEEE Workshop on Security and Privacy in Cybermatics, Florence, Italy, 2015        Design Exploration For Next Generation High-Performance Manycore On-chip Systems: Application To big.LITTLE Architectures. Anastasiia Butko, Abdoulaye Gamatie, Gilles Sassatelli, Lionel Torres and Michel Robert. VLSI (ISVLSI), 2015 IEEE Computer Society Annual Symposium on, July 10, 2015        Gem5v: a modified gem5 for simulating virtualized systems (http://dx.doi.org/10.1007/s11227-014-1375-7). Seyed Hossein Nikounia, Siamak Mohammadi. Springer Journal of Supercomputing. 
The source code is available at https://github.com/nikoonia/gem5v.        Micro-architectural simulation of embedded core heterogeneity with gem5 and McPAT. Fernando A. Endo, Damien Couroussé, Henri-Pierre Charles. RAPIDO ‘15 Proceedings of the 2015 Workshop on Rapid Simulation and Performance Evaluation: Methods and Tools. January 2015.        A trace-driven approach for fast and accurate simulation of manycore architectures. Anastasiia Butko, Rafael Garibotti, Luciano Ost, Vianney Lapotre, Abdoulaye Gamatie, Gilles Sassatelli and Chris Adeniyi-Jones. Design Automation Conference (ASP-DAC), 2015 20th Asia and South Pacific. January 19, 2015  2014      Evaluating Private vs. Shared Last-Level Caches for Energy Efficiency in Asymmetric Multi-Cores. A. Gutierrez, R.G. Dreslinski, and Trevor Mudge. In Proceedings of the 14th International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS), 2014.        Security Effectiveness and a Hardware Firewall for MPSoCs (http://dx.doi.org/10.1109/HPCC.2014.173). M. D. Grammatikakis, K. Papadimitriou, P. Petrakis, A. Papagrigoriou, G. Kornaros, I. Christoforakis and M. Coppola. In 16th IEEE International Conference on High Performance Computing and Communications - Workshop on Multicore and Multithreaded Architectures and Algorithms, 2014, pp. 1032-1039 Aug. 2014        Integrated 3D-Stacked Server Designs for Increasing Physical Density of Key-Value Stores (http://dx.doi.org/10.1145/2541940.2541951). Anthony Gutierrez, Michael Cieslak, Bharan Giridhar, Ronald G. Dreslinski, Luis Ceze, and Trevor Mudge. ASPLOS XIX        Power / Capacity Scaling: Energy Savings With Simple Fault-Tolerant Caches (http://dx.doi.org/10.1145/2593069.2593184). Mark Gottscho, Abbas BanaiyanMofrad, Nikil Dutt, Alex Nicolau, and Puneet Gupta. DAC, 2014.        Write-Aware Replacement Policies for PCM-Based Systems. R. Rodríguez-Rodríguez, F. Castro, D. Chaver, R. Gonzalez-Alberquilla, L. Piñuel and F. 
Tirado. The Computer Journal, 2014.        Micro-architectural simulation of in-order and out-of-order ARM microprocessors with gem5. Fernando A. Endo, Damien Couroussé, Henri-Pierre Charles. 2014 International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS XIV). July 2014.  2013  Continuous Real-World Inputs Can Open Up Alternative Accelerator Designs. Bilel Belhadj, Antoine Joubert, Zheng Li, Rodolphe Héliot, and Olivier Temam. ISCA ‘13  Cache Coherence for GPU Architectures. Inderpreet Singh, Arrvindh Shriraman, Wilson WL Fung, Mike O’Connor, and Tor M. Aamodt. HPCA, 2013.  Navigating Heterogeneous Processors with Market Mechanisms. Marisabel Guevara, Benjamin Lubin, and Benjamin C. Lee. HPCA, 2013  Power Struggles: Revisiting the RISC vs. CISC Debate on Contemporary ARM and x86 Architectures. Emily Blem, Jaikrishnan Menon, and Karthikeyan Sankaralingam. HPCA 2013.  Coset coding to extend the lifetime of memory. Adam N. Jacobvitz, Robert Calderbank, Daniel J. Sorin. HPCA ‘13.  The McPAT Framework for Multicore and Manycore Architectures: Simultaneously Modeling Power, Area, and Timing. Sheng Li, Jung Ho Ahn, Richard D. Strong, Jay B. Brockman, Dean M. Tullsen, Norman P. Jouppi. ACM Transactions on Architecture and Code Optimization (TACO), Volume 10, Issue 1, April 2013  Optimization and Mathematical Modeling in Computer Architecture. Nowatzki, T., Ferris, M., Sankaralingam, K., Estan, C., Vaish, N., &amp; Wood, David A. (2013). Synthesis Lectures on Computer Architecture, 8(4), 1-144.  Limits of Parallelism and Boosting in Dim Silicon. Nathaniel Pinckney, Ronald G. Dreslinski, Korey Sewell, David Fick, Trevor Mudge, Dennis Sylvester, David Blaauw, IEEE Micro, vol. 33, no. 5, pp. 30-37, Sept.-Oct., 2013  2012  Hardware Prefetchers for Emerging Parallel Applications, Biswabandan Panda, Shankar Balachandran. 
In the proceedings of the IEEE/ACM International Conference on Parallel Architectures and Compilation Techniques (PACT), Minneapolis, October 2012.  Lazy Cache Invalidation for Self-Modifying Codes. A. Gutierrez, J. Pusdesris, R.G. Dreslinski, and T. Mudge. In the proceedings of the International Conference on Compilers, Architecture and Synthesis for Embedded Systems (CASES), Tampere, Finland, October 2012.  Accuracy Evaluation of GEM5 Simulator System. A. Butko, R. Garibotti, L. Ost, and G. Sassatelli. In the proceedings of the IEEE International Workshop on Reconfigurable Communication-centric Systems-on-Chip (ReCoSoC), York, United Kingdom, July 2012.  Viper: Virtual Pipelines for Enhanced Reliability. A. Pellegrini, J. L. Greathouse, and V. Bertacco. In the proceedings of the International Symposium on Computer Architecture (ISCA), Portland, OR, June 2012.  Reducing memory reference energy with opportunistic virtual caching. Arkaprava Basu, Mark D. Hill, Michael M. Swift. In the proceedings of the 39th International Symposium on Computer Architecture (ISCA 2012).  Cache Revive: Architecting Volatile STT-RAM Caches for Enhanced Performance in CMPs. Adwait Jog, Asit Mishra, Cong Xu, Yuan Xie, V. Narayanan, Ravi Iyer, Chita Das. In the proceedings of the IEEE/ACM Design Automation Conference (DAC), San Francisco, CA, June 2012.  2011  Full-System Analysis and Characterization of Interactive Smartphone Applications. A. Gutierrez, R.G. Dreslinski, T.F. Wenisch, T. Mudge, A. Saidi, C. Emmons, and N. Paver. In the proceedings of the IEEE International Symposium on Workload Characterization (IISWC), pages 81-90, Austin, TX, November 2011.  Universal Rules Guided Design Parameter Selection for Soft Error Resilient Processors, L. Duan, Y. Zhang, B. Li, and L. Peng. Proceedings of the International Symposium on Performance Analysis of Systems and Software (ISPASS), Austin, TX, April 2011.  2010  Using Hardware Vulnerability Factors to Enhance AVF Analysis, V. Sridharan, D. R. Kaeli. 
Proceedings of the International Symposium on Computer Architecture (ISCA-37), Saint-Malo, France, June 2010.  Leveraging Unused Cache Block Words to Reduce Power in CMP Interconnect, H. Kim, P. Gratz. IEEE Computer Architecture Letters, vol. 99, (RapidPosts), 2010.  A Fast Timing-Accurate MPSoC HW/SW Co-Simulation Platform based on a Novel Synchronization Scheme, Mingyan Yu, Junjie Song, Fangfa Fu, Siyue Sun, and Bo Liu. Proceedings of the International MultiConference of Engineers and Computer Scientists, 2010.  Simulation of Standard Benchmarks in Hardware Implementations of L2 Cache Models in Verilog HDL, Rosario M. Reas, Anastacia B. Alvarez, Joy Alinda P. Reyes. 12th International Conference on Computer Modelling and Simulation, pp. 153-158, 2010.  A Simulation of Cache Sub-banking and Block Buffering as Power Reduction Techniques for Multiprocessor Cache Design, Jestoni V. Zarsuela, Anastacia Alvarez, Joy Alinda Reyes. 12th International Conference on Computer Modelling and Simulation, pp. 515-520, 2010.  2009  Efficient Implementation of Decoupling Capacitors in 3D Processor-DRAM Integrated Computing Systems. Q. Wu, J. Lu, K. Rose, and T. Zhang. Great Lakes Symposium on VLSI. 2009.  Evaluating the Impact of Job Scheduling and Power Management on Processor Lifetime for Chip Multiprocessors. A. K. Coskun, R. Strong, D. M. Tullsen, and T. S. Rosing. Proceedings of the eleventh international joint conference on Measurement and modeling of computer systems. 2009.  Devices and architectures for photonic chip-scale integration. J. Ahn, M. Fiorentino, R. G. Beausoleil, N. Binkert, A. Davis, D. Fattal, N. P. Jouppi, M. McLaren, C. M. Santori, R. S. Schreiber, S. M. Spillane, D. Vantrease and Q. Xu. Journal of Applied Physics A: Materials Science &amp; Processing. February 2009.      
System-Level Power, Thermal and Reliability Optimization. C. Zhu. Thesis at Queen’s University. 2009.    A light-weight fairness mechanism for chip multiprocessor memory systems. M. Jahre, L. Natvig. Proceedings of the 6th ACM conference on Computing Frontiers. 2009.  Decoupled DIMM: building high-bandwidth memory system using low-speed DRAM devices. H. Zheng, J. Lin, Z. Zhang, and Z. Zhu. International Symposium on Computer Architecture (ISCA). 2009.      On the Performance of Commit-Time-Locking Based Software Transactional Memory. Z. He and B. Hong. The 11th IEEE International Conference on High Performance Computing and Communications (HPCC-09). 2009.    A Quantitative Study of Memory System Interference in Chip Multiprocessor Architectures. M. Jahre, M. Grannaes and L. Natvig. The 11th IEEE International Conference on High Performance Computing and Communications (HPCC-09). 2009.  Hardware Support for Debugging Message Passing Applications for Many-Core Architectures. C. Svensson. Masters Thesis at the University of Illinois at Urbana-Champaign, 2009.  Initial Experiments in Visualizing Fine-Grained Execution of Parallel Software Through Cycle-Level Simulation. R. Strong, J. Mudigonda, J. C. Mogul, N. Binkert. USENIX Workshop on Hot Topics in Parallelism (HotPar). 2009.  MPreplay: Architecture Support for Deterministic Replay of Message Passing Programs on Message Passing Many-core Processors. C. Erik-Svensson, D. Kesler, R. Kumar, and G. Pokam. University of Illinois Technical Report number UILU-09-2209.  Low-power Inter-core Communication through Cache Partitioning in Embedded Multiprocessors. C. Yu, X. Zhou, and P. Petrov. Symposium on Integrated Circuits and System Design (SBCCI). 2009.  Integrating NAND flash devices onto servers. D. Roberts, T. Kgil, T. Mudge. Communications of the ACM (CACM). 2009.  A High-Performance Low-Power Nanophotonic On-Chip Network. Z. Li, J. Wu, L. Shang, A. Mickelson, M. Vachharajani, D. Filipovic, W. Park and Y. Sun. 
International Symposium on Low Power Electronic Design (ISLPED). 2009.  Core monitors: monitoring performance in multicore processors. P. West, Y. Peress, G. S. Tyson, and S. A. McKee. Computing Frontiers. 2009.  Parallel Assertion Processing using Memory Snapshots. M. F. Iqbal, J. H. Siddiqui, and D. Chiou. Workshop on Unique Chips and Systems (UCAS). April 2009.  Leveraging Memory Level Parallelism Using Dynamic Warp Subdivision. J. Meng, D. Tarjan, and K. Skadron. Univ. of Virginia Dept. of Comp. Sci. Tech Report (CS-2009-02).  Reconfigurable Multicore Server Processors for Low Power Operation. R. G. Dreslinski, D. Fick, D. Blaauw, D. Sylvester and T. Mudge. 9th International Symposium on Systems, Architectures, Modeling and Simulation (SAMOS). July 2009.  Near Threshold Computing: Overcoming Performance Degradation from Aggressive Voltage Scaling. R. G. Dreslinski, M. Wieckowski, D. Blaauw, D. Sylvester, and T. Mudge. Workshop on Energy Efficient Design (WEED), June 2009.      Workload Adaptive Shared Memory Multicore Processors with Reconfigurable Interconnects. S. Akram, R. Kumar, and D. Chen. IEEE Symposium on Application Specific Processors, July 2009.    Eliminating Microarchitectural Dependency from Architectural Vulnerability. V. Sridharan, D. R. Kaeli. Proceedings of the 15th International Symposium on High-Performance Computer Architecture (HPCA-15), February 2009.  Producing Wrong Data Without Doing Anything Obviously Wrong! T. Mytkowicz, A. Diwan, M. Hauswirth, P. F. Sweeney. Proceedings of the 14th international conference on Architectural support for programming languages and operating systems (ASPLOS). 2009.  End-To-End Performance Forecasting: Finding Bottlenecks Before They Happen. A. Saidi, N. Binkert, S. Reinhardt, T. Mudge. Proceedings of the 36th International Symposium on Computer Architecture (ISCA-36), June 2009.  Fast Switching of Threads Between Cores. R. Strong, J. Mudigonda, J. C. Mogul, N. Binkert, D. Tullsen. 
ACM SIGOPS Operating Systems Review. 2009.  Express Cube Topologies for On-Chip Interconnects. B. Grot, J. Hestness, S. W. Keckler, O. Mutlu. Proceedings of the 15th International Symposium on High-Performance Computer Architecture (HPCA-15), February 2009.  Enhancing LTP-Driven Cache Management Using Reuse Distance Information. W. Liu, D. Yeung. Journal of Instruction-Level Parallelism 11 (2009).  2008  Analyzing the Impact of Data Prefetching on Chip MultiProcessors. N. Fukumoto, T. Mihara, K. Inoue, and K. Murakami. Asia-Pacific Computer Systems Architecture Conference. 2008.      Historical Study of the Development of Branch Predictors. Y. Peress. Masters Thesis at Florida State University. 2008.    Hierarchical Domain Partitioning For Hierarchical Architectures. J. Meng, S. Che, J. W. Sheaffer, J. Li, J. Huang, and K. Skadron. Univ. of Virginia Dept. of Comp. Sci. Tech Report CS-2008-08. 2008.      Memory Access Scheduling Schemes for Systems with Multi-Core Processors. H. Zheng, J. Lin, Z. Zhang, and Z. Zhu. International Conference on Parallel Processing, 2008.    Register Multimapping: Reducing Register Bank Conflicts Through One-To-Many Logical-To-Physical Register Mapping. N. L. Duong and R. Kumar. Technical Report CHRC-08-07.  Cross-Layer Customization Platform for Low-Power and Real-Time Embedded Applications. X. Zhou. Dissertation at the University of Maryland. 2008.  Probabilistic Replacement: Enabling Flexible Use of Shared Caches for CMPs. W. Liu and D. Yeung. University of Maryland Technical Report UMIACS-TR-2008-13. 2008.  Observer Effect and Measurement Bias in Performance Analysis. T. Mytkowicz, P. F. Sweeney, M. Hauswirth, and A. Diwan. University of Colorado at Boulder Technical Report CU-CS 1042-08. June, 2008.  Power-Aware Dynamic Cache Partitioning for CMPs. I. Kotera, K. Abe, R. Egawa, H. Takizawa, and H. Kobayashi. 3rd International Conference on High Performance and Embedded Architectures and Compilers (HiPEAC). 2008.  
Modeling of Cache Access Behavior Based on Zipf’s Law. I. Kotera, H. Takizawa, R. Egawa, H. Kobayashi. MEDEA 2008.      Hierarchical Verification for Increasing Performance in Reliable Processors. J. Yoo, M. Franklin. Journal of Electronic Testing. 2008.        Transaction-Aware Network-on-Chip Resource Reservation. Z. Li, C. Zhu, L. Shang, R. Dick, Y. Sun. Computer Architecture Letters. Volume PP, Issue 99, Page(s): 1-1.        Predictable Out-of-order Execution Using Virtual Traces. J. Whitham, N. Audsley. Proceedings of the 29th IEEE Real-time Systems Symposium, December 2008.        Architectural and Compiler Mechanisms for Accelerating Single Thread Applications on Multicore Processors. H. Zhong. Dissertation at The University of Michigan. 2008.        Mini-Rank: Adaptive DRAM Architecture for Improving Memory Power Efficiency. H. Zheng, J. Lin, Z. Zhang, E. Gorbatov, H. David, Z. Zhu. Proceedings of the 41st Annual Symposium on Microarchitecture (MICRO-41), November 2008.        Reconfigurable Energy Efficient Near Threshold Cache Architectures. R. Dreslinski, G. Chen, T. Mudge, D. Blaauw, D. Sylvester, K. Flautner. Proceedings of the 41st Annual Symposium on Microarchitecture (MICRO-41), November 2008.        Distributed and low-power synchronization architecture for embedded multiprocessors. C. Yu, P. Petrov. International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), October 2008.        Thermal Monitoring Mechanisms for Chip Multiprocessors. J. Long, S.O. Memik, G. Memik, R. Mukherjee. ACM Transactions on Architecture and Code Optimization (TACO), August 2008.        Multi-optimization power management for chip multiprocessors. K. Meng, R. Joseph, R. Dick, L. Shang. Proceedings of the 17th international conference on Parallel Architectures and Compilation Techniques (PACT), 2008.        Three-Dimensional Chip-Multiprocessor Run-Time Thermal Management. C. Zhu, Z. Gu, L. Shang, R.P. Dick, R. Joseph. 
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), August 2008.        Latency and bandwidth efficient communication through system customization for embedded multiprocessors. C. Yu and P. Petrov. DAC 2008, June 2008.        Corona: System Implications of Emerging Nanophotonic Technology. D. Vantrease, R. Schreiber, M. Monchiero, M. McLaren, N. P. Jouppi, M. Fiorentino, A. Davis, N. Binkert, R. G. Beausoleil, and J. Ahn. Proceedings of the 35th International Symposium on Computer Architecture (ISCA-35), June 2008.        Improving NAND Flash Based Disk Caches. T. Kgil, D. Roberts and T. N. Mudge. Proceedings of the 35th International Symposium on Computer Architecture (ISCA-35), June 2008.        A Taxonomy to Enable Error Recovery and Correction in Software. V. Sridharan, D. A. Liberty, and D. R. Kaeli. Workshop on Quality-Aware Design (W-QUAD), in conjunction with the 35th International Symposium on Computer Architecture (ISCA-35), June 2008.        Quantifying Software Vulnerability. V. Sridharan and D. R. Kaeli. First Workshop on Radiation Effects and Fault Tolerance in Nanometer Technologies, in conjunction with the ACM International Conference on Computing Frontiers, May 2008.        Core Monitors: Monitoring Performance in Multicore Processors. P. West. Masters Thesis at Florida State University. April 2008.        Full System Critical Path Analysis. A. Saidi, N. Binkert, T. N. Mudge, and S. K. Reinhardt. 2008 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), April 2008.        A Power and Temperature Aware DRAM Architecture. S. Liu, S. O. Memik, Y. Zhang, G. Memik. 45th annual conference on Design automation (DAC), 2008.        Streamware: Programming General-Purpose Multicore Processors Using Streams. J. Gummaraju, J. Coburn, Y. Turner, M. Rosenblum. 
Proceedings of the Thirteenth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), March 2008.        Application-aware snoop filtering for low-power cache coherence in embedded multiprocessors. X. Zhou, C. Yu, A. Dash, and P. Petrov. Transactions on Design Automation of Electronic Systems (TODAES). January 2008.    An approach for adaptive DRAM temperature and power management. Song Liu, S. O. Memik, Y. Zhang, and G. Memik. Proceedings of the 22nd annual international conference on Supercomputing. 2008.  2007  Modeling and Characterizing Power Variability in Multicore Architectures. K. Meng, F. Huebbers, R. Joseph, and Y. Ismail. ISPASS-2007.  A High Performance Adaptive Miss Handling Architecture for Chip Multiprocessors. M. Jahre, and L. Natvig. HiPEAC Journal 2007.  Performance Effects of a Cache Miss Handling Architecture in a Multi-core Processor. M. Jahre and L. Natvig. NIK-2007 conference. 2007.      Prioritizing Verification via Value-based Correctness Criticality. J. Yoo, M. Franklin. Proceedings of the 25th International Conference on Computer Design (ICCD), 2007.        DRAM-Level Prefetching for Fully-Buffered DIMM: Design, Performance and Power Saving. J. Lin, H. Zheng, Z. Zhu, Z. Zhang, H. David. ISPASS 2007.        Virtual Exclusion: An architectural approach to reducing leakage energy in caches for multiprocessor systems. M. Ghosh, H. Lee. Proceedings of the International Conference on Parallel and Distributed Systems. December 2007.        Dependability-Performance Trade-off on Multiple Clustered Core Processors. T. Funaki, T. Sato. Proceedings of the 4th International Workshop on Dependable Embedded Systems. October 2007.        Predictive Thread-to-Core Assignment on a Heterogeneous Multi-core Processor. T. Sondag, V. Krishnamurthy, H. Rajan. PLOS ‘07: ACM SIGOPS 4th Workshop on Programming Languages and Operating Systems. October 2007.        
Power deregulation: eliminating off-chip voltage regulation circuitry from embedded systems. S. Kim, R. P. Dick, R. Joseph. 5th IEEE/ACM International Conference on Hardware/Software Co-Design and System Synthesis (CODES+ISSS). October 2007.        Aggressive Snoop Reduction for Synchronized Producer-Consumer Communication in Energy-Efficient Embedded Multi-Processors. C. Yu, P. Petrov. 5th IEEE/ACM International Conference on Hardware/Software Co-Design and System Synthesis (CODES+ISSS). October 2007.        Three-Dimensional Multiprocessor System-on-Chip Thermal Optimization. C. Sun, L. Shang, R.P. Dick. 5th IEEE/ACM International Conference on Hardware/Software Co-Design and System Synthesis (CODES+ISSS). October 2007.        Sampled Simulation for Multithreaded Processors. M. Van Biesbrouck. (Thesis) UC San Diego Technical Report CS2007-XXXX. September 2007.        Representative Multiprogram Workloads for Multithreaded Processor Simulation. M. Van Biesbrouck, L. Eeckhout, B. Calder. IEEE International Symposium on Workload Characterization (IISWC). September 2007.        The Interval Page Table: Virtual Memory Support in Real-Time and Memory-Constrained Embedded Systems. X. Zhou, P. Petrov. Proceedings of the 20th annual conference on Integrated circuits and systems design. 2007.        A power-aware shared cache mechanism based on locality assessment of memory reference for CMPs. I. Kotera, R. Egawa, H. Takizawa, H. Kobayashi. Proceedings of the 2007 workshop on MEmory performance: DEaling with Applications, systems and architecture (MEDEA). September 2007.        Architectural Support for the Stream Execution Model on General-Purpose Processors. J. Gummaraju, M. Erez, J. Coburn, M. Rosenblum, W. J. Dally. The Sixteenth International Conference on Parallel Architectures and Compilation Techniques (PACT). September 2007.        An Energy Efficient Parallel Architecture Using Near Threshold Operation. R. Dreslinski, B. Zhai, T. Mudge, D. Blaauw, D. Sylvester. 
The Sixteenth International Conference on Parallel Architectures and Compilation Techniques (PACT). September 2007.        When Homogeneous becomes Heterogeneous: Wearout Aware Task Scheduling for Streaming Applications. D. Roberts, R. Dreslinski, E. Karl, T. Mudge, D. Sylvester, D. Blaauw. Workshop on Operating System Support for Heterogeneous Multicore Architectures (OSHMA). September 2007.        On-Chip Cache Device Scaling Limits and Effective Fault Repair Techniques in Future Nanoscale Technology. D. Roberts, N. Kim, T. Mudge. Digital System Design Architectures, Methods and Tools (DSD). August 2007.        Energy Efficient Near-threshold Chip Multi-processing. B. Zhai, R. Dreslinski, D. Blaauw, T. Mudge, D. Sylvester. International Symposium on Low Power Electronics and Design (ISLPED). August 2007.        A Burst Scheduling Access Reordering Mechanism. J. Shao, B. T. Davis. IEEE 13th International Symposium on High Performance Computer Architecture (HPCA). 2007.        Enhancing LTP-Driven Cache Management Using Reuse Distance Information. W. Liu, D. Yeung. University of Maryland Technical Report UMIACS-TR-2007-33. June 2007.        Thermal modeling and management of DRAM memory systems. J. Lin, H. Zheng, Z. Zhu, H. David, and Z. Zhang. Proceedings of the 34th Annual International Symposium on Computer Architecture (ISCA). June 2007.        Duplicating and Verifying LogTM with OS Support in the M5 Simulator. G. Blake, T. Mudge. Sixth Annual Workshop on Duplicating, Deconstructing, and Debunking (WDDD). June 2007.        Analysis of Hardware Prefetching Across Virtual Page Boundaries. R. Dreslinski, A. Saidi, T. Mudge, S. Reinhardt. Proc. of the 4th Conference on Computing Frontiers. May 2007.        Reliability in the Shadow of Long-Stall Instructions. V. Sridharan, D. Kaeli, A. Biswas. Third Workshop on Silicon Errors in Logic - System Effects (SELSE-3). April 2007.    
Extending Multicore Architectures to Exploit Hybrid Parallelism in Single-thread Applications. H. Zhong, S. A. Lieberman, S. A. Mahlke. Proc. 13th Intl. Symposium on High Performance Computer Architecture (HPCA). February 2007.  2006      Evaluation of the Data Vortex Photonic All-Optical Path Interconnection Network for Next-Generation Supercomputers. W. C. Hawkins. Dissertation at Georgia Tech. December 2006.        Running the manual: an approach to high-assurance microkernel development. P. Derrin, K. Elphinstone, G. Klein, D. Cock, M. M. T. Chakravarty. Proceedings of the 2006 ACM SIGPLAN workshop on Haskell. 2006.        The Filter Checker: An Active Verification Management Approach. J. Yoo, M. Franklin. 21st IEEE International Symposium on Defect and Fault-Tolerance in VLSI Systems (DFT’06), 2006.        Physical Resource Matching Under Power Asymmetry. K. Meng, F. Huebbers, R. Joseph, Y. Ismail. Presented at the 2006 P=ac2 Conference. 2006. pdf        Process Variation Aware Cache Leakage Management. K. Meng, R. Joseph. Proceedings of the 2006 International Symposium on Low Power Electronics and Design (ISLPED). October 2006.        FlashCache: a NAND flash memory file cache for low power web servers. T. Kgil, T. Mudge. Proceedings of the 2006 international conference on Compilers, Architecture and Synthesis for Embedded Systems (CASES). October 2006.        PicoServer: Using 3D Stacking Technology To Enable A Compact Energy Efficient Chip Multiprocessor. T. Kgil, S. D’Souza, A. Saidi, N. Binkert, R. Dreslinski, S. Reinhardt, K. Flautner, T. Mudge. 12th Int’l Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). October 2006.        Integrated Network Interfaces for High-Bandwidth TCP/IP. N. L. Binkert, A. G. Saidi, S. K. Reinhardt. 12th Int’l Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). October 2006.        
Communist, utilitarian, and capitalist cache policies on CMPs: caches as a shared resource. L. R. Hsu, S. K. Reinhardt, R. Iyer, S. Makineni. Proc. 15th Int’l Conf. on Parallel Architectures and Compilation Techniques (PACT), September 2006.        Impact of CMP Design on High-Performance Embedded Computing. P. Crowley, M. A. Franklin, J. Buhler, and R. D. Chamberlain. Proc. of 10th High Performance Embedded Computing Workshop. September 2006.        BASS: A Benchmark suite for evaluating Architectural Security Systems. J. Poe, T. Li. ACM SIGARCH Computer Architecture News. Vol. 34, No. 4, September 2006.        The M5 Simulator: Modeling Networked Systems. N. L. Binkert, R. G. Dreslinski, L. R. Hsu, K. T. Lim, A. G. Saidi, S. K. Reinhardt. IEEE Micro, vol. 26, no. 4, pp. 52-60, July/August, 2006. Link        Considering All Starting Points for Simultaneous Multithreading Simulation. M. Van Biesbrouck, L. Eeckhout, B. Calder. Proc. of the Int’l Symp. on Performance Analysis of Systems and Software (ISPASS). 2006. pdf        Dynamic Thread Assignment on Heterogeneous Multiprocessor Architectures. M. Becchi, P. Crowley. Proc. of the 3rd Conference on Computing Frontiers. pp. 29-40. May 2006. pdf        Integrated System Architectures for High-Performance Internet Servers. N. L. Binkert. Dissertation at the University of Michigan. February 2006.        Exploring Salvage Techniques for Multi-core Architectures. R. Joseph. 2nd Workshop on High Performance Computing Reliability Issues. February 2006. pdf        A Simple Integrated Network Interface for High-Bandwidth Servers. N. L. Binkert, A. G. Saidi, S. K. Reinhardt. University of Michigan Technical Report CSE-TR-514-06, January 2006. pdf  2005      Software Defined Radio - A High Performance Embedded Challenge. H. Lee, Y. Lin, Y. Harel, M. Woh, S. Mahlke, T. Mudge, K. Flautner. Proc. 2005 Int’l Conf. on High Performance Embedded Architectures and Compilers (HiPEAC). November 2005. pdf        How to Fake 1000 Registers. 
D. W. Oehmke, N. L. Binkert, S. K. Reinhardt, and T. Mudge. Proc. 38th Ann. Int’l Symp. on Microarchitecture (MICRO), November 2005. pdf        Virtualizing Register Context. D. W. Oehmke. Dissertation at the University of Michigan, 2005. pdf    Performance Validation of Network-Intensive Workloads on a Full-System Simulator. A. G. Saidi, N. L. Binkert, L. R. Hsu, and S. K. Reinhardt. First Ann. Workshop on Interaction between Operating System and Computer Architecture (IOSCA), October 2005. pdf          An extended version appears as University of Michigan Technical Report CSE-TR-511-05, July 2005. pdf            Performance Analysis of System Overheads in TCP/IP Workloads. N. L. Binkert, L. R. Hsu, A. G. Saidi, R. G. Dreslinski, A. L. Schultz, and S. K. Reinhardt. Proc. 14th Int’l Conf. on Parallel Architectures and Compilation Techniques (PACT), September 2005. pdf    Sampling and Stability in TCP/IP Workloads. L. R. Hsu, A. G. Saidi, N. L. Binkert, and S. K. Reinhardt. Proc. First Annual Workshop on Modeling, Benchmarking, and Simulation (MoBS), June 2005. pdf            A Unified Compressed Memory Hierarchy. E. G. Hallnor and S. K. Reinhardt. Proc. 11th Int’l Symp. on High-Performance Computer Architecture (HPCA), February 2005. pdf    Analyzing NIC Overheads in Network-Intensive Workloads. N. L. Binkert, L. R. Hsu, A. G. Saidi, R. G. Dreslinski, A. L. Schultz, and S. K. Reinhardt. Eighth Workshop on Computer Architecture Evaluation using Commercial Workloads (CAECW), February 2005. pdf          An extended version appears as University of Michigan Technical Report CSE-TR-505-04, December 2004. pdf      2004      Emulation of realistic network traffic patterns on an eight-node data vortex interconnection network subsystem. B. Small, A. Shacham, K. Bergman, K. Athikulwongse, C. Hawkins, and D.S. Will. Journal of Optical Networking Vol. 3, No. 11, pp. 802-809, November 2004. pdf        ChipLock: Support for Secure Microarchitectures. T. Kgil, L. Falk, and T. 
Mudge. Proc. Workshop on Architectural Support for Security and Anti-virus (WASSA), October 2004, pp. 130-139. pdf        Design and Applications of a Virtual Context Architecture. D. Oehmke, N. Binkert, S. Reinhardt, and T. Mudge. University of Michigan Technical Report CSE-TR-497-04, September 2004. pdf        The Performance Potential of an Integrated Network Interface. N. L. Binkert, R. G. Dreslinski, E. G. Hallnor, L. R. Hsu, S. E. Raasch, A. L. Schultz, and S. K. Reinhardt. Proc. Advanced Networking and Communications Hardware Workshop (ANCHOR), June 2004. pdf        A Co-Phase Matrix to Guide Simultaneous Multithreading Simulation. M. Van Biesbrouck, T. Sherwood, and B. Calder. IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), March 2004. pdf        A Compressed Memory Hierarchy using an Indirect Index Cache. E. G. Hallnor and S. K. Reinhardt. Proc. 3rd Workshop on Memory Performance Issues (WMPI), June 2004. pdf          An extended version appears as University of Michigan Technical Report CSE-TR-488-04, March 2004. pdf      2003  The Impact of Resource Partitioning on SMT Processors. S. E. Raasch and S. K. Reinhardt. Proc. 12th Int’l Conf. on Parallel Architectures and Compilation Techniques (PACT), pp. 15-25, Sept. 2003. pdf        Network-Oriented Full-System Simulation using M5. N. L. Binkert, E. G. Hallnor, and S. K. Reinhardt. Sixth Workshop on Computer Architecture Evaluation using Commercial Workloads (CAECW), February 2003. pdf        Design, Implementation and Use of the MIRV Experimental Compiler for Computer Architecture Research. D. A. Greene. Dissertation at the University of Michigan, 2003. pdf  2002  A Scalable Instruction Queue Design Using Dependence Chains. S. E. Raasch, N. L. Binkert, and S. K. Reinhardt. Proc. 29th Annual Int’l Symp. on Computer Architecture (ISCA), pp. 318-329, May 2002. pdf ps ps.gz",
        "url": "/publications/"
      }
      ,
    
      "search": {
        "title": "Search",
        "content": "              Search  ",
        "url": "/search/"
      }
      
    
  };
</script>
<script src="/assets/js/lunr.min.js"></script>
<script src="/assets/js/search.js"></script>


</div>

<!-- button to scroll to top of page -->
<button onclick="topFunction()" id="myBtn" title="Go to top">&#9651;</button>

	</main>
	<footer class="page-footer">
	<div class="container">
		<div class="row">

			<div class="col-12 col-sm-4">
				<p>gem5</p>
				<p><a href="/about">About</a></p>
				<p><a href="/publications">Publications</a></p>
				<p><a href="/contributing">Contributing</a></p>
				<p><a href="/governance">Governance</a></p>
			<br></div>

			<div class="col-12 col-sm-4">
				<p>Docs</p>
				<p><a href="/documentation">Documentation</a></p>
				<p><a href="http://gem5.org/Documentation">Old Documentation</a></p>
				<p><a href="https://gem5.googlesource.com/public/gem5">Source</a></p>
			<br></div>

			<div class="col-12 col-sm-4">
				<p>Help</p>
				<p><a href="/search">Search</a></p>
				<p><a href="/mailing_lists">Mailing Lists</a></p>
				<p><a href="https://gem5.googlesource.com/public/gem5-website/+/refs/heads/master/README.md">Website Source</a></p>
			<br></div>

		</div>
	</div>
</footer>


	<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
	<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.3/umd/popper.min.js" integrity="sha384-ZMP7rVo3mIykV+2+9J3UJ46jBk0WLaUAdn689aCwoqbBJiSnjAK/l8WvCWPIPm49" crossorigin="anonymous"></script>
	<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/js/bootstrap.min.js" integrity="sha384-ChfqqxuZUCnJSK3+MXmPNIyE6ZbWh2IMqE241rYiqJxyMiZ6OW/JmZQ5stwEULTy" crossorigin="anonymous"></script>
	<script src="https://unpkg.com/commentbox.io/dist/commentBox.min.js"></script>

	<script>
	  // When the user scrolls down from the top of the document, show the button
	  window.onscroll = function() {scrollFunction()};

	  function scrollFunction() {
	      if (document.body.scrollTop > 100 || document.documentElement.scrollTop > 20) {
	          document.getElementById("myBtn").style.display = "block";
	      } else {
	          document.getElementById("myBtn").style.display = "none";
	      }
	  }

	  // When the user clicks on the button, scroll to the top of the document
	  function topFunction() {
	      document.body.scrollTop = 0;
	      document.documentElement.scrollTop = 0;
	  }

		// commentBox.min.js is loaded from the CDN above, so "commentBox" is
		// available here as a global. (The module-style `import`/`require` forms
		// from the commentbox.io docs only apply when bundling, and would be a
		// syntax error in this classic inline script.)
		commentBox('my-project-id');

	</script>

</body>


</html>
