

<!DOCTYPE html>
<html class="writer-html5" lang="en" >
<head>
  <meta charset="utf-8" />
  <meta name="generator" content="Docutils 0.19: https://docutils.sourceforge.io/" />

  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  
  <title>QoS Study with mClock and WPQ Schedulers &mdash; Ceph Documentation</title>
  

  
  <link rel="stylesheet" href="../../../_static/ceph.css" type="text/css" />
  <link rel="stylesheet" href="../../../_static/pygments.css" type="text/css" />
  <link rel="stylesheet" href="../../../_static/graphviz.css" type="text/css" />
  <link rel="stylesheet" href="../../../_static/css/custom.css" type="text/css" />

  
  

  
  

  

  
  <!--[if lt IE 9]>
    <script src="../../../_static/js/html5shiv.min.js"></script>
  <![endif]-->
  
    
      <script type="text/javascript" id="documentation_options" data-url_root="../../../" src="../../../_static/documentation_options.js"></script>
        <script src="../../../_static/jquery.js"></script>
        <script src="../../../_static/_sphinx_javascript_frameworks_compat.js"></script>
        <script src="../../../_static/doctools.js"></script>
        <script src="../../../_static/sphinx_highlight.js"></script>
    
    <script type="text/javascript" src="../../../_static/js/theme.js"></script>

    
    <link rel="index" title="Index" href="../../../genindex/" />
    <link rel="search" title="Search" href="../../../search/" />
    <link rel="next" title="OSD" href="../osd_overview/" />
    <link rel="prev" title="Map and PG Message handling" href="../map_message_handling/" /> 
</head>

<body class="wy-body-for-nav">

   
  <header class="top-bar">
    <div role="navigation" aria-label="Page navigation">
  <ul class="wy-breadcrumbs">
      <li><a href="../../../" class="icon icon-home" aria-label="Home"></a></li>
          <li class="breadcrumb-item"><a href="../../internals/">Ceph Internals</a></li>
          <li class="breadcrumb-item"><a href="../">OSD Developer Documentation</a></li>
      <li class="breadcrumb-item active">QoS Study with mClock and WPQ Schedulers</li>
      <li class="wy-breadcrumbs-aside">
            <a href="../../../_sources/dev/osd_internals/mclock_wpq_cmp_study.rst.txt" rel="nofollow"> View page source</a>
      </li>
  </ul>
  <hr/>
</div>
  </header>
  <div class="wy-grid-for-nav">
    
    <nav data-toggle="wy-nav-shift" class="wy-nav-side">
      <div class="wy-side-scroll">
        <div class="wy-side-nav-search"  style="background: #eee" >
          

          
            <a href="../../../" class="icon icon-home"> Ceph
          

          
          </a>

          

          
<div role="search">
  <form id="rtd-search-form" class="wy-form" action="../../../search/" method="get">
    <input type="text" name="q" placeholder="Search docs" aria-label="Search docs" />
    <input type="hidden" name="check_keywords" value="yes" />
    <input type="hidden" name="area" value="default" />
  </form>
</div>

          
        </div>

        
        <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
          
            
            
              
            
            
              <ul class="current">
<li class="toctree-l1"><a class="reference internal" href="../../../start/">Intro to Ceph</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../install/">Installing Ceph</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../cephadm/">Cephadm</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../rados/">Ceph Storage Cluster</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../cephfs/">Ceph File System</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../rbd/">Ceph Block Device</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../radosgw/">Ceph Object Gateway</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../mgr/">Ceph Manager Daemon</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../mgr/dashboard/">Ceph Dashboard</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../monitoring/">Monitoring Overview</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../api/">API Documentation</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../architecture/">Architecture</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../developer_guide/">Developer Guide</a></li>
<li class="toctree-l1 current"><a class="reference internal" href="../../internals/">Ceph Internals</a><ul class="current">
<li class="toctree-l2"><a class="reference internal" href="../../balancer-design/">How Ceph Balances (Read/Write, Capacity)</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../blkin/">Tracing Ceph With LTTng</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../blkin/#tracing-ceph-with-blkin">Tracing Ceph With Blkin</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../bluestore/">BlueStore Internals</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../ceph_krb_auth/">Detailed Documentation on Configuring Ceph Kerberos Authentication</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../cephfs-mirroring/">CephFS Mirroring</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../cephfs-reclaim/">CephFS Reclaim Interface</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../cephfs-snapshots/">CephFS Snapshots</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../cephx/">Cephx</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../cephx_protocol/">A Detailed Description of the Cephx Authentication Protocol</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../config/">Configuration Management System</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../config-key/">config-key layout</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../context/">CephContext</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../continuous-integration/">Continuous Integration Architecture</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../corpus/">Corpus Structure</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../cpu-profiler/">Installing Oprofile</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../crush-msr/">CRUSH MSR (Multi-step Retry)</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../cxx/">C++17 and libstdc++ ABI</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../deduplication/">Deduplication</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../delayed-delete/">CephFS delayed deletion</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../dev_cluster_deployment/">Deploying a Development Cluster</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../dev_cluster_deployment/#id5">Deploying Multiple Development Clusters on the Same Machine</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../development-workflow/">Development Workflow</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../documenting/">Documenting Ceph</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../dpdk/">Ceph messenger DPDKStack</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../encoding/">Serialization (Encode/Decode)</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../erasure-coded-pool/">Erasure Coded Pool</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../file-striping/">File striping</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../freebsd/">FreeBSD Implementation details</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../generatedocs/">Building Ceph Documentation</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../health-reports/">Health Reports</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../iana/">IANA Numbers</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../kclient/">Testing changes to the Linux Kernel CephFS driver</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../kclient/#step-one-build-the-kernel">Step One: build the kernel</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../kclient/#step-two-create-a-vm">Step Two: create a VM</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../kclient/#step-three-networking-the-vm">Step Three: Networking the VM</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../kubernetes/">Hacking on Ceph in Kubernetes with Rook</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../libcephfs_proxy/">Design of the libcephfs proxy</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../libs/">Library Architecture</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../logging/">Use of the Cluster Log</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../logs/">Debug Logs</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../macos/">Building on MacOS</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../mempool_accounting/">What is a mempool?</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../mempool_accounting/#some-common-mempools-that-we-can-track">Some common mempools that we can track</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../messenger/">Messenger notes</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../mon-bootstrap/">Monitor bootstrap</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../mon-elections/">Monitor Elections</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../mon-on-disk-formats/">ON-DISK FORMAT</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../mon-osdmap-prune/">FULL OSDMAP VERSION PRUNING</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../msgr2/">msgr2 protocol (msgr2.0 and msgr2.1)</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../network-encoding/">Network Encoding</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../network-protocol/">Network Protocol</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../object-store/">Object Store Architecture Overview</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../osd-class-path/">OSD class path issues</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../peering/">Peering</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../perf/">Using perf</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../perf_counters/">Perf Counters</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../perf_histograms/">Perf histograms</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../placement-group/">PG (Placement Group) Notes</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../quick_guide/">Developer Guide (Quick)</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../rados-client-protocol/">RADOS Client Protocol</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../rbd-diff/">RBD Incremental Backup</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../rbd-export/">RBD Export &amp; Import</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../rbd-layering/">RBD Layering</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../release-checklists/">Release checklists</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../release-process/">Ceph Release Process</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../seastore/">SeaStore</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../sepia/">Sepia Community Test Lab</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../session_authentication/">Session Authentication for the Cephx Protocol</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../testing/">Testing Notes</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../versions/">Public OSD Version</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../vstart-ganesha/">NFS CephFS-RGW Developer Guide</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../wireshark/">Wireshark Dissector</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../zoned-storage/">Zoned Storage Support</a></li>
<li class="toctree-l2 current"><a class="reference internal" href="../">OSD Developer Documentation</a><ul class="current">
<li class="toctree-l3"><a class="reference internal" href="../async_recovery/">Asynchronous Recovery</a></li>
<li class="toctree-l3"><a class="reference internal" href="../backfill_reservation/">Backfill Reservation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../erasure_coding/">Erasure Coded Placement Groups</a></li>
<li class="toctree-l3"><a class="reference internal" href="../last_epoch_started/">last_epoch_started</a></li>
<li class="toctree-l3"><a class="reference internal" href="../log_based_pg/">Log Based PG</a></li>
<li class="toctree-l3"><a class="reference internal" href="../manifest/">Manifest</a></li>
<li class="toctree-l3"><a class="reference internal" href="../map_message_handling/">Map and PG Message handling</a></li>
<li class="toctree-l3 current"><a class="current reference internal" href="#">QoS Study with mClock and WPQ Schedulers</a><ul>
<li class="toctree-l4"><a class="reference internal" href="#introduction">Introduction</a></li>
<li class="toctree-l4"><a class="reference internal" href="#overview">Overview</a></li>
<li class="toctree-l4"><a class="reference internal" href="#test-environment">Test Environment</a></li>
<li class="toctree-l4"><a class="reference internal" href="#test-methodology">Test Methodology</a></li>
<li class="toctree-l4"><a class="reference internal" href="#establish-baseline-client-throughput-iops">Establish Baseline Client Throughput (IOPS)</a></li>
<li class="toctree-l4"><a class="reference internal" href="#mclock-profile-allocations">MClock Profile Allocations</a></li>
<li class="toctree-l4"><a class="reference internal" href="#recovery-test-steps">Recovery Test Steps</a></li>
<li class="toctree-l4"><a class="reference internal" href="#test-results">Test Results</a></li>
<li class="toctree-l4"><a class="reference internal" href="#key-takeaways-and-conclusion">Key Takeaways and Conclusion</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="../osd_overview/">OSD</a></li>
<li class="toctree-l3"><a class="reference internal" href="../partial_object_recovery/">Partial Object Recovery</a></li>
<li class="toctree-l3"><a class="reference internal" href="../past_intervals/">OSDMap Trimming and PastIntervals</a></li>
<li class="toctree-l3"><a class="reference internal" href="../pg/">PG</a></li>
<li class="toctree-l3"><a class="reference internal" href="../pg_removal/">PG Removal</a></li>
<li class="toctree-l3"><a class="reference internal" href="../pgpool/">PGPool</a></li>
<li class="toctree-l3"><a class="reference internal" href="../recovery_reservation/">Recovery Reservation</a></li>
<li class="toctree-l3"><a class="reference internal" href="../refcount/">Refcount</a></li>
<li class="toctree-l3"><a class="reference internal" href="../scrub/">Scrub internals and diagnostics</a></li>
<li class="toctree-l3"><a class="reference internal" href="../snaps/">Snapshots</a></li>
<li class="toctree-l3"><a class="reference internal" href="../stale_read/">Preventing Stale Reads</a></li>
<li class="toctree-l3"><a class="reference internal" href="../watch_notify/">Watch Notify</a></li>
<li class="toctree-l3"><a class="reference internal" href="../wbthrottle/">Writeback Throttling</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../mds_internals/">MDS Developer Documentation</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../radosgw/">RADOS Gateway Developer Documentation</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../ceph-volume/">ceph-volume Developer Documentation</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../crimson/">Crimson developer documentation</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../../governance/">Governance</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../foundation/">Ceph Foundation</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../ceph-volume/">ceph-volume</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../releases/general/">Ceph Releases (General)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../releases/">Ceph Releases (Index)</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../security/">Security</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../hardware-monitoring/">Hardware Monitoring</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../glossary/">Ceph Glossary</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../jaegertracing/">Tracing</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../../translation_cn/">Chinese Translation Resources</a></li>
</ul>

            
          
        </div>
        
      </div>
    </nav>

    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">

      
      <nav class="wy-nav-top" aria-label="top navigation">
        
          <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
          <a href="../../../">Ceph</a>
        
      </nav>


      <div class="wy-nav-content">
        
        <div class="rst-content">
        
          <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
           <div itemprop="articleBody">
            
<div id="dev-warning" class="admonition note">
  <p class="first admonition-title">Notice</p>
  <p class="last">This document is for a development version of Ceph.</p>
</div>
  <div id="docubetter" align="right" style="padding: 5px; font-weight: bold;">
    <a href="https://pad.ceph.com/p/Report_Documentation_Bugs">Report a Documentation Bug</a>
  </div>

  
  <section id="qos-study-with-mclock-and-wpq-schedulers">
<h1>QoS Study with mClock and WPQ Schedulers<a class="headerlink" href="#qos-study-with-mclock-and-wpq-schedulers" title="Permalink to this heading"></a></h1>
<section id="introduction">
<h2>Introduction<a class="headerlink" href="#introduction" title="Permalink to this heading"></a></h2>
<p>The mClock scheduler provides three controls for each service that uses it. In
Ceph, the services using mClock include client I/O, background recovery, scrub,
snap trim and PG deletion. The three controls, <em>weight</em>,
<em>reservation</em> and <em>limit</em>, enable predictable allocation of resources
to each service in proportion to its weight, subject to the constraint that the
service receives at least its reservation and no more than its limit. In Ceph,
these controls are used to allocate IOPS to each service type, provided the IOPS
capacity of each OSD is known. The mClock scheduler is based on
<a class="reference external" href="https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Gulati.pdf">the dmClock algorithm</a>. See the <a class="reference internal" href="../../../rados/configuration/osd-config-ref/#dmclock-qos"><span class="std std-ref">QoS Based on mClock</span></a> section for more details.</p>
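<p>The reservation/weight/limit semantics can be illustrated with a small Python
sketch. This is a simplified, steady-state view with a single clamping pass,
not the tag-based dmClock algorithm itself, and the service names and numbers
below are purely illustrative:</p>

```python
def allocate_iops(capacity, services):
    """Split an OSD's IOPS capacity across services.

    Shares are proportional to weight, then clamped so each service
    gets at least its reservation and at most its limit. The real
    dmClock scheduler enforces these constraints dynamically per
    request via tags; this one-pass version only shows the idea.
    """
    total_wgt = sum(s["wgt"] for s in services.values())
    alloc = {}
    for name, s in services.items():
        share = capacity * s["wgt"] / total_wgt  # weight-proportional share
        alloc[name] = max(s["res"], min(s["lim"], share))
    return alloc

# Illustrative numbers (not from any Ceph profile): a 900-IOPS OSD.
alloc = allocate_iops(900, {
    "client":   {"res": 500, "wgt": 2, "lim": 550},
    "recovery": {"res": 250, "wgt": 1, "lim": 1000},
})
# The client's 600-IOPS weighted share is capped at its 550 limit;
# recovery keeps its 300-IOPS weighted share, above its 250 reservation.
```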
<p>Ceph’s use of mClock was primarily experimental and approached with an
exploratory mindset. This remains true for other organizations and individuals
who continue to either use the codebase or modify it according to their needs.</p>
<p>DmClock exists in its own <a class="reference external" href="https://github.com/ceph/dmclock">repository</a>. Before the Ceph <em>Pacific</em> release,
mClock could be enabled by setting the <a class="reference internal" href="../../../rados/configuration/osd-config-ref/#confval-osd_op_queue"><code class="xref std std-confval docutils literal notranslate"><span class="pre">osd_op_queue</span></code></a> Ceph option to
“mclock_scheduler”. Additional mClock parameters like <em>reservation</em>, <em>weight</em>
and <em>limit</em> for each service type could be set using Ceph options.
For example, <code class="docutils literal notranslate"><span class="pre">osd_mclock_scheduler_client_[res,wgt,lim]</span></code> is one such option.
See the <a class="reference internal" href="../../../rados/configuration/osd-config-ref/#dmclock-qos"><span class="std std-ref">QoS Based on mClock</span></a> section for more details. Even with all the mClock
options set, the full capability of mClock could not be realized due to:</p>
<ul class="simple">
<li><p>Unknown OSD capacity in terms of throughput (IOPS).</p></li>
<li><p>No limit enforcement. In other words, services using mClock were allowed to
exceed their limits, resulting in the desired QoS goals not being met.</p></li>
<li><p>The share of each service type was not distributed across the operational
shards.</p></li>
</ul>
<p>To resolve the above, refinements were made to the mClock scheduler in the Ceph
code base. See <a class="reference internal" href="../../../rados/configuration/mclock-config-ref/"><span class="doc">mClock Config Reference</span></a>. With the
refinements, the usage of mClock is a bit more user-friendly and intuitive. This
is one step of many to refine and optimize the way mClock is used in Ceph.</p>
</section>
<section id="overview">
<h2>Overview<a class="headerlink" href="#overview" title="Permalink to this heading"></a></h2>
<p>A comparison study was performed as part of efforts to refine the mClock
scheduler. The study involved running tests with client ops and background
recovery operations in parallel with the two schedulers. The results were
collated and then compared. The following statistics were compared between the
schedulers from the test results for each service type:</p>
<ul class="simple">
<li><p>External client</p>
<ul>
<li><p>Average throughput (IOPS)</p></li>
<li><p>Average and percentile (95th, 99th, 99.5th) latency</p></li>
</ul>
</li>
<li><p>Background recovery</p>
<ul>
<li><p>Average recovery throughput</p></li>
<li><p>Number of misplaced objects recovered per second</p></li>
</ul>
</li>
</ul>
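<p>For reference, the latency statistics compared in this study can be reproduced
from raw per-op samples with a nearest-rank percentile. This is a generic
sketch; cbt's own aggregation may differ:</p>

```python
import math

def latency_stats(samples_ms):
    """Average and tail-percentile latency from per-op samples.

    Uses the nearest-rank percentile method (the p-th percentile is
    the ceil(p/100 * n)-th smallest sample, 1-indexed).
    """
    ordered = sorted(samples_ms)
    n = len(ordered)

    def pct(p):
        rank = max(1, math.ceil(p * n / 100))  # 1-indexed nearest rank
        return ordered[rank - 1]

    return {"avg": sum(ordered) / n,
            "p95": pct(95), "p99": pct(99), "p99.5": pct(99.5)}
```

For example, for 1000 samples of 1..1000 ms, the 95th percentile is the
950th-smallest sample.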
</section>
<section id="test-environment">
<h2>Test Environment<a class="headerlink" href="#test-environment" title="Permalink to this heading"></a></h2>
<ol class="arabic simple">
<li><p><strong>Software Configuration</strong>: CentOS 8.1.1911, Linux Kernel 4.18.0-193.6.3.el8_2.x86_64</p></li>
<li><p><strong>CPU</strong>: 2 x Intel® Xeon® CPU E5-2650 v3 &#64; 2.30GHz</p></li>
<li><p><strong>nproc</strong>: 40</p></li>
<li><p><strong>System Memory</strong>: 64 GiB</p></li>
<li><p><strong>Tuned-adm Profile</strong>: network-latency</p></li>
<li><p><strong>CephVer</strong>: 17.0.0-2125-g94f550a87f (94f550a87fcbda799afe9f85e40386e6d90b232e) quincy (dev)</p></li>
<li><p><strong>Storage</strong>:</p></li>
</ol>
<blockquote>
<div><ul class="simple">
<li><p>Intel® NVMe SSD DC P3700 Series (SSDPE2MD800G4) [4 x 800GB]</p></li>
<li><p>Seagate Constellation 7200 RPM 64MB Cache SATA 6.0Gb/s HDD (ST91000640NS) [4 x 1TB]</p></li>
</ul>
</div></blockquote>
</section>
<section id="test-methodology">
<h2>Test Methodology<a class="headerlink" href="#test-methodology" title="Permalink to this heading"></a></h2>
<p>Ceph <a class="reference external" href="https://github.com/ceph/cbt">cbt</a> was used to test the recovery scenarios. A new recovery test that
generates background recoveries with client I/O in parallel was created.
See the next section for the detailed test steps. The test was first executed 3
times with the default <em>Weighted Priority Queue (WPQ)</em> scheduler in order
to establish a credible mean value against which the mClock scheduler results
could later be compared.</p>
<p>After this, the same test was executed with the mClock scheduler under each of
the mClock profiles, i.e., <em>high_client_ops</em>, <em>balanced</em> and <em>high_recovery_ops</em>,
and the results were collated for comparison. With each profile, the test was
executed 3 times, and the average of those runs is reported in this study.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Tests with HDDs were performed with and without the bluestore WAL and
DB configured. The charts discussed further below help bring out the
comparison across the schedulers and their configurations.</p>
</div>
</section>
<section id="establish-baseline-client-throughput-iops">
<h2>Establish Baseline Client Throughput (IOPS)<a class="headerlink" href="#establish-baseline-client-throughput-iops" title="Permalink to this heading"></a></h2>
<p>Before the actual recovery tests, the baseline throughput was established for
both the SSDs and the HDDs on the test machine by following the steps mentioned
in the <a class="reference internal" href="../../../rados/configuration/mclock-config-ref/"><span class="doc">mClock Config Reference</span></a> document under
the “Benchmarking Test Steps Using CBT” section. For this study, the following
baseline throughput for each device type was determined:</p>
<table class="docutils align-default">
<thead>
<tr class="row-odd"><th class="head"><p>Device Type</p></th>
<th class="head"><p>Baseline Throughput (&#64; 4 KiB Random Writes)</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p><strong>NVMe SSD</strong></p></td>
<td><p>21500 IOPS (84 MiB/s)</p></td>
</tr>
<tr class="row-odd"><td><p><strong>HDD (with bluestore WAL &amp; DB)</strong></p></td>
<td><p>340 IOPS (1.33 MiB/s)</p></td>
</tr>
<tr class="row-even"><td><p><strong>HDD (without bluestore WAL &amp; DB)</strong></p></td>
<td><p>315 IOPS (1.23 MiB/s)</p></td>
</tr>
</tbody>
</table>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>The <a class="reference internal" href="../../../rados/configuration/bluestore-config-ref/#confval-bluestore_throttle_bytes"><code class="xref std std-confval docutils literal notranslate"><span class="pre">bluestore_throttle_bytes</span></code></a> and
<a class="reference internal" href="../../../rados/configuration/bluestore-config-ref/#confval-bluestore_throttle_deferred_bytes"><code class="xref std std-confval docutils literal notranslate"><span class="pre">bluestore_throttle_deferred_bytes</span></code></a> for SSDs were determined to be
256 KiB. For HDDs, it was 40 MiB. The above throughput was obtained
by running 4 KiB random writes at a queue depth of 64 for 300 seconds.</p>
</div>
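<p>The MiB/s figures in the table above follow directly from the IOPS numbers and
the 4 KiB I/O size; the conversion can be checked with a couple of lines of
Python:</p>

```python
def iops_to_mib_s(iops, io_size_kib=4):
    # throughput (MiB/s) = IOPS x I/O size (KiB) / 1024
    return iops * io_size_kib / 1024

nvme = iops_to_mib_s(21500)      # ~84 MiB/s
hdd_wal_db = iops_to_mib_s(340)  # ~1.33 MiB/s
hdd_plain = iops_to_mib_s(315)   # ~1.23 MiB/s
```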
</section>
<section id="mclock-profile-allocations">
<h2>MClock Profile Allocations<a class="headerlink" href="#mclock-profile-allocations" title="Permalink to this heading"></a></h2>
<p>The low-level mClock shares per profile are shown in the tables below. For
parameters like <em>reservation</em> and <em>limit</em>, the shares are represented as a
percentage of the total OSD capacity. For the <em>high_client_ops</em> profile, the
<em>reservation</em> parameter is set to 50% of the total OSD capacity. Therefore, for
the NVMe (baseline: 21500 IOPS) device, a minimum of 10750 IOPS is reserved for
client operations. These allocations are made under the hood once
a profile is enabled.</p>
<p>The <em>weight</em> parameter is unitless. See <a class="reference internal" href="../../../rados/configuration/osd-config-ref/#dmclock-qos"><span class="std std-ref">QoS Based on mClock</span></a>.</p>
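<p>The percentage-based reservations translate into concrete IOPS once the OSD
capacity is known. A small sketch of that arithmetic (the helper name is made
up for illustration; the percentages are the <em>high_client_ops</em> shares):</p>

```python
def reserved_iops(osd_capacity_iops, reservation_pct):
    # reservation is expressed as a percentage of total OSD capacity
    return osd_capacity_iops * reservation_pct // 100

# high_client_ops reservations applied to the NVMe baseline of 21500 IOPS
high_client_ops = {"client": 50, "background recovery": 25,
                   "background best effort": 25}
allocations = {svc: reserved_iops(21500, pct)
               for svc, pct in high_client_ops.items()}
# client is guaranteed a minimum of 10750 IOPS on this device
```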
<section id="high-client-ops-default">
<h3>high_client_ops(default)<a class="headerlink" href="#high-client-ops-default" title="Permalink to this heading"></a></h3>
<p>This profile allocates more reservation and limit to external client ops
when compared to background recoveries and other internal clients within
Ceph. This profile is enabled by default.</p>
<table class="docutils align-default">
<thead>
<tr class="row-odd"><th class="head"><p>Service Type</p></th>
<th class="head"><p>Reservation</p></th>
<th class="head"><p>Weight</p></th>
<th class="head"><p>Limit</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>client</p></td>
<td><p>50%</p></td>
<td><p>2</p></td>
<td><p>MAX</p></td>
</tr>
<tr class="row-odd"><td><p>background recovery</p></td>
<td><p>25%</p></td>
<td><p>1</p></td>
<td><p>100%</p></td>
</tr>
<tr class="row-even"><td><p>background best effort</p></td>
<td><p>25%</p></td>
<td><p>2</p></td>
<td><p>MAX</p></td>
</tr>
</tbody>
</table>
</section>
<section id="balanced">
<h3>balanced<a class="headerlink" href="#balanced" title="Permalink to this heading"></a></h3>
<p>This profile allocates equal reservations to client ops and background
recovery ops. The internal best-effort clients get a lower reservation
but a very high limit so that they can complete quickly when there are
no competing services.</p>
<table class="docutils align-default">
<thead>
<tr class="row-odd"><th class="head"><p>Service Type</p></th>
<th class="head"><p>Reservation</p></th>
<th class="head"><p>Weight</p></th>
<th class="head"><p>Limit</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>client</p></td>
<td><p>40%</p></td>
<td><p>1</p></td>
<td><p>100%</p></td>
</tr>
<tr class="row-odd"><td><p>background recovery</p></td>
<td><p>40%</p></td>
<td><p>1</p></td>
<td><p>150%</p></td>
</tr>
<tr class="row-even"><td><p>background best effort</p></td>
<td><p>20%</p></td>
<td><p>2</p></td>
<td><p>MAX</p></td>
</tr>
</tbody>
</table>
</section>
<section id="high-recovery-ops">
<h3>high_recovery_ops<a class="headerlink" href="#high-recovery-ops" title="Permalink to this heading"></a></h3>
<p>This profile allocates more reservation to background recoveries when
compared to external clients and other internal clients within Ceph. For
example, an admin may enable this profile temporarily to speed up background
recoveries during non-peak hours.</p>
<table class="docutils align-default">
<thead>
<tr class="row-odd"><th class="head"><p>Service Type</p></th>
<th class="head"><p>Reservation</p></th>
<th class="head"><p>Weight</p></th>
<th class="head"><p>Limit</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>client</p></td>
<td><p>30%</p></td>
<td><p>1</p></td>
<td><p>80%</p></td>
</tr>
<tr class="row-odd"><td><p>background recovery</p></td>
<td><p>60%</p></td>
<td><p>2</p></td>
<td><p>200%</p></td>
</tr>
<tr class="row-even"><td><p>background best effort</p></td>
<td><p>1 (MIN)</p></td>
<td><p>2</p></td>
<td><p>MAX</p></td>
</tr>
</tbody>
</table>
</section>
<section id="custom">
<h3>custom<a class="headerlink" href="#custom" title="Permalink to this heading"></a></h3>
<p>The custom profile allows the user to have complete control of the mClock
and Ceph config parameters. To use this profile, the user must have a deep
understanding of the workings of Ceph and the mClock scheduler. All the
<em>reservation</em>, <em>weight</em> and <em>limit</em> parameters of the different service types
must be set manually along with any Ceph option(s). This profile may be used
for experimental and exploratory purposes or if the built-in profiles do not
meet the requirements. In such cases, adequate testing must be performed prior
to enabling this profile.</p>
</section>
</section>
<section id="recovery-test-steps">
<h2>Recovery Test Steps<a class="headerlink" href="#recovery-test-steps" title="Permalink to this heading"></a></h2>
<p>Before bringing up the Ceph cluster, the following mClock configuration
parameters were set appropriately based on the obtained baseline throughput
from the previous section:</p>
<ul class="simple">
<li><p><a class="reference internal" href="../../../rados/configuration/mclock-config-ref/#confval-osd_mclock_max_capacity_iops_hdd"><code class="xref std std-confval docutils literal notranslate"><span class="pre">osd_mclock_max_capacity_iops_hdd</span></code></a></p></li>
<li><p><a class="reference internal" href="../../../rados/configuration/mclock-config-ref/#confval-osd_mclock_max_capacity_iops_ssd"><code class="xref std std-confval docutils literal notranslate"><span class="pre">osd_mclock_max_capacity_iops_ssd</span></code></a></p></li>
<li><p><a class="reference internal" href="../../../rados/configuration/mclock-config-ref/#confval-osd_mclock_profile"><code class="xref std std-confval docutils literal notranslate"><span class="pre">osd_mclock_profile</span></code></a></p></li>
</ul>
<p>See <a class="reference internal" href="../../../rados/configuration/mclock-config-ref/"><span class="doc">mClock Config Reference</span></a> for more details.</p>
<section id="test-steps-using-cbt">
<h3>Test Steps (Using cbt)<a class="headerlink" href="#test-steps-using-cbt" title="Permalink to this heading"></a></h3>
<ol class="arabic simple">
<li><p>Bring up the Ceph cluster with 4 OSDs.</p></li>
<li><p>Configure the OSDs with replication factor 3.</p></li>
<li><p>Create a recovery pool to populate recovery data.</p></li>
<li><p>Create a client pool and prefill some objects in it.</p></li>
<li><p>Create the recovery thread and mark an OSD down and out.</p></li>
<li><p>After the cluster handles the OSD down event, recovery data is
prefilled into the recovery pool. For the tests involving SSDs, prefill 100K
4MiB objects into the recovery pool. For the tests involving HDDs, prefill
5K 4MiB objects into the recovery pool.</p></li>
<li><p>After the prefill stage is completed, the downed OSD is brought up and in.
The backfill phase starts at this point.</p></li>
<li><p>As soon as the backfill/recovery starts, the test proceeds to initiate client
I/O on the client pool on another thread using a single client.</p></li>
<li><p>During step 8 above, statistics related to the client latency and
bandwidth are captured by cbt. The test also captures the total number of
misplaced objects and the number of misplaced objects recovered per second.</p></li>
</ol>
<p>To summarize, the steps above create two pools during the test. Recovery is
triggered on one pool while client I/O runs on the other simultaneously.
Statistics captured during the tests are discussed below.</p>
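<p>For a sense of scale, the prefill counts in steps 4 and 6 above translate into the following data volumes (a sketch; 4 MiB objects as stated in the steps):</p>

```python
# Sketch: data volumes implied by the prefill counts in the test steps above.

MIB = 1024 * 1024
OBJ_SIZE = 4 * MIB  # 4 MiB objects, as used in the tests

ssd_prefill_bytes = 100_000 * OBJ_SIZE  # SSD tests: 100K objects
hdd_prefill_bytes = 5_000 * OBJ_SIZE    # HDD tests: 5K objects

print(ssd_prefill_bytes // (1024 * MIB), "GiB prefilled for the SSD tests")
print(hdd_prefill_bytes // (1024 * MIB), "GiB prefilled for the HDD tests")
```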
</section>
<section id="non-default-ceph-recovery-options">
<h3>Non-Default Ceph Recovery Options<a class="headerlink" href="#non-default-ceph-recovery-options" title="Permalink to this heading"></a></h3>
<p>Apart from the non-default bluestore throttle already mentioned above, the
following set of Ceph recovery related options were modified for tests with both
the WPQ and mClock schedulers.</p>
<ul class="simple">
<li><p><a class="reference internal" href="../../../rados/configuration/osd-config-ref/#confval-osd_max_backfills"><code class="xref std std-confval docutils literal notranslate"><span class="pre">osd_max_backfills</span></code></a> = 1000</p></li>
<li><p><a class="reference internal" href="../../../rados/configuration/osd-config-ref/#confval-osd_recovery_max_active"><code class="xref std std-confval docutils literal notranslate"><span class="pre">osd_recovery_max_active</span></code></a> = 1000</p></li>
<li><p><a class="reference internal" href="../../../rados/configuration/osd-config-ref/#confval-osd_async_recovery_min_cost"><code class="xref std std-confval docutils literal notranslate"><span class="pre">osd_async_recovery_min_cost</span></code></a> = 1</p></li>
</ul>
<p>The above options set a high limit on the number of concurrent local and
remote backfill operations per OSD. Under these conditions the capability of the
mClock scheduler was tested and the results are discussed below.</p>
</section>
</section>
<section id="test-results">
<h2>Test Results<a class="headerlink" href="#test-results" title="Permalink to this heading"></a></h2>
<section id="test-results-with-nvme-ssds">
<h3>Test Results With NVMe SSDs<a class="headerlink" href="#test-results-with-nvme-ssds" title="Permalink to this heading"></a></h3>
<section id="client-throughput-comparison">
<h4>Client Throughput Comparison<a class="headerlink" href="#client-throughput-comparison" title="Permalink to this heading"></a></h4>
<p>The chart below shows the average client throughput comparison across the
schedulers and their respective configurations.</p>
<img alt="../../../_images/Avg_Client_Throughput_NVMe_SSD_WPQ_vs_mClock.png" src="../../../_images/Avg_Client_Throughput_NVMe_SSD_WPQ_vs_mClock.png" />
<p>WPQ(Def) in the chart shows the average client throughput obtained
using the WPQ scheduler with all other Ceph configuration settings set to
default values. The default setting for <a class="reference internal" href="../../../rados/configuration/osd-config-ref/#confval-osd_max_backfills"><code class="xref std std-confval docutils literal notranslate"><span class="pre">osd_max_backfills</span></code></a> limits the number
of concurrent local and remote backfills or recoveries per OSD to 1. As a
result, the average client throughput obtained is impressive: just over 18000
IOPS, compared to the baseline value of 21500 IOPS.</p>
<p>However, with the WPQ scheduler and the non-default options mentioned in section
<a class="reference internal" href="#non-default-ceph-recovery-options">Non-Default Ceph Recovery Options</a>, things are quite different, as shown in the
chart for WPQ(BST). In this case, the average client throughput drops
dramatically to only 2544 IOPS. The non-default recovery options clearly had a
significant impact on client throughput; in other words, recovery operations
overwhelmed the client operations. Sections further below discuss the recovery
rates under these conditions.</p>
<p>With the non-default options, the same test was executed with mClock and with
the default profile (<em>high_client_ops</em>) enabled. As per the profile allocation,
the reservation goal of 50% (10750 IOPS) was met, with an average throughput
of 11209 IOPS during the course of recovery operations. This is more than 4x
the throughput obtained with WPQ(BST).</p>
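<p>The arithmetic behind this can be sketched directly (the figures are taken from the text above; the helper itself is illustrative):</p>

```python
# Sketch: check the high_client_ops reservation goal against the measured result.

BASELINE_IOPS = 21500                    # NVMe SSD baseline from the previous section
reservation_goal = 0.50 * BASELINE_IOPS  # high_client_ops reserves 50% for clients

measured_mclock = 11209   # average client IOPS with the high_client_ops profile
measured_wpq_bst = 2544   # average client IOPS with WPQ(BST)

assert measured_mclock >= reservation_goal  # the 10750 IOPS goal is met
speedup = measured_mclock / measured_wpq_bst
print(f"reservation goal: {reservation_goal:.0f} IOPS, "
      f"speedup vs WPQ(BST): {speedup:.1f}x")
```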
<p>Similar throughput was obtained with the <em>balanced</em> (11017 IOPS) and <em>high_recovery_ops</em>
(11153 IOPS) profiles, as seen in the chart above. This clearly
demonstrates that mClock is able to provide the desired QoS for the client
with multiple concurrent backfill/recovery operations in progress.</p>
</section>
<section id="client-latency-comparison">
<h4>Client Latency Comparison<a class="headerlink" href="#client-latency-comparison" title="Permalink to this heading"></a></h4>
<p>The chart below shows the average completion latency (<em>clat</em>) along with the
average 95th, 99th and 99.5th percentiles across the schedulers and their
respective configurations.</p>
<img alt="../../../_images/Avg_Client_Latency_Percentiles_NVMe_SSD_WPQ_vs_mClock.png" src="../../../_images/Avg_Client_Latency_Percentiles_NVMe_SSD_WPQ_vs_mClock.png" />
<p>The average <em>clat</em> latency obtained with WPQ(Def) was 3.535 msec. In this
case, however, the number of concurrent recoveries was heavily limited, averaging
around 97 objects/sec (~388 MiB/s), which was a major contributing factor to the
low latency seen by the client.</p>
<p>With WPQ(BST) and the non-default recovery options, things are very
different: the average <em>clat</em> latency shot up to almost 25 msec, which is 7x
worse. This is due to the high number of concurrent recoveries, measured at
~350 objects/sec (~1.4 GiB/s), which is close to the maximum OSD bandwidth.</p>
<p>With mClock enabled and with the default <em>high_client_ops</em> profile, the average
<em>clat</em> latency was 5.688 msec which is impressive considering the high number
of concurrent active background backfill/recoveries. The recovery rate was
throttled down by mClock to an average of 80 objects/sec or ~320 MiB/s according
to the minimum profile allocation of 25% of the maximum OSD bandwidth thus
allowing the client operations to meet the QoS goal.</p>
<p>With the other profiles like <em>balanced</em> and <em>high_recovery_ops</em>, the average
client <em>clat</em> latency didn’t change much and stayed between 5.7 - 5.8 msec with
variations in the average percentile latency as observed from the chart above.</p>
<img alt="../../../_images/Clat_Latency_Comparison_NVMe_SSD_WPQ_vs_mClock.png" src="../../../_images/Clat_Latency_Comparison_NVMe_SSD_WPQ_vs_mClock.png" />
<p>Perhaps more interesting is the comparison chart shown above, which
tracks the average <em>clat</em> latency through the duration of the test.
During the initial phase of the test, for about 150 secs, the differences
in average latency between the WPQ scheduler and the mClock profiles are
quite evident. The <em>high_client_ops</em> profile shows the lowest latency,
followed by the <em>balanced</em> and <em>high_recovery_ops</em> profiles. WPQ(BST) had
the highest average latency throughout the course of the test.</p>
</section>
<section id="recovery-statistics-comparison">
<h4>Recovery Statistics Comparison<a class="headerlink" href="#recovery-statistics-comparison" title="Permalink to this heading"></a></h4>
<p>Another important aspect to consider is how the recovery bandwidth and recovery
time are affected by mClock profile settings. The chart below outlines the
recovery rates and times for each mClock profile and how they differ with the
WPQ scheduler. The total number of objects to be recovered in all the cases was
around 75000 objects as observed in the chart below.</p>
<img alt="../../../_images/Recovery_Rate_Comparison_NVMe_SSD_WPQ_vs_mClock.png" src="../../../_images/Recovery_Rate_Comparison_NVMe_SSD_WPQ_vs_mClock.png" />
<p>Intuitively, the <em>high_client_ops</em> should impact recovery operations the most
and this is indeed the case as it took an average of 966 secs for the
recovery to complete at 80 Objects/sec. The recovery bandwidth as expected was
the lowest at an average of ~320 MiB/s.</p>
<img alt="../../../_images/Avg_Obj_Rec_Throughput_NVMe_SSD_WPQ_vs_mClock.png" src="../../../_images/Avg_Obj_Rec_Throughput_NVMe_SSD_WPQ_vs_mClock.png" />
<p>The <em>balanced</em> profile provides a good middle ground by allocating the same
reservation and weight to client and recovery operations. The recovery rate
curve falls between the <em>high_recovery_ops</em> and <em>high_client_ops</em> curves with
an average bandwidth of ~480 MiB/s and taking an average of ~647 secs at ~120
Objects/sec to complete the recovery.</p>
<p>The <em>high_recovery_ops</em> profile provides the fastest way to complete recovery
operations at the expense of other operations. The recovery bandwidth, at ~635
MiB/s, was nearly 2x that observed with the <em>high_client_ops</em> profile. The
average object recovery rate was ~159 objects/sec, and recovery completed the
fastest, in approximately 488 secs.</p>
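<p>These rates and times are roughly mutually consistent: the recovery time is approximately the object count divided by the recovery rate, and the bandwidth is the rate times the 4 MiB object size. A sketch using the figures above (the small gaps between estimated and observed times are expected, since the rates are averages):</p>

```python
# Sketch: cross-check recovery times against object counts and recovery rates.

TOTAL_OBJECTS = 75_000  # approximate objects to recover in the NVMe tests
OBJ_MIB = 4             # object size in MiB

profiles = {
    # profile: (objects/sec, observed recovery time in secs)
    "high_client_ops":   (80, 966),
    "balanced":          (120, 647),
    "high_recovery_ops": (159, 488),
}

for name, (rate, observed_secs) in profiles.items():
    est_secs = TOTAL_OBJECTS / rate  # estimated time from the average rate
    bandwidth = rate * OBJ_MIB       # MiB/s
    print(f"{name}: ~{bandwidth} MiB/s, "
          f"est {est_secs:.0f}s vs observed {observed_secs}s")
```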
</section>
</section>
<section id="test-results-with-hdds-wal-and-db-configured">
<h3>Test Results With HDDs (WAL and DB configured)<a class="headerlink" href="#test-results-with-hdds-wal-and-db-configured" title="Permalink to this heading"></a></h3>
<p>The recovery tests were performed on HDDs with the bluestore WAL and DB configured
on faster NVMe SSDs. The baseline throughput measured was 340 IOPS.</p>
<section id="client-throughput-latency-comparison">
<h4>Client Throughput &amp; Latency Comparison<a class="headerlink" href="#client-throughput-latency-comparison" title="Permalink to this heading"></a></h4>
<p>The average client throughput comparison for WPQ and mClock and its profiles
are shown in the chart below.</p>
<img alt="../../../_images/Avg_Client_Throughput_HDD_WALdB_WPQ_vs_mClock.png" src="../../../_images/Avg_Client_Throughput_HDD_WALdB_WPQ_vs_mClock.png" />
<p>With WPQ(Def), the average client throughput obtained was ~308 IOPS, since
the number of concurrent recoveries was heavily limited. The average <em>clat</em>
latency was ~208 msec.</p>
<p>For WPQ(BST), however, concurrent recoveries significantly affected the client
throughput, which dropped to 146 IOPS with an average <em>clat</em> latency of 433 msec.</p>
<img alt="../../../_images/Avg_Client_Latency_Percentiles_HDD_WALdB_WPQ_vs_mClock.png" src="../../../_images/Avg_Client_Latency_Percentiles_HDD_WALdB_WPQ_vs_mClock.png" />
<p>With the <em>high_client_ops</em> profile, mClock was able to meet the QoS requirement
for client operations with an average throughput of 271 IOPS which is nearly
80% of the baseline throughput at an average <em>clat</em> latency of 235 msecs.</p>
<p>For <em>balanced</em> and <em>high_recovery_ops</em> profiles, the average client throughput
came down marginally to ~248 IOPS and ~240 IOPS respectively. The average <em>clat</em>
latency as expected increased to ~258 msec and ~265 msec respectively.</p>
<img alt="../../../_images/Clat_Latency_Comparison_HDD_WALdB_WPQ_vs_mClock.png" src="../../../_images/Clat_Latency_Comparison_HDD_WALdB_WPQ_vs_mClock.png" />
<p>The <em>clat</em> latency comparison chart above provides a more comprehensive insight
into the differences in latency through the course of the test. As observed
with the NVMe SSD case, <em>high_client_ops</em> profile shows the lowest latency in
the HDD case as well followed by the <em>balanced</em> and <em>high_recovery_ops</em> profile.
It’s fairly easy to discern this between the profiles during the first 200 secs
of the test.</p>
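<p>The throughput figures above can be expressed as fractions of the 340 IOPS baseline with a short sketch (figures taken from the text; the formatting helper is illustrative):</p>

```python
# Sketch: client throughput as a fraction of the HDD (WAL/DB) baseline.

BASELINE_IOPS = 340  # HDD with WAL and DB on NVMe, from above

results = {
    "WPQ(Def)":          308,
    "WPQ(BST)":          146,
    "high_client_ops":   271,
    "balanced":          248,
    "high_recovery_ops": 240,
}

for name, iops in results.items():
    print(f"{name}: {iops / BASELINE_IOPS:.0%} of baseline")
```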
</section>
<section id="id1">
<h4>Recovery Statistics Comparison<a class="headerlink" href="#id1" title="Permalink to this heading"></a></h4>
<p>The charts below compare the recovery rates and times. The total number of
objects to be recovered in all the cases using HDDs with WAL and DB was around
4000 objects, as observed in the chart below.</p>
<img alt="../../../_images/Recovery_Rate_Comparison_HDD_WALdB_WPQ_vs_mClock.png" src="../../../_images/Recovery_Rate_Comparison_HDD_WALdB_WPQ_vs_mClock.png" />
<p>As expected, the <em>high_client_ops</em> profile impacts recovery operations the
most, as it took an average of ~1409 secs for the recovery to complete at ~3
Objects/sec. The recovery bandwidth, as expected, was the lowest at ~11 MiB/s.</p>
<img alt="../../../_images/Avg_Obj_Rec_Throughput_HDD_WALdB_WPQ_vs_mClock.png" src="../../../_images/Avg_Obj_Rec_Throughput_HDD_WALdB_WPQ_vs_mClock.png" />
<p>The <em>balanced</em> profile, as expected, provides a decent compromise, with an
average bandwidth of ~16.5 MiB/s and taking an average of ~966 secs at ~4
Objects/sec to complete the recovery.</p>
<p>The <em>high_recovery_ops</em> profile is the fastest with nearly 2x the bandwidth at
~21 MiB/s when compared to the <em>high_client_ops</em> profile. The average object
recovery rate was ~5 objects/sec and completed in approximately 747 secs. This
is somewhat similar to the recovery time observed with WPQ(Def) at 647 secs with
a bandwidth of 23 MiB/s and at a rate of 5.8 objects/sec.</p>
</section>
</section>
<section id="test-results-with-hdds-no-wal-and-db-configured">
<h3>Test Results With HDDs (No WAL and DB configured)<a class="headerlink" href="#test-results-with-hdds-no-wal-and-db-configured" title="Permalink to this heading"></a></h3>
<p>The recovery tests were also performed on HDDs without the bluestore WAL and DB
configured. The baseline throughput measured was 315 IOPS.</p>
<p>This type of configuration, without the WAL and DB configured, is probably rare,
but testing was nevertheless performed to get a sense of how mClock performs
in a very restrictive environment where the OSD capacity is at the lower end.
The sections and charts below are very similar to the ones presented above and
are provided here for reference.</p>
<section id="id2">
<h4>Client Throughput &amp; Latency Comparison<a class="headerlink" href="#id2" title="Permalink to this heading"></a></h4>
<p>The average client throughput, latency and percentiles are compared as before
in the set of charts shown below.</p>
<img alt="../../../_images/Avg_Client_Throughput_HDD_NoWALdB_WPQ_vs_mClock.png" src="../../../_images/Avg_Client_Throughput_HDD_NoWALdB_WPQ_vs_mClock.png" />
<img alt="../../../_images/Avg_Client_Latency_Percentiles_HDD_NoWALdB_WPQ_vs_mClock.png" src="../../../_images/Avg_Client_Latency_Percentiles_HDD_NoWALdB_WPQ_vs_mClock.png" />
<img alt="../../../_images/Clat_Latency_Comparison_HDD_NoWALdB_WPQ_vs_mClock.png" src="../../../_images/Clat_Latency_Comparison_HDD_NoWALdB_WPQ_vs_mClock.png" />
</section>
<section id="id3">
<h4>Recovery Statistics Comparison<a class="headerlink" href="#id3" title="Permalink to this heading"></a></h4>
<p>The recovery rates and times are shown in the charts below.</p>
<img alt="../../../_images/Avg_Obj_Rec_Throughput_HDD_NoWALdB_WPQ_vs_mClock.png" src="../../../_images/Avg_Obj_Rec_Throughput_HDD_NoWALdB_WPQ_vs_mClock.png" />
<img alt="../../../_images/Recovery_Rate_Comparison_HDD_NoWALdB_WPQ_vs_mClock.png" src="../../../_images/Recovery_Rate_Comparison_HDD_NoWALdB_WPQ_vs_mClock.png" />
</section>
</section>
</section>
<section id="key-takeaways-and-conclusion">
<h2>Key Takeaways and Conclusion<a class="headerlink" href="#key-takeaways-and-conclusion" title="Permalink to this heading"></a></h2>
<ul class="simple">
<li><p>mClock is able to provide the desired QoS using profiles to allocate proper
<em>reservation</em>, <em>weight</em> and <em>limit</em> to the service types.</p></li>
<li><p>By using the cost per I/O and the cost per byte parameters, mClock can
schedule operations appropriately for the different device types (SSD/HDD).</p></li>
</ul>
<p>The study so far shows promising results with the refinements made to the mClock
scheduler. Additional refinements to mClock and profile tuning are planned, and
further improvements will be driven by feedback from broader testing on larger
clusters and with different workloads.</p>
</section>
</section>





           </div>
           
          </div>
          <footer><div class="rst-footer-buttons" role="navigation" aria-label="Footer">
        <a href="../map_message_handling/" class="btn btn-neutral float-left" title="Map and PG Message handling" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left" aria-hidden="true"></span> Previous</a>
        <a href="../osd_overview/" class="btn btn-neutral float-right" title="OSD" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right" aria-hidden="true"></span></a>
    </div>

  <hr/>

  <div role="contentinfo">
    <p>&#169; Copyright 2016, Ceph authors and contributors. Licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0).</p>
  </div>

   

</footer>
        </div>
      </div>

    </section>

  </div>
  

  <script type="text/javascript">
      jQuery(function () {
          SphinxRtdTheme.Navigation.enable(true);
      });
  </script>

  
  
    
   

</body>
</html>