<!DOCTYPE HTML>
<html lang="en" class="sidebar-visible no-js light">
    <head>
        <!-- Book generated using mdBook -->
        <meta charset="UTF-8">
        <title>Reading Papers</title>
        <meta name="robots" content="noindex" />


        <!-- Custom HTML head -->
        
        <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
        <meta name="description" content="">
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <meta name="theme-color" content="#ffffff" />

        <link rel="icon" href="favicon.svg">
        <link rel="shortcut icon" href="favicon.png">
        <link rel="stylesheet" href="css/variables.css">
        <link rel="stylesheet" href="css/general.css">
        <link rel="stylesheet" href="css/chrome.css">
        <link rel="stylesheet" href="css/print.css" media="print">

        <!-- Fonts -->
        <link rel="stylesheet" href="FontAwesome/css/font-awesome.css">
        <link rel="stylesheet" href="fonts/fonts.css">

        <!-- Highlight.js Stylesheets -->
        <link rel="stylesheet" href="highlight.css">
        <link rel="stylesheet" href="tomorrow-night.css">
        <link rel="stylesheet" href="ayu-highlight.css">

        <!-- Custom theme stylesheets -->

        <!-- MathJax -->
        <script async type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
    </head>
    <body>
        <!-- Provide site root to javascript -->
        <script type="text/javascript">
            var path_to_root = "";
            var default_theme = window.matchMedia("(prefers-color-scheme: dark)").matches ? "navy" : "light";
        </script>

        <!-- Work around some values being stored in localStorage wrapped in quotes -->
        <script type="text/javascript">
            try {
                var theme = localStorage.getItem('mdbook-theme');
                var sidebar = localStorage.getItem('mdbook-sidebar');

                if (theme.startsWith('"') && theme.endsWith('"')) {
                    localStorage.setItem('mdbook-theme', theme.slice(1, theme.length - 1));
                }

                if (sidebar.startsWith('"') && sidebar.endsWith('"')) {
                    localStorage.setItem('mdbook-sidebar', sidebar.slice(1, sidebar.length - 1));
                }
            } catch (e) { }
        </script>

        <!-- Set the theme before any content is loaded, prevents flash -->
        <script type="text/javascript">
            var theme;
            try { theme = localStorage.getItem('mdbook-theme'); } catch(e) { }
            if (theme === null || theme === undefined) { theme = default_theme; }
            var html = document.querySelector('html');
            html.classList.remove('no-js')
            html.classList.remove('light')
            html.classList.add(theme);
            html.classList.add('js');
        </script>

        <!-- Hide / unhide sidebar before it is displayed -->
        <script type="text/javascript">
            var html = document.querySelector('html');
            var sidebar = 'hidden';
            if (document.body.clientWidth >= 1080) {
                try { sidebar = localStorage.getItem('mdbook-sidebar'); } catch(e) { }
                sidebar = sidebar || 'visible';
            }
            html.classList.remove('sidebar-visible');
            html.classList.add("sidebar-" + sidebar);
        </script>

        <nav id="sidebar" class="sidebar" aria-label="Table of contents">
            <div class="sidebar-scrollbox">
                <ol class="chapter"><li class="chapter-item expanded affix "><a href="chapter_1.html">读论文活动</a></li><li class="chapter-item expanded affix "><li class="part-title">6.824 分布式系统</li><li class="chapter-item expanded "><a href="Mapreduce.html"><strong aria-hidden="true">1.</strong> Mapreduce</a></li><li class="chapter-item expanded "><a href="GFS.html"><strong aria-hidden="true">2.</strong> GFS</a></li><li class="chapter-item expanded "><a href="VM-FT.html"><strong aria-hidden="true">3.</strong> VM-FT</a></li><li class="chapter-item expanded "><a href="Raft.html"><strong aria-hidden="true">4.</strong> Raft</a></li><li><ol class="section"><li class="chapter-item expanded "><a href="Raft0.html"><strong aria-hidden="true">4.1.</strong> 感性认识Raft</a></li><li class="chapter-item expanded "><a href="Raft1.html"><strong aria-hidden="true">4.2.</strong> 什么是Raft？</a></li><li class="chapter-item expanded "><a href="Raft2.html"><strong aria-hidden="true">4.3.</strong> 复制状态机（Replicated State Machine）</a></li><li class="chapter-item expanded "><a href="Raft3.html"><strong aria-hidden="true">4.4.</strong> What's wrong with Paxos?</a></li><li class="chapter-item expanded "><a href="Raft4.html"><strong aria-hidden="true">4.5.</strong> 向可理解性进军</a></li><li class="chapter-item expanded "><a href="Raft5.html"><strong aria-hidden="true">4.6.</strong> Raft共识算法（零）</a></li><li class="chapter-item expanded "><a href="Raft6.html"><strong aria-hidden="true">4.7.</strong> Raft共识算法（一）——基础概念</a></li><li class="chapter-item expanded "><a href="Raft7.html"><strong aria-hidden="true">4.8.</strong> Raft共识算法（二）——选举leader</a></li><li class="chapter-item expanded "><a href="Raft8.html"><strong aria-hidden="true">4.9.</strong> Raft共识算法（三）——日志备份（log replication）</a></li><li class="chapter-item expanded "><a href="Raft9.html"><strong aria-hidden="true">4.10.</strong> Raft共识算法（四）——安全性和选举限制</a></li><li class="chapter-item expanded "><a href="Raft10.html"><strong aria-hidden="true">4.11.</strong> 
Raft共识算法（五）——如何提交之前term里的entry</a></li><li class="chapter-item expanded "><a href="Raft11.html"><strong aria-hidden="true">4.12.</strong> Raft共识算法（六）——安全性定理</a></li><li class="chapter-item expanded "><a href="Raft12.html"><strong aria-hidden="true">4.13.</strong> Raft共识算法（七）——如果follower/candidate宕机了</a></li><li class="chapter-item expanded "><a href="Raft13.html"><strong aria-hidden="true">4.14.</strong> Raft共识算法（八）——时间与可用性</a></li><li class="chapter-item expanded "><a href="Raft14.html"><strong aria-hidden="true">4.15.</strong> 成员变更</a></li><li class="chapter-item expanded "><a href="Raft15.html"><strong aria-hidden="true">4.16.</strong> 日志压缩</a></li><li class="chapter-item expanded "><a href="Raft16.html"><strong aria-hidden="true">4.17.</strong> 与Client的交互</a></li><li class="chapter-item expanded "><a href="Raft17.html"><strong aria-hidden="true">4.18.</strong> 实验时遇到的bug</a></li><li class="chapter-item expanded "><a href="Raft18.html"><strong aria-hidden="true">4.19.</strong> 总结</a></li></ol></li><li class="chapter-item expanded "><a href="Zookeeper.html"><strong aria-hidden="true">5.</strong> Zookeeper</a></li><li><ol class="section"><li class="chapter-item expanded "><a href="linearizability1.html"><strong aria-hidden="true">5.1.</strong> 线性一致性（一）——基础概念</a></li><li class="chapter-item expanded "><a href="linearizability2.html"><strong aria-hidden="true">5.2.</strong> 线性一致性（二）——细究linearizability</a></li><li class="chapter-item expanded "><a href="zk_intro.html"><strong aria-hidden="true">5.3.</strong> 引言</a></li><li class="chapter-item expanded "><a href="zk_service.html"><strong aria-hidden="true">5.4.</strong> Zookeeper Service</a></li><li class="chapter-item expanded "><a href="zk_api.html"><strong aria-hidden="true">5.5.</strong> Zookeeper API</a></li><li class="chapter-item expanded "><a href="zk_prop.html"><strong aria-hidden="true">5.6.</strong> Zookeeper的性质</a></li><li class="chapter-item expanded "><a href="zk_ex.html"><strong 
aria-hidden="true">5.7.</strong> 基于Zookeeper实现锁</a></li></ol></li><li class="chapter-item expanded "><a href="CRAQ.html"><strong aria-hidden="true">6.</strong> CRAQ</a></li><li class="chapter-item expanded "><a href="lamport_clock.html"><strong aria-hidden="true">7.</strong> Time, Clocks, and the Ordering of Events in a Distributed System</a></li><li><ol class="section"><li class="chapter-item expanded "><a href="lamport_clock1.html"><strong aria-hidden="true">7.1.</strong> 引言</a></li><li class="chapter-item expanded "><a href="lamport_clock_partial_order.html"><strong aria-hidden="true">7.2.</strong> 偏序关系</a></li><li class="chapter-item expanded "><a href="lamport_logic_clock.html"><strong aria-hidden="true">7.3.</strong> 逻辑时钟</a></li><li class="chapter-item expanded "><a href="lamport_total_order.html"><strong aria-hidden="true">7.4.</strong> 全序关系</a></li><li class="chapter-item expanded "><a href="lamport_clock_ana_behave.html"><strong aria-hidden="true">7.5.</strong> 异常事件</a></li><li class="chapter-item expanded "><a href="lamport_p_clock.html"><strong aria-hidden="true">7.6.</strong> 物理时钟</a></li><li class="chapter-item expanded "><a href="lamport_end.html"><strong aria-hidden="true">7.7.</strong> 结论</a></li></ol></li><li class="chapter-item expanded "><li class="part-title">6.828 操作系统</li><li class="chapter-item expanded "><a href="828intro.html"><strong aria-hidden="true">8.</strong> Killer of Microseconds</a></li><li class="chapter-item expanded "><a href="cloudlab.html"><strong aria-hidden="true">9.</strong> CloudLab</a></li><li class="chapter-item expanded "><a href="dpdk.html"><strong aria-hidden="true">10.</strong> DPDK</a></li><li class="chapter-item expanded "><a href="spdk.html"><strong aria-hidden="true">11.</strong> SPDK</a></li><li class="chapter-item expanded "><a href="Shenango.html"><strong aria-hidden="true">12.</strong> Shenango</a></li><li class="chapter-item expanded "><a href="TritonSort.html"><strong aria-hidden="true">13.</strong> 
TritonSort</a></li><li class="chapter-item expanded "><a href="Profiling.html"><strong aria-hidden="true">14.</strong> Profiling a warehouse-scale computer</a></li><li class="chapter-item expanded affix "><li class="part-title">6.828 - Network</li><li class="chapter-item expanded affix "><li class="part-title">CS244 - Advanced Topics in Networking</li><li class="chapter-item expanded "><a href="DARPA_NET.html"><strong aria-hidden="true">15.</strong> The Design Philosophy of The DARPA Internet Protocols</a></li><li><ol class="section"><li class="chapter-item expanded "><a href="DARPA_NET2.html"><strong aria-hidden="true">15.1.</strong> Second Level Goals</a></li><li class="chapter-item expanded "><a href="DARPA_NET3.html"><strong aria-hidden="true">15.2.</strong> Types of Service</a></li><li class="chapter-item expanded "><a href="DARPA_NET4.html"><strong aria-hidden="true">15.3.</strong> Varieties of Networks</a></li><li class="chapter-item expanded "><a href="DARPA_NET5.html"><strong aria-hidden="true">15.4.</strong> Architecture and Implementation</a></li><li class="chapter-item expanded "><a href="DARPA_NET6.html"><strong aria-hidden="true">15.5.</strong> Datagrams</a></li></ol></li><li class="chapter-item expanded "><li class="part-title">最后</li><li class="chapter-item expanded "><a href="end.html"><strong aria-hidden="true">16.</strong> 最后</a></li></ol>
            </div>
            <div id="sidebar-resize-handle" class="sidebar-resize-handle"></div>
        </nav>

        <div id="page-wrapper" class="page-wrapper">

            <div class="page">
                                <div id="menu-bar-hover-placeholder"></div>
                <div id="menu-bar" class="menu-bar sticky bordered">
                    <div class="left-buttons">
                        <button id="sidebar-toggle" class="icon-button" type="button" title="Toggle Table of Contents" aria-label="Toggle Table of Contents" aria-controls="sidebar">
                            <i class="fa fa-bars"></i>
                        </button>
                        <button id="theme-toggle" class="icon-button" type="button" title="Change theme" aria-label="Change theme" aria-haspopup="true" aria-expanded="false" aria-controls="theme-list">
                            <i class="fa fa-paint-brush"></i>
                        </button>
                        <ul id="theme-list" class="theme-popup" aria-label="Themes" role="menu">
                            <li role="none"><button role="menuitem" class="theme" id="light">Light (default)</button></li>
                            <li role="none"><button role="menuitem" class="theme" id="rust">Rust</button></li>
                            <li role="none"><button role="menuitem" class="theme" id="coal">Coal</button></li>
                            <li role="none"><button role="menuitem" class="theme" id="navy">Navy</button></li>
                            <li role="none"><button role="menuitem" class="theme" id="ayu">Ayu</button></li>
                        </ul>
                        <button id="search-toggle" class="icon-button" type="button" title="Search. (Shortkey: s)" aria-label="Toggle Searchbar" aria-expanded="false" aria-keyshortcuts="S" aria-controls="searchbar">
                            <i class="fa fa-search"></i>
                        </button>
                    </div>

                    <h1 class="menu-title">Reading Papers</h1>

                    <div class="right-buttons">
                        <a href="print.html" title="Print this book" aria-label="Print this book">
                            <i id="print-button" class="fa fa-print"></i>
                        </a>

                    </div>
                </div>

                <div id="search-wrapper" class="hidden">
                    <form id="searchbar-outer" class="searchbar-outer">
                        <input type="search" id="searchbar" name="searchbar" placeholder="Search this book ..." aria-controls="searchresults-outer" aria-describedby="searchresults-header">
                    </form>
                    <div id="searchresults-outer" class="searchresults-outer hidden">
                        <div id="searchresults-header" class="searchresults-header"></div>
                        <ul id="searchresults">
                        </ul>
                    </div>
                </div>

                <!-- Apply ARIA attributes after the sidebar and the sidebar toggle button are added to the DOM -->
                <script type="text/javascript">
                    document.getElementById('sidebar-toggle').setAttribute('aria-expanded', sidebar === 'visible');
                    document.getElementById('sidebar').setAttribute('aria-hidden', sidebar !== 'visible');
                    Array.from(document.querySelectorAll('#sidebar a')).forEach(function(link) {
                        link.setAttribute('tabIndex', sidebar === 'visible' ? 0 : -1);
                    });
                </script>

                <div id="content" class="content">
                    <main>
                        <h1 id="读论文活动"><a class="header" href="#读论文活动">The Paper-Reading Group</a></h1>
<blockquote>
<p><a href="https://pdos.csail.mit.edu/6.824/schedule.html">6.824-schedule</a></p>
<p><a href="https://abelay.github.io/6828seminar/schedule.html">6.828 paper list</a></p>
</blockquote>
<p>Starting in 2020, MIT's 6.828 was split into two parts: 6.S081 became the undergraduate operating systems course,<br />
and 6.828 became a paper-reading seminar.</p>
<hr />
<p>Hu Jinming once ran an <a href="https://zhuanlan.zhihu.com/p/347150916">828 paper-reading</a> group on Zhihu,
and posted the recordings of the talks on <a href="https://space.bilibili.com/6441785">Bilibili</a>.<br />
I found the whole thing fascinating, except that when I watched the recordings, I could barely follow the material...<br />
And although I follow his Bilibili account, I'm ashamed to say I've almost never watched one of his recordings all the way through...
Perhaps you only really get something out of it by actually taking part.</p>
<hr />
<p>So I'd like to organize a similar online seminar and enjoy the fun of studying together as a group.
Ideally, you should:</p>
<ol>
<li>Have finished 6.S081 or an equivalent course.</li>
<li>Have time to attend the seminar sessions.</li>
<li>Be willing to share: everyone gives at least one technical talk during the seminar.</li>
<li>(optional) Be open to teamwork: later on, we may borrow 828's format and form teams to work on projects.</li>
</ol>
<hr />
<p>The seminar will run in three phases:</p>
<ol>
<li>Icebreaking: meet new people and read papers together.
Before each meeting, we pick one paper, following the 6.824/6.828 paper reading lists, and everyone reads it and then discusses it.<br />
Out of respect for the presenter, please at least finish the paper's abstract before attending.</li>
<li>Sharing new papers/projects.
Beyond the list 828 prescribes, we take turns presenting papers we are working on or interested in.</li>
<li>Team projects.
Probably the most fun part.<br />
First, if you have an idea, write a proposal describing your idea and your plan, and wait for interested people to join.
Teams of two or more are recommended.
While working on the project, you're welcome to share your progress with everyone.
Finally, give a presentation when the seminar wraps up.</li>
</ol>
<hr />
<p>If you're also interested in systems and want to huddle together for warmth, feel free to join the Feishu group. The rule is that everyone who joins gives at least one technical talk, so after joining, please say which paper you'd like to present,
and the group owner will schedule a time slot for you.</p>
<p>Bilibili has recently been offering live-streaming incentives to small creators with over a thousand followers: a stream watched by more than 100 people and running longer than an hour earns 25 RMB.<br />
So the talks will be live-streamed on Bilibili, and the presenter hosting the session will receive all the income <a href="https://space.bilibili.com/16765968">I</a> earn on Bilibili from it (the 25 RMB incentive plus any stream tips).</p>
<p><img width="40%" src="assets/feishu.png" alt="QR CODE" /></p>
<h2 id="qa"><a class="header" href="#qa">Q&amp;A</a></h2>
<h3 id="1-up不是在更6s081的实验吗怎么没更完就开始搞这个了"><a class="header" href="#1-up不是在更6s081的实验吗怎么没更完就开始搞这个了">1. Aren't you still posting the 6.S081 labs? Why start this before finishing them?</a></h3>
<p>I've gotten through lab6 of the 081 labs, and live-coding the rest feels a bit unnecessary:<br />
anyone who can get through lab6 can certainly finish the remaining labs on their own.<br />
But I will keep posting them, bit by bit!</p>
<h3 id="2-system不是自己的研究方向可以只听课不分享吗"><a class="header" href="#2-system不是自己的研究方向可以只听课不分享吗">2. Systems isn't my research area. Can I just listen without presenting?</a></h3>
<p>Yes. After joining the group, add the note &quot;旁听&quot; (auditing), and I won't come asking you to claim a paper.</p>
<h3 id="3-现在好像才10来个人关注这个是不是人有点少"><a class="header" href="#3-现在好像才10来个人关注这个是不是人有点少">3. Only a dozen or so people seem to be following this. Isn't that a bit few?</a></h3>
<p>Not at all. I think this can keep running smoothly with just six or seven participants.</p>
<h3 id="4-可不可以讲一些老的论文感觉新论文只是看个热闹"><a class="header" href="#4-可不可以讲一些老的论文感觉新论文只是看个热闹">4. Could we cover some older papers? With brand-new ones, it feels like we're just watching the spectacle.</a></h3>
<p>Of course!</p>
<h2 id="读论文的方式"><a class="header" href="#读论文的方式">How to read papers</a></h2>
<p>Whether it's Mu Li, Andrew Ng, or 绿导师, their advice is the same: before getting to know a field, it is well worth reading on the order of a hundred related papers.</p>
<p>After reading them, tug at your hair every day and ask yourself whether you have any new ideas.<br />
Sooner or later, either the hair all gets pulled out, or you come up with an idea worth a paper.</p>
<hr />
<p>Reading a paper can be broken into five stages:</p>
<ol>
<li>Read only the abstract.</li>
<li>Read the abstract and the Introduction, to understand the authors' point of view.</li>
<li>Read the body, to understand what the paper does. Ignore the details.</li>
<li>Understand every sentence in the paper.</li>
<li>Reproduce it.</li>
</ol>
<p>When you first pick up a paper, it's fine not to sort out the details; prioritize the conceptual understanding rather than getting bogged down.</p>
<p>Once you realize the paper really matters and you need to understand every fucking detail, then grind through the details.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="mapreduce"><a class="header" href="#mapreduce">Mapreduce</a></h1>
<p>The must-read introductory paper on distributed computing: <a href="./assets/mapreduce.pdf">mapreduce.pdf</a></p>
<p>I strongly recommend doing lab1 of 6.824.</p>
<blockquote>
<p>I object! As I always tell the experts, Chairman Zhou is my idol!
When you win a contract, you don't do the work yourself: take five hundred million off the top of the billion, then subcontract the rest out, flipping it two, three, four, five, six, seven, eight times.
If you're still not making money, cut corners; then collude with the construction bureau and pad the budget by another three to five hundred million.
By the end of the project you've pocketed at least seven hundred million, and you give us this tiny little cut. Have you no conscience?</p>
<p align="right">——the film 《黑金》 (Island of Greed)</p>
</blockquote>
<h2 id="mapreduce-simplified-data-processing-on-large-clusters"><a class="header" href="#mapreduce-simplified-data-processing-on-large-clusters">MapReduce: Simplified Data Processing on Large Clusters</a></h2>
<h2 id="摘要"><a class="header" href="#摘要">Abstract</a></h2>
<blockquote>
<p>MapReduce is a programming model and an associated implementation for processing and generating large data sets.<br />
Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs,
and a reduce function that merges all intermediate values associated with the same intermediate key.<br />
Many real world tasks are expressible in this model, as shown in the paper.</p>
<p>Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines.<br />
The run-time system takes care of the details of partitioning the input data,
scheduling the program’s execution across a set of machines,
handling machine failures, and managing the required inter-machine communication.<br />
This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.</p>
<p>Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable:
a typical MapReduce computation processes many terabytes of data on thousands of machines.<br />
Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google’s clusters every day.</p>
</blockquote>
<h2 id="结论"><a class="header" href="#结论">Conclusion</a></h2>
<blockquote>
<p>The MapReduce programming model has been successfully used at Google for many different purposes.<br />
We attribute this success to several reasons.<br />
First, the model is easy to use, even for programmers without experience with parallel and distributed systems, since it hides the details of parallelization, fault-tolerance, locality optimization, and load balancing.<br />
Second, a large variety of problems are easily expressible as MapReduce computations.<br />
For example, MapReduce is used for the generation of data for Google’s production web search service, for sorting, for data mining, for machine learning, and many other systems.<br />
Third, we have developed an implementation of MapReduce that scales to large clusters of machines comprising thousands of machines.<br />
The implementation makes efficient use of these machine resources and therefore is suitable for use on many of the large computational problems encountered at Google.</p>
</blockquote>
<h2 id="分布式系统概念"><a class="header" href="#分布式系统概念">Distributed systems concepts</a></h2>
<p>Definition:</p>
<ul>
<li>A cluster of computers that communicate over a network and cooperate to complete a set of coherent tasks.</li>
</ul>
<p>Goals:</p>
<ul>
<li>Increase throughput (large storage systems)</li>
<li>Fault/disaster tolerance (keeping services highly available / multi-site, multi-datacenter deployments)</li>
<li>Physically decouple computation (pushing services closer to where they are used)</li>
<li>Give computing nodes a degree of isolation from one another, for security (blockchains)</li>
</ul>
<p>Sources of complexity:</p>
<ul>
<li>Interactions between different parts of the system</li>
<li>Partial failures (machine faults, disk faults, etc.)</li>
<li>Performance bottlenecks (performance is not necessarily proportional to the number of machines)</li>
</ul>
<p>Why study distributed systems:</p>
<ul>
<li>A crown jewel of computer science</li>
<li>Interesting unsolved questions</li>
<li>Hands-on practice</li>
<li>Better ideas and principles for system design</li>
</ul>
<h2 id="什么是mapreduce"><a class="header" href="#什么是mapreduce">What is MapReduce?</a></h2>
<p>A horizontally scalable distributed computation framework from Google.</p>
<h2 id="背景"><a class="header" href="#背景">Background</a></h2>
<p>Even before 2003, Google, a company built on search, needed to solve problems such as counting how often words appear in text, building indexes of which documents contain which words, counting URL clicks, and sorting.</p>
<p>Each of these problems is simple and intuitive on its own, but once the data grows to terabytes or even petabytes, no single machine can carry out even such simple computations.</p>
<p>At first, Google hired programmers who understood distributed systems design to write distributed business code for each specific task. But Google, as a company that must control costs and turn a profit, could hardly expect every one of its programmers to be a distributed-systems expert.</p>
<p>So, naturally, Google set out to design a system: the system's designers provide a general-purpose distributed computation framework whose most important property is horizontal scalability (scale-out), while the system's users can solve their large-scale computation problems with very little mental overhead.</p>
<h2 id="以word_count为例讲解什么是map和reduce"><a class="header" href="#以word_count为例讲解什么是map和reduce">Using word_count as an example to explain map and reduce</a></h2>
<p>Let's set up a problem and use it to motivate map and reduce. Suppose you are a university librarian with a group of work-study students under you, and the university president wants to count how many times each word appears across all the books in the library. How should you divide the work among your students?</p>
<p>In this problem, you are the master of the distributed system, the work-study students are the workers, and the president is the client.</p>
<hr />
<p>A natural first step: hand the books out evenly to the students and have each of them count the words in their own books. Say student A gets 《他改变了中国》 and 《He changed China》 and produces these counts:</p>
<pre><code>当特首 1
吼啊 1
当然啦 1
naive 1
exciting 1
</code></pre>
<p>Student B gets 《红楼梦》 and 《The Red Building Dream》 and produces:</p>
<pre><code>吼啊 1
当然啦 1
naive 1
exciting 1
</code></pre>
<hr />
<p>Now what? A's and B's results contain the same keys. How do we combine the entries that share a key? (And here, what exactly should "combine" mean?)</p>
<p>It's not hard to see that we just need all entries with the same key, from both A's and B's counts, to be processed further in one place. Say we have two more students, C and D: we can simply let C aggregate the Chinese words and D the English ones. Then C's result is:</p>
<pre><code>当特首 1
吼啊 2
当然啦 2
</code></pre>
<p>And D's result is:</p>
<pre><code>naive 2
exciting 2
</code></pre>
<p>Now we only need to merge C's and D's results, and we're done.</p>
<hr />
<p>P.S. As programmers, a natural key-assignment scheme comes to mind:</p>
<p>if we have n students (numbered 0, 1, 2, ..., n-1) assigned to do the aggregation, then for each key, the index of the student responsible for it should be:</p>
<p>$$ i = hash(key) \bmod n $$</p>
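<p>In Go (the language of the lab code below), this assignment scheme looks roughly like the following sketch. The <code>ihash</code> helper mirrors the one the 6.824 lab skeleton provides; the specific hash function (FNV-1a here) is an arbitrary choice, and any deterministic hash would do:</p>

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// ihash deterministically maps a key to a non-negative integer,
// so that ihash(key) % n picks the aggregator for that key.
func ihash(key string) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() & 0x7fffffff) // mask keeps the result non-negative
}

func main() {
	n := 4 // number of students doing aggregation
	for _, key := range []string{"吼啊", "naive", "exciting"} {
		fmt.Printf("%q goes to student %d\n", key, ihash(key)%n)
	}
}
```

<p>With this, the mappers don't need to agree in advance on who handles which key: hashing alone tells every mapper which aggregator a key belongs to.</p>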
<hr />
<p>Let's tidy up and refine the process above. As the master, we have four workers: A, B, C, and D.</p>
<p>In the first phase, we hand out (map) the files to be processed: A and B each receive book titles and book contents, which in computing terms are a filename and content.</p>
<p>Once A finishes, A knows the Chinese keys go to C for aggregation and the English keys to D, so A writes the output files &quot;map-A-C&quot; and &quot;map-A-D&quot;; likewise, B's output files are &quot;map-B-C&quot; and &quot;map-B-D&quot;.</p>
<p>In the second phase, we aggregate (reduce) the intermediate results produced in the first phase: C knows its input files are &quot;map-A-C&quot; and &quot;map-B-C&quot;, and D knows its input files are &quot;map-A-D&quot; and &quot;map-B-D&quot;.</p>
<p>C and D finally write &quot;reduce-C&quot; and &quot;reduce-D&quot;, and merging these two files gives the final result.</p>
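<p>To make the bookkeeping concrete, here is a small illustrative sketch in Go. The helper names <code>mapOutputs</code> and <code>reduceInputs</code> are hypothetical (not part of the lab's API), and the real lab names intermediate files with numeric task IDs, e.g. &quot;mr-0-1&quot;, rather than letters:</p>

```go
package main

import "fmt"

// mapOutputs lists the intermediate files one map task writes:
// one file per aggregator ("map-A-C", "map-A-D" above).
func mapOutputs(mapID string, reducers []string) []string {
	names := []string{}
	for _, r := range reducers {
		names = append(names, fmt.Sprintf("map-%s-%s", mapID, r))
	}
	return names
}

// reduceInputs lists the files one reduce task reads:
// one file from every map task.
func reduceInputs(reduceID string, mappers []string) []string {
	names := []string{}
	for _, m := range mappers {
		names = append(names, fmt.Sprintf("map-%s-%s", m, reduceID))
	}
	return names
}

func main() {
	fmt.Println(mapOutputs("A", []string{"C", "D"}))   // what A writes
	fmt.Println(reduceInputs("C", []string{"A", "B"})) // what C reads
}
```

<p>Note how the naming scheme alone lets each reducer discover its inputs: no central list of files is needed beyond the counts of map and reduce tasks.</p>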
<p>With that, MapReduce's abstract formulation practically writes itself ⬇️</p>
<h2 id="mapreduce的抽象表达"><a class="header" href="#mapreduce的抽象表达">Mapreduce的抽象表达</a></h2>
<p>$$ map(k1, v1) \rightarrow list(k2, v2') $$
$$ reduce(k2, list(v2')) \rightarrow v2 $$</p>
<p><code>k1</code> is typically a filename and <code>v1</code> the file's contents; map's job is to turn <code>k1</code> and <code>v1</code> into a pile of key/value pairs <code>k2, v2'</code>.<br />
<code>k2</code> and <code>v2'</code> are intermediate results; reduce's job is to process the intermediate results that share the same key, producing the final result <code>v2</code>.</p>
<h2 id="show-me-code"><a class="header" href="#show-me-code">Show me code</a></h2>
<p>Still feeling fuzzy? How about looking at some code?</p>
<h3 id="先来看单点按顺序执行的程序"><a class="header" href="#先来看单点按顺序执行的程序">First, the single-node sequential program</a></h3>
<pre><code class="language-bash">
$ git clone git://g.csail.mit.edu/6.824-golabs-2021 6.824
$ cd 6.824

</code></pre>
<p>In the <code>main</code> directory, <code>mrsequential.go</code> contains:</p>
<pre><code class="language-go">package main

//
// simple sequential MapReduce.
//
// go run mrsequential.go wc.so pg*.txt
//

import &quot;fmt&quot;
import &quot;../mr&quot;
import &quot;plugin&quot;
import &quot;os&quot;
import &quot;log&quot;
import &quot;io/ioutil&quot;
import &quot;sort&quot;

// for sorting by key.
type ByKey []mr.KeyValue

// for sorting by key.
func (a ByKey) Len() int           { return len(a) }
func (a ByKey) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
func (a ByKey) Less(i, j int) bool { return a[i].Key &lt; a[j].Key }

func main() {
   if len(os.Args) &lt; 3 {
      fmt.Fprintf(os.Stderr, &quot;Usage: mrsequential xxx.so inputfiles...\n&quot;)
      os.Exit(1)
   }

   mapf, reducef := loadPlugin(os.Args[1])

   //
   // read each input file,
   // pass it to Map,
   // accumulate the intermediate Map output.
   //
   intermediate := []mr.KeyValue{}
   for _, filename := range os.Args[2:] {
      file, err := os.Open(filename)
      if err != nil {
         log.Fatalf(&quot;cannot open %v&quot;, filename)
      }
      content, err := ioutil.ReadAll(file)
      if err != nil {
         log.Fatalf(&quot;cannot read %v&quot;, filename)
      }
      file.Close()
      kva := mapf(filename, string(content))
      intermediate = append(intermediate, kva...)
   }

   //
   // a big difference from real MapReduce is that all the
   // intermediate data is in one place, intermediate[],
   // rather than being partitioned into NxM buckets.
   //

   sort.Sort(ByKey(intermediate))

   oname := &quot;mr-out-0&quot;
   ofile, _ := os.Create(oname)

   //
   // call Reduce on each distinct key in intermediate[],
   // and print the result to mr-out-0.
   //
   i := 0
   for i &lt; len(intermediate) {
      j := i + 1
      for j &lt; len(intermediate) &amp;&amp; intermediate[j].Key == intermediate[i].Key {
         j++
      }
      values := []string{}
      for k := i; k &lt; j; k++ {
         values = append(values, intermediate[k].Value)
      }
      fmt.Println(intermediate[i].Key)
      fmt.Println(values)
      output := reducef(intermediate[i].Key, values)

      // this is the correct format for each line of Reduce output.
      fmt.Fprintf(ofile, &quot;%v %v\n&quot;, intermediate[i].Key, output)

      i = j
   }

   ofile.Close()
}

//
// load the application Map and Reduce functions
// from a plugin file, e.g. ../mrapps/wc.so
//
func loadPlugin(filename string) (func(string, string) []mr.KeyValue, func(string, []string) string) {
   p, err := plugin.Open(filename)
   if err != nil {
      log.Fatalf(&quot;cannot load plugin %v&quot;, filename)
   }
   xmapf, err := p.Lookup(&quot;Map&quot;)
   if err != nil {
      log.Fatalf(&quot;cannot find Map in %v&quot;, filename)
   }
   mapf := xmapf.(func(string, string) []mr.KeyValue)
   xreducef, err := p.Lookup(&quot;Reduce&quot;)
   if err != nil {
      log.Fatalf(&quot;cannot find Reduce in %v&quot;, filename)
   }
   reducef := xreducef.(func(string, []string) string)

   return mapf, reducef
}

</code></pre>
<p>The code in mrapps/wc.go is as follows:</p>
<pre><code class="language-go">
package main

//
// a word-count application &quot;plugin&quot; for MapReduce.
//
// go build -buildmode=plugin wc.go
//

import &quot;6.824/mr&quot;
import &quot;unicode&quot;
import &quot;strings&quot;
import &quot;strconv&quot;

//
// The map function is called once for each file of input. The first
// argument is the name of the input file, and the second is the
// file's complete contents. You should ignore the input file name,
// and look only at the contents argument. The return value is a slice
// of key/value pairs.
//
func Map(filename string, contents string) []mr.KeyValue {
   // function to detect word separators.
   ff := func(r rune) bool { return !unicode.IsLetter(r) }

   // split contents into an array of words.
   words := strings.FieldsFunc(contents, ff)

   kva := []mr.KeyValue{}
   for _, w := range words {
      kv := mr.KeyValue{w, &quot;1&quot;}
      kva = append(kva, kv)
   }
   return kva
}

//
// The reduce function is called once for each key generated by the
// map tasks, with a list of all the values created for that key by
// any map task.
//
func Reduce(key string, values []string) string {
   // return the number of occurrences of this word.
   return strconv.Itoa(len(values))
}

</code></pre>
<p>Then run the following in a shell:</p>
<pre><code class="language-bash">
$ go build -race -buildmode=plugin ../mrapps/wc.go
$ rm mr-out*
$ go run -race mrsequential.go wc.so pg*.txt

</code></pre>
<p>Finally you get the output file <code>mr-out-0</code>:</p>
<pre><code>A 509
ABOUT 2
ACT 8
ACTRESS 1
ACTUAL 8
ADLER 1
ADVENTURE 12
...

</code></pre>
<h2 id="mr的分布式设计"><a class="header" href="#mr的分布式设计">MR's distributed design</a></h2>
<p>So how do we design a horizontally scalable distributed computation framework?</p>
<p>Under such a framework, users only need to write the map and reduce functions and specify the input files, and the framework runs those tasks in parallel across hundreds or thousands of machines to produce the final result.</p>
<p>If some compute node fails to finish its map or reduce task, the framework must reassign the task to another compute node.</p>
<p>Google's design is shown below:</p>
<p><img src="./assets/mr_f1.png" alt="mr_f1" /></p>
<p>For a detailed explanation of this figure, please refer to the paper. (Letting myself off the hook here, haha.)</p>
<p>When distributed theory was first applied in industry, in pursuit of eventual consistency and fault tolerance,
people tended to use a single master with a group of workers: if a worker fails, the master notices promptly and triggers a fallback plan.</p>
<p>If the master itself fails, it is recovered manually, which confines failures to the small number of master machines.</p>
<p>Lab1 of MIT 6.824 has us implement exactly such a framework.<br />
The sequence diagrams are as follows:</p>
<h3 id="stage1初始化和map阶段"><a class="header" href="#stage1初始化和map阶段">Stage 1: initialization and the map phase</a></h3>
<p><img src="./assets/mr_f2.png" alt="mr_f2" /></p>
<h3 id="stage2reduce和完成阶段"><a class="header" href="#stage2reduce和完成阶段">Stage 2: the reduce and completion phases</a></h3>
<p><img src="./assets/mr_f3.png" alt="mr_f3" /></p>
<h2 id="问题"><a class="header" href="#问题">Questions</a></h2>
<ol>
<li>After all these years, what other computation frameworks are in common use today besides MapReduce?</li>
<li>MapReduce is a poor fit for streaming data, and also for data with dependencies between records. What should we use in those cases?</li>
</ol>
<div style="break-before: page; page-break-before: always;"></div><h1 id="gfs"><a class="header" href="#gfs">GFS</a></h1>
<p><a href="./assets/gfs.pdf">The Google File System</a></p>
<h2 id="摘要-1"><a class="header" href="#摘要-1">Abstract</a></h2>
<blockquote>
<p>We have designed and implemented the Google File System, 
a scalable distributed file system for large distributed data-intensive applications.<br />
It provides fault tolerance while running on inexpensive commodity hardware, 
and it delivers high aggregate performance to a large number of clients.</p>
<p>While sharing many of the same goals as previous distributed file systems,
our design has been driven by observations of our application workloads and technological environment, 
both current and anticipated, that reflect a marked departure from some earlier file system assumptions.<br />
This has led us to reexamine traditional choices and explore radically different design points.</p>
<p>The file system has successfully met our storage needs.<br />
It is widely deployed within Google as the storage platform for the generation and processing of data used by our service as well as research and development efforts that require large data sets.<br />
The largest cluster to date provides hundreds of terabytes of storage across thousands of disks on over a thousand machines, and it is concurrently accessed by hundreds of clients.</p>
<p>In this paper, we present file system interface extensions designed to support distributed applications,
discuss many aspects of our design, and report measurements from both micro-benchmarks and real world use.</p>
</blockquote>
<h2 id="结论-1"><a class="header" href="#结论-1">Conclusion</a></h2>
<blockquote>
<p>The Google File System demonstrates the qualities essential for supporting large-scale data processing workloads on commodity hardware.<br />
While some design decisions are specific to our unique setting, many may apply to data processing tasks of a similar magnitude and cost consciousness.</p>
<p>We started by reexamining traditional file system assumptions in light of our current and anticipated application workloads and technological environment.<br />
Our observations have led to radically different points in the design space.<br />
We treat component failures as the norm rather than the exception, optimize for huge files that are mostly appended to (perhaps concurrently) and then read (usually sequentially), 
and both extend and relax the standard file system interface to improve the overall system.</p>
<p>Our system provides fault tolerance by constant monitoring, replicating crucial data, and fast and automatic recovery. 
Chunk replication allows us to tolerate chunkserver failures.<br />
The frequency of these failures motivated a novel online repair mechanism that regularly and transparently repairs the damage and compensates for lost replicas as soon as possible.<br />
Additionally, we use checksumming to detect data corruption at the disk or IDE subsystem level, which becomes all too common given the number of disks in the system.</p>
<p>Our design delivers high aggregate throughput to many concurrent readers and writers performing a variety of tasks.<br />
We achieve this by separating file system control, which passes through the master, from data transfer, which passes directly between chunkservers and clients.<br />
Master involvement in common operations is minimized by a large chunk size and by chunk leases, which delegate authority in data mutations to primary replicas.<br />
This makes possible a simple, centralized master that does not become a bottleneck.<br />
We believe that improvements in our networking stack will lift the current limitation on the write throughput seen by an individual client.</p>
<p>GFS has successfully met our storage needs and is widely used within Google as the storage platform for research and development as well as production data processing.<br />
It is an important tool that enables us to continue to innovate and attack problems on the scale of the entire web.</p>
</blockquote>
<hr />
<p>HDFS is the open-source implementation of GFS.<br />
GFS was a pioneer among distributed systems: Google's first large-scale distributed storage system.<br />
Academia had generally held that a distributed storage system should provide strong consistency. Then GFS came along and showed that, from an engineering standpoint, weak consistency can be good enough.</p>
<h2 id="架构"><a class="header" href="#架构">Architecture</a></h2>
<p><img src="./assets/gfs_f1.png" alt="GFS的架构" /></p>
<blockquote>
<p>A GFS cluster consists of a single master and multiple chunkservers and is accessed by multiple clients, 
as shown in Figure 1.<br />
Each of these is typically a commodity Linux machine running a user-level server process.<br />
It is easy to run both a chunkserver and a client on the same machine, 
as long as machine resources permit and the lower reliability caused by running possibly flaky application code is acceptable.</p>
<p>Files are divided into fixed-size chunks.<br />
Each chunk is identified by an immutable and globally unique 64 bit chunk handle assigned by the master at the time of chunk creation.<br />
Chunkservers store chunks on local disks as Linux files and read or write chunk data specified by a chunk handle and byte range.<br />
For reliability, each chunk is replicated on multiple chunkservers.<br />
By default, we store three replicas, though users can designate different replication levels for different regions of the file namespace.</p>
<p>The master maintains all file system metadata.<br />
This includes the namespace, access control information, the mapping from files to chunks, and the current locations of chunks.<br />
It also controls system-wide activities such as chunk lease management, garbage collection of orphaned chunks, and chunk migration between chunkservers.<br />
The master periodically communicates with each chunkserver in HeartBeat messages to give it instructions and collect its state.</p>
<p>GFS client code linked into each application implements the file system API and communicates with the master and chunkservers to read or write data on behalf of the application.<br />
Clients interact with the master for metadata operations, but all data-bearing communication goes directly to the chunkservers.<br />
We do not provide the POSIX API and therefore need not hook into the Linux vnode layer.</p>
<p>Neither the client nor the chunkserver caches file data.<br />
Client caches offer little benefit because most applications stream through huge files or have working sets too large to be cached.<br />
Not having them simplifies the client and the overall system by eliminating cache coherence issues. (Clients do cache metadata, however.)<br />
Chunkservers need not cache file data because chunks are stored as local files and so Linux’s buffer cache already keeps frequently accessed data in memory.</p>
</blockquote>
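<p>Since chunks are fixed-size and addressed by a chunk handle plus byte range, a client can compute which chunk an application byte offset falls in with simple integer arithmetic. A minimal sketch in Go, assuming the paper's 64 MB default chunk size; the function name is mine, not from GFS client code:</p>

```go
package main

import "fmt"

// chunkSize is GFS's default chunk size (64 MB per the paper).
const chunkSize = 64 * 1024 * 1024

// chunkIndexAndOffset translates an application-level file offset into
// the chunk index the client asks the master about, plus the byte
// offset within that chunk sent to the chunkserver.
func chunkIndexAndOffset(fileOffset int64) (index int64, offset int64) {
	return fileOffset / chunkSize, fileOffset % chunkSize
}

func main() {
	idx, off := chunkIndexAndOffset(200 * 1024 * 1024) // 200 MB into the file
	fmt.Println(idx, off)                              // chunk 3, offset 8 MB within it
}
```

<p>The master is only consulted for the (file name, chunk index) to chunk handle mapping; the byte range itself goes straight to a chunkserver.</p>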
<div style="break-before: page; page-break-before: always;"></div><h1 id="vm-ft"><a class="header" href="#vm-ft">VM-FT</a></h1>
<p><a href="./assets/vm-ft.pdf">VM-FT</a></p>
<h2 id="摘要-2"><a class="header" href="#摘要-2">Abstract</a></h2>
<blockquote>
<p>We have implemented a commercial enterprise-grade system for providing fault-tolerant virtual machines,
based on the approach of replicating the execution of a primary virtual machine (VM) via a backup virtual machine on another server.<br />
We have designed a complete system in VMware vSphere 4.0 that is easy to use,
runs on commodity servers, and typically reduces performance of real applications by less than 10%. In addition,
the data bandwidth needed to keep the primary and secondary VM executing in lockstep is less than 20 Mbit/s for several real applications, 
which allows for the possibility of implementing fault tolerance over longer distances.<br />
An easy-to-use, commercial system that automatically restores redundancy after failure requires many additional components beyond replicated VM execution.<br />
We have designed and implemented these extra components and addressed many practical issues encountered in supporting VMs running enterprise applications. In this paper,
we describe our basic design, discuss alternate design choices and a number of the implementation details, and provide performance results for both micro-benchmarks and real applications.</p>
</blockquote>
<div style="break-before: page; page-break-before: always;"></div><h1 id="raft"><a class="header" href="#raft">Raft</a></h1>
<blockquote>
<p>Ah Lok: Sit this election out and back me all the way. Once I've served my term, I'll back you all the way as the next chairman.<br />
Jimmy: Then we'll both be chairman. You call the shots, I call the shots, no need to fight.</p>
<p align="right">——the film Election 2 (《黑社会：以和为贵》)</p>
</blockquote>
<p>Learning Raft was a fairly painful process.</p>
<p>6.824 was my lab project of the year.</p>
<p>In 2020 I spent a week learning Go and finishing lab1. In 2021 I spent another week rewriting lab1 and attempted lab2, but failed.</p>
<p>Not until 2022 did I finally finish Lab2's Raft labs, working on and off for about a month.</p>
<p>While doing the labs I genuinely wondered whether I was cut out for this T_T. If you're clever, you're welcome to take up the challenge.</p>
<hr />
<p>The most painful part was reading the paper. Every time I finished it, I couldn't remember what I had read. The next time I picked it up, I didn't know where to start, so I read from the beginning again. After many rounds of this, I had read the Introduction several times over but never got to the body of the paper.</p>
<p>Later I found a trick: translate the paper myself. As long as whatever I translated was something I had actually understood, I could make linear progress through it.</p>
<p>There are plenty of Raft translations online, but translating it yourself deepens your understanding of the paper; reading someone else's translation doesn't guarantee you really get it.</p>
<p>While translating, I also recorded my lab process, split into 18 short sections. These notes follow those 18 sections as the main thread, to show you how I finished the lab bit by bit.</p>
<hr />
<p>Disclaimer: if some parts are hard to follow, that's normal. Raft has an enormous number of details; unless you get your hands dirty (read the paper + do the labs), it's hard to claim with confidence that you understand Raft.</p>
<h2 id="摘要-3"><a class="header" href="#摘要-3">Abstract</a></h2>
<blockquote>
<p>Raft is a consensus algorithm for managing a replicated log.</p>
<p>It produces a result equivalent to (multi-)Paxos, and it is as efficient as Paxos, but its structure is different from Paxos; this makes Raft more understandable than
Paxos and also provides a better foundation for building practical systems.</p>
<p>In order to enhance understandability, Raft separates the key elements of consensus, such as leader election, log replication, and safety, and it enforces a stronger degree of coherency to reduce the number of states that must be considered. Results from a user study demonstrate that Raft is easier for students to learn than Paxos.</p>
<p>Raft also includes a new mechanism for changing the cluster membership, which uses overlapping majorities to guarantee safety.</p>
</blockquote>
<h2 id="结论-2"><a class="header" href="#结论-2">Conclusion</a></h2>
<blockquote>
<p>Algorithms are often designed with correctness, efficiency, and/or conciseness as the primary goals.</p>
<p>Although these are all worthy goals, we believe that understandability is just as important. None of the other goals can be achieved until developers render the algorithm into a practical implementation, which will inevitably deviate from and expand upon the published form. Unless developers have a deep understanding of the algorithm and can create intuitions about it, it will be difficult for them to retain its desirable properties in their implementation.</p>
<p>In this paper we addressed the issue of distributed consensus, where a widely accepted but impenetrable algorithm, Paxos, has challenged students and developers for many years. </p>
<p>We developed a new algorithm, Raft, which we have shown to be more understandable than Paxos. </p>
<p>We also believe that Raft provides a better foundation for system building. Using understandability as the primary design goal changed the way we approached the design of Raft; as the design progressed we found ourselves reusing a few techniques repeatedly, such as decomposing the problem and simplifying the state space.</p>
<p>These techniques not only improved the understandability of Raft but also made it easier to convince ourselves of its correctness.</p>
</blockquote>
<p>While doing lab2, I looked at this figure countless times.</p>
<p><img src="./assets/raft_2.png" alt="raft_2" /></p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="感性认识raft"><a class="header" href="#感性认识raft">An Intuitive Look at Raft</a></h1>
<ul>
<li>
<p>Raft visualization<br />
<a href="http://thesecretlivesofdata.com/raft/">thesecretlivesofdata.com/raft</a></p>
</li>
<li>
<p>The Raft paper<br />
<a href="https://pdos.csail.mit.edu/6.824/papers/raft-extended.pdf">https://pdos.csail.mit.edu/6.824/papers/raft-extended.pdf</a></p>
</li>
<li>
<p>Lab handout<br />
<a href="https://pdos.csail.mit.edu/6.824/labs/lab-raft.html">https://pdos.csail.mit.edu/6.824/labs/lab-raft.html</a></p>
</li>
<li>
<p>Other useful material<br />
<a href="https://mit-public-courses-cn-translatio.gitbook.io/mit6-824/lecture-06-raft1">Lecture notes translation (Chinese)</a><br />
<a href="https://raft.github.io/">raft.github.io</a><br />
<a href="https://blog.josejg.com/debugging-pretty/">How to print-debug your multi-threaded program</a></p>
</li>
</ul>
<div style="break-before: page; page-break-before: always;"></div><h1 id="什么是raft"><a class="header" href="#什么是raft">What is Raft?</a></h1>
<h3 id="摘要-4"><a class="header" href="#摘要-4">Abstract</a></h3>
<p>Raft is a consensus algorithm for managing a replicated log. It achieves the same result as (multi-)Paxos and is equally efficient, but Raft adopts a structure completely different from Paxos, which makes it easier for people to understand and to implement in real systems.</p>
<p>To improve understandability, the designers decompose the consensus protocol into several key modules: leader election, log replication, and safety. In addition, these modules are designed with a strong degree of coherency, which reduces the number of states a server must consider.</p>
<p>A study found that Raft really is easier for students to understand than Paxos.</p>
<p>Finally, Raft also introduces a new mechanism for changing cluster membership, which uses overlapping majorities to guarantee safety.</p>
<h3 id="引言"><a class="header" href="#引言">Introduction</a></h3>
<p>A consensus algorithm lets a cluster of machines act as a coherent whole, like a single machine, continuing to serve even when some nodes fail. For this reason, consensus algorithms play a critical role in ensuring the reliability of large-scale software systems.</p>
<p>Over the past decade, Paxos has been the first thing people think of when consensus comes up: most concrete implementations of consensus in distributed systems are either based on Paxos or influenced by it to some degree, and in university classrooms Paxos is the standard vehicle for teaching consensus.</p>
<p>However, despite many attempts to explain Paxos, the protocol remains quite difficult to understand. Moreover, implementing it in a real system requires substantial engineering work. As a result, Paxos has caused a great deal of pain for programmers and students alike.</p>
<p>So, after their own struggle with Paxos, the authors set out to design a brand-new consensus algorithm, one that could provide a solid foundation both for teaching in academia and for building real systems in industry.</p>
<p>Notably, the authors' design philosophy puts understandability first. That is, throughout the design they kept asking: can we design a consensus algorithm that is, first of all, easier to understand and learn than Paxos, and also easier to implement in practice? They wanted the algorithm to match the intuitions of system designers.<br />
It matters that an algorithm works; it matters even more that people can clearly see how it works.</p>
<p>Out of all this came a consensus algorithm called Raft. The authors put a lot of care into understandability, for example by decomposing the system into modules (Raft separates the system into leader election, log replication, and safety) and by reducing the state space of each node (in both Raft and Paxos, servers are stateful and reach agreement by exchanging state; compared to Paxos, Raft eliminates many secondary, non-essential states, dramatically shrinking the number of states).</p>
<p>A survey showed that students at two universities generally found Raft easier to understand than Paxos: 43 students learned both algorithms and took a quiz, and 33 of them answered the Raft questions better than the Paxos ones.</p>
<p>Raft resembles existing consensus algorithms in many ways, but it also has several novel features:</p>
<ul>
<li><strong>Strong leader</strong>: compared with other consensus algorithms, Raft uses a stronger form of leadership. For example, log entries generally flow only from the leader to the other servers. This strong leadership simplifies the management of the replicated log and makes Raft easier to understand.</li>
<li><strong>Leader election</strong>: Raft adds randomized timers to elect leaders. Adding only a small timer mechanism on top of the heartbeats that every consensus algorithm already has resolves conflicts between servers simply and quickly.</li>
<li><strong>Membership changes</strong>: Raft uses a new joint consensus approach to change the set of servers in the cluster, in which the majorities of the two configurations must overlap during the transition. This mechanism lets the cluster keep operating normally while membership changes.</li>
</ul>
<p>The authors argue that Raft is superior to Paxos and other consensus algorithms, both for engineering and for teaching. Its design is particularly simple and easy to understand; for a practical system, the algorithm itself is described completely; its open-source implementations have been adopted by several companies; its safety has been formally specified and proven; and its performance is roughly comparable to other consensus algorithms.</p>
<p>The rest of the paper introduces the replicated state machine problem (Section 2), discusses the strengths and weaknesses of Paxos (Section 3), and describes the authors' work on the understandability of consensus algorithms (Section 4). Sections 5 through 8 present the Raft consensus algorithm; Section 9 evaluates Raft's properties; the final Section 10 discusses related work.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="复制状态机replicated-state-machine"><a class="header" href="#复制状态机replicated-state-machine">Replicated State Machines</a></h1>
<p>Consensus algorithms typically arise in the context of replicated state machines. With a replicated state machine, a cluster of servers jointly maintains identical copies of the same state machine, and the whole system keeps running even if some servers in the cluster go down.</p>
<p>Replicated state machines are used to solve fault-tolerance problems in distributed systems. For example, large single-leader systems such as GFS, HDFS, and RAMCloud ("single leader" meaning the leader is logically unique, though physically there may be more than one server) typically use a separate replicated state machine to manage leader election and to store configuration information that must survive a leader crash. Chubby and ZooKeeper are further examples of replicated state machines.</p>
<p><img src="./assets/raft_f1.png" alt="raft_f1" />
<em>Figure 1: Replicated state machine architecture. The algorithm in the consensus module manages a replicated log recording state machine commands from clients. The state machines take commands from their logs and execute them; because the consensus algorithm guarantees the command sequences in these logs are identical, the state machines all eventually produce the same outputs.</em></p>
<p>Replicated state machines are typically implemented using a replicated log. As shown in Figure 1, each server stores a log containing a series of commands, which its state machine executes in order. Every log records the same commands in the same order, so every state machine processes the same command sequence. Since the state machines are deterministic, each computes the same state and produces the same sequence of outputs.</p>
<p>The job of the consensus algorithm is to keep the replicated log consistent. The consensus module on a server receives commands from clients and appends them to its log. It communicates with the consensus modules on the other servers so that, even if some servers go down, every log eventually records the same series of requests in the same order. Once commands are properly replicated, each server's state machine processes them in log order and returns the outputs to the client. In the end, the whole server cluster appears to be a single, highly reliable state machine.</p>
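<p>The determinism argument can be made concrete: if two replicas apply the same log of commands in the same order, they end up in exactly the same state. A toy sketch in Go; the key-value command format is my own assumption for illustration, not from any paper:</p>

```go
package main

import "fmt"

// cmd is a toy "set key = value" state machine command.
type cmd struct {
	key   string
	value string
}

// apply runs a deterministic key-value state machine over a log of
// commands, strictly in log order. Identical logs yield identical state.
func apply(log []cmd) map[string]string {
	state := make(map[string]string)
	for _, c := range log {
		state[c.key] = c.value
	}
	return state
}

func main() {
	log := []cmd{{"x", "1"}, {"y", "2"}, {"x", "3"}}
	// Any replica applying this same log arrives at the same state.
	fmt.Println(apply(log)) // map[x:3 y:2]
}
```

<p>The consensus algorithm's only job in this picture is to make sure every replica sees the same <code>log</code> slice; determinism does the rest.</p>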
<p>Consensus algorithms for practical systems have the following properties:</p>
<ul>
<li>Safety. Under non-Byzantine conditions the algorithm never returns an incorrect result, even with network delays, partitions, packet loss, duplication, or reordering.</li>
<li>The system is available as long as a majority of the servers are operational and can communicate with each other and with clients. Thus a cluster of five servers can tolerate two server crashes. Servers are assumed to fail by stopping; they may later recover their state from persistent storage and rejoin the cluster.</li>
<li>The cluster does not depend on clocks to keep the logs consistent: in the worst case, faulty clocks and extreme message delays can cause serious availability problems.</li>
<li>In the common case, a command can complete as soon as a majority of the servers have responded to a single round of RPCs, so a minority of slow servers does not affect overall system performance.</li>
</ul>
<div style="break-before: page; page-break-before: always;"></div><h1 id="whats-wrong-with-paxos"><a class="header" href="#whats-wrong-with-paxos">What's wrong with Paxos?</a></h1>
<p>Over the past decade, the Paxos protocol proposed by Leslie Lamport has become almost synonymous with the word "consensus": speak of consensus protocols, and Paxos comes up. Paxos is widely taught in university courses, and most industry implementations of consensus start from it. Paxos first defines a protocol capable of reaching agreement on a single decision, such as a single replicated log entry. We call this subset single-decree Paxos. Paxos then combines multiple instances of it, for example assembling single replicated log entries into a whole log, to reach agreement on a series of decisions; this is multi-Paxos. Paxos ensures both safety and availability, and it supports changes in cluster membership. Its correctness has been proven, and it is reasonably efficient in the normal case.</p>
<p>Unfortunately, Paxos has two major drawbacks. First, Paxos is exceptionally difficult to understand. The full explanation of Paxos is notoriously opaque; very few people manage to understand it, and only with great effort. There are now several attempts to explain Paxos in simpler terms, but these are just as challenging to read. In an informal survey of attendees at NSDI 2012, we found that hardly anyone considers Paxos intuitive, not even veteran systems researchers. We struggled with Paxos ourselves; only after reading several simplified explanations and designing our own alternative protocol did we dare say we understood it, a process that took us nearly a year.</p>
<p>We believe Paxos' opaqueness stems from its choice of single-decree Paxos as its foundation. Single-decree Paxos is dense and subtle: it is divided into two stages, neither of which has an intuitive explanation, and neither of which can be understood on its own. It is therefore hard to develop an intuition for the single-decree protocol; we cannot explain how it works. Multi-Paxos only makes the protocol more obscure. We believe that reaching consensus on a series of decisions (i.e., a log rather than a single entry) can be decomposed in another way that is far more direct and obvious.</p>
<p>The second problem with Paxos is that it is very hard to implement. There is no widely agreed-upon algorithm for multi-Paxos. Lamport's papers mostly describe single-decree Paxos; he sketched approaches to multi-Paxos, but many details are missing. People have put considerable effort into fleshing out and optimizing Paxos, but these efforts differ from one another and from Lamport's original sketches. Systems such as Chubby have implemented Paxos-like algorithms, but their authors have not published many details.</p>
<p>Worse, the Paxos architecture is a poor fit for building practical systems, which is another consequence of the two-phase decomposition of the single-decree protocol. For example, there is no visible benefit to choosing a collection of independent log entries and then melding them into a sequential log; it only adds complexity. It is simpler and more efficient to design the system around a single log, appending new entries sequentially. Another problem with Paxos is that its core uses a symmetric peer-to-peer approach among servers (though at the end it suggests a weak form of leadership as a performance optimization). In an idealized world where only one decision is made, this does make sense. But when a series of decisions must be made, it is clearly simpler and faster to first elect a leader and have the leader coordinate the decisions.</p>
<p>The consequence is that practical systems built on Paxos all end up looking different from one another. Developers implementing consensus start from Paxos, discover how hard it is to implement, and are then forced to design an entirely different architecture. This makes development extremely time-consuming and error-prone, and the difficulty of understanding Paxos makes matters worse. Paxos' formal proof of correctness looks beautiful in theory, but concrete implementations by different developers diverge so much from Paxos that correctness is hard to guarantee in practice, and the theoretical proof loses much of its value. The Chubby developers' comment on Paxos is classic:</p>
<blockquote>
<p>"There are significant gaps between the description of the Paxos algorithm and the needs of a real-world system... the final system will be based on an unproven protocol."</p>
</blockquote>
<p>Because of all these problems, we concluded that Paxos does not provide a good foundation either for teaching or for real implementations in industry.</p>
<p>Given how important consensus is to large-scale software systems, we decided to see whether we could design a better alternative to Paxos. Thus Raft was born.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="向可理解性进军"><a class="header" href="#向可理解性进军">Designing for Understandability</a></h1>
<blockquote>
<p>&quot;It must be easy to understand!&quot;</p>
</blockquote>
<p>We had several goals in designing Raft: it must provide a complete and solid foundation for system implementation, greatly reducing the burden on developers; it must guarantee safety under all conditions and availability under most operating conditions; and it must be efficient for common operations. But the goal we valued most, and the hardest one, was this: it must be easy to understand. We had to make it possible for a large audience to understand the protocol without pain. That is, readers should be able to develop an intuitive feel for the algorithm, so that system developers can easily extend it when implementing it.</p>
<p>At many points in the design of Raft there were multiple candidate solutions. In those cases, following our principle that everything yields to understandability, we mainly weighed the following: how explainable is each candidate (for example, how complex is its state space, and does it have subtle assumptions)? How easy is it for a reader to completely understand the approach and its implications?</p>
<p>We recognize that this kind of analysis is highly subjective; nevertheless, we used two generally applicable techniques. The first is problem decomposition: wherever possible, we split a large problem into several smaller ones, each relatively independent, self-contained, and easy to understand, so each can be tackled on its own. In Raft, for instance, we decomposed the big problem into leader election, log replication, safety, and membership changes.</p>
<p>The second technique is to simplify the state space as much as possible, by reducing the states to consider, making the system more coherent, and avoiding nondeterministic approaches. Specifically, logs are not allowed to have holes, and Raft limits the ways logs can become inconsistent. However, although we tried our best to eliminate nondeterminism, there are cases where nondeterminism actually improves understandability. In particular, randomized approaches can dramatically shrink the state space (for example, during leader election every possible leader would lead to a different state; picking one at random gives the algorithm a symmetric simplicity). We used randomization to simplify Raft's leader election algorithm.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="raft共识算法零"><a class="header" href="#raft共识算法零">The Raft Consensus Algorithm (0)</a></h1>
<p>As Section 2 mentioned, Raft is an algorithm for managing a replicated log. Figure 2 summarizes the algorithm briefly for implementers' reference, and Figure 3 lists several of its properties. Every element of these figures is discussed in detail in this section.</p>
<p>Raft implements consensus by first having the servers elect a distinguished leader, then giving that leader full authority over the replicated log. The leader accepts log entries from clients, replicates them to the other servers, and tells the other servers when it is safe to apply those log entries to their state machines. A strong leader greatly simplifies the management of the replicated log. For example, the leader can decide which entries to append to the log without consulting the other servers, and data flows in a simple fashion from the leader to the other servers. If the leader goes down or becomes disconnected from the other servers, a new leader is elected.</p>
<p>Given this leader-based approach, Raft decomposes the consensus problem into three independent subproblems, each discussed in detail below.</p>
<ul>
<li><strong>Leader election</strong>: a new leader must be elected when the existing leader goes down.</li>
<li><strong>Log replication</strong>: the leader must accept log entries from clients and replicate them across the cluster, making the other servers' logs agree with its own.</li>
<li><strong>Safety</strong>: the key to Raft's safety is the state machine safety property shown in Figure 3: if any server has applied a particular log entry to its state machine, no other server may apply a different command at that same position. Section 5.4 describes how Raft ensures this; the solution adds an extra restriction to the election mechanism, described in Section 5.2.</li>
</ul>
<p>After presenting the consensus algorithm, this chapter discusses the issue of availability in detail and examines the role that timing plays in the system.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="raft共识算法一基础概念"><a class="header" href="#raft共识算法一基础概念">The Raft Consensus Algorithm (1): Basic Concepts</a></h1>
<p>A Raft cluster contains several servers; five servers is a typical configuration, which can tolerate at least two server crashes.<br />
<strong>At any given time, a server is in one of three states: leader, follower, or candidate. In normal operation there is exactly one leader, and all the other servers are followers.</strong><br />
Followers are the passive party: they only respond to leaders and candidates. The leader handles all client requests (if a follower receives a client request, it forwards the request to the leader).<br />
The candidate state mentioned above is used when electing a new leader, as Section 5.2 details. Figure 4 shows the server states and the transitions between them, which we discuss next.</p>
<p><img src="./assets/raft_f2.png" alt="raft_f2.png" />
<em>Figure 4: Server state transitions. Followers only respond to requests from other servers. If a follower receives no communication for a while, it becomes a candidate and attempts an election. A candidate that receives votes from a majority of the cluster becomes the new leader. A leader keeps running until it fails.</em></p>
<p><img src="./assets/raft_f3.png" alt="raft_f3" />
<em>Figure 5: Time is divided into terms, and each term begins with an election. A successful election means one leader manages the whole cluster until the end of the term. Sometimes an election fails, i.e., the term ends without a leader being chosen. From the point of view of different servers, the transitions between terms may be observed at slightly different times.</em></p>
<p>As shown in Figure 5, Raft divides time into terms, numbered with consecutive integers. Each term begins with an election, in which, as Section 5.2 describes, one or more candidates attempt to become leader. If a candidate wins the election, that server serves as leader for the rest of the term. Sometimes the vote splits evenly, in which case the current term has no leader; a new term begins shortly, and a new election is held. Raft ensures that in any given term, the cluster has at most one leader.</p>
<p>Different servers observe the transitions between terms at different times, and a server may even fail to observe an election during an entire term. In Raft, terms act as a kind of logical clock; they allow the cluster to detect stale information, such as messages from an old leader. Each server stores a current term number, which increases monotonically. Current terms are exchanged whenever servers communicate: if one server's current term is smaller than another's, it updates its current term to the larger value. If a candidate or leader discovers its term is out of date, it immediately reverts to the follower state. If a server receives a request with a stale term number, it rejects the request.</p>
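<p>The term rules here (adopt any larger term you see, step down to follower, reject stale terms) can be sketched as a small state transition. A minimal Go sketch under my own naming, not taken from any real implementation:</p>

```go
package main

import "fmt"

type role int

const (
	follower role = iota
	candidate
	leader
)

// server holds the piece of Raft state relevant to term handling.
type server struct {
	currentTerm int
	state       role
}

// observeTerm applies the rule from the paper: whenever a server sees a
// larger term in a request or response, it adopts that term and, even if
// it was a candidate or leader, immediately reverts to follower.
// It reports whether the observed term is stale, so the caller can
// reject requests carrying an out-of-date term.
func (s *server) observeTerm(term int) (stale bool) {
	if term > s.currentTerm {
		s.currentTerm = term
		s.state = follower
	}
	return term < s.currentTerm
}

func main() {
	s := &server{currentTerm: 5, state: leader}
	s.observeTerm(7) // seeing a higher term forces the leader to step down
	fmt.Println(s.currentTerm, s.state == follower)
}
```

<p>In a real implementation this check runs on every incoming RPC request and every RPC reply.</p>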
<p>Raft servers communicate using RPCs, and the basic consensus algorithm requires only two types. RequestVote RPCs are used by candidates to gather votes during elections, and AppendEntries RPCs are used by leaders to replicate log entries and to send heartbeats. Section 7 adds a third RPC type for transferring snapshots between servers. Servers retry RPCs if they do not receive a response within a certain time, and they issue RPCs in parallel for best performance.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="raft共识算法二选举leader"><a class="header" href="#raft共识算法二选举leader">The Raft Consensus Algorithm (2): Leader Election</a></h1>
<blockquote>
<p>I told them to find someone better qualified. I'm not being modest, but I honestly don't know how a server that was just a follower ended up elected leader.</p>
</blockquote>
<p>Raft uses a heartbeat mechanism to trigger elections. When servers start up, they begin as followers. A server remains in the follower state as long as it keeps receiving valid RPCs from a leader or candidate. The leader sends periodic heartbeats (AppendEntries RPCs carrying no log entries) to all followers to maintain its authority. If a follower receives no RPCs over a period of time, a situation called an election timeout, it assumes there is no longer a viable leader and that it is time to start an election to choose a new one.</p>
<p>To begin an election, a follower increments its current term and transitions to the candidate state. It then votes for itself and issues RequestVote RPCs in parallel to the other servers in the cluster. A candidate keeps this state until one of three things happens:</p>
<ul>
<li>It wins the election.</li>
<li>Another server becomes leader.</li>
<li>A period of time passes with no server winning. (The candidate increments its term and goes for the next election.)</li>
</ul>
<p>These outcomes are each discussed in detail below.</p>
<p>A candidate wins the election when it receives votes from a majority of the servers in the cluster. Each server votes for at most one candidate in a given term, on a first-come-first-served basis (Section 5.4 adds a stricter restriction on voting). Once a candidate wins the election, it becomes leader. The leader then sends periodic heartbeats to the other servers to maintain its authority and prevent new elections.</p>
<p>While waiting for votes, a candidate may receive an AppendEntries RPC from another server claiming to be leader. If the leader's term in the RPC is at least as large as the candidate's current term, the candidate recognizes the leader as legitimate and returns to the follower state. If the term in the RPC is smaller than the candidate's current term, the candidate rejects the RPC and remains in the candidate state.</p>
<p>The third possibility is that the candidate neither wins nor loses: in the same term, many followers have become candidates, and the votes are split among them, so no server obtains a majority. When this happens, each candidate sets a timeout; when the timeout fires, it holds a new election, which means incrementing its term again and sending another round of RequestVote RPCs. Without extra measures, however, split votes could repeat indefinitely.</p>
<p>Raft uses randomized election timeouts to reduce the probability of split votes. This randomization also ensures that even when a split vote does occur, the servers resolve it quickly. To prevent split votes in the first place, election timeouts are chosen randomly from a fixed interval (e.g., 150 to 300 ms). In most cases this means exactly one server will time out first, win the election, and send heartbeats before any other server times out. The same randomization handles split votes when they do occur: each candidate waits a fresh random interval, and only when that timeout elapses does it try the election and the vote-gathering again. Section 9.3 shows that this approach elects a leader quickly.</p>
<p>Elections are a good illustration of how much we valued the understandability of the protocol. Initially we planned to use a ranking system: each candidate would be assigned a rank, used to prioritize them when they competed. If a candidate discovered another candidate with a higher rank, it would revert to the follower state, making it easier for the higher-ranked candidate to win the election. We found that this approach created a subtle availability problem (if the high-ranked candidate failed to win, a lower-ranked server might have to wait a while before it could become a candidate again, and even a short wait could be far too long). We adjusted the algorithm several times, but each adjustment produced new edge cases. We eventually concluded that randomized retry is more obvious and understandable.</p>
<hr />
<blockquote>
<p><strong>Extra notes</strong><br />
If a split vote occurs:<br />
the candidate fails to win and must wait until its next election timeout fires before trying to gather votes again.<br />
In other words, once a follower becomes a candidate, either it gets elected, or it receives a RequestVote with a term larger than its own and reverts to the follower state.</p>
<p>Having read this far, we can try to implement lab2a.
I think two points matter most:</p>
<ul>
<li>The interval before retrying an election must be set much larger than the heartbeat period, but not too large, or the cluster may go a long time without electing a leader;</li>
<li>When sending network requests, waiting for replies can take a long time, so this must be programmed asynchronously; the main task must not block.</li>
</ul>
</blockquote>
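<p>The second tip, sending RPCs without blocking the main task, is usually done in Go with one goroutine per peer. A toy sketch; <code>requestVote</code> here is a hypothetical stand-in for the lab's real labrpc call, with fake delays and fake vote outcomes:</p>

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// requestVote is a hypothetical stand-in for the real RPC: peer p replies
// after a simulated delay, and even-numbered peers grant their vote.
func requestVote(peer int) bool {
	time.Sleep(time.Duration(peer) * 10 * time.Millisecond) // fake network delay
	return peer%2 == 0                                      // fake vote outcome
}

// collectVotes contacts every peer in its own goroutine, so a slow peer
// never delays contacting the others, and tallies votes as replies arrive.
func collectVotes(peers int) int32 {
	votes := int32(1) // the candidate votes for itself
	done := make(chan struct{}, peers)
	for p := 0; p < peers; p++ {
		go func(p int) {
			if requestVote(p) {
				atomic.AddInt32(&votes, 1)
			}
			done <- struct{}{}
		}(p)
	}
	for p := 0; p < peers; p++ {
		<-done
	}
	return atomic.LoadInt32(&votes)
}

func main() {
	fmt.Println(collectVotes(4)) // self vote plus votes from peers 0 and 2
}
```

<p>A real candidate would not wait for every reply: it wins as soon as the tally passes a majority, and it must also abandon the count if its term changes in the meantime.</p>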
<div style="break-before: page; page-break-before: always;"></div><h1 id="raft共识算法三日志备份log-replication"><a class="header" href="#raft共识算法三日志备份log-replication">The Raft Consensus Algorithm (3): Log Replication</a></h1>
<blockquote>
<p>The leader has been elected; how does it serve requests from the layer above?<br />
And if the leader goes down, how does the cluster recover quickly from the crash?</p>
</blockquote>
<p>Once a leader has been elected, it begins servicing client requests. Each client request contains a command to be executed by the replicated state machines.<br />
The leader appends the command to its log as a new entry, then issues AppendEntries RPCs in parallel to each of the other servers to replicate the entry.<br />
When the entry has been safely replicated (we define safety below), the leader applies the entry to its state machine and returns the result of that execution to the client.<br />
<strong>If followers crash or run slowly, or if network packets are lost, the leader retries AppendEntries RPCs indefinitely until all followers eventually store all log entries.</strong></p>
<p><img src="./assets/raft_f3_3.png" alt="raft_f3_3" /></p>
<p><img src="./assets/raft_f6.png" alt="raft_f6" /></p>
<p>Logs are organized as shown in Figure 6. Each log entry from the leader stores a state machine command along with its term number. The term number is used to detect inconsistencies between logs and to ensure some of the Raft properties in Figure 3. Each log entry also has an integer index identifying its position in the log.</p>
<p>The leader decides when it is safe to apply an entry to the state machines; an entry in this state is called committed.<br />
Raft guarantees that committed entries are durable and will eventually be executed by all of the servers' state machines.<br />
Once the leader has replicated an entry on a majority of the servers, the entry becomes committed (e.g., entry 7 in Figure 6).<br />
This also commits all preceding entries in the leader's log, including entries created by previous leaders.<br />
Section 5.4 discusses some subtleties, such as what the rules for entries should be after a leader change, and shows that this commitment strategy is safe.<br />
The leader keeps track of the highest log index known to be committed and includes that index in its AppendEntries RPCs (including heartbeats), so the other servers know how far things have progressed.<br />
Once a follower learns that certain entries should become committed, it applies them to its local state machine in log order.</p>
<p>We designed Raft's log mechanism so that the logs on different servers maintain a high level of coherency.<br />
This not only simplifies the system's behavior and makes it more predictable; more importantly, it is key to safety.</p>
<p>Raft maintains the following two properties at all times, which together constitute the Log Matching Property of Figure 3:</p>
<ul>
<li>If two entries in different servers' logs have the same index and term, they store the same command.</li>
<li>If two entries in different servers' logs have the same index and term, then the two logs are identical in all preceding entries.</li>
</ul>
<p>The first property follows from the fact that when a leader creates an entry, it gives it a fixed log index and term, and a log entry never changes its position in the log.<br />
The second property is guaranteed by a simple consistency check in AppendEntries: when sending new entries, the leader includes the index and term of the entry that immediately precedes them in its log.<br />
If the follower does not find an entry with that index and term in its own log, it rejects the new entries.<br />
This consistency check works as an induction step:<br />
the initial empty logs satisfy the Log Matching Property, and an append is performed only when the appended entries preserve the property. (It works much like mathematical induction.)<br />
As a result, whenever AppendEntries returns successfully, the leader knows the follower's log agrees with its own.</p>
<p><img src="./assets/raft_7.png" alt="raft_7" />
<em>Figure 7: When the leader at the top comes to power, the followers may be in any of scenarios (a) through (f).
Each bar represents a log; each box in a bar is a log entry, and the number in the box is its term.
A follower may be missing entries (a and b), may have extra uncommitted entries (c and d), or both (e and f).
For example, here is one way scenario (f) could occur: suppose that server was the leader for term 2, added several entries to its log, then crashed before committing any of them;
suppose it restarted quickly, became the leader for term 3, and added a few more entries to its log;
and suppose that, before any of the term 2 or term 3 entries were committed, it crashed again and stayed down for the next several terms.</em></p>
<p>In normal operation the leader's and followers' logs stay consistent, so the AppendEntries consistency check never fails. However, leader crashes can leave the logs inconsistent (for example, the old leader may not have fully replicated all the entries in its log).<br />
These inconsistencies can compound over a series of leader and follower crashes.</p>
<p>As Figure 7 shows, a follower's log may differ from the new leader's. A follower may have fewer entries than the leader, or more, or both at once.<br />
These missing and extra entries may span multiple terms.</p>
<p>In Raft, the leader handles inconsistencies by forcing the followers' logs to agree with its own. This means that conflicting entries in a follower's log are overwritten with the leader's entries. Section 5.4 shows that, combined with one more restriction, this is safe.</p>
<p>To make a follower's log consistent with its own, the leader must find the latest entry on which the two logs agree, determine where the common prefix ends, delete the entries in the follower's log after that point,
and, based on its own log, send the follower all of the leader's entries after that point.<br />
All of these actions happen through the AppendEntries consistency check. The leader maintains a nextIndex for each follower: the index of the next entry it will send to that follower.<br />
When a leader first comes to power, it initializes every follower's nextIndex to the index just after the last entry in its own log (11 in Figure 7). If a follower's log is inconsistent with the leader's, the next AppendEntries request fails the consistency check. The follower rejects the AppendEntries request, and the leader decrements nextIndex and retries AppendEntries. Eventually the leader's and follower's logs converge.<br />
Convergence means the AppendEntries request returns successfully; at that point the conflicting entries in the follower's log are removed, and the entries from the leader's log (if any) are appended. Once AppendEntries succeeds, the follower's log is consistent with the leader's, and they can work happily together for the rest of the term.</p>
<p>If desired, the protocol can be optimized by reducing the number of rejected AppendEntries requests. For example, when rejecting an AppendEntries request, the follower can include in its reply the term of the conflicting entry and the index of the first entry it stores for that term. With this information, the leader can skip past all the conflicting entries in that term, decrementing nextIndex straight to the index the follower sent; one AppendEntries RPC then covers an entire term's worth of conflicting entries, instead of one RPC per entry. In practice, we doubt this optimization is really necessary, since failures are so infrequent that there will not be many inconsistent entries.</p>
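<p>The nextIndex backoff can be simulated against two term sequences. A sketch in Go; the real leader performs each probe as a separate AppendEntries RPC rather than a local loop, and the variant I used in the lab doubles the step on each retry instead of decrementing by one:</p>

```go
package main

import "fmt"

// findMatch simulates the leader's retry loop: starting from nextIndex
// just past the end of its log, back off by one each time the follower's
// consistency check would fail, until the logs agree. Indexes are 1-based.
func findMatch(leaderTerms, followerTerms []int) int {
	nextIndex := len(leaderTerms) + 1 // just past the leader's last entry
	for nextIndex > 1 {
		prev := nextIndex - 1
		if prev <= len(followerTerms) && followerTerms[prev-1] == leaderTerms[prev-1] {
			break // the consistency check would succeed here
		}
		nextIndex-- // rejection: back off and retry
	}
	return nextIndex
}

func main() {
	leader := []int{1, 1, 4, 4, 5}
	follower := []int{1, 1, 2, 2}            // diverged from index 3 onward
	fmt.Println(findMatch(leader, follower)) // 3: resend leader entries 3..5
}
```

<p>On success, the follower truncates its entries from the match point onward and appends the leader's, which is exactly the forced-agreement step described in the text.</p>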
<p>With this mechanism, a leader does not need to take any special action to restore log consistency when it comes to power. It simply begins normal operation, and the logs converge through the AppendEntries consistency check. A leader never overwrites or deletes its own log entries (the Leader Append-Only Property mentioned in Figure 3).</p>
<p>This log replication mechanism exhibits the desirable consensus properties described in Section 2:</p>
<ul>
<li>Raft can accept, replicate, and apply new log entries as long as a majority of the servers are up;</li>
<li>in the normal case, a new entry can be replicated across the cluster in roughly a single round of RPCs;</li>
<li>and a single slow follower does not affect overall system performance.</li>
</ul>
<hr />
<blockquote>
<p><strong>Extra notes</strong>:<br />
Reading this part raised a lot of questions for me.</p>
<ul>
<li>
<p><strong>Everything is fine during normal operation, but after a crash, how do we handle diverging logs?</strong><br />
The paper says that through the AppendEntries consistency check, the logs gradually converge. This is something we need to design carefully.</p>
</li>
<li>
<p><strong>Another question: can a follower's log end up longer than the leader's?</strong><br />
Yes, it can. That is why the paper adds the election restriction, which further guarantees log consistency. The election restriction appears in the next two sections.</p>
</li>
<li>
<p><strong>Also, when the leader and followers synchronize logs via AppendEntries, how do we optimize performance?</strong><br />
I did not use the approach from the paper or the 824 lectures; instead I doubled the step size on each retry. This also passed lab2's performance tests and is simpler to implement.</p>
</li>
</ul>
<p>With all this, we can go do 6.824's lab2b.</p>
</blockquote>
<div style="break-before: page; page-break-before: always;"></div><h1 id="raft共识算法四安全性和选举限制"><a class="header" href="#raft共识算法四安全性和选举限制">The Raft Consensus Algorithm (4): Safety and the Election Restriction</a></h1>
<blockquote>
<p>Tony Leung Ka-fai: We all drive Mercedes and Rolls-Royces; you drive a Mazda, no wonder you got stuck in traffic.<br />
If you drive a Mazda, you have no standing at this meeting at all.<br />
You were eleven minutes late, which means you don't take this meeting seriously, so why should we treat you as a brother? Go home and wait by the phone; we'll let you know!</p>
<p align="right">——the film Island of Greed (《黑金》)</p>
</blockquote>
<blockquote>
<p>Raft nodes: Our logs are all up to date; yours is stale, no wonder you can't win any votes.<br />
With a stale log, you have no standing to be leader at all.<br />
As a follower/candidate that has fallen behind, you clearly don't take this election seriously, so why should we treat you as a brother?<br />
Go home and wait for RPC requests; we'll let you know!</p>
</blockquote>
<p>The previous sections discussed how Raft elects a leader and replicates log entries. However, the mechanisms described so far are not sufficient to guarantee that the state machines execute their commands in the same order.<br />
For example, suppose there are entries that a follower does not know the leader has committed; then a new leader comes along, sends commands, and overwrites all those entries with new ones; as a result, different servers' state machines would execute different command sequences.</p>
<h3 id="选举限制"><a class="header" href="#选举限制">The Election Restriction</a></h3>
<p>In any leader-based consensus algorithm, the leader must eventually store all of the committed log entries.<br />
In some consensus algorithms, such as Viewstamped Replication, a leader can be elected even if it does not contain all the committed entries.<br />
These algorithms add extra mechanisms to ensure the missing entries are handed to the new leader, either during the election or shortly after it succeeds.
Unfortunately, this introduces additional mechanism and system complexity.<br />
Raft takes a simpler approach to guarantee that a newly elected leader holds all the entries committed in previous terms, without any extra entry-transfer mechanism.<br />
This means log entries flow in only one direction, from leader to followers, and a leader never overwrites existing entries in its log.</p>
<p>Raft's election process prevents a candidate from winning unless its log contains all the committed entries.<br />
A candidate must win votes from a majority of the cluster, which means every committed log entry is stored on at least one of the servers in that majority.<br />
If the candidate's log is at least as up-to-date as the logs in that majority (we define "up-to-date" next), then it holds every committed entry.<br />
We add the following restriction to the RequestVote RPC:<br />
<strong>the RPC includes information about the candidate's log, and if the voting server finds that the candidate's log is less up-to-date than its own, it refuses to grant its vote.</strong></p>
<p>In Raft, two logs are compared for up-to-dateness using the term and index of their last entries.</p>
<ul>
<li>If the last terms differ, the log with the larger term is more up-to-date.</li>
<li>If the terms are the same but the last_index values differ, the log with the larger last_index is more up-to-date.</li>
<li>If both the term and last_index are the same, the two logs are equally up-to-date.</li>
</ul>
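<p>These three rules amount to a lexicographic comparison on (last term, last index). A sketch in Go; the function name is mine:</p>

```go
package main

import "fmt"

// moreUpToDate implements the voting rule: compare the logs' last terms
// first, then their last indexes. A voter grants its vote only if the
// candidate's log is at least as up-to-date as its own, i.e. only if
// moreUpToDate(voter, candidate) is false.
func moreUpToDate(lastTermA, lastIndexA, lastTermB, lastIndexB int) bool {
	if lastTermA != lastTermB {
		return lastTermA > lastTermB
	}
	return lastIndexA > lastIndexB
}

func main() {
	// A voter whose log ends at (term 5, index 10) versus a candidate
	// whose log ends at (term 4, index 20):
	fmt.Println(moreUpToDate(5, 10, 4, 20)) // true: the voter refuses its vote
}
```

<p>Note that a longer log does not help the candidate here: the last term dominates the comparison, which is exactly what rules out stale candidates like those in Figure 8.</p>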
<div style="break-before: page; page-break-before: always;"></div><h1 id="raft共识算法五如何提交之前term里的entry"><a class="header" href="#raft共识算法五如何提交之前term里的entry">The Raft Consensus Algorithm (5): Committing Entries from Previous Terms</a></h1>
<blockquote>
<p>&quot;Raft never commits log entries from previous terms by counting replicas. Only log entries from the leader's current term are committed by counting replicas.&quot;</p>
</blockquote>
<p><img src="./assets/raft_f8.png" alt="raft_f8" /></p>
<p><em>图8. 该时间序列展示了为什么一个leader是无法通过log来确定过去term的entry是否被提交的。(a)时刻，S1是leader，它把entry复制到了部分服务器，如S2里。(b)时刻，S1宕机；S5成为了新的leader，它的term是3，由S3、S4以及它自己选出，并且在index2上接收了一个新的entry。(c)时刻，S5宕机，S1重启，并被选为了leader，于是它继续进行日志备份，此刻，它成功地把它的entry2备份到大部分的server上，但是未作提交。(d)时刻，假设S1又宕机，S5被S2、S3、S4选成了leader，然后把index2的entry全部覆写成了term3的命令。然而，我们再来作另一种假设，如果S1成功提交了entry2，在平行世界里的(e)时刻，由于entry2已经被提交，S5不可能再赢得选举，e时刻里所有的entry是可以被正常提交的。</em></p>
<p>Section 5.3 said that once a leader determines an entry from its current term is safely stored on a majority of the servers, it commits that entry.<br />
If a leader crashes before committing an entry, future leaders will attempt to finish replicating it.<br />
However, a leader cannot immediately conclude that an entry from a previous term is committed just because it is stored on a majority of servers.<br />
Figure 8 illustrates this situation: an old entry is stored on a majority of servers, yet can still be overwritten by a new leader.</p>
<p>To avoid the problem in Figure 8, Raft never commits log entries from previous terms by counting replicas.<br />
Only entries from the leader's current term are committed by counting replicas;<br />
once a current-term entry has been committed this way, all earlier entries are, by the Log Matching Property, committed indirectly.<br />
There are indeed situations where a leader could determine through communication that an older entry is committed (for example, if an entry is stored on every server, it can be considered committed), but Raft takes a more conservative approach for the sake of simplicity.</p>
<p>Because log entries retain their original term numbers, allowing a leader to replicate entries from previous terms introduces this extra complexity into Raft's commitment rules.<br />
In other consensus algorithms, if a new leader re-replicates entries from earlier terms, it must rewrite those entries with its new term number.<br />
Raft's approach is simpler, because an entry's term number stays the same over time. In addition, compared with other algorithms, new leaders in Raft send fewer log entries from previous terms. (In other algorithms, a new leader must send redundant log entries just to renumber the ones left over by the previous leader.)</p>
<hr />
<blockquote>
<p><strong>Extra notes 1</strong>:</p>
<p>By this point in the translation I was getting more and more confused. The paper never mentions network partitions; it only ever asks what happens when a leader/follower crashes.<br />
Suppose a follower has not crashed but is cut off by a network partition and stops receiving the leader's heartbeats.<br />
By the paper's logic, it will switch to the candidate state and keep incrementing its term.<br />
When the partition heals, the term carried in its RequestVote RPCs will be larger than every server's current term.<br />
And by the logic of the paper's Figure 2, any server that sees a term larger than its own in a request or response must update its term to that value and become a follower.</p>
<p>This is bound to trigger a new election.</p>
<p>Yet 6.824's lab2B explicitly points out that one cause of bugs may be exactly this: the remaining followers starting a new election while the leader is still alive.</p>
<p>After asking around in a study group, I learned that Raft's PhD dissertation addresses this with PreVote.</p>
<p>Also, on raft.github.io someone built a Raft visualization. Simulating a partition there shows that a partitioned follower does keep incrementing its term and then triggers a new election. If you implement exactly what the paper says, this unreasonable-looking situation really does occur.
<img src="./assets/raft_e_1.png" alt="raft_e1" />
Even when it occurs, though, it does not violate the system's consistency. It is just hard to accept emotionally, and system performance may suffer because of it.</p>
</blockquote>
<hr />
<blockquote>
<p><strong>Extra notes 2</strong>:<br />
This subsection is still rather confusing.<br />
This Zhihu article explains it very clearly:
<a href="https://zhuanlan.zhihu.com/p/369989974">Raft 的 Figure 8 讲了什么问题？为什么需要 no-op 日志？</a></p>
<p>Figure 8 is actually describing a low-probability corner case.<br />
If we allowed a leader to commit earlier log entries by counting the number of replicas, this bad corner case could arise.<br />
See the figure below ⬇️: after situations (a), (b), and (c) occur in sequence,
either (d1) or (d2) may follow.</p>
<p>That is, at time (c), S1, which created the entry at index 2 as the leader of term 2, has finished committing that entry and is about to propagate this information to S3 when it suddenly crashes.<br />
S5 can then still be elected, since that does not violate the election restriction. S5 may then receive a client request and overwrite the log entries of S1 and S2 at index 2 with a new log entry.</p>
<p>This must not be allowed. To rule out this situation, the paper says:<br />
<em>&quot;Raft never commits log entries from previous terms by counting replicas. Only log entries from the leader's current term are committed by counting replicas.&quot;</em></p>
<p>With this restriction, at time (c), even though the log entry at index 2 is replicated on a majority of the nodes, S1 cannot commit it, because that entry does not belong to S1's current term.<br />
S1 can only wait until the entry at index 3 with term 4 is replicated on a majority of the nodes; then it can commit the entry belonging to its own term, and the log entry at index 2 is committed indirectly.</p>
<p>You might ask: after S1 is elected, what if the client never sends a command in the current term? Does S1 then never get to commit its earlier entries?<br />
To solve this, as mentioned in the section on interacting with clients, the leader commits a blank no-op entry into its log right after it is elected, and the problem goes away.</p>
<p><img src="./assets/raft_f3.7.png" alt="raft_f3.7.png" /></p>
</blockquote>
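<p>本节"只有leader当前term的entry才能通过计算备份数量来提交"的限制，可以用一小段Go代码来示意。以下是按我自己的理解写的草图，<code>Raft</code>、<code>matchIndex</code>、<code>maybeAdvanceCommit</code>等命名均为假设，并非论文或6.824 lab框架里的原型：</p>

```go
package main

// LogEntry、Raft 等类型为示意用的假设定义，非 6.824 框架原型。
type LogEntry struct {
	Term    int
	Command interface{}
}

type Raft struct {
	currentTerm int
	log         []LogEntry // log[0] 为占位，entry 从 index 1 开始
	commitIndex int
	matchIndex  []int // 各 follower 已复制到的最大 index
}

// maybeAdvanceCommit 尝试推进 commitIndex：
// 只有当某个 index 上的 entry 属于 currentTerm，
// 且已被过半节点复制时，leader 才能提交它（及其之前的全部 entry）。
func (rf *Raft) maybeAdvanceCommit() {
	for n := len(rf.log) - 1; n > rf.commitIndex; n-- {
		if rf.log[n].Term != rf.currentTerm {
			continue // 关键限制：不通过计数提交之前 term 的 entry
		}
		count := 1 // leader 自己算一票
		for _, m := range rf.matchIndex {
			if m >= n {
				count++
			}
		}
		if count > (len(rf.matchIndex)+1)/2 {
			rf.commitIndex = n // n 之前的 entry 被间接提交
			return
		}
	}
}

func main() {}
```

<p>对照图8的场景：即使之前term的entry已经复制到了过半节点，上面的<code>continue</code>也会跳过它；只有当前term的新entry过半之后，旧entry才随之被间接提交。</p>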
<div style="break-before: page; page-break-before: always;"></div><h1 id="raft共识算法六安全性定理"><a class="header" href="#raft共识算法六安全性定理">Raft共识算法（六）——安全性定理</a></h1>
<p>通过给定的Raft算法，我们现在可以认为该算法具有Leader完整性（Leader Completeness Property）了（这个定理会在9.2节给出证明）。<br />
采取反证法：我们首先假设Leader完整性不成立，然后推出矛盾。<br />
假设term T的leader（leader_T）在其任期内提交了一个日志条目，但是这个entry并没有被之后某个term的leader保存在它的log里。<br />
设这个不含该entry的leader所在的任期为term U，且term U &gt; term T。</p>
<p><img src="./assets/raft_9.png" alt="raft_9" />
<em>图9: 如果S1是term_T的leader，它在它的任期term_T内提交了一个entry，S5是term_U的leader。那么必然存在一个server，如图中的S3所示，它既备份了S1的entry，投票给了S5。</em></p>
<ol>
<li>leader_U在当选的那一刻，它的log里一定没有这个已提交的entry（因为leader从不删除或覆盖自己的entry）。</li>
<li>leader_T将这个entry复制到了集群的大多数节点上，而leader_U收到了大多数server的选票。因此，至少有一个server既保存了leader_T的entry，又投票给了leader_U，如图9所示。这个server是推出矛盾的关键。</li>
<li>这个投票的server我们称之为voter。它在投票给leader_U之前，一定已经接受并保存了leader_T的这个已提交entry；否则的话，它一定会拒绝leader_T的AppendEntries请求（因为如果它先投票给了leader_U，它的current term就会比leader_T的AppendEntries请求里的term要大）。</li>
<li>这个voter投票给leader_U的时候，它仍然保存着这个entry，因为（根据假设）从term_T到term_U之间的每一个leader都保留了这个entry：leader从来不删entry，而follower只会删除与leader冲突的entry。</li>
<li>voter投票给了leader_U，因此leader_U的log至少要和voter的一样新（up-to-date）。这将导出两个矛盾中的一个。</li>
<li>第一，假设voter和leader_U拥有相同的last log term，那么leader_U的log至少要和voter的一样长，因此leader_U的log包含了voter log里的全部entry。这就产生了矛盾，因为voter里有一个已提交的entry是leader_U所没有的。</li>
<li>另外一种情况是，leader_U的last log term比voter的大，那么它必然也大于T，因为voter的last log term至少是T（它存有term T的entry）。那么，创建leader_U最后那条entry的更早的leader，根据假设，它的log里一定包含这条已提交的entry。于是根据日志匹配性，leader_U的log也必须包含这条已提交的entry，矛盾。</li>
<li>既然两种可能的情况都会导出矛盾，假设就不成立。因此，加了选举限制规则后，term T之后所有term的leader一定都包含term T里已提交的全部日志。</li>
<li>日志匹配性保证了未来的leader一定会包含所有已经提交的entry，不管是直接提交还是间接提交的。就像图8里(d)的index2那样。</li>
</ol>
<p>根据leader完整性，我们就可以证明图3中的状态机安全性（State Machine Safety）了。<br />
状态机安全性是指：如果一个server已经把某个index上的entry应用（apply）到了状态机，那么其他server就不会在相同的index上应用一条不同的entry。<br />
当server把一条entry应用到它的状态机时，它的log从头到这条entry为止必须和leader的log一致，且这条entry必须已被提交。<br />
现在考虑任何一个server应用某个给定index的entry时的最小term：Leader完整性保证了所有更高term的leader都会存储同一条entry，因此在更高term里应用这个index的server，应用的也一定是相同的entry。<br />
于是，状态机安全性就得以保证了。</p>
<p>最后，raft要求必须以index的顺序追加entry。结合状态机安全性，这意味着所有的server会往它们的状态机里追加相同的log entry，且以相同的顺序进行追加。</p>
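<p>"按index顺序apply"这一点可以用如下Go草图来示意（<code>Server</code>、<code>applyCommitted</code>等命名均为假设，只为说明所有server会以相同顺序把相同entry喂给状态机）：</p>

```go
package main

// 示意：按 index 递增的顺序依次 apply 已提交的 entry。
// 类型与字段命名均为假设，非论文或 lab 框架里的定义。
type Entry struct {
	Term    int
	Command int
}

type Server struct {
	log         []Entry // log[0] 为占位
	commitIndex int
	lastApplied int
	state       []int // 状态机：按顺序记录 apply 过的 command
}

// applyCommitted 把 (lastApplied, commitIndex] 区间内的 entry
// 严格按 index 递增的顺序应用到状态机。
func (s *Server) applyCommitted() {
	for s.lastApplied < s.commitIndex {
		s.lastApplied++
		s.state = append(s.state, s.log[s.lastApplied].Command)
	}
}

func main() {}
```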
<blockquote>
<p><strong>额外的话</strong>：<br />
所以照这么说，raft需要保证所有可能被提交的日志条目都要存储在所有的server上面？</p>
<p><img src="./assets/raft_e2.png" alt="raft_e2" />
那么那些可能没有被提交的entry，会不会也会被莫名其妙地变成提交态了呢？<br />
貌似是会的。
在824里的lecture，老师也提到了这个情况。<br />
比如如上场景，index为12，term为5的entry可能并没有被leader_5提交，但是经过leader_6的一番操作后，最终结果会变成下图。
<img src="./assets/raft_e3.png" alt="raft_e3" />
于是leader6提交了leader5的entry。</p>
</blockquote>
<div style="break-before: page; page-break-before: always;"></div><h1 id="raft共识算法七如果followercandidate宕机了"><a class="header" href="#raft共识算法七如果followercandidate宕机了">Raft共识算法（七）——如果follower/candidate宕机了</a></h1>
<p>至今我们都在关注leader宕机的情况。而follower/candidate宕机处理起来则更为简单，处理follower和candidate宕机的情况是一样的。<br />
如果一个follower/candidate宕机了，发给它的AppendEntries和RequestVote请求就会失败。<br />
对于failure的问题，raft采用的是无限重试的方法。如果宕机的server重启后，它就又能处理RPC请求了。<br />
如果server在处理RPC请求后，更新了自身的状态，还没来得及返回就宕机了，它就会在重启后收到一个相同的RPC请求。<br />
每一个RPC请求都是幂等的，所以这不会有什么问题。<br />
举个例子，如果follower收到了一个AppendEntries请求，请求参数包含了之前已经追加过的log entry，follower就会直接无视掉这个新的请求。</p>
<blockquote>
<p>论文至此翻译了一半，此时已经是我写Raft实验的第12天，正卡在lab2b。</p>
<p>这个时候我看到了一篇blog，<a href="https://blog.josejg.com/debugging-pretty/">如何优雅的打印多线程调试程序</a>。</p>
<p>里面提到，py3有个rich库，可以将日志信息通过管道，传输给python程序，用rich里的函数把美化后的调试信息输出到终端。</p>
<p>本来我的调试信息长这样：</p>
<p><img src="./assets/raft_e4.png" alt="raft_e4" /></p>
<p>用了他的脚本之后，重新定制自己print函数，于是终端上的调试信息变成了这样：</p>
<p><img src="./assets/raft_e5.png" alt="raft_e5" /></p>
<p>有了这个脚本，我觉得我的多线程编程还能再抢救一下。</p>
</blockquote>
<div style="break-before: page; page-break-before: always;"></div><h1 id="raft共识算法八时间与可用性"><a class="header" href="#raft共识算法八时间与可用性">Raft共识算法（八）——时间与可用性</a></h1>
<p>我们设计Raft时，一个基本要求就是安全性不依赖于时间：不能因为某些事件发生得比预期更快或者更慢，系统就产生不正确的结果。
然而，可用性则不可避免地会和时间有关系（可用性是指系统能在一个可接受的时间内响应client的请求）。</p>
<p>举个例子，如果server宕机的间隔比系统内server之间交换消息的时间还要短，candidate就坚持不到赢得选举的那一刻，那也就可能长时间选不出一个leader；
如果没有一个稳定的leader，Raft就谈不上什么可用性。</p>
<p>Raft的选举模块里，时间对系统来说很重要。
只有满足如下条件，raft里的server才能选出一个稳定的leader并且这个leader能够尽可能长时间地保持它的系统话事权，从而满足上述所说的时间要求：</p>
<p>$$ broadcastTime \ll electionTimeout \ll MTBF $$</p>
<p>在这个不等式里，broadcastTime是指一个server并行地向集群其他server发送RPC请求并且收到它们响应的时间；
electionTimeout是指5.2小节中提到过的选举超时时间；而MTBF是一个单节点server的两次宕机的平均时间间隔。<br />
broadcastTime应该比electionTimeout小一个数量级，这样leader才能稳定地发送心跳包来阻止follower发起选举；<br />
又由于选举超时时间是随机的，每个server的超时时间都不一样，就能够避免平分选票的情况出现。<br />
而electionTimeout又应该比MTBF小几个数量级，这样系统才能稳定地对外提供服务。<br />
当leader宕机了，系统会在一段时间内丧失可用性，这个丧失可用性的时间间隔应该和选举超时的时间相当。<br />
我们姑且认为这是系统对外提供服务的时间长河里的一段可以忽略不计的小插曲。</p>
<p>广播时间和MTBF都算是更底层系统的属性，而选举超时时间则是我们可以人为设置的。<br />
Raft的RPC里的信息通常需要持久化到存储设备上，因此广播时间基本上从0.5ms到20ms不等，取决于存储技术的好坏（不考虑网络通信的时间吗？）。<br />
因此，选举超时时间大概可以设置在10ms到500ms不等。通常来说server的MTBF是几个月甚至更久，因此设置好选举超时时间，就可以很容易让上述不等式成立。</p>
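<p>上述不等式落到lab实现里，通常就是在一个区间内随机取选举超时时间。以下是一份示意性的草图，具体数值（100ms心跳、250~400ms超时区间）只是符合论文建议范围的假设值，并非标准参数：</p>

```go
package main

import (
	"math/rand"
	"time"
)

// 论文建议 electionTimeout 取 10ms~500ms；这里的具体数值只是示例，
// 满足 broadcastTime ≪ electionTimeout（心跳周期远小于超时区间下限）。
const (
	heartbeatInterval  = 100 * time.Millisecond
	electionTimeoutMin = 250 * time.Millisecond
	electionTimeoutMax = 400 * time.Millisecond
)

// randomElectionTimeout 每次调用返回 [min, max) 内的一个随机超时，
// 随机化保证各 server 的超时时间错开，从而避免平分选票。
func randomElectionTimeout() time.Duration {
	span := int64(electionTimeoutMax - electionTimeoutMin)
	return electionTimeoutMin + time.Duration(rand.Int63n(span))
}

func main() {}
```

<p>每次选举定时器被重置时都重新取一次随机值，而不是整个进程只随机一次，这样连续几轮选举也不容易反复撞车。</p>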
<blockquote>
<p><strong>额外的话</strong>：<br />
论文读到这里，发现自己在实现2A的时候，很多细节理解的不对，再加上当时没有好的调试工具，代码写的乱七八糟。只能推翻重写。第二次写给自己设置了很多原则：</p>
<ol>
<li>增量开发，确保每一步都是可以被测试的。</li>
<li>及时panic，如果有想不明白的地方，直接panic，之后再handle it。不至于出现一个bug，对着调试信息看半天，然后意识到是自己之前没有实现的逻辑，那里还写了一个todo。</li>
<li>debug信息要清晰。使用mit助教给的调试脚本。</li>
<li>开发前充分想清楚测试用例。</li>
<li>未经测试的代码都是错误的。未经测试的代码都是错误的。未经测试的代码都是错误的。<br />
&gt; Test it.</li>
<li>如果有些问题没有想清楚，与其直接对着电脑硬来，不如先在纸上写伪代码并在大脑里进行验证，后一种更让人长寿。</li>
</ol>
</blockquote>
<div style="break-before: page; page-break-before: always;"></div><h1 id="成员变更"><a class="header" href="#成员变更">成员变更</a></h1>
<p>因为824的lab里面没有关于成员变更的部分。故省略。</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="日志压缩"><a class="header" href="#日志压缩">日志压缩</a></h1>
<p>随着client的请求越来越多，raft的log也会越来越长。在实际的系统中，没有什么server的日志是可以无限增长的。
随着log变得越来越长，log会占用越来越多的空间，server重启的时间也会变长。我们需要定期地把log里过期的数据清理掉，要不然server总有一天会产生可用性问题。</p>
<p>Snapshotting（快照策略）是最简单的压缩方法。系统当前的完整状态会作为一份snapshot（快照）写入持久化的存储设备中，
如此一来，截至快照点的全部log就都可以丢弃了。<br />
快照策略被应用在Chubby和ZooKeeper上，在Raft中，我们也使用快照策略。本章将进行详细阐述。</p>
<p>另一类增量压缩策略，像是log cleaning、log-structured merge trees（LSM树），也是可行的解决方案。<br />
增量策略每次只处理数据里的一小部分，这样就可以把压缩的负载均摊到每一次操作上。<br />
具体来说，这种策略会先选出一块累积了大量已删除、已覆盖对象的数据区域，把其中仍然存活的对象（live objects）紧凑地重写到别处，然后释放相应的区域。<br />
相比快照，这需要复杂得多的额外机制；而快照每次都直接作用于整个数据集，问题因此被简化。<br />
如果要引入log cleaning，需要对raft本身进行修改；而状态机可以在与快照相同的接口之上实现LSM树。</p>
<p><img src="./assets/raft_12.png" alt="raft_12" />
<em>图12: server将已提交的entry（index从1到5）制作成快照，并只存储当前状态（如变量x和y）。快照里最后一条entry的index和term提供了下一个entry的位置，在图中指entry6。</em></p>
<p>图12展示了Raft快照的基本思想。每个server独立管理快照，只有commit过的entry才能加入快照。<br />
大部分的工作在于状态机把它的当前状态写入快照。<br />
Raft还在快照里存了一些metadata：</p>
<ol>
<li>the last included index，即快照里保存的最后一个entry在log里的index（最后一个状态机apply的entry）;</li>
<li>the last included term，即最后这个entry的term。</li>
</ol>
<p>这些信息是为了支持AppendEntries的一致性检查所设置的。<br />
为了支持成员变更（第6章的内容），快照还需要包含像last included index这样最新的log配置信息。<br />
一旦server将快照写入完毕，它就可以把log里last included index及之前的entry全部删除了，之前的旧快照也可以一并删除。</p>
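<p>"写完快照后截断log"这一步可以用如下Go草图来示意（<code>Snap</code>、<code>compact</code>等命名均为假设；这里把log简化成只存term的切片，只为展示index偏移的处理）：</p>

```go
package main

// 示意：server 写完快照后，丢弃 lastIncludedIndex 及之前的 entry，
// 只在快照元数据里保留 lastIncludedIndex / lastIncludedTerm。
// 字段命名为假设，非 lab 框架定义。
type Snap struct {
	LastIncludedIndex int
	LastIncludedTerm  int
	State             []byte
}

type Node struct {
	log  []int // 简化：只存每个 entry 的 term，下标即 index（0 为占位）
	snap Snap
}

// compact 把 index <= lastIncluded 的 entry 从 log 里截掉。
// 截断后，真实 index i 对应的数组下标变为 i - lastIncluded。
func (n *Node) compact(lastIncluded int, state []byte) {
	n.snap = Snap{
		LastIncludedIndex: lastIncluded,
		LastIncludedTerm:  n.log[lastIncluded],
		State:             state,
	}
	// 保留下标 0 作为新的占位（即快照点本身）
	n.log = append([]int{n.log[lastIncluded]}, n.log[lastIncluded+1:]...)
}

func main() {}
```

<p>截断之后"真实index"和"数组下标"不再相等，这正是lab2D里大量边界bug的来源。</p>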
<p>尽管server通常都独立地处理快照，但是还是存在leader需要给follower发送快照的情况的。<br />
这是因为当leader丢弃掉即将发送给follower的下一个entry的时候，它就需要发送快照了。<br />
（这句话看了好几遍，才明白啥意思。就是leader即将要发给follower的entry，它要把这个entry放进快照里了。理论上它可以把这个entry从log里删除，但是它删不得，因为还要给follower发entry。所以为了解决这个问题，leader应该给follower发送快照）<br />
幸运的是，这种情况不太会在正常的情况下发生：follower会包含leader拥有的全部entry。然而，意外时有发生，当一个很慢的follower或者一个新的server加入集群之后，follower就没有leader的entry。<br />
这个时候，如果想要把follower的日志更新到leader的那种程度，leader就需要通过网络给follower发送快照了。</p>
<p align="center"><img  width="50%" src="./assets/raft_f13.png" alt="raft_f13" /></p>
<p>leader用一个新的RPC请求来给掉队的follower发送快照，我们将这个RPC请求称之为InstallSnapshot。<br />
如图13所示，当follower通过RPC收到快照之后，它必须决定该对它现有的log entries做些什么。<br />
通常情况下，快照会包含一些接受方log里没有的新信息。这种情况下，follower直接丢掉它自己全部的log，进而那些与快照产生冲突的未提交的entry会被快照所覆盖。<br />
另一种情况是，由于重传或者别的什么错误，follower收到的快照是自己log的子集，那被快照覆盖的log entries就会被删除，但是快照后面的entry则仍然保留。（译者：十分合理。）</p>
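<p>follower收到快照后的这两种取舍，可以用如下Go草图示意（<code>handleSnapshot</code>等命名为假设，并非InstallSnapshot RPC的真实实现，只演示"保留后缀还是整体作废"的判断）：</p>

```go
package main

// 示意 follower 收到 InstallSnapshot 后的两种处理：
// 1) 快照是自己 log 的前缀（如重传导致）→ 删除被覆盖的 entry，保留其后的；
// 2) 否则快照包含新信息 → 丢弃整个 log。
type E struct{ Index, Term int }

func handleSnapshot(log []E, lastIncludedIndex, lastIncludedTerm int) []E {
	for i, e := range log {
		// 在自己 log 里找到与快照末尾吻合的 entry：保留其后的部分
		if e.Index == lastIncludedIndex && e.Term == lastIncludedTerm {
			return append([]E(nil), log[i+1:]...)
		}
	}
	// 快照比自己的 log 新，整个 log 作废
	return nil
}

func main() {}
```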
<p>这种快照策略与leader的强势话事权不同，因为follower会在leader不知情的情况下创建快照。<br />
不过我们认为这样其实也无伤大雅。毕竟设置leader的初衷是避免冲突/达成共识，既然快照里保存的都是达成了共识的状态，那也就不存在什么冲突需要leader介入了。<br />
数据依然只从leader流转至follower，但是follower自行对数据进行reorganize。</p>
<p>我们考虑过一种替代的基于话事人的快照方案，即只有话事人leader创建快照，然后发送给所有的跟随者。<br />
但是这样做有两个缺点。</p>
<ol>
<li>发送快照会浪费网络带宽并且延缓了快照处理的时间。<br />
每个跟随者都已经拥有了所有产生快照需要的信息，而且很显然，自己从本地的状态中创建快照比通过网络接收别人发来的要经济。</li>
<li>话事人的实现会更加复杂。例如，话事人需要发送快照的同时并行的将新的日志条目发送给跟随者，这样才不会阻塞新的客户端请求。</li>
</ol>
<p>有两个额外的问题会影响快照性能。</p>
<ol>
<li>
<p>server必须决定什么时候创建快照。<br />
如果server创建快照太频繁，它会浪费大量的磁盘带宽和能耗；<br />
如果server创建快照的次数太少，它可能会耗尽存储空间，且重启时重放log会占用很长的时间。<br />
一种简单的策略是设置一个固定的大小（fixed size），一旦log达到这个阈值，就创建快照。如果这个阈值设置得比一个快照的预期容量大很多，则快照带来的磁盘带宽负载就会相应地小。</p>
</li>
<li>
<p>第二个影响性能的点是，写快照会占据大量的时间，我们不希望写快照会影响正常的操作。<br />
一种解决方案是使用<strong>copy-on-write写时拷贝技术</strong>。<br />
如此一来，新的update可以被立即接收，而不影响正在写入的快照。<br />
举个例子，具有函数式数据结构的状态机就支持这种功能；另外，操作系统的写时拷贝支持（如linux中的fork）也可以被用来创建状态机的内存快照（我们的实现就是基于此的）。</p>
</li>
</ol>
<blockquote>
<p>第一次看这一节的时候，根本不知道日志压缩和快照策略在说什么。</p>
<p>后来才发现，原来raft作为application的一个模块，需要对它的上层调用者，即应用程序（注意不是client）提供服务。</p>
<p>有一类请求就是应用程序告诉raft，“我已经制作快照了，所以commandIndex之前的log你都不用保留了，保留这个快照就好”。</p>
<p>由于log的大小往往远远大于快照的大小（因为快照保存的是状态机里的状态），于是raft要持久化的数据就变小了。</p>
<p>824的lab2D实验就是关于快照策略的。</p>
</blockquote>
<h2 id="raft_diagram-架构图"><a class="header" href="#raft_diagram-架构图">raft_diagram-架构图</a></h2>
<p><img src="./assets/raft_diagram.png" alt="raft_diagram" /></p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="与client的交互"><a class="header" href="#与client的交互">与Client的交互</a></h1>
<p>本小节将阐述client是如何与raft进行交互的，包括client是如何发现集群leader，以及raft是如何支持线性一致性语义（linearizable semantics）的。<br />
这些问题都是基于共识协议系统的基础问题，而raft的解决方案和其他的系统也都大差不差。</p>
<p>raft的客户端只向leader发送请求。当一个client开始启动后，它先随机连接系统里的一个server。如果client与之通信的server不是一个leader，这个server就会拒绝响应client的请求，并且提供leader（如果有）的信息。</p>
<blockquote>
<p>If the client’s first choice is not the leader, that server will reject the client’s request and supply information about the most recent leader it has heard from (AppendEntries requests include the network address of the leader).<br />
【论文里这句话很绕，supply information about the most recent leader it has heard from(AppendEntries requests include the network address of the leader)，提供最近给它发送心跳包的那个leader的信息，AppendEntries请求里包含了leader在网络里的地址】</p>
</blockquote>
<p>如果leader宕机了，client的请求就会超时；client就会重新随机选一个server进行通信。</p>
<p>Raft的另一个目标是实现线性化语义（linearizable semantics）：每个操作看起来都像是在其调用时刻与响应时刻之间的某个点上，瞬时且恰好执行一次。<br />
然而，论文写到这里，raft其实还是有可能会重复把一个command执行多次的。（是的！作为一个尝试复现raft的读者，我也逐渐开始意识到这个问题）<br />
<strong>举个例子，如果leader在提交entry之后宕机，但没来得及向client发送response，client就会尝试重新向新的leader发送这条command，这样的话就会导致一条命令执行了两次。</strong><br />
解决这个问题的关键就是client给每一个命令分配一个唯一的序列号。如此一来，状态机可以为每个client记录它已执行的最新序列号，以及对应的response。<br />
如果server收到了一个该序列号已经被执行过的命令，它就会立即返回而不再重复执行这条请求。</p>
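<p>这个去重表可以用如下Go草图示意（<code>Op</code>、<code>KV</code>、<code>Apply</code>等命名均为假设；lab3里一般也是用类似的clientId→序列号映射来处理重复command）：</p>

```go
package main

// 示意去重表：状态机为每个 client 记录已执行的最新序列号及其 response。
// 字段命名为假设，非 6.824 框架原型。
type Op struct {
	ClientId int64
	Seq      int64
	Value    string
}

type KV struct {
	data     map[string]string
	lastSeq  map[int64]int64  // clientId -> 已执行的最大序列号
	lastResp map[int64]string // clientId -> 上一次的返回值
}

func NewKV() *KV {
	return &KV{
		data:     map[string]string{},
		lastSeq:  map[int64]int64{},
		lastResp: map[int64]string{},
	}
}

// Apply 执行一条 append 命令；重复的 (ClientId, Seq) 直接返回旧结果，不再执行。
func (kv *KV) Apply(key string, op Op) string {
	if op.Seq <= kv.lastSeq[op.ClientId] {
		return kv.lastResp[op.ClientId] // 已执行过：幂等返回
	}
	kv.data[key] += op.Value
	kv.lastSeq[op.ClientId] = op.Seq
	kv.lastResp[op.ClientId] = kv.data[key]
	return kv.data[key]
}

func main() {}
```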
<p>只读操作不会更改log里的数据。<br />
然而，如果没有额外的措施，可能会存在client读到过期数据的风险，因为leader响应的这个request的时候，可能它已经被另一个新的leader取代了而不自知。<br />
线性化的读操作一定不能返回过期数据；raft需要在不使用log的情况下，通过两个额外的预防措施来保证这一点。<br />
第一，leader必须知道哪些entry已经被提交了。<br />
Leader完整性保证了leader拥有全部的已提交entry，但是在term最开始的时候，它可能会不确定一些entry是否已经被提交。<br />
为了搞清楚这个事情，新的leader需要在它的term内去提交一条新的entry。<br />
Raft规定，每个term里，刚竞选成功的leader在一开始，需要在log里提交一个空的no-op entry，从而来解决这个问题。<br />
第二，leader在处理一个只读的请求之前，必须先确认它自己是否已经被罢免了（如果一个新leader被选出来了，那么它所读到的数据可能就是一个过期数据）。<br />
Raft规定，如果一个leader要处理一个只读请求，那它必须先发送心跳信息，当它收到集群中过半的返回消息之后，才能去处理这个只读请求，从而保证它不会返回过期数据。<br />
另一种可选方案是心跳机制可以提供一个租约（lease），leader依据这个租约来响应请求，但是如此一来的话时间就和安全性挂钩了（可能会有时钟偏差）。</p>
<blockquote>
<p><strong>额外的话</strong>：<br />
这一小节解决了我的若干疑惑。</p>
<ol>
<li>Q: client向系统发送请求，系统成功commit了，但是返回超时；client再发送相同的命令，系统应该怎么办？<br />
A: raft的请求应该是保证幂等的，幂等的意思是 <code>f(f(x)) = f(x)</code>，如果系统里的leader收到client的命令，发现这个命令它已经处理过了，那它就直接返回true，但不做任何处理。</li>
<li>Q: leader如何知道哪些entry已经被提交了？<br />
A: 为了搞清楚这个事情，新的leader需要在它的term内去提交一条新的entry。Raft规定，每个term里，刚竞选成功的leader在一开始，需要在log里提交一个空的no-op entry，从而来解决这个问题。
（这个和raft官网上的可视化模型表现不符。官网上的演示模型，如果不提交新的request，leader就不commit前任leader留下的entry。）<br />
<strong>在824的lab框架里，client发送的command是interface类型。我为了实现no-op特性，声明了一个type叫no-op。</strong><br />
此外，为了适配这种特性，我还特地区分了commandIndex和logIndex，commandIndex是给上层调用者client看的，logIndex是raft节点保存日志用的。<br />
不过我参考了网上其他人的代码，别人好像都没这么做。</li>
<li>Q: 如何避免leader给client返回过期的数据？<br />
A: <strong>Raft规定，如果一个leader要处理一个只读请求，那它必须先发送心跳信息，当它收到集群中过半的返回消息之后，才能去处理这个只读请求，从而保证它不会返回过期数据。</strong><br />
另一种可选方案是心跳机制可以提供一个租约（lease），leader依据这个租约来响应请求，但是如此一来的话时间就和安全性挂钩了（可能会有时钟偏差）。</li>
</ol>
<p>但是仍然有一些怪怪的地方，比如说我使用官网的演示模型。在如下情况中，leader宕机，entry不过半的情形下。</p>
<p><img src="./assets/raft_e6.png" alt="raft_e6" /></p>
<p>我可以构造出两种不一样的结果：term2里的entry被保留/被覆写。<br />
如下图：term2的entry被保留。</p>
<p><img src="./assets/raft_e7.png" alt="raft_e7" /></p>
<p>抑或是如下图：term2的entry被覆写。</p>
<p><img src="./assets/raft_e8.png" alt="raft_e8" /></p>
<p>我想，这篇论文里描述的raft安全性，应该是指，如果一个entry已经被提交。<br />
那么在大部分的server的log里面，在与这个entry相同的位置上，一定不会存在另一个不同内容的entry。<br />
而上述的这种情形，不违反论文里规定的安全性定义。</p>
</blockquote>
<blockquote>
<p>有了本节的知识，我们可以去做lab3了。
其实lab2的test给lab3提供了很好的参考。</p>
</blockquote>
<div style="break-before: page; page-break-before: always;"></div><h1 id="实验时遇到的bug"><a class="header" href="#实验时遇到的bug">实验时遇到的bug</a></h1>
<h3 id="隐晦的死锁问题"><a class="header" href="#隐晦的死锁问题">隐晦的死锁问题</a></h3>
<p><img src="./assets/raft_e9.png" alt="raft_e9" /></p>
<p>在for循环里面，cond获取到了锁，但是接下来for循环的条件不成立，于是函数在持有锁的情况下退出。</p>
<p>解决方法：for循环结束之后将锁释放掉。</p>
<blockquote>
<p>这个bug超级隐晦。我甚至一度以为mit给的测试脚本有bug，不过到最后发现，原来还是自己的问题。<br />
或者我写了一个量子程序，因为bug出现之后，当我加了几行打印语句之后再运行，测试程序又没bug了。<br />
并发程序真的太难写了！
可能rust更适合写并发程序吧，毕竟在rust里，锁是有生命周期的，过了它的生命周期，锁可以自动被释放。</p>
</blockquote>
<h3 id="乱序的网络请求"><a class="header" href="#乱序的网络请求">乱序的网络请求</a></h3>
<p>刚实现完lab2b的时候，每跑100遍就fail一次，找了一天。</p>
<p align="center"><img width="60%" src="assets/raft_e10.jpeg" alt="raft_e10" /></p>
<p>发现是follower会收到leader过期的AppendEntries请求，这个请求有之前提交过的entry，然后follower响应这个过期的请求，根据图中的第三条规则，就把自己的entry给删掉了一部分。</p>
<p>当这种网络请求被乱序接收后，follower没有保证幂等。</p>
<p>我通过给每个网络请求加一个SeqId，然后才发现的这个问题。</p>
<p><img src="./assets/raft_seq.png" alt="raft_seq" /></p>
<p>查看guide发现，Q&amp;A（https://thesquareplanet.com/blog/raft-qa/）里有一个一模一样的问题，回答如下</p>
<blockquote>
<p><strong>Note that the rule starts with “if an existing entry conflicts”.</strong><br />
即如果有冲突的话，再进行delete操作。这样的话哪怕收到过期请求，也能够保证幂等了。</p>
</blockquote>
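<p>按这条规则，"只在冲突时才删除"的追加逻辑大致可以写成下面这样的Go草图（<code>appendEntries</code>等命名为假设，非lab框架原型），这样即使收到乱序、过期的AppendEntries请求，处理也是幂等的：</p>

```go
package main

// 示意图2第3条规则的正确实现：只有当现有 entry 与新 entry 冲突
// （index 相同、term 不同）时才截断 log；已存在且一致的 entry 原样保留。
type Ent struct{ Term int }

// appendEntries 把 entries 追加到 log 的 prevIndex 之后（log[0] 为占位）。
func appendEntries(log []Ent, prevIndex int, entries []Ent) []Ent {
	for i, e := range entries {
		idx := prevIndex + 1 + i
		if idx < len(log) {
			if log[idx].Term == e.Term {
				continue // 已存在且一致：什么都不做（幂等）
			}
			log = log[:idx] // 冲突：删除该 entry 及其之后的所有 entry
		}
		log = append(log, e)
	}
	return log
}

func main() {}
```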
<h3 id="我的程序有点慢"><a class="header" href="#我的程序有点慢">我的程序有点慢</a></h3>
<p>网上的大佬说用go的channel来做这个实验会更优雅。于是我一开始使用go的channel进行多线程之间的同步。</p>
<p>但是由于不太会使用channel，导致go的协程一直阻塞在channel上，并不会被系统回收。随着程序运行得越久，无法被回收的协程就越多，最终程序变得越来越慢。</p>
<p>网上称这个问题为&quot;go routine泄漏&quot;。</p>
<p>后来把channel相关的代码全部删除，使用更熟悉的lock/mutex来实现论文里的全部逻辑。才通过测试。</p>
<p>当然熟悉go channel的大佬可以忽略这条。</p>
<h3 id="记得go-test--race"><a class="header" href="#记得go-test--race">记得go test -race</a></h3>
<p>一个Raft节点并行运行的任务非常的多，所以一旦发生数据竞争的问题，程序必出bug。</p>
<p>解决方法就是上一把大锁，只有获取这个大锁的协程才能读取/修改Raft节点里的状态。</p>
<p>一定要保证自己的程序可以通过go test -race的测试，否则发现测试程序fail之后，debug半天然后发现是由于数据竞争导致的，那就得不偿失了。</p>
<h3 id="活锁问题"><a class="header" href="#活锁问题">活锁问题</a></h3>
<blockquote>
<p>根据<a href="https://thesquareplanet.com/blog/students-guide-to-raft/">guide</a><br />
Make sure you reset your election timer <strong>exactly</strong> when Figure 2 says you should.<br />
Specifically, you should <strong>only</strong> restart your election timer if</p>
<p>a) you get an AppendEntries RPC from the <strong>current</strong> leader (i.e., if the term in the AppendEntries arguments is outdated, you should <strong>not</strong> reset your timer);</p>
<p>b) you are starting an election;</p>
<p>c) you <strong>grant</strong> a vote to another peer.</p>
</blockquote>
<p>Guide里说只有三种情况才会重置server的选举定时器。</p>
<p>原因是如果在lab2b的日志复制模块里，加上选举限制之后，有资格成为leader的server会变少。</p>
<p>因此如果这个server的选举定时器一直被重置的话，则它没有机会成为leader。</p>
<p>如此一来会造成“活锁”。<strong>活锁的意思是，所有server都在工作，但是整个系统没有任何实质进展（do not make any progress）。</strong></p>
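<p>guide里的三条重置规则可以用如下Go草图示意（<code>Peer</code>、<code>onAppendEntries</code>等命名为假设）。关键是：过期leader的AppendEntries被拒绝时，定时器不能被重置：</p>

```go
package main

import "time"

// 示意：只在 guide 说的三种情况下重置选举定时器（命名为假设）。
type Peer struct {
	currentTerm int
	lastReset   time.Time
}

func (p *Peer) resetElectionTimer() { p.lastReset = time.Now() }

// onAppendEntries 返回是否接受该请求；只有来自"当前"leader
// （args 里的 term 不过期）的心跳才会重置定时器。
func (p *Peer) onAppendEntries(argsTerm int) bool {
	if argsTerm < p.currentTerm {
		return false // 过期leader：拒绝，且不重置定时器
	}
	p.currentTerm = argsTerm
	p.resetElectionTimer()
	return true
}

// onGrantVote 在投出赞成票时调用（只投票、不赞成时不重置）。
func (p *Peer) onGrantVote() { p.resetElectionTimer() }

// onStartElection 在自己发起选举时调用。
func (p *Peer) onStartElection() {
	p.currentTerm++
	p.resetElectionTimer()
}

func main() {}
```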
<h3 id="快照策略给raft引入了巨大的复杂性"><a class="header" href="#快照策略给raft引入了巨大的复杂性">快照策略给raft引入了巨大的复杂性</a></h3>
<p>因为加入快照策略后，被快照的日志条目就索引不到了。</p>
<p>因此raft节点的状态需要引入额外的三个变量：</p>
<ul>
<li><strong>lastIncludedLogIndex</strong>: for servers to update</li>
<li><strong>lastIncludedTerm</strong>: term of lastIncludedCommandIndex</li>
<li><strong>lastIncludedCommandIndex</strong>: the snapshot replaces all entries up through and including this index</li>
</ul>
<p>并且重新修改论文中图2的逻辑，仔细地处理边界情况。</p>
<p>老师在lecture里说，他不太懂InstallSnapshot RPC的规则6说的是啥意思，他可能觉得如果收到了很老的InstallSnapshot请求，那直接丢弃就好了。我觉得也是。</p>
<h3 id="处理client请求"><a class="header" href="#处理client请求">处理client请求</a></h3>
<p>如果client发过来的命令太久没有提交，则超过一个timeout之后，直接返回命令执行成功。<br />
要不然没法通过测试。</p>
<p>甚至网上也有大佬在leader收到client的command就立即返回true。<br />
但我对这种处理方式存疑。</p>
<h3 id="重复的command"><a class="header" href="#重复的command">重复的command</a></h3>
<p>另外一个细节是，存在server已经执行过command，但是client没有收到回复，重复发送command的情况。<br />
根据论文里所提到的，raft算法本身不能很好地避免这个问题。<br />
它给出的解决方案是，client给每一个command一个序列号。但是6.824的测试框架中，command是一个interface类型，并没有序列号一说。</p>
<p>我们在做lab的时候可以不去实现&quot;client给每一个command一个序列号&quot;这个逻辑，最终也可以通过测试。</p>
<h3 id="选举定时与心跳周期的参数设置"><a class="header" href="#选举定时与心跳周期的参数设置">选举定时与心跳周期的参数设置</a></h3>
<p>如果参数设置的不好，可能会出现平分选票的情况。</p>
<p>不过如果实现逻辑没问题的话，在一个定性的区间内设置就行，不需要太考究参数的具体值，保证选举定时器的timeout是发送心跳包周期的2～3倍即可。</p>
<h3 id="死锁问题"><a class="header" href="#死锁问题">死锁问题</a></h3>
<p>写lab3时遇到的，KvServer里的线程在等Raft里的线程释放rf.mu，而Raft里的线程在等待KvServer里的线程释放kv.mu，造成了循环等待。<br />
参考了lab2的测试代码，发现这两个锁可以相互独立地获取，不必耦合在一起。</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="总结"><a class="header" href="#总结">总结</a></h1>
<p>在被Raft一番毒打，经过不停地 「写bug -&gt; 测试 -&gt; 看日志 -&gt; 修bug」之后，终于通过了lab2的abcd所有测试。</p>
<p>只是lab3和lab4还会用到lab2的内容，不知道做后面的实验的时候，会不会再回来改lab2的bug。</p>
<p>从git log上来看，从重写2A到写完2B的代码，大概用了10天...<br />
对lab2C又debug了4天左右，最后摆烂了半个月，又花了两三天实现了2D，共计30天完成这个实验。<br />
其实基本的代码一两天就写完了，很长一段时间都是在等待测试结果 + 看日志找bug，发现日志信息打印的不完整/有bug，然后再写一两行代码，如此循环。</p>
<p>不禁感慨细节是魔鬼，由于写的是并发程序，因此raft算法的正确性不是一目了然的。得等到熟悉各个模块之后，才能意识到里面的并发问题。比如说，leader一上来需要发一个no-op指令，从而提交之前follower的命令等。</p>
<blockquote>
<p><strong>raft对外提供的api</strong>：</p>
<p>raft花了大量的篇幅介绍如何保证replicated log的一致性，但是对于如何保证replicated machine的一致性，则没有作过多的介绍。需要我们基于raft的日志复制模块自己去构建相应的服务。</p>
<p>同时论文在Section8提到了raft如何与client进行交互，但是并没有规定对外提供的api具体是什么。
从824的lab3可以看到，raft节点作为server，对外提供的api有get/set/append。<br />
etcd也是这么干的。</p>
<p>而像zookeeper提供的api则更加丰富，那么zookeeper的api是否更优呢？</p>
</blockquote>
<blockquote>
<p><strong>线性一致性</strong>：</p>
<p>论文的section8提到了线性一致性语义（linearizable semantics）。<br />
什么是线性一致性？</p>
</blockquote>
<div style="break-before: page; page-break-before: always;"></div><h1 id="zookeeper-wait-free-coordination-for-internet-scale-systems"><a class="header" href="#zookeeper-wait-free-coordination-for-internet-scale-systems">Zookeeper: Wait-free coordination for Internet-scale systems</a></h1>
<h1 id="zk-general-purpose-coordination-service"><a class="header" href="#zk-general-purpose-coordination-service">ZK: General-Purpose Coordination Service</a></h1>
<h2 id="摘要-5"><a class="header" href="#摘要-5">摘要</a></h2>
<blockquote>
<p>In this paper, we describe ZooKeeper, a service for coordinating processes of distributed applications. Since ZooKeeper is part of critical infrastructure, ZooKeeper
aims to provide a simple and high performance kernel for building more complex coordination primitives at the client. It incorporates elements from group messaging,
shared registers, and distributed lock services in a replicated, centralized service. The interface exposed by ZooKeeper has the wait-free aspects of shared registers with
an event-driven mechanism similar to cache invalidations of distributed file systems to provide a simple, yet powerful coordination service.</p>
</blockquote>
<p>在这篇论文里，我们描述了ZK作为分布式应用的协调器的工作原理。</p>
<p>ZK对于互联网基建很重要。它致力于提供一种简单高效的内核，以供客户端去构建更加复杂的协调原语。</p>
<p>ZK在一个多副本的、中心化的服务里，融合了组群消息（group messaging）、共享寄存器和分布式锁服务的元素。ZK暴露的接口既具有共享寄存器的无等待（wait-free）特性，又带有类似分布式文件系统缓存失效的事件驱动机制，从而提供了一种简单而强大的协调服务。</p>
<blockquote>
<p>The ZooKeeper interface enables a high-performance service implementation. In addition to the wait-free property, ZooKeeper provides a per client guarantee of FIFO execution of requests and linearizability for all requests that change the ZooKeeper state. These design decisions enable the implementation of a high performance processing pipeline with read requests being satisfied by local servers. We show for the target workloads, 2:1 to 100:1 read to write ratio, that ZooKeeper can handle tens to hundreds of thousands of transactions per second. This performance allows ZooKeeper to be used extensively by client applications.</p>
</blockquote>
<p>ZK的接口具有高性能。除了无等待的特点之外，ZK还为每个客户端提供FIFO（先入先出）的请求执行顺序，并对所有修改ZK状态的请求保证线性一致性。这种设计使得读请求可以由client所连接的本地server直接响应，从而实现高性能的处理流水线。</p>
<p>我们给出了在2:1到100:1的读/写比率下，ZooKeeper 每秒可以处理数万到数十万个事务。这样的性能使得ZooKeeper 可以被客户端应用程序广泛地使用。</p>
<h2 id="zk的特点-from-6824-lecture"><a class="header" href="#zk的特点-from-6824-lecture">ZK的特点 (from 6.824 lecture)</a></h2>
<ul>
<li>提供了通用的API，方便人们构建分布式服务。</li>
</ul>
<ul>
<li>
<p>a simpler way to structure fault-tolerant services.</p>
<p>通过多副本来完成容错。</p>
</li>
<li>
<p>high-performance in a real-life service built on Raft-like replication.</p>
<p>性能是可以支持水平扩容的。即n倍的机器，可以有n倍的性能。（写请求是线性一致的，而读请求不是，牺牲了读请求的强一致性来换取性能。）</p>
</li>
</ul>
<p>有了ZK之后，我们可以基于它，构建分布式系统的配置文件、分布式的锁。</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="线性一致性"><a class="header" href="#线性一致性">线性一致性</a></h1>
<blockquote>
<p>824讲ZK之前， 先介绍了线性一致性。莫里斯讲的有点听不大懂，我又上网找了一下别的资料。</p>
<p>发现剑桥的一门关于分布式系统基础的公开课讲的特别好。
👉<a href="https://www.cl.cam.ac.uk/teaching/2122/ConcDisSys/">Concurrent and Distributed Systems</a><br />
里面没有实验，用了8个lecture讲述了分布式系统里各种各样的概念。可以和824搭配食用，很好的弥补了824在理论上的缺失。</p>
</blockquote>
<h2 id="引言-1"><a class="header" href="#引言-1">引言</a></h2>
<p>本节我们将讨论并发系统中一个特定的一致性模型：线性一致性。<br />
我们将定性地讨论它，如果你对它的细节感兴趣，可以去参考<a href="http://cs.brown.edu/%7Emph/HerlihyW90/p463-herlihy.pdf">关于线性一致性的严格讨论</a>。<br />
“强一致性”是一个模糊又抽象的概念，人们在讨论强一致性的时候，他们其实可能想说的是线性一致性。<br />
与强一致性不同，线性一致性的概念是具体且清晰的。</p>
<p>线性一致性的概念不仅出现在分布式系统中，同样也会出现在计算机体系架构的课程里。<br />
有趣的是，一个具有多个CPU核心的计算机（当今大多数的电脑和手机），默认情况下，访问内存的时候是不具有线性一致性的！
这是因为每个CPU核心都有自己的缓存，因此一个CPU核心更新的时候，不会立即被另一个CPU核心所感知到。</p>
<p>定义线性一致性的目的是为了保证：<strong>从clients角度，观测系统最新的状态时，clients不应该读到一个过期（stale、outdated）的结果。</strong></p>
<h2 id="read-after-write一致性"><a class="header" href="#read-after-write一致性"><em>read-after-write一致性</em></a></h2>
<p>首先我们先定义<em>read-after-write一致性</em>，这个概念定义<strong>单个client</strong>访问分布式系统的时候，读到的数据要在写之后，且不能读到过期的数据。</p>
<p>从client的角度来说，每个读/写操作都是需要耗费一定时间的。<br />
当application（server）收到一个操作请求的时候，我们说这个请求从这个时刻开始，当它结束该操作，并向client返回结果的时候，我们称这个请求在此时结束。<br />
在开始和结束的这段时间里，分布式的节点之间可能也会进行着各种各样的网络通信。</p>
<p>假设我们采用一种quorum制度，quorum制度规定，只有当client收到符合法定个数的response的时候，才能对这个操作的结果作出结论。<br />
那么，对于client来说，就从它向server发请求的时候算作一个操作的开始，从它收到法定个数的response的时候算作一个操作的结束。</p>
<p>我们结合下图来进一步说明我们要表达的意思，从client的角度来说，get/set操作是一个耗时的操作，每个竖着的长方形表示了一个操作所占用的时间。<br />
我们还在这个长方形里画出了一个操作的含义：<code>set(x, v)</code>表示更新键值对，将键为x的值更新为<code>v</code>，<code>get(x) -&gt; v</code>表示读键<code>x</code>的值，获得返回值<code>v</code>。</p>
<p align="center"><img width="60%" src="./assets/zk_raw.png" alt="zk_raw" /></p>
<p>对于这个上图来说，client尝试给ABC发送<code>(t1, set(x, v1))</code>的请求，B、C收到了请求，并更改了<code>x</code>的值，但是A没有收到请求，它状态机里的<code>x</code>仍然是old value，记作<code>v0</code>。<br />
当client再给ABC发送<code>get(x)</code>的请求的时候，假设C没有收到请求，A、B分别返回<code>(t0, v0)</code>、<code>(t1, v1)</code>，client通过对比value的时间戳，确定当前状态机里x的值应该是<code>v1</code>，如此则符合<em>read-after-write一致性</em>。<br />
但如果没有这样一个时间戳规则，client收到两个不同的response后，假设它随机地采纳了其中一个server的返回结果，那么这个系统就不符合<em>read-after-write一致性</em>。</p>
<p><em>read-after-write一致性</em>只定义了单个client访问系统的结果，线性一致性则将这种idea拓展到多个clients并发地去对系统读/写的场景。</p>
<h2 id="线性一致性-1"><a class="header" href="#线性一致性-1">线性一致性</a></h2>
<p><em>线性一致性</em>不关心系统的具体实现和内部的通信协议是什么。它在乎的是每个操作的开始和结束的时间，以及这些操作的返回结果是否符合规则。<br />
因此在后面的图里，我们只画client和server是如何交互的，从client的角度去观察系统的行为，而不关心servers之间内部的通信。</p>
<p>而且线性一致性关心的是一个操作的开始，是否发生在一个操作的结束之后。</p>
<p align="center"><img width="30%" src="./assets/zk_linear1.png" alt="zk_linear1" /></p>
<p>如上图所示，这两个get操作的开始都发生在set操作结束之后，由于set操作把<code>x</code>的值更新成了<code>v1</code>，因此我们期望get操作的返回结果是<code>v1</code>。<br />
或者说get操作的返回结果得是一个比<code>v1</code>更新的值（因为可能在这段real time里面还有其他的set操作），如果不是，我们则说这个系统是不符合线性一致性的。</p>
<p>我们再来看另一种情形，下图中，get和set操作是重叠的，即get的开始并没有发生在set的结束。</p>
<p align="center"><img width="30%" src="./assets/zk_linear2.png" alt="zk_linear2" /></p>
<p>这个时候，我们不知道对于系统来说，哪一个操作真正地先发生了。如果set先发生，则get的结果是<code>v1</code>；<br />
如果get先发生，返回的结果则是一个比<code>v1</code>更老的数据<code>v0</code>。<br />
无论如何，这两种结果我们都是接受的。</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="细究linearizability"><a class="header" href="#细究linearizability">细究linearizability</a></h1>
<h2 id="linearizability与happens-before"><a class="header" href="#linearizability与happens-before">linearizability与happens-before</a></h2>
<p>注意&quot;操作A的结束时间发生在操作B的开始时间之前&quot;，与&quot;A发生在B之前&quot;的这种&quot;happens-before&quot;概念不同。<br />
happen-before关系是一个定义消息发送和接收的术语（编译器/多线程里好像经常用这个词？）；</p>
<p><img src="./assets/linear_happens_before.png" alt="linear_happens_before.png" /></p>
<p>在happens-before的定义下，如果a发生在b之前，那么要么a和b在同一个任务执行流（进程/线程）里且a先执行；要么a和b分属process1和process2，且这两个process通过发送message通信的方式确定了a和b的先后顺序。</p>
<p>假设有两个操作，分别在不同的process里，且这两个process并没有相互通信，那么在并发的情况下，这两个操作a和b则有三种可能：</p>
<ol>
<li>a-&gt;b，a先执行，b后执行；</li>
<li>b-&gt;a，b先执行，a后执行；</li>
<li>a, b重叠（因为a和b操作都是耗时的）。</li>
</ol>
<h2 id="real-time"><a class="header" href="#real-time">real-time</a></h2>
<p>线性一致性定义了real time。<br />
在这个概念里，我们有一个想象中的、带有上帝视角的全局观察者，它可以立即观察到任何节点在任何时候的状态，也可以观察到任何节点的操作在什么时候开始，或者在什么时候结束。<br />
或者说系统中每个节点都有一个完美的用来同步的时钟，有了这个时钟之后，我们就可以精确地考察系统里每个节点的状态了。</p>
<p>不过现实世界中，不存在这样一个全局观察者或者理想的同步时钟，但是我们在分析系统的时候，我们可以假设它们存在。</p>
<h2 id="get操作之间的关系"><a class="header" href="#get操作之间的关系">get操作之间的关系</a></h2>
<p>线性一致性不仅关心set操作和get操作之间的关系，同时它还关心get操作和get操作之间的关系。</p>
<p align="center"><img width="60%" src="./assets/zk_linear3.png" alt="zk_linear3" /></p>
<p>如上图所示，我们以一个quorum系统来举例子。虽然这个系统采用了quorum制度，但是它并不能保证线性一致性。</p>
<ol>
<li>一开始client1将<code>x</code>设置为<code>v1</code>，假设A很快更新了它的备份状态机里<code>x</code>的值，然而B、C响应得很慢。</li>
<li>client2从A、B里读数据，发现A、B的返回结果不一致，并决定根据值的时间戳采纳A的结果<code>v1</code>。</li>
<li>client2的读操作结束之后，client3从B、C里去读数据，收到了<code>v0</code>的过期数据，由于它不知道A里面有最新的数据<code>v1</code>，于是决定采纳<code>v0</code>作为<code>x</code>的值。</li>
</ol>
<p>如此一来，client3观察到的数据就比client2观察到的数据要老。<br />
从real-time的角度来看，client3的读操作发生在client2的读操作之后。对于线性一致性的系统来说，这种行为是不允许的。</p>
<h2 id="abd算法"><a class="header" href="#abd算法"><a href="https://cs.neea.dev/distributed/abd/">ABD算法</a></a></h2>
<p>庆幸的是，使用quorum制度进行读写，也是有可能使get/set具有线性一致性的。</p>
<p>首先，为了简单起见，我们先假设只有一个指定的节点可以进行set操作，之后我们再抛弃这个假设。</p>
<p>考察下图，在这个模型中，client1给A、B、C发送set请求，并等待过半节点的返回（只有过半的节点都修改成功后，这个set操作才算是修改成功）。<br />
与上图一样，A收到了请求，B、C没有收到，A修改了自身状态的<code>x</code>值为<code>v1</code>。</p>
<p align="center"><img width="60%" src="./assets/zk_linear_abd.png" alt="zk_linear_abd" /></p>
<p>对于get操作来说，我们增加一些规则：</p>
<ol>
<li>client第一步必须先给所有的节点发送请求，并等待大多数节点的回复。</li>
<li>如果某些节点返回的数据比其他节点的要新（这个新是通过对比时间戳进行比较的），client会把最新的数据广播出去，如果收到大多数节点成功返回的response，这个操作就算作成功。<br />
我们管这种广播操作叫做read repair。在广播的过程中，如果一个节点通过对比之后发现自己状态机里的数据是过期的，则会更新自己的数据之后再返回成功。</li>
<li>只有当client确定最新的数据被存储大多数（或者说是法定节点数）的节点里面之后，get操作才会结束。</li>
</ol>
<p>这样的话，大多数节点要么返回经过read repair修复成功的response，要么一开始就返回最新的数据。</p>
<p>比如说上图，client2给A、B、C发送get请求，分别收到来自A、B的<code>(t1, v1)</code>、<code>(t0, v0)</code>，于是它认为最新的数据是<code>(t1, v1)</code>，它就会广播set(t1, v1)这个操作给系统中的节点，
于是B、C收到这个请求之后，就会更新自己的状态。当client2收到B、C成功返回的response之后，它就结束get操作，并认为此时<code>x</code>的值是<code>v1</code>。</p>
<blockquote>
<p>这里有一个diff，当client进行get操作，在第一阶段收到大多数server的response的之后。<br />
剑桥的课件里的说法是给掉队的server发送set操作，而有的资料是说将set操作广播给系统里的所有节点。<br />
欢迎你来指出这两个说法哪一个是正确的。</p>
</blockquote>
<p>这个方法叫ABD算法（1995年提出），作者是Attiya, Bar-Noy, and Dolev。</p>
<p>它保证了系统可以进行线性一致地读和写。<br />
因为每当get或者set请求结束的时候，我们都能够确定读/写的数据已经被写入到了大多数节点里的状态机里。</p>
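<p>ABD的读路径（读→选最新→read repair→返回）可以用如下Go草图示意。这是单机内存里的极简模拟，不涉及真实网络，且为了简化直接读写了全部副本而非仅过半副本；<code>Replica</code>、<code>abdGet</code>等命名均为假设：</p>

```go
package main

import "sort"

// 每个副本存一个带时间戳的值；abdGet 收集各副本的回复，
// 取时间戳最大的值，把它写回（read repair）之后才返回。
type TSVal struct {
	TS  int64
	Val string
}

type Replica struct{ cur TSVal }

func (r *Replica) Read() TSVal { return r.cur }

func (r *Replica) Write(v TSVal) {
	if v.TS > r.cur.TS { // 只接受时间戳更新的数据
		r.cur = v
	}
}

// abdGet 向副本读，选出最新值并做 read repair，再返回该值。
func abdGet(replicas []*Replica) string {
	resp := make([]TSVal, len(replicas))
	for i, r := range replicas {
		resp[i] = r.Read()
	}
	// 按时间戳从大到小排序，取最新的值
	sort.Slice(resp, func(i, j int) bool { return resp[i].TS > resp[j].TS })
	latest := resp[0]
	for _, r := range replicas { // read repair：把最新值写回各副本
		r.Write(latest)
	}
	return latest.Val
}

func main() {}
```

<p>真实的ABD只需等待过半副本的回复即可进入下一阶段，这里为了代码简短没有模拟这一点。</p>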
<h2 id="将abd算法进行推广"><a class="header" href="#将abd算法进行推广">将ABD算法进行推广</a></h2>
<p>刚才我们假设只有一个节点可以执行set操作，现在我们来尝试将ABD算法推广到多个节点都能够执行set操作的情形。</p>
<p>我们需要有一个能够反映出real-time的时间戳，来确保不同操作之间的顺序。<br />
令第一个操作<code>set(x, v1)</code>携带的时间戳是<code>t1</code>；第二个操作<code>set(x, v2)</code>携带的时间戳是<code>t2</code>。<br />
如果第一个操作结束之后，第二个操作才开始，那么我们可以肯定，<code>t1&lt;t2</code>。</p>
<blockquote>
<p>然而，不同的client可能会并发地执行set操作，如此一来可能会出现不同的操作具有相同的时间戳的情况。<br />
为了区分这种情况，我们可以给每个client一个唯一Id，并且结合clientId去比较时间戳。</p>
<p>当client进行get操作时，假设来自不同server的response里，时间戳相同但是值不同：<br />
可以安排一个clientId的优先级对比规则，通过对比优先级，client来确定应该采纳哪一个response（Lamport的时间戳也是如此定义的）。</p>
</blockquote>
<p>这个算法确保了在任意节点都可以执行set/get操作的情况下，系统仍能保证线性一致性。</p>
<h2 id="遗留问题"><a class="header" href="#遗留问题">遗留问题</a></h2>
<p>剑桥的课件里还提到了线性一致的CAS操作，里面还提到了偏序和全序。</p>
<ol>
<li>如何保证线性一致的CAS操作？</li>
<li>什么是<a href="https://eli.thegreenplace.net/2018/partial-and-total-orders/">偏序和全序</a>？在分布式系统里有什么应用？</li>
<li><a href="https://mit-public-courses-cn-translatio.gitbook.io/mit6-824/lecture-07-raft2/7.6-qiang-yi-zhi-linearizability">824的lecture</a>里，莫里斯教授列举了很多种情形，观察里面的例子，你是否能得出和莫里斯一样的结论？</li>
</ol>
<div style="break-before: page; page-break-before: always;"></div><h1 id="引言-2"><a class="header" href="#引言-2">引言</a></h1>
<h2 id="introdcution"><a class="header" href="#introdcution">Introduction</a></h2>
<p>大型的分布式系统应用往往需要很多各种各样的协调操作。<br />
比如分布式系统的配置，就是一个最基础的协调问题。<br />
有的配置方式很简单，就是一个参数列表；有的系统里面，配置方式则更复杂，比如它们需要支持动态参数配置。</p>
<p>集群membership和leader选举同样是分布式系统里常见的协调问题：一个分布式节点需要搞清楚其他节点是否还在线，或者说它还是否是集群里的话事人。</p>
<p>还有分布式锁，可以支持对资源排他性的访问，作为一个强有力的协调原语，在分布式系统中也十分重要。</p>
<p>在设计协调服务的时候，我们没有给Zookeeper设计特定功能的服务原语，而是简单地暴露了一些最基础的API，供应用开发者们去实现他们自己的服务原语。<br />
这样一来，我们只需要设计一个coordination kernel即可，不需要为了提供新的原语而修改内核。<br />
开发者们基于这个内核去构建他们想要的协调功能。</p>
<p>为什么在设计API的时候，我们抛弃了锁这种原语呢？因为这种阻塞性的原语对于协调服务来说，会导致快的client受到慢的client的影响。<br />
Zookeeper提供的API都是无等待(wait-free)的。<br />
这些API支持对数据进行查询和修改，设计风格有点像是一种对文件系统的操作。<br />
如果仅从API的函数签名来看的话，它更像是一个不带锁功能的Chubby～</p>
<p>不过对于协调服务来说，光有wait-free还不够。
我们的系统还可以保证操作之间具有顺序性。<br />
具体地说，我们通过保证所有操作的FIFO客户端顺序（FIFO client ordering）与线性一致的写（linearizable writes），来确保系统的高效性和正确性。</p>
<p>一句话概括：ZK的优点是高性能和高可用；同时ZK可以保证对单个client来说，执行的操作具有FIFO顺序。</p>
<p>为了满足写操作的线性一致性，ZK采用了Zab协议（类似raft）。</p>
<p>额外地，如果在客户端一侧把数据缓存起来，可以有效地提高读数据的性能。<br />
比如在配置中心里，如果client知道谁是它的leader（注意这里说的不是ZK的leader），它可以把这个数据缓存下来，而不需要每次都去查询ZK系统。<br />
ZK提供了一种watch机制，一旦缓存数据更新，ZK节点就会通知client，如此一来就节省了在client侧去管理缓存的成本。</p>
<p>在Chubby系统里，则使用租约(lease)来做这个事情，但是租约会让慢速的client对系统性能造成影响，所以ZK不予采用。</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="zookeeper-service"><a class="header" href="#zookeeper-service">Zookeeper Service</a></h1>
<blockquote>
<p>我们将提供一个客户端的library。</p>
<p>通过这个库，client可以和ZK集群建立长连接，从而调用ZK的API。</p>
</blockquote>
<p>以下是论文里的几个术语。</p>
<ul>
<li>server：提供ZK服务的机器。</li>
<li>client：使用ZK服务的user。</li>
<li>znode：in-memory数据，一个树形结构，提供了有层级的命名空间。</li>
<li>session：client与ZK server建立的长连接。</li>
</ul>
<h2 id="znode"><a class="header" href="#znode">znode</a></h2>
<p>ZK给客户端提供了一个数据结构：znode。
znode是一个有层级的命名空间。client通过ZK的API对znode进行操作。</p>
<p>这个有层级的命名空间有点像是文件系统。用这种数据结构来管理应用服务的元信息还是很有效的。</p>
<p>我们使用和unix文件路径一样的记号来表示znode。<br />
比如说，我们使用<code>/A/B/C</code>表示znode C，znode C的父节点是znode B，znode B的父节点是znode A。</p>
<p>znode有两个type：</p>
<ol>
<li>Regular：客户端可以对其创建和删除。</li>
<li>Ephemeral：临时节点，客户端创建这个节点，如果客户端与系统的会话断开，则这个节点会被删除。</li>
</ol>
<p>除了临时状态的znode以外，剩下所有的znode都可以有子节点。</p>
<p>此外，当创建一个新的节点时，客户端可以设置一个sequential flag。<br />
设置了这个flag的znode会在文件名后面加上一个数字，如果多个客户端多次创建这个znode，这个数字自动递增，文件名不会重合。<br />
如果在父节点p下新创建了一个znode n，那么n的序列号一定不会小于任何其他曾在p下创建过的sequential znode的序列号。</p>
<h2 id="监听机制"><a class="header" href="#监听机制">监听机制</a></h2>
<p>ZK还实现了监听机制。当节点发生了改变，客户端将收到相应的通知。这样一来，客户端就不需要轮询了。<br />
比如说，当客户端进行一个读操作，这个读操作里有一个watch flag，那么ZK返回response之后，同时会保证当客户端监听的值发生改变之后，客户端会收到相应的通知。
这个监听是一次性的，当ZK发送通知/会话断开之后，这个监听机制就无效了。<br />
监听只是确保客户端能够感知到值的变化，但是通知信息里不会包含被改变的值。</p>
<blockquote>
<p>举个例子，假设客户端发送请求<code>get(&quot;/foo&quot;, true)</code>，然后系统里<code>“/foo”</code>的值发生了两次改变，那么客户端只会收到一次通知。</p>
<p>如果客户端和ZK的节点会话断开，则watch机制的回调操作也会被触发，如此，客户端就知道它监听的事件出问题了。</p>
</blockquote>
<h2 id="数据模型"><a class="header" href="#数据模型">数据模型</a></h2>
<p><strong>数据模型</strong>：ZK的数据模型就是一个类似文件系统的API，或者带有层次的键值表。</p>
<p>设计znode不是为了搞一个通用的文件存储服务，ZK不适合保存通用数据，而适合保存配置或一些元信息。</p>
<p><img src="./assets/zk_f1.png" alt="zk_f1" /></p>
<p>以图1为例，里面有两个子树，一个子树是<code>/app1</code>，表示应用1；另一个是<code>/app2</code>，表示应用2。</p>
<p>应用1的子树采用了一个简单的group membership protocol：每个客户端的进程\( p_i \)创建一个znode节点\( p_i \)，只要进程还在运行，这个节点就会一直存在。
我们将在论文里的2.4节再来阐述这一点。</p>
<p>ZK适用于分布式集群中存储一些元信息。比如说mapreduce里的master信息，或者互联网应用里的服务注册与发现。</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="zookeeper-api"><a class="header" href="#zookeeper-api">Zookeeper API</a></h1>
<blockquote>
<p><strong>会话</strong>: 
客户端可以和ZK建立会话。</p>
<p>如果一段时间内ZK节点没有收到client的任何消息，ZK会认为连接已经断开。客户端也可以显式地关闭会话。</p>
</blockquote>
<p>ZK给客户端提供了如下API。</p>
<ol>
<li><strong>create(path, data, flags)</strong>： 创建一个路径为<code>path</code>的znode节点，将<code>data[]</code>存储在这个节点的数据里面，并将znode的名字返回。<br />
客户端可以通过<code>flags</code>来选择znode的类型：regular还是ephemeral，并且设置znode的<code>sequential flag</code>。</li>
<li><strong>delete(path, version)</strong>：删除特定version，路径为<code>path</code>的znode节点。</li>
<li><strong>exists(path, watch)</strong>：判断路径为<code>path</code>的znode节点是否存在，并返回结果。
watch标志意味着客户端可以监听该节点的变动。</li>
<li><strong>getData(path, watch)</strong>：返回<code>path</code>的数据和元信息，比如version相关的元信息等等。<br />
如果watch为真，则监听<code>path</code>的变动。<br />
不过如果路径为<code>path</code>的znode节点不存在，则不设置监听。</li>
<li><strong>setData(path, data, version)</strong>：如果当前版本号的值是<code>version</code>，则将<code>path</code>对应的值写为<code>data[]</code>。</li>
<li><strong>sync(path)</strong>：等待sync操作开始时所有尚未完成的更新，传播到当前client所连接的server节点。</li>
</ol>
<blockquote>
<p>观察<strong>setData</strong>可以得知，每次的更新都是对给定版本的更新。<br />
有条件的更新可以保证：如果实际版本号与期望的版本号不一致，更新操作就会失败。（testAndSet）<br />
如果版本号为-1，不会检查版本号。</p>
</blockquote>
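<p>按照上面对版本号的描述，可以用一个极简的内存KV演示setData的testAndSet语义（VersionedStore是假设的示意实现，并非ZK的真实代码）：</p>

```python
class VersionedStore:
    """带版本号的KV，模拟setData的testAndSet语义（仅为示意）。"""
    def __init__(self):
        self.data = {}                  # path -> (value, version)

    def set_data(self, path, value, version):
        _, cur = self.data.get(path, (None, 0))
        # 版本号为-1时跳过检查；否则版本不匹配则更新失败
        if version != -1 and version != cur:
            raise ValueError(f"bad version: expected {cur}, got {version}")
        self.data[path] = (value, cur + 1)
        return cur + 1                  # 返回新版本号

s = VersionedStore()
assert s.set_data("/cfg", "a", -1) == 1   # -1：不检查版本号
assert s.set_data("/cfg", "b", 1) == 2    # 版本匹配，更新成功
```

若两个客户端基于同一个旧版本并发更新，只有先到的那个会成功，后到的会因版本不匹配而失败，这就是有条件更新的作用。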
<div style="break-before: page; page-break-before: always;"></div><h1 id="zookeeper的性质"><a class="header" href="#zookeeper的性质">Zookeeper的性质</a></h1>
<blockquote>
<p>ZK可以保证以下两点：</p>
<ul>
<li><strong>线性一致性地写</strong>：所有的写操作都是符合线性一致性的。</li>
<li><strong>先入先出的客户端请求顺序</strong>：对于同一个客户端来说，它发送的所有请求都是符合FIFO顺序的。</li>
</ul>
</blockquote>
<p>这里我们所说的线性一致性与Herlihy提出的不同，我们称之为A-linearizability（异步-线性一致性）。</p>
<p>在原始的线性一致性定义里，一个客户端同一时刻只能有一个未完成（outstanding）的请求。而在我们的定义里，一个客户端可以同时有多个未完成的请求。因此我们需要保证来自同一个客户端的请求按先入先出的顺序执行。</p>
<p>值得注意的是，一个满足异步线性一致性的系统，也必然符合线性一致性。</p>
<p>为了提高读操作的性能，ZK允许客户端从follower节点读数据，因此ZK只保证了写操作的线性一致性。这也使得ZK的读操作支持水平扩展。</p>
<h2 id="一个分布式场景"><a class="header" href="#一个分布式场景">一个分布式场景</a></h2>
<p>为了更好的说明以上两点，让我们来看一个场景。<br />
一个分布式系统需要选出leader去指挥worker运行任务。<br />
当leader拿到系统控制权，它必须更新一系列的配置参数，当更新完成后，它需要通知其他follower配置更新完成。我们有如下两点要求：</p>
<ol>
<li>新leader在更新配置的时候，follower不能看到更新过程中的中间结果。</li>
<li>如果新leader在更新配置的时候宕机，我们不希望follower使用这个更新了一半的配置。</li>
</ol>
<p>假如我们使用一个分布式的锁，类似Chubby提供的那种，可以满足第一点要求，但是不能满足第二个。</p>
<h2 id="ready"><a class="header" href="#ready">ready</a></h2>
<p>有了ZK之后，新leader可以创建一个名为ready的znode；只有当这个ready的znode存在时，其他节点才会使用这个配置。<br />
当新leader决定更新配置的时候，它先把ready删除，把配置更新完之后，再重新创建ready。</p>
<p>这些更新配置的操作可以pipeline执行，且都是异步的。比如说，假设一个更新操作耗时2ms，如果同步地逐个发送，5000个这样的更新就会耗时10s；而异步pipeline可以让请求在途中重叠，大幅缩短总耗时。<br />
因为请求都是异步发送的，ZK处理完一个请求之后，会立即从FIFO管道里拿出下一个请求进行处理。</p>
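<p>上面10s的数字可以用一个简单的算术核对（纯属示意，数值均取自上文的假设）：</p>

```python
n_updates = 5000
rtt_ms = 2                      # 假设：每次更新操作往返耗时2ms
sync_total_ms = n_updates * rtt_ms
assert sync_total_ms == 10_000  # 同步逐个发送：10秒
# 若异步pipeline发送，请求在途中重叠，总耗时约为一个往返
# 加上服务端顺序处理这些请求的时间，远小于10秒
```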
<p>如果leader在创建ready前宕机，其他节点就不会使用这个更新了一半的配置。</p>
<p>写成伪代码应该是如下的形式：</p>
<pre><code>leader：
    delete ready_znode
    updates ...
    create ready_znode

follower:
    while ready znode exists:
        use this config
</code></pre>
<h2 id="waitnotify"><a class="header" href="#waitnotify">wait/notify</a></h2>
<p>上述方案仍然存在一个问题：假设一个follower看到ready znode存在，于是开始读取配置；随后leader删除了ready并开始修改配置。此时这个follower可能读到新旧混杂、更新到一半的配置。</p>
<p>解决这个问题的方法是使用通知机制：客户端监听一个znode的修改，如果znode的状态发生了改变，这个客户端会收到通知。<br />
我们可以把代码修改成如下的逻辑：</p>
<pre><code>leader：
    delete ready_znode
    updates ...
    create ready_znode

follower:
    &lt;-znode change:
        use this config
</code></pre>
<h2 id="sync"><a class="header" href="#sync">sync</a></h2>
<p>另一个问题是假设客户端除了和ZK通信以外，还存在别的管道。比如客户端A和B共享了一个第三方管道。
A更改了配置，并绕过ZK通知了B；<br />
B理应看到配置的更新，但是假设B连接的ZK节点没有最新的数据，它就看不到这个更新的配置。
为了解决这个问题，ZK提供了sync。</p>
<pre><code>while zk does not see sync:
    wait
read // this read is slow
</code></pre>
<p>sync的本质是个写请求。如果sync后面跟着一个read，那么客户端就是在告诉ZK：在你从log里看到这个sync之前，不要返回读请求的结果。</p>
<p>这样的话可以保证，读请求可以看到sync对应的状态。</p>
<p>sync是个代价很高的操作。</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="基于zookeeper实现锁"><a class="header" href="#基于zookeeper实现锁">基于Zookeeper实现锁</a></h1>
<h2 id="简单的锁"><a class="header" href="#简单的锁">简单的锁</a></h2>
<pre><code>for {
    try to create &quot;lock file&quot;, set ephemeral=TRUE:
        return
    except:
        continue
}
</code></pre>
<p>伪代码如上所示。<br />
简单锁实现使用了“锁文件”。用一个 znode 表示锁。<br />
客户端尝试创建一个带有临时标识的节点来获取锁。<br />
如果创建成功，客户端可以持有该锁。如果创建失败，客户端设置 watch 标识读取 znode，如果当前使用锁的领导者终止，会通知客户端。<br />
客户端在终止或显式删除 znode 来释放锁，其它等待的客户端重新尝试获取锁。</p>
<p>虽然简单锁协议可以工作，但还有一些问题。首先是羊群效应（herd effect）：如果很多客户端在等待锁，每次锁释放时它们都会被唤醒并重新竞争，而最终只有一个等待的客户端能获得锁。下面的锁原语克服了这个问题。</p>
<h2 id="无羊群效应的锁"><a class="header" href="#无羊群效应的锁">无羊群效应的锁</a></h2>
<p>将所有客户端按请求顺序排列，依次获得锁。希望获得锁的客户端做如下的操作：</p>
<pre><code>create &quot;lock file&quot;, set ephemeral=TRUE, sequential=TRUE
for {
    if now the &quot;lock file&quot; number is the lowest:
        return 
    else:
        watch the znode with the next lower number, wait
}
</code></pre>
<p>每个客户端创建锁文件的时候都会得到一个编号。<br />
这个编号是递增的，只有当前client的锁文件编号在所有等待者中最小，才可以认为该client获取到了这把锁；否则它只监听编号比自己小的前一个节点，从而避免羊群效应。</p>
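<p>无羊群效应的关键在于：每个客户端只监听编号紧挨在自己前面的那个节点。下面的函数根据序列号判断锁的状态（示意代码，lock_status是假设的辅助函数，节点名沿用上文的序列号命名约定）：</p>

```python
def lock_status(my_node, all_nodes):
    """返回 ('held', None)，或 ('waiting', 需要watch的前驱节点)。"""
    def seq(name):
        # 节点名形如 "lock-0000000007"，取末尾的序列号
        return int(name.rsplit("-", 1)[1])

    ordered = sorted(all_nodes, key=seq)
    i = ordered.index(my_node)
    if i == 0:
        return ("held", None)           # 编号最小，获得锁
    return ("waiting", ordered[i - 1])  # 只watch前一个节点，避免羊群效应

nodes = ["lock-0000000003", "lock-0000000001", "lock-0000000002"]
assert lock_status("lock-0000000001", nodes) == ("held", None)
assert lock_status("lock-0000000003", nodes) == ("waiting", "lock-0000000002")
```

当前驱节点被删除（持有者释放锁或会话断开）时，只有紧随其后的那一个客户端会被唤醒重新检查，其余客户端不受影响。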
<blockquote>
<p><strong>额外的话</strong>：<br />
不得不说，Zookeeper的写作风格可太差了。和raft放在一起读，比较起来可真是天差地别。<br />
raft对于关键问题一点也不拖泥带水，把该讲的细节全部讲清楚。看完了之后就知道该怎么去实现。 </p>
<p>Zookeeper则一句话翻来覆去地来回说。至于实现细节，你去猜呗。</p>
<p>不过我也大致猜到了Zookeeper的复现方案。并且初步写了一个开发文档。<br />
打算有空的时候，基于824的开发框架，去复现Zookeeper。<br />
如果你看完这个文档对它感兴趣，欢迎来联系我，一起讨论复现Zookeeper的更多细节（实现方案、测试用例、部署、发布、文档等）。</p>
<p><a href="https://tarplkpqsm.feishu.cn/docs/doccnRgBThS2Br90CYZGHamCfKf">实现一个轻量级的Zookeeper</a></p>
</blockquote>
<div style="break-before: page; page-break-before: always;"></div><h1 id="craq"><a class="header" href="#craq">CRAQ</a></h1>
<div style="break-before: page; page-break-before: always;"></div><h1 id="time-clocks-and-the-ordering-of-events-in-a-distributed-system"><a class="header" href="#time-clocks-and-the-ordering-of-events-in-a-distributed-system">Time, Clocks, and the Ordering of Events in a Distributed System</a></h1>
<p><a href="https://lamport.azurewebsites.net/pubs/time-clocks.pdf">论文链接</a></p>
<blockquote>
<p>本文是分布式系统的开山之作。Lamport发表于1978年。</p>
</blockquote>
<blockquote>
<p><strong>摘要</strong></p>
<p>The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. </p>
<p>A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. </p>
<p>The use of the total ordering is illustrated with a method for solving synchronization problems. </p>
<p>The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become.</p>
</blockquote>
<p>本文对在分布式系统中，某事件先于另一事件发生的前后顺序关系，happening before，这一概念进行了详细的探讨。<br />
同时给出了一种分布式的算法，该算法描述了一个可以给系统进行同步的逻辑时钟。有了这个逻辑时钟后，就可以给分布式系统里发生的事件维护一个全序关系。<br />
有了全序关系后，我们给出了一个方法，可以使用全序关系来解决同步问题。<br />
紧接着，我们来看如何使用该算法对物理时钟进行同步，并且推导使用该算法后，时钟不一致的上限是什么。</p>
<h2 id="一些补充资料"><a class="header" href="#一些补充资料">一些补充资料</a></h2>
<h3 id="1-lamport关于这篇论文的访谈"><a class="header" href="#1-lamport关于这篇论文的访谈">1. Lamport关于这篇论文的访谈</a></h3>
<p><a href="https://www.youtube.com/watch?v=nfRouGH0oMg">youtube链接</a><br />
<a href="https://www.microsoft.com/en-us/research/publication/time-clocks-ordering-events-distributed-system/">文字版</a><br />
在访谈里，Lamport极其凡尔赛，让我们来看看他都说了什么：</p>
<ul>
<li>我不明白为什么只有搞计算机的人对这篇论文如此看重，明明这篇论文的普适性极强，可以应用在任何领域里。</li>
<li>别人都说这是关于时钟的、关于算法的论文，我觉得这是关于状态机的，当我把这个观点告诉其他人的时候，其他人都说没看出来，我只好回去重新再读一遍我的论文，看看是我疯了还是别人疯了。</li>
<li>为什么我能发表出这篇论文呢，这是因为我对分布式系统有一种直觉上的理解，但是我发现其他搞分布式系统的人都没有这种insight。</li>
<li>很多搞分布式系统的人喜欢谈论互斥、并发，他们会把分布式系统看作是计算机问题，或者是数学问题，而我把分布式系统看作是一个物理问题，所谓互斥，不就是让两个人不要在同一时间内干同一件事儿嘛。</li>
</ul>
<h3 id="2-一些关于解读这篇论文的博客"><a class="header" href="#2-一些关于解读这篇论文的博客">2. 一些关于解读这篇论文的博客</a></h3>
<ul>
<li><a href="https://mwhittaker.github.io/blog/lamports_logical_clocks/">Lamport's Logical Clocks</a></li>
<li><a href="https://blog.xiaohansong.com/lamport-logic-clock.html">分布式系统：Lamport 逻辑时钟</a></li>
<li><a href="https://www.cnblogs.com/hapjin/p/4790747.html">Lamport Logical Clock 学习</a></li>
<li><a href="https://en.wikipedia.org/wiki/Lamport_timestamp">维基百科-Lamport Timestamp</a></li>
<li><a href="https://martinfowler.com/articles/patterns-of-distributed-systems/lamport-clock.html">Lamport Clock</a></li>
<li><a href="https://internal-api-drive-stream.feishu.cn/space/api/box/stream/download/all/boxcnWqthlMLrKcWVvqrnUCZZWh/?mount_node_token=doxcnieYImeywCOI2gJEYZFVwre&amp;mount_point=docx_file">CSE452-Lecture9 Lamport Clock</a></li>
<li><a href="https://mp.weixin.qq.com/s/FZnJLPeTh-bV0amLO5CnoQ">分布式领域最重要的一篇论文，到底讲了什么？</a><br />
这篇文章讲的极好，我直播分享这篇论文的时候，发现从偏序-&gt;逻辑时钟-&gt;全序-&gt;异常行为-&gt;物理时钟，这之间的逻辑跳跃很难精确地描述清楚，而这篇文章则把我们为什么要引入逻辑时钟，进而为什么需要全序关系，讲的很明白。</li>
</ul>
<div style="break-before: page; page-break-before: always;"></div><h1 id="引言-3"><a class="header" href="#引言-3">引言</a></h1>
<p>我们日常思维里就有关于时间的概念。时间是由一个更基本的概念——事件发生的顺序得出的推论。<br />
比如我们说一件事情发生在3点15分，那么这个事件就发生在我们读时钟的值为3:15至我们读时钟的值为3:16之间。<br />
在系统方面，这种关于时间顺序的概念十分常见。比如说，在预定机票的系统里，我们要求：如果一个预定机票的请求被确认，那么它必须发生在航班满员之前。<br />
<strong>然而，当谈论到分布式系统里的事件的时候，我们必须重新审视有关时间的概念。</strong></p>
<p>一个分布式系统包含了若干个空间上独立分布的进程，它们彼此之间采用消息传递的方式进行通信。<br />
比如ARPAnet，它是一个计算机网络，网络中的计算机可以彼此互联，这就是一个分布式系统。<br />
单台计算机也可以看作是一个分布式系统，如果我们把CPU、内存、I/O看作是若干独立的进程（process）的话。<br />
如果系统中传递消息的时延与进程中不同事件发生的时间间隔相比，不可以忽略不计，那么我们就说这个系统是具有分布式特性的。</p>
<p>我们主要还是考察空间中独立分布的多台计算机组成的系统。不过，本文所讨论的概念同样也可以应用到更加广义的分布式系统中。<br />
特别地，在单机节点上的多进程系统，里面的很多问题就和分布式系统里的很像。因为如果不作额外措施的话，这两个系统里事件发生的顺序都具有不可预测性。</p>
<p>在分布式系统中，有时候我们无法明确判断出两个事件发生的先后顺序。<br />
“happened before”关系只是说明了系统中事件的一个偏序关系。<br />
我们发现人们并不完全理解上述事实，也不清楚上述事实的隐含意义是什么，许多分布式系统的问题也因此产生。</p>
<p>在本文中，我们将讨论由“happened before”关系定义的偏序概念，并给出一个分布式算法，该算法对偏序关系进行拓展，采取该算法，可以给出系统中全部事件的、具有一致性的全序关系。<br />
这个算法可以用来实现分布式系统。我们阐述了这个算法的用途，并且用它来解决分布式系统中的同步问题。<br />
根据该算法所获得的事件顺序可能会不一致，这种不一致可能会被用户感知，进而发生一些奇怪的、意想不到的事情。这个问题可以通过引入一个物理时钟进行规避。<br />
我们描述了一个简单的同步这些物理时钟的方法，并且给出理论推导，论证基于这种方法，这些时钟失去同步的上限是什么。</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="偏序关系"><a class="header" href="#偏序关系">偏序关系</a></h1>
<p>如果事件\(a\)发生的时间比事件\(b\)发生的时间要早，人们称事件\(a\)发生在事件\(b\)之前（\(a\) happened before an event \(b\)）。他们使用这个定义的时候，一般说的都是物理意义下的时间。<br />
然而，对于系统来说，只有当我们能够观测到系统内部事件的时候，才能正确地给出一个定义。<br />
如果我们使用物理意义下的时间来定义happened before关系，那么系统就必须具有若干物理上的时钟。但即便系统有了若干物理时钟，但这些时钟依然无法完美地精确到现实世界中的每一个时刻。<br />
<strong>因此，我们想在不使用物理时钟的情况下，定义happened before关系。</strong></p>
<p>首先，让我们先把系统的定义给捋清楚。我们假设一个系统由若干进程组成。每个进程都包含了一系列的事件。
基于上述定义，一个计算机的一小段代码的执行，就可以看作是一个事件，或者说单个机器指令的执行也可以看作是一个事件。<br />
我们假设一个进程里有一个事件序列，如果在这个事件序列中，\(a\)出现在\(b\)之前，那我们就称\(a\) happens before \(b\)。<br />
换句话说，一个单进程可以被定义成一个具有全序关系的事件集合。<br />
大多数Computer Science的语境下，进程都是这么被定义的。<br />
如果再把一个进程区分成若干个子进程，每个子进程再有各自的事件序列就太trivial了，我们就不干这种事情了。</p>
<p>我们把发送和接收消息看作是进程里的一个事件，进而将&quot;happened before&quot;关系用箭头\(\rightarrow\)表示。</p>
<p>定义：<br />
\(\rightarrow\) 描述了系统中事件集合中最小的相对关系，只有满足如下三个条件的时候，我们才使用这个符号：</p>
<ol>
<li>如果\(a\)和\(b\)属于同一进程里的事件，并且\(a\)在\(b\)之前，则\(a\rightarrow b\)。</li>
<li>一个进程发送消息的事件为\(a\)，另一个进程接收该消息的事件为\(b\)，则\(a\rightarrow b\)。</li>
<li>如果\(a\rightarrow b\)且\(b\rightarrow c\)，那么\(a\rightarrow c\)。</li>
</ol>
<p>如果事件\(a\)和\(b\)，既没有\(a\rightarrow b\)的关系，记做\(a\nrightarrow b\)；也没有\(b\rightarrow a\)的关系，记做\(b\nrightarrow a\)，那么我们说\(a\)和b是并发的。</p>
<p>我们认为\(a\)和它本身不存在happened before关系，即\(a\nrightarrow a\)。也就是说\(\rightarrow\)表明了系统里事件集合里的一种偏序关系，且这种关系不具有自反性（就是不指向它自身）。</p>
<p align="center"><img src="./assets/lamport_clock_f1.png" width="70%"></p>
<p>通过观察图1，我们来进一步考察这个定义。<br />
在图1中，横向表示不同的空间中，分布着不同的进程；纵向表示时间，时间的流向是朝上的。时间轴上的点表示事件，每一个进程都具有一个时间轴，波浪线表示消息传递。我们称这样的图为程序的“时空图”。</p>
<p>如果在图中有\(a\rightarrow b\)，那么意味着从事件\(a\)到事件\(b\)，必存在一条沿着\(a\)顺着进程的时间轴加上消息传递的波浪线走向\(b\)的一条路线。比如说\(p1\rightarrow r4\)。</p>
<p>另一种看待这种偏序关系的方式是，\(a\rightarrow b\)表示了事件\(a\)和事件\(b\)<strong>可能</strong>存在了某种因果关系，即有可能事件\(a\)导致了事件\(b\)的发生。<br />
如果事件\(a\)和事件\(b\)是并发的，那么就说明这两个事件没有因果关系。<br />
举个例子，图1中，事件\(p3\)和事件\(q3\)就是并发的。<br />
即使从物理时钟的角度来看，\(q3\)发生的时间点比\(p3\)要早，但对于进程\(P\)来说，直到它在\(p4\)事件里收到了进程\(Q\)的消息，在此之前，它都不知道\(Q\)执行过\(q3\)事件。（在\(p4\)事件之前，进程\(P\)顶多也就知道\(Q\)打算执行\(q3\)事件了。）</p>
<p>如果读者了解过狭义相对论里的时空公式，就会对上面的定义很熟悉了。</p>
<p>This definition will appear quite natural to the reader familiar with the invariant space-time formulation of special relativity, 
as described for example in [1] or the first chapter of [2].<br />
In relativity, the ordering of events is defined in terms of messages that could be sent.<br />
However, we have taken the more pragmatic approach of only considering messages that are actually sent.<br />
We should be able to determine if a system performed correctly by knowing only those events which did occur, without knowing which events could have occurred.</p>
<ul>
<li>[1] Schwartz, J.T. Relativity in lllustrations. New York U. Press, New York, 1962.</li>
<li>[2] Taylor, E.F., and Wheeler, J.A. Space-Time Physics, W.H. Freeman, San Francisco, 1966.</li>
</ul>
<div style="break-before: page; page-break-before: always;"></div><h1 id="逻辑时钟"><a class="header" href="#逻辑时钟">逻辑时钟</a></h1>
<p>接下来我们给系统引入时钟的概念。<br />
首先，从一个抽象的角度来看，时钟只是一种给事件分配一个数字的方式，这个数字是该事件发生的时间点。<br />
准确地，我们给每个进程\(P_i \)定义一个时钟\(C_i\)，这个时钟给进程中的任意一个事件\(a\)分配一个值\(C_i\langle a\rangle \)。<br />
整个系统的所有时钟用\(C \)来表示，它可以给任意一个事件\(b\)分配数字\(C\langle b\rangle \)。 若\(b\)是属于进程\(P_j \)的一个事件，则有\(C\langle b\rangle =C_j\langle b\rangle \)。<br />
目前来说，我们并不假设\(C_i\langle a\rangle \)的值与物理时间有关，所以我们把时钟\(C_i \)看作是一个逻辑上的时钟，而非物理上的时钟。<br />
这种逻辑时钟可以只使用计数器实现，不需要计时器的参与。</p>
<p>现在考虑这种定义是否能够让系统的逻辑时钟具有正确性。<br />
我们不能把该定义的正确性依托于物理时间，因为这需要时钟与物理时间保持同步。<br />
因此我们的定义必须依赖于事件发生的先后顺序。<br />
一个较强但合理的条件是：若事件\(a\)先于事件\(b\)发生，那么\(a\)被分配的时间一定早于\(b\)。下文中，我们给出这个条件的严格定义。</p>
<blockquote>
<p><strong>Clock Condition</strong>:<br />
对于任意两个事件\(a\),\(b\),
若\(a\rightarrow b \)，则有\(C\langle a\rangle &lt; C\langle b\rangle \).</p>
</blockquote>
<p align="center"><img src="./assets/lamport_clock_f1.png" width="70%"></p>
<p>注意，该条件的逆命题（若\(C\langle a\rangle &lt; C\langle b\rangle \)，则\(a\rightarrow b\)）并不成立。<br />
因为若要求逆命题成立，则意味着任意两个并发的事件都必须发生在同一时间。<br />
在图1中，\(p2\)和\(p3\)这两个事件都与\(q3 \)并发，按上述要求，\(p2\)、\(p3\)和\(q3 \)就必须被分配相同的时间。<br />
但是根据<strong>Clock Condition</strong>，\(p2\rightarrow p3 \)意味着两者时间不同，矛盾。</p>
<p>从关系“\(\rightarrow \)”的定义出发，不难推导出，如果如下两个条件成立，则<strong>Clock Condition</strong>必成立。</p>
<ul>
<li><strong>C1</strong>. 如果a和b同属于进程\(P_i \)，并且a发生在b之前，则\(C_i\langle a \rangle &lt;C_i\langle b\rangle \)。</li>
<li><strong>C2</strong>. 如果a是进程\(P_i \)发送消息的事件，b是进程\(P_j \)接收该消息的事件，则\(C_i\langle a \rangle &lt; C_j\langle b\rangle \)。</li>
</ul>
<p>让我们通过程序时空图来进一步讨论时钟。设想一个进程的时钟会依次&quot;tick&quot;过每一个数字，且tick发生在相邻两个事件的间隔之中。<br />
举个例子来说，如果a和b是进程\(P_i \)里连续发生的两个事件，\(C_i \langle a\rangle =4 \)，\(C_i\langle b\rangle =7 \)，那么在这两个事件的间隔中，时钟就滴答了5、6、7。</p>
<p align="center"><img src="./assets/lamport_clock_f2.png" width="70%"></p>
<p>We draw a dashed &quot;tick line&quot; through all the like-numbered ticks of the different processes.<br />
我们可以给这些不同的进程画上“tick line”。给图1画上“tick line”的结果如图2所示。</p>
<p><strong>条件C1</strong>意味着，同一进程中，位于相同进程时间线的任意两个事件之间必须有一条tick line。<br />
<strong>条件C2</strong>意味着，消息传递的波浪线必须穿过一条tick line。<br />
在图2中，箭头是有具体含义的，它代表了happens before的关系，不难看出，如果<strong>C1</strong>、<strong>C2</strong>这两个条件满足，则<strong>Clock Condition</strong>必满足。</p>
<blockquote>
<p><strong>额外的话</strong>：<br />
我对这个tick line有点不太理解，lamport在论文里并没有给出tick line的严格定义。上网搜别的资料，对这个tick line也没有过多的解释。<br />
我猜是说把各进程中计数器递增到同一个值的时刻用虚线连接起来，比如把所有set ticks=1的时刻连起来，就是一条tick line。<br />
如果\(a\rightarrow b\)，那么从a到b的路线至少穿过一条tick line。</p>
</blockquote>
<p align="center"><img src="./assets/lamport_clock_f3.png" width="70%"></p>
<p>接下来，我们可以使用笛卡尔坐标系来重新画这个tick line。把这些tick line抻直，我们就有了图3.<br />
图3是图2的另一种等价形式，都表示了系统中事件发生的次序。<br />
Without introducing the concept of physical time into the system (which requires introducing physical clocks), there is no way to decide which of these pictures is a better representation.</p>
<p>敏锐的读者应该发现，如果把网络中的进程在二维空间可视化出来，我们就有了一个三维的程序时空图。进程和消息仍然可以用线来表示，tick line则变成了二维的平面或曲面。</p>
<p>假设进程是计算机里的算法，事件则表示算法执行的具体过程。我们就可以在满足Clock Condition的情况下，引入时钟的概念了。<br />
进程\(P_i \)的时钟用寄存器\(C_i \)表示。那么\(C_i\langle a\rangle \)就是事件a发生的过程中，\(C_i
\)的值。<br />
在事件与事件的间隔中，\(C_i
\)的值也会发生改变，因此改变\(C_i
\)这一行为本身并不能看作是一个事件。</p>
<p>为了确保系统的时钟能够满足<strong>Clock Condition</strong>，我们需要确保该时钟可以满足<strong>条件C1</strong>和<strong>条件C2</strong>.
<strong>条件C1</strong>很好满足，进程只需要确保它遵守如下规则：</p>
<ul>
<li><strong>IR1</strong>. 每一个进程\(P_i \)在两个连续的事件间隔中，都需要递增\(C_i \)的值。</li>
</ul>
<p>为了确保<strong>条件C2</strong>，我们需要消息m里包含一个时间戳\(T_m \)，\(T_m \)的值等于该消息发送的时间。当一个进程收到带时间戳\(T_m \)的消息后，它必须将它的时钟增大到一个比\(T_m \)还要大的值。<br />
该规则的准确描述如下：</p>
<ul>
<li><strong>IR2</strong>. （a）如果事件a是进程\(P_i \)发送消息m的事件，则m必须包含一个时间戳\(T_m=C_i\langle a\rangle \)。<br />
(b) 当收到一个消息m的时候，进程\(P_j \)需要修改寄存器\(C_j \)的值，使该值大于等于当前\(C_j \)的值，并且还要大于\(T_m \)。</li>
</ul>
<p>在<strong>IR2.</strong> (b)中，我们认为接收消息m的事件发生在修改寄存器\(C_j \)之后。（这是为了数学形式上的好看，与具体实现无关）。</p>
<p>显然，<strong>IR2</strong>确保了<strong>条件C2</strong>可以被满足。因此，只要确保IR1和IR2，则Clock Condition必能满足，于是我们就有了给系统用的逻辑时钟，且该时钟是能够保证正确性的。</p>
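<p>把IR1和IR2直接翻译成代码，就是常见的Lamport逻辑时钟实现。下面是一个最小化的示意（LamportClock是假设的类名）：</p>

```python
class LamportClock:
    def __init__(self):
        self.c = 0

    def tick(self):          # IR1：相邻事件之间递增计数器
        self.c += 1
        return self.c

    def send(self):          # IR2(a)：发送事件，返回消息携带的时间戳T_m
        return self.tick()

    def recv(self, t_m):     # IR2(b)：收到消息后把时钟拨到比T_m大的值
        self.c = max(self.c, t_m) + 1
        return self.c

p, q = LamportClock(), LamportClock()
a = p.send()       # 事件a：进程P发送消息
b = q.recv(a)      # 事件b：进程Q接收该消息
assert b > a       # 条件C2成立，进而Clock Condition成立
```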
<div style="break-before: page; page-break-before: always;"></div><h1 id="给所有事件排一个全序关系"><a class="header" href="#给所有事件排一个全序关系">给所有事件排一个全序关系</a></h1>
<h2 id="全序关系"><a class="header" href="#全序关系">全序关系</a></h2>
<p>我们可以使用满足Clock Condition的系统时钟，给系统里所有的事件排一个全序关系。<br />
只要简单地根据这些事件发生的时间进行排序就好了。为了打破不同进程的事件时间戳相同时的平局，我们先给所有进程指定一个全序关系。<br />
准确地说，我们给出如下定义：</p>
<blockquote>
<p>如果a是进程\(P_i \)的事件，b是进程\(P_j \)的事件，记\(a \Rightarrow b\)的条件为</p>
<ol>
<li>\(C_i\langle a\rangle &lt; C_j \langle b\rangle
\)，或者</li>
<li>\(C_i\langle a\rangle = C_j \langle b\rangle \)并且\(P_i&lt;P_j \)</li>
</ol>
</blockquote>
<p>根据上述定义，我们很容易就可以得到系统中所有事件的全序关系，且Clock Condition意味着如果\(a \rightarrow b\)则有\(a \Rightarrow b\)。<br />
换句话说，使用关系符号\( \Rightarrow\)，可以完成&quot;happened before&quot;这种偏序关系到全序关系的转换。</p>
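<p>全序关系\( \Rightarrow\)可以直接用（时间戳, 进程编号）的字典序实现。下面是一个示意片段：</p>

```python
# 事件表示为 (逻辑时间戳, 进程编号)；时间戳相同时用进程编号打破平局，
# 这正是上文关系 "=>" 的两条定义
events = [(3, 2), (1, 1), (3, 1), (2, 3)]
total_order = sorted(events)   # Python元组比较即字典序
assert total_order == [(1, 1), (2, 3), (3, 1), (3, 2)]
```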
<p>\(\Rightarrow\)顺序取决于系统的时钟\(C_i
\)，在满足Clock Condition的情况下，不同的时钟会导致不同的\(\Rightarrow\)顺序。<br />
给定任何一个由\(\Rightarrow\)确定的全序关系，都会存在一系列的系统时钟能够满足Clock Condition，从而才有可能形成这种全序关系。<br />
在系统的事件中，只有偏序关系\(\rightarrow\)是唯一确定的，而\(\Rightarrow\)这种全序关系则依赖于时钟的选择。</p>
<h2 id="分布式锁"><a class="header" href="#分布式锁">分布式锁</a></h2>
<p>对于实现一个分布式系统来说，如果能维护一个全序关系，则必然是极好的。事实上，我们之所以要实现一个这样的具有正确性的系统逻辑时钟，就是为了获取一个全序关系。接下来，我们将论述如何使用这种事件的全序关系，来解决数据的互斥访问问题。<br />
假设一个系统里，有固定数目的进程，进程之间共享同一组资源。同一时间内，只有一个进程可以使用该资源，因此进程之间必须采取某种同步措施来避免冲突。<br />
我们希望能够找到一种算法，可以保证一个进程访问资源的时候满足如下三个条件：</p>
<blockquote>
<ol>
<li>一个拥有资源的进程能够将该资源授权给另一个进程的前提是，它必须要先释放该资源。</li>
<li>当有多个不同的请求访问该资源的时候，授予访问权限的顺序必须和请求被创建的顺序保持一致。</li>
<li>如果所有进程获取资源之后，会保证最终一定能够释放它，则所有获取资源的请求都会被授予资源的权限。</li>
</ol>
</blockquote>
<p>我们假定初始的时候，有一个特定的进程持有资源。<br />
这些要求都很显然，它们保证了算法的正确性。不过如果将这些条件应用到系统中事件的顺序性当中，<strong>条件2</strong>其实没有说如果两个并发请求同时过来，我们应该先处理哪一个请求。</p>
<p>这是一个nontrivial的问题，换言之，它很重要。
我们是否能安排一个scheduling process作为中心节点，来确保请求的顺序性呢？不行，除非我们做一些额外的规定。<br />
假设\(P_0 \)是这个scheduling process，\(P_1 \)先给\(P_0 \)发送请求，再给\(P_2 \)发送消息。在\(P_2
\)接收消息的过程中，它也发送请求给了\(P_0 \)。<br />
那就有可能存在\(P_2 \)的请求比\(P_1 \)的请求先一步到达\(P_0 \)的情形。<br />
在这种情形下，如果\(P_2 \)的请求先一步被授权，则违反了上述条件2。</p>
<p>为了解决该问题，我们引入上一章节的逻辑时钟，该逻辑时钟满足IR1和IR2两个条件。使用逻辑时钟，就可以给所有事件定义一个\(\Rightarrow\)全序关系。<br />
可以用逻辑时钟给所有请求和释放资源的操作维护一个全序关系，有了这个全序关系，我们就可以正确地实现分布式锁的算法。只要确保每个进程都能够知晓其他进程的操作就可以了。</p>
<p>为了简化问题，我们做一些额外的假设。<br />
这些假设不是必须的，但是有了这些假设以后，我们可以不必陷入具体的实现细节。假设\(P_i \)和\(P_j \)是系统里的任意两个进程，\(P_i \)到\(P_j \)的所有消息的接收顺序都与它们的发送顺序保持一致。<br />
进一步地，我们还假设所有发出去的消息最终都能够被收到。（如果我们给消息加上序列号，并且加上message acknowledgement protocol，就不需要再做上述假设了。）</p>
<blockquote>
<p>（额外的话：即消息不会乱序到达，也不会丢失。话说这不是TCP吗？）</p>
</blockquote>
<p>我们还假设任意一个进程都可以直接给其他的进程发送消息。</p>
<p>每个进程自身都维护了一个请求队列，除了它自己以外，其他进程无法看到这个队列。<br />
我们假设请求队列初始的时候，包含一条消息\(T_0 \)：\(P_0 \)请求资源；<br />
\( P_0 \)是上文提到的那个一开始拿到资源的进程，\(T_0 \)是比系统的任何一个逻辑时钟初始值都要小的一个时间戳。</p>
<blockquote>
<p>（注意这个请求队列说的不是HTTP Request，是请求资源的Request命令。）</p>
</blockquote>
<p>接着该算法定义了如下五条规则。为了方便起见，每一条规则定义的动作都可以看作是一个单一的事件。</p>
<ol>
<li>
<p>进程\(P_i \)如果要获取资源，需要发送一个消息\(T_m \)：\(P_i \)请求资源，并把这个消息发送给所有进程；<br />
然后将该消息放到自己的请求队列里面，\(T_m \)是这条消息的时间戳。</p>
</li>
<li>
<p>当\(P_j \)收到包含内容为\(P_i \)请求资源的消息\(T_m \)时，它将这条消息放到它的请求队列里面，并且给\(P_i \)发送一个确认消息（带时间戳的）。 </p>
</li>
<li>
<p>如果进程\(P_i \)要释放资源，它需要把请求队列里的包含\(P_i \)请求资源的\(T_m \)消息移除，并给其他所有进程发送一条带时间戳的\(P_i \)释放资源的消息。</p>
</li>
<li>
<p>当进程\(P_j \)收到\(P_i \)释放资源的消息之后，它就把它请求队列里所有包含\(P_i \)请求资源的消息\(T_m \)移除。</p>
</li>
<li>
<p>当如下两个条件满足的时候，进程\(P_i \)就获取到了资源的使用权：</p>
<ul>
<li><strong>(i)</strong> 在它的请求队列中，存在一个包含\(P_i \)请求资源内容的消息\(T_m \)，在全序关系\(\Rightarrow\)的定义下，该消息比任何其他请求的时间戳都要早。<br />
（To define the relation &quot;\(\Rightarrow\)&quot; for messages, we identify a message with the event of sending it.）<br />
即按照全序关系\(\Rightarrow\)排列后，该请求资源的命令位于请求队列的头部。</li>
<li><strong>(ii)</strong> \(P_i \)从剩下的所有进程里，都收到了一条比\(T_m \)要晚的消息。</li>
</ul>
</li>
</ol>
<blockquote>
<p>注意：第五条规则的条件(i)(ii)判断，是在进程本地进行的。</p>
</blockquote>
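<p>规则5的两个条件都可以在进程本地判断。下面的函数给出一个示意实现（holds_resource、latest_from等名字均为假设，并非论文原文）：</p>

```python
def holds_resource(pid, queue, latest_from):
    """pid: 本进程编号；
    queue: 本地请求队列，元素为 (时间戳, 进程编号)；
    latest_from: 其他进程编号 -> 从该进程收到的最新消息时间戳。"""
    mine = [r for r in queue if r[1] == pid]
    if not mine:
        return False
    my_req = min(mine)
    # 条件(i)：按全序 "=>"（字典序）排列后，自己的请求位于队列头部
    if my_req != min(queue):
        return False
    # 条件(ii)：从其他每个进程都收到过比T_m更晚的消息
    return all(ts > my_req[0] for ts in latest_from.values())

queue = [(10, "P"), (12, "Q")]
assert holds_resource("P", queue, {"Q": 11, "R": 13})        # 两个条件都满足
assert not holds_resource("P", queue, {"Q": 9, "R": 13})     # 条件(ii)不满足
assert not holds_resource("Q", queue, {"P": 13, "R": 13})    # 条件(i)不满足
```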
<p>我们可以很容易地验证由上述规则定义的算法能够满足上文提到的条件1、2、3。</p>
<p>首先，观察规则5的条件(ii)，我们前文提到有一个假设是消息都是按序接收的，如此则保证了\(P_i \)在处理它自己当前收到的请求的时候，它知道这个请求之前所有请求的信息。</p>
<p>由于只有满足规则3或者规则4，才可以将消息从请求队列里移除，那很容易能够得出条件1是能够满足的。</p>
<p>根据全序关系\(\Rightarrow\)是由偏序关系\(\rightarrow\)拓展得来的这一事实，那么条件2也满足。</p>
<p>规则2确保了在\(P_i \)请求资源之后，系统的状态最终一定会满足规则5的条件(ii)。</p>
<p>规则3和规则4还意味着：如果拿到资源的进程最终一定会释放它，那么规则5的条件(i)最终一定会满足，因此条件3成立。</p>
<h2 id="状态机"><a class="header" href="#状态机">状态机</a></h2>
<p>这就是我们给出的分布式算法。<br />
每个独立的进程都遵守这些规则，没有一个作为中心节点/中心存储的进程对资源进行同步。</p>
<p>The synchronization is specified in terms of a State Machine, consisting of a set \(C\) of possible commands, a set \(S\) of possible states, and a function \(e\): \(C×S \rightarrow S\).<br />
The relation \(e(C, S) = S'\) means that executing the command \(C\) with the machine in state \(S\) causes the machine state to change to \(S'\). </p>
<p>在我们的例子里，命令集合C包含了所有\(P_i \)请求资源、\(P_i \)释放资源的命令，状态集合包含了请求队列中所有等待处理的请求命令。</p>
<p>那么当前谁持有资源呢？答案是请求队列里时间戳最小的那个请求的进程（位于队列头节点的），持有了当前资源。<br />
如果要执行一个请求资源的命令，则需要将该请求放到请求队列里的尾部；如果要执行释放资源的命令，则将请求该资源的命令从请求队列里移除。</p>
<p>每个进程都独立地维护着这个状态机，并且处理来自其他进程的命令。因为这些命令都带时间戳，所以我们可以按照它们的\(\Rightarrow\)关系达成同步，于是每个进程都执行着相同的一组命令序列。一个进程执行时间戳为T的命令的前提是：它知晓所有时间戳小于T的命令都已经得到了处理。这个算法很直观，我们就不再花篇幅去描述它了。</p>
<h2 id="宕机问题"><a class="header" href="#宕机问题">宕机问题</a></h2>
<p>这种方法可以解决分布式系统中，进程的同步问题。然而，这个算法需要系统中所有的进程参与。一个进程必须知道其他所有进程所发出的命令，因此只要有一个进程宕机了，就有可能导致状态机里其他的进程无法正常执行命令，从而导致系统不可用。</p>
<p>宕机问题是一个很严重的事情，不过这超出了这本论文的讨论范围。<br />
We will just observe that the entire concept of failure is only meaningful in the context of physical time.<br />
Without physical time, there is no way to distinguish a failed process from one which is just pausing between events.<br />
用户之所以认为系统“宕机”了，是因为ta认为ta等待响应的时间太长了。<br />
我们在文献《Lamport, L. The implementation of reliable distributed multiprocess systems. To appear in Computer Networks.》中讨论了如何在进程出现故障的情况下，继续让系统工作的方法。</p>
<h2 id="额外的话"><a class="header" href="#额外的话">额外的话</a></h2>
<p>这一小节令人费解的地方还是蛮多的。</p>
<p>通过这五条算法的规则，推导出该算法能够满足分布式锁的条件，并不是一目了然的。</p>
<p>关键是规则5，如何判断一个进程获取到了分布式锁的条件。一是要满足经过排序后，该进程的请求要位于请求队列的头节点；<br />
二是，原文中说</p>
<blockquote>
<p>\(P_i\) has received a message from every other process timestamped later than \(T_m\).</p>
</blockquote>
<p>也就是说该进程至少要从剩下的进程里收到一条比\(T_m\)还要大的消息。<br />
言外之意，假设有进程P、Q、R，如果进程P想获取锁，那么除了它的命令请求要排在请求队列的头部之外，<br />
假设该命令的时间戳是\(T_m=10\)，它还至少要收到一条从Q而来的比10更晚的消息，以及一条从R而来的比10更晚的消息。<br />
满足上述两个条件之后，进程P才可以认为它获取到了这把锁。</p>
<p>论文里并没有说这条更晚的消息是确认该命令的ACK还是其他。</p>
<blockquote>
<p><strong>参考</strong><br />
<a href="https://yang.observer/2020/07/26/time-lamport-logical-time/">计算机的时钟（二）：Lamport逻辑时钟</a></p>
</blockquote>
<div style="break-before: page; page-break-before: always;"></div><h1 id="异常事件"><a class="header" href="#异常事件">异常事件</a></h1>
<p>我们的资源分配算法给请求资源的命令排了一个全序关系\(\Rightarrow\)。有可能存在以下“异常行为”。<br />
假设我们有一个系统，系统里有若干计算机坐落在世界各地，彼此之间可以相互通信。假设用户A给计算机A发送了一个请求A，然后给他在另一个城市的朋友打电话，让朋友去给计算机B发送一个请求B。
那么有可能请求B会包含一个比请求A的时间戳还要小的一个时间戳。<br />
这是因为该分布式系统其实不知道从时间线上来说，请求A发生的比B早，因为这个信息交换是存在于系统之外的，该系统则没有理由知道这个顺序关系。</p>
<p>让我们更进一步地来分析这个问题。令\(\varphi \)为系统里事件的集合。我们把\(\varphi \)和其他外部事件，比如上文中打电话的例子，放在一起进行考虑，将这些事件集合记为\(\underline{\varphi} \)。<br />
令\(\rightarrow\)表示事件集合\(\underline{\varphi} \)的“happened before”关系。<br />
在上述例子里，对于\(\underline{\varphi} \)来说，我们有\(a\rightarrow b\), 但是对于\(\varphi \)来说，却有\(a \nrightarrow b
\)。<br />
显然，没有任何办法能够保证基于事件集合\(\varphi \)，给出完全符合事件集合\(\underline{\varphi} \)的顺序关系。</p>
<p>有两种可行的方法可以避免这种异常行为。第一种是显式地给出确定顺序关系\(\rightarrow\)所需要的信息。在我们的例子中，发送请求A的人会获得一个值为\(T_A
\)的时间戳，他的朋友在发起请求B的时候，则有责任指定一个比\(T_A
\)大的时间戳。我们通过让用户对此事负责的方式，规避了这种异常行为。</p>
<p>第二种方式是，构建一个满足如下条件的系统时钟。</p>
<blockquote>
<p><strong>Strong Clock Condition</strong>:<br />
如果\(a\rightarrow b\)，则必有\(C_i\langle a\rangle &lt; C_j \langle b\rangle \)。</p>
</blockquote>
<p>这个条件里的\(\rightarrow\)比之前提到的<strong>Clock Condition</strong>里的\(\rightarrow\)要更强。我们构建的逻辑时钟并不满足这个条件。</p>
<p>让我们定义\(\underline{\varphi} \)为在物理时空中实际发生的全部&quot;real&quot;事件的集合，并且定义\(\rightarrow\)为由狭义相对论所确定的事件之间的偏序关系。</p>
<p>在我们生活的宇宙之中，一个神奇的事情就是我们可以基于彼此之间相互独立的物理时钟，构建一个满足Strong Clock Condition的系统。</p>
<p>接下来我们将探讨，如何使用该物理时钟，消除这种不确定行为。<br />
让我们开始把目光转向物理时钟。    </p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="物理时钟"><a class="header" href="#物理时钟">物理时钟</a></h1>
<p>现在让我们把物理时钟引入到之前提到的程序时空图里。令\(C_i(t)
\)表示时钟\(C_i
\)在物理时间\(t
\)时刻的值。为了数学上方便起见，我们假设时钟是连续运行的，而非离散地&quot;ticks&quot;. (A discrete clock can be thought of as a continuous one in which there is an error of up to \(\frac{1}{2}
\) &quot;tick&quot; in reading it.)</p>
<p>更精确地，我们假设\(C_i(t)
\)是一个对时间t变量连续、可微的函数，除了时钟被重置时间的那一刻，其余时间均不会产生不连续的跳跃间断点。那么\(\frac{dC_i(t)}{dt}
\)就表示了时钟在时间\(t
\)上的变化率。</p>
<p>为了使时钟\(C_i
\)是一个真正的物理时钟，那必须尽可能地让它的变化率保证正确。即，对所有的t时刻来说，必须有\(\frac{dC_i(t)}{dt}\approx1
\)。更精确地，我们假设时钟将满足如下条件：</p>
<blockquote>
<p><strong>PC1</strong>. </p>
<p>存在一个常数\(\kappa\ll1
\)，对于所有的\(i
\)来说：\(\left| dC_i(t)/dt-1 \right|&lt;\kappa
\).
对于石英表来说，\(\kappa\leq 10^{-6}
\).</p>
</blockquote>
<p>如此，我们就有了若干独立的、能够接近正确的速率的时钟，但还不够。这些时钟之间必须能够彼此同步。即对于所有的\(i \), \(j \), \(t \)来说，要有\(C_i(t)\approx C_j(t)
\)。</p>
<p>更精确地，必须存在一个足够小的常数\(\epsilon
\)能够满足如下条件：</p>
<blockquote>
<p><strong>PC2</strong>. </p>
<p>对于所有的\(i, j
\)，都有\(\left|C_i(t)-C_j(t) \right|&lt;\epsilon
\)。</p>
</blockquote>
<p>如果把图2中程序时空图的纵轴看作是物理时间，那么PC2就说明了每一条tick line的高度误差不会超过\(\epsilon
\)。</p>
<p>由于任意两个时钟都会存在速率上的差异，它们之间时钟的读数会产生飘移，越飘越远。因此我们需要设计出一种算法，来保证条件PC2成立。不过，我们还是先看看\(\epsilon
\)和\(\kappa
\)要多小，才能保证没有异常行为的发生。</p>
<p>我们必须确保系统\(\underline{\varphi}
\)里相关的物理事件满足Strong Clock Condition。假设我们的时钟能够满足平平无奇的Clock Condition，那我们只需要考虑系统\(\underline{\varphi}
\)里任意两个并发的事件a和b也能够满足Strong Clock Condition就可以了，即系统里那些\(a\nrightarrow b
\)的事件。因此，我们只需要讨论发生在不同进程里的那些事件。</p>
<p>假设a和b是属于不同进程的两个事件，且满足\(a\rightarrow b
\)。若事件a发生在物理时刻t，那么b发生的时刻一定晚于\(t+\mu
\)，其中\(\mu
\)小于进程间传递消息的最小传输时间。比如，我们可以令\(\mu
\)等于进程之间最短距离与光速的比值。不过在实际系统中，\(\mu
\)取决于消息传递的具体实现，如果系统通信性能不佳，\(\mu
\)有可能会很大。</p>
<p>为了避免异常行为，我们必须确保对于任意的\(i, j, t
\)，都有\(C_i(t+\mu)-C_j(t)&gt;0
\)。再结合条件PC1和PC2，我们可以把\(\kappa
\)和\(\epsilon
\)的值和\(\mu
\)的值关联起来。假设当时钟重置时间的时候，它的时钟只会向前重置而不会向后重置。（如果向后重置，会导致时钟违反条件C1）</p>
<p>那么，PC1就表示\(C_i(t+\mu)-C_i(t)&gt;(1-\kappa)\mu
\)。使用PC2，如果下述不等式成立，则很容易推导出\(C_i(t+\mu)-C_j(t)&gt;0
\)：</p>
<p>$$\epsilon/(1-\kappa)\leq\mu
$$.</p>
<p>有了这个不等式，再加上PC1和PC2，那就意味着异常行为是不可能发生的。</p>
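<p>代入一组假设的数值可以感受该不等式的量级（κ取上文石英表的量级，μ为假设的10ms，仅为示意）：</p>

```python
kappa = 1e-6                 # PC1：时钟速率误差，石英表量级（见上文）
mu = 0.01                    # 假设：进程间最小消息传输时间为10ms
eps_max = (1 - kappa) * mu   # 由 epsilon/(1-kappa) <= mu 解出的最大允许时钟偏差
assert mu >= eps_max         # 时钟间的同步误差必须不超过最小消息时延
```

直观地说：只要时钟之间的偏差ε始终小于消息的最小传输时间μ（并扣除速率误差κ的影响），任何消息的接收时间戳都一定大于发送时间戳，异常行为就不会发生。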
<p>现在让我们来描述我们的算法，来确保PC2一定成立。令m为物理时刻t发出的一条消息，另一个进程在\(t'
\)时刻接收到该消息。我们定义\(\nu_m=t'-t
\)为消息m的总时延。该进程在接收消息m的时候，自然不会知道这个时延的大小。不过，我们可以假设接收消息的进程知道最小时延\(\mu_m\geq0
\)即\(\mu_m\leq\nu_m
\)。我们称\(\xi_m=\nu_m-\mu_m
\)为消息的不可预测时延。</p>
<p>现在我们指明物理时钟的两个规则：</p>
<blockquote>
<p><strong>IR1</strong>'. </p>
<p>对每个i，如果\(P_i
\)没有在物理时间t时刻接收到消息，那么\(C_i
\)在t时刻是可微的，且\(\frac{dC_i(t)}{dt}&gt;0
\)。</p>
</blockquote>
<blockquote>
<p><strong>IR2</strong>'. </p>
<p>(a) 如果\(P_i
\)在物理时刻t发送了一条消息m，那么消息m则包含了一个时间戳\(T_m=C_i(t)
\)。</p>
<p>(b)当进程\(P_j
\)在\(t'
\)时刻收到消息m的时候，进程\(P_j
\)会把\(C_j(t')
\)的值设置成\(max(C_j(t'-0), T_m+\mu_m)
\)。</p>
</blockquote>
<p>尽管上述两条规则用到了物理时间的参数，不过一个进程只需要知道它自己的时钟读数和收到消息的时间戳，就可以决定是否重置时钟。为了数学上的方便，我们假设每个事件都发生在单个物理时间点上，且同一进程中不同的事件发生在不同的时间点。如此，这些规则就和之前的IR1、IR2等价，于是该物理时钟满足Clock Condition。The fact that real events have a finite duration causes no difficulty in implementing the algorithm. The only real concern in the implementation is making sure that the discrete clock ticks are frequent enough so C1 is maintained.</p>
<p>现在，让我们看看这种时钟同步算法是如何满足条件PC2的。我们假设由不同进程组成的系统，可以通过一个有向图进行描述，在该有向图中，如果有从\(P_i
\)到\(P_j
\)的弧线，则表示了\(P_i
\)在给\(P_j
\)发送消息，弧长表示发送消息所需的时间。</p>
<p>我们说对于任意一个时间t来说，一条消息要经过\(\tau
\)秒才能传输完成，那么进程\(P_i
\)在t时刻发送一条消息给\(P_j
\)的话，至少\(P_j
\)到\(t+\tau
\)的时刻才能接收到。</p>
<p>The diameter of the directed graph is the smallest number \(d
\) such that for any pair of distinct processes \(P_j, P_k
\)there is a path from \(P_j
\)to\(P_k
\) having at most \(d
\)arcs.</p>
<p>为了使PC2成立，下述定理限制了系统启动后，时钟同步所需要的时间。</p>
<p>定理：<br />
假设进程构成一个直径为\(d
\)的有向图，且始终遵守规则IR1'和IR2'；假设对任意消息m，存在常数\(\mu
\)使得\(\mu_m\leq\mu
\)；并且当\(t\geq t_0
\)时：(a) PC1成立；(b) 存在常数\(\tau
\)和\(\xi
\)，使得每隔\(\tau
\)秒，每条弧上都会发出一条不可预测时延小于\(\xi
\)的消息。<br />
那么当\(t\geq t_0+\tau d
\)时，PC2成立且\(\epsilon\approx d(2\kappa\tau+\xi)
\)，其中假定\(\mu+\xi\ll\tau
\)。</p>
<p>这个定理的证明相当困难，我们在附录里给出了证明。学术界有很多工作致力于解决物理时钟同步的问题。我们推荐读者去读《Ellingson, C, and Kulpinski, R.J. Dissemination of system-time. 1973》。文献中描述了一种估计消息传递延时，并且校准时钟频率的方法。However, the requirement that clocks are never set backwards seems to distinguish our situation from ones previously studied, and we believe this theorem to be a new result.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="结论-3"><a class="header" href="#结论-3">结论</a></h1>
<p>本文我们引入了“happening before”概念，从而定义了分布式系统中事件之间的偏序关系。<br />
我们提供了一个算法，该算法可以把该偏序关系扩展为一个任意选定的全序关系，基于此全序关系，可以构建分布式锁，进而解决不同节点之间互斥访问资源的同步问题。<br />
在之后的论文里，我们会讨论如何基于这种方法解决任何同步问题。</p>
<p>基于逻辑时钟定义的全序关系带有一定的任意性。它可能会产生异常行为，使系统中不同的用户感知到的事件发生顺序出现歧义。</p>
<blockquote>
<p>比如用户A发送请求A，打电话告诉用户B发送请求B，系统可能会产生\(b\rightarrow a\)，但用户A的感知是\(a\rightarrow b\)）</p>
</blockquote>
<p>为了解决这种问题，我们可以使用同步的物理时钟。在附录里，我们推导了该物理时钟可以被同步的正确性。</p>
<p>In a distributed system, it is important to realize that the order in which events occur is only a partial ordering.<br />
We believe that this idea is useful in understanding any multiprocess system.<br />
It should help one to understand the basic problems of multiprocessing independently of the mechanisms used to solve them.<br />
（最后一段lamport发表了一段感言，意思是在分布式系统中，一个很重要的事情就是意识到：我们所说的事件发生的先后顺序，只是一种偏序关系。<br />
这个idea要时刻记在脑子里，这对于我们理解任何多进程系统都是有用的。<br />
我就直接贴原文上来吧，正好大家一起品一品。<del>主要是不知道最后一句咋翻译</del>）</p>
<h2 id="附录-物理时钟理论的正确性证明"><a class="header" href="#附录-物理时钟理论的正确性证明">附录-物理时钟理论的正确性证明</a></h2>
<p>略</p>
<h2 id="这篇论文对分布式领域的影响"><a class="header" href="#这篇论文对分布式领域的影响">这篇论文对分布式领域的影响</a></h2>
<blockquote>
<p><strong>额外的话</strong><br />
这篇论文堪称分布式系统领域最重要的论文。如果给想了解分布式的人只推荐一篇论文的话，我想就是这篇了。</p>
<p>Lamport在这篇文章里初步给出了分布式系统的定义，并且指出分布式系统里的一个重要问题：时钟同步。</p>
<p>首先他定义了happens before关系，在不引入物理时钟的情况下引入逻辑时钟，从而保证系统里的事件具有偏序关系。</p>
<p>进一步他把偏序关系拓展，给出了一种获取全序关系的方式。</p>
<p>但是依赖于逻辑时钟定义出的全序关系，可能会引发某些异常行为，于是Lamport又给出了一种物理时钟的算法。</p>
<p>除此之外，Lamport还给出了全序关系的应用：基于全序关系构建分布式锁。<br />
Lamport还提醒我们，可别小看这件事情，有了分布式锁，我们就可以构建具有一致性的分布式状态机了。</p>
<p>往后的几十年里，论文中提到的每一个概念都对分布式领域产生了深远的影响。</p>
</blockquote>
<ol>
<li>在大规模的分布式环境下产生单调递增的时间戳，raft里使用了term的概念，和lamport给出的逻辑时钟算法基本一致。</li>
<li>谷歌的全球级分布式数据库spanner则采用物理时钟解决这一问题，spanner甚至能够在跨越遍布全球的多个数据中心之间高效地产生单调递增的时间戳。做到这一点，靠的是一种称为TrueTime的机制，而这种机制的理论基础就是Lamport这篇论文中的物理时钟算法（两者之间有千丝万缕的联系）。</li>
<li>这篇论文中定义的「Happened Before」关系，不仅在分布式系统设计中成为考虑不同事件之间关系的基础，而且在多线程编程模型中也是重要的概念。</li>
<li>利用分布式状态机来实现数据复制的通用方法（State Machine Replication，简称SMR），其实也是这篇论文首创的。后来的人们称这种分布式的状态机为<strong>复制状态机</strong>，复制状态机的概念同样还出现在了raft、vm-ft论文里。</li>
</ol>
<div style="break-before: page; page-break-before: always;"></div><h1 id="killer-of-microseconds"><a class="header" href="#killer-of-microseconds">Killer of Microseconds</a></h1>
<p>这是一篇17年的essay，给这几年的OS/CPU设计指了一个方向。</p>
<blockquote>
<p><a href="./assets/Attack%20of%20the%20Killer%20Microseconds.pdf">Attack of the Killer Microseconds.pdf</a><br />
一句话：如今微秒级的时延成为性能瓶颈，我们需要下一代的操作系统/新的CPU。</p>
</blockquote>
<h2 id="内容概要"><a class="header" href="#内容概要">内容概要</a></h2>
<blockquote>
<p>Microsecond-scale I/O means tension between performance and productivity
that will need new latency-mitigating ideas, including in hardware.</p>
</blockquote>
<p>目前网络时延/设备I/O的时延已经到了微秒量级，这意味着不管是从软件的角度，还是硬件的角度，都需要我们去寻找新的缓解时延的方法，并在performance与productivity之间做取舍。</p>
<hr />
<p>曾经，为了解决I/O或者网络设备的时延问题，我们引入了中间层来解决它。</p>
<p><strong>DRAM(ns) &lt;-&gt; Disk I/O (ms)</strong></p>
<p>硬件设计者们会给CPU加上流水线、预取、分支预测、多级缓存架构，来缓解memory的I/O时延问题。<br />
作为软件设计者们，则会给OS引入多线程编程模型。假如我们要去读磁盘文件<em>read()</em>，就可以开一个新的thread去做这件事，于是就不会阻塞主进程的执行。</p>
<p>PS: 多线程属于同步编程模型。在异步编程模型里，人们使用事件驱动的方式来编写程序。（这个在游戏、前端里会比较常见）</p>
<p>The author's view (not necessarily right), drawn from his experience at Google, is that the synchronous model beats the asynchronous one by a wide margin: it makes programs easier to write and to debug,
because it moves the complexity of managing task switching out of the programmer's code and into the OS and the threading library.</p>
<hr />
<p>Today, emerging memory and network devices have pushed latency down to the microsecond scale, and dealing with the gap between microseconds and nanoseconds has become a new problem.</p>
<p>Meanwhile, limited by power and the slowing of Moore's law, CPU performance is growing ever more slowly, so the big cloud companies have turned from single machines to distributed systems in search of performance.<br />
The days when you could ship a program, do nothing, and watch system performance double as hardware improved every eighteen months are gone for good.
The future is <strong>distributed &amp; microservices</strong>.</p>
<blockquote>
<p>In other words, now that distributed systems and microservices dominate, a server OS must satisfy two requirements:<br />
1) higher network throughput;<br />
2) lower tail latency (a smaller &quot;weakest-link&quot; effect).<br />
High throughput lets the server answer more requests; low tail latency is what protects the user experience.</p>
</blockquote>
<hr />
<p>Here is how an OS wastes a CPU today:</p>
<ul>
<li>A context switch between processes is on the order of milliseconds.</li>
<li>RDMA (Remote Direct Memory Access) has a latency of about 2 μs.</li>
</ul>
<p>If we do it the old way and spawn a thread to handle the I/O, the round trip itself takes about 4 μs, but the associated context-switch and request-queueing overhead runs to thousands of microseconds!</p>
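<p>The overhead claim is easy to get a feel for: ping-pong between two threads and time the hand-offs. A rough, hypothetical measurement (the numbers vary wildly by machine and OS):</p>
<pre><code class="language-python">import threading
import time

# Ping-pong between two threads; each round trip pays for two thread
# wakeups plus the scheduling work around them.
N = 1000
ping, pong = threading.Event(), threading.Event()

def worker():
    for _ in range(N):
        ping.wait(); ping.clear()
        pong.set()

t = threading.Thread(target=worker)
t.start()
start = time.perf_counter()
for _ in range(N):
    ping.set()
    pong.wait(); pong.clear()
elapsed = time.perf_counter() - start
t.join()
print(f"avg thread round trip: {elapsed / N * 1e6:.1f} us")
</code></pre>
<p>On most machines the result already dwarfs a 2 μs RDMA round trip, which is exactly the essay's point.</p>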
<h3 id="解决方案"><a class="header" href="#解决方案">Solutions</a></h3>
<ol>
<li>Hardware: new hardware that supports faster context switches.</li>
<li>Software: a new OS that supports faster context switches, i.e. lighter-weight threads.</li>
</ol>
<p>One extra requirement: the new CPUs and OS must still offer the synchronous programming model that programmers want.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="the-design-and-operation-of-cloudlab"><a class="header" href="#the-design-and-operation-of-cloudlab">The Design and Operation of CloudLab</a></h1>
<blockquote>
<p><a href="./assets/cloudlab.pdf">cloudlab.pdf</a></p>
</blockquote>
<p>This paper is kind of wild... a shared hardware platform for running experiments? Nearly 4,000 users over four years of operation, and that's a paper?<br />
And it gives hardly any implementation details; it reads more like a company's annual report, or a soft ad.</p>
<h2 id="摘要-6"><a class="header" href="#摘要-6">Abstract</a></h2>
<blockquote>
<p>Given the highly empirical nature of research in cloud
computing, networked systems, and related fields, testbeds
play an important role in the research ecosystem. In this
paper, we cover one such facility, CloudLab, which supports
systems research by providing raw access to programmable
hardware, enabling research at large scales, and creating a
shared platform for repeatable research.<br />
We present our experiences designing CloudLab and operating it for four years,
serving nearly 4,000 users who have run over 79,000 experiments on 2,250 servers, switches, and other pieces of datacenter equipment.<br />
From this experience, we draw lessons organized around two themes.<br />
The first set comes from analysis of data regarding the use of CloudLab: how users interact with it, what they use it for, and the implications for facility design and operation.<br />
Our second set of lessons comes from looking at the ways that algorithms used
“under the hood,” such as resource allocation, have important— and sometimes unexpected—effects on user experience and behavior. These lessons can be of value to the designers and operators of IaaS facilities in general, systems testbeds in particular, and users who have a stake in understanding how these systems are built.</p>
</blockquote>
<h2 id="结论-4"><a class="header" href="#结论-4">Conclusion</a></h2>
<blockquote>
<p>Testbeds for computer science research occupy a unique place in the overall landscape of computing infrastructure. 
They are often used in an attempt to overcome a basic impasse <a href="./assets/Overcoming_the_Internet_impasse_through_virtualization.pdf">3</a>: 
as computing technologies become popular, research into their fundamentals becomes simultaneously more valuable and more difficult to do.<br />
The existence of production systems such as the Internet and commercial clouds motivates work aimed at improving them, but production deployments offer service at a specific layer of abstraction, making it difficult or impossible to use them for research that seeks to work under that layer or to change the abstraction significantly.</p>
</blockquote>
<p>CloudLab's strength is that its test machines are deployed on bare metal rather than on virtual machines.</p>
<h2 id="简介"><a class="header" href="#简介">Introduction</a></h2>
<p>Website: <a href="https://www.cloudlab.us">https://www.cloudlab.us</a></p>
<p>Too bad I'm not an MIT student, so I can't use this platform.</p>
<h3 id="它的特点"><a class="header" href="#它的特点">Features</a></h3>
<ul>
<li>A shared cloud infrastructure for research and education in computer systems.</li>
<li>The CloudLab clusters have almost 15,000 cores distributed across three sites around the United States.</li>
</ul>
<h3 id="问题-1"><a class="header" href="#问题-1">Questions</a></h3>
<ol>
<li>
<p>Q: Why do researchers need bare metal access to hardware? How is the hardware access provided by public clouds different?</p>
</li>
<li>
<p>Q: How does CloudLab maintain security?</p>
</li>
</ol>
<div style="break-before: page; page-break-before: always;"></div><h1 id="dpdk"><a class="header" href="#dpdk">DPDK</a></h1>
<blockquote>
<p><a href="https://www.dpdk.org/">DPDK</a><br />
<a href="https://doc.dpdk.org/guides/prog_guide/">DPDK programming guide</a><br />
<a href="./assets/nsdi14-paper-jeong.pdf">mTCP</a></p>
</blockquote>
<h2 id="简介-1"><a class="header" href="#简介-1">Introduction</a></h2>
<blockquote>
<p>The Data Plane Development Kit (DPDK) is an open-source software project with a vibrant community of development contributors.
Because it is open-source and free, a large portion of the tech industry involved in microprocessor research and development are working to improve DPDK with each release update. 
This includes computer scientists and researchers from Intel, IBM, and Cisco, among other industry leaders.<br />
According to the DPDK Programmer's Guide Overview, “The main goal of the DPDK is to provide a simple, complete framework for fast packet processing in <strong>data plane</strong> applications.” 
This makes DPDK ideal for the database as a service application.</p>
</blockquote>
<p>Data-plane programming seems to be one of the hottest ideas in networking over the last couple of years:
it lets you bypass the kernel for network communication and thereby avoid context switches.</p>
<h3 id="背景-1"><a class="header" href="#背景-1">Background</a></h3>
<p>In datacenter services the NIC itself is fast, but crossing from user mode into kernel mode is very slow.</p>
<p>Programming the NIC with DPDK lets us bypass the kernel, cutting the latency of moving network packets (kernel-bypass networking).</p>
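<p>DPDK itself is a C framework, so the following is not DPDK code; it is just a quick, hypothetical way to feel the user/kernel crossing cost that kernel bypass removes: compare an operation that enters the kernel on every iteration with one that stays in user space.</p>
<pre><code class="language-python">import os
import time

N = 100_000

# A syscall per iteration: every os.read crosses into the kernel.
fd = os.open("/dev/zero", os.O_RDONLY)
start = time.perf_counter()
for _ in range(N):
    os.read(fd, 64)
syscall_ns = (time.perf_counter() - start) / N * 1e9
os.close(fd)

# The same amount of "work" done purely in user space.
buf = bytes(64)
start = time.perf_counter()
for _ in range(N):
    chunk = buf[:64]
userspace_ns = (time.perf_counter() - start) / N * 1e9

print(f"kernel crossing: ~{syscall_ns:.0f} ns/op, user space: ~{userspace_ns:.0f} ns/op")
</code></pre>
<p>A poll-mode driver in user space pays the second cost instead of the first on every packet, which is where the latency win comes from.</p>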
<h3 id="lab-玩转dpdk库"><a class="header" href="#lab-玩转dpdk库">LAB: Getting hands-on with DPDK</a></h3>
<p>We may not have CloudLab, but we can still run DPDK under Linux on our own bare-metal machine.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="spdk"><a class="header" href="#spdk">SPDK</a></h1>
<blockquote>
<p><a href="https://spdk.io/">https://spdk.io/</a></p>
<p>The Storage Performance Development Kit (SPDK) provides a set of tools and libraries for writing high performance, scalable, user-mode storage applications.</p>
</blockquote>
<div style="break-before: page; page-break-before: always;"></div><h1 id="shenango"><a class="header" href="#shenango">Shenango</a></h1>
<blockquote>
<p><a href="./assets/shenango_nsdi19.pdf">shenango_nsdi19.pdf</a><br />
From the MIT CSAIL lab.</p>
</blockquote>
<h1 id="shenango-achieving-high-cpu-efficiency-for-latency-sensitive-datacenter-workloads"><a class="header" href="#shenango-achieving-high-cpu-efficiency-for-latency-sensitive-datacenter-workloads">Shenango: Achieving High CPU Efficiency for Latency-sensitive Datacenter Workloads</a></h1>
<h2 id="摘要-7"><a class="header" href="#摘要-7">Abstract</a></h2>
<blockquote>
<p>Datacenter applications demand microsecond-scale tail latencies and high request rates from operating systems, 
and most applications handle loads that have high variance over multiple timescales.<br />
Achieving these goals in a CPU-efficient way is an open problem. Because of the high overheads of today’s kernels, 
the best available solution to achieve microsecond-scale latencies is <strong>kernel-bypass networking</strong>, 
which dedicates CPU cores to applications for spin-polling the network card.<br />
But this approach wastes CPU: even at modest average loads, one must dedicate enough cores for the peak expected load.</p>
<p>Shenango achieves comparable latencies but at far greater CPU efficiency.<br />
It reallocates cores across applications at very fine granularity—every 5 μs—enabling cycles unused by latency-sensitive applications to be used productively by batch processing applications.<br />
It achieves such fast reallocation rates with<br />
(1) an efficient algorithm that detects when applications would benefit from more cores, and<br />
(2) a privileged component called the IOKernel that runs on a dedicated core,<br />
steering packets from the NIC and orchestrating core reallocations.<br />
When handling latency-sensitive applications,
such as memcached, we found that Shenango achieves tail latency and throughput comparable to ZygOS, a state-of-the-art,
kernel-bypass network stack, but can linearly trade latency-sensitive application throughput for batch processing application throughput,
vastly increasing CPU efficiency.</p>
</blockquote>
<h2 id="结论-5"><a class="header" href="#结论-5">Conclusion</a></h2>
<blockquote>
<p>This paper presented Shenango, a system that can simultaneously maintain CPU efficiency,
low tail latency, and high network throughput on machines handling multiple latency-sensitive and batch processing applications.<br />
Shenango achieves these benefits through its IOKernel, a dedicated core that integrates with networking to drive fine-grained core allocation adjustments between applications.<br />
The IOKernel makes use of a congestion detection algorithm that can react to application overload in μs timescales by tracking queuing backlog information for both packets and application threads.<br />
This design allows Shenango to significantly improve upon previous kernel bypass network stacks by recovering cycles wasted on busy spinning because of the provisioning gap between minimum and peak load.<br />
Finally, our per-application runtime makes these benefits more accessible to developers by providing high-level programming abstractions (e.g., lightweight threads and synchronous network sockets) at low overhead.</p>
</blockquote>
<p>This paper is interesting: starting from the requirements on a server OS (low latency, high CPU efficiency), the authors built the Shenango operating system.</p>
<ul>
<li>The IOKernel busy-spins on a dedicated core and reallocates cores across applications every 5 μs.</li>
<li>The runtime library handles communication between applications and the IOKernel and provides useful abstractions (e.g., lightweight threads).</li>
</ul>
<p>It seems to have even lighter-weight threads? And the project's source code is available!!</p>
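<p>The congestion-detection idea is easy to sketch: every interval the IOKernel looks at each application's queues, and if the oldest queued item has been waiting for a full interval (i.e. it was already there on the previous pass), the app gets another core. A hypothetical sketch; the names and data structures are mine, not Shenango's:</p>
<pre><code class="language-python">from collections import deque

INTERVAL_US = 5  # Shenango reallocates cores every 5 microseconds

class App:
    def __init__(self, name):
        self.name = name
        self.queue = deque()  # entries are (enqueue_time_us, item)
        self.cores = 1

def congested(app, now_us):
    # Congested iff the oldest item has waited across a whole interval.
    return bool(app.queue) and now_us - app.queue[0][0] >= INTERVAL_US

def iokernel_tick(apps, idle_cores, now_us):
    # One pass of the IOKernel: grant an idle core to each congested app.
    for app in apps:
        if congested(app, now_us) and idle_cores > 0:
            app.cores += 1
            idle_cores -= 1
    return idle_cores
</code></pre>
<p>The real IOKernel also steers packets from the NIC and reclaims cores when queues drain, which this sketch omits.</p>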
<div style="break-before: page; page-break-before: always;"></div><h1 id="tritonsort"><a class="header" href="#tritonsort">TritonSort</a></h1>
<blockquote>
<p><a href="./assets/TritonSort.pdf">TritonSort.pdf</a></p>
</blockquote>
<h2 id="tritonsort-a-balanced-large-scale-sorting-system"><a class="header" href="#tritonsort-a-balanced-large-scale-sorting-system">TritonSort: A Balanced Large-Scale Sorting System</a></h2>
<h3 id="摘要-8"><a class="header" href="#摘要-8">Abstract</a></h3>
<blockquote>
<p>We present TritonSort, a highly efficient, scalable sorting system. It is designed to process large datasets,
and has been evaluated against as much as 100 TB of input
data spread across 832 disks in 52 nodes at a rate of 0.916
TB/min.<br />
When evaluated against the annual Indy GraySort
sorting benchmark, TritonSort is 60% better in absolute
performance and has over six times the per-node efficiency
of the previous record holder.<br />
In this paper, we describe
the hardware and software architecture necessary to operate TritonSort at this level of efficiency.<br />
Through careful
management of system resources to ensure cross-resource
balance, we are able to sort data at approximately 80% of
the disks’ aggregate sequential write speed.<br />
We believe the work holds a number of lessons for balanced system design and for scale-out architectures in general.<br />
While many interesting systems are able to scale linearly with additional servers, per-server performance can
lag behind per-server capacity by more than an order of
magnitude.<br />
Bridging the gap between high scalability and
high performance would enable either significantly cheaper
systems that are able to do the same work or provide the
ability to address significantly larger problem sets with the
same infrastructure.</p>
</blockquote>
<div style="break-before: page; page-break-before: always;"></div><h1 id="profiling-a-warehouse-scale-computer"><a class="header" href="#profiling-a-warehouse-scale-computer">Profiling a warehouse-scale computer</a></h1>
<blockquote>
<p><a href="./assets/kanev_profiling.pdf">kanev_profiling.pdf</a></p>
</blockquote>
<h2 id="摘要-9"><a class="header" href="#摘要-9">Abstract</a></h2>
<blockquote>
<p>With the increasing prevalence of warehouse-scale (WSC)
and cloud computing, understanding the interactions of server
applications with the underlying microarchitecture becomes
ever more important in order to extract maximum performance
out of server hardware.<br />
To aid such understanding, this paper
presents a detailed microarchitectural analysis of live datacenter jobs, measured on more than 20,000 Google machines
over a three year period, and comprising thousands of different applications.<br />
We first find that WSC workloads are extremely diverse,
breeding the need for architectures that can tolerate application variability without performance loss.<br />
However, some
patterns emerge, offering opportunities for co-optimization
of hardware and software.<br />
For example, we identify common building blocks in the lower levels of the software stack.<br />
This “datacenter tax” can comprise nearly 30% of cycles
across jobs running in the fleet, which makes its constituents
prime candidates for hardware specialization in future server
systems-on-chips.<br />
We also uncover opportunities for classic
microarchitectural optimizations for server processors, especially in the cache hierarchy. Typical workloads place significant stress on instruction caches and prefer memory latency
over bandwidth.<br />
They also stall cores often, but compute heavily in bursts. These observations motivate several interesting
directions for future warehouse-scale computers.</p>
</blockquote>
<p>By profiling the characteristics of warehouse-scale machines, the paper projects where CPU design should go next.</p>
<h2 id="结论-6"><a class="header" href="#结论-6">Conclusion</a></h2>
<blockquote>
<p>To better understand datacenter software performance properties, we profiled a warehouse-scale computer over a period of
several years.<br />
In this paper, we showed detailed microarchitectural measurements spanning tens of thousands of machines,
running thousands of different applications, while executing
the requests of billions of users.<br />
These workloads demonstrate significant diversity, both in
terms of the applications themselves, and within each individual one. By profiling across binaries, we found common
low-level functions (“datacenter tax”), which show potential
for specialized hardware in a future server SoC. Finally, at the
microarchitectural level, we identified a common signature
for WSC applications – low IPC, large instruction footprints,
bimodal ILP and a preference for latency over bandwidth –
which should influence future processor designs for the datacenter.<br />
These observations motivate several interesting directions for future warehouse-scale computers.<br />
The table below
briefly summarizes our findings and potential implications for
architecture design.</p>
</blockquote>
<p><img src="./assets/profilling_1.png" alt="figure_1" /></p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="the-design-philosophy-of-the-darpa-internet-protocols"><a class="header" href="#the-design-philosophy-of-the-darpa-internet-protocols">The Design Philosophy of The DARPA Internet Protocols</a></h1>
<h1 id="abstract"><a class="header" href="#abstract">Abstract</a></h1>
<p>The Internet protocol suite, TCP/IP, was first proposed fifteen years ago. It was developed by the Defense Advanced Research Projects Agency (DARPA), and has been used widely in military and commercial systems. While there have been papers and specifications that describe how the protocols work, it is sometimes difficult to deduce from these why the protocol is as it is. For example, the Internet protocol is based on a connectionless or datagram mode of service. The motivation for this has been greatly misunderstood. This paper attempts to capture some of the early reasoning which shaped the Internet protocols.</p>
<h1 id="introduction"><a class="header" href="#introduction">Introduction</a></h1>
<p>For the last 15 years [1], the Advanced Research Projects Agency of the U.S. Department of Defense has been developing a suite of protocols for packet switched networking. These protocols, which include the Internet Protocol (IP), and the Transmission Control Protocol (TCP), are now U.S. Department of Defense standards for internetworking, and are in wide use in the commercial networking environment. The ideas developed in this effort have also influenced other protocol suites, most importantly the connectionless configuration of the ISO protocols [2,3,4].</p>
<p>While specific information on the DOD protocols is fairly generally available [5,6,7], it is sometimes difficult to determine the motivation and reasoning which led to the design.</p>
<p>In fact, the design philosophy has evolved considerably from the first proposal to the current standards. For example, the idea of the datagram, or connectionless service, does not receive particular emphasis in the first paper, but has come to be the defining characteristic of the protocol. Another example is the layering of the architecture into the IP and TCP layers. This seems basic to the design, but was also not a part of the original proposal. These changes in the Internet design arose through the repeated pattern of implementation and testing that occurred before the standards were set.</p>
<p>The Internet architecture is still evolving. Sometimes a new extension challenges one of the design principles, but in any case an understanding of the history of the design provides a necessary context for current design extensions. The connectionless configuration of ISO protocols has also been colored by the history of the Internet suite, so an understanding of the Internet design philosophy may be helpful to those working with ISO.</p>
<p>This paper catalogs one view of the original objectives of the Internet architecture, and discusses the relation between these goals and the important features of the protocols.</p>
<h1 id="fundamental-goal"><a class="header" href="#fundamental-goal">Fundamental Goal</a></h1>
<p>The top level goal for the DARPA Internet Architecture was to develop an effective technique for multiplexed utilization of existing interconnected networks. Some elaboration is appropriate to make clear the meaning of that goal.</p>
<p>The components of the Internet were networks, which were to be interconnected to provide some larger service. The original goal was to connect together the original ARPANET[8] with the ARPA packet radio network[9,10], in order to give users on the packet radio network access to the large service machines on the ARPANET. At
the time it was assumed that there would be other sorts of networks to interconnect, although the local area network had not yet emerged.</p>
<p>An alternative to interconnecting existing networks would have been to design a unified system which incorporated a variety of different transmission media, a multi-media network.</p>
<blockquote>
<p>Perhaps “multi-media” was not well-defined in 1988. It now has a different meaning, of course.</p>
</blockquote>
<p>While this might have permitted a higher degree of integration, and thus better performance, it was felt that it was necessary to incorporate the then existing network architectures if Internet was to be useful in a practical sense. Further, networks represent administrative boundaries of control, and it was an ambition of this project to come to grips with the problem of integrating a number of separately administrated entities into a common utility.</p>
<p>The technique selected for multiplexing was packet switching.</p>
<p>An alternative such as circuit switching could have been considered, but the applications being supported, such as remote login, were naturally served by the packet switching paradigm, and the networks which were to be integrated together in this project were packet switching networks. So packet switching was accepted as a fundamental component of the Internet architecture. The final aspect of this
fundamental goal was the assumption of the particular technique for interconnecting these networks. Since the technique of store and forward packet switching, as demonstrated in the previous DARPA project, the ARPANET, was well understood, the top level assumption was that networks would be interconnected by a layer of Internet packet switches, which were called gateways.</p>
<p>From these assumptions comes the fundamental structure of the Internet: a packet switched communications facility in which a number of distinguishable networks are connected together using packet communications processors called gateways which implement a store and forward packet forwarding algorithm.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="the-design-philosophy-of-the-darpa-internet-protocols-1"><a class="header" href="#the-design-philosophy-of-the-darpa-internet-protocols-1">The Design Philosophy of The DARPA Internet Protocols</a></h1>
<h2 id="second-level-goals"><a class="header" href="#second-level-goals">Second Level Goals</a></h2>
<p>The top level goal stated in the previous section contains the word &quot;effective,&quot; without offering any definition of what an effective interconnection must achieve. The following list summarizes a more detailed set of goals which were established for the Internet architecture.</p>
<ol>
<li>Internet communication must continue despite loss of networks or gateways.</li>
<li>The Internet must support multiple types of communications service.</li>
<li>The Internet architecture must accommodate a variety of networks.</li>
<li>The Internet architecture must permit distributed management of its
resources.</li>
<li>The Internet architecture must be cost effective.</li>
<li>The Internet architecture must permit host attachment with a low level of
effort.</li>
<li>The resources used in the Internet architecture must be accountable.</li>
</ol>
<p>This set of goals might seem to be nothing more than a checklist of all the desirable network features. It is important to understand that these goals are in order of importance, and an entirely different network architecture would result if the order were changed. For example, since this network was designed to operate in a military context, which implied the possibility of a hostile environment, survivability was put as a first goal, and accountability as a last goal. During wartime, one is less concerned with detailed accounting of resources used than with mustering whatever resources are available and rapidly deploying them in an operational
manner. While the architects of the Internet were mindful of accountability, the problem received very little attention during the early stages of the design, and is only now being considered. An architecture primarily for commercial deployment would clearly place these goals at the opposite end of the list.</p>
<p>Similarly, the goal that the architecture be cost effective is clearly on the list, but below certain other goals, such as distributed management, or support of a wide variety of networks. Other protocol suites, including some of the more popular commercial architectures, have been optimized to a particular kind of network, for example a long haul store and forward network built of medium speed telephone lines, and deliver a very cost effective solution in this context, in exchange for dealing somewhat poorly with other kinds of nets, such as local area nets.</p>
<p>The reader should consider carefully the above list of goals, and recognize that this is not a &quot;motherhood&quot; list, but a set of priorities which strongly colored the design decisions within the Internet architecture. The following sections discuss the relationship between this list and the features of the Internet.</p>
<h2 id="survivability-in-the-face-of-failure"><a class="header" href="#survivability-in-the-face-of-failure">Survivability in the Face of Failure</a></h2>
<p>The most important goal on the list is that the Internet should continue to supply communications service, even though networks and gateways are failing. In particular, this goal was interpreted to mean that if two entities are communicating over the Internet and some failure causes the Internet to be temporarily disrupted and reconfigured to reconstitute the service, then the entities communicating should be able to continue without having to reestablish or reset the high level state of their conversation. More concretely, at the service interface of the transport layer, this architecture provides no facility to communicate to the client of the transport service that the synchronization between the sender and the receiver may have been lost. It was an assumption in this architecture that synchronization would never be lost unless there was no physical path over which any sort of communication could be achieved. In other words, at the top of transport, there is only one failure, and it is total partition. The architecture was to mask completely any transient failure.</p>
<p>To achieve this goal, the state information which describes the on-going conversation must be protected. Specific examples of state information would be the number of packets transmitted, the number of packets acknowledged, or the number of outstanding flow control permissions. If the lower layers of the architecture lose this information, they will not be able to tell if data has been lost, and the application layer will have to cope with the loss of synchrony. This architecture insisted that this disruption not occur, which meant that the state information must be protected from loss.</p>
<p>In some network architectures, this state is stored in the intermediate packet switching nodes of the network. In this case, to protect the information from loss, it must be replicated. Because of the distributed nature of the replication, algorithms to ensure robust replication are themselves difficult to build, and few networks with distributed state information provide any sort of protection against failure. The alternative, which this architecture chose, is to take this information and gather it at the endpoint of the net, at the entity which is utilizing the service of the network. I call this approach to reliability &quot;fate-sharing.&quot; The fate-sharing model suggests that it is acceptable to lose the state information associated with an entity if, at the same time, the entity itself is lost. Specifically, information about transport level synchronization is stored in the host which is attached to the net and using its communication service.</p>
<p>There are two important advantages to fate-sharing over replication. First, fate- sharing protects against any number of intermediate failures, whereas replication can only protect against a certain number (less than the number of replicated copies). Second, fate-sharing is much easier to engineer than replication.</p>
<p>There are two consequences to the fate-sharing approach to survivability. First, the intermediate packet switching nodes, or gateways, must not have any essential state information about on-going connections. Instead, they are stateless packet switches, a class of network design sometimes called a &quot;datagram&quot; network. Secondly, rather more trust is placed in the host machine than in an architecture where the network ensures the reliable delivery of data. If the host resident algorithms that ensure the
sequencing and acknowledgment of data fail, applications on that machine are prevented from operation.</p>
<p>Despite the fact that survivability is the first goal in the list, it is still second to the top level goal of interconnection of existing networks. A more survivable technology might have resulted from a single multimedia network design. For example, the Internet makes very weak assumptions about the ability of a network to report that it has failed. Internet is thus forced to detect network failures using Internet level mechanisms, with the potential for slower and less specific error detection.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="the-design-philosophy-of-the-darpa-internet-protocols-2"><a class="header" href="#the-design-philosophy-of-the-darpa-internet-protocols-2">The Design Philosophy of The DARPA Internet Protocols</a></h1>
<h2 id="types-of-service"><a class="header" href="#types-of-service">Types of Service</a></h2>
<p>The second goal of the Internet architecture is that it should support, at the transport service level, a variety of types of service. Different types of service are distinguished by differing requirements for such things as speed, latency and reliability. The traditional type of service is the bidirectional reliable delivery of data. This service, which is sometimes called a &quot;virtual circuit&quot; service, is appropriate for such applications as remote login or file transfer. It was the first service provided in the Internet architecture, using the Transmission Control Protocol (TCP)[11]. It was early recognized that even this service had multiple variants, because remote login required a service with low delay in delivery, but low requirements for bandwidth, while file transfer was less concerned with delay, but very concerned with high throughput. TCP attempted to provide both these types of service.</p>
<p>The initial concept of TCP was that it could be general enough to support any needed type of service. However, as the full range of needed services became clear, it seemed too difficult to build support for all of them into one protocol.</p>
<p>The first example of a service outside the range of TCP was support for XNET[12], the cross-Internet debugger. TCP did not seem a suitable transport for XNET for several reasons. First, a debugger protocol should not be reliable. This conclusion may seem odd, but under conditions of stress or failure (which may be exactly when a debugger is needed) asking for reliable communications may prevent any communications at all. It is much better to build a service which can deal with whatever gets through, rather than insisting that every byte sent be delivered in order. Second, if TCP is general enough to deal with a broad range of clients, it is presumably somewhat complex. Again, it seemed wrong to expect support for this complexity in a debugging environment, which may lack even basic services expected in an operating system (e.g. support for timers.) So XNET was designed to run directly on top of the datagram service provided by Internet.</p>
<p>Another service which did not fit TCP was real time delivery of digitized speech, which was needed to support the teleconferencing aspect of command and control applications. In real time digital speech, the primary requirement is not a reliable service, but a service which minimizes and smooths the delay in the delivery of packets. The application layer is digitizing the analog speech, packetizing the resulting bits, and sending them out across the network on a regular basis. They must arrive at the receiver at a regular basis in order to be converted back to the analog signal. If packets do not arrive when expected, it is impossible to reassemble the signal in real time. A surprising observation about the control of variation in delay is that the most serious source of delay in networks is the mechanism to provide reliable delivery. A typical reliable transport protocol responds to a missing packet by requesting a retransmission and delaying the delivery of any subsequent packets until the lost packet has been retransmitted. It then delivers that packet and all remaining ones in sequence. The delay while this occurs can be many times the round trip delivery time of the net, and may completely disrupt the speech reassembly algorithm. In contrast, it is very easy to cope with an occasional missing packet. The missing speech can simply be replaced by a short period of silence, which in most cases does not impair the intelligibility of the speech to the listening human. If it does, high level error correction can occur, and the listener can ask the speaker to repeat the damaged phrase.</p>
<p>It was thus decided, fairly early in the development of the Internet architecture, that more than one transport service would be required, and the architecture must be prepared to simultaneously tolerate transports which wish to constrain reliability, delay, or bandwidth, at a minimum.</p>
<p>This goal caused TCP and IP, which originally had been a single protocol in the architecture, to be separated into two layers. TCP provided one particular type of service, the reliable sequenced data stream, while IP attempted to provide a basic building block out of which a variety of types of service could be built. This building block was the datagram, which had also been adopted to support survivability. Since the reliability associated with the delivery of a datagram was not guaranteed, but &quot;best effort,&quot; it was possible to build out of the datagram a service that was reliable (by acknowledging and retransmitting at a higher level), or a service which traded reliability for the primitive delay characteristics of the underlying network substrate. The User Datagram Protocol (UDP)[13] was created to provide an application-level interface to the basic datagram service of Internet.</p>
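<p>The layering can be illustrated with a toy sketch (the names and the loss model here are my own assumptions, not TCP itself): given only a best-effort datagram primitive that may silently drop data, a reliable in-order service is obtained by acknowledging and retransmitting at a higher level.</p>

```python
import random

def lossy_send(datagram, loss_rate, rng):
    """Best-effort delivery: return the datagram, or None if it was dropped."""
    return None if rng.random() < loss_rate else datagram

def reliable_send(data, loss_rate=0.3, rng=None):
    """Stop-and-wait reliability built on top of the lossy service:
    each datagram is retransmitted until it gets through."""
    rng = rng or random.Random(42)  # seeded for a repeatable illustration
    delivered, attempts = [], 0
    for seq, chunk in enumerate(data):
        while True:
            attempts += 1
            got = lossy_send((seq, chunk), loss_rate, rng)
            if got is not None:  # in a real protocol an ack would arrive here
                delivered.append(got[1])
                break
    return delivered, attempts

out, tries = reliable_send(["a", "b", "c", "d"])
print(out, tries)  # all four chunks arrive in order, at the cost of retries
```

<p>A delay-sensitive client, by contrast, would use the datagram primitive directly and tolerate the occasional loss, which is essentially the interface UDP exposes.</p>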
<p>The architecture did not wish to assume that the underlying networks themselves support multiple types of services, because this would violate the goal of using existing networks. Instead, the hope was that multiple types of service could be constructed out of the basic datagram building block using algorithms within the host and the gateway. For example, (although this is not done in most current implementations) it is possible to take datagrams which are associated with a controlled delay but unreliable service and place them at the head of the transmission queues unless their lifetime has expired, in which case they would be
discarded; while packets associated with reliable streams would be placed at the back of the queues, but never discarded, no matter how long they had been in the net.</p>
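<p>The queue discipline described above might be sketched as follows (hypothetical code, not an actual gateway implementation): low-delay datagrams go to the head of the transmission queue but are discarded once their lifetime expires, while reliable-stream packets go to the back and are never dropped.</p>

```python
import collections

def enqueue(q, pkt, now):
    """Place pkt on transmission queue q according to its service class."""
    if pkt["class"] == "low_delay":
        if now <= pkt["deadline"]:
            q.appendleft(pkt)   # head of the queue: minimize delay
        # else: lifetime expired, the datagram is silently discarded
    else:
        q.append(pkt)           # reliable stream: back of queue, never dropped

q = collections.deque()
enqueue(q, {"class": "reliable", "id": 1}, now=0)
enqueue(q, {"class": "low_delay", "id": 2, "deadline": 5}, now=3)  # kept
enqueue(q, {"class": "low_delay", "id": 3, "deadline": 5}, now=9)  # expired
print([p["id"] for p in q])  # -> [2, 1]
```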
<p>It proved more difficult than first hoped to provide multiple types of service without explicit support from the underlying networks. The most serious problem was that networks designed with one particular type of service in mind were not flexible enough to support other services. Most commonly, a network will have been designed under the assumption that it should deliver reliable service, and will inject delays as a part of producing reliable service, whether or not this reliability is desired. The interface behavior defined by X.25, for example, implies reliable delivery, and there is no way to turn this feature off. Therefore, although Internet operates successfully over X.25 networks it cannot deliver the desired variability of types of service in that context. Other networks which have an intrinsic datagram service are much more flexible in the type of service they will permit, but these networks are much less common, especially in the long-haul context.</p>
<div style="break-before: page; page-break-before: always;"></div>
<h2 id="varieties-of-networks"><a class="header" href="#varieties-of-networks">Varieties of Networks</a></h2>
<p>It was very important for the success of the Internet architecture that it be able to incorporate and utilize a wide variety of network technologies, including military
and commercial facilities. The Internet architecture has been very successful in meeting this goal: it is operated over a wide variety of networks, including long haul nets (the ARPANET itself and various X.25 networks), local area nets (Ethernet, ringnet, etc.), broadcast satellite nets (the DARPA Atlantic Satellite Network[14,15] operating at 64 kilobits per second and the DARPA Experimental Wideband Satellite Net[16] operating within the United States at 3 megabits per second), packet radio networks (the DARPA packet radio network, as well as an experimental British packet radio net and a network developed by amateur radio operators), a variety of serial links, ranging from 1200 bit per second asynchronous connections to T1 links, and a variety of other ad hoc facilities, including intercomputer busses and the transport service provided by the higher layers of other network suites, such as IBM's HASP.</p>
<p>The Internet architecture achieves this flexibility by making a minimum set of assumptions about the function which the net will provide. The basic assumption is that a network can transport a packet or datagram. The packet must be of reasonable size, perhaps 100 bytes minimum, and should be delivered with reasonable but not perfect reliability. The network must have some suitable form of addressing if it is more than a point to point link.</p>
<p>There are a number of services which are explicitly not assumed from the network. These include reliable or sequenced delivery, network level broadcast or multicast, priority ranking of transmitted packets, multiple types of service, and internal knowledge of failures, speeds, or delays. If these services had been required, then in order to accommodate a network within the Internet, it would be necessary either that the network support these services directly, or that the network interface software provide enhancements to simulate these services at the endpoint of the network. It was felt that this was an undesirable approach, because these services would have to be re-engineered and reimplemented for every single network and every single host interface to every network. By engineering these services at the transport layer, for example reliable delivery via TCP, the engineering must be done only once, and the implementation must be done only once for each host. After that, the implementation of interface software for a new network is usually very simple.</p>
<h2 id="other-goals"><a class="header" href="#other-goals">Other Goals</a></h2>
<p>The three goals discussed so far were those which had the most profound impact on the design of the architecture. The remaining goals, because they were lower in importance, were perhaps less effectively met, or not so completely engineered. The goal of permitting distributed management of the Internet has certainly been met in certain respects. For example, not all of the gateways in the Internet are implemented and managed by the same agency. There are several different management centers within the deployed Internet, each operating a subset of the gateways, and there is a two-tiered routing algorithm which permits gateways from different administrations to exchange routing tables, even though they do not completely trust each other, and a variety of private routing algorithms used among
the gateways in a single administration. Similarly, the various organizations which manage the gateways are not necessarily the same organizations that manage the networks to which the gateways are attached.</p>
<p>On the other hand, some of the most significant problems with the Internet today relate to lack of sufficient tools for distributed management, especially in the area of routing. In the large Internet being currently operated, routing decisions need to be constrained by policies for resource usage. Today this can be done only in a very limited way, which requires manual setting of tables. This is error-prone and at the same time not sufficiently powerful. The most important change in the Internet architecture over the next few years will probably be the development of a new generation of tools for management of resources in the context of multiple administrations.</p>
<p>It is clear that in certain circumstances, the Internet architecture does not produce as cost effective a utilization of expensive communication resources as a more tailored architecture would. The headers of Internet packets are fairly long (a typical header is 40 bytes), and if short packets are sent, this overhead is apparent. The worst case, of course, is the single character remote login packets, which carry 40 bytes of header and one byte of data. Actually, it is very difficult for any protocol suite to claim that these sorts of interchanges are carried out with reasonable efficiency. At the other extreme, large packets for file transfer, with perhaps 1,000 bytes of data, have an overhead for the header of only four percent.</p>
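<p>The overhead figures quoted above follow from simple arithmetic, computed here for concreteness:</p>

```python
def header_overhead(header_bytes, payload_bytes):
    """Fraction of each packet consumed by the header."""
    return header_bytes / (header_bytes + payload_bytes)

# Single-character remote login: 40 bytes of header, 1 byte of data.
print(f"{header_overhead(40, 1):.1%}")     # almost all overhead
# Large file-transfer packet: 40 bytes of header, 1,000 bytes of data.
print(f"{header_overhead(40, 1000):.1%}")  # roughly four percent
```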
<p>Another possible source of inefficiency is retransmission of lost packets. Since Internet does not insist that lost packets be recovered at the network level, it may be necessary to retransmit a lost packet from one end of the Internet to the other. This means that the retransmitted packet may cross several intervening nets a second time, whereas recovery at the network level would not generate this repeat traffic. This is an example of the tradeoff resulting from the decision, discussed above, of providing services from the end-points. The network interface code is much simpler, but the overall efficiency is potentially less. However, if the retransmission rate is low enough (for example, 1%) then the incremental cost is tolerable. As a rough rule
of thumb for networks incorporated into the architecture, a loss of one packet in a hundred is quite reasonable, but a loss of one packet in ten suggests that reliability enhancements be added to the network if that type of service is required.</p>
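<p>That rule of thumb is easy to check on the back of an envelope: with an end-to-end loss rate of p, each packet requires on average 1/(1-p) transmissions, so retransmission adds a fraction p/(1-p) of extra traffic.</p>

```python
def extra_traffic(loss_rate):
    """Fraction of additional end-to-end traffic due to retransmission,
    assuming independent losses and retransmission until delivery."""
    return loss_rate / (1 - loss_rate)

print(f"{extra_traffic(0.01):.1%}")  # 1 in 100 lost: about 1% extra, tolerable
print(f"{extra_traffic(0.10):.1%}")  # 1 in 10 lost: about 11% extra
```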
<p>The cost of attaching a host to the Internet is perhaps somewhat higher than in other architectures, because all of the mechanisms to provide the desired types of service, such as acknowledgments and retransmission strategies, must be implemented in the host rather than in the network. Initially, to programmers who were not familiar with protocol implementation, the effort of doing this seemed somewhat daunting. Implementers tried such things as moving the transport protocols to a front end processor, with the idea that the protocols would be implemented only once, rather than again for every type of host. However, this required the invention of a host-to-front-end protocol which some thought almost as complicated to implement as the original transport protocol. As experience with protocols increases, the anxieties associated with implementing a protocol suite within the host seem to be decreasing, and implementations are now available for a wide variety of machines, including personal computers and other machines with very limited computing resources.</p>
<p>A related problem arising from the use of host-resident mechanisms is that poor implementation of the mechanism may hurt the network as well as the host. This problem was tolerated, because the initial experiments involved a limited number of host implementations which could be controlled. However, as the use of Internet has grown, this problem has occasionally surfaced in a serious way. In this respect, the goal of robustness, which led to the method of fate-sharing, which led to host-resident algorithms, contributes to a loss of robustness if the host misbehaves.</p>
<p>The last goal was accountability. In fact, accounting was discussed in the first paper by Cerf and Kahn as an important function of the protocols and gateways. However,
at the present time, the Internet architecture contains few tools for accounting for packet flows. This problem is only now being studied, as the scope of the architecture is being expanded to include non-military consumers who are seriously concerned with understanding and monitoring the usage of the resources within the Internet.</p>
<div style="break-before: page; page-break-before: always;"></div>
<h2 id="architecture-and-implementation"><a class="header" href="#architecture-and-implementation">Architecture and Implementation</a></h2>
<p>The previous discussion clearly suggests that one of the goals of the Internet architecture was to provide wide flexibility in the service offered. Different transport protocols could be used to provide different types of service, and different networks could be incorporated. Put another way, the architecture tried very hard not to constrain the range of service which the Internet could be engineered to provide. This, in turn, means that to understand the service which can be offered by a particular implementation of an Internet, one must look not to the architecture, but to the actual engineering of the software within the particular hosts and gateways, and to the particular networks which have been incorporated. I will use the term &quot;realization&quot; to describe a particular set of networks, gateways and hosts which have been connected together in the context of the Internet architecture. Realizations can differ by orders of magnitude in the service which they offer. Realizations have been built out of 1200 bit per second phone lines, and out of networks with speeds greater than 1 megabit per second. Clearly, the throughput expectations which one can have of these realizations differ by orders of magnitude. Similarly, some Internet realizations have delays measured in tens of milliseconds, where others have delays measured in seconds. Certain applications such as real time speech work fundamentally differently across these two realizations. Some Internets have been engineered so that there is great redundancy in the gateways and paths. These Internets are survivable, because resources exist which can be reconfigured after failure. Other Internet realizations, to reduce cost, have single points of connectivity through the realization, so that a failure may partition the Internet into two halves.</p>
<p>The Internet architecture tolerates this variety of realization by design. However, it leaves the designer of a particular realization with a great deal of engineering to do. One of the major struggles of this architectural development was to understand how to give guidance to the designer of a realization, guidance which would relate the engineering of the realization to the types of service which would result. For example, the designer must answer the following sort of question. What sort of bandwidth must be provided in the underlying networks, if the overall service is to deliver a throughput of a certain rate? Given a certain model of possible failures within this realization, what sorts of redundancy ought to be engineered into the realization?</p>
<p>Most of the known network design aids did not seem helpful in answering these sorts of questions. Protocol verifiers, for example, assist in confirming that protocols meet specifications. However, these tools almost never deal with performance issues, which are essential to the idea of the type of service. Instead, they deal with the much more restricted idea of logical correctness of the protocol with respect to specification. While tools to verify logical correctness are useful, both at the specification and implementation stage, they do not help with the severe problems that often arise related to performance. A typical implementation experience is that even after logical correctness has been demonstrated, design faults are discovered that may cause a performance degradation of an order of magnitude. Exploration of this problem has led to the conclusion that the difficulty usually arises, not in the protocol itself, but in the operating system on which the protocol runs. This being the case, it is difficult to address the problem within the context of the architectural specification. However, we still strongly feel the need to give the implementer guidance. We continue to struggle with this problem today.</p>
<p>The other class of design aid is the simulator, which takes a particular realization and explores the service which it can deliver under a variety of loadings. No one has yet attempted to construct a simulator which takes into account the wide variability of the gateway implementation, the host implementation, and the network performance which one sees within possible Internet realizations. It is thus the case that the analysis of most Internet realizations is done on the back of an envelope. It is a comment on the goal structure of the Internet architecture that a back of the envelope analysis, if done by a sufficiently knowledgeable person, is usually sufficient. The designer of a particular Internet realization is usually less concerned with obtaining the last five percent possible in line utilization than knowing whether the desired type of service can be achieved at all given the resources at hand at the moment.</p>
<p>The relationship between architecture and performance is an extremely challenging one. The designers of the Internet architecture felt very strongly that it was a serious mistake to attend only to logical correctness and ignore the issue of performance. However, they experienced great difficulty in formalizing any aspect of performance constraint within the architecture. These difficulties arose both because the goal of the architecture was not to constrain performance, but to permit variability, and secondly (and perhaps more fundamentally), because there seemed to be no useful formal tools for describing performance.</p>
<p>This problem was particularly aggravating because the goal of the Internet project was to produce specification documents which were to become military standards. It is a well known problem with government contracting that one cannot expect a contractor to meet any criterion which is not a part of the procurement standard. Since the Internet project was concerned about performance, it was mandatory that performance requirements be put into the procurement specification. It was trivial to invent specifications which constrained the performance, for example to specify that the implementation must be capable of passing 1,000 packets a second. However, this sort of constraint could not be part of the architecture, and it was therefore up to the individual performing the procurement to recognize that these performance constraints must be added to the specification, and to specify them properly to achieve a realization which provides the required types of service. We do not have a good idea how to offer guidance in the architecture for the person performing this task.</p>
<div style="break-before: page; page-break-before: always;"></div>
<h2 id="datagrams"><a class="header" href="#datagrams">Datagrams</a></h2>
<p>The fundamental architectural feature of the Internet is the use of datagrams as the entity which is transported across the underlying networks. As this paper has suggested, there are several reasons why datagrams are important within the architecture. First, they eliminate the need for connection state within the intermediate switching nodes, which means that the Internet can be reconstituted after a failure without concern about state. Secondly, the datagram provides a basic building block out of which a variety of types of service can be implemented. In contrast to the virtual circuit, which usually implies a fixed type of service, the datagram provides a more elemental service which the endpoints can combine as appropriate to build the type of service needed. Third, the datagram represents the minimum network service assumption, which has permitted a wide variety of networks to be incorporated into various Internet realizations. The decision to use the datagram was an extremely successful one, which allowed the Internet to meet its most important goals very successfully.</p>
<p>There is a mistaken assumption often associated with datagrams, which is that the motivation for datagrams is the support of a higher level service which is essentially equivalent to the datagram. In other words, it has sometimes been suggested that the datagram is provided because the transport service which the application requires is a datagram service. In fact, this is seldom the case. While some applications in the Internet, such as simple queries of date servers or name servers, use an access method based on an unreliable datagram, most services within the Internet would like a more sophisticated transport model than a simple datagram. Some services would like the reliability enhanced, some would like the delay smoothed and buffered, but almost all have some expectation more complex than a datagram. It is important to understand that the role of the datagram in this respect is as a building block, and not as a service in itself.</p>
<h2 id="tcp"><a class="header" href="#tcp">TCP</a></h2>
<p>There were several interesting and controversial design decisions in the development of TCP, and TCP itself went through several major versions before it became a reasonably stable standard. Some of these design decisions, such as window management and the nature of the port address structure, are discussed in a series of implementation notes published as part of the TCP protocol handbook [17,18]. But again the motivation for the decision is sometimes lacking. In this section, I attempt to capture some of the early reasoning that went into parts of TCP. This section is of necessity incomplete; a complete review of the history of TCP itself would require another paper of this length.</p>
<p>The original ARPANET host-to-host protocol provided flow control based on both bytes and packets. This seemed overly complex, and the designers of TCP felt that only one form of regulation would be sufficient. The choice was to regulate the delivery of bytes, rather than packets. Flow control and acknowledgment in TCP are thus based on byte number rather than packet number. Indeed, in TCP there is no significance to the packetization of the data.</p>
<p>This decision was motivated by several considerations, some of which became irrelevant and others of which were more important than anticipated. One reason to acknowledge bytes was to permit the insertion of control information into the sequence space of the bytes, so that control as well as data could be acknowledged. That use of the sequence space was dropped, in favor of ad hoc techniques for dealing with each control message. While the original idea has appealing generality, it caused complexity in practice.</p>
<p>A second reason for the byte stream was to permit the TCP packet to be broken up into smaller packets if necessary in order to fit through a net with a small packet size. But this function was moved to the IP layer when IP was split from TCP, and IP was forced to invent a different method of fragmentation.</p>
<p>A third reason for acknowledging bytes rather than packets was to permit a number of small packets to be gathered together into one larger packet in the sending host if retransmission of the data was necessary. It was not clear if this advantage would be important; it turned out to be critical. Systems such as UNIX which have an internal communication model based on single character interactions often send many packets with one byte of data in them. (One might argue from a network perspective that this behavior is silly, but it was a reality, and a necessity for interactive remote login.) It was often observed that such a host could produce a flood of packets with one byte of data, which would arrive much faster than a slow host could process them. The result is lost packets and retransmission.</p>
<p>If the retransmission was of the original packets, the same problem would repeat on every retransmission, with a performance impact so intolerable as to prevent operation. But since the bytes were gathered into one packet for retransmission, the retransmission occurred in a much more effective way which permitted practical operation.</p>
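<p>The effect of byte-numbered acknowledgment on retransmission can be sketched as follows (my own illustration; the segment size is an assumption): because TCP attaches no significance to the original packetization, all unacknowledged bytes can be gathered into as few packets as the maximum segment size allows.</p>

```python
MSS = 512  # assumed maximum segment size, in bytes

def retransmit(stream, acked_upto):
    """stream: the byte stream sent so far; acked_upto: highest byte acked.
    Return the retransmission as (starting sequence number, bytes) segments,
    coalescing unacknowledged bytes regardless of how they were first sent."""
    unacked = stream[acked_upto:]
    return [(acked_upto + i, unacked[i:i + MSS])
            for i in range(0, len(unacked), MSS)]

# One hundred single-byte packets were sent; nothing was acknowledged.
segs = retransmit(b"x" * 100, acked_upto=0)
print(len(segs))  # -> 1: one packet retransmitted instead of one hundred
```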
<p>On the other hand, the acknowledgment of bytes could be seen as creating this problem in the first place. If the basis of flow control had been packets rather than bytes, then this flood might never have occurred. Control at the packet level has the effect, however, of providing a severe limit on the throughput if small packets are sent. If the receiving host specifies a number of packets to receive, without any knowledge of the number of bytes in each, the actual amount of data received could vary by a factor of 1000, depending on whether the sending host puts one or one thousand bytes in each packet.</p>
<p>In retrospect, the correct design decision may have been that if TCP is to provide effective support of a variety of services, both packets and bytes must be regulated, as was done in the original ARPANET protocols.</p>
<p>Another design decision related to the byte stream was the End-Of-Letter flag, or EOL. This has now vanished from the protocol, replaced by the push flag, or PSH. The original idea of EOL was to break the byte stream into records. It was implemented by putting data from separate records into separate packets, which was not compatible with the idea of combining packets on retransmission. So the semantics of EOL was changed to a weaker form, meaning only that the data up to this point in the stream was one or more complete application-level elements, which should occasion a flush of any internal buffering in TCP or the network. By saying &quot;one or more&quot; rather than &quot;exactly one&quot;, it became possible to combine several together and preserve the goal of compacting data in reassembly. But the weaker semantics meant that various applications had to invent an ad hoc mechanism for delimiting records on top of the data stream.</p>
<p>In this evolution of EOL semantics, there was a little-known intermediate form, which generated great debate. Depending on the buffering strategy of the host, the byte stream model of TCP can cause great problems in one improbable case. Consider a host in which the incoming data is put in a sequence of fixed size buffers. A buffer is returned to the user either when it is full, or an EOL is received. Now consider the case of the arrival of an out-of-order packet which is so far out of order as to be beyond the current buffer. Now further consider that after receiving this out-of-order packet, a packet with an EOL causes the current buffer to be returned to the user only partially full. This particular sequence of actions has the effect of causing the out of order data in the next buffer to be in the wrong place, because of the empty bytes in the buffer returned to the user. Coping with this generated bookkeeping problems in the host which seemed unnecessary.</p>
<p>To cope with this it was proposed that the EOL should &quot;use up&quot; all the sequence space up to the next value which was zero mod the buffer size. In other words, it was proposed that EOL should be a tool for mapping the byte stream to the buffer management of the host. This idea was not well received at the time, as it seemed much too ad hoc, and only one host seemed to have this problem<sup>3</sup>. In retrospect, it may have been the correct idea to incorporate into TCP some means of relating the sequence space and the buffer management algorithm of the host. At the time, the designers simply lacked the insight to see how that might be done in a sufficiently general manner.</p>
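<p>The proposed semantics reduce to a one-line computation (the buffer size and the function name here are illustrative assumptions): an EOL at byte number s consumes sequence space up to the next multiple of the receiver's buffer size, so data after the EOL always begins on a buffer boundary.</p>

```python
BUFSIZE = 1024  # assumed fixed receive-buffer size, in bytes

def seq_after_eol(seq):
    """Sequence number at which the stream resumes after an EOL at byte seq:
    rounded up to the next value that is zero mod the buffer size."""
    return ((seq // BUFSIZE) + 1) * BUFSIZE

print(seq_after_eol(300))   # -> 1024: the partially full buffer is skipped
print(seq_after_eol(1500))  # -> 2048
```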
<h2 id="conclusion"><a class="header" href="#conclusion">Conclusion</a></h2>
<p>In the context of its priorities, the Internet architecture has been very successful. The protocols are widely used in the commercial and military environment, and have spawned a number of similar architectures. At the same time, its success has made clear that in certain situations, the priorities of the designers do not match the needs of the actual users. More attention to such things as accounting, resource management and operation of regions with separate administrations is needed.</p>
<p>While the datagram has served very well in solving the most important goals of the Internet, it has not served so well when we attempt to address some of the goals which were further down the priority list. For example, the goals of resource management and accountability have proved difficult to achieve in the context of datagrams. As the previous section discussed, most datagrams are a part of some sequence of packets from source to destination, rather than isolated units at the application level. However, the gateway cannot directly see the existence of this sequence, because it is forced to deal with each packet in isolation. Therefore, resource management decisions or accounting must be done on each packet separately. Imposing the datagram model on the Internet layer has deprived that layer of an important source of information which it could use in achieving these goals.</p>
<p>This suggests that there may be a better building block than the datagram for the next generation of architecture. The general characteristic of this building block is that it would identify a sequence of packets traveling from the source to the destination, without assuming any particular type of service associated with that sequence. I have used the word &quot;flow&quot; to characterize this building block. It would be necessary for the gateways to have flow state in order to remember the nature of the flows which are passing through them, but the state information would not be critical in maintaining the desired type of service associated with the flow. Instead, that type of service would be enforced by the end points, which would periodically send messages to ensure that the proper type of service was being associated with the flow. In this way, the state information associated with the flow could be lost in a crash without permanent disruption of the service features being used. I call this concept &quot;soft state,&quot; and it may very well permit us to achieve our primary goals of survivability and flexibility, while at the same time doing a better job of dealing with the issue of resource management and accountability. Exploration of alternative building blocks constitutes one of the current directions for research within the DARPA Internet program.</p>
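<p>The soft-state idea can be sketched with a logical clock (all names and the timeout value here are hypothetical): gateways cache per-flow entries installed by periodic refresh messages from the end points, and entries that are not refreshed simply time out, so any state lost in a crash is rebuilt by the next refresh rather than by an explicit recovery protocol.</p>

```python
TIMEOUT = 3  # refresh budget, in logical time units

class Gateway:
    def __init__(self):
        self.flows = {}                      # flow id -> time of last refresh

    def refresh(self, flow, now):
        self.flows[flow] = now               # end point re-asserts the flow

    def expire(self, now):
        # Unrefreshed entries age out; no explicit teardown is ever needed.
        self.flows = {f: t for f, t in self.flows.items()
                      if now - t < TIMEOUT}

gw = Gateway()
gw.refresh("speech-1", now=0)
gw.expire(now=2); print("speech-1" in gw.flows)  # -> True (still fresh)
gw.expire(now=5); print("speech-1" in gw.flows)  # -> False (timed out)
gw.refresh("speech-1", now=6)                    # the end point rebuilds it
```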
<h2 id="acknowledgments----a-historical-perspective"><a class="header" href="#acknowledgments----a-historical-perspective">Acknowledgments -- A Historical Perspective</a></h2>
<p>It would be impossible to acknowledge all the contributors to the Internet project; there have literally been hundreds over the 15 years of development: designers, implementers, writers and critics. Indeed, an important topic, which probably deserves a paper in itself, is the process by which this project was managed. The participants came from universities, research laboratories and corporations, and they united (to some extent) to achieve this common goal.</p>
<p>The original vision for TCP came from Robert Kahn and Vinton Cerf, who saw very clearly, back in 1973, how a protocol with suitable features might be the glue that would pull together the various emerging network technologies. From their position at DARPA, they guided the project in its early days to the point where TCP and IP became standards for the DOD.</p>
<p>The author of this paper joined the project in the mid-70s, and took over architectural responsibility for TCP/IP in 1981. He would like to thank all those who have worked with him, and particularly those who took the time to reconstruct some of the lost history in this paper.</p>
<div style="break-before: page; page-break-before: always;"></div><h1 id="最后"><a class="header" href="#最后">Final Words</a></h1>
<p>If you would like to contribute, feel free to submit a PR 👉 <a href="https://gitee.com/asueeer/papers_book">https://gitee.com/asueeer/papers_book</a></p>
<hr />
<p>If you have any new ideas, feel free to contact asueeer@163.com.</p>
<hr />
<p>If you find this document useful, you are welcome to buy me a coffee.</p>
<p><img width="30%" src="assets/end_qr_code.png" alt="QR CODE" /></p>

                    </main>

                    <nav class="nav-wrapper" aria-label="Page navigation">
                        <!-- Mobile navigation buttons -->


                        <div style="clear: both"></div>
                    </nav>
                </div>
            </div>

            <nav class="nav-wide-wrapper" aria-label="Page navigation">

            </nav>

        </div>




        <script type="text/javascript">
            window.playground_copyable = true;
        </script>


        <script src="elasticlunr.min.js" type="text/javascript" charset="utf-8"></script>
        <script src="mark.min.js" type="text/javascript" charset="utf-8"></script>
        <script src="searcher.js" type="text/javascript" charset="utf-8"></script>

        <script src="clipboard.min.js" type="text/javascript" charset="utf-8"></script>
        <script src="highlight.js" type="text/javascript" charset="utf-8"></script>
        <script src="book.js" type="text/javascript" charset="utf-8"></script>

        <!-- Custom JS scripts -->

        <script type="text/javascript">
        window.addEventListener('load', function() {
            MathJax.Hub.Register.StartupHook('End', function() {
                window.setTimeout(window.print, 100);
            });
        });
        </script>

    </body>
</html>
