<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  
  
  
  <link rel="shortcut icon" href="../../img/favicon.ico">
  <title>Coroutines - COROS Documentation</title>
  <link rel="stylesheet" href="../../css/theme.css" />
  <link rel="stylesheet" href="../../css/theme_extra.css" />
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/github.min.css" />
  
  <script>
    // Current page data
    var mkdocs_page_name = "Coroutines";
    var mkdocs_page_input_path = "guides/01_coroutines.md";
    var mkdocs_page_url = null;
  </script>
  
  <script src="../../js/jquery-2.1.1.min.js" defer></script>
  <script src="../../js/modernizr-2.8.3.min.js" defer></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/highlight.min.js"></script>
  <script>hljs.initHighlightingOnLoad();</script> 
  
</head>

<body class="wy-body-for-nav" role="document">

  <div class="wy-grid-for-nav">

    
    <nav data-toggle="wy-nav-shift" class="wy-nav-side stickynav">
    <div class="wy-side-scroll">
      <div class="wy-side-nav-search">
        <a href="../.." class="icon icon-home"> COROS Documentation</a>
        <div role="search">
  <form id ="rtd-search-form" class="wy-form" action="../../search.html" method="get">
    <input type="text" name="q" placeholder="Search docs" title="Type search term here" />
  </form>
</div>
      </div>

      <div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
                <ul>
                    <li class="toctree-l1"><a class="reference internal" href="../..">Home</a>
                    </li>
                </ul>
                <p class="caption"><span class="caption-text">About</span></p>
                <ul>
                    <li class="toctree-l1"><a class="reference internal" href="../../COPYRIGHT/">Copyright</a>
                    </li>
                    <li class="toctree-l1"><a class="reference internal" href="../../LICENSE/">License</a>
                    </li>
                </ul>
                <p class="caption"><span class="caption-text">The Beginner's Guide to:</span></p>
                <ul>
                    <li class="toctree-l1"><a class="reference internal" href="../../tutorials/01_getting_started/">Getting Started</a>
                    </li>
                </ul>
                <p class="caption"><span class="caption-text">The Hitchker's Guide to:</span></p>
                <ul class="current">
                    <li class="toctree-l1 current"><a class="reference internal current" href="./">Coroutines</a>
    <ul class="current">
    <li class="toctree-l2"><a class="reference internal" href="#so-what-is-a-coroutine">So what is a coroutine?</a>
    </li>
    <li class="toctree-l2"><a class="reference internal" href="#coroutines-in-coros">Coroutines in COROS</a>
        <ul>
    <li class="toctree-l3"><a class="reference internal" href="#data-types">Data types</a>
    </li>
    <li class="toctree-l3"><a class="reference internal" href="#api">API</a>
    </li>
    <li class="toctree-l3"><a class="reference internal" href="#babys-first-coroutine">Baby's first coroutine</a>
    </li>
        </ul>
    </li>
    <li class="toctree-l2"><a class="reference internal" href="#coroutine-life-cycles">Coroutine life cycles</a>
    </li>
    <li class="toctree-l2"><a class="reference internal" href="#advanced-coroutine-usage">Advanced coroutine usage</a>
        <ul>
    <li class="toctree-l3"><a class="reference internal" href="#punctuated-execution">Punctuated execution</a>
    </li>
    <li class="toctree-l3"><a class="reference internal" href="#a-15-pass-assembler">A 1.5-pass assembler</a>
    </li>
    <li class="toctree-l3"><a class="reference internal" href="#producer-consumer-patterns">Producer-Consumer patterns</a>
    </li>
    <li class="toctree-l3"><a class="reference internal" href="#wrapping-resumption">Wrapping resumption</a>
    </li>
        </ul>
    </li>
    <li class="toctree-l2"><a class="reference internal" href="#tldr-summary">TL/DR Summary</a>
    </li>
    </ul>
                    </li>
                    <li class="toctree-l1"><a class="reference internal" href="../02_event_queues/">Event Queues</a>
                    </li>
                    <li class="toctree-l1"><a class="reference internal" href="../03_components/">Components</a>
                    </li>
                </ul>
                <p class="caption"><span class="caption-text">The Technocrat's Guide to:</span></p>
                <ul>
                    <li class="toctree-l1"><a class="reference internal" href="../../reference/01_co/">co.h</a>
                    </li>
                    <li class="toctree-l1"><a class="reference internal" href="../../reference/02_evq/">evq.h</a>
                    </li>
                    <li class="toctree-l1"><a class="reference internal" href="../../reference/03_comp/">comp.h</a>
                    </li>
                </ul>
                <p class="caption"><span class="caption-text">The Technomancer's Guide to:</span></p>
                <ul>
                    <li class="toctree-l1"><a class="reference internal" href="../../architecture/01_coroutines/">Coroutines</a>
                    </li>
                    <li class="toctree-l1"><a class="reference internal" href="../../architecture/02_event_queues/">Event Queues</a>
                    </li>
                    <li class="toctree-l1"><a class="reference internal" href="../../architecture/03_porting/">Porting</a>
                    </li>
                </ul>
      </div>
    </div>
    </nav>

    <section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">

      
      <nav class="wy-nav-top" role="navigation" aria-label="top navigation">
        <i data-toggle="wy-nav-top" class="fa fa-bars"></i>
        <a href="../..">COROS Documentation</a>
      </nav>

      
      <div class="wy-nav-content">
        <div class="rst-content">
          <div role="navigation" aria-label="breadcrumbs navigation">
  <ul class="wy-breadcrumbs">
    <li><a href="../..">Docs</a> &raquo;</li>
    
      
        
          <li>The Hitchhiker's Guide to: &raquo;</li>
        
      
    
    <li>Coroutines</li>
    <li class="wy-breadcrumbs-aside">
      
    </li>
  </ul>
  
  <hr/>
</div>
          <div role="main">
            <div class="section">
              
                <h1 id="the-hitchhikers-guide-to-coroutines">The Hitchhiker's Guide to Coroutines</h1>
<p>The central design decision of COROS was to base it on an uncommon concurrency mechanism called a <em>coroutine</em>.  Anybody who has programmed in <em>Modula-2</em> or <em>Lua</em> already knows what a coroutine is and why they're a powerful tool, but most people today, especially in the web world, have only heard of two forms of concurrency: pre-emptive threading (e.g. pthreads) and asynchronous reactors (e.g. Node).</p>
<p>What if I told you there were a third way?  A way that combines the advantages of reactors and threads, but without the flaws of either?  Well, as unbelievable as this may sound to many, there <a href="https://en.wikipedia.org/wiki/Coroutine">exists such a third way</a>.  And to my knowledge the earliest description of what we would now call a coroutine was in Donald Knuth's famous <em>The Art of Computer Programming (Vol 1: Fundamental Algorithms)</em> published in 1968.  This is very old technology, in short.</p>
<h2 id="so-what-is-a-coroutine">So what is a coroutine?</h2>
<p>Those of you who have had the benefit of using languages like Lua, Modula-2, and Tcl 8.6+ will already know what a coroutine is and can skip this section.  (Those who use Lua might be able to skip this entire document, given that COROS' implementation of coroutines is heavily based upon Lua's API.  Just reading the reference will be more than enough.)  For the rest of you, here's a brief description of what a coroutine is.  Consult the above Wikipedia article for more detailed information or read the Lua reference manual section on coroutines.</p>
<p>At its most basic, a coroutine is a subroutine you can pause in the middle of execution; when resumed, it comes back right where it left off, all of its local state intact, and continues executing from the point at which it was paused.  A coroutine can pause even when buried deep in function call frames (it is "stackful"), and it is ideally also first-class: coroutines can be passed around as values.</p>
<p>Coroutines come in two main flavours: asymmetric and symmetric.  Asymmetric coroutines have a caller/callee relationship and can only yield results (and take messages from) their caller.  Symmetric coroutines can yield to any other coroutine they know about.  In practice these two are interchangeable: any symmetric coroutine system can be wrapped so it acts like an asymmetric one, and any asymmetric one can be wrapped in a way that makes it act for all practical purposes like a symmetric one.</p>
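<p>The symmetric-over-asymmetric wrapping, for instance, is typically a small trampoline: rather than transferring control directly, each coroutine yields the handle of the coroutine it wants to run next, and a scheduler performs the transfer on its behalf.  A hedged sketch in the style of the CO API introduced below (the yield-a-handle protocol is our illustration, not part of COROS, and it assumes <code>co_t</code> is a pointer-sized handle):</p>
<pre><code class="language-C">/* Illustrative trampoline: emulates symmetric transfer on top of an
 * asymmetric yield-to-caller primitive.  By convention (ours, not
 * COROS'), each coroutine yields the co_t it wants to switch to,
 * or NULL to stop. */
static void trampoline(co_t first)
{
    co_t next = first;
    while (next != NULL)
    {
        next = (co_t)co_resume(next, NULL);
    }
}
</code></pre>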
<h2 id="coroutines-in-coros">Coroutines in COROS</h2>
<p>Coroutines in COROS (CO) are stackful asymmetric coroutines where coroutines are first-class objects.  (This is also called a "full coroutine".)  The API is very much a copy of the Lua API for coroutines, modified naturally for C's limitations.  In this document we will look at how the coroutines of COROS are used.</p>
<h3 id="data-types">Data types</h3>
<p>There are three main data types used in the CO component: <code>stack_t</code>, <code>co_t</code>, and <code>co_function</code>.  These are the type of a single stack item (typically a <code>uintptr_t</code> since stacks tend to push and pop values that match a pointer size), a coroutine handle, and a coroutine implementation function respectively.</p>
<p>For the most part you can safely ignore <code>stack_t</code>.  Modern cores have well-balanced stack and word sizes, so the default of defining it as a <code>uintptr_t</code> will be more than enough.</p>
<p><code>co_t</code> is an opaque data type storing the information about a coroutine used as a handle in all other operations.  By putting all of a coroutine's operational data into such a structure, coroutines can be passed around as objects (even to other coroutines), making them effectively first-class entities.</p>
<p><code>co_function</code> is the core of what a coroutine works from.  All coroutines are implemented as a <code>co_t</code> wrapper around a <code>co_function</code>.  When you write coroutine-based code, you will be writing <code>co_function</code>-typed functions and creating a coroutine from it.</p>
<p>There is a fourth data type that you will unfortunately start getting very familiar with when working with CO: <code>void *</code>.  C does not support genericity, nor does it conveniently support "var" data types like Lua does.  Because of this, <code>co_function</code>s can only work on <code>void *</code> values and return same.  Use of casting and validation at the boundaries will be needed for stable software.</p>
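<p>One way to keep this manageable is to confine the casting (and validation) to tiny boundary helpers so that the rest of the code stays fully typed.  A minimal sketch of the idea; the <code>message_t</code> type and helper here are invented for illustration:</p>
<pre><code class="language-C">/* Illustrative only: a typed message smuggled through void *. */
typedef struct
{
    int id;
    int payload;
} message_t;

/* Cast (and, in real code, validate) at the boundary so that
 * callers never touch void * directly. */
static message_t *as_message(void *data)
{
    return (message_t *)data;
}
</code></pre>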
<h3 id="api">API</h3>
<p>CO has only four functions (plus a fifth synonym): <code>co_create()</code>, <code>co_destroy()</code>, <code>co_resume()</code>, <code>co_yield()</code>, and <code>co_start()</code>.  <code>co_start()</code> is just a synonym for <code>co_resume()</code> and can thus be ignored for now.  The first two functions do exactly what they say on the tin.  This leaves <code>co_resume()</code> and <code>co_yield()</code>.  We'll talk about these later, but first we have to show...</p>
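<p>The prototypes themselves live in <code>co.h</code> (see the reference section), but from the usage in this guide they take roughly the following shape.  Treat these as orientation only; they are inferred from the example code below, not copied from the header:</p>
<pre><code class="language-C">/* Assumed shapes, inferred from the examples in this guide. */
co_t   co_create(co_function f, size_t stack_items, stack_t *stack);
void   co_destroy(co_t *co);
void  *co_resume(co_t co, void *data);  /* co_start() is a synonym */
void  *co_yield(void *data);            /* called from inside the coroutine */
</code></pre>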
<h3 id="babys-first-coroutine">Baby's first coroutine</h3>
<p>Here's some simple C code:</p>
<pre><code class="language-C">/* NOTE: DOCUMENTATION IS IN PROGRESS SO THIS CODE IS CURRENTLY UNTESTED. */

#include &lt;stdbool.h&gt;
#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;

#include &quot;co.h&quot;

static void *naturals(void *data)   // &lt;1&gt;
{
    uint64_t counter = data ? *((uint64_t *)data) : 0;  // &lt;2&gt;

    while (true)
    {
        void *reposition = co_yield(&amp;counter);  // &lt;3&gt;
        if (reposition != NULL)
        {
            counter = *((uint64_t *)reposition);
        }
        else
        {
            ++counter;
        }
    }
}

int main(int argc, char** argv)
{
    uint64_t val = 0;
    co_t co_naturals = co_create(naturals, 32, NULL);   // &lt;4&gt;
    co_start(co_naturals, NULL);                        // &lt;5&gt;

    do
    {
        val = *(uint64_t *)co_resume(co_naturals, NULL); // &lt;6&gt;
    }
    while (val != 0);

    co_destroy(&amp;co_naturals);   // &lt;7&gt;
    return 0;
}
</code></pre>
<p>This code implements a basic generator with CO, adding a couple of silly little features.  The most important part can be found beginning at comment <code>&lt;1&gt;</code>.
This is the <code>co_function</code> mentioned before and is where the coroutine actually executes.  At <code>&lt;2&gt;</code> we see that if no data pointer is passed in when first called, we initialize the internal counter to 0; otherwise we initialize it to the value of the counter passed in.  Because the counter is 64-bit and we're on a 32-bit machine, we are passed a <em>pointer</em> to that value and must dereference it.</p>
<p>At point <code>&lt;4&gt;</code> we create a coroutine object by passing it <code>naturals</code>, our implementation function, and two other mysterious parameters.  The first of these is the stack size, the second is a pointer to the stack's storage.</p>
<blockquote>
<p><em><strong>A side word on stacks:</strong> An unfortunate truth of any constrained system is that resources are limited in ways that people used to programming for large systems will not understand.  The nature of embedded systems is such that the usual trick of using page-backed growable stacks for programs cannot work.  We must therefore pay attention to stacks and stack manipulations.</em></p>
<p><em>Each coroutine has its own stack, separate from the system stack.  A stack is just a block of memory.  We have to tell <code>co_create()</code> how many </em>items<em> of <code>stack_t</code> the stack will have.  In this case we've chosen 32 because our stack needs are modest.  For very complicated coroutines we may need 1024 or ... well, your SRAM size is the limit.  Unfortunately selecting a stack size is somewhat of a black art until you get an intuition for it.  As such, here are some rules of thumb:</em></p>
<ol>
<li><em>A starting value of 256 is very adequate for most simple coroutines that maybe call a few layers deep in the stack when running.  It will probably </em>not<em> be enough if you start messing around with recursion since C does not do tail call elimination.</em></li>
<li><em>A very simple coroutine that makes no external calls might get away with 64 or even 32.</em></li>
<li><em>Inspecting the assembler generated by the compiler can help in assessing stack usage.  Third-party tools also exist to help in this.</em></li>
<li><em>As a general rule of thumb, if working code crashes when you make a deep call, double the stack.  If code is working fine, try halving it to see if it will still work fine.  Divide and conquer over iterations will start to help you build an intuition on how to use stacks.</em></li>
</ol>
<p><em>We now return you to the explanation in progress….</em></p>
</blockquote>
<p>CO lets you either supply your own stack storage (in which case it is incumbent upon you to ensure you have provided enough room according to your item count) or to pass <code>NULL</code> to let COROS allocate the space for you.  Most of the time passing <code>NULL</code> is fine provided the coroutine is made early in the life of the program and that you don't expect to destroy the coroutine until the end.  Otherwise it is probably better to statically allocate your own and pass that pointer in.</p>
<p>We have selected dynamic stack allocation.  Behind the scenes CO will <code>calloc()</code> some memory for a stack that is <code>32 * sizeof(stack_t)</code> in size.</p>
<p>At point <code>&lt;5&gt;</code> we call <code>co_start()</code> (which as you recall is a synonym for <code>co_resume()</code>).  We use <code>co_start()</code> for semantic signalling that we are initializing the coroutine.  We are ignoring its return value (a pointer to the coroutine's counter, which at this point holds 0) and we are not passing in a start value (<code>NULL</code>).</p>
<p>This takes us immediately to point <code>&lt;2&gt;</code> of the code.  Because the <code>data</code> value is <code>NULL</code> we initialize the counter to 0.  We then enter the endless loop.</p>
<p>At point <code>&lt;3&gt;</code> we have the heart of how coroutines operate.  We are "yielding" here.  The value we pass to <code>co_yield()</code> is the <strong>return value</strong> of, in this case, <code>co_start()</code>.  Although we ignore this return value, the key is that we have <strong>returned</strong>.  We are back at point <code>&lt;5&gt;</code> and are now going to enter the <code>do</code> loop.  At point <code>&lt;6&gt;</code> we call <code>co_resume()</code>  This now "resumes" the function <code>naturals()</code> ... at point <code>&lt;3&gt;</code>.  We are not re-entering from the top.  And all of our local state (note: the counter isn't static) is intact.  The return value of <code>co_yield()</code> is the value we passed to <code>co_resume()</code> (<code>NULL</code> again).</p>
<p>Because we got a <code>NULL</code>, we just increment the counter and loop back to <code>&lt;3&gt;</code>.  This time we actually use the return value, storing it in the variable <code>val</code>.  We continue doing this, endlessly repeating the <code>co_resume()</code> call at point <code>&lt;6&gt;</code> until the counter wraps (with a 64-bit counter, this will take a few million years on any plausible embedded system running at full tilt) at which point the loop exits, we destroy the coroutine at <code>&lt;7&gt;</code>, and then exit the program.</p>
<p>There are a few things to note:</p>
<ol>
<li>The state of <code>naturals()</code>, although being all in local variables, is not lost despite us bouncing back and forth between <code>main()</code> and <code>naturals()</code>.</li>
<li>Communication occurs both ways.  The data passed in <code>co_resume/start()</code> is either the data parameter of <code>naturals()</code> (first time) or the return value of <code>co_yield()</code> (every subsequent time).  The data passed in <code>co_yield()</code> is the return value of <code>co_resume/start()</code>.</li>
<li>Although we did not use this in this simple example, <code>naturals()</code> can be initialized with a different starting point <em>and</em> can have its counting point changed during use.  (Writing the code to do this is an exercise left to the reader.)</li>
<li>Most importantly of all, because <code>naturals()</code> uses nothing but local, non-static variables, there can be more than one coroutine backed by that function running at any given time.  We only use one.  There could be two.  Ten.  A hundred.  It doesn't matter.  State, including the stack values, is stored in the coroutine object, not in the implementation function.</li>
</ol>
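<p>As a hint for that exercise, repositioning is entirely a matter of what you pass to <code>co_resume()</code>.  An untested fragment in the spirit of the example above:</p>
<pre><code class="language-C">/* Resume with a pointer to jump the generator to a new starting
 * point, or with NULL to simply take the next value. */
uint64_t new_start = 1000;
val = *(uint64_t *)co_resume(co_naturals, &amp;new_start);  /* val is now 1000 */
val = *(uint64_t *)co_resume(co_naturals, NULL);         /* val is now 1001 */
</code></pre>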
<p>The CO API is deceptively simple, but it has within it quite a lot of depth.</p>
<h2 id="coroutine-life-cycles">Coroutine life cycles</h2>
<p>A typical coroutine has three phases of life that it goes through:</p>
<ol>
<li>Initialization.</li>
<li>Operation.</li>
<li>Termination.</li>
</ol>
<p>Initialization is what occurs at the start of the coroutine when it is first resumed.  The data passed into it is usually some kind of initialization construct; a coroutine for operating a UART, for example, might be told which device, what baud rate, etc. as the parameter to its first <code>co_resume/start()</code> call.  Resource allocation and initialization also occurs in this phase.</p>
<p>Once initialization is complete, the operation phase begins.  A typical expression of this phase is a loop in which operations are performed, control is yielded one or more times within the block, and, if no exit condition has been met, the loop starts its run again.  How this loop does its operation of course depends on precisely what the coroutine is intended to do.  In the case of <code>naturals()</code> it just generates the next number in a sequence.  It could instead give you the next leaf of a tree according to some ordering convention (depth-first, breadth-first, etc.).  It may monitor a piece of hardware for a change of state.  It may instead periodically change the state of a piece of hardware.  Whatever it does, it does so with logic that is linear and simple to read, with points of its own choosing where it gives up the CPU to other tasks.</p>
<p>Termination is an optional state.  Many coroutines start and then just keep running forever, waiting to be awakened into performing their next task.  For those which do not, however, it is conventional to release any resources claimed during initialization before returning.</p>
<blockquote>
<p><em><strong>A brief note on returning from a coroutine:</strong> Note that when returning, a coroutine remains a coroutine.  This is a safety feature that permits even an expired coroutine to be resumed without crashing the system.  A terminated coroutine simply turns into a do-nothing coroutine that immediately yields <code>NULL</code> when resumed.</em></p>
</blockquote>
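<p>Put together, the three phases give most coroutines a recognizable skeleton.  The following outline is purely illustrative; <code>config_t</code>, <code>resource_t</code>, and the various helper functions are placeholders, not COROS API:</p>
<pre><code class="language-C">static void *typical_coroutine(void *data)
{
    /* 1. initialization: interpret the first resume's payload and
     *    claim whatever resources the coroutine needs */
    config_t *cfg = (config_t *)data;        /* hypothetical */
    resource_t res = claim_resource(cfg);    /* hypothetical */

    /* 2. operation: loop, yielding at points of our own choosing */
    while (!exit_condition(&amp;res))
    {
        co_yield(do_one_step(&amp;res));
    }

    /* 3. termination: release what initialization claimed; after
     *    returning, this coroutine yields NULL on any further resume */
    release_resource(&amp;res);
    return NULL;
}
</code></pre>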
<h2 id="advanced-coroutine-usage">Advanced coroutine usage</h2>
<p>What appears to be a simple programming trick that can be mildly helpful conceals within it a lot of uses.  The use case demonstrated here is using a coroutine as a sequence generator.  Just as easily it could be used to iterate over an array of values.  There are, however, many other uses that highlight the flexibility of coroutines.  In the following sections the code is illustrative and not intended to be cut and paste, but it shows how a coroutine might be structured for certain problem types.</p>
<h3 id="punctuated-execution">Punctuated execution</h3>
<p>Consider one of those old door locks that had five buttons that had to be pressed in a certain sequence to unlock the door.  If trying to emulate a door lock like that in firmware with conventional code, the logic might look something like this:</p>
<pre><code class="language-C">/* ... */

/* the desired sequence is stored here */
#define SEQUENCE_LENGTH 5
static int sequence[SEQUENCE_LENGTH] = { 1, 5, 2, 3, 4 };

/* ... */

bool pass = true;
for (size_t i = 0; i &lt; SEQUENCE_LENGTH; i++)
{
    if (wait_for_button() != sequence[i]) { pass = false; }
}
return pass;

/* ... */
</code></pre>
<p>The idea is that you wait for the next button press and if it fails to match, you store the fact the sequence has failed.  When all five buttons are pushed, you return whether all buttons were pressed in the right sequence or not.  (A real system would deal with timeouts and such as well, but for purposes of illustration this is the core.)</p>
<p>This is fine if <em>all</em> your system does is monitor five buttons.  If, however, it does anything else (like, say, taking commands from a control console), this is a show-stopper.  The call to <code>wait_for_button()</code> is blocking.  Fixing this starts to involve obfuscated loop structures, asynch reactor architectures with callbacks, and all other kinds of issues.</p>
<p>The coroutine version of it is easier:</p>
<pre><code class="language-C">/* the desired sequence is stored here */
#define SEQUENCE_LENGTH 5

/* we communicate with the resuming thread with these signals */
enum reader_state
{
    MORE = 0,
    PASS = 1,
    FAIL = 2,
};

/* ... */

/* wrap up the C nastiness with `void *` types */
static int get_next_key(void)
{
    return (int)co_yield((void *)MORE);
}

static void *keypad(void *data)
{
    int sequence[SEQUENCE_LENGTH];

    /* make a local copy of the sequence to reduce coupling */
    memcpy(sequence, data, sizeof(sequence));

    while (true)
    {
        enum reader_state rv = PASS;
        for (size_t i = 0; i &lt; SEQUENCE_LENGTH; i++)
        {
            if (get_next_key() != sequence[i]) { rv = FAIL; }
        }
        co_yield((void *)rv);
    }
}
</code></pre>
<p>This does look a bit more involved, but the actual logic where the business happens is identical: a for loop over keys in sequence until all five keys are acquired, at which point it signals success or failure.  The key difference is that <code>get_next_key()</code> <em>does not block</em>.  It instead yields, signalling its desire for more numbers.  The routine it yields to may block.  Or it may be in a monitoring loop that looks for keys, manages communication, etc.  Or this may all be plugged into an event system like EVQ.  Or maybe some EXTI interrupts are directly telling us which key is pressed as it is pressed.  What's important is that <strong>we do not care</strong>.  The process of getting the next key is isolated from us in how it is implemented.</p>
<blockquote>
<p><em><strong>Minor implementation note:</strong> The way this is implemented requires an extra resume at the caller level.  This can be fixed by using a different means of passing messages, but this is outside the scope of this illustration.</em></p>
</blockquote>
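<p>To make the resume side concrete, a blocking driver for the keypad coroutine might look like this (illustrative and untested; <code>wait_for_button()</code> is the same blocking stand-in used in the conventional version above):</p>
<pre><code class="language-C">static int sequence[SEQUENCE_LENGTH] = { 1, 5, 2, 3, 4 };

bool check_keypad(void)
{
    co_t reader = co_create(keypad, 128, NULL);
    co_start(reader, sequence);  /* runs up to the first key request */

    /* feed one key per resume; an event loop or ISR-driven scheme
     * could just as easily do the feeding */
    int state = MORE;
    while (state == MORE)
    {
        state = (int)co_resume(reader, (void *)wait_for_button());
    }

    co_destroy(&amp;reader);
    return state == PASS;
}
</code></pre>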
<p>What we have here is a highlight of the strengths of coroutines.  A similar routine could be made using just callbacks in the form of an asynch reactor, but such callbacks are infamous for requiring static variables (meaning the same routine can't be used multiple times) or for forcing users to carry around states manually to pass back each time.  They're also infamous, in complicated (especially nested!) cases, for requiring code to be written in a style that has been called "reversed and inside-out" by critics.</p>
<p>The code in the coroutine is automatically passing that aforementioned state around without any extra state variables needing to be created and the code style is just linear execution.</p>
<h3 id="a-15-pass-assembler">A 1.5-pass assembler</h3>
<p>Assemblers come in two basic flavours: 1-pass and 2-pass.  In a 1-pass assembler the code is gone over only once, generating symbols, addresses, and opcodes in that one pass.  There is very limited backtracking capability in forward references; in effect code is generated for forward references with conservative (read: wasteful) instructions generated for branches, loads, and stores which then get backfilled once the symbol is found.  This means, for example, that a 32-bit absolute address branch may be used for a goal that turns out to have only been 8 bytes away, allowing a relative address only a byte long.</p>
<p>In a 2-pass assembler, the code is gone over twice.  The first time it resolves all the forward references, making a note of addresses and placement, before the second pass generates the code, resolving to more efficient representations by selecting more appropriate branch addressing.</p>
<p>1-pass assemblers are fast and nasty.  2-pass assemblers are slow and complicated.</p>
<p>Coroutines allow a middle path: the 1.5-pass assembler.</p>
<p>A 1.5-pass assembler starts with a coroutine that operates like the second pass of a 2-pass assembler.  It reads and processes source, generating opcodes as it goes.  As it encounters symbols in order it inserts them in the symbol table, just like the first pass of a 2-pass assembler.  If, however, it encounters a forward reference, it resumes a "first pass" phase that proceeds to read ahead in the source, resolving symbols and addresses it finds until it finds the specific forward-defined symbol needed by the "second pass".  It then yields back to the "second pass" and the code generator suddenly has that symbol (and probably several others) in its symbol table and can continue moving forward, generating the right size code for the branches, loads, and stores.  Each time it encounters an unknown symbol it resumes the first pass, which carries on from the last place it was while filling in the symbol tables.</p>
<p>This 1.5-pass assembler has all the flexibility of a 2-pass assembler, but runs in what looks like a single pass.  It will typically be faster than a 2-pass assembler (because it only ever has to read ahead when forward references are found, meaning it doesn't have to run over the whole source code twice), and it will generate far better code than a 1-pass assembler ever could.</p>
<p>This ability to call the "first" pass and have it stop part-way, only to continue on from that very point when called again is a hallmark of coroutine strength.</p>
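<p>Structurally, what is described above is just two cooperating routines.  A heavily simplified sketch (every name here is hypothetical; it shows the shape of the control flow, not a real assembler):</p>
<pre><code class="language-C">/* The "first pass" coroutine: scans ahead, filling the symbol table,
 * and yields as soon as the symbol the code generator asked for is
 * resolved.  On the next resume it carries on from where it stopped. */
static void *scan_ahead(void *data)
{
    symbol_t *wanted = (symbol_t *)data;
    while (true)
    {
        while (!symbol_known(wanted))
        {
            add_symbols_from(read_next_line());
        }
        wanted = (symbol_t *)co_yield(NULL);  /* back to the generator */
    }
}

/* The "second pass" logic: generates code, resuming the scanner only
 * when it trips over a symbol it has not seen yet. */
static void generate(co_t scanner)
{
    while (more_source())
    {
        symbol_t *sym = next_reference();
        if (!symbol_known(sym))
        {
            co_resume(scanner, sym);  /* returns once sym is resolved */
        }
        emit_code_for(sym);
    }
}
</code></pre>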
<h3 id="producer-consumer-patterns">Producer-Consumer patterns</h3>
<p>Subroutines are not well-suited to producer/consumer problems.  Consider a "producer" that monitors a serial port and a "consumer" that reads bytes.  In traditional code things may look something like this pseudocode:</p>
<pre><code>get_bytes_from_uart(num):
    rv = []
    for count = 1 to num:
        rv.append(read(uart))
    return rv

use_bytes_from_uart():
    do_something_that_needs_3_bytes(get_bytes_from_uart(3))
    do_something_that_needs_5_bytes(get_bytes_from_uart(5))
    do_something_that_needs_9_bytes(get_bytes_from_uart(9))
    do_something_that_needs_7_bytes(get_bytes_from_uart(7))
    ...
</code></pre>
<p>This looks perfectly cromulent, right?</p>
<p>Except ...</p>
<p>Where did <code>uart</code> come from?  It has to be something that was stored in a file-static or global variable.  This makes it impossible to use this routine for multiple UARTs.  Of course you could specify the uart in the parameter list but that sounds suspiciously like carrying state around and ... wasn't automatically carrying unrelated-to-the-client state instead of forcing the user to do it one of the benefits of coroutines?</p>
<p>And in this case, too, the producer is a simple pattern with only a simple state (the uart) involved.  Some producers may have very complicated states (indeed entire state machines!) involved that would have to be recreated each time they're invoked to the point that often people start the producers and have the producers drive the consumers.  Which works fine iff the consumer's internal logic is simple.</p>
<p>Coroutines get rid of this problem entirely.</p>
<pre><code class="language-C">#define MAX_BYTES 32
static void *get_bytes_from_uart(void *data)
{
    /* initialization phase */
    uart_t uart = initialize_uart((comm_params *)data);
    uint8_t bytes[MAX_BYTES];

    /* operation phase */
    while (true)
    {
        int num = (int)co_yield(bytes);
        for (size_t i = 0; i &lt; num; i++)
        {
            bytes[i] = read(uart);
        }
    }
}

void use_bytes_from_uart()
{
    /* configure some_comm_params here */

    co_t gbfu = co_create(get_bytes_from_uart, 64, NULL);
    co_start(gbfu, &amp;some_comm_params);  /* use co_start alias to signal intent: we're initializing */

    /* now we just resume at need */
    do_something_that_needs_3_bytes((uint8_t *)co_resume(gbfu, (void *)3));
    do_something_that_needs_5_bytes((uint8_t *)co_resume(gbfu, (void *)5));
    do_something_that_needs_9_bytes((uint8_t *)co_resume(gbfu, (void *)9));
    do_something_that_needs_7_bytes((uint8_t *)co_resume(gbfu, (void *)7));
    /*...*/
}
</code></pre>
<p>In this more concrete example, the logic is as straightforward and linear as the original pseudocode was, but it has the advantages of isolating things like the return buffer, the hardware handle, etc.  This makes it possible to have multiple instances of this coroutine for different uart/client pairings.  The use of local variables and, bizarrely to C programmer eyes, <strong>returning</strong> local variables is perfectly safe because the stack frame does not evaporate on yielding.  This means proper, full, local encapsulation of things can be maintained without extra cost and without the re-entrancy breaking of standard subroutine calls.</p>
<p>There are, naturally, other solutions to this situation.  The most obvious (passing state around) has already been addressed earlier in the documentation, but as was noted there, this involves clients manually carrying around data that is not relevant to their problem domains which is fragile (and potentially dangerous since they could inadvertently corrupt that state).  Another solution is to use a callback mechanism, but again, as commented above about asynch reactor architectures, callbacks can lead to very complicated "reverse and inside-out" code paths that are difficult to reason about and can also be very fragile.  (Of course this can also be solved with mailboxes or event queues or the like ... which is what EVQ is about and will be addressed in the user guide for that component.  Suffice it to say that EVQ plus the coroutines of CO are designed to work hand in glove.)</p>
<h3 id="wrapping-resumption">Wrapping resumption</h3>
<p>As we saw from the earlier <code>use_bytes_from_uart()</code> example, there is a bit of cumbersome syntax involved in using coroutines.  Because coroutines are provided as a library rather than as a language construct (the way subroutines are in most languages), invoking them requires casting values into and out of <code>void *</code>.  For example <code>do_something_that_needs_3_bytes((uint8_t *)co_resume(gbfu, (void *)3));</code> is just visually clunky and hard to read.  It's far better in such cases to wrap coroutine invocations in helper functions (something that you will see in the COMP user guide) so that the code is semantically clearer at the expense of a few extra static helpers.  For example, <code>use_bytes_from_uart()</code> is probably better expressed thusly:</p>
<pre><code class="language-C">static uint8_t *collect_bytes(int count)
{
    static co_t gbfu;
    static comm_params some_comm_params;
    static bool initialized = false;

    if (!initialized)
    {
        initialize_params(&amp;some_comm_params);
        gbfu = co_create(get_bytes_from_uart, 64, NULL);
        co_start(gbfu, &amp;some_comm_params);
        initialized = true;
    }

    return (uint8_t *)co_resume(gbfu, (void *)(intptr_t)count);
}

void use_bytes_from_uart()
{
    do_something_that_needs_3_bytes(collect_bytes(3));
    do_something_that_needs_5_bytes(collect_bytes(5));
    do_something_that_needs_9_bytes(collect_bytes(9));
    do_something_that_needs_7_bytes(collect_bytes(7));
    /*...*/
}
</code></pre>
<p>We now have the call to collect bytes from the uart named appropriately, making a semantic reading of the code easier.  The nastiness of casting void pointers is carefully confined to a single place so that programmer error is less likely: it's easier to visually inspect and/or test a single wrapping function than it is to test every use of its functionality.  In addition, we've put the initialization into a run-once block of code in the <code>collect_bytes()</code> function, keeping the logic of <code>use_bytes_from_uart()</code> clearer and, again, less error prone.</p>
<p>There are, naturally, other ways to cleave this wrapping.  <code>gbfu</code> could be passed into the wrapper, for example, and the initialization block could be hived off into a separate function that returns it.  Use whichever division makes reading and using the code easiest.  The key takeaway is that the low-level coroutine clumsiness, an unfortunate side effect of C's own clunky capabilities, can be wrapped so that the rest of the code reads naturally and easily.</p>
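<p>The casts that such a wrapper hides boil down to round-tripping a small integer through a <code>void *</code>.  Going through <code>intptr_t</code> (from <code>&lt;stdint.h&gt;</code>) keeps that round trip well-defined and warning-free on platforms where <code>int</code> and pointers differ in size.  The helper names below are illustrative only, not part of the CO API:</p>

```c
#include <stdint.h>

/* Illustrative helpers (not part of the CO API): carry a small integer
   through the void * arguments used by coroutine resume/yield calls. */
static void *pack_int(int v)
{
    return (void *)(intptr_t)v;   /* widen to a pointer-sized integer first */
}

static int unpack_int(void *p)
{
    return (int)(intptr_t)p;      /* narrow back through the same type */
}
```

<p>With helpers like these, the wrapper's resume line could read <code>co_resume(gbfu, pack_int(count))</code> and the coroutine side <code>unpack_int(co_yield(bytes))</code>, confining the casting to two small, easily audited functions.</p>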
<h2 id="tldr-summary">TL;DR Summary</h2>
<ol>
<li>CO is COROS' coroutine component and forms the foundation of COROS' concurrency mechanism.</li>
<li>A coroutine is a subroutine that can be stopped mid-execution (yield) to be restarted (resume) from that same point later on.</li>
<li>Coroutines, when yielding, can pass data to the invoker and can receive data from the invoker when resumed.</li>
<li>Coroutines are, in effect, threads without pre-emption, and thus without the unpleasant and difficult handling of synchronization and signalling that pre-emptive threads bring.</li>
<li>Coroutines are very flexible execution components with a variety of use cases ranging from stream generation to complicated producer/consumer pairings.</li>
<li>Several such use cases have been illustrated, but this illustration is not a comprehensive one.</li>
</ol>
<p>For small to medium-sized embedded systems, using coroutines as a concurrency mechanism is a better solution than its alternatives (pre-emptive threading, asynch reactors, or even the classic embedded check/dispatch loop in <code>main()</code>) in a large number of situations.</p>
<p>Going on from here, the next read should be <a href="../02_event_queues/"><em>The Hitchhiker's Guide to Event Queues</em></a> so that the way CO is integrated with an advanced event queue system can be explored.</p>
              
            </div>
          </div>
          <footer>
  
    <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
      
        <a href="../02_event_queues/" class="btn btn-neutral float-right" title="Event Queues">Next <span class="icon icon-circle-arrow-right"></span></a>
      
      
        <a href="../../tutorials/01_getting_started/" class="btn btn-neutral" title="Getting Started"><span class="icon icon-circle-arrow-left"></span> Previous</a>
      
    </div>
  

  <hr/>

  <div role="contentinfo">
    <!-- Copyright etc -->
    
  </div>

  Built with <a href="https://www.mkdocs.org/">MkDocs</a> using a <a href="https://github.com/snide/sphinx_rtd_theme">theme</a> provided by <a href="https://readthedocs.org">Read the Docs</a>.
</footer>
      
        </div>
      </div>

    </section>

  </div>

  <div class="rst-versions" role="note" aria-label="versions">
    <span class="rst-current-version" data-toggle="rst-current-version">
      
      
        <span><a href="../../tutorials/01_getting_started/" style="color: #fcfcfc;">&laquo; Previous</a></span>
      
      
        <span style="margin-left: 15px"><a href="../02_event_queues/" style="color: #fcfcfc">Next &raquo;</a></span>
      
    </span>
</div>
    <script>var base_url = '../..';</script>
    <script src="../../js/theme.js" defer></script>
      <script src="../../search/main.js" defer></script>
    <script defer>
        window.onload = function () {
            SphinxRtdTheme.Navigation.enable(true);
        };
    </script>

</body>
</html>
