<!DOCTYPE html><html lang="en"><head><meta charset="UTF-8"><meta http-equiv="X-UA-Compatible" content="IE=edge"><meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1"><meta name="description" content="MPI"><meta name="keywords" content="MPI"><meta name="author" content="LiYang"><meta name="copyright" content="LiYang"><title>MPI | 一条鲤鱼</title><link rel="shortcut icon" href="/melody-favicon.ico"><link rel="stylesheet" href="/css/index.css?version=1.9.0"><link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/font-awesome@latest/css/font-awesome.min.css?version=1.9.0"><meta name="format-detection" content="telephone=no"><meta http-equiv="x-dns-prefetch-control" content="on"><link rel="dns-prefetch" href="https://cdn.jsdelivr.net"><meta http-equiv="Cache-Control" content="no-transform"><meta http-equiv="Cache-Control" content="no-siteapp"><script>var GLOBAL_CONFIG = { 
  root: '/',
  algolia: undefined,
  localSearch: undefined,
  copy: {
    success: 'Copy successfully',
    error: 'Copy error',
    noSupport: 'The browser does not support'
  },
  hexoVersion: '5.4.0'
} </script><meta name="generator" content="Hexo 5.4.0"><link rel="alternate" href="/atom.xml" title="一条鲤鱼" type="application/atom+xml">
</head><body><i class="fa fa-arrow-right" id="toggle-sidebar" aria-hidden="true"></i><div id="sidebar" data-display="true"><div class="toggle-sidebar-info text-center"><span data-toggle="Toggle article">Toggle site</span><hr></div><div class="sidebar-toc"><div class="sidebar-toc__title">Catalog</div><div class="sidebar-toc__progress"><span class="progress-notice">You've read</span><span class="progress-num">0</span><span class="progress-percentage">%</span><div class="sidebar-toc__progress-bar"></div></div><div class="sidebar-toc__content"><ol class="toc"><li class="toc-item toc-level-1"><a class="toc-link" href="#%E5%B9%B6%E8%A1%8C%E7%A8%8B%E5%BA%8F%E8%AE%BE%E8%AE%A1%E5%AF%BC%E8%AE%BA"><span class="toc-number">1.</span> <span class="toc-text">An Introduction to Parallel Programming</span></a><ol class="toc-child"><li class="toc-item toc-level-2"><a class="toc-link" href="#%E7%94%A8MPI%E8%BF%9B%E8%A1%8C%E5%88%86%E5%B8%83%E5%BC%8F%E5%86%85%E5%AD%98%E7%BC%96%E7%A8%8B"><span class="toc-number">1.1.</span> <span class="toc-text">Distributed-Memory Programming with MPI</span></a><ol class="toc-child"><li class="toc-item toc-level-3"><a class="toc-link" href="#1%E3%80%81%E9%A2%84%E5%A4%87%E7%9F%A5%E8%AF%86"><span class="toc-number">1.1.1.</span> <span class="toc-text">1. Preliminaries</span></a><ol class="toc-child"><li class="toc-item toc-level-4"><a class="toc-link" href="#1-1-%E7%BC%96%E8%AF%91%E4%B8%8E%E6%89%A7%E8%A1%8C"><span class="toc-number">1.1.1.1.</span> <span class="toc-text">1.1 Compilation and Execution</span></a></li><li class="toc-item toc-level-4"><a class="toc-link" href="#1-2-MPI%E7%A8%8B%E5%BA%8F"><span class="toc-number">1.1.1.2.</span> <span class="toc-text">1.2 MPI Programs</span></a></li><li class="toc-item toc-level-4"><a class="toc-link" href="#1-3-MPI-Init%E5%92%8CMPI-Finalize"><span class="toc-number">1.1.1.3.</span> <span class="toc-text">1.3 MPI_Init and MPI_Finalize</span></a></li><li class="toc-item toc-level-4"><a class="toc-link" href="#1-4-%E9%80%9A%E4%BF%A1%E5%AD%90%E3%80%81MPI-Comm-size%E5%92%8CMPI-Comm-rank"><span 
class="toc-number">1.1.1.4.</span> <span class="toc-text">1.4 Communicators, MPI_Comm_size, and MPI_Comm_rank</span></a></li><li class="toc-item toc-level-4"><a class="toc-link" href="#1-5-SPMD%E7%A8%8B%E5%BA%8F"><span class="toc-number">1.1.1.5.</span> <span class="toc-text">1.5 SPMD Programs</span></a></li><li class="toc-item toc-level-4"><a class="toc-link" href="#1-6-MPI-Send"><span class="toc-number">1.1.1.6.</span> <span class="toc-text">1.6 MPI_Send</span></a></li><li class="toc-item toc-level-4"><a class="toc-link" href="#1-7-MPI-Recv"><span class="toc-number">1.1.1.7.</span> <span class="toc-text">1.7 MPI_Recv</span></a></li><li class="toc-item toc-level-4"><a class="toc-link" href="#1-8-%E6%B6%88%E6%81%AF%E5%8C%B9%E9%85%8D"><span class="toc-number">1.1.1.8.</span> <span class="toc-text">1.8 Message Matching</span></a></li><li class="toc-item toc-level-4"><a class="toc-link" href="#1-9-status-p%E5%8F%82%E6%95%B0"><span class="toc-number">1.1.1.9.</span> <span class="toc-text">1.9 The status_p Argument</span></a></li><li class="toc-item toc-level-4"><a class="toc-link" href="#1-10-MPI-Send%E5%92%8CMPI-Recv%E7%9A%84%E8%AF%AD%E4%B9%89"><span class="toc-number">1.1.1.10.</span> <span class="toc-text">1.10 Semantics of MPI_Send and MPI_Recv</span></a></li><li class="toc-item toc-level-4"><a class="toc-link" href="#1-11-%E6%BD%9C%E5%9C%A8%E7%9A%84%E9%99%B7%E9%98%B1"><span class="toc-number">1.1.1.11.</span> <span class="toc-text">1.11 Potential Pitfalls</span></a></li></ol></li></ol></li></ol></li></ol></div></div><div class="author-info hide"><div class="author-info__avatar text-center"><img src="/img/avatar.png"></div><div class="author-info__name text-center">LiYang</div><div class="author-info__description text-center"></div><hr><div class="author-info-articles"><a class="author-info-articles__archives article-meta" href="/archives"><span class="pull-left">Articles</span><span class="pull-right">13</span></a><a class="author-info-articles__tags article-meta" href="/tags"><span class="pull-left">Tags</span><span 
class="pull-right">6</span></a><a class="author-info-articles__categories article-meta" href="/categories"><span class="pull-left">Categories</span><span class="pull-right">7</span></a></div></div></div><div id="content-outer"><div class="no-bg" id="top-container"><div id="page-header"><span class="pull-left"> <a id="site-name" href="/">一条鲤鱼</a></span><i class="fa fa-bars toggle-menu pull-right" aria-hidden="true"></i><span class="pull-right menus">   <a class="site-page" href="/">Home</a><a class="site-page" href="/archives">Archives</a><a class="site-page" href="/tags">Tags</a><a class="site-page" href="/categories">Categories</a></span><span class="pull-right"></span></div><div id="post-info"><div id="post-title">MPI</div><div id="post-meta"><time class="post-meta__date"><i class="fa fa-calendar" aria-hidden="true"></i> 2021-09-14</time><span class="post-meta__separator">|</span><i class="fa fa-inbox post-meta__icon" aria-hidden="true"></i><a class="post-meta__categories" href="/categories/%E5%B9%B6%E8%A1%8C%E8%AE%A1%E7%AE%97/">Parallel Computing</a></div></div></div><div class="layout" id="content-inner"><article id="post"><div class="article-container" id="post-content"><h1 id="并行程序设计导论"><a href="#并行程序设计导论" class="headerlink" title="并行程序设计导论"></a>An Introduction to Parallel Programming</h1><h2 id="用MPI进行分布式内存编程"><a href="#用MPI进行分布式内存编程" class="headerlink" title="用MPI进行分布式内存编程"></a>Distributed-Memory Programming with MPI</h2><p>In a message-passing program, a program running on one core-memory pair is usually called a <strong>process</strong>. Two processes can communicate by calling functions: one process calls a <strong>send</strong> function and the other calls a <strong>receive</strong> function. The message-passing implementation we will use is called the Message-Passing Interface (MPI).</p>
<p>MPI is not a new programming language; it defines a library of functions that can be called from C, C++, and Fortran programs. We will introduce some of MPI's send and receive functions, and we will also learn about some "global" communication functions that can involve more than two processes.</p>
<h3 id="1、预备知识"><a href="#1、预备知识" class="headerlink" title="1、预备知识"></a>1. Preliminaries</h3><h4 id="1-1-编译与执行"><a href="#1-1-编译与执行" class="headerlink" title="1.1 编译与执行"></a>1.1 Compilation and Execution</h4><p>How we compile and run a program depends mainly on the system. Many systems provide a command called mpicc to compile programs:</p>
<figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">$</span><span class="bash"> mpicc -g -Wall -o mpi_hello mpi_hello.c</span></span><br></pre></td></tr></table></figure>

<p><em>Note: -g allows the use of a debugger; -Wall turns on warnings; -o &lt;outfile&gt; names the generated executable outfile.</em></p>
<p><em>When timing a program, we use the -O2 option to tell the compiler to optimize the code.</em></p>
<ul>
<li><p>Typically, mpicc is a <strong>wrapper script</strong> for a C compiler. A wrapper script's main purpose is to run some other program; in this case, that program is the C compiler. The wrapper simplifies running the compiler by telling it where to find the necessary header files, which libraries to link with the object file, and so on.</p>
</li>
<li><p>Many systems also support starting programs with the mpiexec command:</p>
<figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">$</span><span class="bash"> mpiexec -n &lt;number of processes&gt; ./mpi_hello</span></span><br></pre></td></tr></table></figure></li>
</ul>
<p><strong>A simple example: an MPI program that prints greetings from the processes</strong></p>
<figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#<span class="meta-keyword">include</span><span class="meta-string">&lt;stdio.h&gt;</span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span><span class="meta-string">&lt;string.h&gt;</span> </span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span><span class="meta-string">&lt;mpi.h&gt;</span></span></span><br><span class="line"></span><br><span class="line"><span class="keyword">const</span> <span class="keyword">int</span> MAX_STRING=<span class="number">100</span>;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">main</span><span class="params">(<span class="keyword">void</span>)</span></span>&#123;</span><br><span class="line">    <span class="keyword">char</span> greeting[MAX_STRING];</span><br><span class="line">    <span class="keyword">int</span> comm_sz;<span 
class="comment">// number of processes</span></span><br><span class="line">    <span class="keyword">int</span> my_rank; <span class="comment">// process rank</span></span><br><span class="line"></span><br><span class="line">    MPI_Init(<span class="literal">NULL</span>,<span class="literal">NULL</span>);</span><br><span class="line">    MPI_Comm_size(MPI_COMM_WORLD,&amp;comm_sz);</span><br><span class="line">    MPI_Comm_rank(MPI_COMM_WORLD,&amp;my_rank);</span><br><span class="line"></span><br><span class="line">    <span class="keyword">if</span>(my_rank!=<span class="number">0</span>)&#123; </span><br><span class="line">        <span class="built_in">sprintf</span>(greeting,<span class="string">&quot;Greetings from process %d of %d!&quot;</span>,my_rank,comm_sz);</span><br><span class="line">        MPI_Send(greeting,<span class="built_in">strlen</span>(greeting)+<span class="number">1</span>,MPI_CHAR,<span class="number">0</span>,<span class="number">0</span>,MPI_COMM_WORLD);</span><br><span class="line">    &#125;<span class="keyword">else</span>&#123;</span><br><span class="line">        <span class="built_in">printf</span>(<span class="string">&quot;Greetings from process %d of %d!\n&quot;</span>,my_rank,comm_sz);</span><br><span class="line">        <span class="keyword">for</span>(<span class="keyword">int</span> q=<span class="number">1</span>;q&lt;comm_sz;q++)&#123;</span><br><span class="line">            MPI_Recv(greeting,MAX_STRING,MPI_CHAR,q,<span class="number">0</span>,MPI_COMM_WORLD,MPI_STATUS_IGNORE);</span><br><span class="line">            <span class="built_in">printf</span>(<span class="string">&quot;%s\n&quot;</span>,greeting);</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">    MPI_Finalize();</span><br><span class="line">    <span class="keyword">return</span> <span class="number">0</span>;</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure>



<h4 id="1-2-MPI程序"><a href="#1-2-MPI程序" class="headerlink" title="1.2 MPI程序"></a>1.2 MPI Programs</h4><p>Using MPI from C requires including the header file mpi.h. The header contains the prototypes of the MPI functions, along with macro definitions, type definitions, and so on; it has all the definitions and declarations needed to compile an MPI program.</p>
<p>All identifiers defined by MPI begin with the string MPI_. The first letter after the underscore is capitalized for function names and MPI-defined types. MPI-defined macros and constants are written entirely in capital letters.</p>
<h4 id="1-3-MPI-Init和MPI-Finalize"><a href="#1-3-MPI-Init和MPI-Finalize" class="headerlink" title="1.3 MPI_Init和MPI_Finalize"></a>1.3 MPI_Init and MPI_Finalize</h4><ul>
<li><p>Calling <strong>MPI_Init</strong> tells the MPI system to do all the necessary setup. For example, the system might need to allocate storage for message buffers and assign ranks to processes. As a rule of thumb, no other MPI function should be called before the program calls MPI_Init.</p>
</li>
<li><p>The syntax of MPI_Init is:</p>
<figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">MPI_Init</span><span class="params">(</span></span></span><br><span class="line"><span class="params"><span class="function">	<span class="keyword">int</span>* argc_p,</span></span></span><br><span class="line"><span class="params"><span class="function">	<span class="keyword">char</span>*** argv_p</span></span></span><br><span class="line"><span class="params"><span class="function">	)</span></span>;</span><br></pre></td></tr></table></figure>

<ul>
<li>The arguments argc_p and argv_p are pointers to the arguments argc and argv. However, when a program doesn't use these arguments, it can simply pass NULL for both.</li>
<li>Like most MPI functions, MPI_Init returns an int error code; in most cases we ignore these error codes.</li>
</ul>
</li>
<li><p>Calling <strong>MPI_Finalize</strong> tells the MPI system that we are done using MPI and that any resources allocated for MPI can be freed.</p>
<ul>
<li>Its signature is very simple:</li>
</ul>
<figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">MPI_Finalize</span><span class="params">(<span class="keyword">void</span>)</span></span>;</span><br></pre></td></tr></table></figure>

<ul>
<li>In general, no MPI functions should be called after the call to MPI_Finalize.</li>
</ul>
</li>
<li><p>Note: we are not required to pass pointers to argc and argv into MPI_Init, and we are not required to call MPI_Init and MPI_Finalize in main.</p>
</li>
</ul>
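<p>As a minimal sketch of the structure just described (passing <code>&amp;argc</code> and <code>&amp;argv</code> here is a choice for illustration; <code>NULL, NULL</code> works equally well when the arguments aren't needed):</p>

```c
/* Minimal MPI program skeleton: no MPI calls before MPI_Init,
 * none after MPI_Finalize. Compile with mpicc, run with mpiexec. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);   /* NULL, NULL would also be legal */
    printf("MPI is initialized\n");
    MPI_Finalize();           /* free any resources MPI allocated */
    return 0;
}
```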
<h4 id="1-4-通信子、MPI-Comm-size和MPI-Comm-rank"><a href="#1-4-通信子、MPI-Comm-size和MPI-Comm-rank" class="headerlink" title="1.4 通信子、MPI_Comm_size和MPI_Comm_rank"></a>1.4 Communicators, MPI_Comm_size, and MPI_Comm_rank</h4><ul>
<li><p>In MPI, a <strong>communicator</strong> is a collection of processes that can send messages to each other.</p>
</li>
<li><p>One purpose of MPI_Init is to define, when the user starts the program, a communicator consisting of all the processes started by the user. This communicator is called MPI_COMM_WORLD.</p>
</li>
<li><p>The syntax of MPI_Comm_size and MPI_Comm_rank is:</p>
<figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">MPI_Comm_size</span><span class="params">(</span></span></span><br><span class="line"><span class="params"><span class="function">	MPI_Comm comm,  <span class="comment">//in</span></span></span></span><br><span class="line"><span class="params"><span class="function">	<span class="keyword">int</span>* comm_sz_p	<span class="comment">//out</span></span></span></span><br><span class="line"><span class="params"><span class="function">)</span></span>;</span><br><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">MPI_Comm_rank</span><span class="params">(</span></span></span><br><span class="line"><span class="params"><span class="function">	MPI_Comm comm, <span class="comment">//in</span></span></span></span><br><span class="line"><span class="params"><span class="function">    <span class="keyword">int</span>* my_rank_p <span class="comment">//out</span></span></span></span><br><span class="line"><span class="params"><span class="function">)</span></span>;</span><br></pre></td></tr></table></figure></li>
<li><p>In both functions, the first argument is a communicator; it has the special type that MPI defines for communicators, MPI_Comm.</p>
</li>
<li><p>MPI_Comm_size returns the number of processes in the communicator in its second argument; MPI_Comm_rank returns the calling process's rank within the communicator in its second argument.</p>
</li>
<li><p>For MPI_COMM_WORLD, we conventionally use the variable comm_sz for the number of processes and my_rank for the process rank.</p>
</li>
</ul>
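<p>A small sketch of these two calls together (each process reports its own rank; the names comm_sz and my_rank follow the convention above):</p>

```c
/* Query the size of MPI_COMM_WORLD and this process's rank in it. */
#include <stdio.h>
#include <mpi.h>

int main(void) {
    int comm_sz;   /* out-parameter of MPI_Comm_size: number of processes */
    int my_rank;   /* out-parameter of MPI_Comm_rank: this process's rank */

    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    printf("Process %d of %d\n", my_rank, comm_sz);
    MPI_Finalize();
    return 0;
}
```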
<h4 id="1-5-SPMD程序"><a href="#1-5-SPMD程序" class="headerlink" title="1.5 SPMD程序"></a>1.5 SPMD Programs</h4><p>In parallel programming, a common pattern is for process 0 to receive and print the messages while the other processes generate and send them.</p>
<p>In fact, most MPI programs are written this way: a single program whose processes take different actions. This is achieved simply by having the processes branch on their rank. The approach is called single program, multiple data (SPMD).</p>
<h4 id="1-6-MPI-Send"><a href="#1-6-MPI-Send" class="headerlink" title="1.6 MPI_Send"></a>1.6 MPI_Send</h4><p>Each message is sent by a call to MPI_Send, whose syntax is:</p>
<figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">MPI_Send</span><span class="params">(</span></span></span><br><span class="line"><span class="params"><span class="function">	<span class="keyword">void</span>* msg_buf_p,</span></span></span><br><span class="line"><span class="params"><span class="function">	<span class="keyword">int</span>  msg_size,</span></span></span><br><span class="line"><span class="params"><span class="function">	MPI_Datatype msg_type,</span></span></span><br><span class="line"><span class="params"><span class="function">	<span class="keyword">int</span> dest,</span></span></span><br><span class="line"><span class="params"><span class="function">	<span class="keyword">int</span> tag,</span></span></span><br><span class="line"><span class="params"><span class="function">	MPI_Comm communicator</span></span></span><br><span class="line"><span class="params"><span class="function">)</span></span>;</span><br></pre></td></tr></table></figure>

<ul>
<li><p>The first argument, msg_buf_p, is a pointer to the block of memory containing the contents of the message.</p>
</li>
<li><p>The second argument, msg_size, is the size of the message: the number of elements to send. For our greeting string this is the length of the string plus one, so that the terminating '\0' character is included.</p>
</li>
<li><p>The third argument, msg_type, is the type of the data being sent. Since C types (int, char, and so on) can't be passed as arguments to functions, MPI defines a special type, MPI_Datatype, for the msg_type argument; its values include MPI_CHAR, MPI_SHORT, MPI_INT, MPI_LONG, MPI_LONG_LONG, MPI_UNSIGNED_CHAR, and MPI_UNSIGNED_SHORT.</p>
</li>
<li><p>The fourth argument, dest, specifies the rank of the process that should receive the message.</p>
</li>
<li><p>The fifth argument, tag, is a nonnegative int used to distinguish messages that would otherwise look identical. For example, suppose process 1 sends floats to process 0, some of which are to be printed and others used in a computation; to tell them apart, the floats to be printed can use tag 0 and the floats for computation tag 1.</p>
<p>The last argument is a communicator. All MPI functions that involve communication take a communicator argument. One of the most important purposes of a communicator is to define the scope of communication. A communicator is a collection of processes that can send messages to each other; conversely, a message sent by a process in one communicator cannot be received by a process in another communicator. Since MPI provides functions for creating new communicators, this property can be exploited in complex programs to ensure that messages aren't accidentally received in the wrong place.</p>
</li>
</ul>
<h4 id="1-7-MPI-Recv"><a href="#1-7-MPI-Recv" class="headerlink" title="1.7 MPI_Recv"></a>1.7 MPI_Recv</h4><p>The first six arguments of MPI_Recv correspond to the six arguments of MPI_Send:</p>
<figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">MPI_Recv</span><span class="params">(</span></span></span><br><span class="line"><span class="params"><span class="function">	<span class="keyword">void</span>* msg_buf_p,</span></span></span><br><span class="line"><span class="params"><span class="function">	<span class="keyword">int</span> buf_size,</span></span></span><br><span class="line"><span class="params"><span class="function">	MPI_Datatype buf_type,</span></span></span><br><span class="line"><span class="params"><span class="function">	<span class="keyword">int</span> source,</span></span></span><br><span class="line"><span class="params"><span class="function">	<span class="keyword">int</span> tag,</span></span></span><br><span class="line"><span class="params"><span class="function">	MPI_Comm communicator,</span></span></span><br><span class="line"><span class="params"><span class="function">	MPI_Status* status_p</span></span></span><br><span class="line"><span class="params"><span class="function">)</span></span>;</span><br></pre></td></tr></table></figure>

<ul>
<li>The first three arguments specify the memory available for <strong>receiving the message</strong>. The next three arguments identify the message; they must match the arguments used by the sending process.</li>
<li>The last argument, status_p, is not used by the caller in most cases; passing the special MPI constant MPI_STATUS_IGNORE is sufficient.</li>
</ul>
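<p>A minimal sketch of a matched pair of calls (the int payload, tag 0, and ranks 0 and 1 are illustrative choices; run with at least two processes):</p>

```c
/* Process 1 sends one int to process 0. The first six arguments of
 * MPI_Recv mirror MPI_Send's six; the seventh is the status argument. */
#include <stdio.h>
#include <mpi.h>

int main(void) {
    int my_rank, x = 0;

    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    if (my_rank == 1) {
        x = 42;
        MPI_Send(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);    /* dest 0, tag 0 */
    } else if (my_rank == 0) {
        MPI_Recv(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,     /* src 1, tag 0 */
                 MPI_STATUS_IGNORE);
        printf("Process 0 received %d\n", x);
    }
    MPI_Finalize();
    return 0;
}
```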
<h4 id="1-8-消息匹配"><a href="#1-8-消息匹配" class="headerlink" title="1.8 消息匹配"></a>1.8 Message Matching</h4><ul>
<li><p>Suppose process q calls MPI_Send:</p>
<figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="built_in">MPI_Send</span>(send_buf_p,send_buf_sz,send_type,dest,send_tag,send_comm);</span><br></pre></td></tr></table></figure></li>
<li><p>and suppose process r calls MPI_Recv:</p>
<figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="built_in">MPI_Recv</span>(recv_buf_p,recv_buf_sz,recv_type,src,recv_tag,recv_comm,&amp;status);</span><br></pre></td></tr></table></figure></li>
<li><p>Then the message sent by process q's call to MPI_Send can be received by process r's call to MPI_Recv if:</p>
<ul>
<li>recv_comm=send_comm,</li>
<li>recv_tag=send_tag,</li>
<li>dest=r and src=q.</li>
<li><strong>If, in addition, recv_type=send_type and recv_buf_sz (the receive buffer size) ≥ send_buf_sz (the send buffer size), then the message sent by process q can be successfully received by process r.</strong></li>
</ul>
</li>
<li><p>A single process can receive messages sent by several other processes, and the receiving process doesn't know the order in which the others will send. If the work assigned to each process takes an unpredictable amount of time, process 0 has no way of knowing the order in which the processes will finish; receiving the results in rank order would then force it to wait. To avoid this problem, MPI provides the special constant <strong>MPI_ANY_SOURCE</strong>, which can be passed to MPI_Recv. If process 0 executes the following code, it can receive the results in whatever order the processes finish:</p>
<figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">for</span>(i=<span class="number">1</span>;i&lt;comm_sz;i++)&#123;</span><br><span class="line">	<span class="built_in">MPI_Recv</span>(result,result_sz,result_type,MPI_ANY_SOURCE,result_tag,comm,</span><br><span class="line">		MPI_STATUS_IGNORE);</span><br><span class="line">	<span class="built_in">Process_result</span>(result);</span><br><span class="line">&#125;</span><br></pre></td></tr></table></figure></li>
<li><p>Similarly, a process may receive several messages with different tags from another process without knowing the order in which they will be sent. For this case, MPI provides the special constant <strong>MPI_ANY_TAG</strong>, which can be passed as the tag argument of MPI_Recv.</p>
</li>
<li><p>A couple of points should be stressed about these "wildcard" arguments:</p>
<ul>
<li>1. <strong>Only a receiver can use wildcard arguments.</strong> Senders must specify a process rank and a nonnegative tag. Thus MPI uses a "push" communication mechanism rather than a "pull" mechanism.</li>
<li>2. <strong>There is no wildcard for the communicator argument; both senders and receivers must specify a communicator.</strong></li>
</ul>
</li>
</ul>
<h4 id="1-9-status-p参数"><a href="#1-9-status-p参数" class="headerlink" title="1.9 status_p参数"></a>1.9 The status_p Argument</h4><p>Recalling the rules above, notice that a receiver can receive a message without knowing:</p>
<p>1. the amount of data in the message;</p>
<p>2. the sender of the message; or</p>
<p>3. the tag of the message.</p>
<p>So how can the receiver find out these values? Recall that the last argument of MPI_Recv has type <strong>MPI_Status</strong>*. The MPI type MPI_Status is a struct with at least three members: <strong>MPI_SOURCE, MPI_TAG, and MPI_ERROR</strong>.</p>
<ul>
<li><p>Suppose the program contains the definition <code>MPI_Status status;</code></p>
</li>
<li><p>Then, after passing &amp;status as the last argument to MPI_Recv and calling it, we can determine the sender and the tag by examining the two members:</p>
<figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">status.MPI_SOURCE</span><br><span class="line">status.MPI_TAG</span><br></pre></td></tr></table></figure></li>
<li><p>The amount of data received isn't stored in a field the application can access directly, but it can be retrieved with a call to MPI_Get_count. For example, suppose that in the call to MPI_Recv the receive buffer had type recv_type and that we again passed &amp;status; then the call</p>
<p><code>MPI_Get_count(&amp;status,recv_type,&amp;count)</code> returns the number of elements received in the count argument. In general, the syntax of MPI_Get_count is:</p>
<figure class="highlight c++"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">MPI_Get_count</span><span class="params">(</span></span></span><br><span class="line"><span class="params"><span class="function">	MPI_Status* status_p,</span></span></span><br><span class="line"><span class="params"><span class="function">	MPI_Datatype type,</span></span></span><br><span class="line"><span class="params"><span class="function">	<span class="keyword">int</span>* count_p</span></span></span><br><span class="line"><span class="params"><span class="function">	)</span></span>;</span><br></pre></td></tr></table></figure>

<p>Note that the count value isn't directly accessible as a member of the MPI_Status variable, because it depends on the type of the received data: determining it requires a computation (for example, the number of bytes received divided by the number of bytes per object). If this information isn't needed, there is no reason to waste a computation determining it.</p>
</li>
</ul>
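<p>The pieces above can be combined into a short sketch (the payload of three doubles, tag 7, and the 100-element receive buffer are illustrative choices; run with at least two processes):</p>

```c
/* Receive with wildcard source and tag, then inspect the status struct
 * to recover the sender, the tag, and (via MPI_Get_count) the count. */
#include <stdio.h>
#include <mpi.h>

int main(void) {
    int my_rank, count;
    double buf[100];
    MPI_Status status;

    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    if (my_rank == 1) {
        double data[3] = {1.0, 2.0, 3.0};
        MPI_Send(data, 3, MPI_DOUBLE, 0, 7, MPI_COMM_WORLD);
    } else if (my_rank == 0) {
        MPI_Recv(buf, 100, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_DOUBLE, &count);  /* elements received */
        printf("source=%d tag=%d count=%d\n",
               status.MPI_SOURCE, status.MPI_TAG, count);
    }
    MPI_Finalize();
    return 0;
}
```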
<h4 id="1-10-MPI-Send和MPI-Recv的语义"><a href="#1-10-MPI-Send和MPI-Recv的语义" class="headerlink" title="1.10 MPI_Send和MPI_Recv的语义"></a>1.10 Semantics of MPI_Send and MPI_Recv</h4><p>Many of the details of what happens when we send a message from one process to another depend on the particular system, but we can make some generalizations. <strong>The sending process assembles the message.</strong> For example, it adds "envelope" information to the actual data being transmitted: the rank of the destination process, the rank of the sending process, the tag, the communicator, and the size of the message. Once the message has been assembled, there are two possibilities: <strong>the sending process can buffer the message, or it can block</strong>. If it buffers the message, the MPI system places the message (data and envelope) in its own internal storage, and the call to MPI_Send returns.</p>
<p>On the other hand, if the system blocks, it will wait until it can begin transmitting the message, and the call to MPI_Send may not return immediately. Thus, if we use MPI_Send, when the function returns we don't actually know whether the message has been transmitted; we only know that the storage used for the message, the send buffer, is available for reuse by the program. If we need to know whether the message has been transmitted, or if we need MPI_Send to return immediately regardless of whether the message has been sent, MPI provides <strong>alternative send functions</strong>.</p>
<p>The exact behavior of MPI_Send is determined by the MPI implementation. However, typical implementations have a <strong>default "cutoff" message size</strong>: messages smaller than the cutoff are buffered, while messages larger than the cutoff cause the call to block.</p>
<p>Unlike MPI_Send, <strong>MPI_Recv always blocks until a matching message has been received</strong>. Thus, when a call to MPI_Recv returns, we know a message has been stored in the receive buffer (unless an error occurred). There are likewise <strong>alternative receive functions</strong> that check whether a matching message is available and return regardless of whether one has arrived.</p>
<p>MPI requires that messages be <strong>nonovertaking</strong>: if process q sends two messages to process r, the first message sent must be available to r before the second. However, there is no restriction on the arrival order of messages sent from different processes. This is essentially because MPI can't impose performance guarantees on the underlying network.</p>
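<p>The "alternative send functions" mentioned above include the nonblocking MPI_Isend, which always returns immediately; a later MPI_Wait tells us when the send buffer may be reused. A sketch under those semantics (the int payload and ranks are illustrative; run with at least two processes):</p>

```c
/* MPI_Isend returns at once; MPI_Wait later confirms that the send
 * buffer can safely be reused. */
#include <stdio.h>
#include <mpi.h>

int main(void) {
    int my_rank, x = 42;
    MPI_Request req;

    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    if (my_rank == 1) {
        MPI_Isend(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        /* ... other work can go here, but x must not be modified yet ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* now the buffer is reusable */
    } else if (my_rank == 0) {
        MPI_Recv(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 0 received %d\n", x);
    }
    MPI_Finalize();
    return 0;
}
```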
<h4 id="1-11-潜在的陷阱"><a href="#1-11-潜在的陷阱" class="headerlink" title="1.11 潜在的陷阱"></a>1.11 Potential Pitfalls</h4><p>The semantics of MPI_Recv suggest a pitfall in MPI programming: if a process tries to receive a message and there is no matching send, the process will block forever, i.e., the <strong>process hangs</strong>. So when we design programs, we need to make sure that every receive has a matching send. Just as important, we need to be very careful when coding to avoid <strong>mismatches</strong> in our calls to MPI_Send and MPI_Recv.</p>
<p>In short, if a call to MPI_Send blocks and there is no matching receive, the sending process hangs. If, on the other hand, the call to MPI_Send is buffered and there is no matching receive, the message is lost.</p>
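<p>When two processes must exchange messages, one common way to avoid such a hang is MPI_Sendrecv, which pairs the send and the receive in a single call; a sketch (assuming exactly two processes, which is an illustrative simplification):</p>

```c
/* Two processes exchange an int. Pairing the send and the receive in
 * MPI_Sendrecv avoids the hang that could occur if both processes
 * called a blocking MPI_Send first. */
#include <stdio.h>
#include <mpi.h>

int main(void) {
    int my_rank, partner, sendval, recvval;

    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    partner = (my_rank == 0) ? 1 : 0;   /* assumes exactly two processes */
    sendval = my_rank;
    MPI_Sendrecv(&sendval, 1, MPI_INT, partner, 0,
                 &recvval, 1, MPI_INT, partner, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("Process %d received %d\n", my_rank, recvval);
    MPI_Finalize();
    return 0;
}
```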
</div></article><div class="post-copyright"><div class="post-copyright__author"><span class="post-copyright-meta">Author: </span><span class="post-copyright-info"><a href="mailto:undefined">LiYang</a></span></div><div class="post-copyright__type"><span class="post-copyright-meta">Link: </span><span class="post-copyright-info"><a href="http://example.com/2021/09/14/learnMPI/">http://example.com/2021/09/14/learnMPI/</a></span></div><div class="post-copyright__notice"><span class="post-copyright-meta">Copyright Notice: </span><span class="post-copyright-info">All articles in this blog are licensed under <a target="_blank" rel="noopener" href="https://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA 4.0</a> unless stating additionally.</span></div></div><div class="post-meta__tag-list"><a class="post-meta__tags" href="/tags/MPI/">MPI</a></div><nav id="pagination"><div class="prev-post pull-left"><a href="/2021/09/15/Rust%E4%B8%AD%E7%9A%84Rc%E5%92%8CBox-leak-%E6%9C%BA%E5%88%B6/"><i class="fa fa-chevron-left">  </i><span>Rust中的Rc、Arc和Box::leak()机制</span></a></div></nav></div></div><footer><div class="layout" id="footer"><div class="copyright">&copy;2013 - 2022 By LiYang</div><div class="framework-info"><span>Driven - </span><a target="_blank" rel="noopener" href="http://hexo.io"><span>Hexo</span></a><span class="footer-separator">|</span><span>Theme - </span><a target="_blank" rel="noopener" href="https://github.com/Molunerfinn/hexo-theme-melody"><span>Melody</span></a></div><div class="busuanzi"><script async src="//busuanzi.ibruce.info/busuanzi/2.3/busuanzi.pure.mini.js"></script><span id="busuanzi_container_page_pv"><i class="fa fa-file"></i><span id="busuanzi_value_page_pv"></span><span></span></span></div></div></footer><i class="fa fa-arrow-up" id="go-up" aria-hidden="true"></i><script src="https://cdn.jsdelivr.net/npm/animejs@latest/anime.min.js"></script><script src="https://cdn.jsdelivr.net/npm/jquery@latest/dist/jquery.min.js"></script><script 
src="https://cdn.jsdelivr.net/npm/@fancyapps/fancybox@latest/dist/jquery.fancybox.min.js"></script><script src="https://cdn.jsdelivr.net/npm/velocity-animate@latest/velocity.min.js"></script><script src="https://cdn.jsdelivr.net/npm/velocity-ui-pack@latest/velocity.ui.min.js"></script><script src="/js/utils.js?version=1.9.0"></script><script src="/js/fancybox.js?version=1.9.0"></script><script src="/js/sidebar.js?version=1.9.0"></script><script src="/js/copy.js?version=1.9.0"></script><script src="/js/fireworks.js?version=1.9.0"></script><script src="/js/transition.js?version=1.9.0"></script><script src="/js/scroll.js?version=1.9.0"></script><script src="/js/head.js?version=1.9.0"></script><script>if(/Android|webOS|iPhone|iPod|iPad|BlackBerry/i.test(navigator.userAgent)) {
  $('#nav').addClass('is-mobile')
  $('footer').addClass('is-mobile')
  $('#top-container').addClass('is-mobile')
}</script></body></html>