<!DOCTYPE html><html><head>
      <title>Understanding Deep Learning Techniques for Image Segmentation</title>
      <meta charset="utf-8">
      <meta name="viewport" content="width=device-width, initial-scale=1.0">
      
      <link rel="stylesheet" href="file:///c:\Users\Administrator\.vscode\extensions\shd101wyy.markdown-preview-enhanced-0.5.2\node_modules\@shd101wyy\mume\dependencies\katex\katex.min.css">
      
      

      
      
      
      
      
      
      

      <style>
      /**
 * prism.js Github theme based on GitHub's theme.
 * @author Sam Clarke
 */
code[class*="language-"],
pre[class*="language-"] {
  color: #333;
  background: none;
  font-family: Consolas, "Liberation Mono", Menlo, Courier, monospace;
  text-align: left;
  white-space: pre;
  word-spacing: normal;
  word-break: normal;
  word-wrap: normal;
  line-height: 1.4;

  -moz-tab-size: 8;
  -o-tab-size: 8;
  tab-size: 8;

  -webkit-hyphens: none;
  -moz-hyphens: none;
  -ms-hyphens: none;
  hyphens: none;
}

/* Code blocks */
pre[class*="language-"] {
  padding: .8em;
  overflow: auto;
  /* border: 1px solid #ddd; */
  border-radius: 3px;
  /* background: #fff; */
  background: #f5f5f5;
}

/* Inline code */
:not(pre) > code[class*="language-"] {
  padding: .1em;
  border-radius: .3em;
  white-space: normal;
  background: #f5f5f5;
}

.token.comment,
.token.blockquote {
  color: #969896;
}

.token.cdata {
  color: #183691;
}

.token.doctype,
.token.punctuation,
.token.variable,
.token.macro.property {
  color: #333;
}

.token.operator,
.token.important,
.token.keyword,
.token.rule,
.token.builtin {
  color: #a71d5d;
}

.token.string,
.token.url,
.token.regex,
.token.attr-value {
  color: #183691;
}

.token.property,
.token.number,
.token.boolean,
.token.entity,
.token.atrule,
.token.constant,
.token.symbol,
.token.command,
.token.code {
  color: #0086b3;
}

.token.tag,
.token.selector,
.token.prolog {
  color: #63a35c;
}

.token.function,
.token.namespace,
.token.pseudo-element,
.token.class,
.token.class-name,
.token.pseudo-class,
.token.id,
.token.url-reference .token.variable,
.token.attr-name {
  color: #795da3;
}

.token.entity {
  cursor: help;
}

.token.title,
.token.title .token.punctuation {
  font-weight: bold;
  color: #1d3e81;
}

.token.list {
  color: #ed6a43;
}

.token.inserted {
  background-color: #eaffea;
  color: #55a532;
}

.token.deleted {
  background-color: #ffecec;
  color: #bd2c00;
}

.token.bold {
  font-weight: bold;
}

.token.italic {
  font-style: italic;
}


/* JSON */
.language-json .token.property {
  color: #183691;
}

.language-markup .token.tag .token.punctuation {
  color: #333;
}

/* CSS */
code.language-css,
.language-css .token.function {
  color: #0086b3;
}

/* YAML */
.language-yaml .token.atrule {
  color: #63a35c;
}

code.language-yaml {
  color: #183691;
}

/* Ruby */
.language-ruby .token.function {
  color: #333;
}

/* Markdown */
.language-markdown .token.url {
  color: #795da3;
}

/* Makefile */
.language-makefile .token.symbol {
  color: #795da3;
}

.language-makefile .token.variable {
  color: #183691;
}

.language-makefile .token.builtin {
  color: #0086b3;
}

/* Bash */
.language-bash .token.keyword {
  color: #0086b3;
}

/* highlight */
pre[data-line] {
  position: relative;
  padding: 1em 0 1em 3em;
}
pre[data-line] .line-highlight-wrapper {
  position: absolute;
  top: 0;
  left: 0;
  background-color: transparent;
  display: block;
  width: 100%;
}

pre[data-line] .line-highlight {
  position: absolute;
  left: 0;
  right: 0;
  padding: inherit 0;
  margin-top: 1em;
  background: hsla(24, 20%, 50%,.08);
  background: linear-gradient(to right, hsla(24, 20%, 50%,.1) 70%, hsla(24, 20%, 50%,0));
  pointer-events: none;
  line-height: inherit;
  white-space: pre;
}

pre[data-line] .line-highlight:before, 
pre[data-line] .line-highlight[data-end]:after {
  content: attr(data-start);
  position: absolute;
  top: .4em;
  left: .6em;
  min-width: 1em;
  padding: 0 .5em;
  background-color: hsla(24, 20%, 50%,.4);
  color: hsl(24, 20%, 95%);
  font: bold 65%/1.5 sans-serif;
  text-align: center;
  vertical-align: .3em;
  border-radius: 999px;
  text-shadow: none;
  box-shadow: 0 1px white;
}

pre[data-line] .line-highlight[data-end]:after {
  content: attr(data-end);
  top: auto;
  bottom: .4em;
}html body{font-family:"Helvetica Neue",Helvetica,"Segoe UI",Arial,freesans,sans-serif;font-size:16px;line-height:1.6;color:#333;background-color:#fff;overflow:initial;box-sizing:border-box;word-wrap:break-word}html body>:first-child{margin-top:0}html body h1,html body h2,html body h3,html body h4,html body h5,html body h6{line-height:1.2;margin-top:1em;margin-bottom:16px;color:#000}html body h1{font-size:2.25em;font-weight:300;padding-bottom:.3em}html body h2{font-size:1.75em;font-weight:400;padding-bottom:.3em}html body h3{font-size:1.5em;font-weight:500}html body h4{font-size:1.25em;font-weight:600}html body h5{font-size:1.1em;font-weight:600}html body h6{font-size:1em;font-weight:600}html body h1,html body h2,html body h3,html body h4,html body h5{font-weight:600}html body h5{font-size:1em}html body h6{color:#5c5c5c}html body strong{color:#000}html body del{color:#5c5c5c}html body a:not([href]){color:inherit;text-decoration:none}html body a{color:#08c;text-decoration:none}html body a:hover{color:#00a3f5;text-decoration:none}html body img{max-width:100%}html body>p{margin-top:0;margin-bottom:16px;word-wrap:break-word}html body>ul,html body>ol{margin-bottom:16px}html body ul,html body ol{padding-left:2em}html body ul.no-list,html body ol.no-list{padding:0;list-style-type:none}html body ul ul,html body ul ol,html body ol ol,html body ol ul{margin-top:0;margin-bottom:0}html body li{margin-bottom:0}html body li.task-list-item{list-style:none}html body li>p{margin-top:0;margin-bottom:0}html body .task-list-item-checkbox{margin:0 .2em .25em -1.8em;vertical-align:middle}html body .task-list-item-checkbox:hover{cursor:pointer}html body blockquote{margin:16px 0;font-size:inherit;padding:0 15px;color:#5c5c5c;border-left:4px solid #d6d6d6}html body blockquote>:first-child{margin-top:0}html body blockquote>:last-child{margin-bottom:0}html body hr{height:4px;margin:32px 0;background-color:#d6d6d6;border:0 none}html body table{margin:10px 0 15px 
0;border-collapse:collapse;border-spacing:0;display:block;width:100%;overflow:auto;word-break:normal;word-break:keep-all}html body table th{font-weight:bold;color:#000}html body table td,html body table th{border:1px solid #d6d6d6;padding:6px 13px}html body dl{padding:0}html body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:bold}html body dl dd{padding:0 16px;margin-bottom:16px}html body code{font-family:Menlo,Monaco,Consolas,'Courier New',monospace;font-size:.85em !important;color:#000;background-color:#f0f0f0;border-radius:3px;padding:.2em 0}html body code::before,html body code::after{letter-spacing:-0.2em;content:"\00a0"}html body pre>code{padding:0;margin:0;font-size:.85em !important;word-break:normal;white-space:pre;background:transparent;border:0}html body .highlight{margin-bottom:16px}html body .highlight pre,html body pre{padding:1em;overflow:auto;font-size:.85em !important;line-height:1.45;border:#d6d6d6;border-radius:3px}html body .highlight pre{margin-bottom:0;word-break:normal}html body pre code,html body pre tt{display:inline;max-width:initial;padding:0;margin:0;overflow:initial;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}html body pre code:before,html body pre tt:before,html body pre code:after,html body pre tt:after{content:normal}html body p,html body blockquote,html body ul,html body ol,html body dl,html body pre{margin-top:0;margin-bottom:16px}html body kbd{color:#000;border:1px solid #d6d6d6;border-bottom:2px solid #c7c7c7;padding:2px 4px;background-color:#f0f0f0;border-radius:3px}@media print{html body{background-color:#fff}html body h1,html body h2,html body h3,html body h4,html body h5,html body h6{color:#000;page-break-after:avoid}html body blockquote{color:#5c5c5c}html body pre{page-break-inside:avoid}html body table{display:table}html body img{display:block;max-width:100%;max-height:100%}html body pre,html body 
code{word-wrap:break-word;white-space:pre}}.markdown-preview{width:100%;height:100%;box-sizing:border-box}.markdown-preview .pagebreak,.markdown-preview .newpage{page-break-before:always}.markdown-preview pre.line-numbers{position:relative;padding-left:3.8em;counter-reset:linenumber}.markdown-preview pre.line-numbers>code{position:relative}.markdown-preview pre.line-numbers .line-numbers-rows{position:absolute;pointer-events:none;top:1em;font-size:100%;left:0;width:3em;letter-spacing:-1px;border-right:1px solid #999;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.markdown-preview pre.line-numbers .line-numbers-rows>span{pointer-events:none;display:block;counter-increment:linenumber}.markdown-preview pre.line-numbers .line-numbers-rows>span:before{content:counter(linenumber);color:#999;display:block;padding-right:.8em;text-align:right}.markdown-preview .mathjax-exps .MathJax_Display{text-align:center !important}.markdown-preview:not([for="preview"]) .code-chunk .btn-group{display:none}.markdown-preview:not([for="preview"]) .code-chunk .status{display:none}.markdown-preview:not([for="preview"]) .code-chunk .output-div{margin-bottom:16px}.scrollbar-style::-webkit-scrollbar{width:8px}.scrollbar-style::-webkit-scrollbar-track{border-radius:10px;background-color:transparent}.scrollbar-style::-webkit-scrollbar-thumb{border-radius:5px;background-color:rgba(150,150,150,0.66);border:4px solid rgba(150,150,150,0.66);background-clip:content-box}html body[for="html-export"]:not([data-presentation-mode]){position:relative;width:100%;height:100%;top:0;left:0;margin:0;padding:0;overflow:auto}html body[for="html-export"]:not([data-presentation-mode]) .markdown-preview{position:relative;top:0}@media screen and (min-width:914px){html body[for="html-export"]:not([data-presentation-mode]) .markdown-preview{padding:2em calc(50% - 457px + 2em)}}@media screen and (max-width:914px){html body[for="html-export"]:not([data-presentation-mode]) 
.markdown-preview{padding:2em}}@media screen and (max-width:450px){html body[for="html-export"]:not([data-presentation-mode]) .markdown-preview{font-size:14px !important;padding:1em}}@media print{html body[for="html-export"]:not([data-presentation-mode]) #sidebar-toc-btn{display:none}}html body[for="html-export"]:not([data-presentation-mode]) #sidebar-toc-btn{position:fixed;bottom:8px;left:8px;font-size:28px;cursor:pointer;color:inherit;z-index:99;width:32px;text-align:center;opacity:.4}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] #sidebar-toc-btn{opacity:1}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc{position:fixed;top:0;left:0;width:300px;height:100%;padding:32px 0 48px 0;font-size:14px;box-shadow:0 0 4px rgba(150,150,150,0.33);box-sizing:border-box;overflow:auto;background-color:inherit}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar{width:8px}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar-track{border-radius:10px;background-color:transparent}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar-thumb{border-radius:5px;background-color:rgba(150,150,150,0.66);border:4px solid rgba(150,150,150,0.66);background-clip:content-box}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc a{text-decoration:none}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc ul{padding:0 1.6em;margin-top:.8em}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc li{margin-bottom:.8em}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc ul{list-style-type:none}html 
body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{left:300px;width:calc(100% -  300px);padding:2em calc(50% - 457px -  150px);margin:0;box-sizing:border-box}@media screen and (max-width:1274px){html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{padding:2em}}@media screen and (max-width:450px){html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{width:100%}}html body[for="html-export"]:not([data-presentation-mode]):not([html-show-sidebar-toc]) .markdown-preview{left:50%;transform:translateX(-50%)}html body[for="html-export"]:not([data-presentation-mode]):not([html-show-sidebar-toc]) .md-sidebar-toc{display:none}
/* Please visit the URL below for more information: */
/*   https://shd101wyy.github.io/markdown-preview-enhanced/#/customize-css */

      </style>
    </head>
    <body for="html-export">
      <div class="mume markdown-preview  ">
      <ul>
<li><a href="#abstract">Abstract</a></li>
<li><a href="#1-introduction">1 Introduction</a></li>
<li><a href="#2-motivation">2 Motivation</a>
<ul>
<li><a href="#21-contribution">2.1 Contribution</a></li>
</ul>
</li>
<li><a href="#3-impact-of-deep-learning-on-image-segmentation">3 Impact of Deep Learning on Image Segmentation</a>
<ul>
<li><a href="#31-effectiveness-of-convolutions-for-segmentation">3.1 Effectiveness of convolutions for segmentation</a></li>
<li><a href="#32-impact-of-larger-and-more-complex-datasets">3.2 Impact of larger and more complex datasets</a></li>
</ul>
</li>
<li><a href="#4-image-segmentation-using-deep-learning">4 Image Segmentation using Deep Learning</a>
<ul>
<li><a href="#41-convolutional-neural-networks">4.1 Convolutional Neural Networks</a>
<ul>
<li><a href="#411-fully-convolutional-layers">4.1.1 Fully convolutional layers</a></li>
<li><a href="#412-region-proposal-networks">4.1.2 Region proposal networks</a></li>
<li><a href="#413-deeplab">4.1.3 DeepLab</a></li>
<li><a href="#414-using-inter-pixel-correlation-to-improve-cnn-based-segmentation">4.1.4 Using inter pixel correlation to improve CNN based segmentation</a></li>
<li><a href="#415-multi-scale-networks">4.1.5 Multi-scale networks</a></li>
</ul>
</li>
<li><a href="#42-convolutional-autoencoders">4.2 Convolutional autoencoders</a>
<ul>
<li><a href="#421-skip-connections">4.2.1 Skip Connections</a></li>
<li><a href="#422-forwarding-pooling-indices">4.2.2 Forwarding pooling indices</a></li>
</ul>
</li>
<li><a href="#43-adversarial-models">4.3 Adversarial Models</a></li>
<li><a href="#44-sequential-models">4.4 Sequential Models</a>
<ul>
<li><a href="#441-recurrent-models">4.4.1 Recurrent Models</a></li>
<li><a href="#442-attention-models">4.4.2 Attention Models</a></li>
</ul>
</li>
<li><a href="#45-weakly-supervised-or-unsupervised-models">4.5 Weakly Supervised or Unsupervised Models</a>
<ul>
<li><a href="#451-weakly-supervised-algorithms">4.5.1 Weakly supervised algorithms</a></li>
</ul>
</li>
</ul>
</li>
</ul>
<h1 class="mume-header" id="abstract">Abstract</h1>

<ul>
<li>The machine learning community has been overwhelmed by a plethora of deep learning based approaches.</li>
<li>Many challenging computer vision tasks such as detection, localization, recognition and segmentation of objects in unconstrained environments are being efficiently addressed by various types of deep neural networks, like convolutional neural networks, recurrent networks, adversarial networks, autoencoders and so on.</li>
<li>While there have been plenty of analytical studies in the object detection and recognition domains, many new deep learning techniques have surfaced with respect to image segmentation.</li>
<li>This paper approaches these various deep learning techniques of image segmentation from an analytical perspective.</li>
<li>The main goal of this work is to provide an intuitive understanding of the major techniques that have made significant contributions to the image segmentation domain.</li>
<li>Starting from some of the traditional image segmentation approaches, the paper progresses to describe the effect deep learning has had on the image segmentation domain.</li>
<li>Thereafter, most of the major segmentation algorithms are logically categorized, with paragraphs dedicated to their unique contributions.</li>
<li>With an ample amount of intuitive explanations, the reader is expected to have an improved ability to visualize the internal dynamics of these processes.</li>
</ul>
<h1 class="mume-header" id="1-introduction">1 Introduction</h1>

<p><strong>paragraph 1</strong></p>
<ul>
<li>Image segmentation can be defined as a specific image processing technique which is used to divide an image into two or more meaningful regions.</li>
<li>Image segmentation can also be seen as a process of defining boundaries between separate semantic entities in an image.</li>
<li>From a more technical perspective, image segmentation is a process of assigning a label to each pixel in the image such that pixels with the same label are connected with respect to some visual or semantic property (Fig. 1).</li>
</ul>
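<p>The per-pixel labeling view above can be made concrete with a minimal sketch (a hypothetical 4&#xD7;4 label map with made-up classes, not from the paper): a segmentation is just an integer array of the same height and width as the image, and each class is recoverable as a boolean mask.</p>

```python
import numpy as np

# Hypothetical 4x4 segmentation label map with three classes:
# 0 = background, 1 = road, 2 = car. Pixels sharing a label
# form one semantic region of the image.
labels = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 2, 1],
    [0, 0, 1, 1],
])

# The binary mask of any single class is a simple comparison.
car_mask = labels == 2
print(car_mask.sum())     # -> 4 pixels labelled "car"
print(np.unique(labels))  # -> [0 1 2], the classes present
```

<p>Semantic segmentation models output exactly this kind of map (usually as per-pixel class scores that are argmax-ed into labels); instance segmentation additionally distinguishes separate objects of the same class.</p>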
<p><strong>paragraph 2</strong></p>
<ul>
<li>Image segmentation subsumes a large class of finely related problems in computer vision. The most classic version is semantic segmentation [66].</li>
<li>In semantic segmentation, each pixel is classified into one of a predefined set of classes such that pixels belonging to the same class belong to a unique semantic entity in the image.</li>
<li>It is also worth noting that the semantics in question depend not only on the data but also on the problem that needs to be addressed.</li>
<li>For example, for a pedestrian detection system, the whole body of a person should belong to the same segment; however, for an action recognition system, it might be necessary to segment different body parts into different classes.</li>
<li>Other forms of image segmentation can focus on the most important object in a scene. A particular class of problem called saliency detection [19] is born from this.</li>
<li>Other variants of this domain can be foreground-background separation problems. In many systems, like image retrieval or visual question answering, it is often necessary to count the number of objects; instance-specific segmentation addresses that issue. Instance-specific segmentation is often coupled with object detection systems to detect and segment multiple instances of the same object [43] in a scene.</li>
<li>Segmentation in the temporal space is also a challenging domain and has various applications. In object tracking scenarios, pixel-level classification is performed not only in the spatial domain but also across time.</li>
<li>Other applications in traffic analysis or surveillance need to perform motion segmentation to analyze the paths of moving objects. At a lower semantic level, over-segmentation is also a common approach, where images are divided into extremely small regions to ensure boundary adherence, at the cost of creating a lot of spurious edges.<br>
Over-segmentation algorithms are often combined with region merging techniques to perform image segmentation. Even simple color or texture segmentation finds its use in various scenarios. Another important distinction between segmentation algorithms is the need for interaction from the user. While it is desirable to have fully automated systems, a little interaction from the user can improve the quality of segmentation to a large extent. This is especially applicable where we are dealing with complex scenes or do not possess an ample amount of data to train the system.</li>
</ul>
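<p>The over-segment-then-merge idea above can be sketched with a toy example. This is not any specific algorithm from the survey: real pipelines use superpixel methods such as SLIC for the over-segmentation step, but the two-phase structure (many tiny boundary-hugging regions, then merging of similar neighbors) is the same. The block splitting and the intensity tolerance below are illustrative assumptions.</p>

```python
import numpy as np

def grid_oversegment(img, block=2):
    """Toy over-segmentation: tile the image with small square blocks,
    each its own region (stand-in for a superpixel method)."""
    h, w = img.shape
    seg = np.zeros((h, w), dtype=int)
    label = 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            seg[i:i + block, j:j + block] = label
            label += 1
    return seg

def merge_similar(img, seg, tol=10.0):
    """Toy region merging: one greedy pass that relabels any region
    whose mean intensity is within `tol` of an earlier region's."""
    means = {r: img[seg == r].mean() for r in np.unique(seg)}
    mapping, reps = {}, []
    for r, m in means.items():
        for rep in reps:
            if abs(means[rep] - m) < tol:
                mapping[r] = rep
                break
        else:
            reps.append(r)
            mapping[r] = r
    return np.vectorize(mapping.get)(seg)

# A 4x4 image whose left half is dark and right half is bright.
img = np.array([[10, 10, 200, 200]] * 4, dtype=float)
seg = grid_oversegment(img, block=2)   # 4 tiny regions
merged = merge_similar(img, seg)       # collapses to 2 regions
print(len(np.unique(seg)), len(np.unique(merged)))  # -> 4 2
```

<p>A practical system would merge only spatially adjacent regions and use richer features than mean intensity, but even this sketch shows how over-segmentation defers the hard boundary decisions to a cheaper merging phase.</p>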
<p><strong>paragraph 3</strong></p>
<ul>
<li>Segmentation algorithms have several applications in the real world. In medical image processing [123], we need to localize various abnormalities like aneurysms [48], tumors [145], cancerous elements such as melanoma [189], or specific organs during surgeries [206].</li>
<li>Another domain where segmentation is important is surveillance. Many problems such as pedestrian detection [113] and traffic surveillance [60] require the segmentation of specific objects, e.g. persons or cars.</li>
<li>Other domains include satellite imagery [11, 17], guidance systems in defense [119], and forensics such as face [5], iris [51] and fingerprint [144] recognition.</li>
<li>Generally, traditional methods such as histogram thresholding [195], hybridization [193, 87], feature space clustering [40], region-based approaches [59], edge detection approaches [184], fuzzy approaches [39], entropy-based approaches [47], neural networks (Hopfield neural networks [35], self-organizing maps [27]), physics-based approaches [158], etc. are popularly used for this purpose.</li>
<li>However, such feature-based approaches have a common bottleneck: they depend on the quality of the features extracted by domain experts. Generally, humans are bound to miss latent or abstract features for image segmentation.</li>
<li>Deep learning, on the other hand, addresses this issue through automated feature learning. In this regard, one of the most common techniques in computer vision, the convolutional neural network [110], was soon introduced; it learns a cascaded set of convolutional kernels through backpropagation [182]. Since then, it has been improved significantly with features like layer-wise training [13], rectified linear activations [153], batch normalization [84], auxiliary classifiers [52], atrous convolutions [211], skip connections [78], better optimization techniques [97] and so on.</li>
<li>Alongside all of these, a large number of new image segmentation techniques appeared as well. Many of them drew inspiration from popular networks such as AlexNet [104], convolutional autoencoders [141], recurrent neural networks [143], residual networks [78] and so on.</li>
</ul>
<h1 class="mume-header" id="2-motivation">2 Motivation</h1>

<ul>
<li>There have been many reviews and surveys of the traditional technologies associated with image segmentation [61, 160]. Some of them specialized in application areas [107, 123, 185], while others focused on specific types of algorithms [20, 19, 59].</li>
<li>With the arrival of deep learning techniques, many new classes of image segmentation algorithms have surfaced. Earlier studies [219] have shown the potential of deep learning based approaches, and more recent studies [68] cover a number of methods and compare them on the basis of their reported performance.</li>
<li>The work of Garcia et al. [66] lists a variety of deep learning based segmentation techniques and tabulates the performance of various state-of-the-art networks on several modern challenges. These resources are incredibly useful for understanding the current state of the art in this domain. While knowing the available methods is quite useful for developing products, contributing to this domain as a researcher requires understanding the underlying mechanics that make the methods effective.</li>
<li>In the present work, our main motivation is to answer the question of why the methods are designed the way they are. Understanding the mechanics of modern techniques makes it easier to tackle new challenges and develop better algorithms.</li>
<li>Our approach carefully analyses each method to understand why it succeeds at what it does and why it fails on certain problems. With the pros and cons of each method in mind, new designs can be initiated that reap the benefits of the former and overcome the latter.</li>
<li>We recommend the work of Alberto Garcia-Garcia [66] for an overview of some of the best image segmentation techniques using deep learning, while our focus is on understanding why, when and how these techniques perform on various challenges.</li>
</ul>
<div align="center"><img src="./resource/img1.png" width="800"></div>
<center>fig. 2</center>
<h2 class="mume-header" id="21-contribution">2.1 Contribution</h2>

<ul>
<li>The paper has been designed so that new researchers reap the most benefit. Initially, some of the traditional techniques are discussed to outline the frameworks that preceded the deep learning era. Gradually, the various factors governing the onset of deep learning are discussed so that readers get a good idea of the current direction in which machine learning is progressing.</li>
<li>In the subsequent sections, the major deep learning algorithms are briefly described in a generic way to establish a clearer concept of the procedures in the mind of the reader. The image segmentation algorithms discussed thereafter are categorized into the major families of algorithms that have governed this domain over the last few years.</li>
<li>The concepts behind all the major approaches are explained in very simple language with a minimum of complicated mathematics. Almost all the diagrams corresponding to the major networks have been drawn using a common representational format, as shown in fig. 2.</li>
<li>The various approaches discussed originally come with different representations of their architectures. The unified representation scheme allows the reader to understand the fundamental similarities and differences between networks. Finally, the major application areas are discussed to help new researchers pursue a field of their choice.</li>
</ul>
<h1 class="mume-header" id="3-impact-of-deep-learning-on-image-segmentation">3 Impact of Deep Learning on Image Segmentation</h1>

<ul>
<li>The development of deep learning algorithms like convolutional neural networks and deep autoencoders not only affected typical tasks like object classification but also proved efficient in related tasks like object detection, localization, tracking, or, as in this case, image segmentation.</li>
</ul>
<h2 class="mume-header" id="31-effectiveness-of-convolutions-for-segmentation">3.1 Effectiveness of convolutions for segmentation</h2>

<ul>
<li>As an operation, convolution can be defined simply as a function that computes a sum of products between the kernel weights and the input values as the smaller kernel convolves over a larger image.</li>
<li>For a typical image with k channels, we can convolve a smaller kernel, also with k channels, along the x and y directions to obtain an output in the form of a 2-dimensional matrix. It has been observed that after training a typical CNN, the convolutional kernels tend to generate activation maps with respect to certain features of the objects [214].</li>
<li>Given the nature of these activations, they can be seen as segmentation masks of object-specific features. Hence, the key to generating requirement-specific segmentation is already embedded within these output activation matrices.</li>
<li>Most image segmentation algorithms use this property of CNNs to somehow generate the segmentation masks required to solve the problem. As shown below in fig. 3, the earlier layers capture local features like the contour or a small part of an object.</li>
<li>In the later layers, more global features are activated, such as field, people or sky. It can also be noted from this figure that the earlier layers show sharper activations compared to the later ones.</li>
</ul>
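The sum-of-products operation described above can be sketched in a few lines of NumPy. This is a minimal illustration of a valid (no-padding) convolution of a k-channel image with a single k-channel kernel; the toy array sizes are arbitrary assumptions:

```python
import numpy as np

def conv2d(image, kernel):
    """Convolve a k-channel image with a single k-channel kernel.

    image:  (H, W, k) array
    kernel: (kh, kw, k) array
    Returns one 2-dimensional activation map of shape (H-kh+1, W-kw+1).
    """
    H, W, _ = image.shape
    kh, kw, _ = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sum of products between the kernel weights and the input patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw, :] * kernel)
    return out

img = np.random.rand(8, 8, 3)  # a toy 3-channel "image"
ker = np.random.rand(3, 3, 3)  # a 3x3 kernel with matching channels
print(conv2d(img, ker).shape)  # (6, 6): a single 2-D output map
```

A trained CNN stacks many such kernels per layer, so each layer emits one 2-dimensional activation map per kernel.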
<div align="center"><img src="./resource/img2.png" width="800"></div>
<center>fig. 3 An input image and sample activation maps from a typical CNN. The top row shows the input image and two activation maps from earlier layers, highlighting features such as object parts (e.g. the T-shirt) and contours. The bottom row shows more meaningful activations from later layers, such as field, people and sky</center>
<h2 class="mume-header" id="32-impact-of-larger-and-more-complex-datasets">3.2 Impact of larger and more complex datasets</h2>

<ul>
<li>The second impact that deep learning brought to the world of image segmentation is the plethora of datasets, challenges and competitions. These factors encouraged researchers across the world to come up with various state-of-the-art technologies for implementing segmentation across various domains. A list of many such datasets is provided in table 1.</li>
</ul>
<h1 class="mume-header" id="4-image-segmentation-using-deep-learning">4 Image Segmentation using Deep Learning</h1>

<ul>
<li>As explained before, convolutions are quite effective at generating semantic activation maps whose components inherently constitute various semantic segments. Various methods have been implemented to make use of these internal activations to segment images. A summary of the major deep learning based segmentation algorithms is provided in table 2, along with brief descriptions of their major contributions.</li>
</ul>
<h2 class="mume-header" id="41-convolutional-neural-networks">4.1 Convolutional Neural Networks</h2>

<ul>
<li>Convolutional neural networks, being one of the most commonly used methods in computer vision, have adopted many simple modifications to perform well in segmentation tasks as well.</li>
</ul>
<h3 class="mume-header" id="411-fully-convolutional-layers">4.1.1 Fully convolutional layers</h3>

<ul>
<li>Classification tasks generally require a linear output in the form of a probability distribution over the number of classes. To convert volumes of 2-dimensional activation maps into linear layers, they were often flattened.</li>
<li>The flattened shape allowed fully connected layers to be applied to obtain the probability distribution. However, this kind of reshaping loses the spatial relations among the pixels of the image. In a fully convolutional neural network (FCN) [130], the output of the last convolutional block is instead used directly for a pixel-level classification. FCNs were first implemented on the PASCAL VOC 2011 segmentation dataset [54] and achieved a pixel accuracy of 90.3% and a mean IOU of 62.7%.</li>
<li>Another way to avoid fully connected linear layers is to use full-size average pooling to convert a set of 2-dimensional activation maps into a set of scalar values. As these pooled scalars are connected to the output layer, the weights corresponding to each class may be used to perform a weighted summation of the corresponding activation maps in the previous layer. This process, called Global Average Pooling (GAP) [121], can be applied directly to various trained networks, such as residual networks, to find object-specific activation zones that can be used for pixel-level segmentation.</li>
<li>The major issue with algorithms such as these is the loss of sharpness due to the intermediate sub-sampling operations. Sub-sampling is a common operation in convolutional neural networks for increasing the receptive area of the kernels.</li>
<li>This means that as the activation maps shrink in the subsequent layers, the kernels convolving over them actually correspond to a larger area of the original image. However, the process reduces the image size, and up-sampling back to the original size loses sharpness. Many approaches have been implemented to handle this issue.</li>
<li>For fully convolutional models, skip connections from preceding layers can be used to obtain sharper versions of the activations, from which finer segments can be chalked out (Refer fig. 4). Another work showed how using high-dimensional kernels to capture global information in FCN models created better segmentation masks [165].</li>
</ul>
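The GAP-based localization described above can be sketched as follows, in the spirit of class activation maps [121]. The shapes and random inputs are illustrative assumptions, not taken from any particular network:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights, class_idx):
    """Weighted sum of last-layer activation maps for one class.

    feature_maps:  (C, H, W) activations from the final conv block
    class_weights: (num_classes, C) weights of the GAP-to-output layer
    Returns an (H, W) map highlighting class-specific activation zones.
    """
    w = class_weights[class_idx]                  # (C,) weights for this class
    return np.tensordot(w, feature_maps, axes=1)  # sum over channels -> (H, W)

feats = np.random.rand(64, 7, 7)
# Global average pooling itself is just the spatial mean of each channel:
pooled = feats.mean(axis=(1, 2))  # (64,) scalars fed to the classifier
cam = class_activation_map(feats, np.random.rand(10, 64), class_idx=3)
print(cam.shape)  # (7, 7)
```

Upsampling such a map back to the input resolution gives a coarse object-specific mask, which is exactly where the sharpness issue discussed above appears.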
<div align="center"><img src="./resource/fig4.png" width="500"></div>
<center>fig. 4</center>
<ul>
<li>Segmentation algorithms can also be treated as boundary detection techniques, and convolutional features are very useful from that perspective as well [139]. While earlier layers can provide fine details, later layers focus more on the coarser boundaries.</li>
</ul>
<p><strong>DeepMask and SharpMask</strong></p>
<ul>
<li>DeepMask [166] was the name given to a project at Facebook AI Research (FAIR) related to image segmentation. It exhibited the same school of thought as FCN models, except that the model was capable of multi-tasking (Refer fig. 5).</li>
</ul>
<div align="center"><img src="./resource/fig5.png" width="500"></div>
<center>fig. 5</center>
<ul>
<li>It had two main branches coming out of a shared feature representation. One of them created a pixel-level classification, or probabilistic mask, for the central object, and the second branch generated a score corresponding to the object recognition accuracy.</li>
<li>The network was coupled with sliding windows of stride sixteen to create segments of objects at various locations in the image, whereas the score helped in identifying which of the segments were good.</li>
<li>The network was further upgraded in SharpMask [167], where probabilistic masks from each layer were combined in a top-down fashion using convolutional refinements at every step to generate high-resolution masks (Refer fig. 6). SharpMask scored an average recall of 39.3 on the MS COCO segmentation dataset, beating DeepMask, which scored 36.6.</li>
</ul>
<div align="center"><img src="./resource/fig6.png" width="500"></div>
<center>fig. 6</center>
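<p>The top-down refinement described above can be sketched in a few lines of NumPy. This is a minimal illustration, not SharpMask itself: the learned convolutional refinement module is replaced by a plain average of the upsampled mask and the encoder's skip features, and nearest-neighbour upsampling stands in for the learned upsampling.</p>

```python
import numpy as np

def upsample2x(mask):
    # Nearest-neighbour 2x upsampling of a 2-D map.
    return mask.repeat(2, axis=0).repeat(2, axis=1)

def refine(coarse_mask, skip_features):
    """Top-down refinement in the spirit of SharpMask: start from a
    low-resolution mask and repeatedly upsample it, fusing skip
    features from the matching encoder layer at each step (here a
    simple average stands in for the learned refinement module)."""
    mask = coarse_mask
    for feat in skip_features:          # ordered coarse -> fine
        mask = upsample2x(mask)
        assert mask.shape == feat.shape
        mask = 0.5 * (mask + feat)      # placeholder fusion
    return mask

# A 4x4 coarse mask refined to 16x16 using two skip levels.
coarse = np.zeros((4, 4)); coarse[1:3, 1:3] = 1.0
skips = [np.random.rand(8, 8), np.random.rand(16, 16)]
print(refine(coarse, skips).shape)      # (16, 16)
```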
<h3 class="mume-header" id="412-region-proposal-networks">4.1.2 Region proposal networks</h3>

<p><strong>4.1.2 &#x533A;&#x57DF;&#x751F;&#x6210;&#x7F51;&#x7EDC;</strong></p>
<ul>
<li>Another similar wing that started developing with image segmentation was object localization. Task such as this involved locating specific objects in images. Expected outputs for such problems is normally a set of bounding boxes corresponding to the queried objects. Though strictly stating, some of these algo- rithms do not address image segmentation problems, however their approaches are of relevance to this domain.<br>
&#x53E6;&#x4E00;&#x4E2A;&#x4E0E;&#x56FE;&#x50CF;&#x5206;&#x5272;&#x4E00;&#x8D77;&#x5F00;&#x59CB;&#x53D1;&#x5C55;&#x7684;&#x76F8;&#x4F3C;&#x9886;&#x57DF;&#x662F;&#x7269;&#x4F53;&#x5B9A;&#x4F4D;&#x3002;&#x6B64;&#x7C7B;&#x4EFB;&#x52A1;&#x5305;&#x542B;&#x4E86;&#x5B9A;&#x4F4D;&#x56FE;&#x50CF;&#x4E2D;&#x7279;&#x5B9A;&#x7684;&#x7269;&#x4F53;&#x3002;&#x6B64;&#x7C7B;&#x95EE;&#x9898;&#x6240;&#x671F;&#x671B;&#x7684;&#x8F93;&#x51FA;&#x901A;&#x5E38;&#x4E3A;&#x4E00;&#x7EC4;&#x4E0E;&#x5BFB;&#x627E;&#x7684;&#x7269;&#x4F53;&#x6240;&#x5BF9;&#x5E94;&#x7684;bounding box&#x3002;&#x5C3D;&#x7BA1;&#x4E25;&#x683C;&#x6765;&#x8BF4;&#xFF0C;&#x8FD9;&#x4E9B;&#x7B97;&#x6CD5;&#x4E2D;&#x7684;&#x4E00;&#x90E8;&#x5206;&#x5E76;&#x4E0D;&#x89E3;&#x51B3;&#x56FE;&#x50CF;&#x5206;&#x5272;&#x95EE;&#x9898;&#xFF0C;&#x4F46;&#x662F;&#x4ED6;&#x4EEC;&#x7684;&#x76EE;&#x7684;&#x4E0E;&#x8FD9;&#x4E2A;&#x9886;&#x57DF;&#x76F8;&#x5173;</li>
</ul>
<p><strong>RCNN (Region-based Convolutional Neural Networks)</strong><br>
<strong>RCNN&#xFF08;&#x57FA;&#x4E8E;&#x533A;&#x57DF;&#x7684;&#x5377;&#x79EF;&#x795E;&#x7ECF;&#x7F51;&#x7EDC;&#xFF09;</strong></p>
<ul>
<li>The introduction of the CNNs raised many new questions in the domain of computer vision. One of them primarily being whether a network like AlexNet can be extended to detect the presence of more than one object. Region-based-CNN [70] or more commonly known as R-CNN used selective search technique to propose probable object regions and performed classification on the cropped window to verify sensible localization based on the output probability distribution.<br>
&#x5BF9;&#x4E8E;CNN&#x7684;&#x4ECB;&#x7ECD;&#x5F15;&#x8D77;&#x4E86;&#x8BA1;&#x7B97;&#x673A;&#x89C6;&#x89C9;&#x9886;&#x57DF;&#x4E2D;&#x8BB8;&#x591A;&#x65B0;&#x7684;&#x95EE;&#x9898;&#x3002;&#x5176;&#x4E2D;&#x4E00;&#x4E2A;&#x4E3B;&#x8981;&#x7684;&#x95EE;&#x9898;&#x4E3A;&#x50CF;AlexNet&#x4E4B;&#x7C7B;&#x7684;&#x7F51;&#x7EDC;&#x80FD;&#x5426;&#x591F;&#x88AB;&#x62D3;&#x5C55;&#x4EE5;&#x68C0;&#x6D4B;&#x4E00;&#x4E2A;&#x4EE5;&#x4E0A;&#x7269;&#x4F53;&#x7684;&#x5B58;&#x5728;&#x3002;&#x57FA;&#x4E8E;&#x533A;&#x57DF;&#x7684;CNN&#xFF0C;&#x6216;&#x8005;&#x66F4;&#x901A;&#x4FD7;&#x5730;&#x8BF4;&#xFF0C;R-CNN&#x4F7F;&#x7528;&#x9009;&#x62E9;&#x6027;&#x641C;&#x7D22;&#x6280;&#x672F;&#x6765;&#x63D0;&#x51FA;&#x53EF;&#x80FD;&#x7684;&#x7269;&#x4F53;&#x533A;&#x57DF;&#x5E76;&#x5728;&#x526A;&#x5207;&#x7684;&#x7A97;&#x53E3;&#x4E2D;&#x8FDB;&#x884C;&#x5206;&#x7C7B;&#xFF0C;&#x57FA;&#x4E8E;&#x8F93;&#x51FA;&#x7684;&#x6982;&#x7387;&#x5206;&#x5E03;&#x6765;&#x786E;&#x8BA4;&#x5408;&#x7406;&#x7684;&#x5B9A;&#x4F4D;</li>
<li>Selective search technique [198, 200] analyses various aspects like texture, color, or intensities to cluster the pixels into objects. The bounding boxes corresponding to these segments are passed through classifying networks to short-list some of the most sensible boxes. Finally, with a simple linear regression network tighter co-ordinate can be obtained.<br>
&#x9009;&#x62E9;&#x6027;&#x641C;&#x7D22;&#x6280;&#x672F;&#x5206;&#x6790;&#x4E86;&#x4E0D;&#x540C;&#x7684;&#x65B9;&#x9762;&#x5982;&#x7EB9;&#x7406;&#xFF0C;&#x989C;&#x8272;&#xFF0C;&#x6216;&#x5F3A;&#x5EA6;&#x6765;&#x5C06;&#x50CF;&#x7D20;&#x805A;&#x7C7B;&#x4E3A;&#x5BF9;&#x8C61;&#x3002;&#x4E0E;&#x8FD9;&#x4E9B;&#x5206;&#x5272;&#x5BF9;&#x5E94;&#x7684;bounding box&#x901A;&#x8FC7;&#x5206;&#x7C7B;&#x7F51;&#x7EDC;&#x6765;&#x9009;&#x51FA;&#x6700;&#x5408;&#x7406;&#x7684;box&#x3002;&#x6700;&#x540E;&#xFF0C;&#x901A;&#x8FC7;&#x4E00;&#x4E2A;&#x7B80;&#x5355;&#x7684;&#x7EBF;&#x6027;&#x56DE;&#x5F52;&#x7F51;&#x7EDC;&#x80FD;&#x591F;&#x83B7;&#x5F97;&#x66F4;&#x7D27;&#x5BC6;&#x7684;&#x5750;&#x6807;</li>
<li>The main downside of the technique is its computational cost. The network needs to compute a forward pass for every bounding box proposition. The problem with sharing computation across all boxes was that the boxes were of different sizes and hence uniform sized features were not achievable. In the upgraded Fast R-CNN [69], ROI (Region of Interest) Pooling was proposed in which region of interests were dynamically pooled to obtain a fixed size feature output.<br>
&#x8FD9;&#x9879;&#x6280;&#x672F;&#x4E3B;&#x8981;&#x7684;&#x7F3A;&#x70B9;&#x662F;&#x5B83;&#x5728;&#x8BA1;&#x7B97;&#x4E0A;&#x7684;&#x6D88;&#x8017;&#x3002;&#x7F51;&#x7EDC;&#x9700;&#x8981;&#x8BA1;&#x7B97;&#x5BF9;&#x6240;&#x6709;&#x63D0;&#x51FA;&#x7684;bounding box&#x8BA1;&#x7B97;&#x524D;&#x5411;&#x4F20;&#x9012;&#x3002;&#x5BF9;&#x6240;&#x6709;box&#x8FDB;&#x884C;&#x5171;&#x4EAB;&#x8BA1;&#x7B97;&#x7684;&#x95EE;&#x9898;&#x5728;&#x4E8E;&#x8FD9;&#x4E9B;box&#x5C3A;&#x5BF8;&#x4E0D;&#x4E00;&#xFF0C;&#x56E0;&#x6B64;&#x4E0D;&#x53EF;&#x80FD;&#x8FBE;&#x5230;&#x7279;&#x5F81;&#x5C3A;&#x5BF8;&#x7684;&#x4E00;&#x81F4;&#x6027;&#x3002;&#x5728;&#x6539;&#x8FDB;&#x7684;Fast R-CNN&#x4E2D;&#xFF0C;&#x63D0;&#x51FA;&#x4E86;ROI&#x6C60;&#x5316;&#xFF0C;&#x5C06;ROI&#x8FDB;&#x884C;&#x52A8;&#x6001;&#x6C60;&#x5316;&#x6765;&#x83B7;&#x5F97;&#x7279;&#x5B9A;&#x5C3A;&#x5BF8;&#x7684;&#x7279;&#x5F81;&#x8F93;&#x51FA;</li>
<li>Henceforth, the network was mainly bottlenecked by the selective search technique for candidate region proposal. In Faster-RCNN [175], instead of depending on external features, the intermediate activation maps were used to propose bounding boxes, thus speeding up the feature extraction process. Bounding boxes are representative of the location of the object, however they do not provide pixel-level segments.<br>
&#x6B64;&#x540E;&#xFF0C;&#x8FD9;&#x4E2A;&#x7F51;&#x7EDC;&#x9047;&#x5230;&#x4E86;&#x5019;&#x9009;&#x533A;&#x57DF;&#x63D0;&#x51FA;&#x7684;&#x9009;&#x62E9;&#x6027;&#x641C;&#x7D22;&#x6280;&#x672F;&#x7684;&#x74F6;&#x9888;&#x3002;&#x5728;Faster-RCNN&#x4E2D;&#xFF0C;&#x4E2D;&#x95F4;&#x6FC0;&#x6D3B;&#x6620;&#x5C04;&#x4EE3;&#x66FF;&#x4E86;&#x5BF9;&#x5916;&#x90E8;&#x7279;&#x5F81;&#x7684;&#x4F9D;&#x8D56;&#xFF0C;&#x7528;&#x5728;&#x4E86;bounding box&#x7684;&#x63D0;&#x51FA;&#x4E4B;&#x4E0A;&#xFF0C;&#x8FD9;&#x52A0;&#x901F;&#x4E86;&#x7279;&#x5F81;&#x63D0;&#x53D6;&#x7684;&#x8FC7;&#x7A0B;&#x3002;bounding box&#x4EE3;&#x8868;&#x4E86;&#x7269;&#x4F53;&#x7684;&#x4F4D;&#x7F6E;&#xFF0C;&#x4F46;&#x5B83;&#x4E0D;&#x63D0;&#x4F9B;&#x50CF;&#x7D20;&#x7EA7;&#x522B;&#x7684;&#x5206;&#x5272;</li>
<li>The Faster R-CNN network was extended as Mask R-CNN [76] with a parallel branch that performed pixel level object specific binary classification to provide accurate segments. With Mask-RCNN an average precision of 35.7 was attained in the COCO[122] test images. The family of RCNN algorithms have been depicted in fig.7.<br>
Faster R-CNN&#x7F51;&#x7EDC;&#x88AB;&#x62D3;&#x5C55;&#x4E3A;Mask R-CNN&#xFF0C;&#x5176;&#x5177;&#x6709;&#x4E00;&#x4E2A;&#x5E73;&#x884C;&#x7684;&#x5206;&#x652F;&#x6765;&#x6267;&#x884C;&#x50CF;&#x7D20;&#x7EA7;&#x522B;&#x7684;&#x7279;&#x5B9A;&#x7269;&#x4F53;&#x4E8C;&#x5206;&#x7C7B;&#xFF0C;&#x63D0;&#x4F9B;&#x4E86;&#x51C6;&#x786E;&#x7684;&#x5206;&#x5272;&#x3002;&#x901A;&#x8FC7;Mask-RCNN&#xFF0C;&#x5728;COCO&#x6D4B;&#x8BD5;&#x96C6;&#x4E0A;&#x83B7;&#x5F97;&#x4E86;&#x4E86;35.7&#x7684;&#x5E73;&#x5747;&#x51C6;&#x786E;&#x7387;&#x3002;RCNN&#x7B97;&#x6CD5;&#x5BB6;&#x65CF;&#x63CF;&#x7ED8;&#x4E8E;fig. 7&#x3002;</li>
</ul>
<div align="center"><img src="./resource/fig7-1.png" width="500"></div>
<div align="center"><img src="./resource/fig7-2.png" width="500"></div>
<div align="center"><img src="./resource/fig7-3.png" width="500"></div>
<div align="center"><img src="./resource/fig7-4.png" width="500"></div>
<center>fig. 7</center>
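<p>The ROI pooling step mentioned above can be illustrated with a small NumPy sketch (a toy illustration, not a library implementation): whatever the size of the proposed region, it is divided into a fixed output grid and each cell is max-pooled, yielding uniformly sized features for every box.</p>

```python
import numpy as np

def roi_pool(feature_map, roi, output_size=(2, 2)):
    """Fixed-size output from a variable-size region, in the spirit
    of Fast R-CNN's ROI pooling: split the region into an
    output_size grid and max-pool each cell. `roi` is
    (y0, x0, y1, x1) in feature-map coordinates."""
    y0, x0, y1, x1 = roi
    region = feature_map[y0:y1, x0:x1]
    gh, gw = output_size
    # Cell boundaries (rounded) that together cover the whole region.
    ys = np.linspace(0, region.shape[0], gh + 1).astype(int)
    xs = np.linspace(0, region.shape[1], gw + 1).astype(int)
    out = np.empty(output_size)
    for i in range(gh):
        for j in range(gw):
            out[i, j] = region[ys[i]:ys[i+1], xs[j]:xs[j+1]].max()
    return out

fmap = np.arange(64, dtype=float).reshape(8, 8)
print(roi_pool(fmap, (1, 2, 6, 7)))   # always 2x2, whatever the ROI size
```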
<ul>
<li>Region proposal networks have often been combined with other networks [118, 44] to give instance level segmentations. RCNN was further improved under the name of HyperNet [99] by using features from multiple layers of the feature extractor. Region proposal networks have also been implemented for instance specific segmentation as well. As mentioned before object detection capabilities of approaches like RCNN are often coupled with segmentation models to generate different masks for different instances of the same object[43].<br>
&#x533A;&#x57DF;&#x751F;&#x6210;&#x7F51;&#x7EDC;&#x7ECF;&#x5E38;&#x4E0E;&#x5176;&#x4ED6;&#x7F51;&#x7EDC;&#x7ED3;&#x5408;&#x6765;&#x8FDB;&#x884C;&#x5B9E;&#x4F8B;&#x5206;&#x5272;&#x3002;&#x5728;HyperNet&#x4E2D;&#xFF0C;&#x901A;&#x8FC7;&#x4F7F;&#x7528;&#x591A;&#x5C42;&#x7279;&#x5F81;&#x63D0;&#x53D6;&#x5668;&#x7684;&#x7279;&#x5F81;&#xFF0C;RCNN&#x5F97;&#x5230;&#x8FDB;&#x4E00;&#x6B65;&#x6539;&#x5584;&#x3002;&#x533A;&#x57DF;&#x751F;&#x6210;&#x7F51;&#x7EDC;&#x4E5F;&#x88AB;&#x7528;&#x4E8E;&#x5B9E;&#x4F8B;&#x5206;&#x5272;&#x3002;&#x5982;&#x4E4B;&#x524D;&#x6240;&#x63D0;&#x5230;&#x7684;&#xFF0C;RCNN&#x4E4B;&#x7C7B;&#x65B9;&#x6CD5;&#x7684;&#x7269;&#x4F53;&#x68C0;&#x6D4B;&#x80FD;&#x529B;&#x7ECF;&#x5E38;&#x4E0E;&#x5206;&#x5272;&#x6A21;&#x578B;&#x7ED3;&#x5408;&#xFF0C;&#x7528;&#x4E8E;&#x5BF9;&#x540C;&#x4E00;&#x7269;&#x4F53;&#x4E0D;&#x540C;&#x7684;&#x5B9E;&#x4F8B;&#x751F;&#x6210;&#x4E0D;&#x540C;&#x7684;&#x63A9;&#x819C;</li>
</ul>
<h3 class="mume-header" id="413-deeplab">4.1.3 DeepLab</h3>

<ul>
<li>While pixel level segmentation was effective, two complementing issues were still affecting the performance. Firstly, smaller kernel sizes failed to capture contextual information. In classification problems, this is handled using pooling layers that increases the sensory area of the kernels with respect to the original image. But in segmentation that reduces the sharpness of the segmented output. Alternative usage of larger kernels tend to be slower due to significanty larger number of trainable parameters.<br>
&#x5C3D;&#x7BA1;&#x50CF;&#x7D20;&#x7EA7;&#x522B;&#x7684;&#x5206;&#x5272;&#x662F;&#x9AD8;&#x6548;&#x7684;&#xFF0C;&#x4E24;&#x4E2A;&#x8865;&#x5145;&#x95EE;&#x9898;&#x4ECD;&#x7136;&#x4F1A;&#x5F71;&#x54CD;&#x7F51;&#x7EDC;&#x7684;&#x8868;&#x73B0;&#x3002;&#x9996;&#x5148;&#xFF0C;&#x66F4;&#x5C0F;&#x7684;&#x5185;&#x6838;&#x5C3A;&#x5BF8;&#x65E0;&#x6CD5;&#x83B7;&#x53D6;&#x524D;&#x540E;&#x5173;&#x7CFB;&#x7684;&#x4FE1;&#x606F;&#x3002;&#x5728;&#x5206;&#x7C7B;&#x95EE;&#x9898;&#x4E2D;&#xFF0C;&#x901A;&#x8FC7;&#x5229;&#x7528;&#x6C60;&#x5316;&#x5C42;&#x589E;&#x52A0;&#x5185;&#x6838;&#x5BF9;&#x539F;&#x56FE;&#x50CF;&#x7684;&#x611F;&#x53D7;&#x91CE;&#x6765;&#x89E3;&#x51B3;&#x8FD9;&#x4E2A;&#x95EE;&#x9898;&#x3002;&#x4F46;&#x5728;&#x5206;&#x5272;&#x95EE;&#x9898;&#x4E2D;&#xFF0C;&#x8FD9;&#x4F1A;&#x5BFC;&#x81F4;&#x5206;&#x5272;&#x8F93;&#x51FA;&#x9510;&#x5EA6;&#x7684;&#x51CF;&#x5C0F;&#x3002;&#x7531;&#x4E8E;&#x4F7F;&#x7528;&#x4E86;&#x5927;&#x91CF;&#x7684;&#x53EF;&#x8BAD;&#x7EC3;&#x53C2;&#x6570;&#xFF0C;&#x66F4;&#x5927;&#x5185;&#x6838;&#x7684;&#x66FF;&#x4EE3;&#x7528;&#x6CD5;&#x5F80;&#x5F80;&#x8F83;&#x6162;&#x3002;</li>
<li>To handle this issue the DeepLab [30, 32] family of algorithms demonstrated the usage of various methodologies like atrous convolutions [211], spatial pooling pyramids [77] and fully connected conditional random fields [100] to perform image segmentation with great efficiency. The DeepLab algorithm was able to attain a meanIOU of 79.7 on the PASCAL VOC 2012 dataset[54].<br>
&#x4E3A;&#x4E86;&#x89E3;&#x51B3;&#x8FD9;&#x4E2A;&#x95EE;&#x9898;&#xFF0C;DeepLab&#x7B97;&#x6CD5;&#x5BB6;&#x65CF;<a href="https://arxiv.org/pdf/1606.00915.pdf">[30]</a>, <a href="https://arxiv.org/pdf/1706.05587.pdf">[32]</a>&#x63D0;&#x51FA;&#x4E86;&#x5BF9;&#x4E0D;&#x540C;&#x65B9;&#x6CD5;&#x7684;&#x5E94;&#x7528;&#xFF0C;&#x5982;&#x591A;&#x5B54;&#x5377;&#x79EF;<a href="https://arxiv.org/pdf/1511.07122.pdf">[211]</a>&#xFF0C;&#x7A7A;&#x95F4;&#x6C60;&#x5316;&#x91D1;&#x5B57;&#x5854;<a href="https://arxiv.org/pdf/1406.4729.pdf">[77]</a>&#x548C;&#x5168;&#x8FDE;&#x63A5;&#x6761;&#x4EF6;&#x968F;&#x673A;&#x573A;<a href="https://arxiv.org/pdf/1210.5644.pdf">[100]</a>&#x6765;&#x6267;&#x884C;&#x6548;&#x7387;&#x6781;&#x9AD8;&#x7684;&#x56FE;&#x50CF;&#x5206;&#x5272;&#x3002;DeepLab&#x7B97;&#x6CD5;&#x5728;PASCAL VOC 2012&#x6570;&#x636E;&#x96C6;&#x4E0A;&#x80FD;&#x591F;&#x83B7;&#x5F97;79.7&#x7684;&#x5E73;&#x5747;IOU</li>
</ul>
<p><strong>Atrous/Dilated Convolution</strong><br>
<strong>&#x7A00;&#x758F;&#x5377;&#x79EF;/&#x591A;&#x5B54;&#x5377;&#x79EF;</strong></p>
<ul>
<li>The size of the convolution kernels in any layer determine the sensory response area of the network. While smaller kernels extract local information, larger kernels try to focus on more contextual information. However, larger kernels normally comes with more number of parameters.<br>
&#x5377;&#x79EF;&#x6838;&#x7684;&#x5927;&#x5C0F;&#x8868;&#x5F81;&#x4E86;&#x7F51;&#x7EDC;&#x7684;&#x611F;&#x53D7;&#x5E94;&#x7B54;&#x57DF;&#x3002;&#x5C0F;&#x7684;&#x5377;&#x79EF;&#x6838;&#x63D0;&#x53D6;&#x5C40;&#x90E8;&#x4FE1;&#x606F;&#xFF0C;&#x5927;&#x7684;&#x5377;&#x79EF;&#x6838;&#x8BD5;&#x56FE;&#x5173;&#x6CE8;&#x4E0A;&#x4E0B;&#x6587;&#x7ED3;&#x6784;&#x4FE1;&#x606F;&#x3002;&#x4F46;&#x662F;&#xFF0C;&#x66F4;&#x5927;&#x7684;&#x5377;&#x79EF;&#x6838;&#x7ECF;&#x5E38;&#x5E26;&#x6709;&#x66F4;&#x591A;&#x6570;&#x91CF;&#x7684;&#x53C2;&#x6570;</li>
<li>For example to have a sensory region of 6 &#xD7; 6, one must have 36 neurons. To reduce the number of parameters in the CNN, the sensory area is increased in higher layers through techniques like pooling. Pooling layers reduce the size of the image. When an image is pooled by a 2 &#xD7; 2 kernel with a stride of two, the size of the image reduces by 25%. A kernel with an area of 3 &#xD7; 3 corresponds to a larger sensory area of 6 &#xD7; 6 in the original image.<br>
&#x4F8B;&#x5982;&#xFF0C;&#x4E3A;&#x4E86;&#x62E5;&#x6709;6&#xD7;6&#x5927;&#x5C0F;&#x7684;&#x611F;&#x53D7;&#x91CE;&#xFF0C;&#x9700;&#x8981;&#x6709;36&#x4E2A;&#x795E;&#x7ECF;&#x5143;&#x3002;&#x4E3A;&#x4E86;&#x51CF;&#x5C0F;CNN&#x4E2D;&#x7684;&#x53C2;&#x6570;&#x6570;&#x91CF;&#xFF0C;&#x611F;&#x53D7;&#x91CE;&#x901A;&#x8FC7;&#x6C60;&#x5316;&#x4E4B;&#x7C7B;&#x7684;&#x6280;&#x672F;&#x5728;&#x66F4;&#x9AD8;&#x5C42;&#x5F97;&#x5230;&#x589E;&#x5927;&#x3002;&#x6C60;&#x5316;&#x5C42;&#x51CF;&#x5C0F;&#x4E86;&#x56FE;&#x50CF;&#x7684;&#x5C3A;&#x5BF8;&#x3002;&#x5F53;&#x4E00;&#x4E2A;&#x56FE;&#x50CF;&#x4EE5;2 &#xD7; 2&#x5927;&#x5C0F;&#xFF0C;&#x6B65;&#x957F;&#x4E3A;2&#x7684;&#x5185;&#x6838;&#x6C60;&#x5316;&#x65F6;&#xFF0C;&#x56FE;&#x50CF;&#x7684;&#x5927;&#x5C0F;&#x51CF;&#x5C0F;&#x4E86;75%&#xFF08;&#x6307;&#x9762;&#x79EF;&#xFF09;&#xFF0C;3 &#xD7; 3&#x5927;&#x5C0F;&#x7684;&#x5377;&#x79EF;&#x6838;&#x5BF9;&#x5E94;&#x4E8E;&#x539F;&#x56FE;&#x50CF;&#x4E2D;6 &#xD7; 6&#x7684;&#x611F;&#x53D7;&#x91CE;&#x3002;</li>
<li>However, unlike before now only 18 neurons (9 for each layer) are needed in the convolution kernel. In case of segmentation, pooling creates new problems. The reduction in the image size results in loss of sharpness in generated segments as the reduced maps are scaled up to image size.<br>
&#x5982;&#x4ECA;&#x5728;&#x5377;&#x79EF;&#x6838;&#x4E2D;&#x53EA;&#x9700;&#x8981;18&#x4E2A;&#x795E;&#x7ECF;&#x5143;&#xFF08;&#x6BCF;&#x5C42;9&#x4E2A;&#xFF09;&#x3002;&#x5728;&#x5206;&#x5272;&#x4EFB;&#x52A1;&#x4E2D;&#xFF0C;&#x6C60;&#x5316;&#x4EA7;&#x751F;&#x4E86;&#x65B0;&#x7684;&#x95EE;&#x9898;&#xFF0C;&#x968F;&#x7740;&#x7F29;&#x5C0F;&#x7684;&#x6620;&#x5C04;&#x6309;&#x6BD4;&#x4F8B;&#x653E;&#x5927;&#x5230;&#x56FE;&#x50CF;&#x7684;&#x5C3A;&#x5BF8;&#xFF0C;&#x56FE;&#x50CF;&#x5C3A;&#x5BF8;&#x7684;&#x51CF;&#x5C0F;&#x5BFC;&#x81F4;&#x4E86;&#x751F;&#x6210;&#x5206;&#x5272;&#x7684;&#x9510;&#x5EA6;&#x7684;&#x635F;&#x5931;</li>
<li>To deal with these two issues simultaneously, dilated or atrous convolutions play a key role. Atrous/Dilated convolutions increase the field of view without increasing the number of parameters. As shown in fig.8 a 3&#xD7;3 kernel with a dilation factor of 1 can act upon an area of 5&#xD7;5 in the image.<br>
&#x4E3A;&#x4E86;&#x540C;&#x65F6;&#x89E3;&#x51B3;&#x8FD9;&#x4E24;&#x4E2A;&#x95EE;&#x9898;&#xFF0C;&#x591A;&#x5B54;&#x5377;&#x79EF;&#x8D77;&#x5230;&#x4E86;&#x91CD;&#x8981;&#x7684;&#x4F5C;&#x7528;&#x3002;&#x591A;&#x5B54;/&#x7A00;&#x758F;&#x5377;&#x79EF;&#x4E0D;&#x9700;&#x8981;&#x589E;&#x52A0;&#x53C2;&#x6570;&#x7684;&#x6570;&#x91CF;&#x5C31;&#x80FD;&#x63D0;&#x9AD8;&#x611F;&#x53D7;&#x91CE;&#x7684;&#x5927;&#x5C0F;&#x3002;&#x5982;&#x56FE;8&#x6240;&#x793A;&#xFF0C;&#x4E00;&#x4E2A;3&#xD7;3&#x5927;&#x5C0F;&#xFF0C;&#x7A00;&#x758F;&#x56E0;&#x5B50;&#x4E3A;1&#x7684;&#x5185;&#x6838;&#x80FD;&#x591F;&#x5BF9;&#x56FE;&#x50CF;&#x4E2D;5&#xD7;5&#x5927;&#x5C0F;&#x7684;&#x533A;&#x57DF;&#x8D77;&#x4F5C;&#x7528;</li>
</ul>
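<p>The pooling arithmetic in the bullets above can be checked with a short receptive-field calculation; the <code>receptive_field</code> helper and the layer specs are illustrative, not taken from the surveyed papers.</p>

```python
def receptive_field(layers):
    """Receptive field of the last layer w.r.t. the input for a
    chain of (kernel, stride) layers: the field grows by
    (k - 1) * jump at each layer, where jump is the product of
    all strides seen so far."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# A 2x2 stride-2 pooling followed by a 3x3 convolution:
# the 3x3 kernel sees a 6x6 area of the original image,
# matching the example in the text.
print(receptive_field([(2, 2), (3, 1)]))  # 6
```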
<div align="center"><img src="./resource/fig8.png" width="250"></div>
<center>fig. 8 &#x666E;&#x901A;&#x5377;&#x79EF;&#xFF08;&#x7EA2;&#x8272;&#xFF09;&#x548C;&#x591A;&#x5B54;/&#x7A00;&#x758F;&#x5377;&#x79EF;&#xFF08;&#x7EFF;&#x8272;&#xFF09;</center>
<ul>
<li>Each row and column of the kernel has three neurons which is multiplied with intensity values in the image which separated by the dilation factor of 1. In this way the kernels can span over larger areas while keeping the number of neurons low and also preserving the sharpness of the image. Besides the DeepLab algorithms, atrous convolutions [34] have also been used with auto encoder based architectures.<br>
&#x5377;&#x79EF;&#x6838;&#x7684;&#x6BCF;&#x4E00;&#x884C;&#x548C;&#x6BCF;&#x4E00;&#x5217;&#x90FD;&#x6709;3&#x4E2A;&#x795E;&#x7ECF;&#x5143;&#x4E0E;&#x56FE;&#x50CF;&#x4E2D;&#x7684;&#x50CF;&#x7D20;&#x503C;&#x76F8;&#x4E58;&#xFF0C;&#x4ED6;&#x4EEC;&#x4E4B;&#x95F4;&#x4EE5;&#x5927;&#x5C0F;&#x4E3A;1&#x7684;&#x7A00;&#x758F;&#x56E0;&#x5B50;&#x76F8;&#x9694;&#x5F00;&#x3002;&#x5728;&#x8FD9;&#x79CD;&#x65B9;&#x5F0F;&#x4E0B;&#xFF0C;&#x5377;&#x79EF;&#x6838;&#x5728;&#x66F4;&#x5C11;&#x7684;&#x795E;&#x7ECF;&#x5143;&#x6570;&#x91CF;&#x4EE5;&#x53CA;&#x4FDD;&#x6301;&#x56FE;&#x50CF;&#x7684;&#x9510;&#x5EA6;&#x7684;&#x540C;&#x65F6;&#x80FD;&#x591F;&#x8DE8;&#x8FC7;&#x66F4;&#x5927;&#x7684;&#x533A;&#x57DF;&#x3002;&#x9664;&#x4E86;DeepLab&#x7B97;&#x6CD5;&#xFF0C;&#x7A00;&#x758F;&#x5377;&#x79EF;&#x4E5F;&#x7528;&#x4E8E;&#x57FA;&#x4E8E;&#x81EA;&#x52A8;&#x7F16;&#x7801;&#x5668;&#x7684;&#x67B6;&#x6784;&#x4E2D;</li>
</ul>
<div align="center"><img src="./resource/fig9.png" width="400"></div>
<center>fig. 9 DeepLab&#x7F51;&#x7EDC;&#x7ED3;&#x6784;&#x4E0E;&#x6807;&#x51C6;&#x7684;VGG&#x7F51;&#x7EDC;&#xFF08;&#x4E0A;&#xFF09;&#x5BF9;&#x6BD4;&#xFF0C;&#x5206;&#x522B;&#x4E3A;&#x5E26;&#x4E32;&#x7EA7;&#x591A;&#x5B54;&#x5377;&#x79EF;&#xFF08;&#x4E2D;&#xFF09;&#x548C;&#x591A;&#x5B54;&#x7A7A;&#x95F4;&#x6C60;&#x5316;&#x91D1;&#x5B57;&#x5854;&#xFF08;&#x4E0B;&#xFF09;</center>
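<p>A dilated convolution can be written directly in NumPy to verify the field-of-view claim. Note the convention in this sketch: <code>dilation=1</code> is an ordinary convolution, and <code>dilation=2</code> inserts one gap between kernel taps, matching what the text calls a dilation factor of 1 (a 3&#xD7;3 kernel covering a 5&#xD7;5 area).</p>

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """Valid 2-D convolution with a dilated kernel. A k x k kernel
    with dilation d samples the image at positions d apart, so its
    field of view grows to (k-1)*d + 1 per side while the number
    of weights stays k*k."""
    k = kernel.shape[0]
    span = (k - 1) * dilation + 1            # effective field of view
    H, W = image.shape
    out = np.zeros((H - span + 1, W - span + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Strided slice picks k taps spaced `dilation` apart.
            patch = image[i:i+span:dilation, j:j+span:dilation]
            out[i, j] = (patch * kernel).sum()
    return out

img = np.ones((7, 7))
kern = np.ones((3, 3))
print(dilated_conv2d(img, kern, dilation=1).shape)  # (5, 5): 3x3 field
print(dilated_conv2d(img, kern, dilation=2).shape)  # (3, 3): 5x5 field
```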
<p><strong>Spatial Pyramid Pooling</strong></p>
<ul>
<li>Spatial pyramid pooling [77] was introduced in R-CNN where ROI pooling showed the benefit of using multi-scale regions for object localization. However, in DeepLab, atrous convolutions were preferred over pooling layers for changing field of view or sensory area. To imitate the effect of ROI pooling, multiple branches with atrous convolutions of different dilations were combined together to utilize multi-scale properties for image segmentation.<br>
R-CNN&#x4E2D;&#x5F15;&#x5165;&#x4E86;&#x7A7A;&#x95F4;&#x91D1;&#x5B57;&#x5854;&#x6C60;&#x5316;<a href="https://arxiv.org/pdf/1406.4729.pdf">[77]</a>&#xFF0C;&#x5176;&#x4E2D;ROI&#x6C60;&#x5316;&#x5C55;&#x793A;&#x4E86;&#x5BF9;&#x7269;&#x4F53;&#x5B9A;&#x4F4D;&#x4F7F;&#x7528;&#x591A;&#x5C3A;&#x5EA6;&#x533A;&#x57DF;&#x7684;&#x4FBF;&#x5229;&#x3002;&#x4F46;&#x5728;DeepLab&#x4E2D;&#xFF0C;&#x7A00;&#x758F;&#x5377;&#x79EF;&#x4EE3;&#x66FF;&#x4E86;&#x6C60;&#x5316;&#x5C42;&#x6765;&#x6539;&#x53D8;&#x611F;&#x53D7;&#x91CE;&#x7684;&#x5927;&#x5C0F;&#x3002;&#x4E3A;&#x4E86;&#x6A21;&#x62DF;ROI&#x6C60;&#x5316;&#x7684;&#x4F5C;&#x7528;&#xFF0C;&#x5E26;&#x6709;&#x4E0D;&#x540C;&#x7A00;&#x758F;&#x5EA6;&#x7684;&#x7A00;&#x758F;&#x5377;&#x79EF;&#x7684;&#x591A;&#x4E2A;&#x5206;&#x652F;&#x7ED3;&#x5408;&#x5728;&#x4E00;&#x8D77;&#x6765;&#x5229;&#x7528;&#x56FE;&#x50CF;&#x5206;&#x5272;&#x7684;&#x591A;&#x5C3A;&#x5EA6;&#x7279;&#x6027;</li>
</ul>
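<p>The multi-branch idea can be sketched as follows: several 3&#xD7;3 atrous convolutions with different rates run in parallel on the same feature map, and their responses are fused (plain summation here; DeepLab's pyramid fuses learned branches). All kernels and rates below are illustrative assumptions.</p>

```python
import numpy as np

def atrous_response(image, kernel, rate):
    """'Same'-size dilated 3x3 convolution via zero padding:
    each kernel tap reads the image offset by a multiple of `rate`."""
    pad = rate                     # (k-1)//2 * rate for k = 3
    padded = np.pad(image, pad)
    H, W = image.shape
    out = np.zeros((H, W))
    offs = [-rate, 0, rate]
    for ki, dy in enumerate(offs):
        for kj, dx in enumerate(offs):
            out += kernel[ki, kj] * padded[pad+dy:pad+dy+H, pad+dx:pad+dx+W]
    return out

def aspp(image, kernels, rates):
    """Atrous pyramid in miniature: parallel branches with different
    dilation rates capture multi-scale context, fused per pixel."""
    return sum(atrous_response(image, k, r) for k, r in zip(kernels, rates))

img = np.random.rand(16, 16)
kerns = [np.full((3, 3), 1/9)] * 3
print(aspp(img, kerns, rates=[1, 2, 4]).shape)  # (16, 16)
```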
<p><strong>Fully connected conditional random field</strong></p>
<ul>
<li>Conditional random field is a undirected discriminative probabilistic graphical model that is often used for various sequence learning problems. Unlike discrete classifiers, while classifying a sample it takes into account the labels of other neighboring samples.<br>
&#x6761;&#x4EF6;&#x968F;&#x673A;&#x573A;&#x662F;&#x65E0;&#x65B9;&#x5411;&#x7684;&#x5224;&#x522B;&#x6982;&#x7387;&#x56FE;&#x6A21;&#x578B;&#xFF0C;&#x7ECF;&#x5E38;&#x7528;&#x4E8E;&#x4E0D;&#x540C;&#x5E8F;&#x5217;&#x5B66;&#x4E60;&#x95EE;&#x9898;&#x3002;&#x4E0D;&#x50CF;&#x79BB;&#x6563;&#x5206;&#x7C7B;&#x5668;&#xFF0C;&#x5728;&#x5206;&#x7C7B;&#x4E00;&#x4E2A;&#x6837;&#x672C;&#x7684;&#x65F6;&#x5019;&#xFF0C;&#x4F1A;&#x8003;&#x8651;&#x5230;&#x5176;&#x4ED6;&#x76F8;&#x90BB;&#x6837;&#x672C;&#x7684;&#x6807;&#x7B7E;</li>
<li>Image segmentation can be treated as a sequence of pixel classifications. The label of a pixel is not only dependent on its own intensity values but also the values of neighboring pixels. The use of such probabilistic graphical models is often used in the field of image segmentation and hence it deserves a dedicated section (section 4.1.4).<br>
&#x56FE;&#x50CF;&#x5206;&#x5272;&#x53EF;&#x4EE5;&#x88AB;&#x89C6;&#x4E3A;&#x50CF;&#x7D20;&#x5E8F;&#x5217;&#x7684;&#x5206;&#x7C7B;&#x3002;&#x50CF;&#x7D20;&#x7684;&#x6807;&#x7B7E;&#x4E0D;&#x4EC5;&#x4EC5;&#x4F9D;&#x8D56;&#x4E8E;&#x5B83;&#x81EA;&#x5DF1;&#x7684;&#x50CF;&#x7D20;&#x503C;&#xFF0C;&#x4E5F;&#x4E8E;&#x76F8;&#x90BB;&#x50CF;&#x7D20;&#x6709;&#x5173;&#x3002;&#x8BE5;&#x6982;&#x7387;&#x56FE;&#x6A21;&#x578B;&#x7ECF;&#x5E38;&#x7528;&#x4E8E;&#x56FE;&#x50CF;&#x5206;&#x5272;&#x9886;&#x57DF;&#xFF0C;&#x56E0;&#x6B64;&#x7528;&#x4E00;&#x4E2A;&#x4E13;&#x95E8;&#x7684;&#x90E8;&#x5206;&#x6765;&#x63CF;&#x8FF0;&#xFF08;&#x89C1;4.1.4&#xFF09;</li>
</ul>
<h3 class="mume-header" id="414-using-inter-pixel-correlation-to-improve-cnn-based-segmentation">4.1.4 Using inter pixel correlation to improve CNN based segmentation</h3>

<p><strong>&#x4F7F;&#x7528;&#x50CF;&#x7D20;&#x95F4;&#x76F8;&#x5173;&#x6027;&#x6765;&#x6539;&#x5584;&#x57FA;&#x4E8E;CNN&#x7684;&#x5206;&#x5272;</strong></p>
<ul>
<li>The use of probabilistic graphical models such as markov random fields (MRF) or conditional random fields (CRF) for image segmentation thrived on its own even without the inclusion of CNN based feature extractors. The CRF or MRF is mainly characterized by an energy function with a unary and a pairwise component.<br>
&#x56FE;&#x50CF;&#x5206;&#x5272;&#x4E2D;&#x6982;&#x7387;&#x56FE;&#x6A21;&#x578B;&#x7684;&#x5E94;&#x7528;&#x5982;&#x9A6C;&#x5C14;&#x79D1;&#x592B;&#x968F;&#x673A;&#x573A;&#xFF08;MRF&#xFF09;&#x6216;&#x6761;&#x4EF6;&#x968F;&#x673A;&#x573A;&#xFF08;CRF&#xFF09;&#x5F97;&#x5230;&#x4E86;&#x72EC;&#x7ACB;&#x7684;&#x53D1;&#x5C55;&#xFF0C;&#x5373;&#x4F7F;&#x6CA1;&#x6709;&#x4F7F;&#x7528;&#x57FA;&#x4E8E;CNN&#x7684;&#x7279;&#x5F81;&#x63D0;&#x53D6;&#x5668;&#x3002;CRF&#x6216;MRF&#x7684;&#x7279;&#x5F81;&#x662F;&#x5177;&#x6709;&#x4E00;&#x5143;&#x548C;&#x4E8C;&#x5143;&#x5206;&#x91CF;&#x7684;&#x80FD;&#x91CF;&#x51FD;&#x6570;</li>
</ul>
<p><span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mtable width="100%"><mtr><mtd width="50%"></mtd><mtd><mrow><mi>E</mi><mo stretchy="false">(</mo><mi>x</mi><mo stretchy="false">)</mo><mo>=</mo><munder><mo>&#x2211;</mo><mi>i</mi></munder><mrow><msub><mi>&#x3B8;</mi><mi>i</mi></msub><mo stretchy="false">(</mo><msub><mi>x</mi><mi>i</mi></msub><mo stretchy="false">)</mo></mrow><mo>+</mo><munder><mo>&#x2211;</mo><mrow><mi>i</mi><mi>j</mi></mrow></munder><mrow><msub><mi>&#x3B8;</mi><mrow><mi>i</mi><mi>j</mi></mrow></msub><mo stretchy="false">(</mo><msub><mi>x</mi><mi>i</mi></msub><mo separator="true">,</mo><msub><mi>y</mi><mi>j</mi></msub><mo stretchy="false">)</mo></mrow></mrow></mtd><mtd width="50%"></mtd><mtd><mtext>(1)</mtext></mtd></mtr></mtable><annotation encoding="application/x-tex">E(x)=\sum_i{\theta_i(x_i)}+\sum_{ij}{\theta_{ij}(x_i,y_j)}\tag{1}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1em;vertical-align:-0.25em;"></span><span class="mord mathdefault" style="margin-right:0.05764em;">E</span><span class="mopen">(</span><span class="mord mathdefault">x</span><span class="mclose">)</span><span class="mspace" style="margin-right:0.2777777777777778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2777777777777778em;"></span></span><span class="base"><span class="strut" style="height:2.327674em;vertical-align:-1.277669em;"></span><span class="mop op-limits"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.0500050000000003em;"><span style="top:-1.872331em;margin-left:0em;"><span class="pstrut" style="height:3.05em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight">i</span></span></span><span style="top:-3.050005em;"><span class="pstrut" 
style="height:3.05em;"></span><span><span class="mop op-symbol large-op">&#x2211;</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:1.277669em;"><span></span></span></span></span></span><span class="mspace" style="margin-right:0.16666666666666666em;"></span><span class="mord"><span class="mord"><span class="mord mathdefault" style="margin-right:0.02778em;">&#x3B8;</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.31166399999999994em;"><span style="top:-2.5500000000000003em;margin-left:-0.02778em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight">i</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mopen">(</span><span class="mord"><span class="mord mathdefault">x</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.31166399999999994em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight">i</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mclose">)</span></span><span class="mspace" style="margin-right:0.2222222222222222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222222222222222em;"></span></span><span class="base"><span class="strut" style="height:2.463782em;vertical-align:-1.413777em;"></span><span class="mop op-limits"><span class="vlist-t vlist-t2"><span 
class="vlist-r"><span class="vlist" style="height:1.050005em;"><span style="top:-1.8723309999999997em;margin-left:0em;"><span class="pstrut" style="height:3.05em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathdefault mtight">i</span><span class="mord mathdefault mtight" style="margin-right:0.05724em;">j</span></span></span></span><span style="top:-3.0500049999999996em;"><span class="pstrut" style="height:3.05em;"></span><span><span class="mop op-symbol large-op">&#x2211;</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:1.413777em;"><span></span></span></span></span></span><span class="mspace" style="margin-right:0.16666666666666666em;"></span><span class="mord"><span class="mord"><span class="mord mathdefault" style="margin-right:0.02778em;">&#x3B8;</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.311664em;"><span style="top:-2.5500000000000003em;margin-left:-0.02778em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathdefault mtight">i</span><span class="mord mathdefault mtight" style="margin-right:0.05724em;">j</span></span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.286108em;"><span></span></span></span></span></span></span><span class="mopen">(</span><span class="mord"><span class="mord mathdefault">x</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.31166399999999994em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault 
mtight">i</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mpunct">,</span><span class="mspace" style="margin-right:0.16666666666666666em;"></span><span class="mord"><span class="mord mathdefault" style="margin-right:0.03588em;">y</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.311664em;"><span style="top:-2.5500000000000003em;margin-left:-0.03588em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight" style="margin-right:0.05724em;">j</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.286108em;"><span></span></span></span></span></span></span><span class="mclose">)</span></span></span><span class="tag"><span class="strut" style="height:2.463782em;vertical-align:-1.413777em;"></span><span class="mord text"><span class="mord">(</span><span class="mord"><span class="mord">1</span></span><span class="mord">)</span></span></span></span></span></span></p>
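<p>Equation (1) can be made concrete on a tiny one-dimensional example. The unary term scores each pixel's label using classifier costs, and the pairwise term here is a simple Potts penalty for disagreeing neighbours (an illustrative choice, not the potential used in the cited works).</p>

```python
import numpy as np

def crf_energy(labels, unary, pairs, w=1.0):
    """Energy of a labeling in the shape of eq. (1): a unary cost
    per pixel plus a pairwise cost over neighbouring pairs. The
    pairwise potential is a Potts model: penalty w whenever two
    neighbours take different labels."""
    e_unary = sum(unary[i][labels[i]] for i in range(len(labels)))
    e_pair = sum(w * (labels[i] != labels[j]) for i, j in pairs)
    return e_unary + e_pair

# Three pixels, two labels; unary[i][l] = cost of label l at pixel i.
unary = np.array([[0.1, 2.0],   # pixel 0 prefers label 0
                  [1.5, 0.2],   # pixel 1 prefers label 1
                  [0.3, 1.0]])  # pixel 2 prefers label 0
pairs = [(0, 1), (1, 2)]        # chain neighbourhood

# The smooth labeling has lower energy despite a worse unary fit,
# which is exactly the regularizing effect the CRF provides.
print(crf_energy([0, 1, 0], unary, pairs))  # best unary fit, 2 disagreements
print(crf_energy([0, 0, 0], unary, pairs))  # smooth, 0 disagreements
```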
<ul>
<li>while non-deep learning approaches focused on building efficient pairwise potentials like exploiting long-range dependencies, designing higher-order potentials and exploring contexts of semantic labels, deep learning based approaches focused on generating a strong unary potentials and using simple pairwise components to boost the performance.<br>
&#x975E;&#x6DF1;&#x5EA6;&#x5B66;&#x4E60;&#x65B9;&#x6CD5;&#x4FA7;&#x91CD;&#x4E8E;&#x5EFA;&#x7ACB;&#x9AD8;&#x6548;&#x7684;&#x6210;&#x5BF9;&#x7684;&#x52BF;&#xFF0C;&#x5982;&#x5229;&#x7528;&#x957F;&#x8303;&#x56F4;&#x7684;&#x76F8;&#x5173;&#x6027;&#xFF0C;&#x8BBE;&#x8BA1;&#x9AD8;&#x9636;&#x52BF;&#xFF0C;&#x4EE5;&#x53CA;&#x63A2;&#x7D22;&#x8BED;&#x4E49;&#x6807;&#x7B7E;&#x7684;&#x4E0A;&#x4E0B;&#x6587;&#x5173;&#x7CFB;&#xFF0C;&#x800C;&#x6DF1;&#x5EA6;&#x5B66;&#x4E60;&#x65B9;&#x6CD5;&#x5219;&#x4FA7;&#x91CD;&#x4E8E;&#x751F;&#x6210;&#x5F3A;&#x4E00;&#x5143;&#x52BF;&#x4EE5;&#x53CA;&#x4F7F;&#x7528;&#x7B80;&#x5355;&#x7684;&#x6210;&#x5BF9;&#x7EC4;&#x4EF6;&#x6765;&#x63D0;&#x5347;&#x6027;&#x80FD;</li>
<li>CRFs have usually been coupled with deep learning based methods in two ways. One as a separate post-processing module and the other as an trainable module in an end-to-end network like deep parsing networks[128] or spatial propagation networks[126].<br>
CRF&#x901A;&#x5E38;&#x4EE5;&#x4E24;&#x79CD;&#x65B9;&#x6CD5;&#x4E0E;&#x57FA;&#x4E8E;&#x6DF1;&#x5EA6;&#x5B66;&#x4E60;&#x7684;&#x65B9;&#x6CD5;&#x7ED3;&#x5408;&#x3002;&#x4E00;&#x662F;&#x4EE5;&#x5355;&#x72EC;&#x7684;&#x540E;&#x5904;&#x7406;&#x6A21;&#x5757;&#x7684;&#x65B9;&#x5F0F;&#xFF0C;&#x4E8C;&#x662F;&#x5728;&#x7AEF;&#x5230;&#x7AEF;&#x7F51;&#x7EDC;&#x4E2D;&#x4EE5;&#x53EF;&#x8BAD;&#x7EC3;&#x7684;&#x6A21;&#x5757;&#x7684;&#x5F62;&#x5F0F;&#xFF0C;&#x5982;&#x6DF1;&#x5EA6;&#x89E3;&#x6790;&#x7F51;&#x7EDC;<a href="https://arxiv.org/abs/1509.02634">[128]</a>&#x6216;&#x7A7A;&#x95F4;&#x4F20;&#x64AD;&#x7F51;&#x7EDC;<a href="https://arxiv.org/abs/1710.01020">[126]</a></li>
</ul>
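<p>The division of labour described above can be made concrete with a toy energy function: a minimal NumPy sketch (function name, grid size and weights are illustrative, not taken from any cited paper) in which the network supplies the per-pixel unary costs and a simple Potts term plays the role of the pairwise component.</p>
<pre class="language-python"><code class="language-python">import numpy as np

# Toy CRF energy on a 2-D 4-connected grid: a strong unary term
# (e.g. per-pixel class costs from a CNN) plus a simple Potts
# pairwise term that penalises label changes between neighbours.

def crf_energy(labels, unary, pairwise_weight=1.0):
    """labels: (H, W) int class indices; unary: (H, W, C) costs."""
    h, w = labels.shape
    # Unary term: sum of the chosen label's cost at every pixel.
    e = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    # Pairwise Potts term: one cost per neighbouring pair with
    # differing labels (right and down neighbours cover each edge once).
    e += pairwise_weight * (labels[:, :-1] != labels[:, 1:]).sum()
    e += pairwise_weight * (labels[:-1, :] != labels[1:, :]).sum()
    return float(e)
</code></pre>
<p>Under this energy a lone mislabelled pixel inside a uniform region pays one pairwise cost per neighbour, so the minimum-energy labelling smooths it away even when the unary term is indifferent.</p>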
<p><strong>Using CRFs to improve fully convolutional networks</strong></p>
<ul>
<li>One of the earliest implementations that kick-started this paradigm of boundary refinement was the work of <a href>[101]</a>. With the introduction of fully convolutional networks for image segmentation, it became possible to draw coarse segments for objects in images.</li>
<li>However, obtaining sharper segments was still a problem. In the work of <a href>[29]</a>, the output pixel-level prediction was used as the unary potential of a fully connected CRF. For each pair of pixels i and j in the image, the pairwise potential was defined as</li>
</ul>
<p><span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mtable width="100%"><mtr><mtd width="50%"></mtd><mtd><mtable rowspacing="0.15999999999999992em" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mtable rowspacing="0.24999999999999992em" columnalign="right left" columnspacing="0em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="true"><mrow><msub><mi>&#x3B8;</mi><mrow><mi>i</mi><mi>j</mi></mrow></msub><mo stretchy="false">(</mo><msub><mi>x</mi><mi>i</mi></msub><mo separator="true">,</mo><msub><mi>x</mi><mi>j</mi></msub><mo stretchy="false">)</mo><mo>=</mo><mi>&#x3BC;</mi><mo stretchy="false">(</mo><msub><mi>x</mi><mi>i</mi></msub><mo separator="true">,</mo><msub><mi>x</mi><mi>j</mi></msub><mo stretchy="false">)</mo><mo stretchy="false">[</mo><msub><mi>&#x3C9;</mi><mn>1</mn></msub><mi>e</mi><mi>x</mi><mi>p</mi><mo stretchy="false">(</mo><mo>&#x2212;</mo><mfrac><mrow><mi mathvariant="normal">&#x2223;</mi><mi mathvariant="normal">&#x2223;</mi><msub><mi>p</mi><mi>i</mi></msub><mo>&#x2212;</mo><msub><mi>p</mi><mi>j</mi></msub><mi mathvariant="normal">&#x2223;</mi><msup><mi mathvariant="normal">&#x2223;</mi><mn>2</mn></msup></mrow><mrow><mn>2</mn><msubsup><mi>&#x3C3;</mi><mi>&#x3B1;</mi><mn>2</mn></msubsup></mrow></mfrac></mrow></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="true"><mrow><mrow></mrow><mo>&#x2212;</mo><mfrac><mrow><mi mathvariant="normal">&#x2223;</mi><mi mathvariant="normal">&#x2223;</mi><msub><mi>I</mi><mi>i</mi></msub><mo>&#x2212;</mo><msub><mi>I</mi><mi>j</mi></msub><mi mathvariant="normal">&#x2223;</mi><msup><mi mathvariant="normal">&#x2223;</mi><mn>2</mn></msup></mrow><mrow><mn>2</mn><msubsup><mi>&#x3C3;</mi><mi>&#x3B2;</mi><mn>2</mn></msubsup></mrow></mfrac><mo stretchy="false">)</mo></mrow></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="true"><mrow></mrow></mstyle></mtd><mtd><mstyle scriptlevel="0" 
displaystyle="true"><mrow><mrow></mrow><mo>+</mo><msub><mi>&#x3C9;</mi><mn>2</mn></msub><mi>e</mi><mi>x</mi><mi>p</mi><mo stretchy="false">(</mo><mo>&#x2212;</mo><mfrac><mrow><mi mathvariant="normal">&#x2223;</mi><mi mathvariant="normal">&#x2223;</mi><msub><mi>p</mi><mi>i</mi></msub><mo>&#x2212;</mo><msub><mi>p</mi><mi>j</mi></msub><mi mathvariant="normal">&#x2223;</mi><msup><mi mathvariant="normal">&#x2223;</mi><mn>2</mn></msup></mrow><mrow><mn>2</mn><msubsup><mi>&#x3C3;</mi><mi>&#x3B3;</mi><mn>2</mn></msubsup></mrow></mfrac><mo stretchy="false">)</mo><mo stretchy="false">]</mo></mrow></mstyle></mtd></mtr></mtable></mstyle></mtd></mtr></mtable></mtd><mtd width="50%"></mtd><mtd><mtext>(2)</mtext></mtd></mtr></mtable><annotation encoding="application/x-tex">\begin{matrix}
    \begin{aligned}
        \theta_{ij}(x_i,x_j)=\mu(x_i,x_j)[\omega_1 exp(-\frac{||p_i-p_j||^2}{2\sigma^2_{\alpha}}&amp;-\frac{||I_i-I_j||^2}{2\sigma^2_{\beta}}) \\
          &amp;+\omega_2 exp(-\frac{||p_i-p_j||^2}{2\sigma^2_{\gamma}})]\tag{2}
    \end{aligned}
\end{matrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:5.7747399999999995em;vertical-align:-2.6373699999999998em;"></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.1373699999999998em;"><span style="top:-5.13737em;"><span class="pstrut" style="height:5.13737em;"></span><span class="mord"><span class="mord"><span class="mtable"><span class="col-align-r"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.1373699999999998em;"><span style="top:-5.137369999999999em;"><span class="pstrut" style="height:3.4911079999999997em;"></span><span class="mord"><span class="mord"><span class="mord mathdefault" style="margin-right:0.02778em;">&#x3B8;</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.311664em;"><span style="top:-2.5500000000000003em;margin-left:-0.02778em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathdefault mtight">i</span><span class="mord mathdefault mtight" style="margin-right:0.05724em;">j</span></span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.286108em;"><span></span></span></span></span></span></span><span class="mopen">(</span><span class="mord"><span class="mord mathdefault">x</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.31166399999999994em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight">i</span></span></span></span><span 
class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mpunct">,</span><span class="mspace" style="margin-right:0.16666666666666666em;"></span><span class="mord"><span class="mord mathdefault">x</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.311664em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight" style="margin-right:0.05724em;">j</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.286108em;"><span></span></span></span></span></span></span><span class="mclose">)</span><span class="mspace" style="margin-right:0.2777777777777778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2777777777777778em;"></span><span class="mord mathdefault">&#x3BC;</span><span class="mopen">(</span><span class="mord"><span class="mord mathdefault">x</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.31166399999999994em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight">i</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mpunct">,</span><span class="mspace" style="margin-right:0.16666666666666666em;"></span><span class="mord"><span class="mord mathdefault">x</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" 
style="height:0.311664em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight" style="margin-right:0.05724em;">j</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.286108em;"><span></span></span></span></span></span></span><span class="mclose">)</span><span class="mopen">[</span><span class="mord"><span class="mord mathdefault" style="margin-right:0.03588em;">&#x3C9;</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.30110799999999993em;"><span style="top:-2.5500000000000003em;margin-left:-0.03588em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mord mathdefault">e</span><span class="mord mathdefault">x</span><span class="mord mathdefault">p</span><span class="mopen">(</span><span class="mord">&#x2212;</span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.4911079999999999em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span><span class="mord"><span class="mord mathdefault" style="margin-right:0.03588em;">&#x3C3;</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.740108em;"><span style="top:-2.4530000000000003em;margin-left:-0.03588em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span 
class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathdefault mtight" style="margin-right:0.0037em;">&#x3B1;</span></span></span></span><span style="top:-2.9890000000000003em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.247em;"><span></span></span></span></span></span></span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">&#x2223;</span><span class="mord">&#x2223;</span><span class="mord"><span class="mord mathdefault">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.31166399999999994em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight">i</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222222222222222em;"></span><span class="mbin">&#x2212;</span><span class="mspace" style="margin-right:0.2222222222222222em;"></span><span class="mord"><span class="mord mathdefault">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.311664em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span 
class="mord mathdefault mtight" style="margin-right:0.05724em;">j</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.286108em;"><span></span></span></span></span></span></span><span class="mord">&#x2223;</span><span class="mord"><span class="mord">&#x2223;</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.8141079999999999em;"><span style="top:-3.063em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span></span></span></span></span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.933em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span></span></span><span style="top:-2.2228459999999997em;"><span class="pstrut" style="height:3.4911079999999997em;"></span><span class="mord"></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:2.6373699999999998em;"><span></span></span></span></span></span><span class="col-align-l"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.1373699999999998em;"><span style="top:-5.137369999999999em;"><span class="pstrut" style="height:3.4911079999999997em;"></span><span class="mord"><span class="mord"></span><span class="mspace" style="margin-right:0.2222222222222222em;"></span><span class="mbin">&#x2212;</span><span class="mspace" style="margin-right:0.2222222222222222em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.4911079999999999em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span 
class="mord">2</span><span class="mord"><span class="mord mathdefault" style="margin-right:0.03588em;">&#x3C3;</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.795908em;"><span style="top:-2.3986920000000005em;margin-left:-0.03588em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathdefault mtight" style="margin-right:0.05278em;">&#x3B2;</span></span></span></span><span style="top:-3.0448000000000004em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.4374159999999999em;"><span></span></span></span></span></span></span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">&#x2223;</span><span class="mord">&#x2223;</span><span class="mord"><span class="mord mathdefault" style="margin-right:0.07847em;">I</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.31166399999999994em;"><span style="top:-2.5500000000000003em;margin-left:-0.07847em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight">i</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222222222222222em;"></span><span class="mbin">&#x2212;</span><span 
class="mspace" style="margin-right:0.2222222222222222em;"></span><span class="mord"><span class="mord mathdefault" style="margin-right:0.07847em;">I</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.311664em;"><span style="top:-2.5500000000000003em;margin-left:-0.07847em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight" style="margin-right:0.05724em;">j</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.286108em;"><span></span></span></span></span></span></span><span class="mord">&#x2223;</span><span class="mord"><span class="mord">&#x2223;</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.8141079999999999em;"><span style="top:-3.063em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span></span></span></span></span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:1.123416em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mclose">)</span></span></span><span style="top:-2.2228459999999997em;"><span class="pstrut" style="height:3.4911079999999997em;"></span><span class="mord"><span class="mord"></span><span class="mspace" style="margin-right:0.2222222222222222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222222222222222em;"></span><span class="mord"><span class="mord mathdefault" style="margin-right:0.03588em;">&#x3C9;</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.30110799999999993em;"><span 
style="top:-2.5500000000000003em;margin-left:-0.03588em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mord mathdefault">e</span><span class="mord mathdefault">x</span><span class="mord mathdefault">p</span><span class="mopen">(</span><span class="mord">&#x2212;</span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.4911079999999999em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span><span class="mord"><span class="mord mathdefault" style="margin-right:0.03588em;">&#x3C3;</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.7401079999999999em;"><span style="top:-2.4530000000000003em;margin-left:-0.03588em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathdefault mtight" style="margin-right:0.05556em;">&#x3B3;</span></span></span></span><span style="top:-2.989em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.383108em;"><span></span></span></span></span></span></span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" 
style="height:3em;"></span><span class="mord"><span class="mord">&#x2223;</span><span class="mord">&#x2223;</span><span class="mord"><span class="mord mathdefault">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.31166399999999994em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight">i</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222222222222222em;"></span><span class="mbin">&#x2212;</span><span class="mspace" style="margin-right:0.2222222222222222em;"></span><span class="mord"><span class="mord mathdefault">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.311664em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight" style="margin-right:0.05724em;">j</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.286108em;"><span></span></span></span></span></span></span><span class="mord">&#x2223;</span><span class="mord"><span class="mord">&#x2223;</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.8141079999999999em;"><span style="top:-3.063em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span></span></span></span></span></span></span></span><span class="vlist-s">&#x200B;</span></span><span 
class="vlist-r"><span class="vlist" style="height:1.069108em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mclose">)</span><span class="mclose">]</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:2.6373699999999998em;"><span></span></span></span></span></span></span></span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:2.6373699999999998em;"><span></span></span></span></span></span></span></span></span><span class="tag"><span class="strut" style="height:5.7747399999999995em;vertical-align:-2.6373699999999998em;"></span><span class="mord text"><span class="mord">(</span><span class="mord"><span class="mord">2</span></span><span class="mord">)</span></span></span></span></span></span></p>
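<p>As a concrete reading of equation (2), whose symbols are defined just below, the potential for a single pixel pair can be sketched in NumPy. The weights and bandwidths here are illustrative placeholders, not the values tuned in the cited work.</p>
<pre class="language-python"><code class="language-python">import numpy as np

# Direct transcription of equation (2): an appearance (bilateral) kernel
# plus a smoothness kernel, gated by the Potts compatibility mu.
# w1, w2, sa, sb, sg are made-up illustrative values.

def pairwise_potential(xi, xj, pi, pj, Ii, Ij,
                       w1=5.0, w2=3.0, sa=60.0, sb=10.0, sg=3.0):
    """theta_ij for labels xi, xj at positions pi, pj with colours Ii, Ij."""
    if xi == xj:                      # mu(x_i, x_j) = 0 when labels agree
        return 0.0
    dp2 = float(np.sum((np.asarray(pi) - np.asarray(pj)) ** 2))
    dI2 = float(np.sum((np.asarray(Ii) - np.asarray(Ij)) ** 2))
    appearance = w1 * np.exp(-dp2 / (2 * sa**2) - dI2 / (2 * sb**2))
    smoothness = w2 * np.exp(-dp2 / (2 * sg**2))
    return float(appearance + smoothness)
</code></pre>
<p>Nearby, similarly coloured pixels that carry different labels receive the largest penalty, which is what pushes the refined label boundaries to align with image edges.</p>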
<ul>
<li>Here, <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>&#x3BC;</mi><mo stretchy="false">(</mo><msub><mi>x</mi><mi>i</mi></msub><mo separator="true">,</mo><msub><mi>x</mi><mi>j</mi></msub><mo stretchy="false">)</mo><mo>=</mo><mn>1</mn></mrow><annotation encoding="application/x-tex">\mu(x_i,x_j)=1</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.036108em;vertical-align:-0.286108em;"></span><span class="mord mathdefault">&#x3BC;</span><span class="mopen">(</span><span class="mord"><span class="mord mathdefault">x</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.31166399999999994em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight">i</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mpunct">,</span><span class="mspace" style="margin-right:0.16666666666666666em;"></span><span class="mord"><span class="mord mathdefault">x</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.311664em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight" style="margin-right:0.05724em;">j</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.286108em;"><span></span></span></span></span></span></span><span class="mclose">)</span><span class="mspace" 
style="margin-right:0.2777777777777778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2777777777777778em;"></span></span><span class="base"><span class="strut" style="height:0.64444em;vertical-align:0em;"></span><span class="mord">1</span></span></span></span> if <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi>x</mi><mi>i</mi></msub><mo mathvariant="normal">&#x2260;</mo><msub><mi>x</mi><mi>j</mi></msub><mo separator="true">,</mo><mn>0</mn></mrow><annotation encoding="application/x-tex">x_i\neq x_j,0</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8888799999999999em;vertical-align:-0.19444em;"></span><span class="mord"><span class="mord mathdefault">x</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.31166399999999994em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight">i</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2777777777777778em;"></span><span class="mrel"><span class="mrel"><span class="mord"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.69444em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="rlap"><span class="strut" style="height:0.8888799999999999em;vertical-align:-0.19444em;"></span><span class="inner"><span class="mrel">&#xE020;</span></span><span class="fix"></span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" 
style="height:0.19444em;"><span></span></span></span></span></span></span><span class="mrel">=</span></span><span class="mspace" style="margin-right:0.2777777777777778em;"></span></span><span class="base"><span class="strut" style="height:0.9305479999999999em;vertical-align:-0.286108em;"></span><span class="mord"><span class="mord mathdefault">x</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.311664em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight" style="margin-right:0.05724em;">j</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.286108em;"><span></span></span></span></span></span></span><span class="mpunct">,</span><span class="mspace" style="margin-right:0.16666666666666666em;"></span><span class="mord">0</span></span></span></span> otherwise and <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi>&#x3C9;</mi><mn>1</mn></msub></mrow><annotation encoding="application/x-tex">\omega_1</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.58056em;vertical-align:-0.15em;"></span><span class="mord"><span class="mord mathdefault" style="margin-right:0.03588em;">&#x3C9;</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.30110799999999993em;"><span style="top:-2.5500000000000003em;margin-left:-0.03588em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span 
class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span>, <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi>&#x3C9;</mi><mn>2</mn></msub></mrow><annotation encoding="application/x-tex">\omega_2</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.58056em;vertical-align:-0.15em;"></span><span class="mord"><span class="mord mathdefault" style="margin-right:0.03588em;">&#x3C9;</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.30110799999999993em;"><span style="top:-2.5500000000000003em;margin-left:-0.03588em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span> are the weights given to the kernels. The expression uses two Gaussian kernels. 
The first one is a bilateral kernel that depends on both pixel positions<span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mo stretchy="false">(</mo><msub><mi>p</mi><mi>i</mi></msub><mo separator="true">,</mo><msub><mi>p</mi><mi>j</mi></msub><mo stretchy="false">)</mo></mrow><annotation encoding="application/x-tex">(p_i, p_j)</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.036108em;vertical-align:-0.286108em;"></span><span class="mopen">(</span><span class="mord"><span class="mord mathdefault">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.31166399999999994em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight">i</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mpunct">,</span><span class="mspace" style="margin-right:0.16666666666666666em;"></span><span class="mord"><span class="mord mathdefault">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.311664em;"><span style="top:-2.5500000000000003em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathdefault mtight" style="margin-right:0.05724em;">j</span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.286108em;"><span></span></span></span></span></span></span><span class="mclose">)</span></span></span></span> and their corresponding intensities in the 
RGB channels. The second kernel is only dependent on the pixel positions. <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi>&#x3C3;</mi><mi>&#x3B1;</mi></msub></mrow><annotation encoding="application/x-tex">\sigma_{\alpha}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.58056em;vertical-align:-0.15em;"></span><span class="mord"><span class="mord mathdefault" style="margin-right:0.03588em;">&#x3C3;</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.151392em;"><span style="top:-2.5500000000000003em;margin-left:-0.03588em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathdefault mtight" style="margin-right:0.0037em;">&#x3B1;</span></span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span>, <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi>&#x3C3;</mi><mi>&#x3B2;</mi></msub></mrow><annotation encoding="application/x-tex">\sigma_{\beta}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.716668em;vertical-align:-0.286108em;"></span><span class="mord"><span class="mord mathdefault" style="margin-right:0.03588em;">&#x3C3;</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3361079999999999em;"><span style="top:-2.5500000000000003em;margin-left:-0.03588em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathdefault mtight" style="margin-right:0.05278em;">&#x3B2;</span></span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.286108em;"><span></span></span></span></span></span></span></span></span></span> and <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi>&#x3C3;</mi><mi>&#x3B3;</mi></msub></mrow><annotation encoding="application/x-tex">\sigma_{\gamma}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.716668em;vertical-align:-0.286108em;"></span><span class="mord"><span class="mord mathdefault" style="margin-right:0.03588em;">&#x3C3;</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.15139200000000003em;"><span style="top:-2.5500000000000003em;margin-left:-0.03588em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathdefault mtight" style="margin-right:0.05556em;">&#x3B3;</span></span></span></span></span><span class="vlist-s">&#x200B;</span></span><span class="vlist-r"><span class="vlist" style="height:0.286108em;"><span></span></span></span></span></span></span></span></span></span> control the scales of the Gaussian kernels.<br>
</li>
<li>The intuition behind the design of such a pairwise potential energy function is to ensure that nearby pixels with similar intensities in the RGB channels are classified under the same class. This model was also later included in the popular DeepLab network (see Section 4.1.3). In the various versions of the DeepLab algorithm, the use of the CRF was able to boost the mean IoU on the Pascal VOC 2012 dataset by a significant amount (up to 4% in some cases).</li>
</ul>
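<p>As a concrete illustration, the pairwise kernel described above can be sketched in a few lines of numpy. The weights and bandwidths used here are arbitrary illustrative values, not those of the cited work.</p>

```python
import numpy as np

def pairwise_kernel(p_i, p_j, I_i, I_j,
                    w1=1.0, w2=1.0,
                    sigma_alpha=40.0, sigma_beta=10.0, sigma_gamma=3.0):
    """Pairwise kernel of the fully connected CRF: a weighted sum of an
    appearance (bilateral) kernel, which depends on pixel positions and
    RGB intensities, and a smoothness (spatial) kernel, which depends on
    positions only.  All parameter values are illustrative."""
    p_i, p_j = np.asarray(p_i, float), np.asarray(p_j, float)
    I_i, I_j = np.asarray(I_i, float), np.asarray(I_j, float)
    appearance = np.exp(-np.sum((p_i - p_j) ** 2) / (2 * sigma_alpha ** 2)
                        - np.sum((I_i - I_j) ** 2) / (2 * sigma_beta ** 2))
    smoothness = np.exp(-np.sum((p_i - p_j) ** 2) / (2 * sigma_gamma ** 2))
    return w1 * appearance + w2 * smoothness

# Nearby pixels with similar RGB values yield a large kernel response, so the
# label-compatibility term mu(x_i, x_j) strongly discourages assigning them
# different labels; distant, dissimilar pixels barely interact.
near_similar = pairwise_kernel((0, 0), (1, 1), (120, 80, 40), (122, 82, 41))
far_different = pairwise_kernel((0, 0), (30, 30), (120, 80, 40), (10, 200, 240))
```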
<p><strong>CRF as RNN</strong></p>
<ul>
<li>
<p>While the CRF is a useful post-processing module [101] for any deep-learning-based semantic image segmentation architecture, one of its main drawbacks was that it could not be used as part of an end-to-end architecture. In the standard CRF model the pairwise potentials can be represented as a sum of weighted Gaussians. However, since exact minimization is intractable, a mean-field approximation of the CRF distribution is used, representing the distribution with a simpler version that is simply a product of independent marginal distributions.</p>
</li>
<li>
<p>This mean-field approximation in its native form isn&#x2019;t suitable for back-propagation. In the work of [221], this step was replaced by a set of convolutional operations iterated over a recurrent pipeline until convergence is reached. As reported in their work, the proposed approach obtained an mIoU of 74.7, compared to 71.0 by BoxSup and 72.7 by DeepLab. The sequence of operations can be explained as follows.</p>
<blockquote>
<ol>
<li>Initialization: a softmax operation over the unary potentials gives us the initial distribution to work with.</li>
<li>Message passing: convolving with two Gaussian kernels, one spatial and one bilateral. As in the original CRF implementation, splatting and slicing also occur while building the permutohedral lattice for efficient computation of the fully connected CRF.</li>
<li>Weighting filter outputs: by convolving with 1 &#xD7; 1 kernels with the required number of channels, the filter outputs can be weighted and summed. The weights can easily be learnt through backpropagation.</li>
<li>Compatibility transform: a compatibility function keeps track of the uncertainty between the various labels; a simple 1 &#xD7; 1 convolution with the same number of input and output channels is enough to simulate it. Unlike the Potts model, which assigns the same penalty to every pair of distinct labels, here the compatibility function can be learnt, making it a much better alternative.</li>
<li>Adding the unary potentials: this can be performed by a simple element-wise subtraction of the compatibility-transform penalty from the unary potentials.</li>
<li>Normalization: the outputs can be normalized with another simple softmax function.</li>
</ol>
</blockquote>
</li>
</ul>
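<p>The six steps above can be sketched as one unrolled mean-field loop. This is a toy dense formulation over N pixels with a precomputed (N, N) Gaussian affinity matrix standing in for the permutohedral-lattice filtering of the actual implementation; all names and values are illustrative.</p>

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def crf_as_rnn(unary, kernel, compat, n_iters=5):
    """Mean-field inference unrolled as a recurrent pipeline.

    unary:  (N, L) unary potentials (penalties) for N pixels, L labels
    kernel: (N, N) Gaussian pairwise affinities (spatial + bilateral,
            assumed already weighted and summed -- steps 2 and 3)
    compat: (L, L) learnable label-compatibility matrix (step 4)
    """
    q = softmax(-unary)                    # 1. initialization
    for _ in range(n_iters):               # recurrent pipeline
        msg = kernel @ q                   # 2./3. message passing + weighting
        penalty = msg @ compat             # 4. compatibility transform
        q = softmax(-(unary + penalty))    # 5. add unary, 6. normalize
    return q
```

<p>With a Potts-style compatibility matrix (1 everywhere except 0 on the diagonal), strongly connected pixels pull each other toward the same label, sharpening the unary predictions over the iterations.</p>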
<p><strong>Incorporating higher order dependencies</strong></p>
<ul>
<li>Another end-to-end network inspired by CRFs incorporates higher-order relations into a deep network. With a deep parsing network [128], pixel-wise predictions from a standard VGG-like feature extractor (but with fewer pooling operations) are boosted using a sequence of special convolution and pooling operations.</li>
<li>Firstly, local convolutions implement large unshared convolutional kernels across the different positions of the feature map to obtain translation-dependent features that model long-distance dependencies. As in standard CRFs, a spatial convolution penalizes the probability maps based on local label contexts.</li>
<li>Finally, block min pooling performs a pixel-wise min-pooling across the depth to accept the prediction with the lowest penalty. Similarly, in the work of [126], a row/column-wise propagation model was proposed that calculated the global pairwise relationships across an image. With a dense affinity matrix drawn from a sparse transformation matrix, coarsely predicted labels were reclassified based on the affinity of pixels.</li>
</ul>
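<p>A minimal numpy sketch of the two operations named above, applied to a single-channel map; shapes and names are illustrative, and the real deep parsing network applies them to multi-channel probability maps.</p>

```python
import numpy as np

def local_convolution(feat, kernels):
    """Unshared ("local") convolution: every spatial position owns its
    own k x k kernel, yielding translation-dependent features.

    feat:    (H, W) feature map (zero-padded internally)
    kernels: (H, W, k, k) one kernel per position
    """
    H, W, k = kernels.shape[0], kernels.shape[1], kernels.shape[2]
    pad = k // 2
    fp = np.pad(feat, pad)
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(fp[i:i + k, j:j + k] * kernels[i, j])
    return out

def block_min_pooling(penalties):
    """Pixel-wise min across the depth axis: accept, for each pixel,
    the hypothesis carrying the lowest penalty.  penalties: (K, H, W)."""
    return penalties.min(axis=0)
```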
<h3 class="mume-header" id="415-multi-scale-networks">4.1.5 Multi-scale networks</h3>

<ul>
<li>One of the main problems with image segmentation for natural scene images is that the size of the object of interest is very unpredictable: real-world objects come in different sizes, and an object may look bigger or smaller depending on the positions of the object and the camera. The nature of a CNN dictates that delicate small-scale features are captured in the early layers, whereas, as one moves through the depth of the network, the features become more specific to larger objects.</li>
<li>For example, a tiny car in a scene has a much smaller chance of being captured in the higher layers due to operations like pooling or down-sampling. It is often beneficial to extract information from feature maps of various scales to create segmentations that are agnostic to the size of the object in the image. Multi-scale auto-encoder models [33] consider activations of different resolutions to produce the image segmentation output.</li>
</ul>
<p><strong>PSPNet</strong></p>
<ul>
<li>The pyramid scene parsing network [220] was built upon the FCN-based pixel-level classification network. The feature maps from a ResNet-101 network are converted to activations of different resolutions through multi-scale pooling layers, which are later upsampled and concatenated with the original feature map to perform segmentation (refer fig. 10). The learning process in deep networks like ResNet was further optimized by using auxiliary classifiers.</li>
</ul>
<div align="center"><img src="./resource/fig10.png" width="600"></div>
<center>fig. 10 Schematic diagram of PSPNet</center>
<ul>
<li>The different types of pooling modules focus on different areas of the activation map. Pooling kernels of various sizes, such as 1 × 1, 2 × 2, 3 × 3 and 6 × 6, look into different areas of the activation map to create the spatial pooling pyramid. On the ImageNet scene parsing challenge, PSPNet scored a mean IoU of 57.21, compared with 44.80 for FCN and 40.79 for SegNet.</li>
</ul>
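<p>The pyramid pooling idea above can be sketched in a few lines: pool the same activation map into several bin sizes, upsample each pooled grid back to the input resolution, and stack the results. The toy sketch below uses plain Python lists; the bin sizes follow the 1, 2, 3, 6 scheme mentioned above, while average pooling and nearest-neighbour upsampling are simplifying assumptions rather than the exact PSPNet operations.</p>

```python
def avg_pool_to_bins(fmap, bins):
    """Average-pool an h x w map into a bins x bins grid."""
    h, w = len(fmap), len(fmap[0])
    out = []
    for bi in range(bins):
        row = []
        r0, r1 = bi * h // bins, (bi + 1) * h // bins
        for bj in range(bins):
            c0, c1 = bj * w // bins, (bj + 1) * w // bins
            vals = [fmap[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

def upsample_nearest(fmap, h, w):
    """Nearest-neighbour upsample a small grid back to h x w."""
    bh, bw = len(fmap), len(fmap[0])
    return [[fmap[r * bh // h][c * bw // w] for c in range(w)] for r in range(h)]

def pyramid_features(fmap, bin_sizes=(1, 2, 3, 6)):
    """One pooled-and-upsampled map per pyramid level, plus the original."""
    h, w = len(fmap), len(fmap[0])
    levels = [upsample_nearest(avg_pool_to_bins(fmap, b), h, w) for b in bin_sizes]
    return [fmap] + levels  # concatenated along the channel axis in practice

fmap = [[float(r * 6 + c) for c in range(6)] for r in range(6)]
feats = pyramid_features(fmap)
# 5 "channels": the original map plus 4 pyramid levels, all at input resolution
print(len(feats), len(feats[0]), len(feats[0][0]))
```

<p>The coarsest level (a single bin) summarizes global context, while finer levels keep more local detail; the real network applies a 1×1 convolution to each level before concatenation.</p>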
<p><strong>RefineNet</strong></p>
<ul>
<li>Working with features from the last layer of a CNN produces soft boundaries for the object segments. This issue was avoided in the DeepLab algorithms with atrous convolutions. RefineNet [120] takes an alternative approach: it refines intermediate activation maps and hierarchically concatenates them, combining multi-scale activations while preventing loss of sharpness. The network consists of a separate RefineNet module for each block of the ResNet. Each RefineNet module is made up of three main blocks, namely the residual convolution unit (RCU), multi-resolution fusion (MRF) and chained residual pooling (CRP) (Refer fig. 11).</li>
</ul>
<div align="center"><img src="./resource/fig11.png" width="600"></div>
<center>fig. 11 Schematic diagram of RefineNet</center>
<ul>
<li>The RCU block consists of an adaptive convolution set that fine-tunes the pre-trained ResNet weights for the segmentation problem. The MRF layer fuses activations of different resolutions using convolution and upsampling layers to create a higher-resolution map. Finally, in the CRP layer, pooling kernels of multiple sizes are applied to the activations to capture background context from large image areas. RefineNet was tested on the Person-Part dataset, where it obtained an IoU of 68.6 compared to 64.9 for DeepLab-v2, both of which used ResNet-101 as a feature extractor.</li>
</ul>
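<p>Since the comparisons above are stated in terms of mean IoU, a minimal sketch of how that metric is computed may help: per-class intersection over union, averaged over the classes present. The toy label lists below (flattened images) are made up for illustration.</p>

```python
def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union across classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:                      # skip classes absent from both
            ious.append(inter / union)
    return sum(ious) / len(ious)

gt   = [0, 0, 1, 1, 1, 2, 2, 2]
pred = [0, 1, 1, 1, 2, 2, 2, 2]
print(round(mean_iou(pred, gt, 3), 4))  # -> 0.5833
```

<p>Benchmarks typically accumulate the per-class intersections and unions over the whole test set before taking the ratio, rather than averaging per-image scores.</p>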
<h2 class="mume-header" id="42-convolutional-autoencoders">4.2 Convolutional autoencoders</h2>

<ul>
<li>The last subsection dealt with discriminative models that perform pixel-level classification to address image segmentation problems. Another line of thought draws its inspiration from autoencoders. Autoencoders have traditionally been used for feature extraction from input samples while trying to retain most of the original information.</li>
<li>An autoencoder is basically composed of an encoder, which encodes the raw input into a possibly lower-dimensional intermediate representation, and a decoder, which attempts to reconstruct the original input from that intermediate representation. The loss is computed in terms of the difference between the raw input image and the reconstructed output image.</li>
<li>The generative nature of the decoder part has often been modified and used for image segmentation purposes. Unlike traditional autoencoders, during segmentation the loss is computed in terms of the difference between the reconstructed pixel-level class distribution and the desired pixel-level class distribution. This kind of segmentation is more of a generative procedure compared with the classification approach of the RCNN or DeepLab algorithms.</li>
<li>The challenge with such approaches is preventing over-abstraction of images during the encoding process. Their primary benefit is the ability to generate sharper boundaries with far less complication. Unlike the classification approaches, the generative nature of the decoder can learn to create delicate boundaries based on the extracted features.</li>
<li>The major issue that affects these algorithms is the level of abstraction. It has been observed that, without proper modification, the reduction in the size of the feature map creates inconsistencies during reconstruction. In the paradigm of convolutional neural networks, the encoding is basically a series of convolution and pooling layers, or strided convolutions. The reconstruction, however, can be tricky. The commonly used techniques for decoding from a lower-dimensional feature are transposed convolutions and unpooling layers.</li>
<li>One of the main advantages of an autoencoder-based approach over a plain convolutional feature extractor is the freedom in choosing the input size. With a clever use of down-sampling and up-sampling operations it is possible to output a pixel-level probability map at the same resolution as the input image. This benefit has made encoder-decoder architectures with multi-scale feature forwarding ubiquitous for networks where the input size is not predetermined and an output of the same size as the input is needed.</li>
</ul>
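<p>The same-resolution property above follows from matching the down- and up-sampling factors. A toy 1-D sketch, assuming stride-2 average pooling for the encoder and nearest-neighbour upsampling for the decoder (stand-ins for the learned layers): the output length equals the input length for any input divisible by 2<sup>depth</sup>.</p>

```python
def downsample(x):                 # stride-2 average pooling
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def upsample(x):                   # nearest-neighbour, factor 2
    return [v for v in x for _ in range(2)]

def encode_decode(x, depth=2):
    """Halve the resolution `depth` times, then double it back."""
    for _ in range(depth):
        x = downsample(x)
    for _ in range(depth):
        x = upsample(x)
    return x

for n in (8, 16, 64):              # any length divisible by 2**depth works
    assert len(encode_decode(list(range(n)))) == n
print("output resolution always matches the input")
```
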
<p><strong>Transposed Convolution</strong></p>
<ul>
<li>Transposed convolution, also known as convolution with fractional strides, was introduced to reverse the effects of a traditional convolution operation [156, 53]. It is often referred to as deconvolution. However, deconvolution as defined in signal processing differs from transposed convolution in its basic formulation, although they effectively address the same problem.</li>
</ul>
<div align="center"><img src="./resource/fig12.png" width="600"></div>
<center>fig. 12 Ordinary convolution with integer stride (left) and transposed convolution with fractional stride (right)</center>
<ul>
<li>In a convolution operation there is a change in the size of the input based on the amount of padding and the stride of the kernels. As shown in fig. 12, a stride of 2 will create half the number of activations compared with a stride of 1. For a transposed convolution to work, padding and stride should be controlled in a way that reverses the size change. This is achieved by dilating the input space. Note that unlike atrous convolutions, where the kernels are dilated, here the input space is dilated.</li>
</ul>
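<p>The input-dilation view above can be sketched directly in 1-D: insert stride−1 zeros between the input samples, pad, and then run an ordinary stride-1 convolution. The kernel values are arbitrary toy numbers, not learned weights.</p>

```python
def conv1d(x, k):
    """Ordinary stride-1 valid convolution (no kernel flip, as in deep learning)."""
    return [sum(x[i + j] * k[j] for j in range(len(k)))
            for i in range(len(x) - len(k) + 1)]

def transposed_conv1d(x, k, stride=2):
    dilated = []
    for i, v in enumerate(x):       # dilate the *input*, not the kernel
        dilated.append(v)
        if i < len(x) - 1:
            dilated.extend([0] * (stride - 1))
    pad = [0] * (len(k) - 1)        # full padding on both sides
    return conv1d(pad + dilated + pad, k)

x = [1, 2, 3]
y = transposed_conv1d(x, [1, 1], stride=2)
print(y, len(y))   # -> [1, 1, 2, 2, 3, 3] 6, i.e. length 2*(3-1)+2
```

<p>With the kernel [1, 1] the operation degenerates to nearest-neighbour duplication, which makes the size reversal easy to see; learned kernels produce smoother upsampling.</p>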
<p><strong>Unpooling</strong></p>
<ul>
<li>Another approach to reducing the size of the activations is through pooling layers. A 2×2 pooling layer with a stride of two reduces the height and width of the image by a factor of 2. In such a pooling layer, a 2×2 neighborhood of pixels is compressed to a single pixel. Different types of pooling perform the compression in different ways. Max-pooling takes the maximum activation value among the 4 pixels, while average pooling takes their average. A corresponding unpooling layer decompresses a single pixel to a neighborhood of 2 × 2 pixels to double the height and width of the image.</li>
</ul>
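<p>The two pooling variants described above, sketched on a toy 4×4 map with plain Python lists; the values are made up for illustration.</p>

```python
def pool2x2(fmap, op):
    """Apply `op` to each non-overlapping 2x2 patch (stride 2)."""
    out = []
    for r in range(0, len(fmap), 2):
        row = []
        for c in range(0, len(fmap[0]), 2):
            patch = [fmap[r][c], fmap[r][c + 1],
                     fmap[r + 1][c], fmap[r + 1][c + 1]]
            row.append(op(patch))
        out.append(row)
    return out

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 1],
        [0, 1, 5, 6],
        [2, 2, 7, 8]]

print(pool2x2(fmap, max))                    # max pooling -> [[4, 2], [2, 8]]
print(pool2x2(fmap, lambda p: sum(p) / 4))   # average pooling
```
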
<h3 class="mume-header" id="421-skip-connections">4.2.1 Skip Connections</h3>

<ul>
<li>Linear skip connections have often been used in convolutional neural networks to improve gradient flow across a large number of layers [78]. As the depth of a network increases, the activation maps tend to focus on more and more abstract concepts. Skip connections have proved very effective at combining different levels of abstraction from different layers to generate crisp segmentation maps.</li>
</ul>
<p><strong>U-NET</strong></p>
<ul>
<li>
<p>The U-Net architecture, proposed in 2015, proved to be quite efficient for a variety of problems such as segmentation of neuronal structures, radiography, and cell tracking challenges [177]. The network is characterized by an encoder with a series of convolution and max pooling layers. The decoder contains a mirrored sequence of convolutions and transposed convolutions. As described so far, it behaves as a traditional autoencoder. It has already been mentioned how the level of abstraction plays an important role in the quality of image segmentation.</p>
</li>
<li>
<p>To consider various levels of abstraction, U-Net implements skip connections that copy the uncompressed activations from encoding blocks to their mirrored counterparts among the decoding blocks, as shown in fig. 13. The feature extractor of the U-Net can also be upgraded to provide better segmentation maps. The network nicknamed &quot;The one hundred layers Tiramisu&quot; [88] applied the concept of U-Net using a DenseNet-based feature extractor. Other modern variations involve the use of capsule networks [183] along with locally constrained routing [108].</p>
</li>
<li>
<p>U-Net was the winner of an ISBI cell tracking challenge. On the PhC-U373 dataset it scored a mean IoU of 0.9203, whereas the second best was at 0.83. On the DIC-HeLa dataset it scored a mean IoU of 0.7756, significantly better than the second-best approach, which scored only 0.46.</p>
</li>
</ul>
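<p>A shape-level sketch of the U-Net pattern described above: the encoder stores each pre-pooling activation, and each decoder stage upsamples and concatenates the activation copied from its mirrored encoder stage. The channel counts and depth below are illustrative assumptions, not the exact U-Net configuration.</p>

```python
def unet_shapes(in_hw=64, depth=3, base_ch=16):
    """Track (stage, channels, spatial size) through a U-Net-like network."""
    skips, ch, hw = [], base_ch, in_hw
    path = [("enc", ch, hw)]
    for _ in range(depth):                 # encoder: conv (double ch) + pool
        skips.append((ch, hw))             # copied across the skip connection
        ch, hw = ch * 2, hw // 2
        path.append(("enc", ch, hw))
    for s_ch, s_hw in reversed(skips):     # decoder: upsample + concat skip
        hw = hw * 2
        ch = ch // 2 + s_ch                # transposed-conv channels + skip channels
        path.append(("dec", ch, hw))
        ch = s_ch                          # convs reduce channels after fusion
    return path

for stage in unet_shapes():
    print(stage)
```

<p>The final decoder stage recovers the input resolution, and every concatenation mixes a coarse, abstract map with a fine, uncompressed one, which is what sharpens the boundaries.</p>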
<h3 class="mume-header" id="422-forwarding-pooling-indices">4.2.2 Forwarding pooling indices</h3>

<ul>
<li>Max-pooling has been the most commonly used technique for reducing the size of the activation maps, for various reasons. The activations represent the response of a region of an image to a specific kernel. In max pooling, a region of pixels is compressed to a single value by considering only the maximum response obtained within that region. If a typical autoencoder compresses a 2×2 neighborhood of pixels to a single pixel in the encoding phase, the decoder must decompress the pixel back to a 2 × 2 region. By forwarding pooling indices, the network essentially remembers the location of the maximum value among the 4 pixels while performing max-pooling. The index corresponding to the maximum value is forwarded to the decoder (Refer fig. 14), so that during the unpooling operation the value from the single pixel can be copied to the corresponding location in the 2 × 2 region in the next layer [215]. The values in the rest of the three positions are computed in the subsequent convolutional layers. If the value were copied to a random location without knowledge of the pooling indices, there would be inconsistencies in classification, especially in the boundary regions.</li>
</ul>
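<p>The index-forwarding mechanism described above can be sketched directly: max-pooling records the argmax position inside each 2×2 window, and the paired unpooling layer copies each value back to exactly that position, leaving zeros elsewhere to be filled in by later convolutions. Toy values, plain Python.</p>

```python
def maxpool_with_indices(fmap):
    """2x2 stride-2 max pooling that also returns the argmax coordinates."""
    pooled, indices = [], []
    for r in range(0, len(fmap), 2):
        prow, irow = [], []
        for c in range(0, len(fmap[0]), 2):
            patch = {(r + dr, c + dc): fmap[r + dr][c + dc]
                     for dr in (0, 1) for dc in (0, 1)}
            pos = max(patch, key=patch.get)
            prow.append(patch[pos])
            irow.append(pos)
        pooled.append(prow)
        indices.append(irow)
    return pooled, indices

def unpool_with_indices(pooled, indices, out_h, out_w):
    out = [[0] * out_w for _ in range(out_h)]
    for r, row in enumerate(pooled):
        for c, v in enumerate(row):
            rr, cc = indices[r][c]
            out[rr][cc] = v        # value lands exactly where the max came from
    return out

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 1],
        [0, 1, 5, 6],
        [2, 2, 7, 8]]
pooled, idx = maxpool_with_indices(fmap)
restored = unpool_with_indices(pooled, idx, 4, 4)
print(pooled)     # [[4, 2], [2, 8]]
print(restored)   # sparse map with each max back at its original position
```
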
<p><strong>SegNet</strong></p>
<ul>
<li>The SegNet algorithm [9] was launched in 2015 to compete with the FCN network on complex indoor and outdoor images. The architecture is composed of 5 encoding blocks and 5 decoding blocks. The encoding blocks follow the architecture of the feature extractor in the VGG-16 network. Each block is a sequence of multiple convolution, batch normalization and ReLU layers. Each encoding block ends with a max-pooling layer where the indices are stored. Each decoding block begins with an unpooling layer where the saved pooling indices are used (Refer fig. 15). The indices from the max-pooling layer of the ith block in the encoder are forwarded to the max-unpooling layer in the (L−i+1)th block in the decoder, where L is the total number of blocks in each of the encoder and decoder. On the CamVid dataset the SegNet architecture scored an mIoU of 60.10, compared with 53.88 for DeepLab-LargeFOV [31], 49.83 for FCN [130] and 59.77 for DeconvNet [156].</li>
</ul>
<h2 class="mume-header" id="43-adversarial-models">4.3 Adversarial Models</h2>

<ul>
<li>
<p>Until now, we have seen purely discriminative models like FCN, DeepMask and DeepLab that primarily generate a probability distribution for every pixel across the number of classes. Autoencoders treat segmentation as a generative process; however, the last layer is generally connected to a pixel-wise soft-max classifier. The adversarial learning framework approaches the optimization problem from a different perspective. Generative Adversarial Networks (GANs) gained a lot of popularity due to their remarkable performance as generative networks. The adversarial learning framework mainly consists of two networks: a generator network and a discriminator network. The generator G tries to generate images like the ones from the training dataset using a noisy input prior distribution called p<sub>z</sub>(z). The network G(z; θ<sub>g</sub>) represents a differentiable function realized by a neural network with weights θ<sub>g</sub>. A discriminator network tries to correctly guess whether an input sample comes from the training data distribution (p<sub>data</sub>(x)) or was generated by the generator G. The goal of the discriminator is to get better at catching fake images, while the generator tries to get better at fooling the discriminator, in the process generating better outputs. The entire optimization process can be written as a min-max problem as follows:</p>
<p align="center">min<sub>G</sub> max<sub>D</sub> V(D, G) = E<sub>x∼p<sub>data</sub>(x)</sub>[log D(x)] + E<sub>z∼p<sub>z</sub>(z)</sub>[log(1 − D(G(z)))]</p>
</li>
<li>
<p>The segmentation problem has also been approached from an adversarial learning perspective. The segmentation network is treated as a generator that generates the segmentation masks for each class, whereas a discriminator network tries to predict whether a set of masks comes from the ground truth or from the output of the generator [133]. A schematic diagram of the process is shown in fig. 20. Furthermore, conditional GANs have been used to perform image-to-image translation [86]. This framework can be used for image segmentation problems where the semantic boundaries of the image and the output segmentation map do not necessarily coincide, for example when creating a schematic diagram of the façade of a building.</p>
</li>
</ul>
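<p>The min-max objective above can be read numerically: for a batch of discriminator outputs, V(D, G) averages log D(x) over real samples and log(1 − D(G(z))) over generated ones. A toy sketch, with made-up probability values for illustration.</p>

```python
import math

def gan_value(d_real, d_fake):
    """V(D, G): d_real are D's outputs on real data, d_fake on G's samples."""
    term_real = sum(math.log(p) for p in d_real) / len(d_real)
    term_fake = sum(math.log(1 - p) for p in d_fake) / len(d_fake)
    return term_real + term_fake

# A confident, accurate discriminator -> value close to 0 (its maximum)
good_d = gan_value([0.9, 0.95], [0.05, 0.1])
# A discriminator fooled by the generator -> a much lower value
fooled_d = gan_value([0.6, 0.55], [0.5, 0.45])
print(good_d > fooled_d)  # the generator "wins" by driving V down
```

<p>The discriminator updates its weights to push this value up, the generator to push it down; at the theoretical equilibrium D outputs 0.5 everywhere.</p>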
<h2 class="mume-header" id="44-sequential-models">4.4 Sequential Models</h2>

<ul>
<li>Till now, almost all the techniques discussed deal with semantic image segmentation. Another class of segmentation problem, namely instance-level segmentation, needs a slightly different approach. Unlike semantic image segmentation, here each instance of the same object class is segmented separately. This type of segmentation problem is mostly handled by learning to produce a sequence of object segments as outputs. Hence, sequential models come into play in such problems. Some of the main architectures commonly used are convolutional LSTMs, recurrent networks, attention-based models and so on.</li>
</ul>
<h3 class="mume-header" id="441-recurrent-models">4.4.1 Recurrent Models</h3>

<ul>
<li>Traditional LSTM networks employ fully connected weights to model long- and short-term memories across sequential inputs. But they fail to capture the spatial information of images. Moreover, fully connected weights for images increase the cost of computation by a great extent. In convolutional LSTMs [176] these weights are replaced by convolutional layers. Convolutional LSTMs have been used in several works to perform instance-level segmentation. Normally they are used as a suffix to an object segmentation network. The purpose of a recurrent model like the LSTM is to select each instance of the object at a different time step of the sequential output. The approach has been implemented with object segmentation frameworks like FCN and U-Net [28].</li>
</ul>
<h3 class="mume-header" id="442-attention-models">4.4.2 Attention Models</h3>

<ul>
<li>While convolutional LSTMs can select different instances of objects at different time steps, attention models are designed to have more control over this process of localizing individual instances. One simple method to control attention is spatial inhibition [176]. A spatial inhibition network is designed to learn a bias parameter that cuts off previously detected segments from future activations. Attention models have been further developed with the introduction of a dedicated attention module and an external memory to keep track of segments. In the works of [174], the instance segmentation network was divided into 4 modules. First, an external memory provides object boundary details from all previous steps. Second, a box network attempts to predict the location of the next instance of the object and outputs a sub-region of the image for the third module, the segmentation module. The segmentation module is similar to the convolutional autoencoder models discussed previously. The fourth module scores the predicted segments based on whether they qualify as a proper instance of the object. The network terminates when the score goes below a user-defined threshold.</li>
</ul>
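<p>The spatial-inhibition idea above amounts to suppressing activations at pixels covered by previously detected instance masks before predicting the next instance. A hard-masking sketch with toy numbers (the learned version uses a soft bias rather than zeroing).</p>

```python
def inhibit(activation, prev_masks):
    """Zero out activations under any previously detected instance mask."""
    h, w = len(activation), len(activation[0])
    out = [row[:] for row in activation]
    for mask in prev_masks:
        for r in range(h):
            for c in range(w):
                if mask[r][c]:
                    out[r][c] = 0.0   # cut previously segmented pixels
    return out

act = [[0.9, 0.8, 0.1],
       [0.7, 0.9, 0.2],
       [0.1, 0.2, 0.8]]
first_instance = [[1, 1, 0],
                  [1, 1, 0],
                  [0, 0, 0]]
nxt = inhibit(act, [first_instance])
print(nxt)  # the top-left object no longer competes for attention
```
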
<h2 class="mume-header" id="45-weakly-supervised-or-unsupervised-models">4.5 Weakly Supervised or Unsupervised Models</h2>

<ul>
<li>
<p>Neural networks are generally trained with algorithms like back-propagation, where the parameters w are updated based on their local partial derivatives with respect to an error value E obtained using a loss function f.</p>
</li>
<li>
<p>The loss function is generally expressed in terms of a distance between a target value and the predicted value. But in many scenarios image segmentation must work with data that has no ground-truth annotations. This has led to the development of unsupervised image segmentation techniques. One of the straightforward ways to achieve this is to use networks pre-trained on other, larger datasets with similar kinds of samples and ground truths, and to apply clustering algorithms like K-means on the feature maps. However, this kind of semi-supervised technique is inefficient for data samples that have a unique distribution of the sample space. Another drawback is that the network is trained to perform on an input distribution that is still different from the test data, which does not allow the network to perform to its full potential. The key problem in a fully unsupervised segmentation algorithm is the development of a loss function capable of measuring the quality of segments or clusters of pixels. With all these limitations, the literature is comparatively much sparser when it comes to weakly supervised or unsupervised approaches.</p>
</li>
</ul>
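<p>The semi-supervised route mentioned above boils down to clustering per-pixel feature vectors from a pre-trained extractor. A minimal pure-Python K-means on toy 2-D features (real pipelines cluster much higher-dimensional features and use a better initialisation than taking the first k points).</p>

```python
def kmeans(points, k, iters=20):
    """Lloyd's algorithm on tuples; returns a cluster label per point."""
    centers = points[:k]                       # naive initialisation (assumption)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: sum((a - b) ** 2
                                                for a, b in zip(p, centers[j])))
            groups[j].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return [min(range(k), key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centers[j])))
            for p in points]

# two well-separated "pixel feature" clusters
feats = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (4.9, 5.0), (5.2, 4.8)]
labels = kmeans(feats, 2)
print(labels)
```

<p>Each cluster label then becomes a tentative segment; the limitation noted above is that nothing in this loss measures whether the clusters are semantically meaningful segments.</p>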
<h3 class="mume-header" id="451-weakly-supervised-algorithms">4.5.1 Weakly supervised algorithms</h3>

<ul>
<li>Even in the absence of proper pixel-level annotations, segmentation algorithms can exploit coarser annotations like bounding boxes or even image-level labels [161, 116] to perform pixel-level segmentation.</li>
</ul>
<p><strong>Exploiting bounding boxes</strong></p>
<ul>
<li>From the angle of data annotation, defining bounding boxes is a much less expensive task than pixel-level segmentation. The availability of datasets with bounding boxes is also much larger than of those with pixel-level segmentations. The bounding box can be used as weak supervision to generate pixel-level segmentation maps. In the work of [42], titled BoxSup, segmentation proposals were generated using region proposal methods like selective search. After that, multi-scale combinatorial grouping is used to combine candidate masks, and the objective is to select the optimal combination that has the highest IoU with the box. This segmentation map is used to tune a traditional image segmentation network like FCN. BoxSup was able to attain an mIoU of 75.1 on the Pascal VOC 2012 test set, compared with 62.2 for FCN and 66.4 for DeepLab-CRF.</li>
</ul>
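<p>The selection step described above, keeping the candidate mask whose IoU with the supervising box is highest, can be sketched with toy masks. The candidates and box below are made up; real candidates come from selective search or MCG.</p>

```python
def box_to_mask(box, h, w):
    """Rasterize a (r0, c0, r1, c1) box into a binary h x w mask."""
    r0, c0, r1, c1 = box
    return [[1 if r0 <= r < r1 and c0 <= c < c1 else 0 for c in range(w)]
            for r in range(h)]

def mask_iou(a, b):
    inter = sum(x & y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x | y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return inter / union

def best_candidate(candidates, box, h, w):
    """Index of the candidate mask best overlapping the supervising box."""
    ref = box_to_mask(box, h, w)
    return max(range(len(candidates)),
               key=lambda i: mask_iou(candidates[i], ref))

h = w = 4
box = (0, 0, 2, 2)                           # rows 0-1, cols 0-1
cands = [box_to_mask((0, 0, 1, 1), h, w),    # too small
         box_to_mask((0, 0, 2, 2), h, w),    # matches the box
         box_to_mask((0, 0, 4, 4), h, w)]    # too large
print(best_candidate(cands, box, h, w))      # -> 1
```

<p>The winning mask then serves as a pseudo ground truth for training a pixel-level network such as FCN, and the two steps are iterated.</p>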

      </div>
      
      
    
    
    
    
    
    
    
    
  
    </body></html>