<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<meta name="generator" content="Doxygen 1.8.11"/>
<title>Superpixel Benchmark: Submission</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="dynsections.js"></script>
<link href="search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="search/searchdata.js"></script>
<script type="text/javascript" src="search/search.js"></script>
<script type="text/javascript">
  $(document).ready(function() { init_search(); });
</script>
<link href="doxygen.css" rel="stylesheet" type="text/css" />
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
 <tbody>
 <tr style="height: 56px;">
  <td id="projectalign" style="padding-left: 0.5em;">
   <div id="projectname">Superpixel Benchmark
   </div>
   <div id="projectbrief">Superpixel benchmark, tools and algorithms.</div>
  </td>
 </tr>
 </tbody>
</table>
</div>
<!-- end header part -->
<!-- Generated by Doxygen 1.8.11 -->
<script type="text/javascript">
var searchBox = new SearchBox("searchBox", "search",false,'Search');
</script>
  <div id="navrow1" class="tabs">
    <ul class="tablist">
      <li><a href="index.html"><span>Main&#160;Page</span></a></li>
      <li class="current"><a href="pages.html"><span>Related&#160;Pages</span></a></li>
      <li><a href="annotated.html"><span>Classes</span></a></li>
      <li><a href="files.html"><span>Files</span></a></li>
      <li>
        <div id="MSearchBox" class="MSearchBoxInactive">
        <span class="left">
          <img id="MSearchSelect" src="search/mag_sel.png"
               onmouseover="return searchBox.OnSearchSelectShow()"
               onmouseout="return searchBox.OnSearchSelectHide()"
               alt=""/>
          <input type="text" id="MSearchField" value="Search" accesskey="S"
               onfocus="searchBox.OnSearchFieldFocus(true)" 
               onblur="searchBox.OnSearchFieldFocus(false)" 
               onkeyup="searchBox.OnSearchFieldChange(event)"/>
          </span><span class="right">
            <a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a>
          </span>
        </div>
      </li>
    </ul>
  </div>
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
     onmouseover="return searchBox.OnSearchSelectShow()"
     onmouseout="return searchBox.OnSearchSelectHide()"
     onkeydown="return searchBox.OnSearchSelectKey(event)">
</div>

<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0" 
        name="MSearchResults" id="MSearchResults">
</iframe>
</div>

</div><!-- top -->
<div class="header">
  <div class="headertitle">
<div class="title">Submission </div>  </div>
</div><!--header-->
<div class="contents">
<div class="textblock"><p>We encourage authors to submit new or adapted superpixel algorithms to keep the benchmark alive. To submit an implementation, follow these steps:</p>
<ul>
<li><a href="#datasets">Preparing the Datasets</a></li>
<li><a href="#implementation">Preparing the Implementation</a></li>
<li><a href="#parameter-optimization">Performing Parameter Optimization</a></li>
<li><a href="#evaluation">Performing Evaluation</a></li>
<li><a href="#submit-implementation">Submit Implementation and Results</a></li>
</ul>
<p>A full example:</p>
<ul>
<li><a class="el" href="EXAMPLE_8md.html">Example</a></li>
</ul>
<h2>Datasets</h2>
<p><b>We are currently working on publishing the converted datasets in the data repository <a href="https://github.com/davidstutz/superpixel-benchmark-data">davidstutz/superpixel-benchmark-data</a>; BSDS500 is still missing.</b></p>
<p>For NYUV2, SBD, SUNRGBD and Fash, the following steps are sufficient to get started:</p>
<ul>
<li>Clone <a href="https://github.com/davidstutz/superpixel-benchmark-data">davidstutz/superpixel-benchmark-data</a>;</li>
<li>Extract the datasets into <code>superpixel-benchmark/data/</code>;</li>
<li>Check the directory structure: ideally, the NYUV2 dataset should be found in <code>superpixel-benchmark/data/NYUV2</code> with subdirectories <code>images</code>, <code>csv_groundTruth</code> and <code>depth</code> (each with <code>test</code> and <code>train</code> subdirectories).</li>
</ul>
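<p>The layout check in the last step can be automated; a minimal Python sketch, assuming exactly the directory names listed above (the helper name is hypothetical):</p>

```python
import os

def check_nyuv2_layout(data_root):
    """Return the list of expected NYUV2 directories that are missing under
    data_root/NYUV2/{images,csv_groundTruth,depth}/{train,test}."""
    missing = []
    for subdir in ("images", "csv_groundTruth", "depth"):
        for split in ("train", "test"):
            path = os.path.join(data_root, "NYUV2", subdir, split)
            if not os.path.isdir(path):
                missing.append(path)
    return missing  # an empty list means the layout is complete
```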
<p>For BSDS500, which is not yet included in <a href="https://github.com/davidstutz/superpixel-benchmark-data">davidstutz/superpixel-benchmark-data</a>, follow the instructions in <a class="el" href="DATASETS_8md.html">Datasets</a>. This requires a working installation of MatLab.</p>
<p>Similarly, for Fash, which is also not included in <a href="https://github.com/davidstutz/superpixel-benchmark-data">davidstutz/superpixel-benchmark-data</a>, follow the instructions in <a class="el" href="DATASETS_8md.html">Datasets</a>.</p>
<p>Algorithms can comfortably be evaluated on individual datasets; not all datasets need to be downloaded or extracted.</p>
<h2>Implementation</h2>
<p><b>The bare minimum:</b> <a class="el" href="classEvaluation.html" title="Provides measures to evaluate (over-) segmentations. ">Evaluation</a> itself only requires the superpixel segmentations as CSV files in a directory where the names correspond to the names of the images. For example, see <code>eval_summary_cli</code> in <a class="el" href="EXECUTABLES_8md.html">Executables</a>.</p>
<p>For fairness, the generated superpixel segmentations are expected to be post-processed using the provided connected components algorithms, i.e. <code><a class="el" href="classSuperpixelTools.html#adc11994f0a1575477c7f4cb04ae90284" title="Relabel superpixels based on connected components. ">SuperpixelTools::relabelConnectedSuperpixels</a></code> in C++ (see <code>lib_eval</code>) or <code>sp_fast_connected_relabel</code> in MatLab (see <code>lib_tools</code>).</p>
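<p>The effect of this post-processing can be illustrated with a simple 4-connected flood fill; an illustrative Python sketch, not the <code>lib_eval</code> implementation:</p>

```python
from collections import deque

def relabel_connected_superpixels(labels):
    """Assign a new, unique label to every 4-connected component, so that
    superpixels split into several disconnected parts receive distinct
    labels (a sketch of what relabeling by connected components does)."""
    height, width = len(labels), len(labels[0])
    relabeled = [[-1] * width for _ in range(height)]
    next_label = 0
    for i in range(height):
        for j in range(width):
            if relabeled[i][j] >= 0:
                continue
            # Breadth-first flood fill over pixels sharing the old label.
            queue = deque([(i, j)])
            relabeled[i][j] = next_label
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < height and 0 <= nx < width
                            and relabeled[ny][nx] < 0
                            and labels[ny][nx] == labels[y][x]):
                        relabeled[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return relabeled
```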
<p>If the algorithm is based on initial markers or a grid-like initialization, <code><a class="el" href="classSuperpixelTools.html#a0816df16890a5ae85051c5aba777bf62" title="Compute region size from desired number of superpixels. ">SuperpixelTools::computeRegionSizeFromSuperpixels</a></code> or <code><a class="el" href="classSuperpixelTools.html#a8790f88cee8b76b27b5ecfb6d26cd70f" title="Compute width and height for the given number of superpixels. ">SuperpixelTools::computeHeightWidthFromSuperpixels</a></code> in C++ (see <code>lib_eval</code>) and <code>sp_region_size_superpixels</code> or <code>sp_height_width_superpixels</code> in MatLab (see <code>lib_tools</code>) should be used.</p>
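<p>A plausible sketch of these computations in Python, assuming square regions derived from the image area (the exact rounding used in <code>lib_eval</code> may differ):</p>

```python
import math

def compute_region_size(width, height, superpixels):
    """Region (step) size such that roughly `superpixels` square regions
    of this size tile a width x height image."""
    return int(math.sqrt(float(width * height) / superpixels))

def compute_height_width(width, height, superpixels):
    """Number of grid regions along each axis for the desired number of
    superpixels in a grid-like initialization."""
    region_size = math.sqrt(float(width * height) / superpixels)
    return (int(math.ceil(height / region_size)),
            int(math.ceil(width / region_size)))
```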
<p><b>Recommended:</b> In addition to the above constraints, all implementations in the benchmark provide an easy-to-use command line tool. It is highly recommended to provide a similar command line tool with at least the following command line options:</p>
<ul>
<li><code>-h</code>: display all available options with descriptions;</li>
<li><code>-i</code>: accepts a directory containing multiple PNG, JPG or JPEG images;</li>
<li><code>-o</code>: accepts a directory which is created and used to store the superpixel segmentations for each image in CSV format - the CSV files should be named according to the images and contain the superpixel labels as integers;</li>
<li><code>-v</code>: accepts a directory which is created and used to store visualizations of the generated superpixel segmentations, e.g. using <code><a class="el" href="visualization_8h.html">lib_eval/visualization.h</a></code>;</li>
<li><code>-w</code>: verbose mode detailing the number of superpixels generated for each image found in the directory provided by <code>-i</code>.</li>
</ul>
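<p>The recommended interface can be sketched with Python's argparse (tool and defaults are hypothetical; <code>-h</code> is added automatically, and the existing <code>*_cli</code> tools define the authoritative behavior):</p>

```python
import argparse

def make_parser():
    """Build an argument parser exposing the recommended options."""
    parser = argparse.ArgumentParser(
        description="Hypothetical superpixel command line tool.")
    parser.add_argument("-i", "--input", required=True,
                        help="directory containing PNG, JPG or JPEG images")
    parser.add_argument("-o", "--csv", required=True,
                        help="output directory for CSV superpixel segmentations")
    parser.add_argument("-v", "--vis", default=None,
                        help="output directory for segmentation visualizations")
    parser.add_argument("-w", "--verbose", action="store_true",
                        help="report the number of superpixels per image")
    return parser
```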
<p>Examples of simple C++ command line tools:</p>
<ul>
<li><code>reseeds_cli</code></li>
<li><code>w_cli</code></li>
<li><code>slic_cli</code></li>
<li><code>cw_cli</code></li>
<li>...</li>
</ul>
<p>Examples of non-C++ command line tools:</p>
<ul>
<li><code>wp_cli</code> (Python)</li>
<li><code>pf_cli</code> (Java)</li>
<li><code>rw_cli</code> (MatLab)</li>
<li>...</li>
</ul>
<p>Examples for converting superpixel boundaries to superpixel segmentations:</p>
<ul>
<li><code>w_cli</code></li>
<li><code>tp_cli</code></li>
</ul>
<h2>Parameter Optimization</h2>
<p>Parameters should be optimized or chosen on the validation sets. Depending on the algorithm/implementation to be submitted, there are two cases:</p>
<ul>
<li>Good parameters are already known (e.g. from theory, from similar experiments or by construction);</li>
<li>or the parameters need to be optimized, i.e. found.</li>
</ul>
<p>In the first case, this step can be skipped. Next step: <a href="#evaluation">Performing Evaluation</a>.</p>
<p>For the second case, <code>eval_parameter_optimization_cli</code> might be used to perform parameter optimization. Details can be found in <code><a class="el" href="eval__parameter__optimization__cli_2main_8cpp.html">eval_parameter_optimization_cli/main.cpp</a></code> or in the full examples: <a class="el" href="EXAMPLE_8md.html">Example</a>.</p>
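<p>Conceptually, parameter optimization is a search over parameter combinations scored on the validation set; a minimal grid-search sketch in Python, where the score function stands in for running the algorithm and evaluating the results:</p>

```python
import itertools

def grid_search(parameter_grid, score):
    """Exhaustively evaluate all parameter combinations from a dict of
    name -> candidate values, returning the best-scoring combination
    together with its score (higher is better)."""
    names = sorted(parameter_grid)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*(parameter_grid[n] for n in names)):
        params = dict(zip(names, values))
        current = score(params)
        if current > best_score:
            best_params, best_score = params, current
    return best_params, best_score
```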
<h2><a class="el" href="classEvaluation.html" title="Provides measures to evaluate (over-) segmentations. ">Evaluation</a></h2>
<p><a class="el" href="classEvaluation.html" title="Provides measures to evaluate (over-) segmentations. ">Evaluation</a> consists of two steps. In the first step, <code>eval_summary_cli</code> is used to evaluate the generated superpixel segmentations against the ground truth. The result is an evaluation summary consisting of several metrics and statistics. Details on <code>eval_summary_cli</code> can also be found in <a class="el" href="EXECUTABLES_8md.html">Executables</a>. To match the experiments presented in the paper, evaluation has to be run for the following numbers of superpixels: </p><pre class="fragment">200, 300, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000, 2400, 2800, 3200, 3600, 4000, 4600, 5200
</pre><p>Using the <code>--append-file</code> option, the results can be gathered in a single CSV file. This file is required for the next step.</p>
<p>Given this single CSV file, <code>eval_average_cli</code> is used to compute the average statistics for Boundary Recall, Undersegmentation Error and Explained Variation. In the first draft of the paper, these metrics were called Average Boundary Recall, Average Undersegmentation Error and Average Explained Variation; in the latest version, they are called Average Miss Rate, Average Undersegmentation Error and Average Unexplained Variation. See <a class="el" href="EXECUTABLES_8md.html">Executables</a> for details. Note that <code>eval_average_cli</code> assumes results to be available in the interval [200, 5200] as discussed above.</p>
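<p>Gathering the results over all superpixel counts can be scripted; a hedged Python sketch that only builds the <code>eval_summary_cli</code> invocations (the directory layout and any flags beyond <code>--append-file</code> are assumptions):</p>

```python
SUPERPIXEL_COUNTS = [200, 300, 400, 600, 800, 1000, 1200, 1400, 1600,
                     1800, 2000, 2400, 2800, 3200, 3600, 4000, 4600, 5200]

def evaluation_commands(algorithm, append_file):
    """Build one eval_summary_cli invocation per superpixel count,
    gathering all results in a single CSV file via --append-file."""
    commands = []
    for count in SUPERPIXEL_COUNTS:
        commands.append("eval_summary_cli {}/{} --append-file {}".format(
            algorithm, count, append_file))
    return commands
```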
<p>Examples for both <code>eval_summary_cli</code> and <code>eval_average_cli</code>:</p>
<ul>
<li><code>evaluate_w.sh</code></li>
<li><code>compare_fh_refh.sh</code></li>
<li><code>compare_seeds_reseeds.sh</code></li>
</ul>
<h2>Submit Implementation</h2>
<p>In the spirit of reproducible and fair research, a submission consists of the source code, the command line tool, a bash script used for evaluation, as well as the results of the evaluation in the form of the summary (as CSV file) and the average metrics (as CSV file). The source code and the command line tool should be accompanied by build instructions where applicable, and the provided evaluation results should be reproducible by running the evaluation script.</p>
<p>As also pointed out above, the following aspects are critical for a fair comparison:</p>
<ul>
<li>Parameters are optimized on the validation sets;</li>
<li>the computed superpixel segmentations are post-processed using one of the provided connected components algorithms;</li>
<li>grid-like initializations (if applicable) are chosen using the provided schemes;</li>
<li>and evaluation is performed for the number of superpixels as discussed above in the range [200, 5200].</li>
</ul>
<p>Please provide the source code and scripts, e.g. zipped, by mail. As we encourage authors to also make their algorithms publicly available, the source code and results can also be made available through a public repository (e.g. GitHub, BitBucket). The easiest way is to work with a fork of this repository, i.e. davidstutz/superpixel-benchmark. For up-to-date contact information see <a href="http://davidstutz.de/projects/superpixel-benchmark/">davidstutz.de/projects/superpixel-benchmark</a> or the repository's root <code>README.md</code>. </p>
</div></div><!-- contents -->
<!-- start footer part -->
<hr class="footer"/><address class="footer"><small>
Generated on Sun Apr 16 2017 16:39:17 for Superpixel Benchmark by &#160;<a href="http://www.doxygen.org/index.html">
<img class="footer" src="doxygen.png" alt="doxygen"/>
</a> 1.8.11
</small></address>
</body>
</html>
