<!DOCTYPE html><html><head>
      <title>README</title>
      <meta charset="utf-8">
      <meta name="viewport" content="width=device-width, initial-scale=1.0">
      
      
      
      
      
      
      
      
      
      
      <style>
      /**
 * prism.js Github theme based on GitHub's theme.
 * @author Sam Clarke
 */
code[class*="language-"],
pre[class*="language-"] {
  color: #333;
  background: none;
  font-family: Consolas, "Liberation Mono", Menlo, Courier, monospace;
  text-align: left;
  white-space: pre;
  word-spacing: normal;
  word-break: normal;
  word-wrap: normal;
  line-height: 1.4;

  -moz-tab-size: 8;
  -o-tab-size: 8;
  tab-size: 8;

  -webkit-hyphens: none;
  -moz-hyphens: none;
  -ms-hyphens: none;
  hyphens: none;
}

/* Code blocks */
pre[class*="language-"] {
  padding: .8em;
  overflow: auto;
  /* border: 1px solid #ddd; */
  border-radius: 3px;
  /* background: #fff; */
  background: #f5f5f5;
}

/* Inline code */
:not(pre) > code[class*="language-"] {
  padding: .1em;
  border-radius: .3em;
  white-space: normal;
  background: #f5f5f5;
}

.token.comment,
.token.blockquote {
  color: #969896;
}

.token.cdata {
  color: #183691;
}

.token.doctype,
.token.punctuation,
.token.variable,
.token.macro.property {
  color: #333;
}

.token.operator,
.token.important,
.token.keyword,
.token.rule,
.token.builtin {
  color: #a71d5d;
}

.token.string,
.token.url,
.token.regex,
.token.attr-value {
  color: #183691;
}

.token.property,
.token.number,
.token.boolean,
.token.entity,
.token.atrule,
.token.constant,
.token.symbol,
.token.command,
.token.code {
  color: #0086b3;
}

.token.tag,
.token.selector,
.token.prolog {
  color: #63a35c;
}

.token.function,
.token.namespace,
.token.pseudo-element,
.token.class,
.token.class-name,
.token.pseudo-class,
.token.id,
.token.url-reference .token.variable,
.token.attr-name {
  color: #795da3;
}

.token.entity {
  cursor: help;
}

.token.title,
.token.title .token.punctuation {
  font-weight: bold;
  color: #1d3e81;
}

.token.list {
  color: #ed6a43;
}

.token.inserted {
  background-color: #eaffea;
  color: #55a532;
}

.token.deleted {
  background-color: #ffecec;
  color: #bd2c00;
}

.token.bold {
  font-weight: bold;
}

.token.italic {
  font-style: italic;
}


/* JSON */
.language-json .token.property {
  color: #183691;
}

.language-markup .token.tag .token.punctuation {
  color: #333;
}

/* CSS */
code.language-css,
.language-css .token.function {
  color: #0086b3;
}

/* YAML */
.language-yaml .token.atrule {
  color: #63a35c;
}

code.language-yaml {
  color: #183691;
}

/* Ruby */
.language-ruby .token.function {
  color: #333;
}

/* Markdown */
.language-markdown .token.url {
  color: #795da3;
}

/* Makefile */
.language-makefile .token.symbol {
  color: #795da3;
}

.language-makefile .token.variable {
  color: #183691;
}

.language-makefile .token.builtin {
  color: #0086b3;
}

/* Bash */
.language-bash .token.keyword {
  color: #0086b3;
}

/* highlight */
pre[data-line] {
  position: relative;
  padding: 1em 0 1em 3em;
}
pre[data-line] .line-highlight-wrapper {
  position: absolute;
  top: 0;
  left: 0;
  background-color: transparent;
  display: block;
  width: 100%;
}

pre[data-line] .line-highlight {
  position: absolute;
  left: 0;
  right: 0;
  padding: inherit 0;
  margin-top: 1em;
  background: hsla(24, 20%, 50%,.08);
  background: linear-gradient(to right, hsla(24, 20%, 50%,.1) 70%, hsla(24, 20%, 50%,0));
  pointer-events: none;
  line-height: inherit;
  white-space: pre;
}

pre[data-line] .line-highlight:before, 
pre[data-line] .line-highlight[data-end]:after {
  content: attr(data-start);
  position: absolute;
  top: .4em;
  left: .6em;
  min-width: 1em;
  padding: 0 .5em;
  background-color: hsla(24, 20%, 50%,.4);
  color: hsl(24, 20%, 95%);
  font: bold 65%/1.5 sans-serif;
  text-align: center;
  vertical-align: .3em;
  border-radius: 999px;
  text-shadow: none;
  box-shadow: 0 1px white;
}

pre[data-line] .line-highlight[data-end]:after {
  content: attr(data-end);
  top: auto;
  bottom: .4em;
}html body{font-family:"Helvetica Neue",Helvetica,"Segoe UI",Arial,freesans,sans-serif;font-size:16px;line-height:1.6;color:#333;background-color:#fff;overflow:initial;box-sizing:border-box;word-wrap:break-word}html body>:first-child{margin-top:0}html body h1,html body h2,html body h3,html body h4,html body h5,html body h6{line-height:1.2;margin-top:1em;margin-bottom:16px;color:#000}html body h1{font-size:2.25em;font-weight:300;padding-bottom:.3em}html body h2{font-size:1.75em;font-weight:400;padding-bottom:.3em}html body h3{font-size:1.5em;font-weight:500}html body h4{font-size:1.25em;font-weight:600}html body h5{font-size:1.1em;font-weight:600}html body h6{font-size:1em;font-weight:600}html body h1,html body h2,html body h3,html body h4,html body h5{font-weight:600}html body h5{font-size:1em}html body h6{color:#5c5c5c}html body strong{color:#000}html body del{color:#5c5c5c}html body a:not([href]){color:inherit;text-decoration:none}html body a{color:#08c;text-decoration:none}html body a:hover{color:#00a3f5;text-decoration:none}html body img{max-width:100%}html body>p{margin-top:0;margin-bottom:16px;word-wrap:break-word}html body>ul,html body>ol{margin-bottom:16px}html body ul,html body ol{padding-left:2em}html body ul.no-list,html body ol.no-list{padding:0;list-style-type:none}html body ul ul,html body ul ol,html body ol ol,html body ol ul{margin-top:0;margin-bottom:0}html body li{margin-bottom:0}html body li.task-list-item{list-style:none}html body li>p{margin-top:0;margin-bottom:0}html body .task-list-item-checkbox{margin:0 .2em .25em -1.8em;vertical-align:middle}html body .task-list-item-checkbox:hover{cursor:pointer}html body blockquote{margin:16px 0;font-size:inherit;padding:0 15px;color:#5c5c5c;background-color:#f0f0f0;border-left:4px solid #d6d6d6}html body blockquote>:first-child{margin-top:0}html body blockquote>:last-child{margin-bottom:0}html body hr{height:4px;margin:32px 0;background-color:#d6d6d6;border:0 none}html body table{margin:10px 0 15px 
0;border-collapse:collapse;border-spacing:0;display:block;width:100%;overflow:auto;word-break:normal;word-break:keep-all}html body table th{font-weight:bold;color:#000}html body table td,html body table th{border:1px solid #d6d6d6;padding:6px 13px}html body dl{padding:0}html body dl dt{padding:0;margin-top:16px;font-size:1em;font-style:italic;font-weight:bold}html body dl dd{padding:0 16px;margin-bottom:16px}html body code{font-family:Menlo,Monaco,Consolas,'Courier New',monospace;font-size:.85em !important;color:#000;background-color:#f0f0f0;border-radius:3px;padding:.2em 0}html body code::before,html body code::after{letter-spacing:-0.2em;content:"\00a0"}html body pre>code{padding:0;margin:0;font-size:.85em !important;word-break:normal;white-space:pre;background:transparent;border:0}html body .highlight{margin-bottom:16px}html body .highlight pre,html body pre{padding:1em;overflow:auto;font-size:.85em !important;line-height:1.45;border:#d6d6d6;border-radius:3px}html body .highlight pre{margin-bottom:0;word-break:normal}html body pre code,html body pre tt{display:inline;max-width:initial;padding:0;margin:0;overflow:initial;line-height:inherit;word-wrap:normal;background-color:transparent;border:0}html body pre code:before,html body pre tt:before,html body pre code:after,html body pre tt:after{content:normal}html body p,html body blockquote,html body ul,html body ol,html body dl,html body pre{margin-top:0;margin-bottom:16px}html body kbd{color:#000;border:1px solid #d6d6d6;border-bottom:2px solid #c7c7c7;padding:2px 4px;background-color:#f0f0f0;border-radius:3px}@media print{html body{background-color:#fff}html body h1,html body h2,html body h3,html body h4,html body h5,html body h6{color:#000;page-break-after:avoid}html body blockquote{color:#5c5c5c}html body pre{page-break-inside:avoid}html body table{display:table}html body img{display:block;max-width:100%;max-height:100%}html body pre,html body 
code{word-wrap:break-word;white-space:pre}}.markdown-preview{width:100%;height:100%;box-sizing:border-box}.markdown-preview .pagebreak,.markdown-preview .newpage{page-break-before:always}.markdown-preview pre.line-numbers{position:relative;padding-left:3.8em;counter-reset:linenumber}.markdown-preview pre.line-numbers>code{position:relative}.markdown-preview pre.line-numbers .line-numbers-rows{position:absolute;pointer-events:none;top:1em;font-size:100%;left:0;width:3em;letter-spacing:-1px;border-right:1px solid #999;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.markdown-preview pre.line-numbers .line-numbers-rows>span{pointer-events:none;display:block;counter-increment:linenumber}.markdown-preview pre.line-numbers .line-numbers-rows>span:before{content:counter(linenumber);color:#999;display:block;padding-right:.8em;text-align:right}.markdown-preview .mathjax-exps .MathJax_Display{text-align:center !important}.markdown-preview:not([for="preview"]) .code-chunk .btn-group{display:none}.markdown-preview:not([for="preview"]) .code-chunk .status{display:none}.markdown-preview:not([for="preview"]) .code-chunk .output-div{margin-bottom:16px}.scrollbar-style::-webkit-scrollbar{width:8px}.scrollbar-style::-webkit-scrollbar-track{border-radius:10px;background-color:transparent}.scrollbar-style::-webkit-scrollbar-thumb{border-radius:5px;background-color:rgba(150,150,150,0.66);border:4px solid rgba(150,150,150,0.66);background-clip:content-box}html body[for="html-export"]:not([data-presentation-mode]){position:relative;width:100%;height:100%;top:0;left:0;margin:0;padding:0;overflow:auto}html body[for="html-export"]:not([data-presentation-mode]) .markdown-preview{position:relative;top:0}@media screen and (min-width:914px){html body[for="html-export"]:not([data-presentation-mode]) .markdown-preview{padding:2em calc(50% - 457px + 2em)}}@media screen and (max-width:914px){html body[for="html-export"]:not([data-presentation-mode]) 
.markdown-preview{padding:2em}}@media screen and (max-width:450px){html body[for="html-export"]:not([data-presentation-mode]) .markdown-preview{font-size:14px !important;padding:1em}}@media print{html body[for="html-export"]:not([data-presentation-mode]) #sidebar-toc-btn{display:none}}html body[for="html-export"]:not([data-presentation-mode]) #sidebar-toc-btn{position:fixed;bottom:8px;left:8px;font-size:28px;cursor:pointer;color:inherit;z-index:99;width:32px;text-align:center;opacity:.4}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] #sidebar-toc-btn{opacity:1}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc{position:fixed;top:0;left:0;width:300px;height:100%;padding:32px 0 48px 0;font-size:14px;box-shadow:0 0 4px rgba(150,150,150,0.33);box-sizing:border-box;overflow:auto;background-color:inherit}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar{width:8px}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar-track{border-radius:10px;background-color:transparent}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc::-webkit-scrollbar-thumb{border-radius:5px;background-color:rgba(150,150,150,0.66);border:4px solid rgba(150,150,150,0.66);background-clip:content-box}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc a{text-decoration:none}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc ul{padding:0 1.6em;margin-top:.8em}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc li{margin-bottom:.8em}html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .md-sidebar-toc ul{list-style-type:none}html 
body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{left:300px;width:calc(100% -  300px);padding:2em calc(50% - 457px -  150px);margin:0;box-sizing:border-box}@media screen and (max-width:1274px){html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{padding:2em}}@media screen and (max-width:450px){html body[for="html-export"]:not([data-presentation-mode])[html-show-sidebar-toc] .markdown-preview{width:100%}}html body[for="html-export"]:not([data-presentation-mode]):not([html-show-sidebar-toc]) .markdown-preview{left:50%;transform:translateX(-50%)}html body[for="html-export"]:not([data-presentation-mode]):not([html-show-sidebar-toc]) .md-sidebar-toc{display:none}
/* Please visit the URL below for more information: */
/*   https://shd101wyy.github.io/markdown-preview-enhanced/#/customize-css */

      </style>
    </head>
    <body for="html-export">
      <div class="mume markdown-preview  ">
      <h1 class="mume-header" id="polygonal-building-segmentation-by-frame-field-learning">Polygonal Building Segmentation by Frame Field Learning</h1>

<p>We add a frame field output to an image segmentation neural network to improve segmentation quality<br>
and provide structural information for the subsequent polygonization step.</p>
<p align="center">
    <img src="images/frame_field_sample.png" width="512">
    <br>
    Figure 1: Close-up of our additional frame field output on a test image.
    <br>
    <br>
    <br>
    <img src="images/model_training.png" width="768">
    <br>
    Figure 2: Given an overhead image, the model outputs an edge mask, an interior mask,
    and a frame field for buildings. The total loss includes terms that align the masks and
    frame field to ground truth data as well as regularizers to enforce smoothness of the
    frame field and consistency between the outputs.
    <br>
    <br>
    <br>
    <img src="images/schematic_polygonization.png" width="768">
    <br>
    Figure 3: Given classification maps and a frame field as input, we optimize skeleton polylines to
    align to the frame field using an Active Skeleton Model (ASM) and detect corners using
    the frame field, simplifying non-corner vertices.
</p>
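<p>As a rough sketch of the representation behind Figure 2 (notation and loss form taken from the paper, lightly paraphrased here, not spelled out in this README): each pixel carries a frame field given by two directions <code>u</code>, <code>v</code> (and their opposites), encoded as two complex polynomial coefficients, and the align term pushes the field toward the ground-truth tangent angle:</p>

```latex
% The four frame directions {±u, ±v} at a pixel are the roots of
%   f(z) = (z^2 - u^2)(z^2 - v^2) = z^4 + c_2 z^2 + c_0,
% with c_2 = -(u^2 + v^2) and c_0 = u^2 v^2, so the network only has to
% output the two coefficients c_0, c_2. The align loss is then, roughly,
\mathcal{L}_{\mathrm{align}} = \frac{1}{|I|} \sum_{x \in I}
    \left| f_x\!\left(e^{i\,\theta_\tau(x)}\right) \right|^2
% where \theta_\tau(x) is the ground-truth tangent angle at pixel x.
```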
<p>This repository contains the official code for the paper:</p>
<p><strong>Polygonal Building Segmentation by Frame Field Learning</strong><br>
<a href="https://www-sop.inria.fr/members/Nicolas.Girard/">Nicolas Girard</a>,<br>
<a href="https://people.csail.mit.edu/smirnov/">Dmitriy Smirnov</a>,<br>
<a href="https://people.csail.mit.edu/jsolomon/">Justin Solomon</a>,<br>
<a href="https://www-sop.inria.fr/members/Yuliya.Tarabalka/">Yuliya Tarabalka</a><br>
CVPR 2021<br>
<strong><a href="https://www.youtube.com/watch?v=226pPTBsNJ8&amp;t=8s">Video: https://www.youtube.com/watch?v=226pPTBsNJ8&amp;t=8s</a></strong></p>
<h1 class="mume-header" id="setup">Setup</h1>

<h2 class="mume-header" id="git-submodules">Git submodules</h2>

<p>This project uses various git submodules that should be cloned too.</p>
<p>To clone a repository including its submodules execute:</p>
<pre data-role="codeBlock" data-info class="language-"><code>git clone --recursive --jobs 8 &lt;URL to Git repo&gt;
</code></pre><p>If you have already cloned the repository and now want to load its submodules, execute:</p>
<pre data-role="codeBlock" data-info class="language-"><code>git submodule update --init --recursive --jobs 8
</code></pre><p>or:</p>
<pre data-role="codeBlock" data-info class="language-"><code>git submodule update --recursive
</code></pre><p>For more explanations about using submodules with git, see <a href="SUBMODULES.md">SUBMODULES.md</a>.</p>
<h2 class="mume-header" id="docker">Docker</h2>

<p>The easiest way to set up the environment is to use the Docker image provided in the <a href="docker">docker</a> folder (see the README inside that folder).</p>
<p>Once the docker container is built and launched, execute the <a href="setup.sh">setup.sh</a> script inside to install required packages.</p>
<p>The environment in the container is now ready for use.</p>
<h2 class="mume-header" id="conda-environment">Conda environment</h2>

<p>Alternatively, you can install all dependencies in a conda environment.<br>
I provide my environment specifications in <a href="environment.yml">environment.yml</a>, which you can use to create your own environment with:</p>
<pre data-role="codeBlock" data-info class="language-"><code>conda env create -f environment.yml
</code></pre><h1 class="mume-header" id="data">Data</h1>

<p>Several datasets are used in this work.<br>
We typically put all datasets in a &quot;data&quot; folder which we link to the &quot;/data&quot; folder in the container (with the <code>-v</code> argument when running the container).<br>
Each dataset has its own sub-folder, usually named with a short version of that dataset&apos;s name.<br>
Each dataset sub-folder should have a &quot;raw&quot; folder inside containing all the original folders and files of the dataset.<br>
When pre-processing data, &quot;processed&quot; folders will be created alongside the &quot;raw&quot; folder.</p>
<p>For example, here is a working file structure inside the container:</p>
<pre data-role="codeBlock" data-info class="language-"><code>/data 
|-- AerialImageDataset
     |-- raw
         |-- train
         |   |-- aligned_gt_polygons_2
         |   |-- gt
         |   |-- gt_polygonized
         |   |-- images
         `-- test
             |-- aligned_gt_polygons_2
             |-- images
`-- mapping_challenge_dataset
     |-- raw
         |-- train
         |   |-- images
         |   |-- annotation.json
         |   `-- annotation-small.json
         `-- val
              `-- ...
</code></pre><p>If, however, you would like to use a different folder for the datasets (for example when not using Docker),<br>
you can change the path to the datasets in the config files.<br>
Modify the &quot;data_dir_candidates&quot; list in the config to only include your path.<br>
The training script checks this list of paths one at a time and picks the first one that exists.<br>
It then appends the &quot;data_root_partial_dirpath&quot; directory to get to the dataset.</p>
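<p>The lookup described above can be sketched as follows (a hypothetical re-implementation for illustration only; the actual logic lives in the training script, and only the config key names <code>data_dir_candidates</code> and <code>data_root_partial_dirpath</code> come from this README):</p>

```python
from pathlib import Path


def resolve_data_dir(data_dir_candidates, data_root_partial_dirpath):
    """Return the first existing candidate directory, with the dataset
    sub-path appended. Raises if no candidate exists."""
    for candidate in data_dir_candidates:
        root = Path(candidate).expanduser()
        if root.is_dir():
            # First existing path wins, mirroring the behavior described above.
            return root / data_root_partial_dirpath
    raise FileNotFoundError(
        f"No existing directory among candidates: {data_dir_candidates}"
    )
```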
<p>You can find some of the data we used in this shared &quot;data&quot; folder: <a href="https://drive.google.com/drive/folders/19yqseUsggPEwLFTBl04CmGmzCZAIOYhy?usp=sharing">https://drive.google.com/drive/folders/19yqseUsggPEwLFTBl04CmGmzCZAIOYhy?usp=sharing</a>.</p>
<h2 class="mume-header" id="inria-aerial-image-labeling-dataset">Inria Aerial Image Labeling Dataset</h2>

<p>Link to the dataset: <a href="https://project.inria.fr/aerialimagelabeling/">https://project.inria.fr/aerialimagelabeling/</a></p>
<p>For the Inria dataset, the original ground truth is just a collection of raster masks.<br>
As our method requires polygon annotations in order to compute the ground truth angle for the frame field, we created two versions of the dataset:</p>
<p>The <em>Inria OSM dataset</em> has aligned annotations pulled from OpenStreetMap.</p>
<p>The <em>Inria Polygonized dataset</em> has polygon annotations obtained by running our frame field polygonization algorithm on the original raster masks.<br>
This was done with the <code>polygonize_mask.py</code> script like so:<br>
<code>python polygonize_mask.py --run_name inria_dataset_osm_mask_only.unet16 --filepath ~/data/AerialImageDataset/raw/train/gt/*.tif</code></p>
<p>You can find this new ground truth for both cases in the shared &quot;data&quot; folder (<a href="https://drive.google.com/drive/folders/19yqseUsggPEwLFTBl04CmGmzCZAIOYhy?usp=sharing">https://drive.google.com/drive/folders/19yqseUsggPEwLFTBl04CmGmzCZAIOYhy?usp=sharing</a>).</p>
<h1>Running the <a href="main.py">main.py</a> script</h1>
<p>Execute the <a href="main.py">main.py</a> script to train a model, test a model, or use a model on your own image.<br>
See the help of the main script with:</p>
<p><code>python main.py --help</code></p>
<p>The script can be launched on multiple GPUs for multi-GPU training and evaluation.<br>
Simply set the <code>--gpus</code> argument to the number of GPUs you want to use.<br>
However, for the first launch of the script on a particular dataset (when it pre-processes the data),<br>
it is best to leave it at 1, as multi-GPU synchronization is not implemented for dataset pre-processing.</p>
<p>For example, to train a model with a certain config file, run:<br>
<code>python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained</code><br>
which will train the UNet-ResNet101 on the CrowdAI Mapping Challenge dataset.<br>
The batch size can be adjusted like so:<br>
<code>python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained -b &lt;new batch size&gt;</code></p>
<p>When training is done, the script can be launched in eval mode, to evaluate the trained model:<br>
<code>python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained --mode eval</code>.<br>
Depending on the eval parameters of the config file, running this will output results on the test dataset.</p>
<p>Finally, if you wish to compute AP and AR metrics with the COCO API, you can run:<br>
<code>python main.py --config configs/config.mapping_dataset.unet_resnet101_pretrained --mode eval_coco</code>.</p>
<h2 class="mume-header" id="launch-inference-on-one-image">Launch inference on one image</h2>

<p>Make sure the run folder has the correct structure:</p>
<pre data-role="codeBlock" data-info class="language-"><code>Polygonization-by-Frame-Field-Learning
|-- frame_field_learning
|   |-- runs
|   |   |-- &lt;run_name&gt; | &lt;yyyy-mm-dd hh:mm:ss&gt;
|   |   `-- ...
|   |-- inference.py
|   `-- ...
|-- main.py
|-- README.md (this file)
`-- ...
</code></pre><p>Execute the <a href="main.py">main.py</a> script like so (filling in values for the run_name and in_filepath arguments):<br>
<code>python main.py --run_name &lt;run_name&gt; --in_filepath &lt;your_image_filepath&gt;</code></p>
<p>The outputs will be saved next to the input image.</p>
<h2>Download trained models</h2>
<p>We provide already-trained models so you can run inference right away.<br>
Download here: <a href="https://drive.google.com/drive/folders/1poTQbpCz12ra22CsucF_hd_8dSQ1T3eT?usp=sharing">https://drive.google.com/drive/folders/1poTQbpCz12ra22CsucF_hd_8dSQ1T3eT?usp=sharing</a>.<br>
Each model was trained in a &quot;run&quot;, whose folder (named with the format <code>&lt;run_name&gt; | &lt;yyyy-mm-dd hh:mm:ss&gt;</code>) you can download at the provided link.<br>
You should then place those runs in a folder named &quot;runs&quot; inside the &quot;frame_field_learning&quot; folder like so:</p>
<pre data-role="codeBlock" data-info class="language-"><code>Polygonization-by-Frame-Field-Learning
|-- frame_field_learning
|   |-- runs
|   |   |-- inria_dataset_polygonized.unet_resnet101_pretrained.leaderboard | 2020-06-02 07:57:31
|   |   |-- mapping_dataset.unet_resnet101_pretrained.field_off.train_val | 2020-09-07 11:54:48
|   |   |-- mapping_dataset.unet_resnet101_pretrained.train_val | 2020-09-07 11:28:51
|   |   `-- ...
|   |-- inference.py
|   `-- ...
|-- main.py
|-- README.md (this file)
`-- ...
</code></pre><p>Because Google Drive reformats folder names, you have to rename the run folders as above.</p>
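<p>A quick way to catch mangled names is to check each downloaded folder against the expected <code>&lt;run_name&gt; | &lt;yyyy-mm-dd hh:mm:ss&gt;</code> format (a hypothetical helper for illustration; it is not part of the repository):</p>

```python
import re

# Expected run folder format: "<run_name> | <yyyy-mm-dd hh:mm:ss>"
RUN_DIR_PATTERN = re.compile(r"^.+ \| \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}$")


def is_valid_run_dir_name(name: str) -> bool:
    """Return True if a folder name matches the expected run format."""
    return bool(RUN_DIR_PATTERN.match(name))
```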
<h1>Cite</h1>
<p>If you use this code for your own research, please cite:</p>
<pre data-role="codeBlock" data-info class="language-"><code>@InProceedings{Girard_2021_CVPR,
    author    = {Girard, Nicolas and Smirnov, Dmitriy and Solomon, Justin and Tarabalka, Yuliya},
    title     = {Polygonal Building Extraction by Frame Field Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {5891-5900}
}
</code></pre>
      </div>
      
      
    
    
    
    
    
    
    
    
  
    </body></html>