 
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8"/>
    <title>Point2Mesh</title>
    <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.8.1/css/all.css"
          integrity="sha384-50oBUHEmvpQ+1lW4y57PTFmhCaXp0ML5d60M1M7uH2+nqUivzIebhndOJK28anvf" crossorigin="anonymous">
    <meta content="width=device-width, initial-scale=1" name="viewport"/>
    <link href="min.css" rel="stylesheet" type="text/css"/>
    <script src="https://ajax.googleapis.com/ajax/libs/webfont/1.6.26/webfont.js" type="text/javascript"></script>
    <script type="text/javascript">
        WebFont.load({google: {families: ["Lato:100,100italic,300,300italic,400,400italic,700,700italic,900,900italic", "Montserrat:100,100italic,200,200italic,300,300italic,400,400italic,500,500italic,600,600italic,700,700italic,800,800italic,900,900italic", "Ubuntu:300,300italic,400,400italic,500,500italic,700,700italic", "Changa One:400,400italic", "Open Sans:300,300italic,400,400italic,600,600italic,700,700italic,800,800italic", "Varela Round:400", "Bungee Shade:regular", "Roboto:300,regular,500"]}});</script>
    <!--[if lt IE 9]>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv.min.js"
            type="text/javascript"></script><![endif]-->
    <script type="text/javascript">!function (o, c) {
        var n = c.documentElement, t = " w-mod-";
        n.className += t + "js", ("ontouchstart" in o || o.DocumentTouch && c instanceof DocumentTouch) && (n.className += t + "touch")
    }(window, document);</script>
    <link href="images/thumbnail.jpg"
          rel="shortcut icon" type="image/jpeg"/>
    <style>
        .wf-loading * {
            opacity: 0;
        }
    </style>

    <script src="https://threejs.org/build/three.js"></script>
    <script src='https://threejs.org/examples/js/controls/OrbitControls.js'></script>
    <script src='https://threejs.org/examples/js/loaders/OBJLoader.js'></script>
    <script src="meshes/mesh.js"></script>
</head>
<body>


<div class="row">
    <div class="cell"><img src="images/sig2020.svg" width="200"></div>
</div>

<div class="section hero p2m">
    <div class="container-2 p2m_header_v2 w-container"><h1 class="title">Point2Mesh</h1>
        <h1 class="subheader"> A Self-Prior for Deformable Meshes</h1>
        <div class="p2m_authors_list_single w-row">
            <div class="w-col w-col-3 w-col-small-4 w-col-tiny-6">
                <a class="authors" href="https://www.cs.tau.ac.il/~hanocka/" target="_blank">Rana Hanocka</a></div>
            <div class="w-col w-col-3 w-col-small-2 w-col-tiny-6">
                <a class="authors" href="https://www.linkedin.com/in/gal-metzer-512803a1/" target="_blank">Gal Metzer</a></div>
            <div class="w-col w-col-3 w-col-small-2 w-col-tiny-6">
                <a class="authors" href="http://web.eng.tau.ac.il/~raja/" target="_blank">Raja Giryes</a></div>
            <div class="w-col w-col-3 w-col-small-4 w-col-tiny-6">
                <a class="authors" href="https://www.cs.tau.ac.il/~dcor/" target="_blank">Daniel Cohen-Or</a>
            </div>
        </div>
        <div class="div-block-10">
            <div class="equal_v2">Tel Aviv University</div>
        </div>

        <div>
            <br>
            <span class="center"><img src="images/lizard2.gif"></span>
        </div>

        <!--start links -->
        <div class="p2m_authors_list_single w-row">
            <div class="w-col w-col-4 w-col-small-3 w-col-tiny-4">
                <a class="authors" href="https://arxiv.org/abs/2005.11084" target="_blank"><i
                        class="far fa-4x fa-file text-primary mb-3"></i></a>
            </div>

            <div class="w-col w-col-4 w-col-small-3 w-col-tiny-4">
                <a class="authors" href="https://github.com/ranahanocka/Point2Mesh/" target="_blank"><i
                        class="fab fa-4x fa-github text-primary mb-3"></i></a>
            </div>

            <div class="w-col w-col-4 w-col-small-3 w-col-tiny-4">
                <a class="authors" href="https://docs.google.com/presentation/d/1F0vQBgSMUBO-nvIqYjir11I7IFYdFegWwAAi0Kbc7tE/edit#slide=id.g8be134eecb_0_98" target="_blank"><i
                        class="fas fa-thumbtack fa-4x text-primary mb-3"></i></a>
            </div>
        </div>


        <div class="div-block-4 w-row">
            <div class="w-col w-col-4 w-col-small-3 w-col-tiny-4">
                <div class="text-block-2"><strong style="color:#18446c" class="icon-bold-text">Paper</strong></div>
            </div>
            <div class="w-col w-col-4 w-col-small-3 w-col-tiny-4">
                <div class="text-block-2">
                    <strong style="color:#18446c" class="icon-bold-text">Code</strong>
                </div>
            </div>
            <div class="w-col w-col-4 w-col-small-3 w-col-tiny-4">
                <div class="text-block-2"><strong style="color:#18446c" class="icon-bold-text">Slides</strong></div>
            </div>
        </div>
    </div>

    <!--  end links  -->
</div>

<div class="white_section">
    <div class="w-container"><h2 class="grey-heading">Abstract</h2>
        <p class="paragraph-3 the_text">
            In this paper, we introduce Point2Mesh, a technique for reconstructing a surface mesh from an input
            point cloud.
            Instead of explicitly specifying a prior that encodes the expected shape properties, the prior is
            defined automatically using the input point cloud, which we refer to as a <i>self-prior</i>.
            The self-prior encapsulates recurring geometric patterns from a single shape within the weights of a
            deep neural network.
            We optimize the network weights to deform an initial mesh to <i>shrink-wrap</i> a single input point
            cloud. This explicitly considers the entire reconstructed shape, since shared local kernels are
            calculated to fit the overall object. The convolutional kernels are optimized globally across the
            entire shape, which inherently encourages local-scale geometric self-similarity across the shape
            surface. We show that shrink-wrapping a point cloud with a self-prior converges to a desirable
            solution, whereas a prescribed smoothness prior often becomes trapped in undesirable local minima.
            While the performance of traditional reconstruction approaches degrades in the non-ideal conditions
            often present in real-world scanning, <i>i.e.,</i> unoriented normals, noise, and missing
            (low-density) parts, Point2Mesh is robust to such conditions. We demonstrate the performance of
            Point2Mesh on a large variety of shapes with varying complexity.
        </p>
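The data term driving the shrink-wrap can be sketched as a symmetric chamfer-style distance between points sampled on the deforming mesh and the input point cloud. The snippet below is an illustrative, self-contained Python sketch, not the actual Point2Mesh loss: the function name `chamfer_distance` and the toy point sets are ours, and the real objective in the paper and code adds its own sampling scheme and additional terms.

```python
import math

def chamfer_distance(a, b):
    """Symmetric chamfer distance between two 3-D point sets (lists of xyz tuples)."""
    def one_sided(src, dst):
        # average squared distance from each source point to its nearest destination point
        return sum(min(math.dist(p, q) ** 2 for q in dst) for p in src) / len(src)
    return one_sided(a, b) + one_sided(b, a)

# Two points sampled on the deforming mesh vs. a two-point input "cloud".
cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
mesh_samples = [(0.0, 0.1, 0.0), (1.0, -0.1, 0.0)]
print(chamfer_distance(mesh_samples, cloud))  # ≈ 0.02
```

Minimizing this distance pulls the sampled mesh surface onto the cloud from both directions: mesh points move toward the cloud, and no cloud point is left uncovered.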
    </div>
</div>



<div class="section the_section" data-anchor="slide1">
    <div class="w-container"><h2 class="grey-heading">Video</h2>
        <div class="w-embed-youtubevideo stega_movie youtube" id="w-node-e5e45b1d55ac-81500a5f"
             style="padding-top:56.17021276595745%">
            <iframe allow="autoplay; encrypted-media" allowfullscreen="" frameBorder="0"
                    src="https://www.youtube.com/embed/AySwwJuPqOk?rel=1&amp;controls=1&amp;autoplay=0&amp;mute=0&amp;start=0"
                    style="position:absolute;left:0;top:0;width:100%;height:100%;pointer-events:auto"></iframe>
        </div>
    </div>
</div>



<div class="white_section">
    <div class="w-container"><h2 class="grey-heading">Point2Mesh Overview</h2>
        <p class="paragraph-3 the_text" >
            Point2Mesh is a technique for reconstructing a surface mesh from an input point cloud.
            The approach "learns" from a single object by optimizing the weights of a CNN to deform an initial mesh so that it shrink-wraps the input point cloud:
        </p>
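To make the iteration concrete, here is a toy, pure-Python caricature of one shrink-wrap step. The CNN self-prior is deliberately elided: vertices are nudged directly toward their nearest cloud points, which stands in for the displacements the optimized network predicts in the real method; the function name `shrink_wrap_step` and all values here are illustrative only.

```python
def shrink_wrap_step(vertices, cloud, lr=0.5):
    """Move every mesh vertex a fraction `lr` toward its nearest cloud point."""
    new_vertices = []
    for v in vertices:
        # nearest input point (brute force; fine for a toy example)
        nearest = min(cloud, key=lambda p: sum((a - b) ** 2 for a, b in zip(v, p)))
        new_vertices.append(tuple(a + lr * (b - a) for a, b in zip(v, nearest)))
    return new_vertices

# A "mesh" of two vertices collapsing onto a two-point "cloud".
cloud = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
verts = [(0.4, 0.3, 0.0), (1.8, -0.2, 0.0)]
for _ in range(20):
    verts = shrink_wrap_step(verts, cloud)
```

In the actual method the displacement comes from shared convolutional kernels, so every vertex update is coupled to the whole shape rather than computed independently as above.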
        <div><span class="center" id="anky_image" ><img src="images/anky_resize_17.gif"></span></div>
         <p class="paragraph-3 the_text">
             The optimized CNN weights act as a <i>prior</i> that encodes the expected shape properties, which we refer to as a <i>self-prior</i>.
             The premise is that shapes are <i>not</i> random: they contain strong self-correlation across multiple scales.
        </p>
        <div><span class="center"><img src="images/global_anky.gif"></span></div>

        <p class="paragraph-3 the_text">
            Central to the self-prior is the weight-sharing structure of a CNN, which inherently models recurring and
            correlated structures and is therefore weak at modeling noise and outliers, whose geometries do not recur.
        </p>
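The weight-sharing intuition can be shown with a hedged 1-D stand-in for shared convolutional kernels, assuming (our simplification) that one shared motif must explain every segment of a signal: a signal with recurring structure is reproduced exactly, while an irregular one is not. The function name `shared_motif_error` and the sample signals are ours, for illustration only.

```python
def shared_motif_error(signal, motif_len):
    """Best reconstruction error when one shared motif must explain every segment."""
    segments = [signal[i:i + motif_len] for i in range(0, len(signal), motif_len)]
    # the least-squares optimal shared motif is the per-position average of all segments
    motif = [sum(seg[j] for seg in segments) / len(segments) for j in range(motif_len)]
    return sum((x - m) ** 2 for seg in segments for x, m in zip(seg, motif))

repeated = [1.0, 3.0, 2.0] * 4                # strong self-repetition: fit exactly
irregular = [1.0, 3.0, 2.0, 4.0, 0.0, 5.0,
             2.0, 1.0, 3.0, 0.0, 4.0, 1.0]    # no recurring motif: large residual
print(shared_motif_error(repeated, 3))   # 0.0
print(shared_motif_error(irregular, 3))  # 27.5
```

The shared parameters absorb what repeats and leave a residual on what does not, which is the mechanism by which the self-prior filters out noise and outliers.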
        <div><span class="center"><img src="images/anky_prior.gif"></span></div>
    </div>
</div>

<div class="white_section">
    <div class="w-container"><h2 class="grey-heading">Results</h2>
    <div class="row" style="text-align:center;padding:0;margin:0" id="meshes">
            <!-- interactive three.js mesh viewers (render() is defined in meshes/mesh.js) -->
        <div id="mesh1" class="center">
            <script>render('#mesh1', 'meshes/thorn2.obj', THREE.SmoothShading);</script>
        </div>

        <div id="mesh2" class="center">
            <script>render('#mesh2', 'meshes/cat.obj', THREE.FlatShading);</script>
        </div>

        <div id="mesh3" class="center">
            <script>render('#mesh3', 'meshes/bunny.obj', THREE.FlatShading);</script>
        </div>

        <div id="mesh4" class="center">
            <script>render('#mesh4', 'meshes/turtle.obj', THREE.FlatShading);</script>
        </div>

        <div id="mesh6" class="center">
            <script>render('#mesh6', 'meshes/triceratops.obj', THREE.FlatShading);</script>
        </div>

        <div id="mesh5" class="center">
            <script>render('#mesh5', 'meshes/G.obj', THREE.FlatShading);</script>
        </div>

    </div>
    </div>
</div>


<div class="white_section">
    <div class="w-container"><h2 class="grey-heading">Acknowledgements</h2>
        <p class="paragraph-3 the_text">
           We thank <a class= "link" href="https://cims.nyu.edu/gcl/daniele.html" target="_blank">Daniele Panozzo</a> for his helpful suggestions. We are also
            thankful for help from <a class= "link" href="http://shihaowu.net/" target="_blank">Shihao Wu</a>,
            <a class= "link" href="https://cims.nyu.edu/~francisw/" target="_blank">Francis Williams</a>,
            <a class= "link" href="https://cs.nyu.edu/~teseo/" target="_blank">Teseo Schneider</a>,
            <a class= "link" href="https://www.cs.tau.ac.il/~noafish/" target="_blank">Noa Fish</a>
            and
            <a class= "link" href="https://yifita.github.io/" target="_blank">Yifan Wang</a>.
            We are grateful for the 3D scans
            provided by Tom Pierce and <a class= "link" href="http://pierce-design.com/" target="_blank">Pierce Design</a>. This work is supported
            by the NSF-BSF grant (No. 2017729), the European Research Council
            (ERC-StG 757497, PI Giryes), ISF grant 2366/16, and the Israel Science
            Foundation ISF-NSFC joint program (grant No. 2472/17).
        </p>
    </div>
</div>

</body></html>