<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="chrome=1">
    <title>Dex-Net by BerkeleyAutomation</title>

    <link rel="stylesheet" href="stylesheets/styles.css">
    <link rel="stylesheet" href="stylesheets/github-light.css">
    <script src="javascripts/scale.fix.js"></script>
    <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
    <!--[if lt IE 9]>
    <script src="//html5shiv.googlecode.com/svn/trunk/html5.js"></script>
    <![endif]-->
    <script>
      (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
      (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
      m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
      })(window,document,'script','https://www.google-analytics.com/analytics.js','ga');

      ga('create', 'UA-101589905-1', 'auto');
      ga('send', 'pageview');

    </script>
  </head>
  <body>
    <div class="wrapper">
      <header>
        <h1 class="header"><a href="https://berkeleyautomation.github.io/dex-net">Dex-Net</a></h1>

        <h2 class="header">Publications</h2>
        <ul>
          <li class="download"><a class="buttons" href="#dexnet_2">Dex-Net 2.0</a></li>
          <li class="download"><a class="buttons" href="#dexnet_1">Dex-Net 1.0</a></li>
        </ul>

        <h2 class="header">Code</h2>
        <ul>
          <li class="download"><a class="buttons" href="https://github.com/BerkeleyAutomation/gqcnn">gqcnn</a></li>
        </ul>

        <h2 class="header">Datasets</h2>
        <ul>
          <li class="download"><a class="buttons" href="http://bit.ly/2rIM7Jk">GQ-CNN Training</a></li>
        </ul>

        <h2 class="header">Models</h2>
        <ul>
          <li class="download"><a class="buttons" href="http://bit.ly/2tAFMko">GQ-CNN Weights</a></li>
        </ul>

        <p class="header">This project is maintained by <a class="header name" href="https://github.com/jeffmahler">Jeff Mahler</a> and <a class="header name" href="https://github.com/BerkeleyAutomation">BerkeleyAutomation</a>.</p>
        <p class="header">For more info, <a class="header name" href="#contact">contact us.</a></p>

      </header>

      <section>
      
      <iframe width="650" height="355" src="https://www.youtube.com/embed/i6K3GI2_EgU" frameborder="0" allowfullscreen></iframe>

      <p>
      The Dexterity Network (Dex-Net) is a research project that provides code, datasets, and algorithms for generating large datasets of synthetic point clouds, robot parallel-jaw grasps, and physics-based grasp robustness metrics for thousands of 3D object models, which are used to train machine learning methods that plan robot grasps.
      The broader goal of the Dex-Net project is to develop highly reliable robot grasping across a wide variety of rigid objects such as tools, household items, packaged goods, and industrial parts.
      </p>
      <p>
      <a href="#dexnet_2">Dex-Net 2.0</a> is designed for learning Grasp Quality Convolutional Neural Network (GQ-CNN) models that predict the probability of success of candidate grasps on objects from point clouds.
      GQ-CNNs may be useful for quickly planning grasps that can lift and transport a wide variety of objects on a physical robot.
      <a href="#dexnet_1">Dex-Net 1.0</a> was designed to learn predictors of grasp success for new 3D mesh models, in order to accelerate the generation of new datasets.
      </p>
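      <p>
      As a rough illustration of the workflow described above, the sketch below scores candidate grasps on a depth image and selects the most robust one. The <code>GQCNN</code> class and its <code>predict</code> method are illustrative placeholders, not the released gqcnn API, and the scoring rule is a toy stand-in for a trained network.
      </p>
<pre><code># Hedged sketch of a GQ-CNN-style grasp quality query. The class and
# method names are placeholders, not the gqcnn package API.
import numpy as np

class GQCNN:
    def predict(self, depth_im, grasps):
        """Return one pseudo-probability of success per candidate grasp.
        Toy rule: favor grasps near the image center."""
        h, w = depth_im.shape
        center = np.array([w / 2.0, h / 2.0])
        dists = [np.linalg.norm(np.array([u, v]) - center)
                 for (u, v, angle, depth) in grasps]
        return np.exp(-np.array(dists) / 10.0)

depth_im = np.zeros((32, 32), dtype=np.float32)      # cropped depth image
grasps = [(16, 16, 0.0, 0.65), (4, 28, 1.2, 0.70)]   # (u, v, angle, depth)
q = GQCNN().predict(depth_im, grasps)
print('most robust candidate:', grasps[int(np.argmax(q))])
</code></pre>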
      <p>
      The project was created by <a href="http://www.jeff-mahler.com">Jeff Mahler</a> and <a href="http://goldberg.berkeley.edu">Prof. Ken Goldberg</a> of the <a href="http://autolab.berkeley.edu/">Berkeley AUTOLAB</a>.
      </p>

      <h2 class="header" id="links" ><font color="black">Project Links</font></h2>
      <p>
      <ul>
      <li> <a href="http://bair.berkeley.edu/blog/2017/06/27/dexnet-2.0/">BAIR Blog Post</a> </li>
      <li> <a href="http://bit.ly/2rIM7Jk">Dex-Net 2.0 Datasets</a> </li>
      <li> <a href="http://bit.ly/2tAFMko">Trained GQ-CNN Models</a> </li>
      <li> <a href="https://github.com/BerkeleyAutomation/gqcnn">GQ-CNN Training Code</a> </li>
      <li> <a href="https://berkeleyautomation.github.io/gqcnn">Documentation for GQ-CNN Code</a> </li>
      <li> <a href="https://jeffmahler.github.io/dex-net/code.html">Documentation for Dex-Net Code</a> </li>
      <li> <a href="http://autolab.berkeley.edu">Berkeley AUTOLAB</a> </li>
      </ul>
      </p>

      <h2 class="header" id="links" ><font color="black">News Coverage</font></h2>
      <p>
      <ul>
      <li> <a href="http://spectrum.ieee.org/automaton/robotics/robotics-software/uc-berkeley-releases-massive-dexnet-20-dataset">IEEE Spectrum. June 27, 2017.</a> </li>
      <li> <a href="https://www.wired.com/story/grasping-robot/">WIRED. June 19, 2017.</a> </li>
      <li> <a href="https://www.technologyreview.com/s/607931/meet-the-most-nimble-fingered-robot-yet/?utm_campaign=add_this&utm_source=twitter&utm_medium=post">MIT Technology Review. May 25, 2017.</a> </li>
      <li> <a href="https://www.fastcompany.com/3066863/robot-revolution/why-its-so-hard-for-robots-to-get-a-grip">Fast Company. Jan 12, 2017.</a> </li>
      </ul>
      </p>

      </section>

      <section>

      <h2 class="header" id="dexnet_2" ><font color="black">Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics</font></h2>
      <h4 class="header"><font color="black">Jeff Mahler, Jacky Liang, Sherdil Niyaz, Michael Laskey, Richard Doan, Xinyu Liu, Juan Ojea, Ken Goldberg</font></h4>
      <h5 class="header"><font color="black"> To appear in <b> RSS 2017 </b> </font></h5>
      <p>
      [<a href="https://github.com/BerkeleyAutomation/dex-net/raw/gh-pages/docs/dexnet_rss2017_final.pdf">Paper</a>]
      [<a href="https://github.com/BerkeleyAutomation/dex-net/raw/gh-pages/docs/dexnet_rss2017_supplement.pdf">Supplement</a>]
      [<a href="https://arxiv.org/abs/1703.09312">Extended Version</a>]
      [<a href="https://github.com/BerkeleyAutomation/dex-net/raw/gh-pages/docs/dexnet_icra2017_lecom_workshop_abstract.pdf">ICRA 2017 LECOM Workshop Abstract</a>]
      [<a href="https://github.com/BerkeleyAutomation/gqcnn">Code</a>]
      [<a href="http://bit.ly/2rIM7Jk">Datasets</a>]
      [<a href="http://bit.ly/2tAFMko">Pretrained Models</a>]
      [<a href="https://www.youtube.com/watch?v=i6K3GI2_EgU">Video</a>]
      [<a href="https://github.com/BerkeleyAutomation/dex-net/raw/gh-pages/docs/dexnet_rss2017.bib">Bibtex</a>]
      </p>

      <center>
      <div class="image">
      <img src="images/dexnet_2_teaser.png?raw=true" alt="Image cannot be displayed" width="90%">
      </div>
      </center>

      <h3>
      <a class="anchor" haria-hidden="true"><span class="octicon octicon-link"></span></a>Overview</h3>

      <p>
      To reduce data collection time for deep learning of robust robotic grasp plans, we explore training from a synthetic dataset of 6.7 million point clouds, grasps, and robust analytic grasp metrics generated from thousands of 3D models from Dex-Net 1.0 in randomized poses on a table.
      We use the resulting dataset, Dex-Net 2.0, to train a Grasp Quality Convolutional Neural Network (GQ-CNN) model that rapidly predicts the probability of success of grasps from depth images, where grasps are specified as the planar position, angle, and depth of a gripper relative to an RGB-D sensor.
      Experiments with over 1,000 trials on an ABB YuMi comparing grasp planning methods on singulated objects suggest that a GQ-CNN trained with only synthetic data from Dex-Net 2.0 can plan grasps in 0.8s with a success rate of 93% on eight known objects with adversarial geometry, 3x faster than registering point clouds to a precomputed dataset of objects and indexing grasps.
      The GQ-CNN is also the highest performing method on a dataset of ten novel household objects, with zero false positives out of 29 grasps classified as robust (100% precision) and a 1.5x higher success rate than a registration-based method.
      </p>
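      <p>
      Since the grasp parameterization above is planar (a pixel position, gripper angle, and depth relative to the camera), recovering a 3D grasp center is a standard pinhole-camera deprojection. The sketch below shows this step; the intrinsics are illustrative values, not calibrated parameters from the paper.
      </p>
<pre><code># Sketch: deproject a planar grasp (u, v, angle, depth) into a 3D grasp
# center in the camera frame via a standard pinhole model. The intrinsics
# (fx, fy, cx, cy) are illustrative values, not calibrated parameters.
import numpy as np

def grasp_center_camera(u, v, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# A grasp 20 px right of and 20 px below the principal point, 0.72 m deep.
print(grasp_center_camera(u=339.5, v=259.5, depth=0.72))
</code></pre>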

      <h3>
      <a id="codeanddata" class="anchor" href="#codeanddata" aria-hidden="true"><span class="octicon octicon-link"></span></a>Code and Data</h3>

      <p>
      We plan to release the code and datasets for this project over summer 2017, with the following tentative release dates:
      <ul>
      <li> <span style="font-weight:bold"> GQ-CNN Package: </span> <span style="font-style:italic"> June 20, 2017. </span> Dex-Net 2.0 GQ-CNN training dataset with 6.7 million datapoints and ROS integration. The gqcnn package is now available at <a href="https://github.com/BerkeleyAutomation/gqcnn">https://github.com/BerkeleyAutomation/gqcnn</a> with an example ROS service for grasp planning (see the data-loading sketch below). </li>
      <li> <span style="font-weight:bold"> Dex-Net Object Mesh Dataset v1.1: </span> <span style="font-style:italic"> July 12, 2017. </span> The subset of 1,500 3D object models from Dex-Net 1.0 used in the RSS paper, labeled with parallel-jaw grasps for the ABB YuMi. The dataset and the dex-net Python API for manipulating it are now available <a href="https://github.com/BerkeleyAutomation/dex-net">here</a>. </li>
      <li> <span style="font-weight:bold"> Dex-Net as a Service: </span> <span style="font-style:italic"> Fall 2017. </span> HTTP web API for creating new databases with custom 3D models and computing grasp robustness metrics. </li>
      </ul>
      </p>
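      <p>
      For readers who want to experiment before the documentation lands, the snippet below shows one plausible way to binarize continuous robustness metrics into GQ-CNN training labels. The array shapes and the 0.002 threshold are assumptions for illustration, not the documented Dex-Net 2.0 schema.
      </p>
<pre><code># Hedged sketch of turning (depth tile, robustness metric) pairs into
# binary GQ-CNN training labels. Shapes and the threshold are assumptions
# for illustration, not the documented Dex-Net 2.0 schema.
import numpy as np

def make_labels(metrics, threshold=0.002):
    """Binarize continuous robustness metrics into success labels."""
    return (metrics > threshold).astype(np.int64)

rng = np.random.default_rng(0)
depth_tiles = rng.random((1000, 32, 32, 1), dtype=np.float32)  # stand-ins
metrics = rng.random(1000) * 0.01                              # stand-ins
labels = make_labels(metrics)
print('positive fraction:', labels.mean())
</code></pre>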

      </section>
      
      <section>

      <h2 class="header" id="dexnet_1"><font color="black">Dex-Net 1.0: A Cloud-Based Network of 3D Objects for Robust Grasp Planning Using a Multi-Armed Bandit Model with Correlated Rewards</font></h2>
      <h4 class="header"><font color="black">Jeff Mahler, Florian Pokorny, Brian Hou, Melrose Roderick, Michael Laskey, Mathieu Aubry, Kai Kohlhoff, Torsten Kroeger, James Kuffner, Ken Goldberg</font></h4>
      <h5 class="header"><font color="black"> ICRA 2017 (Finalist, Best Manipulation Paper) </font></h5>
      <p>
      [<a href="https://github.com/BerkeleyAutomation/dex-net/raw/gh-pages/docs/dexnet_icra2016_final.pdf">Paper</a>]
      [<a href="https://github.com/BerkeleyAutomation/dex-net/raw/gh-pages/docs/dexnet_icra2016.bib">Bibtex</a>]
      </p>

      <center>
      <div class="image">
      <img src="https://github.com/BerkeleyAutomation/dex-net/raw/gh-pages/images/dexnet_teaser_website.png?raw=true" alt="Image cannot be displayed" width="90%">
      </div>
      </center>

      <h3>
      <a id="abstract" class="anchor" href="#abstract" aria-hidden="true"><span class="octicon octicon-link"></span></a>Overview</h3>

      <p>
      We present Dexterity Network 1.0 (Dex-Net), a new dataset and associated algorithm to study the scaling effects of Big Data and cloud computation on robust grasp planning. 
      The algorithm uses a Multi-Armed Bandit model with correlated rewards to leverage prior grasps and 3D object models in a growing dataset that currently includes over 10,000 unique 3D object models and 2.5 million parallel-jaw grasps.
      Each grasp includes an estimate of the probability of force closure under uncertainty in object and gripper pose and friction.
      Dex-Net 1.0 uses Multi-View Convolutional Neural Networks (MV-CNNs), a new deep learning method for 3D object classification, as a similarity metric between objects, and uses the Google Cloud Platform to run up to 1,500 virtual machines simultaneously, reducing experiment runtime by three orders of magnitude.
      Experiments suggest that prior data can speed up robust grasp planning by a factor of up to 2 on average and that the quality of planned grasps increases with the number of similar objects in the dataset.
      We also study system sensitivity to varying similarity metrics and pose and friction uncertainty levels.
      </p>
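      <p>
      To make the robustness idea concrete, the sketch below estimates a probability of force closure by Monte Carlo sampling over contact-pose and friction perturbations, using a simplified two-contact antipodal test in place of a full force-closure check. The noise scales are illustrative, not the paper's parameters.
      </p>
<pre><code># Minimal sketch of Monte Carlo grasp robustness: estimate the probability
# of force closure under contact-pose and friction uncertainty. The
# antipodal test is a simplified stand-in for a full force-closure check,
# and the noise scales are illustrative, not the paper's parameters.
import numpy as np

def antipodal_force_closure(p1, p2, n1, n2, mu):
    """Two-contact test: the grasp axis must lie inside the friction cone
    (half-angle arctan(mu)) around each inward contact normal."""
    axis = (p2 - p1) / np.linalg.norm(p2 - p1)
    half_angle = np.arctan(mu)
    a1 = np.arccos(np.clip(np.dot(axis, n1), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.dot(-axis, n2), -1.0, 1.0))
    return a1 <= half_angle and a2 <= half_angle

def prob_force_closure(p1, p2, n1, n2, mu=0.5, pose_sigma=0.002,
                       mu_sigma=0.1, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples):
        q1 = p1 + rng.normal(0.0, pose_sigma, 3)        # perturbed contacts
        q2 = p2 + rng.normal(0.0, pose_sigma, 3)
        m = max(1e-3, mu + rng.normal(0.0, mu_sigma))   # perturbed friction
        hits += antipodal_force_closure(q1, q2, n1, n2, m)
    return hits / n_samples

# Nominal antipodal contacts on opposite faces of a block (inward normals).
p1, p2 = np.array([-0.02, 0.0, 0.0]), np.array([0.02, 0.0, 0.0])
n1, n2 = np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])
print('P(force closure):', prob_force_closure(p1, p2, n1, n2))
</code></pre>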

      <h3>
      <a id="codeanddata" class="anchor" href="#codeanddata" aria-hidden="true"><span class="octicon octicon-link"></span></a>Code and Data</h3>

      <p>
      The code for this project can be found on <a href="https://github.com/jeffmahler/GPIS/tree/dev/src/grasp_selection">our github page</a>.
      This code is deprecated as of May 2017 and is being superseded by the Dex-Net 2.0 codebase (see above).
      </p>
      </section>
      
      <section>

      <h2 class="header" id="contact"><font color="black">Contributors</font></h2>

      <p>This is an ongoing project at UC Berkeley with active contributions from:<br>
      <a href="http://www.jeff-mahler.com">Jeff Mahler</a>, Vishal Satish, Alan Li, Matt Matl, Jacky Liang, Xinyu Liu, and <a href="http://goldberg.berkeley.edu">Ken Goldberg</a>.</p>

      <p>Past contributors include:<br>
      <a href="http://www.cs.berkeley.edu/~ftpokorny/">Florian Pokorny</a>, Brian Hou, Sherdil Niyaz, Melrose Roderkick, <a href="http://imagine.enpc.fr/~aubrym/index.html">Mathieu Aubry</a>, Michael Laskey, Richard Doan, Brenton Chu, Raul Puri, Sahanna Suri, Nikhil Sharma, and Josh Price.</p>

      <h3>
      <a id="support-or-contact" class="anchor" href="#support-or-contact" aria-hidden="true"><span class="octicon octicon-link"></span></a>Support or Contact</h3>

      <p>Please contact <a href="http://www.jeff-mahler.com">Jeff Mahler</a> (<a href="mailto:jmahler@berkeley.edu">email</a>) or <a href="http://goldberg.berkeley.edu">Prof. Ken Goldberg</a> (<a href="mailto:goldberg@berkeley.edu">email</a>) of the <a href="http://autolab.berkeley.edu/">AUTOLAB</a> at UC Berkeley.</p>
      </section>
    </div>
    <!--[if !IE]><script>fixScale(document);</script><![endif]-->
		
  </body>
</html>
